Weblog entry #19 for mcortese

How much entropy ASLR uses
Posted by mcortese on Mon 18 Jul 2011 at 14:06
Tags: none.

As I posted in another weblog entry, I'm experiencing a curious shortage of entropy on my system. The one feature that constantly drains the pool is address space layout randomization (ASLR), coupled with an unlucky choice of drivers (e.g. for the Ethernet adapter) that contribute no random data to the kernel. Setting aside the latter issue, for which I've already worked out a patch, I'd like to focus on ASLR.

My first reaction was disbelief: can an apparently harmless feature, one that has presumably received extensive scrutiny, really do so much damage? To answer this question, I devised a test to measure how much entropy is consumed every time a new process is spawned.

The amount of entropy available to the kernel, expressed in bits, can be read from /proc/sys/kernel/random/entropy_avail. But even the simple cat used to fetch this number must spawn a new process, and thus further drains the entropy pool. Why not use that very fact to measure the size of the drain? Here is the code:

e0=$(cat /proc/sys/kernel/random/entropy_avail); \
e1=$(cat /proc/sys/kernel/random/entropy_avail); \
echo $((e1 - e0))
The result is the difference in entropy, in bits, before and after a cat invocation.

Note that this compound command must be entered in one go: you cannot split it into three lines, as the interrupts caused by your typing would contribute extra random bits and spoil the measurement.

Just to prove that this command has no other effect on the entropy than the one stated, let's try this counter-example:

e0=$(</proc/sys/kernel/random/entropy_avail); \
e1=$(</proc/sys/kernel/random/entropy_avail); \
echo $((e1 - e0))
This is the same code as before, but with each cat replaced by a Bash trick, $(<file), that reads the file without spawning a new process. The expected result is zero.
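When the pool is also being refilled by other events (disk or network interrupts, for instance), a single measurement can be noisy. Here is a minimal sketch that averages the cost over several runs; it assumes Bash and Linux's /proc interface, and the loop constructs are all shell built-ins, so only the cat invocations being timed touch the pool. Note that on recent kernels, where the random subsystem has been reworked, the measured drain may well be zero.

```shell
#!/bin/bash
# Average the entropy cost of spawning one process (here: cat) over
# several runs. The readings use $(<file), which execs no external
# command, so only the timed cat invocations consume pool entropy.
pool=/proc/sys/kernel/random/entropy_avail
total=0
runs=10
for ((i = 0; i < runs; i++)); do
    e0=$(<"$pool")                # read the pool level without a spawn
    cat /dev/null                 # the process whose cost we measure
    e1=$(<"$pool")
    total=$((total + e0 - e1))    # drain is positive when the pool shrinks
done
echo "average drain: $((total / runs)) bits per spawn"
```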

Now a quick comment on the results I found: my 32-bit kernel with ASLR enabled consumes exactly 128 bits of entropy for every new process. Absent any other source of random data, it takes as few as 32 spawned commands to exhaust a full 4096-bit pool (4096 / 128 = 32). No surprise, then, that my system grinds to a halt after just a few minutes of unattended aptitude upgrade!
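To watch the exhaustion happen, one can spawn a batch of short-lived processes and print the pool level after each one. This is a sketch under the same assumptions as before (Bash, Linux's /proc interface); on my kernel each line should read roughly 128 bits lower than the previous one, until the pool bottoms out, while on kernels with a reworked random subsystem the level may not move at all.

```shell
#!/bin/bash
# Spawn 32 short-lived processes and report the pool level after each;
# at 128 bits per spawn, a full 4096-bit pool empties in 32 steps.
pool=/proc/sys/kernel/random/entropy_avail
for ((i = 1; i <= 32; i++)); do
    /bin/true                     # one fresh process per iteration
    echo "after spawn $i: $(<"$pool") bits left"
done
```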

 

Comments on this Entry

Posted by DerekA (199.213.xx.xx) on Mon 18 Jul 2011 at 18:01
I wonder how widespread something like this is.

Even according to the urandom man page:

The kernel random-number generator is designed to produce a small
amount of high-quality seed material to seed a cryptographic pseudo-
random number generator (CPRNG). It is designed for security, not
speed, and is poorly suited to generating large amounts of random data.
Users should be very economical in the amount of seed material that
they read from /dev/urandom (and /dev/random); unnecessarily reading
large quantities of data from this device will have a negative impact
on other users of the device.
