
[kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5

Message ID 20161222054125.lzxhd6ctovm3wk4p@thunk.org (mailing list archive)
State Not Applicable
Delegated to: Herbert Xu

Commit Message

Theodore Ts'o Dec. 22, 2016, 5:41 a.m. UTC
On Thu, Dec 22, 2016 at 03:49:39AM +0100, Jason A. Donenfeld wrote:
> 
> Funny -- while you guys were sending this back & forth, I was writing
> my reply to Andy which essentially arrives at the same conclusion.
> Given that we're all arriving at the same thing, and that Ted shot in
> this direction long before we all did, I'm leaning toward abandoning
> SipHash for the de-MD5-ification of get_random_int/long, and working
> on polishing Ted's idea into something shiny for this patchset.

Here are my numbers comparing siphash (using the first three patches
of the v7 siphash patches) with my batched chacha20 implementation.
The results are taken by running get_random_* 10000 times, and then
dividing the numbers by 10000 to get the average number of cycles per
call.  I compiled 32-bit and 64-bit kernels, and ran the tests
under kvm:

                   siphash                        batched chacha20
         get_random_int  get_random_long   get_random_int   get_random_long   

32-bit    270                  278             114            146
64-bit     75                   75             106            186

> I did have two objections to this. The first was that my SipHash
> construction is faster.

Well, it's faster on everything except 32-bit x86.  :-P

> The second, and the more
> important one, was that batching entropy up like this means that 32
> calls will be really fast, and then the 33rd will be slow, since it
> has to do a whole ChaCha round, because get_random_bytes must be
> called to refill the batch.

... and this will take 2121 cycles on 64-bit x86, and 2315 cycles on a
32-bit x86.  Which, on a 2.3 GHz processor, is just under a
microsecond.  As far as being inconsistent on process startup, I very
much doubt a microsecond is really going to be visible to the user.  :-)
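
Spelling out that arithmetic:

\[
\frac{2121\ \text{cycles}}{2.3\times10^{9}\ \text{cycles/s}} \approx 0.92\ \mu\text{s},
\qquad
\frac{2315\ \text{cycles}}{2.3\times10^{9}\ \text{cycles/s}} \approx 1.01\ \mu\text{s}.
\]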

The bottom line is that I think we're really "pixel peeping" at this
point --- which is what obsessed digital photographers will do when
debating the quality of a Canon vs Nikon DSLR by blowing up a photo by
a thousand times, and then trying to claim that this is visible to the
human eye.  Or people who obsess over the frequency response curves
of TH-X00 headphones with Mahogany vs Purpleheart wood, when it's
likely that in a blind head-to-head comparison, most people wouldn't
be able to tell the difference....

I think the main argument for using the batched getrandom approach is
that it is, I would argue, simpler than introducing siphash into the
picture.  On 64-bit platforms it is faster and more consistent, so
it's basically that versus the complexity of having to add siphash to
the things that people have to analyze when considering random number
security on Linux.   But it's a close call either way, I think.

	    	     	      	      - Ted

P.S.  My benchmarking code....


Comments

Jason A. Donenfeld Dec. 22, 2016, 6:03 a.m. UTC | #1
Hi Ted,

On Thu, Dec 22, 2016 at 6:41 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> The bottom line is that I think we're really "pixel peeping" at this
> point --- which is what obsessed digital photographers will do when
> debating the quality of a Canon vs Nikon DSLR by blowing up a photo by
> a thousand times, and then trying to claim that this is visible to the
> human eye.  Or people who obsess over the frequency response curves
> of TH-X00 headphones with Mahogany vs Purpleheart wood, when it's
> likely that in a blind head-to-head comparison, most people wouldn't
> be able to tell the difference....

This is hilarious, thanks for the laugh. I believe you're right about this...

>
> I think the main argument for using the batched getrandom approach is
> that it is, I would argue, simpler than introducing siphash into the
> picture.  On 64-bit platforms it is faster and more consistent, so
> it's basically that versus the complexity of having to add siphash to
> the things that people have to analyze when considering random number
> security on Linux.   But it's a close call either way, I think.

I find this compelling. We'll have one csprng for both
get_random_int/long and for /dev/urandom, and we'll be able to update
that silly warning on the comment of get_random_int/long to read
"gives output of either rdrand quality or of /dev/urandom quality",
which makes it more useful for other things. It introduces less error-
prone code, and it lets the RNG analysis be spent on just one RNG, not
two.

So, with your blessing, I'm going to move ahead with implementing a
pretty version of this for v8.
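
(For reference, a minimal sketch of the per-CPU batching this would build
on, modeled on the get_batched_entropy() helper in Ted's benchmark patch
at the bottom of this thread. The names come from that patch, the read is
offset by p->ptr here, and the final v8 code may well differ.)

struct random_buf {
	__u8 buf[CHACHA20_BLOCK_SIZE];	/* one 64-byte ChaCha20 block of output */
	int ptr;			/* bytes of buf[] already handed out */
};

static DEFINE_PER_CPU(struct random_buf, batched_entropy);

/* Hand out n bytes from the per-CPU batch, refilling it with one
 * (comparatively slow) extract_crng() call on first use or whenever
 * the remaining bytes can't satisfy the request. */
static void get_batched_entropy(void *buf, int n)
{
	struct random_buf *p = &get_cpu_var(batched_entropy);

	if (p->ptr == 0 || p->ptr + n > CHACHA20_BLOCK_SIZE) {
		extract_crng(p->buf);
		p->ptr = 0;
	}
	memcpy(buf, p->buf + p->ptr, n);	/* advance through the batch */
	p->ptr += n;
	put_cpu_var(batched_entropy);
}

unsigned int get_random_int(void)
{
	unsigned int ret;

	get_batched_entropy(&ret, sizeof(ret));
	return ret;
}

unsigned long get_random_long(void)
{
	unsigned long ret;

	get_batched_entropy(&ret, sizeof(ret));
	return ret;
}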

Regards,
Jason
Hannes Frederic Sowa Dec. 22, 2016, 12:47 p.m. UTC | #2
Hi Ted,

On Thu, 2016-12-22 at 00:41 -0500, Theodore Ts'o wrote:
> On Thu, Dec 22, 2016 at 03:49:39AM +0100, Jason A. Donenfeld wrote:
> > 
> > Funny -- while you guys were sending this back & forth, I was writing
> > my reply to Andy which essentially arrives at the same conclusion.
> > Given that we're all arriving at the same thing, and that Ted shot in
> > this direction long before we all did, I'm leaning toward abandoning
> > SipHash for the de-MD5-ification of get_random_int/long, and working
> > on polishing Ted's idea into something shiny for this patchset.
> 
> Here are my numbers comparing siphash (using the first three patches
> of the v7 siphash patches) with my batched chacha20 implementation.
> The results are taken by running get_random_* 10000 times, and then
> dividing the numbers by 10000 to get the average number of cycles per
> call.  I compiled 32-bit and 64-bit kernels, and ran the tests
> under kvm:
> 
>                    siphash                        batched chacha20
>          get_random_int  get_random_long   get_random_int   get_random_long   
> 
> 32-bit    270                  278             114            146
> 64-bit     75                   75             106            186
> 
> > I did have two objections to this. The first was that my SipHash
> > construction is faster.
> 
> Well, it's faster on everything except 32-bit x86.  :-P
> 
> > The second, and the more
> > important one, was that batching entropy up like this means that 32
> > calls will be really fast, and then the 33rd will be slow, since it
> > has to do a whole ChaCha round, because get_random_bytes must be
> > called to refill the batch.
> 
> ... and this will take 2121 cycles on 64-bit x86, and 2315 cycles on a
> 32-bit x86.  Which, on a 2.3 GHz processor, is just under a
> microsecond.  As far as being inconsistent on process startup, I very
> much doubt a microsecond is really going to be visible to the user.  :-)
> 
> The bottom line is that I think we're really "pixel peeping" at this
> point --- which is what obsessed digital photographers will do when
> debating the quality of a Canon vs Nikon DSLR by blowing up a photo by
> a thousand times, and then trying to claim that this is visible to the
> human eye.  Or people who obsess over the frequency response curves
> of TH-X00 headphones with Mahogany vs Purpleheart wood, when it's
> likely that in a blind head-to-head comparison, most people wouldn't
> be able to tell the difference....
> 
> I think the main argument for using the batched getrandom approach is
> that it is, I would argue, simpler than introducing siphash into the
> picture.  On 64-bit platforms it is faster and more consistent, so
> it's basically that versus the complexity of having to add siphash to
> the things that people have to analyze when considering random number
> security on Linux.   But it's a close call either way, I think.

following up on what appears to be a random subject: ;)

IIRC, ext4 code by default still uses half_md4 for hashing of filenames
in the htree. siphash seems to fit this use case pretty well.
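
(Purely to illustrate the fit, here is a sketch of what a siphash-based
htree hash could look like, assuming the siphash() interface from this
series: a 64-bit result keyed by a 128-bit siphash_key_t. The function
name and seed plumbing are made up, and a real conversion would also
need on-disk versioning, as discussed further down the thread.)

#include <linux/siphash.h>

/* Illustrative only: hash a filename with a per-filesystem secret and
 * fold the 64-bit siphash result down to the 32-bit value the htree
 * actually stores. */
static u32 dx_hash_siphash(const char *name, int len,
			   const siphash_key_t *fs_seed)
{
	u64 h = siphash(name, len, fs_seed);

	return (u32)(h >> 32) ^ (u32)h;
}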

xfs might also need an update, as they don't seed the directory hash
tables at all (though I'm not sure whether they are vulnerable). I
should improve [1] a bit.

[1] http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=blob;f=src/dirhash_collide.c;h=55cec872d5061ac2ca0f56d1f11e6bf349d5bb97;hb=HEAD

Bye,
Hannes

Jason A. Donenfeld Dec. 22, 2016, 1:10 p.m. UTC | #3
On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
<hannes@stressinduktion.org> wrote:
> following up on what appears to be a random subject: ;)
>
> IIRC, ext4 code by default still uses half_md4 for hashing of filenames
> in the htree. siphash seems to fit this use case pretty well.

I saw this too. I'll try to address it in v8 of this series.
Hannes Frederic Sowa Dec. 22, 2016, 3:05 p.m. UTC | #4
On 22.12.2016 14:10, Jason A. Donenfeld wrote:
> On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
> <hannes@stressinduktion.org> wrote:
>> following up on what appears to be a random subject: ;)
>>
>> IIRC, ext4 code by default still uses half_md4 for hashing of filenames
>> in the htree. siphash seems to fit this use case pretty well.
> 
> I saw this too. I'll try to address it in v8 of this series.

This change would need a new version of the ext4 super block, because
you should not change existing file systems.


Jason A. Donenfeld Dec. 22, 2016, 3:12 p.m. UTC | #5
On Thu, Dec 22, 2016 at 4:05 PM, Hannes Frederic Sowa
<hannes@stressinduktion.org> wrote:
> This change would need a new version of the ext4 super block, because
> you should not change existing file systems.

Right.

As a first step, I'm considering adding a patch to move halfmd4.c
inside the ext4 domain, or at the very least, simply remove it from
linux/cryptohash.h. That'll then leave the handful of bizarre sha1
usages to consider.
Jason A. Donenfeld Dec. 22, 2016, 3:29 p.m. UTC | #6
On Thu, Dec 22, 2016 at 4:12 PM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> As a first step, I'm considering adding a patch to move halfmd4.c
> inside the ext4 domain, or at the very least, simply remove it from
> linux/cryptohash.h. That'll then leave the handful of bizarre sha1
> usages to consider.

Specifically something like this:

https://git.zx2c4.com/linux-dev/commit/?h=siphash&id=978213351f9633bd1e3d1fdc3f19d28e36eeac90

That only leaves two more uses of "cryptohash" to consider, but they
require a bit of help. First, sha_transform in net/ipv6/addrconf.c.
That might be a straightforward conversion to SipHash, but perhaps
not; I need to look closely and think about it. The next is
sha_transform in kernel/bpf/core.c. I really have no idea what's going
on with the eBPF stuff, so that will take a bit longer to study. Maybe
sha1 is fine in the end there? I'm not sure yet.
Hannes Frederic Sowa Dec. 22, 2016, 3:33 p.m. UTC | #7
On Thu, 2016-12-22 at 16:29 +0100, Jason A. Donenfeld wrote:
> On Thu, Dec 22, 2016 at 4:12 PM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> > As a first step, I'm considering adding a patch to move halfmd4.c
> > inside the ext4 domain, or at the very least, simply remove it from
> > linux/cryptohash.h. That'll then leave the handful of bizarre sha1
> > usages to consider.
> 
> Specifically something like this:
> 
> https://git.zx2c4.com/linux-dev/commit/?h=siphash&id=978213351f9633bd1e3d1fdc3f19d28e36eeac90
> 
> That only leaves two more uses of "cryptohash" to consider, but they
> require a bit of help. First, sha_transform in net/ipv6/addrconf.c.
> That might be a straightforward conversion to SipHash, but perhaps
> not; I need to look closely and think about it. The next is
> sha_transform in kernel/bpf/core.c. I really have no idea what's going
> on with the eBPF stuff, so that will take a bit longer to study. Maybe
> sha1 is fine in the end there? I'm not sure yet.

IPv6 you cannot touch anymore; the hashing algorithm is part of the uAPI.
You don't want to give people new IPv6 addresses with the same stable
secret (across reboots) after a kernel upgrade. They might lose
connectivity then, and it would be extra work.

The bpf hash stuff can be changed during this merge window, as it is
not yet in a released kernel. Although I would probably have preferred
something like sha256 here, which can easily be replicated by user
space tools (minus the problem of patching out references to non-
hashable data, which must be zeroed).

Bye,
Hannes

Jason A. Donenfeld Dec. 22, 2016, 3:41 p.m. UTC | #8
Hi Hannes,

On Thu, Dec 22, 2016 at 4:33 PM, Hannes Frederic Sowa
<hannes@stressinduktion.org> wrote:
> IPv6 you cannot touch anymore; the hashing algorithm is part of the uAPI.
> You don't want to give people new IPv6 addresses with the same stable
> secret (across reboots) after a kernel upgrade. They might lose
> connectivity then, and it would be extra work.

Ahh, too bad. So it goes.

> The bpf hash stuff can be changed during this merge window, as it is
> not yet in a released kernel. Although I would probably have preferred
> something like sha256 here, which can easily be replicated by user
> space tools (minus the problem of patching out references to non-
> hashable data, which must be zeroed).

Oh, interesting, so time is of the essence then. Do you want to handle
changing the new eBPF code to something not-SHA1 before it's too late,
as part of a new patchset that can fast-track itself to David? And
then I can preserve my large series for the next merge window.

Jason
Hannes Frederic Sowa Dec. 22, 2016, 3:51 p.m. UTC | #9
On Thu, 2016-12-22 at 16:41 +0100, Jason A. Donenfeld wrote:
> Hi Hannes,
> 
> On Thu, Dec 22, 2016 at 4:33 PM, Hannes Frederic Sowa
> <hannes@stressinduktion.org> wrote:
> > IPv6 you cannot touch anymore; the hashing algorithm is part of the uAPI.
> > You don't want to give people new IPv6 addresses with the same stable
> > secret (across reboots) after a kernel upgrade. They might lose
> > connectivity then, and it would be extra work.
> 
> Ahh, too bad. So it goes.

If no other users survive we can put it into the ipv6 module.

> > The bpf hash stuff can be changed during this merge window, as it is
> > not yet in a released kernel. Although I would probably have preferred
> > something like sha256 here, which can easily be replicated by user
> > space tools (minus the problem of patching out references to non-
> > hashable data, which must be zeroed).
> 
> Oh, interesting, so time is of the essence then. Do you want to handle
> changing the new eBPF code to something not-SHA1 before it's too late,
> as part of a new patchset that can fast-track itself to David? And
> then I can preserve my large series for the next merge window.

This algorithm should be a non-seeded algorithm, because the hashes
should be stable and verifiable by user space tooling. Thus this would
need a hashing algorithm that is hardened against pre-image and
collision attacks, which siphash is not. I would actually prefer some
higher-order SHA algorithm for that.

Bye,
Hannes
 
Jason A. Donenfeld Dec. 22, 2016, 3:53 p.m. UTC | #10
On Thu, Dec 22, 2016 at 4:51 PM, Hannes Frederic Sowa
<hannes@stressinduktion.org> wrote:
> This algorithm should be a non-seeded algorithm, because the hashes
> should be stable and verifiable by user space tooling. Thus this would
> need a hashing algorithm that is hardened against pre-image and
> collision attacks, which siphash is not. I would actually prefer some
> higher-order SHA algorithm for that.

Right. SHA-256, SHA-512/256, Blake2s, or Blake2b would probably be
good candidates for this.
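
(To illustrate the mechanism being discussed -- an unkeyed digest that
userspace can recompute -- here is a rough sketch using the kernel's
shash interface with sha256. The function name is made up, and the real
eBPF digest code would first have to zero out the non-hashable
instruction fields Hannes mentions; this is a sketch of the approach,
not a proposed patch.)

#include <crypto/hash.h>
#include <crypto/sha.h>
#include <linux/err.h>

/* Sketch: unseeded SHA-256 over an already-sanitized image.  With no
 * secret key involved, userspace tooling can reproduce and verify the
 * exact same digest. */
static int digest_image_sha256(const u8 *image, unsigned int len,
			       u8 out[SHA256_DIGEST_SIZE])
{
	struct crypto_shash *tfm;
	int err;

	tfm = crypto_alloc_shash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	{
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		err = crypto_shash_digest(desc, image, len, out);
	}

	crypto_free_shash(tfm);
	return err;
}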
Theodore Ts'o Dec. 22, 2016, 3:54 p.m. UTC | #11
On Thu, Dec 22, 2016 at 02:10:33PM +0100, Jason A. Donenfeld wrote:
> On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
> <hannes@stressinduktion.org> wrote:
> > following up on what appears to be a random subject: ;)
> >
> > IIRC, ext4 code by default still uses half_md4 for hashing of filenames
> > in the htree. siphash seems to fit this use case pretty well.
> 
> I saw this too. I'll try to address it in v8 of this series.

This is a separate issue, and this series is getting a bit too
complex.  So I'd suggest pushing this off to a separate change.

Changing the htree hash algorithm is an on-disk format change, and so
we couldn't roll it out until e2fsprogs gets updated and rolled out
pretty broadly.  In fact George sent me patches to add siphash as a
hash algorithm for htree a while back (for both the kernel and
e2fsprogs), but I never got around to testing and applying them,
mainly because while it's technically faster, I had other higher
priority issues to work on --- and see previous comments regarding
pixel peeping.  Improving the hash algorithm by tens or even hundreds
of nanoseconds isn't really going to matter since we only do a htree
lookup on a file creation or cold cache lookup, and the SSD or HDD I/O
times will dominate.  And from the power perspective, saving
microwatts of CPU power isn't going to matter if you're going to be
spinning up the storage device....

						- Ted
Theodore Ts'o Dec. 22, 2016, 3:58 p.m. UTC | #12
On Thu, Dec 22, 2016 at 07:03:29AM +0100, Jason A. Donenfeld wrote:
> I find this compelling. We'll have one csprng for both
> get_random_int/long and for /dev/urandom, and we'll be able to update
> that silly warning on the comment of get_random_int/long to read
> "gives output of either rdrand quality or of /dev/urandom quality",
> which makes it more useful for other things. It introduces less error-
> prone code, and it lets the RNG analysis be spent on just one RNG, not
> two.
> 
> So, with your blessing, I'm going to move ahead with implementing a
> pretty version of this for v8.

Can we do this as a separate series, please?  At this point, it's a
completely separate change from a logical perspective, and we can take
in the change through the random.git tree.

Changes that touch files that are normally changed in several
different git trees lead to the potential for merge conflicts during
the linux-next integration and merge window processes.  Which is why
it's generally best to try to isolate changes as much as possible.

Cheers,

						- Ted
Jason A. Donenfeld Dec. 22, 2016, 4:16 p.m. UTC | #13
Hi Ted,

On Thu, Dec 22, 2016 at 4:58 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> Can we do this as a separate series, please?  At this point, it's a
> completely separate change from a logical perspective, and we can take
> in the change through the random.git tree.
>
> Changes that touch files that are normally changed in several
> different git trees lead to the potential for merge conflicts during
> the linux-next integration and merge window processes.  Which is why
> it's generally best to try to isolate changes as much as possible.

Sure, I can separate things out.

Could you offer a bit of advice on how to manage dependencies between
patchsets during merge windows? I'm a bit new to the process.

Specifically, we now have 4 parts:
1. add siphash, and use it for some networking code. to: david miller's net-next
2. convert char/random to use siphash. to: ted ts'o's random-next
3. move lib/md5.c to static function in crypto/md5.c, remove entry
inside of linux/cryptohash.h. to: ??'s ??-next
4. move lib/halfmd4.c to static function in fs/ext4/hash.c, remove
entry inside of linux/cryptohash.h. to: ted ts'o's ext-next

Problem: 2 depends on 1, 3 depends on 1 & 2. But this can be
simplified into 3 parts:

1. add siphash, and use it for some networking code. to: david miller's net-next
2. convert char/random to use siphash, move lib/md5.c to static
function in crypto/md5.c, remove entry inside of linux/cryptohash.h.
to: ted ts'o's random-next
3. move lib/halfmd4.c to static function in fs/ext4/hash.c, remove
entry inside of linux/cryptohash.h. to: ted ts'o's ext-next

Problem: 2 depends on 1. Is that okay with you?

Also, would you like me to merge (3) and (2) of the second list into
one series for you?

Jason
Theodore Ts'o Dec. 22, 2016, 4:30 p.m. UTC | #14
On Thu, Dec 22, 2016 at 05:16:47PM +0100, Jason A. Donenfeld wrote:
> Could you offer a bit of advice on how to manage dependencies between
> patchsets during merge windows? I'm a bit new to the process.
> 
> Specifically, we now have 4 parts:
> 1. add siphash, and use it for some networking code. to: david miller's net-next

I'd do this first, as one set.  Adding a new file to crypto is
unlikely to cause merge conflicts.

> 2. convert char/random to use siphash. to: ted ts'o's random-next

I'm confused, I thought you had agreed to the batched chacha20
approach?

> 3. move lib/md5.c to static function in crypto/md5.c, remove entry
> inside of linux/cryptohash.h. to: ??'s ??-next

This is cleanup, so it doesn't matter that much when it happens.  md5
changes to crypto is also unlikely to cause conflicts, so we could do
this at the same time as (2), if Herbert (the crypto maintainer) agrees.

> 4. move lib/halfmd4.c to static function in fs/ext4/hash.c, remove
> entry inside of linux/cryptohash.h. to: ted ts'o's ext-next

This is definitely separate.

One more thing.  Can you add some test cases to lib/siphash.h?
Triggered off of a CONFIG_SIPHASH_REGRESSION_TEST config flag, with
some test inputs and known outputs?  I'm going to need to add a
version of siphash to e2fsprogs, and I want to make sure the userspace
version is implementing the same algorithm as the kernel siphash.
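
(For what it's worth, such a known-answer check is small.  A sketch,
assuming the siphash()/siphash_key_t interface from this series; the
expected[] table is a placeholder to be filled in with the published
SipHash-2-4 reference outputs from the SipHash paper, which is what
lib/test_siphash.c in Jason's tree already does:)

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/siphash.h>

/* Sketch of a known-answer self-test: hash the messages {}, {0x00},
 * {0x00, 0x01}, ... with the fixed reference key 00 01 02 ... 0f and
 * compare each result against the published reference outputs. */
static const siphash_key_t test_key = {
	{ 0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL }
};
static const u64 expected[64] = { /* reference vectors go here */ };

static int __init siphash_selftest(void)
{
	u8 in[64];
	int i;

	for (i = 0; i < 64; i++) {
		in[i] = i;	/* input for length i is the bytes 0 .. i-1 */
		if (siphash(in, i, &test_key) != expected[i])
			pr_err("siphash self-test failed at length %d\n", i);
	}
	return 0;
}
late_initcall(siphash_selftest);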

	   		    	 	      - Ted
Jason A. Donenfeld Dec. 22, 2016, 4:36 p.m. UTC | #15
On Thu, Dec 22, 2016 at 5:30 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> I'd do this first, as one set.  Adding a new file to crypto is
> unlikely to cause merge conflicts.

Ack.

>
>> 2. convert char/random to use siphash. to: ted ts'o's random-next
>
> I'm confused, I thought you had agreed to the batched chacha20
> approach?

Sorry, I meant to write this. Long day, little sleep. Yes, of course.
Batched entropy.

>> 3. move lib/md5.c to static function in crypto/md5.c, remove entry
>> inside of linux/cryptohash.h. to: ??'s ??-next
>
> This is cleanup, so it doesn't matter that much when it happens.  md5
> changes to crypto is also unlikely to cause conflicts, so we could do
> this at the same time as (2), if Herbert (the crypto maintainer) agrees.

Alright, sure.

>
>> 4. move lib/halfmd4.c to static function in fs/ext4/hash.c, remove
>> entry inside of linux/cryptohash.h. to: ted ts'o's ext-next
>
> This is definitely separate.

Okay, I'll submit it to you separately.

> One more thing.  Can you add some test cases to lib/siphash.h?
> Triggered off of a CONFIG_SIPHASH_REGRESSION_TEST config flag, with
> some test inputs and known outputs?  I'm going to need to add a
> version of siphash to e2fsprogs, and I want to make sure the userspace
> version is implementing the same algorithm as the kernel siphash.

I've already written these. They're behind TEST_HASH. They currently
test every single line of code of all implementations of siphash. I
spent a long time on these. The test vectors themselves were taken
from the SipHash creators' reference publication. Check out
lib/test_siphash.c in my tree.

Jason

Hannes Frederic Sowa Dec. 22, 2016, 6:08 p.m. UTC | #16
On 22.12.2016 16:54, Theodore Ts'o wrote:
> On Thu, Dec 22, 2016 at 02:10:33PM +0100, Jason A. Donenfeld wrote:
>> On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
>> <hannes@stressinduktion.org> wrote:
>>> following up on what appears to be a random subject: ;)
>>>
>>> IIRC, ext4 code by default still uses half_md4 for hashing of filenames
>>> in the htree. siphash seems to fit this use case pretty well.
>>
>> I saw this too. I'll try to address it in v8 of this series.
> 
> This is a separate issue, and this series is getting a bit too
> complex.  So I'd suggest pushing this off to a separate change.
> 
> Changing the htree hash algorithm is an on-disk format change, and so
> we couldn't roll it out until e2fsprogs gets updated and rolled out
> pretty broadly.  In fact George sent me patches to add siphash as a
> hash algorithm for htree a while back (for both the kernel and
> e2fsprogs), but I never got around to testing and applying them,
> mainly because while it's technically faster, I had other higher
> priority issues to work on --- and see previous comments regarding
> pixel peeping.  Improving the hash algorithm by tens or even hundreds
> of nanoseconds isn't really going to matter since we only do a htree
> lookup on a file creation or cold cache lookup, and the SSD or HDD I/O
> times will dominate.  And from the power perspective, saving
> microwatts of CPU power isn't going to matter if you're going to be
> spinning up the storage device....

I wasn't concerned about performance but more about DoS resilience. I
wonder how safe half_md4 actually is in terms of allowing users to
generate long hash chains in the filesystem (e.g. via length-extension
attacks against half_md4).

In ext4, is it actually possible that a "disrupter" learns about the
hashing secret from the way the inodes are returned during getdents?

Thanks,
Hannes

Jason A. Donenfeld Dec. 22, 2016, 6:13 p.m. UTC | #17
On Thu, Dec 22, 2016 at 7:08 PM, Hannes Frederic Sowa
<hannes@stressinduktion.org> wrote:
> I wasn't concerned about performance but more about DoS resilience. I
> wonder how safe half_md4 actually is in terms of allowing users to
> generate long hash chains in the filesystem (e.g. via length-extension
> attacks against half_md4).

AFAIK, this is a real vulnerability that needs to be addressed.

Judging by Ted's inquiry about my siphash testing suite, I assume he's
probably tinkering around with it as we speak. :)


Meanwhile I've separated things into several trees:

1. chacha20 rng, already submitted:
https://git.zx2c4.com/linux-dev/log/?h=random-next

2. md5 cleanup, not yet submitted:
https://git.zx2c4.com/linux-dev/log/?h=md5-cleanup

3. md4 cleanup, already submitted:
https://git.zx2c4.com/linux-dev/log/?h=ext4-next-md4-cleanup

4. siphash and networking, not yet submitted as a x/4 series:
https://git.zx2c4.com/linux-dev/log/?h=net-next-siphash

I'll submit (4) in a couple of days, waiting for any comments on the
existing patch-set.

Jason
Theodore Ts'o Dec. 22, 2016, 7:50 p.m. UTC | #18
On Thu, Dec 22, 2016 at 07:08:37PM +0100, Hannes Frederic Sowa wrote:
> I wasn't concerned about performance but more about DoS resilience. I
> wonder how safe half_md4 actually is in terms of allowing users to
> generate long hash chains in the filesystem (e.g. via length-extension
> attacks against half_md4).
> 
> In ext4, is it actually possible that a "disrupter" learns about the
> hashing secret from the way the inodes are returned during getdents?

They'd have to be a local user, who can execute telldir(3) --- in
which case there are plenty of other denial of service attacks one
could carry out that would be far more devastating.

It might also be an issue if the file system is exposed via NFS, but
again, there are so many other ways an attacker could DoS an NFS server
that I don't think of it as much of a concern.

Keep in mind that the worst someone can do is cause directory inserts to
fail with an ENOSPC, and there are plenty of other ways of doing that
--- such as consuming all of the blocks and inodes in the file system,
for example.

So it's a threat, but not a high-priority one as far as I'm concerned.
And if this were a problem in actual practice, users could switch to
the TEA-based hash, which should be far harder to attack, and is
available today.

					- Ted

Patch

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a51f0ff43f00..41860864b775 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1682,6 +1682,55 @@  static int rand_initialize(void)
 }
 early_initcall(rand_initialize);
 
+static unsigned int get_random_int_new(void);
+static unsigned long get_random_long_new(void);
+
+#define NUM_CYCLES 10000
+#define AVG(finish, start) ((unsigned int)(finish - start + NUM_CYCLES/2) / NUM_CYCLES)
+
+static int rand_benchmark(void)
+{
+	cycles_t start, finish;
+	int i, out;
+
+	pr_crit("random benchmark!!\n");
+	start = get_cycles();
+	for (i = 0; i < NUM_CYCLES; i++)
+		get_random_int();
+	finish = get_cycles();
+	pr_err("get_random_int # cycles: %u\n", AVG(finish, start));
+
+	start = get_cycles();
+	for (i = 0; i < NUM_CYCLES; i++) {
+		get_random_int_new();
+	}
+	finish = get_cycles();
+	pr_err("get_random_int_new (batched chacha20) # cycles: %u\n", AVG(finish, start));
+
+	start = get_cycles();
+	for (i = 0; i < NUM_CYCLES; i++) {
+		get_random_long();
+	}
+	finish = get_cycles();
+	pr_err("get_random_long # cycles: %u\n", AVG(finish, start));
+
+	start = get_cycles();
+	for (i = 0; i < NUM_CYCLES; i++) {
+		get_random_long_new();
+	}
+	finish = get_cycles();
+	pr_err("get_random_long_new (batched chacha20) # cycles: %u\n", AVG(finish, start));
+
+	start = get_cycles();
+	for (i = 0; i < NUM_CYCLES; i++) {
+		get_random_bytes(&out, sizeof(out));
+	}
+	finish = get_cycles();
+	pr_err("get_random_bytes # cycles: %u\n", AVG(finish, start));
+	return 0;
+}
+device_initcall(rand_benchmark);
+
 #ifdef CONFIG_BLOCK
 void rand_initialize_disk(struct gendisk *disk)
 {
@@ -2064,8 +2113,10 @@  unsigned int get_random_int(void)
 	unsigned int ret;
 	u64 *chaining;
 
+#if 0	// force slow path
 	if (arch_get_random_int(&ret))
 		return ret;
+#endif
 
 	chaining = &get_cpu_var(get_random_int_chaining);
 	ret = *chaining = siphash_3u64(*chaining, jiffies, random_get_entropy() +
@@ -2083,8 +2134,10 @@  unsigned long get_random_long(void)
 	unsigned long ret;
 	u64 *chaining;
 
+#if 0 // force slow path
 	if (arch_get_random_long(&ret))
 		return ret;
+#endif 
 
 	chaining = &get_cpu_var(get_random_int_chaining);
 	ret = *chaining = siphash_3u64(*chaining, jiffies, random_get_entropy() +
@@ -2094,6 +2147,47 @@  unsigned long get_random_long(void)
 }
 EXPORT_SYMBOL(get_random_long);
 
+struct random_buf {
+	__u8 buf[CHACHA20_BLOCK_SIZE];
+	int ptr;
+};
+
+static DEFINE_PER_CPU(struct random_buf, batched_entropy);
+
+static void get_batched_entropy(void *buf, int n)
+{
+	struct random_buf *p;
+
+	p = &get_cpu_var(batched_entropy);
+
+	if ((p->ptr == 0) ||
+	    (p->ptr + n >= CHACHA20_BLOCK_SIZE)) {
+		extract_crng(p->buf);
+		p->ptr = 0;
+	}
+	BUG_ON(n > CHACHA20_BLOCK_SIZE);
+	memcpy(buf, p->buf + p->ptr, n);
+	p->ptr += n;
+	put_cpu_var(batched_entropy);
+}
+
+static unsigned int get_random_int_new(void)
+{
+	unsigned int ret;
+
+	get_batched_entropy(&ret, sizeof(ret));
+	return ret;
+}
+
+static unsigned long get_random_long_new(void)
+{
+	unsigned long ret;
+
+	get_batched_entropy(&ret, sizeof(ret));
+	return ret;
+}
+
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.