siphash: add cryptographically secure hashtable function

Message ID 20161209183659.25727-1-Jason@zx2c4.com (mailing list archive)
State New, archived

Commit Message

Jason A. Donenfeld Dec. 9, 2016, 6:36 p.m. UTC
SipHash is a 64-bit keyed hash function that is actually a
cryptographically secure PRF, like HMAC. Except SipHash is super fast,
and is meant to be used as a hashtable keyed lookup function.
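
For instance, a typical use of the API added by this patch would look
something like the following sketch (the key generation and the
power-of-two table size here are illustrative, not part of this patch):

	#include <linux/random.h>
	#include <linux/siphash.h>

	#define TABLE_SIZE 1024 /* illustrative power-of-two bucket count */

	static uint8_t table_key[SIPHASH24_KEY_LEN];

	static void table_init_key(void)
	{
		/* a per-boot secret key, generated once */
		get_random_bytes(table_key, SIPHASH24_KEY_LEN);
	}

	static unsigned int table_bucket(const void *item, size_t len)
	{
		/* keyed hash makes bucket choice unpredictable to attackers */
		return siphash24((const uint8_t *)item, len, table_key) &
		       (TABLE_SIZE - 1);
	}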

SipHash isn't just some new trendy hash function. It's been around for a
while, and there really isn't anything that comes remotely close to
being useful in the way SipHash is. With that said, why do we need this?

There are a variety of attacks known as "hashtable poisoning" in which an
attacker forms some data such that the hash of that data will be the
same, and then proceeds to fill up all entries of a hashbucket. This is
a realistic and well-known denial-of-service vector.

Linux developers already seem to be aware that this is an issue, and
various places that use hash tables in, say, a network context, use a
non-cryptographically secure function (usually jhash) and then try to
twiddle with the key on a time basis (or in many cases just do nothing
and hope that nobody notices). While this is an admirable attempt at
solving the problem, it doesn't actually fix it. SipHash fixes it.

(It fixes it in such a sound way that you could even build a stream
cipher out of SipHash that would resist modern cryptanalysis.)

There are a number of places in the kernel that are vulnerable to
hashtable poisoning attacks, either via userspace vectors or network
vectors, and there's not a reliable mechanism inside the kernel at the
moment to fix it. The first step toward fixing these issues is actually
getting a secure primitive into the kernel for developers to use. Then
we can, bit by bit, port things over to it as deemed appropriate.

Dozens of languages are already using this internally for their hash
tables. Some of the BSDs already use this in their kernels. SipHash is
a widely known high-speed solution to a widely known problem, and it's
time we catch up.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Cc: Daniel J. Bernstein <djb@cr.yp.to>
---
 include/linux/siphash.h |  18 ++++++
 lib/Makefile            |   3 +-
 lib/siphash.c           | 163 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 183 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/siphash.h
 create mode 100644 lib/siphash.c

Comments

Greg KH Dec. 10, 2016, 12:37 p.m. UTC | #1
On Fri, Dec 09, 2016 at 07:36:59PM +0100, Jason A. Donenfeld wrote:
> SipHash is a 64-bit keyed hash function that is actually a
> cryptographically secure PRF, like HMAC. Except SipHash is super fast,
> and is meant to be used as a hashtable keyed lookup function.
> 
> SipHash isn't just some new trendy hash function. It's been around for a
> while, and there really isn't anything that comes remotely close to
> being useful in the way SipHash is. With that said, why do we need this?
> 
> There are a variety of attacks known as "hashtable poisoning" in which an
> attacker forms some data such that the hash of that data will be the
> same, and then proceeds to fill up all entries of a hashbucket. This is
> a realistic and well-known denial-of-service vector.
> 
> Linux developers already seem to be aware that this is an issue, and
> various places that use hash tables in, say, a network context, use a
> non-cryptographically secure function (usually jhash) and then try to
> twiddle with the key on a time basis (or in many cases just do nothing
> and hope that nobody notices). While this is an admirable attempt at
> solving the problem, it doesn't actually fix it. SipHash fixes it.
> 
> (It fixes it in such a sound way that you could even build a stream
> cipher out of SipHash that would resist modern cryptanalysis.)
> 
> There are a number of places in the kernel that are vulnerable to
> hashtable poisoning attacks, either via userspace vectors or network
> vectors, and there's not a reliable mechanism inside the kernel at the
> moment to fix it. The first step toward fixing these issues is actually
> getting a secure primitive into the kernel for developers to use. Then
> we can, bit by bit, port things over to it as deemed appropriate.
> 
> Dozens of languages are already using this internally for their hash
> tables. Some of the BSDs already use this in their kernels. SipHash is
> a widely known high-speed solution to a widely known problem, and it's
> time we catch up.
> 
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
> Cc: Daniel J. Bernstein <djb@cr.yp.to>
> ---
>  include/linux/siphash.h |  18 ++++++
>  lib/Makefile            |   3 +-
>  lib/siphash.c           | 163 ++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 183 insertions(+), 1 deletion(-)
>  create mode 100644 include/linux/siphash.h
>  create mode 100644 lib/siphash.c

This looks really nice, but we don't usually add stuff into lib/ unless
there is an actual user of the code :)

Have you tried converting any of the existing hash users to use this
instead?  If you did that, and it shows a solution for the known
problems with our existing hashes (as you point out above), I doubt
there would be any objection to this patch at all.

Minor coding style nits below:

> @@ -0,0 +1,18 @@
> +/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + */
> +
> +#ifndef _LINUX_SIPHASH_H
> +#define _LINUX_SIPHASH_H
> +
> +#include <linux/types.h>
> +
> +enum siphash24_lengths {
> +	SIPHASH24_KEY_LEN = 16
> +};
> +
> +uint64_t siphash24(const uint8_t *data, size_t len, const uint8_t key[SIPHASH24_KEY_LEN]);

Please use u64 and u8 instead of the userspace uint64_t and uint8_t
types for kernel code.  Yes, the ship has probably sailed for trying to
strictly enforce it, but it's a good idea to do wherever possible.

> +
> +#endif /* _LINUX_SIPHASH_H */
> diff --git a/lib/Makefile b/lib/Makefile
> index 50144a3aeebd..d224337b0d01 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -22,7 +22,8 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
>  	 sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
>  	 flex_proportions.o ratelimit.o show_mem.o \
>  	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
> -	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o win_minmax.o
> +	 earlycpio.o seq_buf.o siphash.o \
> +	 nmi_backtrace.o nodemask.o win_minmax.o
>  
>  lib-$(CONFIG_MMU) += ioremap.o
>  lib-$(CONFIG_SMP) += cpumask.o
> diff --git a/lib/siphash.c b/lib/siphash.c
> new file mode 100644
> index 000000000000..022d86f04b9b
> --- /dev/null
> +++ b/lib/siphash.c
> @@ -0,0 +1,163 @@
> +/* Copyright (C) 2015-2016 Jason A. Donenfeld <Jason@zx2c4.com>
> + * Copyright (C) 2012-2014 Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
> + * Copyright (C) 2012-2014 Daniel J. Bernstein <djb@cr.yp.to>
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + */

Any specific license for this code?  It's good to at the least say what
it is.  Yes, we know it will default to GPLv2 only as part of the whole
kernel tree, but it's good to be explicit for when someone wants to copy
this code for their own projects...

> +
> +#include <linux/siphash.h>
> +#include <linux/kernel.h>
> +#include <linux/string.h>
> +
> +#define ROTL(x,b) (uint64_t)(((x) << (b)) | ((x) >> (64 - (b))))

Don't we have this in kernel.h somewhere?  Ah, yeah, it's rol64() in
bitops.h, no need to define it again please.
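
For instance, the round macro could then read (untested, same rotation
constants as in the patch):

	#include <linux/bitops.h>

	#define SIPROUND \
		do { \
		v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
		v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
		v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
		v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
		} while (0)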

> +#define U8TO64(p) le64_to_cpu(*(__le64 *)(p))

Why the crazy casting behind a macro?

> +
> +#define SIPROUND \
> +	do { \
> +	v0 += v1; v1 = ROTL(v1, 13); v1 ^= v0; v0 = ROTL(v0, 32); \
> +	v2 += v3; v3 = ROTL(v3, 16); v3 ^= v2; \
> +	v0 += v3; v3 = ROTL(v3, 21); v3 ^= v0; \
> +	v2 += v1; v1 = ROTL(v1, 17); v1 ^= v2; v2 = ROTL(v2, 32); \
> +	} while(0)
> +
> +__attribute__((optimize("unroll-loops")))

Care to document why this attribute is needed?  Older versions of gcc
don't know how to handle it properly?  Faster with newer versions?
Black magic?  :)

> +uint64_t siphash24(const uint8_t *data, size_t len, const uint8_t key[SIPHASH24_KEY_LEN])
> +{
> +	uint64_t v0 = 0x736f6d6570736575ULL;

s/uint64_t/u64/g please.


> +	uint64_t v1 = 0x646f72616e646f6dULL;
> +	uint64_t v2 = 0x6c7967656e657261ULL;
> +	uint64_t v3 = 0x7465646279746573ULL;
> +	uint64_t b;
> +	uint64_t k0 = U8TO64(key);
> +	uint64_t k1 = U8TO64(key + sizeof(uint64_t));
> +	uint64_t m;
> +	const uint8_t *end = data + len - (len % sizeof(uint64_t));
> +	const uint8_t left = len & (sizeof(uint64_t) - 1);
> +	b = ((uint64_t)len) << 56;
> +	v3 ^= k1;
> +	v2 ^= k0;
> +	v1 ^= k1;
> +	v0 ^= k0;
> +	for (; data != end; data += sizeof(uint64_t)) {
> +		m = U8TO64(data);
> +		v3 ^= m;
> +		SIPROUND;
> +		SIPROUND;
> +		v0 ^= m;
> +	}
> +	switch (left) {
> +		case 7: b |= ((uint64_t)data[6]) << 48;
> +		case 6: b |= ((uint64_t)data[5]) << 40;
> +		case 5: b |= ((uint64_t)data[4]) << 32;
> +		case 4: b |= ((uint64_t)data[3]) << 24;
> +		case 3: b |= ((uint64_t)data[2]) << 16;
> +		case 2: b |= ((uint64_t)data[1]) <<  8;
> +		case 1: b |= ((uint64_t)data[0]); break;
> +		case 0: break;
> +	}
> +	v3 ^= b;
> +	SIPROUND;
> +	SIPROUND;
> +	v0 ^= b;
> +	v2 ^= 0xff;
> +	SIPROUND;
> +	SIPROUND;
> +	SIPROUND;
> +	SIPROUND;
> +	b = (v0 ^ v1) ^ (v2 ^ v3);
> +	return (__force uint64_t)cpu_to_le64(b);
> +}
> +EXPORT_SYMBOL(siphash24);

EXPORT_SYMBOL_GPL()?  I have to ask, sorry :)


> +
> +#ifdef DEBUG
> +static const uint8_t test_vectors[64][8] = {
> +	{ 0x31, 0x0e, 0x0e, 0xdd, 0x47, 0xdb, 0x6f, 0x72 },
> +	{ 0xfd, 0x67, 0xdc, 0x93, 0xc5, 0x39, 0xf8, 0x74 },
> +	{ 0x5a, 0x4f, 0xa9, 0xd9, 0x09, 0x80, 0x6c, 0x0d },
> +	{ 0x2d, 0x7e, 0xfb, 0xd7, 0x96, 0x66, 0x67, 0x85 },
> +	{ 0xb7, 0x87, 0x71, 0x27, 0xe0, 0x94, 0x27, 0xcf },
> +	{ 0x8d, 0xa6, 0x99, 0xcd, 0x64, 0x55, 0x76, 0x18 },
> +	{ 0xce, 0xe3, 0xfe, 0x58, 0x6e, 0x46, 0xc9, 0xcb },
> +	{ 0x37, 0xd1, 0x01, 0x8b, 0xf5, 0x00, 0x02, 0xab },
> +	{ 0x62, 0x24, 0x93, 0x9a, 0x79, 0xf5, 0xf5, 0x93 },
> +	{ 0xb0, 0xe4, 0xa9, 0x0b, 0xdf, 0x82, 0x00, 0x9e },
> +	{ 0xf3, 0xb9, 0xdd, 0x94, 0xc5, 0xbb, 0x5d, 0x7a },
> +	{ 0xa7, 0xad, 0x6b, 0x22, 0x46, 0x2f, 0xb3, 0xf4 },
> +	{ 0xfb, 0xe5, 0x0e, 0x86, 0xbc, 0x8f, 0x1e, 0x75 },
> +	{ 0x90, 0x3d, 0x84, 0xc0, 0x27, 0x56, 0xea, 0x14 },
> +	{ 0xee, 0xf2, 0x7a, 0x8e, 0x90, 0xca, 0x23, 0xf7 },
> +	{ 0xe5, 0x45, 0xbe, 0x49, 0x61, 0xca, 0x29, 0xa1 },
> +	{ 0xdb, 0x9b, 0xc2, 0x57, 0x7f, 0xcc, 0x2a, 0x3f },
> +	{ 0x94, 0x47, 0xbe, 0x2c, 0xf5, 0xe9, 0x9a, 0x69 },
> +	{ 0x9c, 0xd3, 0x8d, 0x96, 0xf0, 0xb3, 0xc1, 0x4b },
> +	{ 0xbd, 0x61, 0x79, 0xa7, 0x1d, 0xc9, 0x6d, 0xbb },
> +	{ 0x98, 0xee, 0xa2, 0x1a, 0xf2, 0x5c, 0xd6, 0xbe },
> +	{ 0xc7, 0x67, 0x3b, 0x2e, 0xb0, 0xcb, 0xf2, 0xd0 },
> +	{ 0x88, 0x3e, 0xa3, 0xe3, 0x95, 0x67, 0x53, 0x93 },
> +	{ 0xc8, 0xce, 0x5c, 0xcd, 0x8c, 0x03, 0x0c, 0xa8 },
> +	{ 0x94, 0xaf, 0x49, 0xf6, 0xc6, 0x50, 0xad, 0xb8 },
> +	{ 0xea, 0xb8, 0x85, 0x8a, 0xde, 0x92, 0xe1, 0xbc },
> +	{ 0xf3, 0x15, 0xbb, 0x5b, 0xb8, 0x35, 0xd8, 0x17 },
> +	{ 0xad, 0xcf, 0x6b, 0x07, 0x63, 0x61, 0x2e, 0x2f },
> +	{ 0xa5, 0xc9, 0x1d, 0xa7, 0xac, 0xaa, 0x4d, 0xde },
> +	{ 0x71, 0x65, 0x95, 0x87, 0x66, 0x50, 0xa2, 0xa6 },
> +	{ 0x28, 0xef, 0x49, 0x5c, 0x53, 0xa3, 0x87, 0xad },
> +	{ 0x42, 0xc3, 0x41, 0xd8, 0xfa, 0x92, 0xd8, 0x32 },
> +	{ 0xce, 0x7c, 0xf2, 0x72, 0x2f, 0x51, 0x27, 0x71 },
> +	{ 0xe3, 0x78, 0x59, 0xf9, 0x46, 0x23, 0xf3, 0xa7 },
> +	{ 0x38, 0x12, 0x05, 0xbb, 0x1a, 0xb0, 0xe0, 0x12 },
> +	{ 0xae, 0x97, 0xa1, 0x0f, 0xd4, 0x34, 0xe0, 0x15 },
> +	{ 0xb4, 0xa3, 0x15, 0x08, 0xbe, 0xff, 0x4d, 0x31 },
> +	{ 0x81, 0x39, 0x62, 0x29, 0xf0, 0x90, 0x79, 0x02 },
> +	{ 0x4d, 0x0c, 0xf4, 0x9e, 0xe5, 0xd4, 0xdc, 0xca },
> +	{ 0x5c, 0x73, 0x33, 0x6a, 0x76, 0xd8, 0xbf, 0x9a },
> +	{ 0xd0, 0xa7, 0x04, 0x53, 0x6b, 0xa9, 0x3e, 0x0e },
> +	{ 0x92, 0x59, 0x58, 0xfc, 0xd6, 0x42, 0x0c, 0xad },
> +	{ 0xa9, 0x15, 0xc2, 0x9b, 0xc8, 0x06, 0x73, 0x18 },
> +	{ 0x95, 0x2b, 0x79, 0xf3, 0xbc, 0x0a, 0xa6, 0xd4 },
> +	{ 0xf2, 0x1d, 0xf2, 0xe4, 0x1d, 0x45, 0x35, 0xf9 },
> +	{ 0x87, 0x57, 0x75, 0x19, 0x04, 0x8f, 0x53, 0xa9 },
> +	{ 0x10, 0xa5, 0x6c, 0xf5, 0xdf, 0xcd, 0x9a, 0xdb },
> +	{ 0xeb, 0x75, 0x09, 0x5c, 0xcd, 0x98, 0x6c, 0xd0 },
> +	{ 0x51, 0xa9, 0xcb, 0x9e, 0xcb, 0xa3, 0x12, 0xe6 },
> +	{ 0x96, 0xaf, 0xad, 0xfc, 0x2c, 0xe6, 0x66, 0xc7 },
> +	{ 0x72, 0xfe, 0x52, 0x97, 0x5a, 0x43, 0x64, 0xee },
> +	{ 0x5a, 0x16, 0x45, 0xb2, 0x76, 0xd5, 0x92, 0xa1 },
> +	{ 0xb2, 0x74, 0xcb, 0x8e, 0xbf, 0x87, 0x87, 0x0a },
> +	{ 0x6f, 0x9b, 0xb4, 0x20, 0x3d, 0xe7, 0xb3, 0x81 },
> +	{ 0xea, 0xec, 0xb2, 0xa3, 0x0b, 0x22, 0xa8, 0x7f },
> +	{ 0x99, 0x24, 0xa4, 0x3c, 0xc1, 0x31, 0x57, 0x24 },
> +	{ 0xbd, 0x83, 0x8d, 0x3a, 0xaf, 0xbf, 0x8d, 0xb7 },
> +	{ 0x0b, 0x1a, 0x2a, 0x32, 0x65, 0xd5, 0x1a, 0xea },
> +	{ 0x13, 0x50, 0x79, 0xa3, 0x23, 0x1c, 0xe6, 0x60 },
> +	{ 0x93, 0x2b, 0x28, 0x46, 0xe4, 0xd7, 0x06, 0x66 },
> +	{ 0xe1, 0x91, 0x5f, 0x5c, 0xb1, 0xec, 0xa4, 0x6c },
> +	{ 0xf3, 0x25, 0x96, 0x5c, 0xa1, 0x6d, 0x62, 0x9f },
> +	{ 0x57, 0x5f, 0xf2, 0x8e, 0x60, 0x38, 0x1b, 0xe5 },
> +	{ 0x72, 0x45, 0x06, 0xeb, 0x4c, 0x32, 0x8a, 0x95 }
> +};
> +
> +static int siphash24_selftest(void)
> +{
> +	uint8_t in[64], k[16], i;
> +	uint64_t out;
> +	int ret = 0;
> +
> +	for (i = 0; i < 16; ++i)
> +		k[i] = i;
> +
> +	for (i = 0; i < 64; ++i) {
> +		in[i] = i;
> +		out = siphash24(in, i, k);
> +		if (memcmp(&out, test_vectors[i], 8)) {
> +			printk(KERN_INFO "siphash24: self-test %u: FAIL\n", i + 1);

pr_info()?

> +			ret = -1;

Pick a real error number?

> +		}
> +	}
> +	if (!ret)
> +		printk(KERN_INFO "siphash24: self-tests: pass\n");

pr_info()?

> +	return ret;
> +}
> +__initcall(siphash24_selftest);

Don't we have a "do crypto/library/whatever selftests at boot" config
option that this little test could be put under?  It would be great to
not have to manually add DEBUG to the build to verify this works on a
specific arch.
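
Something along these lines, perhaps (untested, and assuming the
existing CONFIG_TEST_HASH option is an acceptable home for it):

	-#ifdef DEBUG
	+#ifdef CONFIG_TEST_HASH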

thanks,

greg k-h
Vegard Nossum Dec. 10, 2016, 2:17 p.m. UTC | #2
On 9 December 2016 at 19:36, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> SipHash is a 64-bit keyed hash function that is actually a
> cryptographically secure PRF, like HMAC. Except SipHash is super fast,
> and is meant to be used as a hashtable keyed lookup function.
>
> SipHash isn't just some new trendy hash function. It's been around for a
> while, and there really isn't anything that comes remotely close to
> being useful in the way SipHash is. With that said, why do we need this?
>
> There are a variety of attacks known as "hashtable poisoning" in which an
> attacker forms some data such that the hash of that data will be the
> same, and then proceeds to fill up all entries of a hashbucket. This is
> a realistic and well-known denial-of-service vector.
>
> Linux developers already seem to be aware that this is an issue, and
> various places that use hash tables in, say, a network context, use a
> non-cryptographically secure function (usually jhash) and then try to
> twiddle with the key on a time basis (or in many cases just do nothing
> and hope that nobody notices). While this is an admirable attempt at
> solving the problem, it doesn't actually fix it. SipHash fixes it.

Could you give some more concrete details/examples? Here's the IPv4
hash table from include/net/inet_sock.h / net/ipv4/inet_hashtables.c:

static inline unsigned int __inet_ehashfn(const __be32 laddr,
                                         const __u16 lport,
                                         const __be32 faddr,
                                         const __be16 fport,
                                         u32 initval)
{
       return jhash_3words((__force __u32) laddr,
                           (__force __u32) faddr,
                           ((__u32) lport) << 16 | (__force __u32)fport,
                           initval);
}

static u32 inet_ehashfn(const struct net *net, const __be32 laddr,
                       const __u16 lport, const __be32 faddr,
                       const __be16 fport)
{
       static u32 inet_ehash_secret __read_mostly;

       net_get_random_once(&inet_ehash_secret, sizeof(inet_ehash_secret));

       return __inet_ehashfn(laddr, lport, faddr, fport,
                             inet_ehash_secret + net_hash_mix(net));
}

There's a 32-bit secret random salt (inet_ehash_secret) which means
that in practice, inet_ehashfn() will select 1 out of 2^32 different
hash functions at random each time you boot the kernel; without
knowing which one it selected, how can a local or remote attacker
force IPv4 connections/whatever to go into a single hash bucket?

It is not possible to obtain the secret salt directly (except by
reading from kernel memory, in which case you've lost already), nor is
it possible to obtain the result of inet_ehashfn() other than (maybe)
by a timing attack where you somehow need to detect that two
connections went into the same hash bucket and work backwards from
that to figure out how to land more connections into the same
bucket -- but if they can do that, you've also already lost.

The same pattern is used for IPv6 hashtables and the dentry cache.

I suppose that using a hash function proven to be cryptographically
secure gives a hard guarantee (under some assumptions) that the
salt/key will give enough diversity between the (in the example above)
2^32 different hash functions that you cannot improve your chances of
guessing that two values will map to the same bucket regardless of the
salt/key. However, I am a bit doubtful that using a cryptographically
secure hash function will make much of a difference as long as the
attacker doesn't actually have any way to get the output/result of the
hash function (and given that the hash function isn't completely
trivial, of course).

I am happy to be proven wrong, but you make it sound very easy to
exploit the current situation, so I would just like to ask whether you
have a concrete way to do that?


Vegard

> There are a number of places in the kernel that are vulnerable to
> hashtable poisoning attacks, either via userspace vectors or network
> vectors, and there's not a reliable mechanism inside the kernel at the
> moment to fix it. The first step toward fixing these issues is actually
> getting a secure primitive into the kernel for developers to use. Then
> we can, bit by bit, port things over to it as deemed appropriate.
>
> Dozens of languages are already using this internally for their hash
> tables. Some of the BSDs already use this in their kernels. SipHash is
> a widely known high-speed solution to a widely known problem, and it's
> time we catch up.
George Spelvin Dec. 10, 2016, 3:35 p.m. UTC | #3
> There's a 32-bit secret random salt (inet_ehash_secret) which means
> that in practice, inet_ehashfn() will select 1 out of 2^32 different
> hash functions at random each time you boot the kernel; without
> knowing which one it selected, how can a local or remote attacker
> force IPv4 connections/whatever to go into a single hash bucket?

By figuring out the salt.  The thing is, the timing of hash table lookups
*is externally visible*.  If I create connections to the target, then
see which ones make responses on previous connections slightly slower,
I gain information about the salt.

I don't know *where* in the hash table the collisions occur, but I
know *which* inputs collide, and that's enough for me to learn something.

(I need more connections than the size of the hash table, but even
with just one IP source I can use 64K ports on my end times however
many the target has open on its end.)

With enough information (google "unicity distance") I can recover the
entire salt.  It's not like I care about the cryptographic strength of
the hash; simply trying all 4 billion possible seeds is pretty fast on
a 4 GHz processor.
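
The offline search itself is only a few lines of C.  Sketch below, with
a toy stand-in mixer and placeholder inputs; a real attacker would paste
in the actual jhash_3words() (which is public code) and the tuples
actually observed, via timing, to share a bucket:

	#include <stdint.h>
	#include <stdio.h>

	struct tuple { uint32_t laddr, faddr, ports; };

	/* toy stand-in for the kernel's jhash_3words() */
	static uint32_t toy_hash(uint32_t a, uint32_t b, uint32_t c,
				 uint32_t seed)
	{
		uint32_t h = seed + a;
		h ^= h >> 15; h *= 0x85ebca6b; h += b;
		h ^= h >> 13; h *= 0xc2b2ae35; h += c;
		return h ^ (h >> 16);
	}

	int main(void)
	{
		/* placeholder values; really the colliding tuples observed */
		static const struct tuple obs[] = {
			{ 0x0a000001, 0xc0a80001, 0x1f90c350 },
			{ 0x0a000002, 0xc0a80001, 0x1f91c351 },
			{ 0x0a000003, 0xc0a80001, 0x1f92c352 },
		};
		const size_t n = sizeof(obs) / sizeof(obs[0]);
		const uint32_t mask = 65536 - 1; /* assumed table size */
		uint32_t seed = 0;

		do {
			uint32_t b0 = toy_hash(obs[0].laddr, obs[0].faddr,
					       obs[0].ports, seed) & mask;
			size_t i;

			for (i = 1; i < n; i++)
				if ((toy_hash(obs[i].laddr, obs[i].faddr,
					      obs[i].ports, seed) & mask) != b0)
					break;
			if (i == n)
				printf("candidate seed: %08x\n", seed);
		} while (++seed != 0);
		return 0;
	}

With a handful of observed collisions, only the true seed (and seeds
behaviorally equivalent to it) survive the filter.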

Once that happens, I can choose a target connection whose timing I can't
observe directly and pack its hash chain without being obvious about it.

> I am happy to be proven wrong, but you make it sound very easy to
> exploit the current situation, so I would just like to ask whether you
> have a concrete way to do that?

I don't think anyone's implemented an attack on this particular hash
table yet, and the reason it hasn't been urgent is that it's just a mild
DoS attack: it makes the computer noticeably slower without disabling
it completely.

But the general style of attack is well known and has been repeatedly
demonstrated.  Its practicality is not in question.  The only question is
whether it's *more* practical than simpler techniques that don't depend
on any such algorithmic subtlety, like brute-force flooding.

But if the history of Internet security has taught us one thing, it's
that naively hoping something won't be a problem is doomed.


The main issue is performance.  IPv6 addresses are big, and although
SipHash is fast by the standard of cryptographic hashes, it's far slower
than jhash or any other non-cryptographic hash.
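
If you want numbers for your own box, a quick userspace harness does it
(sketch; the siphash24 and jhash declarations below are assumed to be
satisfied by linking in the reference implementation of each):

	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	/* link against reference implementations of these */
	uint64_t siphash24(const uint8_t *data, size_t len,
			   const uint8_t key[16]);
	uint32_t jhash(const void *key, uint32_t length, uint32_t initval);

	#define ITERS 10000000L

	static long ns_elapsed(const struct timespec *a,
			       const struct timespec *b)
	{
		return (b->tv_sec - a->tv_sec) * 1000000000L +
		       (b->tv_nsec - a->tv_nsec);
	}

	int main(void)
	{
		static const uint8_t buf[40]; /* roughly an IPv6 4-tuple */
		static const uint8_t key[16];
		volatile uint64_t sink = 0;
		struct timespec t0, t1;
		long i;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < ITERS; i++)
			sink ^= siphash24(buf, sizeof(buf), key);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("siphash24: %ld ns/op\n", ns_elapsed(&t0, &t1) / ITERS);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < ITERS; i++)
			sink ^= jhash(buf, sizeof(buf), 0);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("jhash:     %ld ns/op\n", ns_elapsed(&t0, &t1) / ITERS);

		return 0;
	}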
Jean-Philippe Aumasson Dec. 10, 2016, 6:13 p.m. UTC | #4
SipHash co-designer here.

SipHash is secure when it takes a secret key/seed as parameter, meaning
that its output values are unpredictable. Concretely, when SipHash produces
64-bit output values, you have a 1/2^64 chance of guessing the hash value of
a given message, provided that the key/seed is kept secret. That's the
standard security definition of a pseudorandom function (PRF), which is
typically instantiated with a MAC such as HMAC-somehash.

With djb we demonstrated that this security notion is sufficient to protect
from hash-flooding attacks wherein an attacker creates many different input
values that hash to the same value and therefore may DoS the underlying data
structure.

I admit that the naming is confusing: "SipHash" is not a hash function,
strictly speaking. In crypto, we only call "hash functions" those
algorithms that are unkeyed. PRFs/MACs are sometimes called keyed hash
functions, though.



On Sat, Dec 10, 2016 at 3:17 PM Vegard Nossum <vegard.nossum@gmail.com>
wrote:

> On 9 December 2016 at 19:36, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> > SipHash is a 64-bit keyed hash function that is actually a
> > cryptographically secure PRF, like HMAC. Except SipHash is super fast,
> > and is meant to be used as a hashtable keyed lookup function.
> >
> > SipHash isn't just some new trendy hash function. It's been around for a
> > while, and there really isn't anything that comes remotely close to
> > being useful in the way SipHash is. With that said, why do we need this?
> >
> > There are a variety of attacks known as "hashtable poisoning" in which an
> > attacker forms some data such that the hash of that data will be the
> > same, and then proceeds to fill up all entries of a hashbucket. This is
> > a realistic and well-known denial-of-service vector.
> >
> > Linux developers already seem to be aware that this is an issue, and
> > various places that use hash tables in, say, a network context, use a
> > non-cryptographically secure function (usually jhash) and then try to
> > twiddle with the key on a time basis (or in many cases just do nothing
> > and hope that nobody notices). While this is an admirable attempt at
> > solving the problem, it doesn't actually fix it. SipHash fixes it.
>
> Could you give some more concrete details/examples? Here's the IPv4
> hash table from include/net/inet_sock.h / net/ipv4/inet_hashtables.c:
>
> static inline unsigned int __inet_ehashfn(const __be32 laddr,
>                                          const __u16 lport,
>                                          const __be32 faddr,
>                                          const __be16 fport,
>                                          u32 initval)
> {
>        return jhash_3words((__force __u32) laddr,
>                            (__force __u32) faddr,
>                            ((__u32) lport) << 16 | (__force __u32)fport,
>                            initval);
> }
>
> static u32 inet_ehashfn(const struct net *net, const __be32 laddr,
>                        const __u16 lport, const __be32 faddr,
>                        const __be16 fport)
> {
>        static u32 inet_ehash_secret __read_mostly;
>
>        net_get_random_once(&inet_ehash_secret, sizeof(inet_ehash_secret));
>
>        return __inet_ehashfn(laddr, lport, faddr, fport,
>                              inet_ehash_secret + net_hash_mix(net));
> }
>
> There's a 32-bit secret random salt (inet_ehash_secret) which means
> that in practice, inet_ehashfn() will select 1 out of 2^32 different
> hash functions at random each time you boot the kernel; without
> knowing which one it selected, how can a local or remote attacker
> force IPv4 connections/whatever to go into a single hash bucket?
>
> It is not possible to obtain the secret salt directly (except by
> reading from kernel memory, in which case you've lost already), nor is
> it possible to obtain the result of inet_ehashfn() other than (maybe)
> by a timing attack where you somehow need to detect that two
> connections went into the same hash bucket and work backwards from
> that to figure out how to land more connections into the same
> bucket -- but if they can do that, you've also already lost.
>
> The same pattern is used for IPv6 hashtables and the dentry cache.
>
> I suppose that using a hash function proven to be cryptographically
> secure gives a hard guarantee (under some assumptions) that the
> salt/key will give enough diversity between the (in the example above)
> 2^32 different hash functions that you cannot improve your chances of
> guessing that two values will map to the same bucket regardless of the
> salt/key. However, I am a bit doubtful that using a cryptographically
> secure hash function will make much of a difference as long as the
> attacker doesn't actually have any way to get the output/result of the
> hash function (and given that the hash function isn't completely
> trivial, of course).
>
> I am happy to be proven wrong, but you make it sound very easy to
> exploit the current situation, so I would just like to ask whether you
> have a concrete way to do that?
>
>
> Vegard
>
> > There are a number of places in the kernel that are vulnerable to
> > hashtable poisoning attacks, either via userspace vectors or network
> > vectors, and there's not a reliable mechanism inside the kernel at the
> > moment to fix it. The first step toward fixing these issues is actually
> > getting a secure primitive into the kernel for developers to use. Then
> > we can, bit by bit, port things over to it as deemed appropriate.
> >
> > Dozens of languages are already using this internally for their hash
> > tables. Some of the BSDs already use this in their kernels. SipHash is
> > a widely known high-speed solution to a widely known problem, and it's
> > time we catch up.
>
Jason A. Donenfeld Dec. 11, 2016, 3:30 p.m. UTC | #5
Hi Greg,

Thanks for the review. Responses to your suggestions are inline below:

On Sat, Dec 10, 2016 at 1:37 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> Please use u64 and u8 instead of the userspace uint64_t and uint8_t
> types for kernel code.  Yes, the ship has probably sailed for trying to
> strictly enforce it, but it's a good idea to do wherever possible.

I didn't know this was a rule. Since I had seen a hodgepodge
throughout the kernel I just sort of assumed it was a free for all.
I've fixed this up for v2, and I've also gone through all of my other
[not yet submitted] code and made this change.

> Any specific license for this code?  It's good to at the least say what
> it is.  Yes, we know it will default to GPLv2 only as part of the whole
> kernel tree, but it's good to be explicit for when someone wants to copy
> this code for their own projects...

Public domain, actually. I'll add notice of this to the header.

> Don't we have this in kernel.h somewhere?  Ah, yeah, it's rol64() in
> bitops.h, no need to define it again please.

Thanks!

>
>> +#define U8TO64(p) le64_to_cpu(*(__le64 *)(p))
>
> Why the crazy casting behind a macro?

le64_to_cpup doesn't take the right type. But I agree the macro is not
the nicest way to do this. Instead, I'll copy what
crypto/chacha20_generic.c does and define locally le64_to_cpuvp which
takes a void pointer:

static inline u64 le64_to_cpuvp(const void *p)
{
        return le64_to_cpup(p);
}

>> +__attribute__((optimize("unroll-loops")))
>
> Care to document why this attribute is needed?  Older versions of gcc
> don't know how to handle it properly?  Faster with newer versions?
> Black magic?  :)

It tells gcc to unroll the loops. Comparing the assembly, it looks
like on x86_64, gcc does twice as many rounds per loop iteration when
asked to unroll the loops. This allows the code to remain neat and
straightforward, while instructing gcc to do the gnarly part.

Is this too much rice? Have I been Gentoo-ing for too long? Or are
limited uses of unroll-loops acceptable?

> s/uint64_t/u64/g please.

Done.

> EXPORT_SYMBOL_GPL()?  I have to ask, sorry :)

Since it's public domain, EXPORT_SYMBOL() is fine.

If you have some reason for preferring GPL2 over public domain, I'm
happy to make the change. Maybe you want to preclude the new&shiny
from proprietary modules? That's fine with me, if you desire it being
that way.

>
>
> pr_info()?

Ack.

>
>> +                     ret = -1;
>
> Pick a real error number?

I started to do this, but couldn't make up my mind, and then resigned
myself to -1. I'll redouble my efforts to pick something decent.

> pr_info()?

Ack.

> Don't we have a "do crypto/library/whatever selftests at boot" config
> option that this little test could be put under?  It would be great to
> not have to manually add DEBUG to the build to verify this works on a
> specific arch.

There is crypto/testmgr.c, but it's designed for testing things that
use the actual crypto API. Clearly for a hashtable function, nobody
would accept the overhead of the enormous crypto API, so siphash has
to remain an ordinary fast function call. Also, it lives in lib/, not
in crypto/. For that reason, I thought that Herbert might object if I
clutter up his testmgr with non-crypto-API functions. I'll CC him on
this email to double check.

> This looks really nice, but we don't usually add stuff into lib/ unless
> there is an actual user of the code :)
>
> Have you tried converting any of the existing hash users to use this
> instead?  If you did that, and it shows a solution for the known
> problems with our existing hashes (as you point out above), I doubt
> there would be any objection for this patch at all.

Here's where the controversy begins! As we've seen from this thread,
there are two hurdles:

1. Convincing people that the cryptographic properties of siphash are
important, and that jhash does not have them. I think JP Aumasson's email
described things pretty clearly, and this isn't really up for debate
anymore.
2. Convincing people that for a particular use case, siphash _is_
sufficiently fast, and that any potential (tiny) slowdown, compared to
an insecure function like jhash, is either a) a fair price to pay for
removing a known vulnerability or b) not even measurably relevant for
the actual real-life workload.

I suspect that the debates about (2.a) and (2.b) will need to be duked
out one-by-one for a bit of time. I thought that since this will be
more of an evolutionary change, it'd be best to at least get the
primitives into lib/ so they can actually be used.
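
To make that concrete: converting the inet_ehashfn() that Vegard posted
earlier in this thread might look roughly like the sketch below. This is
illustrative only, not something I'm submitting here; packing the inputs
into one struct is my own choice of approach:

	static u32 inet_ehashfn(const struct net *net, const __be32 laddr,
				const __u16 lport, const __be32 faddr,
				const __be16 fport)
	{
		static u8 inet_ehash_key[SIPHASH24_KEY_LEN] __read_mostly;
		/* one keyed hash over all inputs, instead of mixing a
		 * 32-bit salt into jhash */
		struct {
			__be32 laddr;
			__be32 faddr;
			__u16 lport;
			__be16 fport;
			u32 net_mix;
		} __packed combined = {
			.laddr = laddr,
			.faddr = faddr,
			.lport = lport,
			.fport = fport,
			.net_mix = net_hash_mix(net),
		};

		net_get_random_once(inet_ehash_key, sizeof(inet_ehash_key));
		return (u32)siphash24((const u8 *)&combined, sizeof(combined),
				      inet_ehash_key);
	}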

For example, I found some patches from George Spelvin (CC'd) trying to
get this added a few years back, for reasons related to extfs code. I
found a discussion between Scott Bauer (CC'd) and Andy&Andi (CC'd)
about adding siphash for the purpose of SROP mitigation, but not doing
so because the primitive wasn't in lib/.

Seeing that siphash is a solution to existing problems, a building
block for future security mechanisms, and something people clearly seem
to want, I thought it might be worthwhile to add this straight-up.

But if you really really want me to submit this alongside a patch
series of places that could be changed, I guess I could take the time
to pick out the most uncontroversial places -- some network stack /
netfilter places, some userspace API hashtable DoS places, etc -- but
I fear that's going to drag so many different consumers into the fold
that in the end nothing will get merged. So I think this might be a
good case for an exception for lib/, as a means of making forward
progress in general. Feel free to disagree, though; you know best.

Regards,
Jason
Greg KH Dec. 11, 2016, 8:43 p.m. UTC | #6
On Sun, Dec 11, 2016 at 04:30:31PM +0100, Jason A. Donenfeld wrote:
> Hi Greg,
> 
> Thanks for the review. Responses to your suggestions are inline below:
> 
> On Sat, Dec 10, 2016 at 1:37 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > Please use u64 and u8 instead of the userspace uint64_t and uint8_t
> > types for kernel code.  Yes, the ship has probably sailed for trying to
> > strictly enforce it, but it's a good idea to do wherever possible.
> 
> I didn't know this was a rule. Since I had seen a hodgepodge
> throughout the kernel I just sort of assumed it was a free for all.
> I've fixed this up for v2, and I've also gone through all of my other
> [not yet submitted] code and made this change.
> 
> > Any specific license for this code?  It's good to at the least say what
> > it is.  Yes, we know it will default to GPLv2 only as part of the whole
> > kernel tree, but it's good to be explicit for when someone wants to copy
> > this code for their own projects...
> 
> Public domain, actually. I'll add notice of this to the header.

Hm, there really is no such license as "Public domain" that works in all
countries, sorry.  You will note it's not one of the "valid module
license list" we have in module.h because of that.

So, I don't know where you got the code from, but perhaps "Dual BSD/GPL"
is the correct one for you?

Note, I'm not a lawyer, so this isn't legal advice about the license of
code, but I do spend way too much time with lawyers dealing with license
issues...

> >> +#define U8TO64(p) le64_to_cpu(*(__le64 *)(p))
> >
> > Why the crazy casting behind a macro?
> 
> le64_to_cpup doesn't take the right type. But I agree the macro is not
> the nicest way to do this. Instead, I'll copy what
> crypto/chacha20_generic.c does and define locally le64_to_cpuvp which
> takes a void pointer:
> 
> static inline u64 le64_to_cpuvp(const void *p)
> {
>         return le64_to_cpup(p);
> }

Ah much better.

> >> +__attribute__((optimize("unroll-loops")))
> >
> > Care to document why this attribute is needed?  Older versions of gcc
> > don't know how to handle it properly?  Faster with newer versions?
> > Black magic?  :)
> 
> It tells gcc to unroll the loops. Comparing the assembly, it looks
> like on x86_64, gcc does twice as many rounds per loop iteration when
> asked to unroll the loops. This allows the code to remain neat and
> straightforward, while instructing gcc to do the gnarly part.
> 
> Is this too much rice? Have I been Gentoo-ing for too long? Or are
> limited uses of unroll-loops acceptable?

Given that this would be the first use of it in the kernel, it might be
too much rice :)

Unless you can show real numbers that it actually matters, I wouldn't do
it, it's not worth the hassle.  That's the same rule as for
likely()/unlikely(): if you can't measure it, don't use it, as you
almost always get it wrong; the compiler is smarter and keeps getting
better over time.

> > s/uint64_t/u64/g please.
> 
> Done.
> 
> > EXPORT_SYMBOL_GPL()?  I have to ask, sorry :)
> 
> Since it's public domain, EXPORT_SYMBOL() is fine.
> 
> If you have some reason for preferring GPL2 over public domain, I'm
> happy to make the change. Maybe you want to preclude the new&shiny
> from proprietary modules? That's fine with me, if you desire it being
> that way.

Nope, I just have to ask :)

If it's dual licensed, a normal EXPORT_SYMBOL() is just fine, I have no
objection to that at all.

thanks,

greg k-h

Patch

diff --git a/include/linux/siphash.h b/include/linux/siphash.h
new file mode 100644
index 000000000000..485c2101cc7d
--- /dev/null
+++ b/include/linux/siphash.h
@@ -0,0 +1,18 @@ 
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ */
+
+#ifndef _LINUX_SIPHASH_H
+#define _LINUX_SIPHASH_H
+
+#include <linux/types.h>
+
+enum siphash24_lengths {
+	SIPHASH24_KEY_LEN = 16
+};
+
+uint64_t siphash24(const uint8_t *data, size_t len, const uint8_t key[SIPHASH24_KEY_LEN]);
+
+#endif /* _LINUX_SIPHASH_H */
diff --git a/lib/Makefile b/lib/Makefile
index 50144a3aeebd..d224337b0d01 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -22,7 +22,8 @@  lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
 	 flex_proportions.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o win_minmax.o
+	 earlycpio.o seq_buf.o siphash.o \
+	 nmi_backtrace.o nodemask.o win_minmax.o
 
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/siphash.c b/lib/siphash.c
new file mode 100644
index 000000000000..022d86f04b9b
--- /dev/null
+++ b/lib/siphash.c
@@ -0,0 +1,163 @@ 
+/* Copyright (C) 2015-2016 Jason A. Donenfeld <Jason@zx2c4.com>
+ * Copyright (C) 2012-2014 Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
+ * Copyright (C) 2012-2014 Daniel J. Bernstein <djb@cr.yp.to>
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ */
+
+#include <linux/siphash.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+
+#define ROTL(x,b) (uint64_t)(((x) << (b)) | ((x) >> (64 - (b))))
+#define U8TO64(p) le64_to_cpu(*(__le64 *)(p))
+
+#define SIPROUND \
+	do { \
+	v0 += v1; v1 = ROTL(v1, 13); v1 ^= v0; v0 = ROTL(v0, 32); \
+	v2 += v3; v3 = ROTL(v3, 16); v3 ^= v2; \
+	v0 += v3; v3 = ROTL(v3, 21); v3 ^= v0; \
+	v2 += v1; v1 = ROTL(v1, 17); v1 ^= v2; v2 = ROTL(v2, 32); \
+	} while(0)
+
+__attribute__((optimize("unroll-loops")))
+uint64_t siphash24(const uint8_t *data, size_t len, const uint8_t key[SIPHASH24_KEY_LEN])
+{
+	uint64_t v0 = 0x736f6d6570736575ULL;
+	uint64_t v1 = 0x646f72616e646f6dULL;
+	uint64_t v2 = 0x6c7967656e657261ULL;
+	uint64_t v3 = 0x7465646279746573ULL;
+	uint64_t b;
+	uint64_t k0 = U8TO64(key);
+	uint64_t k1 = U8TO64(key + sizeof(uint64_t));
+	uint64_t m;
+	const uint8_t *end = data + len - (len % sizeof(uint64_t));
+	const uint8_t left = len & (sizeof(uint64_t) - 1);
+	b = ((uint64_t)len) << 56;
+	v3 ^= k1;
+	v2 ^= k0;
+	v1 ^= k1;
+	v0 ^= k0;
+	for (; data != end; data += sizeof(uint64_t)) {
+		m = U8TO64(data);
+		v3 ^= m;
+		SIPROUND;
+		SIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+		case 7: b |= ((uint64_t)data[6]) << 48;
+		case 6: b |= ((uint64_t)data[5]) << 40;
+		case 5: b |= ((uint64_t)data[4]) << 32;
+		case 4: b |= ((uint64_t)data[3]) << 24;
+		case 3: b |= ((uint64_t)data[2]) << 16;
+		case 2: b |= ((uint64_t)data[1]) <<  8;
+		case 1: b |= ((uint64_t)data[0]); break;
+		case 0: break;
+	}
+	v3 ^= b;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= b;
+	v2 ^= 0xff;
+	SIPROUND;
+	SIPROUND;
+	SIPROUND;
+	SIPROUND;
+	b = (v0 ^ v1) ^ (v2 ^ v3);
+	return (__force uint64_t)cpu_to_le64(b);
+}
+EXPORT_SYMBOL(siphash24);
+
+#ifdef DEBUG
+static const uint8_t test_vectors[64][8] = {
+	{ 0x31, 0x0e, 0x0e, 0xdd, 0x47, 0xdb, 0x6f, 0x72 },
+	{ 0xfd, 0x67, 0xdc, 0x93, 0xc5, 0x39, 0xf8, 0x74 },
+	{ 0x5a, 0x4f, 0xa9, 0xd9, 0x09, 0x80, 0x6c, 0x0d },
+	{ 0x2d, 0x7e, 0xfb, 0xd7, 0x96, 0x66, 0x67, 0x85 },
+	{ 0xb7, 0x87, 0x71, 0x27, 0xe0, 0x94, 0x27, 0xcf },
+	{ 0x8d, 0xa6, 0x99, 0xcd, 0x64, 0x55, 0x76, 0x18 },
+	{ 0xce, 0xe3, 0xfe, 0x58, 0x6e, 0x46, 0xc9, 0xcb },
+	{ 0x37, 0xd1, 0x01, 0x8b, 0xf5, 0x00, 0x02, 0xab },
+	{ 0x62, 0x24, 0x93, 0x9a, 0x79, 0xf5, 0xf5, 0x93 },
+	{ 0xb0, 0xe4, 0xa9, 0x0b, 0xdf, 0x82, 0x00, 0x9e },
+	{ 0xf3, 0xb9, 0xdd, 0x94, 0xc5, 0xbb, 0x5d, 0x7a },
+	{ 0xa7, 0xad, 0x6b, 0x22, 0x46, 0x2f, 0xb3, 0xf4 },
+	{ 0xfb, 0xe5, 0x0e, 0x86, 0xbc, 0x8f, 0x1e, 0x75 },
+	{ 0x90, 0x3d, 0x84, 0xc0, 0x27, 0x56, 0xea, 0x14 },
+	{ 0xee, 0xf2, 0x7a, 0x8e, 0x90, 0xca, 0x23, 0xf7 },
+	{ 0xe5, 0x45, 0xbe, 0x49, 0x61, 0xca, 0x29, 0xa1 },
+	{ 0xdb, 0x9b, 0xc2, 0x57, 0x7f, 0xcc, 0x2a, 0x3f },
+	{ 0x94, 0x47, 0xbe, 0x2c, 0xf5, 0xe9, 0x9a, 0x69 },
+	{ 0x9c, 0xd3, 0x8d, 0x96, 0xf0, 0xb3, 0xc1, 0x4b },
+	{ 0xbd, 0x61, 0x79, 0xa7, 0x1d, 0xc9, 0x6d, 0xbb },
+	{ 0x98, 0xee, 0xa2, 0x1a, 0xf2, 0x5c, 0xd6, 0xbe },
+	{ 0xc7, 0x67, 0x3b, 0x2e, 0xb0, 0xcb, 0xf2, 0xd0 },
+	{ 0x88, 0x3e, 0xa3, 0xe3, 0x95, 0x67, 0x53, 0x93 },
+	{ 0xc8, 0xce, 0x5c, 0xcd, 0x8c, 0x03, 0x0c, 0xa8 },
+	{ 0x94, 0xaf, 0x49, 0xf6, 0xc6, 0x50, 0xad, 0xb8 },
+	{ 0xea, 0xb8, 0x85, 0x8a, 0xde, 0x92, 0xe1, 0xbc },
+	{ 0xf3, 0x15, 0xbb, 0x5b, 0xb8, 0x35, 0xd8, 0x17 },
+	{ 0xad, 0xcf, 0x6b, 0x07, 0x63, 0x61, 0x2e, 0x2f },
+	{ 0xa5, 0xc9, 0x1d, 0xa7, 0xac, 0xaa, 0x4d, 0xde },
+	{ 0x71, 0x65, 0x95, 0x87, 0x66, 0x50, 0xa2, 0xa6 },
+	{ 0x28, 0xef, 0x49, 0x5c, 0x53, 0xa3, 0x87, 0xad },
+	{ 0x42, 0xc3, 0x41, 0xd8, 0xfa, 0x92, 0xd8, 0x32 },
+	{ 0xce, 0x7c, 0xf2, 0x72, 0x2f, 0x51, 0x27, 0x71 },
+	{ 0xe3, 0x78, 0x59, 0xf9, 0x46, 0x23, 0xf3, 0xa7 },
+	{ 0x38, 0x12, 0x05, 0xbb, 0x1a, 0xb0, 0xe0, 0x12 },
+	{ 0xae, 0x97, 0xa1, 0x0f, 0xd4, 0x34, 0xe0, 0x15 },
+	{ 0xb4, 0xa3, 0x15, 0x08, 0xbe, 0xff, 0x4d, 0x31 },
+	{ 0x81, 0x39, 0x62, 0x29, 0xf0, 0x90, 0x79, 0x02 },
+	{ 0x4d, 0x0c, 0xf4, 0x9e, 0xe5, 0xd4, 0xdc, 0xca },
+	{ 0x5c, 0x73, 0x33, 0x6a, 0x76, 0xd8, 0xbf, 0x9a },
+	{ 0xd0, 0xa7, 0x04, 0x53, 0x6b, 0xa9, 0x3e, 0x0e },
+	{ 0x92, 0x59, 0x58, 0xfc, 0xd6, 0x42, 0x0c, 0xad },
+	{ 0xa9, 0x15, 0xc2, 0x9b, 0xc8, 0x06, 0x73, 0x18 },
+	{ 0x95, 0x2b, 0x79, 0xf3, 0xbc, 0x0a, 0xa6, 0xd4 },
+	{ 0xf2, 0x1d, 0xf2, 0xe4, 0x1d, 0x45, 0x35, 0xf9 },
+	{ 0x87, 0x57, 0x75, 0x19, 0x04, 0x8f, 0x53, 0xa9 },
+	{ 0x10, 0xa5, 0x6c, 0xf5, 0xdf, 0xcd, 0x9a, 0xdb },
+	{ 0xeb, 0x75, 0x09, 0x5c, 0xcd, 0x98, 0x6c, 0xd0 },
+	{ 0x51, 0xa9, 0xcb, 0x9e, 0xcb, 0xa3, 0x12, 0xe6 },
+	{ 0x96, 0xaf, 0xad, 0xfc, 0x2c, 0xe6, 0x66, 0xc7 },
+	{ 0x72, 0xfe, 0x52, 0x97, 0x5a, 0x43, 0x64, 0xee },
+	{ 0x5a, 0x16, 0x45, 0xb2, 0x76, 0xd5, 0x92, 0xa1 },
+	{ 0xb2, 0x74, 0xcb, 0x8e, 0xbf, 0x87, 0x87, 0x0a },
+	{ 0x6f, 0x9b, 0xb4, 0x20, 0x3d, 0xe7, 0xb3, 0x81 },
+	{ 0xea, 0xec, 0xb2, 0xa3, 0x0b, 0x22, 0xa8, 0x7f },
+	{ 0x99, 0x24, 0xa4, 0x3c, 0xc1, 0x31, 0x57, 0x24 },
+	{ 0xbd, 0x83, 0x8d, 0x3a, 0xaf, 0xbf, 0x8d, 0xb7 },
+	{ 0x0b, 0x1a, 0x2a, 0x32, 0x65, 0xd5, 0x1a, 0xea },
+	{ 0x13, 0x50, 0x79, 0xa3, 0x23, 0x1c, 0xe6, 0x60 },
+	{ 0x93, 0x2b, 0x28, 0x46, 0xe4, 0xd7, 0x06, 0x66 },
+	{ 0xe1, 0x91, 0x5f, 0x5c, 0xb1, 0xec, 0xa4, 0x6c },
+	{ 0xf3, 0x25, 0x96, 0x5c, 0xa1, 0x6d, 0x62, 0x9f },
+	{ 0x57, 0x5f, 0xf2, 0x8e, 0x60, 0x38, 0x1b, 0xe5 },
+	{ 0x72, 0x45, 0x06, 0xeb, 0x4c, 0x32, 0x8a, 0x95 }
+};
+
+static int siphash24_selftest(void)
+{
+	uint8_t in[64], k[16], i;
+	uint64_t out;
+	int ret = 0;
+
+	for (i = 0; i < 16; ++i)
+		k[i] = i;
+
+	for (i = 0; i < 64; ++i) {
+		in[i] = i;
+		out = siphash24(in, i, k);
+		if (memcmp(&out, test_vectors[i], 8)) {
+			printk(KERN_INFO "siphash24: self-test %u: FAIL\n", i + 1);
+			ret = -1;
+		}
+	}
+	if (!ret)
+		printk(KERN_INFO "siphash24: self-tests: pass\n");
+	return ret;
+}
+__initcall(siphash24_selftest);
+#endif