Message ID:    20240311213232.128240-1-chang.seok.bae@intel.com
State:         Superseded
Delegated to:  Herbert Xu
Series:        crypto: x86/aesni - Update aesni_set_key() to return void
On Mon, Mar 11, 2024 at 02:32:32PM -0700, Chang S. Bae wrote:
> The aesni_set_key() implementation has no error case, yet its prototype
> specifies to return an error code.
>
> Modify the function prototype to return void and adjust the related code.
>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Cc: Eric Biggers <ebiggers@kernel.org>
> Cc: linux-crypto@vger.kernel.org
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Previously, Eric identified a similar case in my AES-KL code [1]. Then,
> this parallel issue was realized.
>
> [1]: https://lore.kernel.org/lkml/20230607053558.GC941@sol.localdomain/
> ---
>  arch/x86/crypto/aesni-intel_asm.S  | 5 ++---
>  arch/x86/crypto/aesni-intel_glue.c | 6 +++---
>  2 files changed, 5 insertions(+), 6 deletions(-)

Reviewed-by: Eric Biggers <ebiggers@google.com>

- Eric
On Mon, 11 Mar 2024 at 22:48, Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> The aesni_set_key() implementation has no error case, yet its prototype
> specifies to return an error code.
>
> Modify the function prototype to return void and adjust the related code.
>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Cc: Eric Biggers <ebiggers@kernel.org>
> Cc: linux-crypto@vger.kernel.org
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Previously, Eric identified a similar case in my AES-KL code [1]. Then,
> this parallel issue was realized.
>
> [1]: https://lore.kernel.org/lkml/20230607053558.GC941@sol.localdomain/
> ---
>  arch/x86/crypto/aesni-intel_asm.S  | 5 ++---
>  arch/x86/crypto/aesni-intel_glue.c | 6 +++---
>  2 files changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
> index 411d8c83e88a..7ecb55cae3d6 100644
> --- a/arch/x86/crypto/aesni-intel_asm.S
> +++ b/arch/x86/crypto/aesni-intel_asm.S
> @@ -1820,8 +1820,8 @@ SYM_FUNC_START_LOCAL(_key_expansion_256b)
>  SYM_FUNC_END(_key_expansion_256b)
>
>  /*
> - * int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
> - *                   unsigned int key_len)
> + * void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
> + *                    unsigned int key_len)
>   */
>  SYM_FUNC_START(aesni_set_key)
>         FRAME_BEGIN
> @@ -1926,7 +1926,6 @@ SYM_FUNC_START(aesni_set_key)
>         sub $0x10, UKEYP
>         cmp TKEYP, KEYP
>         jb .Ldec_key_loop
> -       xor AREG, AREG
>  #ifndef __x86_64__
>         popl KEYP
>  #endif
> diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> index b1d90c25975a..c807b2f48ea3 100644
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -87,8 +87,8 @@ static inline void *aes_align_addr(void *addr)
>         return PTR_ALIGN(addr, AESNI_ALIGN);
>  }
>
> -asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
> -                            unsigned int key_len);
> +asmlinkage void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
> +                             unsigned int key_len);
>  asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
>  asmlinkage void aesni_dec(const void *ctx, u8 *out, const u8 *in);
>  asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out,
> @@ -241,7 +241,7 @@ static int aes_set_key_common(struct crypto_aes_ctx *ctx,
>                 err = aes_expandkey(ctx, in_key, key_len);
>         else {
>                 kernel_fpu_begin();
> -               err = aesni_set_key(ctx, in_key, key_len);
> +               aesni_set_key(ctx, in_key, key_len);

This will leave 'err' uninitialized.

>                 kernel_fpu_end();
>         }
>
> --
> 2.34.1
>
>
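For readers following along, here is a minimal sketch of the state Ard is
describing, based on the quoted glue code: once aesni_set_key() returns void,
only the !crypto_simd_usable() branch assigns 'err', so the SIMD branch would
return an indeterminate value unless the variable is initialized. The explicit
'int err = 0;' below is only one illustrative way to keep it defined, not the
fix the thread settles on:

/* Sketch of aes_set_key_common() after the v1 change (illustrative only). */
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
                              const u8 *in_key, unsigned int key_len)
{
        int err = 0;    /* illustrative: keeps the SIMD branch well defined */

        if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
            key_len != AES_KEYSIZE_256)
                return -EINVAL;

        if (!crypto_simd_usable()) {
                err = aes_expandkey(ctx, in_key, key_len);
        } else {
                kernel_fpu_begin();
                aesni_set_key(ctx, in_key, key_len);    /* now returns void */
                kernel_fpu_end();
        }

        return err;
}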
On 3/12/2024 12:46 AM, Ard Biesheuvel wrote:
> On Mon, 11 Mar 2024 at 22:48, Chang S. Bae <chang.seok.bae@intel.com> wrote:
>>
>> @@ -241,7 +241,7 @@ static int aes_set_key_common(struct crypto_aes_ctx *ctx,
>>                 err = aes_expandkey(ctx, in_key, key_len);
>>         else {
>>                 kernel_fpu_begin();
>> -               err = aesni_set_key(ctx, in_key, key_len);
>> +               aesni_set_key(ctx, in_key, key_len);
>
> This will leave 'err' uninitialized.

Ah, right. Thanks for catching it.

Also, upon reviewing aes_expandkey(), I noticed there's no error case,
except for the key length sanity check.

While addressing this, perhaps additional cleanup could be considered, like:

@@ -233,19 +233,20 @@ static int aes_set_key_common(struct crypto_aes_ctx *ctx,
 {
        int err;

-       if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
-           key_len != AES_KEYSIZE_256)
-               return -EINVAL;
+       err = aes_check_keylen(key_len);
+       if (err)
+               return err;

        if (!crypto_simd_usable())
-               err = aes_expandkey(ctx, in_key, key_len);
+               /* no error with a valid key length */
+               aes_expandkey(ctx, in_key, key_len);
        else {
                kernel_fpu_begin();
-               err = aesni_set_key(ctx, in_key, key_len);
+               aesni_set_key(ctx, in_key, key_len);
                kernel_fpu_end();
        }

-       return err;
+       return 0;
 }

Thanks,
Chang
On Tue, 12 Mar 2024 at 16:03, Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> On 3/12/2024 12:46 AM, Ard Biesheuvel wrote:
> > On Mon, 11 Mar 2024 at 22:48, Chang S. Bae <chang.seok.bae@intel.com> wrote:
> >>
> >> @@ -241,7 +241,7 @@ static int aes_set_key_common(struct crypto_aes_ctx *ctx,
> >>                 err = aes_expandkey(ctx, in_key, key_len);
> >>         else {
> >>                 kernel_fpu_begin();
> >> -               err = aesni_set_key(ctx, in_key, key_len);
> >> +               aesni_set_key(ctx, in_key, key_len);
> >
> > This will leave 'err' uninitialized.
>
> Ah, right. Thanks for catching it.
>
> Also, upon reviewing aes_expandkey(), I noticed there's no error case,
> except for the key length sanity check.
>
> While addressing this, perhaps additional cleanup could be considered, like:
>
> @@ -233,19 +233,20 @@ static int aes_set_key_common(struct crypto_aes_ctx *ctx,
>  {
>         int err;
>
> -       if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
> -           key_len != AES_KEYSIZE_256)
> -               return -EINVAL;
> +       err = aes_check_keylen(key_len);
> +       if (err)
> +               return err;
>
>         if (!crypto_simd_usable())
> -               err = aes_expandkey(ctx, in_key, key_len);
> +               /* no error with a valid key length */
> +               aes_expandkey(ctx, in_key, key_len);
>         else {
>                 kernel_fpu_begin();
> -               err = aesni_set_key(ctx, in_key, key_len);
> +               aesni_set_key(ctx, in_key, key_len);
>                 kernel_fpu_end();
>         }
>
> -       return err;
> +       return 0;
>  }
>

I wonder whether we need aesni_set_key() at all.
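To make Ard's question concrete, the following is a purely hypothetical sketch
of what the setkey path could look like if the assembly helper were dropped and
the generic C key expansion were used unconditionally. It builds on the
aes_check_keylen() cleanup quoted above and is not a patch anyone posted in
this thread:

/* Hypothetical: setkey without aesni_set_key(), for illustration only. */
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
                              const u8 *in_key, unsigned int key_len)
{
        int err;

        err = aes_check_keylen(key_len);
        if (err)
                return err;

        /* aes_expandkey() cannot fail once the key length is validated */
        aes_expandkey(ctx, in_key, key_len);
        return 0;
}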
On 3/12/2024 8:18 AM, Ard Biesheuvel wrote:
>
> I wonder whether we need aesni_set_key() at all.

The following looks to be relevant from the AES-NI whitepaper [1]:

  The Relative Cost of the Key Expansion
  ...
  Some less frequent applications require frequent key scheduling. For
  example, some random number generators may rekey frequently to achieve
  forward secrecy. One extreme example is a Davies-Meyer hashing
  construction, which uses a block cipher primitive as a compression
  function, and the cipher is re-keyed for each processed data block.

  Although these are not the mainstream usage models of the AES
  instructions, we point out that the AESKEYGENASSIST and AESIMC
  instructions facilitate Key Expansion procedure which is lookup tables
  free, and faster than software only key expansion. In addition, we
  point out that unrolling of the key expansion code, which is provided
  in the previous sections, improves the key expansion performance. The
  AES256 case can also utilize the instruction AESENCLAST, for the sbox
  transformation, that is faster than using AESKEYGENASSIST.

[1] https://www.intel.com/content/dam/doc/white-paper/advanced-encryption-standard-new-instructions-set-paper.pdf

Thanks,
Chang
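As background on the quoted passage: Davies-Meyer builds a compression function
from a block cipher as H_i = E_{M_i}(H_{i-1}) XOR H_{i-1}, so the cipher is
keyed with a fresh message block on every iteration, which is why key-expansion
cost dominates there. A schematic sketch follows; aes_encrypt_block() and the
fixed 16-byte block/key size are illustrative assumptions, not kernel API:

#include <stddef.h>
#include <stdint.h>

#define BLK 16  /* AES block size in bytes */

/* Illustrative stand-in for one AES-128 block encryption under 'key'. */
void aes_encrypt_block(const uint8_t key[BLK], const uint8_t in[BLK],
                       uint8_t out[BLK]);

/*
 * Davies-Meyer: H_i = E_{M_i}(H_{i-1}) XOR H_{i-1}.
 * Each 16-byte message block becomes the cipher key, so the key
 * schedule is recomputed for every block processed.
 */
static void davies_meyer(uint8_t h[BLK], const uint8_t *msg, size_t nblocks)
{
        uint8_t tmp[BLK];

        for (size_t i = 0; i < nblocks; i++) {
                aes_encrypt_block(msg + i * BLK, h, tmp); /* re-key per block */
                for (int j = 0; j < BLK; j++)
                        h[j] ^= tmp[j];
        }
}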
diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index 411d8c83e88a..7ecb55cae3d6 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -1820,8 +1820,8 @@ SYM_FUNC_START_LOCAL(_key_expansion_256b)
 SYM_FUNC_END(_key_expansion_256b)
 
 /*
- * int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
- *                   unsigned int key_len)
+ * void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
+ *                    unsigned int key_len)
  */
 SYM_FUNC_START(aesni_set_key)
        FRAME_BEGIN
@@ -1926,7 +1926,6 @@ SYM_FUNC_START(aesni_set_key)
        sub $0x10, UKEYP
        cmp TKEYP, KEYP
        jb .Ldec_key_loop
-       xor AREG, AREG
 #ifndef __x86_64__
        popl KEYP
 #endif
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index b1d90c25975a..c807b2f48ea3 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -87,8 +87,8 @@ static inline void *aes_align_addr(void *addr)
        return PTR_ALIGN(addr, AESNI_ALIGN);
 }
 
-asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
-                            unsigned int key_len);
+asmlinkage void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
+                             unsigned int key_len);
 asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
 asmlinkage void aesni_dec(const void *ctx, u8 *out, const u8 *in);
 asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out,
@@ -241,7 +241,7 @@ static int aes_set_key_common(struct crypto_aes_ctx *ctx,
                err = aes_expandkey(ctx, in_key, key_len);
        else {
                kernel_fpu_begin();
-               err = aesni_set_key(ctx, in_key, key_len);
+               aesni_set_key(ctx, in_key, key_len);
                kernel_fpu_end();
        }
 
The aesni_set_key() implementation has no error case, yet its prototype
specifies to return an error code.

Modify the function prototype to return void and adjust the related code.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
Previously, Eric identified a similar case in my AES-KL code [1]. Then,
this parallel issue was realized.

[1]: https://lore.kernel.org/lkml/20230607053558.GC941@sol.localdomain/
---
 arch/x86/crypto/aesni-intel_asm.S  | 5 ++---
 arch/x86/crypto/aesni-intel_glue.c | 6 +++---
 2 files changed, 5 insertions(+), 6 deletions(-)