Message ID | 20230731093902.1796249-1-ardb@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | target/riscv: Use accelerated helper for AES64KS1I |
(cc riscv maintainers)

On Mon, 31 Jul 2023 at 11:39, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> Use the accelerated SubBytes/ShiftRows/AddRoundKey AES helper to
> implement the first half of the key schedule derivation. This does not
> actually involve shifting rows, so clone the same uint32_t 4 times into
> the AES vector to counter that.
>
> Cc: Richard Henderson <richard.henderson@linaro.org>
> Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  target/riscv/crypto_helper.c | 17 +++++------------
>  1 file changed, 5 insertions(+), 12 deletions(-)
>
> diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
> index 4d65945429c6dcc4..257c5c4863fb160f 100644
> --- a/target/riscv/crypto_helper.c
> +++ b/target/riscv/crypto_helper.c
> @@ -148,24 +148,17 @@ target_ulong HELPER(aes64ks1i)(target_ulong rs1, target_ulong rnum)
>
>      uint8_t enc_rnum = rnum;
>      uint32_t temp = (RS1 >> 32) & 0xFFFFFFFF;
> -    uint8_t rcon_ = 0;
> -    target_ulong result;
> +    AESState t, rc = {};
>
>      if (enc_rnum != 0xA) {
>          temp = ror32(temp, 8); /* Rotate right by 8 */
> -        rcon_ = round_consts[enc_rnum];
> +        rc.w[0] = rc.w[1] = rc.w[2] = rc.w[3] = round_consts[enc_rnum];

This can be simplified to

    rc.w[0] = rc.w[1] = round_consts[enc_rnum];

>      }
>
> -    temp = ((uint32_t)AES_sbox[(temp >> 24) & 0xFF] << 24) |
> -           ((uint32_t)AES_sbox[(temp >> 16) & 0xFF] << 16) |
> -           ((uint32_t)AES_sbox[(temp >> 8) & 0xFF] << 8) |
> -           ((uint32_t)AES_sbox[(temp >> 0) & 0xFF] << 0);
> +    t.w[0] = t.w[1] = t.w[2] = t.w[3] = temp;
> +    aesenc_SB_SR_AK(&t, &t, &rc, false);
>
> -    temp ^= rcon_;
> -
> -    result = ((uint64_t)temp << 32) | temp;
> -
> -    return result;
> +    return t.d[0];
>  }
>
>  target_ulong HELPER(aes64im)(target_ulong rs1)
> --
> 2.39.2
>