From patchwork Mon Sep 25 15:17:50 2023
X-Patchwork-Submitter: "Chang S. Bae" <chang.seok.bae@intel.com>
X-Patchwork-Id: 13398000
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, davem@davemloft.net, ebiggers@kernel.org,
    x86@kernel.org, chang.seok.bae@intel.com
Subject: [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code
Date: Mon, 25 Sep 2023 08:17:50 -0700
Message-Id: <20230925151752.162449-2-chang.seok.bae@intel.com>
In-Reply-To: <20230925151752.162449-1-chang.seok.bae@intel.com>
References: <20230925151752.162449-1-chang.seok.bae@intel.com>

The address alignment code is duplicated for each mode. Instead of
repeating the same logic, factor it out into a common helper and
simplify the per-mode alignment helpers.

Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
---
 arch/x86/crypto/aesni-intel_glue.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 39d6a62ac627..241d38ae1ed9 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -80,6 +80,13 @@ struct gcm_context_data {
 	u8 hash_keys[GCM_BLOCK_LEN * 16];
 };
 
+static inline void *aes_align_addr(void *addr)
+{
+	if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
+		return addr;
+	return PTR_ALIGN(addr, AESNI_ALIGN);
+}
+
 asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
 			     unsigned int key_len);
 asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
@@ -201,32 +208,19 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
 static inline struct
 aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
 {
-	unsigned long align = AESNI_ALIGN;
-
-	if (align <= crypto_tfm_ctx_alignment())
-		align = 1;
-	return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+	return (struct aesni_rfc4106_gcm_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
 }
 
 static inline struct
 generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
 {
-	unsigned long align = AESNI_ALIGN;
-
-	if (align <= crypto_tfm_ctx_alignment())
-		align = 1;
-	return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+	return (struct generic_gcmaes_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
 }
 #endif
 
 static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
 {
-	unsigned long addr = (unsigned long)raw_ctx;
-	unsigned long align = AESNI_ALIGN;
-
-	if (align <= crypto_tfm_ctx_alignment())
-		align = 1;
-	return (struct crypto_aes_ctx *)ALIGN(addr, align);
+	return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
 }
 
 static int aes_set_key_common(struct crypto_aes_ctx *ctx,
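
The new aes_align_addr() helper has a fast path: when
crypto_tfm_ctx_alignment() already guarantees at least AESNI_ALIGN
(16 bytes), the address is returned untouched; otherwise it is rounded
up with PTR_ALIGN(). For illustration, that rounding can be exercised
in a standalone userspace sketch. The PTR_ALIGN() below is a stand-in
with the same round-up semantics as the kernel macro, and AESNI_ALIGN
is hard-coded to the 16-byte value the driver uses; neither is kernel
code.

#include <stdint.h>
#include <stdio.h>

#define AESNI_ALIGN	16
/* Stand-in for the kernel's PTR_ALIGN(): round p up to a multiple of a. */
#define PTR_ALIGN(p, a)	\
	((void *)(((uintptr_t)(p) + ((a) - 1)) & ~(uintptr_t)((a) - 1)))

int main(void)
{
	char buf[2 * AESNI_ALIGN];
	/* An interior pointer that is typically not 16-byte aligned. */
	void *unaligned = buf + 1;
	/* Rounded up to the next 16-byte boundary, as the slow path does. */
	void *aligned = PTR_ALIGN(unaligned, AESNI_ALIGN);

	printf("unaligned %p -> aligned %p\n", unaligned, aligned);
	return 0;
}

The old per-mode helpers encoded the same fast path indirectly by
setting align to 1; folding it into one helper removes the three
duplicated copies counted in the diffstat above.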
Bae" X-Patchwork-Id: 13398001 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4831CE7A95 for ; Mon, 25 Sep 2023 15:31:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232767AbjIYPcA (ORCPT ); Mon, 25 Sep 2023 11:32:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232660AbjIYPb4 (ORCPT ); Mon, 25 Sep 2023 11:31:56 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E9D0B95; Mon, 25 Sep 2023 08:31:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1695655909; x=1727191909; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TUSbpHufyWQOufCo4M+g4wkojF2tw2umePEM+zT+GfU=; b=fBYaqubmIJSl3bHRpoSLqQolKmNhsGwzgSm+MHAxZM01lb+PwLXSbL3l DRIay0uYT5dDOjdvnYFGYTfXve8xuzs6WFCcLOoQbgbHqnuKbC+fuaHIC jBaFb+vJBmYVZiHiHvqNxRqm7tpIslVlu8tEFEGsUMxBTqSDPbEexXjca 8DOPei/jVUK1ud+i75XPqwmC2J1sDLysIDr3mOLN7JNWmJiUk2aseoBwH Wn0Wt9HQ4EiA3BxF4FD4oD03ohWIGb9zXLFn2P6zsbFJ7ujBZPQXwDN5+ DbNzu/6OezNmYtZQJf47au7JuVv9GP6WAPg/CbPxUH9U4Kmnlbcv8ARk6 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="361533596" X-IronPort-AV: E=Sophos;i="6.03,175,1694761200"; d="scan'208";a="361533596" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2023 08:31:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="725023012" X-IronPort-AV: E=Sophos;i="6.03,175,1694761200"; d="scan'208";a="725023012" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga006.jf.intel.com with ESMTP; 25 Sep 2023 08:31:48 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, davem@davemloft.net, ebiggers@kernel.org, x86@kernel.org, chang.seok.bae@intel.com Subject: [PATCH 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx Date: Mon, 25 Sep 2023 08:17:51 -0700 Message-Id: <20230925151752.162449-3-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230925151752.162449-1-chang.seok.bae@intel.com> References: <20230925151752.162449-1-chang.seok.bae@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Currently, every field in struct aesni_xts_ctx is defined as a byte array of the same size as struct crypto_aes_ctx. This data type is obscure and the choice lacks justification. To rectify this, update the field type in struct aesni_xts_ctx to match its actual structure. Suggested-by: Eric Biggers Signed-off-by: Chang S. 
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
---
 arch/x86/crypto/aesni-intel_glue.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 241d38ae1ed9..412a99e914a6 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -61,8 +61,8 @@ struct generic_gcmaes_ctx {
 };
 
 struct aesni_xts_ctx {
-	u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
-	u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
+	struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
+	struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
 };
 
 #define GCM_BLOCK_LEN 16
@@ -885,13 +885,12 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	keylen /= 2;
 
 	/* first half of xts-key is for crypt */
-	err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen);
+	err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
 	if (err)
 		return err;
 
 	/* second half of xts-key is for tweak */
-	return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen,
-				  keylen);
+	return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
 }
 
 static int xts_crypt(struct skcipher_request *req, bool encrypt)
@@ -933,7 +932,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 	kernel_fpu_begin();
 
 	/* calculate first value of T */
-	aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+	aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
 
 	while (walk.nbytes > 0) {
 		int nbytes = walk.nbytes;
@@ -942,11 +941,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 		nbytes &= ~(AES_BLOCK_SIZE - 1);
 
 		if (encrypt)
-			aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+			aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
 					  walk.dst.virt.addr, walk.src.virt.addr,
 					  nbytes, walk.iv);
 		else
-			aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+			aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
 					  walk.dst.virt.addr, walk.src.virt.addr,
 					  nbytes, walk.iv);
 		kernel_fpu_end();
@@ -974,11 +973,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 	kernel_fpu_begin();
 
 	if (encrypt)
-		aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+		aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
 				  walk.dst.virt.addr, walk.src.virt.addr,
 				  walk.nbytes, walk.iv);
 	else
-		aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+		aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
 				  walk.dst.virt.addr, walk.src.virt.addr,
 				  walk.nbytes, walk.iv);
 	kernel_fpu_end();
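
Because the old u8 arrays were sized with sizeof(struct crypto_aes_ctx)
and carried the same alignment attribute, this conversion cannot change
the context layout; only the types become meaningful. The standalone
sketch below checks that claim. struct crypto_aes_ctx is paraphrased
from include/crypto/aes.h, and the *_old/*_new struct names exist only
for this comparison; none of this is part of the patch.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define AESNI_ALIGN_ATTR __attribute__((__aligned__(16)))

/* Paraphrased from include/crypto/aes.h (60 == AES_MAX_KEYLENGTH_U32). */
struct crypto_aes_ctx {
	uint32_t key_enc[60];
	uint32_t key_dec[60];
	uint32_t key_length;
};

struct aesni_xts_ctx_old {	/* before this patch */
	uint8_t raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
	uint8_t raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
};

struct aesni_xts_ctx_new {	/* after this patch */
	struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
	struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
};

/* Same total size and same member offsets: a pure type correction. */
static_assert(sizeof(struct aesni_xts_ctx_old) ==
	      sizeof(struct aesni_xts_ctx_new), "size unchanged");
static_assert(offsetof(struct aesni_xts_ctx_old, raw_crypt_ctx) ==
	      offsetof(struct aesni_xts_ctx_new, crypt_ctx), "offset unchanged");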
Bae" X-Patchwork-Id: 13398002 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CA60CE7A96 for ; Mon, 25 Sep 2023 15:31:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232844AbjIYPcB (ORCPT ); Mon, 25 Sep 2023 11:32:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39282 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232689AbjIYPb5 (ORCPT ); Mon, 25 Sep 2023 11:31:57 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C65AC109; Mon, 25 Sep 2023 08:31:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1695655910; x=1727191910; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=eY/JIAPvntiiqrDrvRdcoHfSNg9uoZBoHI1Bx3x9qIU=; b=M+nojskkojeLMaS5lcfjtjcW6K0konJ7AM25ICqXu7uUeF+bgXxygMaI fPSOGqVOslw5a57qHnL2eWV5/yuTCLZTaBH6HMhY0WepcWzbRLVIe0bUj LIf65AUMBg9J1z4GvYuk9lDwa9qCyRf2EJoh5St+lVrUaytTKdoOxp8Jk tyZmvyY5rgmegHfglCVJcxMHZIxhiBUrzYacnlvgpksGoFhxK2E7bm+5l PqDDEFEdV8Y1O5dEB0lRX8W9QW/xKXn73s0YsmgNvSGdjNqR6HLhjD5rQ xkgM9kuUvRSntV3191857hjzdqkLUzgwhjeeIgQKHTMMmHPHjxVrTAh61 Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="361533603" X-IronPort-AV: E=Sophos;i="6.03,175,1694761200"; d="scan'208";a="361533603" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2023 08:31:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="725023015" X-IronPort-AV: E=Sophos;i="6.03,175,1694761200"; d="scan'208";a="725023015" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga006.jf.intel.com with ESMTP; 25 Sep 2023 08:31:49 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, davem@davemloft.net, ebiggers@kernel.org, x86@kernel.org, chang.seok.bae@intel.com Subject: [PATCH 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode Date: Mon, 25 Sep 2023 08:17:52 -0700 Message-Id: <20230925151752.162449-4-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230925151752.162449-1-chang.seok.bae@intel.com> References: <20230925151752.162449-1-chang.seok.bae@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Currently, the alignment of each field in struct aesni_xts_ctx occurs right before every access. However, it's possible to perform this alignment ahead of time. Introduce a helper function that converts struct crypto_skcipher *tfm to struct aesni_xts_ctx *ctx and returns an aligned address. Utilize this helper function at the beginning of each XTS function and then eliminate redundant alignment code. Suggested-by: Eric Biggers Signed-off-by: Chang S. 
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
---
 arch/x86/crypto/aesni-intel_glue.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 412a99e914a6..b344652510a3 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -223,6 +223,11 @@ static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
 	return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
 }
 
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+	return (struct aesni_xts_ctx *)aes_align_addr(crypto_skcipher_ctx(tfm));
+}
+
 static int aes_set_key_common(struct crypto_aes_ctx *ctx,
 			      const u8 *in_key, unsigned int key_len)
 {
@@ -875,7 +880,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			    unsigned int keylen)
 {
-	struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
 	int err;
 
 	err = xts_verify_key(tfm, key, keylen);
@@ -885,18 +890,18 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	keylen /= 2;
 
 	/* first half of xts-key is for crypt */
-	err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
+	err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
 	if (err)
 		return err;
 
 	/* second half of xts-key is for tweak */
-	return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
+	return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
 }
 
 static int xts_crypt(struct skcipher_request *req, bool encrypt)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
 	int tail = req->cryptlen % AES_BLOCK_SIZE;
 	struct skcipher_request subreq;
 	struct skcipher_walk walk;
@@ -932,7 +937,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 	kernel_fpu_begin();
 
 	/* calculate first value of T */
-	aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
+	aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
 
 	while (walk.nbytes > 0) {
 		int nbytes = walk.nbytes;
@@ -941,11 +946,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 		nbytes &= ~(AES_BLOCK_SIZE - 1);
 
 		if (encrypt)
-			aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+			aesni_xts_encrypt(&ctx->crypt_ctx,
 					  walk.dst.virt.addr, walk.src.virt.addr,
 					  nbytes, walk.iv);
 		else
-			aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+			aesni_xts_decrypt(&ctx->crypt_ctx,
 					  walk.dst.virt.addr, walk.src.virt.addr,
 					  nbytes, walk.iv);
 		kernel_fpu_end();
@@ -973,11 +978,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 	kernel_fpu_begin();
 
 	if (encrypt)
-		aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+		aesni_xts_encrypt(&ctx->crypt_ctx,
 				  walk.dst.virt.addr, walk.src.virt.addr,
 				  walk.nbytes, walk.iv);
 	else
-		aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+		aesni_xts_decrypt(&ctx->crypt_ctx,
 				  walk.dst.virt.addr, walk.src.virt.addr,
 				  walk.nbytes, walk.iv);
 	kernel_fpu_end();
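
Taken together, the series converges on a simple pattern: align the raw
context once, at the point where it becomes a typed pointer, and let
every later call site use that pointer directly, which is why the
aes_ctx() wrappers disappear from the XTS hot path above. A standalone
sketch of the pattern follows; every name and type in it is an
illustrative stand-in, not kernel API.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ALIGNMENT	16
/* Stand-in for the kernel's PTR_ALIGN(): round p up to a multiple of a. */
#define PTR_ALIGN(p, a)	\
	((void *)(((uintptr_t)(p) + ((a) - 1)) & ~(uintptr_t)((a) - 1)))

struct aes_key_sketch { uint32_t words[60]; };

struct xts_ctx_sketch {
	struct aes_key_sketch tweak;
	struct aes_key_sketch crypt;
};

/* Align once, where the raw buffer becomes a typed context pointer. */
static struct xts_ctx_sketch *get_ctx(void *raw)
{
	return PTR_ALIGN(raw, ALIGNMENT);
}

int main(void)
{
	/* Over-allocate so rounding up always stays inside the buffer. */
	char raw[sizeof(struct xts_ctx_sketch) + ALIGNMENT];
	struct xts_ctx_sketch *ctx = get_ctx(raw);

	/* Every later access uses the already-aligned, typed pointer. */
	memset(&ctx->tweak, 0, sizeof(ctx->tweak));
	memset(&ctx->crypt, 0, sizeof(ctx->crypt));
	printf("ctx at %p\n", (void *)ctx);
	return 0;
}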