From patchwork Thu Nov 3 04:27:24 2022
X-Patchwork-Id: 13029510
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:24 -0500
Subject: [PATCH v3 01/17] crypto: tcrypt - test crc32
Message-Id: <20221103042740.6556-2-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org
Add self-test and speed tests for crc32, paralleling those offered
for crc32c and crct10dif.

Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 crypto/tcrypt.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index a82679b576bb..4426386dfb42 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1711,6 +1711,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("gcm(aria)");
 		break;
 
+	case 59:
+		ret += tcrypt_test("crc32");
+		break;
+
 	case 100:
 		ret += tcrypt_test("hmac(md5)");
 		break;
@@ -2317,6 +2321,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 			       generic_hash_speed_template);
 		if (mode > 300 && mode < 400) break;
 		fallthrough;
+	case 329:
+		test_hash_speed("crc32", sec, generic_hash_speed_template);
+		if (mode > 300 && mode < 400) break;
+		fallthrough;
 	case 399:
 		break;
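For reference, tcrypt cases are exercised by loading the module with the
matching mode number; a typical invocation of the new cases (illustrative,
using the standard tcrypt mode and sec module parameters) would be:

    modprobe tcrypt mode=59          # run the crc32 self-test
    modprobe tcrypt mode=329 sec=1   # time crc32 for 1 second per block size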
From patchwork Thu Nov 3 04:27:25 2022
X-Patchwork-Id: 13029509
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:25 -0500
Subject: [PATCH v3 02/17] crypto: tcrypt - test nhpoly1305
Message-Id: <20221103042740.6556-3-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org

Add a self-test mode for nhpoly1305.

Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 crypto/tcrypt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 4426386dfb42..7a6a56751043 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1715,6 +1715,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("crc32");
 		break;
 
+	case 60:
+		ret += tcrypt_test("nhpoly1305");
+		break;
+
 	case 100:
 		ret += tcrypt_test("hmac(md5)");
 		break;
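The same invocation style applies to this new case (illustrative):

    modprobe tcrypt mode=60   # run the nhpoly1305 self-test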
From patchwork Thu Nov 3 04:27:26 2022
X-Patchwork-Id: 13029521
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:26 -0500
Subject: [PATCH v3 03/17] crypto: tcrypt - reschedule during cycles speed tests
Message-Id: <20221103042740.6556-4-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org

commit 2af632996b89 ("crypto: tcrypt - reschedule during speed tests")
added cond_resched() calls to "Avoid RCU stalls in the case of
non-preemptible kernel and lengthy speed tests by rescheduling when
advancing from one block size to another." That commit only makes those
calls if the sec module parameter is used (run the speed test for a
certain number of seconds), not in the default "cycles" mode.
Expand those calls to also run in "cycles" mode, to reduce the rate of
RCU stall warnings:

    rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks:

Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 crypto/tcrypt.c | 44 ++++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 7a6a56751043..c025ba26b663 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -408,14 +408,13 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
 		}
 
-		if (secs) {
+		if (secs)
 			ret = test_mb_aead_jiffies(data, enc, bs,
 						   secs, num_mb);
-			cond_resched();
-		} else {
+		else
 			ret = test_mb_aead_cycles(data, enc, bs,
 						  num_mb);
-		}
+		cond_resched();
 
 		if (ret) {
 			pr_err("%s() failed return code=%d\n", e, ret);
@@ -661,13 +660,11 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 					       bs + (enc ? 0 : authsize),
 					       iv);
 
-			if (secs) {
-				ret = test_aead_jiffies(req, enc, bs,
-							secs);
-				cond_resched();
-			} else {
+			if (secs)
+				ret = test_aead_jiffies(req, enc, bs, secs);
+			else
 				ret = test_aead_cycles(req, enc, bs);
-			}
+			cond_resched();
 
 			if (ret) {
 				pr_err("%s() failed return code=%d\n", e, ret);
@@ -917,14 +914,13 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
 
 		ahash_request_set_crypt(req, sg, output, speed[i].plen);
 
-		if (secs) {
+		if (secs)
 			ret = test_ahash_jiffies(req, speed[i].blen,
 						 speed[i].plen, output, secs);
-			cond_resched();
-		} else {
+		else
 			ret = test_ahash_cycles(req, speed[i].blen,
 						speed[i].plen, output);
-		}
+		cond_resched();
 
 		if (ret) {
 			pr_err("hashing failed ret=%d\n", ret);
@@ -1184,15 +1180,14 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
 						    cur->sg, bs, iv);
 			}
 
-			if (secs) {
+			if (secs)
 				ret = test_mb_acipher_jiffies(data, enc,
 							      bs, secs,
 							      num_mb);
-				cond_resched();
-			} else {
+			else
 				ret = test_mb_acipher_cycles(data, enc,
 							     bs, num_mb);
-			}
+			cond_resched();
 
 			if (ret) {
 				pr_err("%s() failed flags=%x\n", e,
@@ -1401,14 +1396,11 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
 
 			skcipher_request_set_crypt(req, sg, sg, bs, iv);
 
-			if (secs) {
-				ret = test_acipher_jiffies(req, enc,
-							   bs, secs);
-				cond_resched();
-			} else {
-				ret = test_acipher_cycles(req, enc,
-							  bs);
-			}
+			if (secs)
+				ret = test_acipher_jiffies(req, enc, bs, secs);
+			else
+				ret = test_acipher_cycles(req, enc, bs);
+			cond_resched();
 
 			if (ret) {
 				pr_err("%s() failed flags=%x\n", e,
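After this change, every speed-test loop follows the same shape, with one
unconditional reschedule point per block size. A distilled sketch of the
resulting control flow (taken from the ahash hunk above):

    if (secs)
        ret = test_ahash_jiffies(req, speed[i].blen,
                                 speed[i].plen, output, secs);
    else
        ret = test_ahash_cycles(req, speed[i].blen,
                                speed[i].plen, output);
    cond_resched();   /* now reached in both "secs" and "cycles" modes */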
From patchwork Thu Nov 3 04:27:27 2022
X-Patchwork-Id: 13029513
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:27 -0500
Subject: [PATCH v3 04/17] crypto: x86/sha - limit FPU preemption
Message-Id: <20221103042740.6556-5-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org

Limit the number of bytes processed between kernel_fpu_begin() and
kernel_fpu_end() calls. Those functions call preempt_disable() and
preempt_enable(), so the CPU core is unavailable for scheduling while
they are running. This leads to "rcu_preempt detected expedited stalls"
with stack dumps pointing to the optimized hash function if the module
is loaded and used a lot:

    rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...

For example, that can occur during boot with the stack trace pointing
to the sha512-x86 function if the system is set to use SHA-512 for
module signing.
The call trace includes:

    module_sig_check
    mod_verify_sig
    pkcs7_verify
    pkcs7_digest
    sha512_finup
    sha512_base_do_update

Fixes: 66be89515888 ("crypto: sha1 - SSSE3 based SHA1 implementation for x86-64")
Fixes: 8275d1aa6422 ("crypto: sha256 - Create module providing optimized SHA256 routines using SSSE3, AVX or AVX2 instructions.")
Fixes: 87de4579f92d ("crypto: sha512 - Create module providing optimized SHA512 routines using SSSE3, AVX or AVX2 instructions.")
Fixes: aa031b8f702e ("crypto: x86/sha512 - load based on CPU features")
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
v3: simplify to while loops rather than do..while loops; avoid
    redundant checks for zero length; rename the limit macro and
    change it into a const; vary the limit for each algorithm
---
 arch/x86/crypto/sha1_ssse3_glue.c   | 64 ++++++++++++++++++++++-------
 arch/x86/crypto/sha256_ssse3_glue.c | 64 ++++++++++++++++++++++-------
 arch/x86/crypto/sha512_ssse3_glue.c | 55 +++++++++++++++++++------
 3 files changed, 140 insertions(+), 43 deletions(-)

diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 44340a1139e0..4bc77c84b0fb 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -26,8 +26,17 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+#ifdef CONFIG_AS_SHA1_NI
+static const unsigned int bytes_per_fpu_shani = 34 * 1024;
+#endif
+static const unsigned int bytes_per_fpu_avx2 = 34 * 1024;
+static const unsigned int bytes_per_fpu_avx = 30 * 1024;
+static const unsigned int bytes_per_fpu_ssse3 = 26 * 1024;
+
 static int sha1_update(struct shash_desc *desc, const u8 *data,
-		       unsigned int len, sha1_block_fn *sha1_xform)
+		       unsigned int len, unsigned int bytes_per_fpu,
+		       sha1_block_fn *sha1_xform)
 {
 	struct sha1_state *sctx = shash_desc_ctx(desc);
 
@@ -41,22 +50,39 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
 
-	kernel_fpu_begin();
-	sha1_base_do_update(desc, data, len, sha1_xform);
-	kernel_fpu_end();
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sha1_base_do_update(desc, data, chunk, sha1_xform);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
 
 	return 0;
 }
 
 static int sha1_finup(struct shash_desc *desc, const u8 *data,
-		      unsigned int len, u8 *out, sha1_block_fn *sha1_xform)
+		      unsigned int len, unsigned int bytes_per_fpu,
+		      u8 *out, sha1_block_fn *sha1_xform)
 {
 	if (!crypto_simd_usable())
 		return crypto_sha1_finup(desc, data, len, out);
 
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sha1_base_do_update(desc, data, chunk, sha1_xform);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
+
 	kernel_fpu_begin();
-	if (len)
-		sha1_base_do_update(desc, data, len, sha1_xform);
 	sha1_base_do_finalize(desc, sha1_xform);
 	kernel_fpu_end();
 
@@ -69,13 +95,15 @@ asmlinkage void sha1_transform_ssse3(struct sha1_state *state,
 static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
 			     unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_transform_ssse3);
+	return sha1_update(desc, data, len, bytes_per_fpu_ssse3,
+			   sha1_transform_ssse3);
 }
 
 static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data,
 			    unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_transform_ssse3);
+	return sha1_finup(desc, data, len, bytes_per_fpu_ssse3, out,
+			  sha1_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -119,13 +147,15 @@ asmlinkage void sha1_transform_avx(struct sha1_state *state,
 static int sha1_avx_update(struct shash_desc *desc, const u8 *data,
 			   unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_transform_avx);
+	return sha1_update(desc, data, len, bytes_per_fpu_avx,
+			   sha1_transform_avx);
 }
 
 static int sha1_avx_finup(struct shash_desc *desc, const u8 *data,
 			  unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_transform_avx);
+	return sha1_finup(desc, data, len, bytes_per_fpu_avx, out,
+			  sha1_transform_avx);
 }
 
 static int sha1_avx_final(struct shash_desc *desc, u8 *out)
@@ -201,13 +231,15 @@ static void sha1_apply_transform_avx2(struct sha1_state *state,
 static int sha1_avx2_update(struct shash_desc *desc, const u8 *data,
 			    unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_apply_transform_avx2);
+	return sha1_update(desc, data, len, bytes_per_fpu_avx2,
+			   sha1_apply_transform_avx2);
 }
 
 static int sha1_avx2_finup(struct shash_desc *desc, const u8 *data,
 			   unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_apply_transform_avx2);
+	return sha1_finup(desc, data, len, bytes_per_fpu_avx2, out,
+			  sha1_apply_transform_avx2);
 }
 
 static int sha1_avx2_final(struct shash_desc *desc, u8 *out)
@@ -251,13 +283,15 @@ asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data,
 static int sha1_ni_update(struct shash_desc *desc, const u8 *data,
 			  unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_ni_transform);
+	return sha1_update(desc, data, len, bytes_per_fpu_shani,
+			   sha1_ni_transform);
 }
 
 static int sha1_ni_finup(struct shash_desc *desc, const u8 *data,
 			 unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_ni_transform);
+	return sha1_finup(desc, data, len, bytes_per_fpu_shani, out,
+			  sha1_ni_transform);
 }
 
 static int sha1_ni_final(struct shash_desc *desc, u8 *out)

diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 3a5f6be7dbba..cdcdf5a80ffe 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -40,11 +40,20 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+#ifdef CONFIG_AS_SHA256_NI
+static const unsigned int bytes_per_fpu_shani = 13 * 1024;
+#endif
+static const unsigned int bytes_per_fpu_avx2 = 13 * 1024;
+static const unsigned int bytes_per_fpu_avx = 11 * 1024;
+static const unsigned int bytes_per_fpu_ssse3 = 11 * 1024;
+
 asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
 				       const u8 *data, int blocks);
 
 static int _sha256_update(struct shash_desc *desc, const u8 *data,
-			  unsigned int len, sha256_block_fn *sha256_xform)
+			  unsigned int len, unsigned int bytes_per_fpu,
+			  sha256_block_fn *sha256_xform)
 {
 	struct sha256_state *sctx = shash_desc_ctx(desc);
@@ -58,22 +67,39 @@ static int _sha256_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
 
-	kernel_fpu_begin();
-	sha256_base_do_update(desc, data, len, sha256_xform);
-	kernel_fpu_end();
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sha256_base_do_update(desc, data, chunk, sha256_xform);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
 
 	return 0;
 }
 
 static int sha256_finup(struct shash_desc *desc, const u8 *data,
-			unsigned int len, u8 *out, sha256_block_fn *sha256_xform)
+			unsigned int len, unsigned int bytes_per_fpu,
+			u8 *out, sha256_block_fn *sha256_xform)
 {
 	if (!crypto_simd_usable())
 		return crypto_sha256_finup(desc, data, len, out);
 
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sha256_base_do_update(desc, data, chunk, sha256_xform);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
+
 	kernel_fpu_begin();
-	if (len)
-		sha256_base_do_update(desc, data, len, sha256_xform);
 	sha256_base_do_finalize(desc, sha256_xform);
 	kernel_fpu_end();
 
@@ -83,13 +109,15 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data,
 static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
 			       unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_transform_ssse3);
+	return _sha256_update(desc, data, len, bytes_per_fpu_ssse3,
+			      sha256_transform_ssse3);
 }
 
 static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data,
 			      unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_transform_ssse3);
+	return sha256_finup(desc, data, len, bytes_per_fpu_ssse3,
+			    out, sha256_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -149,13 +177,15 @@ asmlinkage void sha256_transform_avx(struct sha256_state *state,
 static int sha256_avx_update(struct shash_desc *desc, const u8 *data,
 			     unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_transform_avx);
+	return _sha256_update(desc, data, len, bytes_per_fpu_avx,
+			      sha256_transform_avx);
 }
 
 static int sha256_avx_finup(struct shash_desc *desc, const u8 *data,
 			    unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_transform_avx);
+	return sha256_finup(desc, data, len, bytes_per_fpu_avx,
+			    out, sha256_transform_avx);
 }
 
 static int sha256_avx_final(struct shash_desc *desc, u8 *out)
@@ -225,13 +255,15 @@ asmlinkage void sha256_transform_rorx(struct sha256_state *state,
 static int sha256_avx2_update(struct shash_desc *desc, const u8 *data,
 			      unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_transform_rorx);
+	return _sha256_update(desc, data, len, bytes_per_fpu_avx2,
+			      sha256_transform_rorx);
 }
 
 static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data,
 			     unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_transform_rorx);
+	return sha256_finup(desc, data, len, bytes_per_fpu_avx2,
+			    out, sha256_transform_rorx);
 }
 
 static int sha256_avx2_final(struct shash_desc *desc, u8 *out)
@@ -300,13 +332,15 @@ asmlinkage void sha256_ni_transform(struct sha256_state *digest,
 static int sha256_ni_update(struct shash_desc *desc, const u8 *data,
 			    unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_ni_transform);
+	return _sha256_update(desc, data, len, bytes_per_fpu_shani,
+			      sha256_ni_transform);
 }
 
 static int sha256_ni_finup(struct shash_desc *desc, const u8 *data,
 			   unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_ni_transform);
+	return sha256_finup(desc, data, len, bytes_per_fpu_shani,
+			    out, sha256_ni_transform);
 }
 
 static int sha256_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 6d3b85e53d0e..c7036cfe2a7e 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -39,11 +39,17 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu_avx2 = 20 * 1024;
+static const unsigned int bytes_per_fpu_avx = 17 * 1024;
+static const unsigned int bytes_per_fpu_ssse3 = 17 * 1024;
+
 asmlinkage void sha512_transform_ssse3(struct sha512_state *state,
 				       const u8 *data, int blocks);
 
 static int sha512_update(struct shash_desc *desc, const u8 *data,
-			 unsigned int len, sha512_block_fn *sha512_xform)
+			 unsigned int len, unsigned int bytes_per_fpu,
+			 sha512_block_fn *sha512_xform)
 {
 	struct sha512_state *sctx = shash_desc_ctx(desc);
 
@@ -57,22 +63,39 @@ static int sha512_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0);
 
-	kernel_fpu_begin();
-	sha512_base_do_update(desc, data, len, sha512_xform);
-	kernel_fpu_end();
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sha512_base_do_update(desc, data, chunk, sha512_xform);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
 
 	return 0;
 }
 
 static int sha512_finup(struct shash_desc *desc, const u8 *data,
-			unsigned int len, u8 *out, sha512_block_fn *sha512_xform)
+			unsigned int len, unsigned int bytes_per_fpu,
+			u8 *out, sha512_block_fn *sha512_xform)
 {
 	if (!crypto_simd_usable())
 		return crypto_sha512_finup(desc, data, len, out);
 
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sha512_base_do_update(desc, data, chunk, sha512_xform);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
+
 	kernel_fpu_begin();
-	if (len)
-		sha512_base_do_update(desc, data, len, sha512_xform);
 	sha512_base_do_finalize(desc, sha512_xform);
 	kernel_fpu_end();
 
@@ -82,13 +105,15 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data,
 static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
 			       unsigned int len)
 {
-	return sha512_update(desc, data, len, sha512_transform_ssse3);
+	return sha512_update(desc, data, len, bytes_per_fpu_ssse3,
+			     sha512_transform_ssse3);
 }
 
 static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
 			      unsigned int len, u8 *out)
 {
-	return sha512_finup(desc, data, len, out, sha512_transform_ssse3);
+	return sha512_finup(desc, data, len, bytes_per_fpu_ssse3,
+			    out, sha512_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -158,13 +183,15 @@ static bool avx_usable(void)
 static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
 			     unsigned int len)
 {
-	return sha512_update(desc, data, len, sha512_transform_avx);
+	return sha512_update(desc, data, len, bytes_per_fpu_avx,
+			     sha512_transform_avx);
 }
 
 static int sha512_avx_finup(struct shash_desc *desc, const u8 *data,
 			    unsigned int len, u8 *out)
 {
-	return sha512_finup(desc, data, len, out, sha512_transform_avx);
+	return sha512_finup(desc, data, len, bytes_per_fpu_avx,
+			    out, sha512_transform_avx);
 }
 
 /* Add padding and return the message digest. */
@@ -224,13 +251,15 @@ asmlinkage void sha512_transform_rorx(struct sha512_state *state,
 static int sha512_avx2_update(struct shash_desc *desc, const u8 *data,
 			      unsigned int len)
 {
-	return sha512_update(desc, data, len, sha512_transform_rorx);
+	return sha512_update(desc, data, len, bytes_per_fpu_avx2,
+			     sha512_transform_rorx);
 }
 
 static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data,
 			     unsigned int len, u8 *out)
 {
-	return sha512_finup(desc, data, len, out, sha512_transform_rorx);
+	return sha512_finup(desc, data, len, bytes_per_fpu_avx2,
+			    out, sha512_transform_rorx);
 }
 
 /* Add padding and return the message digest. */
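The pattern this patch applies is generic: bound the work done inside each
preemption-disabled region and let the scheduler run between chunks. A
minimal standalone sketch of the idea (hash_update_chunked is a
hypothetical name; bytes_per_fpu stands in for the per-algorithm constants
above):

    static int hash_update_chunked(struct shash_desc *desc, const u8 *data,
                                   unsigned int len, unsigned int bytes_per_fpu,
                                   sha1_block_fn *xform)
    {
        while (len) {
            unsigned int chunk = min(len, bytes_per_fpu);

            kernel_fpu_begin();     /* disables preemption */
            sha1_base_do_update(desc, data, chunk, xform);
            kernel_fpu_end();       /* re-enables preemption; may reschedule */

            len -= chunk;
            data += chunk;
        }
        return 0;
    }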
From patchwork Thu Nov 3 04:27:28 2022
X-Patchwork-Id: 13029508
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:28 -0500
Subject: [PATCH v3 05/17] crypto: x86/crc - limit FPU preemption
Message-Id: <20221103042740.6556-6-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org
Limit the number of bytes processed between kernel_fpu_begin() and
kernel_fpu_end() calls. Those functions call preempt_disable() and
preempt_enable(), so the CPU core is unavailable for scheduling while
they are running, leading to:

    rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...

Fixes: 78c37d191dd6 ("crypto: crc32 - add crc32 pclmulqdq implementation and wrappers for table implementation")
Fixes: 6a8ce1ef3940 ("crypto: crc32c - Optimize CRC32C calculation with PCLMULQDQ instruction")
Fixes: 0b95a7f85718 ("crypto: crct10dif - Glue code to cast accelerated CRCT10DIF assembly as a crypto transform")
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
v3: use while loops and static const ints; simplify one of the loop
    structures; add algorithm-specific limits; use a local stack
    variable in crc32 finup rather than the context pointer like
    update uses
---
 arch/x86/crypto/crc32-pclmul_asm.S      |  6 +--
 arch/x86/crypto/crc32-pclmul_glue.c     | 27 +++++++++----
 arch/x86/crypto/crc32c-intel_glue.c     | 52 ++++++++++++++++++-------
 arch/x86/crypto/crct10dif-pclmul_glue.c | 48 +++++++++++++++++------
 4 files changed, 99 insertions(+), 34 deletions(-)

diff --git a/arch/x86/crypto/crc32-pclmul_asm.S b/arch/x86/crypto/crc32-pclmul_asm.S
index ca53e96996ac..9abd861636c3 100644
--- a/arch/x86/crypto/crc32-pclmul_asm.S
+++ b/arch/x86/crypto/crc32-pclmul_asm.S
@@ -72,15 +72,15 @@
 .text
 /**
  *      Calculate crc32
- *      BUF - buffer (16 bytes aligned)
- *      LEN - sizeof buffer (16 bytes aligned), LEN should be grater than 63
+ *      BUF - buffer - must be 16 bytes aligned
+ *      LEN - sizeof buffer - must be multiple of 16 bytes and greater than 63
 *      CRC - initial crc32
 *      return %eax crc32
 *      uint crc32_pclmul_le_16(unsigned char const *buffer,
 *                              size_t len, uint crc32)
 */
-SYM_FUNC_START(crc32_pclmul_le_16) /* buffer and buffer size are 16 bytes aligned */
+SYM_FUNC_START(crc32_pclmul_le_16)
 	movdqa  (BUF), %xmm1
 	movdqa  0x10(BUF), %xmm2
 	movdqa  0x20(BUF), %xmm3

diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c
index 98cf3b4e4c9f..df3dbc754818 100644
--- a/arch/x86/crypto/crc32-pclmul_glue.c
+++ b/arch/x86/crypto/crc32-pclmul_glue.c
@@ -46,6 +46,9 @@
 #define SCALE_F			16L	/* size of xmm register */
 #define SCALE_F_MASK		(SCALE_F - 1)
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 655 * 1024;
+
 u32 crc32_pclmul_le_16(unsigned char const *buffer, size_t len, u32 crc32);
 
 static u32 __attribute__((pure))
@@ -55,6 +58,9 @@ static u32 __attribute__((pure))
 	unsigned int iremainder;
 	unsigned int prealign;
 
+	BUILD_BUG_ON(bytes_per_fpu < PCLMUL_MIN_LEN);
+	BUILD_BUG_ON(bytes_per_fpu & SCALE_F_MASK);
+
 	if (len < PCLMUL_MIN_LEN + SCALE_F_MASK || !crypto_simd_usable())
 		return crc32_le(crc, p, len);
 
@@ -70,12 +76,19 @@ static u32 __attribute__((pure))
 	iquotient = len & (~SCALE_F_MASK);
 	iremainder = len & SCALE_F_MASK;
 
-	kernel_fpu_begin();
-	crc = crc32_pclmul_le_16(p, iquotient, crc);
-	kernel_fpu_end();
+	while (iquotient >= PCLMUL_MIN_LEN) {
+		unsigned int chunk = min(iquotient, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		crc = crc32_pclmul_le_16(p, chunk, crc);
+		kernel_fpu_end();
+
+		iquotient -= chunk;
+		p += chunk;
+	}
 
-	if (iremainder)
-		crc = crc32_le(crc, p + iquotient, iremainder);
+	if (iquotient || iremainder)
+		crc = crc32_le(crc, p, iquotient + iremainder);
 
 	return crc;
 }
@@ -120,8 +133,8 @@ static int crc32_pclmul_update(struct shash_desc *desc, const u8 *data,
 }
 
 /* No final XOR 0xFFFFFFFF, like crc32_le */
-static int __crc32_pclmul_finup(u32 *crcp, const u8 *data, unsigned int len,
-				u8 *out)
+static int __crc32_pclmul_finup(const u32 *crcp, const u8 *data,
+				unsigned int len, u8 *out)
 {
 	*(__le32 *)out = cpu_to_le32(crc32_pclmul_le(*crcp, data, len));
 	return 0;
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
index feccb5254c7e..f08ed68ec93d 100644
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -45,7 +45,10 @@ asmlinkage unsigned int crc_pcl(const u8 *buffer, int len,
 				unsigned int crc_init);
 #endif /* CONFIG_X86_64 */
 
-static u32 crc32c_intel_le_hw_byte(u32 crc, unsigned char const *data, size_t length)
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 868 * 1024;
+
+static u32 crc32c_intel_le_hw_byte(u32 crc, const unsigned char *data, size_t length)
 {
 	while (length--) {
 		asm("crc32b %1, %0"
@@ -56,7 +59,7 @@ static u32 crc32c_intel_le_hw_byte(u32 crc, unsigned char const *data, size_t le
 	return crc;
 }
 
-static u32 __pure crc32c_intel_le_hw(u32 crc, unsigned char const *p, size_t len)
+static u32 __pure crc32c_intel_le_hw(u32 crc, const unsigned char *p, size_t len)
 {
 	unsigned int iquotient = len / SCALE_F;
 	unsigned int iremainder = len % SCALE_F;
@@ -110,8 +113,8 @@ static int crc32c_intel_update(struct shash_desc *desc, const u8 *data,
 	return 0;
 }
 
-static int __crc32c_intel_finup(u32 *crcp, const u8 *data, unsigned int len,
-				u8 *out)
+static int __crc32c_intel_finup(const u32 *crcp, const u8 *data,
+				unsigned int len, u8 *out)
 {
 	*(__le32 *)out = ~cpu_to_le32(crc32c_intel_le_hw(*crcp, data, len));
 	return 0;
@@ -153,29 +156,52 @@ static int crc32c_pcl_intel_update(struct shash_desc *desc, const u8 *data,
 {
 	u32 *crcp = shash_desc_ctx(desc);
 
+	BUILD_BUG_ON(bytes_per_fpu < CRC32C_PCL_BREAKEVEN);
+	BUILD_BUG_ON(bytes_per_fpu % SCALE_F);
+
 	/*
 	 * use faster PCL version if datasize is large enough to
 	 * overcome kernel fpu state save/restore overhead
 	 */
 	if (len >= CRC32C_PCL_BREAKEVEN && crypto_simd_usable()) {
-		kernel_fpu_begin();
-		*crcp = crc_pcl(data, len, *crcp);
-		kernel_fpu_end();
+		while (len) {
+			unsigned int chunk = min(len, bytes_per_fpu);
+
+			kernel_fpu_begin();
+			*crcp = crc_pcl(data, chunk, *crcp);
+			kernel_fpu_end();
+
+			len -= chunk;
+			data += chunk;
+		}
 	} else
 		*crcp = crc32c_intel_le_hw(*crcp, data, len);
 	return 0;
 }
 
-static int __crc32c_pcl_intel_finup(u32 *crcp, const u8 *data, unsigned int len,
-				    u8 *out)
+static int __crc32c_pcl_intel_finup(const u32 *crcp, const u8 *data,
+				    unsigned int len, u8 *out)
 {
+	u32 crc = *crcp;
+
+	BUILD_BUG_ON(bytes_per_fpu < CRC32C_PCL_BREAKEVEN);
+	BUILD_BUG_ON(bytes_per_fpu % SCALE_F);
+
 	if (len >= CRC32C_PCL_BREAKEVEN && crypto_simd_usable()) {
-		kernel_fpu_begin();
-		*(__le32 *)out = ~cpu_to_le32(crc_pcl(data, len, *crcp));
-		kernel_fpu_end();
+		while (len) {
+			unsigned int chunk = min(len, bytes_per_fpu);
+
+			kernel_fpu_begin();
+			crc = crc_pcl(data, chunk, crc);
+			kernel_fpu_end();
+
+			len -= chunk;
+			data += chunk;
+		}
+		*(__le32 *)out = ~cpu_to_le32(crc);
 	} else
 		*(__le32 *)out =
-			~cpu_to_le32(crc32c_intel_le_hw(*crcp, data, len));
+			~cpu_to_le32(crc32c_intel_le_hw(crc, data, len));
 	return 0;
 }
diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
index 71291d5af9f4..4f6b8c727d88 100644
--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
+++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
@@ -34,6 +34,11 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+#define PCLMUL_MIN_LEN		16U	/* minimum size of buffer for crc_t10dif_pcl */
+
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 614 * 1024;
+
 asmlinkage u16 crc_t10dif_pcl(u16 init_crc, const u8 *buf, size_t len);
 
 struct chksum_desc_ctx {
@@ -54,11 +59,21 @@ static int chksum_update(struct shash_desc *desc, const u8 *data,
 {
 	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
 
-	if (length >= 16 && crypto_simd_usable()) {
-		kernel_fpu_begin();
-		ctx->crc = crc_t10dif_pcl(ctx->crc, data, length);
-		kernel_fpu_end();
-	} else
+	BUILD_BUG_ON(bytes_per_fpu < PCLMUL_MIN_LEN);
+
+	if (length >= PCLMUL_MIN_LEN && crypto_simd_usable()) {
+		while (length >= PCLMUL_MIN_LEN) {
+			unsigned int chunk = min(length, bytes_per_fpu);
+
+			kernel_fpu_begin();
+			ctx->crc = crc_t10dif_pcl(ctx->crc, data, chunk);
+			kernel_fpu_end();
+
+			length -= chunk;
+			data += chunk;
+		}
+	}
+	if (length)
 		ctx->crc = crc_t10dif_generic(ctx->crc, data, length);
 	return 0;
 }
@@ -73,12 +88,23 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
 
 static int __chksum_finup(__u16 crc, const u8 *data, unsigned int len, u8 *out)
 {
-	if (len >= 16 && crypto_simd_usable()) {
-		kernel_fpu_begin();
-		*(__u16 *)out = crc_t10dif_pcl(crc, data, len);
-		kernel_fpu_end();
-	} else
-		*(__u16 *)out = crc_t10dif_generic(crc, data, len);
+	BUILD_BUG_ON(bytes_per_fpu < PCLMUL_MIN_LEN);
+
+	if (len >= PCLMUL_MIN_LEN && crypto_simd_usable()) {
+		while (len >= PCLMUL_MIN_LEN) {
+			unsigned int chunk = min(len, bytes_per_fpu);
+
+			kernel_fpu_begin();
+			crc = crc_t10dif_pcl(crc, data, chunk);
+			kernel_fpu_end();
+
+			len -= chunk;
+			data += chunk;
+		}
+	}
+	if (len)
+		crc = crc_t10dif_generic(crc, data, len);
+	*(__u16 *)out = crc;
 	return 0;
 }
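A rough sense of scale for the per-algorithm limits (illustrative
throughput figures, not measurements from the patch): if PCLMULQDQ-based
crc32c sustains on the order of 10 GB/s, then

    868 KiB / (10 GB/s) ~= 90 microseconds

per non-preemptible window, and the much smaller limits for slower
algorithms (e.g., 11 KiB for SM3 in a later patch) work out to a similar
window length, which is consistent with the limits being tuned per
algorithm rather than shared.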
From patchwork Thu Nov 3 04:27:29 2022
X-Patchwork-Id: 13029507
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:29 -0500
Subject: [PATCH v3 06/17] crypto: x86/sm3 - limit FPU preemption
Message-Id: <20221103042740.6556-7-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org

Limit the number of bytes processed between kernel_fpu_begin() and
kernel_fpu_end() calls. Those functions call preempt_disable() and
preempt_enable(), so the CPU core is unavailable for scheduling while
they are running, causing:

    rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...
Fixes: 930ab34d906d ("crypto: x86/sm3 - add AVX assembly implementation")
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
v3: use while loop, static const int
---
 arch/x86/crypto/sm3_avx_glue.c | 35 ++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c
index 661b6f22ffcd..483aaed996ba 100644
--- a/arch/x86/crypto/sm3_avx_glue.c
+++ b/arch/x86/crypto/sm3_avx_glue.c
@@ -17,6 +17,9 @@
 #include <crypto/sm3_base.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 11 * 1024;
+
 asmlinkage void sm3_transform_avx(struct sm3_state *state,
 				  const u8 *data, int nblocks);
 
@@ -25,8 +28,10 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data,
 {
 	struct sm3_state *sctx = shash_desc_ctx(desc);
 
+	BUILD_BUG_ON(bytes_per_fpu == 0);
+
 	if (!crypto_simd_usable() ||
-			(sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) {
+	    (sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) {
 		sm3_update(sctx, data, len);
 		return 0;
 	}
@@ -37,9 +42,16 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0);
 
-	kernel_fpu_begin();
-	sm3_base_do_update(desc, data, len, sm3_transform_avx);
-	kernel_fpu_end();
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sm3_base_do_update(desc, data, chunk, sm3_transform_avx);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
 
 	return 0;
 }
@@ -47,6 +59,8 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data,
 static int sm3_avx_finup(struct shash_desc *desc, const u8 *data,
 			 unsigned int len, u8 *out)
 {
+	BUILD_BUG_ON(bytes_per_fpu == 0);
+
 	if (!crypto_simd_usable()) {
 		struct sm3_state *sctx = shash_desc_ctx(desc);
 
@@ -57,9 +71,18 @@ static int sm3_avx_finup(struct shash_desc *desc, const u8 *data,
 		return 0;
 	}
 
+	while (len) {
+		unsigned int chunk = min(len, bytes_per_fpu);
+
+		kernel_fpu_begin();
+		sm3_base_do_update(desc, data, chunk, sm3_transform_avx);
+		kernel_fpu_end();
+
+		len -= chunk;
+		data += chunk;
+	}
+
 	kernel_fpu_begin();
-	if (len)
-		sm3_base_do_update(desc, data, len, sm3_transform_avx);
 	sm3_base_do_finalize(desc, sm3_transform_avx);
 	kernel_fpu_end();
From patchwork Thu Nov 3 04:27:30 2022
X-Patchwork-Id: 13029515
From: Robert Elliott <elliott@hpe.com>
Date: Wed, 2 Nov 2022 23:27:30 -0500
Subject: [PATCH v3 07/17] crypto: x86/ghash - use u8 rather than char
Message-Id: <20221103042740.6556-8-elliott@hpe.com>
In-Reply-To: <20221103042740.6556-1-elliott@hpe.com>
References: <20221012215931.3896-1-elliott@hpe.com>
 <20221103042740.6556-1-elliott@hpe.com>
List-Id: linux-crypto@vger.kernel.org

Use more consistent, unambiguous types for the source and destination
buffer pointer arguments.
Signed-off-by: Robert Elliott
---
 arch/x86/crypto/ghash-clmulni-intel_asm.S  | 4 ++--
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S
index 2bf871899920..c7b8542facee 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
+++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
@@ -88,7 +88,7 @@ SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble)
         RET
 SYM_FUNC_END(__clmul_gf128mul_ble)

-/* void clmul_ghash_mul(char *dst, const u128 *shash) */
+/* void clmul_ghash_mul(u8 *dst, const u128 *shash) */
 SYM_FUNC_START(clmul_ghash_mul)
         FRAME_BEGIN
         movups (%rdi), DATA
@@ -103,7 +103,7 @@ SYM_FUNC_START(clmul_ghash_mul)
 SYM_FUNC_END(clmul_ghash_mul)

 /*
- * void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
+ * void clmul_ghash_update(u8 *dst, const u8 *src, unsigned int srclen,
  *                         const u128 *shash);
  */
 SYM_FUNC_START(clmul_ghash_update)
diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index 1f1a95f3dd0c..e996627c6583 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -23,9 +23,9 @@
 #define GHASH_BLOCK_SIZE        16
 #define GHASH_DIGEST_SIZE       16

-void clmul_ghash_mul(char *dst, const u128 *shash);
+void clmul_ghash_mul(u8 *dst, const u128 *shash);

-void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
+void clmul_ghash_update(u8 *dst, const u8 *src, unsigned int srclen,
                         const u128 *shash);

 struct ghash_async_ctx {
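
The practical difference matters on x86, where plain char is signed, so a
byte with the high bit set compares as negative. A small userspace
illustration of the ambiguity (not part of the patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        char c = (char)0x80;    /* signed char on the x86 ABI: value -128 */
        uint8_t b = 0x80;       /* always 128, on every ABI */

        printf("as char: %d, as u8: %u\n", c, b);       /* -128 vs 128 */
        return 0;
}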
From patchwork Thu Nov 3 04:27:31 2022
From: Robert Elliott
Subject: [PATCH v3 08/17] crypto: x86/ghash - restructure FPU context saving
Date: Wed, 2 Nov 2022 23:27:31 -0500
Message-Id: <20221103042740.6556-9-elliott@hpe.com>

Wrap each of the calls to clmul_ghash_update and clmul_ghash_mul in its
own set of kernel_fpu_begin and kernel_fpu_end calls, preparing to limit
the amount of data processed by each _update call to avoid RCU stalls.

This is more like how polyval-clmulni_glue is structured.
Fixes: 0e1227d356e9 ("crypto: ghash - Add PCLMULQDQ accelerated implementation")
Suggested-by: Herbert Xu
Signed-off-by: Robert Elliott
---
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index e996627c6583..22367e363d72 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -80,7 +80,6 @@ static int ghash_update(struct shash_desc *desc,
         struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
         u8 *dst = dctx->buffer;

-        kernel_fpu_begin();
         if (dctx->bytes) {
                 int n = min(srclen, dctx->bytes);
                 u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
@@ -91,10 +90,14 @@ static int ghash_update(struct shash_desc *desc,
                 while (n--)
                         *pos++ ^= *src++;

-                if (!dctx->bytes)
+                if (!dctx->bytes) {
+                        kernel_fpu_begin();
                         clmul_ghash_mul(dst, &ctx->shash);
+                        kernel_fpu_end();
+                }
         }

+        kernel_fpu_begin();
         clmul_ghash_update(dst, src, srclen, &ctx->shash);
         kernel_fpu_end();
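
In outline, the change moves the plain-C buffer fixup out of the FPU-owned
region so that only SIMD code runs with preemption disabled. A sketch of
the before/after shape (fixup_partial_block() is a hypothetical stand-in
for the byte-XOR fixup loop above):

        /* before: one region spanning C fixup code and the asm calls */
        kernel_fpu_begin();
        fixup_partial_block(dctx);      /* plain C, no FPU needed */
        clmul_ghash_update(dst, src, srclen, &ctx->shash);
        kernel_fpu_end();

        /* after: each asm call gets its own region */
        fixup_partial_block(dctx);
        kernel_fpu_begin();
        clmul_ghash_update(dst, src, srclen, &ctx->shash);
        kernel_fpu_end();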
From patchwork Thu Nov 3 04:27:32 2022
From: Robert Elliott
Subject: [PATCH v3 09/17] crypto: x86/ghash - limit FPU preemption
Date: Wed, 2 Nov 2022 23:27:32 -0500
Message-Id: <20221103042740.6556-10-elliott@hpe.com>

Limit the number of bytes processed between kernel_fpu_begin() and
kernel_fpu_end() calls. Those functions call preempt_disable() and
preempt_enable(), so the CPU core is unavailable for scheduling while
they are in effect, leading to:

    rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...

Fixes: 0e1227d356e9 ("crypto: ghash - Add PCLMULQDQ accelerated implementation")
Suggested-by: Herbert Xu
Signed-off-by: Robert Elliott
---
v3 change to static int, simplify while loop
---
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 28 +++++++++++++++-------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index 22367e363d72..0f24c3b23fd2 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -20,8 +20,11 @@
 #include
 #include

-#define GHASH_BLOCK_SIZE        16
-#define GHASH_DIGEST_SIZE       16
+#define GHASH_BLOCK_SIZE        16U
+#define GHASH_DIGEST_SIZE       16U
+
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 50 * 1024;

 void clmul_ghash_mul(u8 *dst, const u128 *shash);

@@ -80,9 +83,11 @@ static int ghash_update(struct shash_desc *desc,
         struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
         u8 *dst = dctx->buffer;

+        BUILD_BUG_ON(bytes_per_fpu < GHASH_BLOCK_SIZE);
+
         if (dctx->bytes) {
                 int n = min(srclen, dctx->bytes);
-                u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
+                u8 *pos = dst + GHASH_BLOCK_SIZE - dctx->bytes;

                 dctx->bytes -= n;
                 srclen -= n;
@@ -97,13 +102,18 @@ static int ghash_update(struct shash_desc *desc,
                 }
         }

-        kernel_fpu_begin();
-        clmul_ghash_update(dst, src, srclen, &ctx->shash);
-        kernel_fpu_end();
+        while (srclen >= GHASH_BLOCK_SIZE) {
+                unsigned int chunk = min(srclen, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                clmul_ghash_update(dst, src, chunk, &ctx->shash);
+                kernel_fpu_end();
+
+                src += chunk & ~(GHASH_BLOCK_SIZE - 1);
+                srclen -= chunk & ~(GHASH_BLOCK_SIZE - 1);
+        }

-        if (srclen & 0xf) {
-                src += srclen - (srclen & 0xf);
-                srclen &= 0xf;
+        if (srclen) {
                 dctx->bytes = GHASH_BLOCK_SIZE - srclen;
                 while (srclen--)
                         *dst++ ^= *src++;
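
The chunk arithmetic deserves a note: chunk is rounded down to a whole
number of 16-byte GHASH blocks before the pointers advance, so a partial
trailing block falls through to the byte-buffering path above. A small
standalone check of that arithmetic (illustrative only):

#include <stdio.h>

int main(void)
{
        unsigned int block = 16;
        unsigned int srclen = 100;
        unsigned int chunk = srclen;            /* min(srclen, bytes_per_fpu) */
        unsigned int aligned = chunk & ~(block - 1);

        /* prints "process 96 bytes now, buffer 4 bytes" */
        printf("process %u bytes now, buffer %u bytes\n",
               aligned, srclen - aligned);
        return 0;
}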
From patchwork Thu Nov 3 04:27:33 2022
From: Robert Elliott
Subject: [PATCH v3 10/17] crypto: x86/*poly* - limit FPU preemption
Date: Wed, 2 Nov 2022 23:27:33 -0500
Message-Id: <20221103042740.6556-11-elliott@hpe.com>

Use a static const unsigned int for the limit on the number of bytes
processed between kernel_fpu_begin() and kernel_fpu_end() calls, rather
than the SZ_4K macro (which is a signed value) or a magic 4096U embedded
in the C code.

Use unsigned int rather than size_t for some of the arguments to avoid
type casts for the min() macro.

Signed-off-by: Robert Elliott
---
v3 use static int rather than macro,
   change to while loops rather than do/while loops
---
 arch/x86/crypto/nhpoly1305-avx2-glue.c | 11 +++++---
 arch/x86/crypto/nhpoly1305-sse2-glue.c | 11 +++++---
 arch/x86/crypto/poly1305_glue.c        | 37 +++++++++++++++++---------
 arch/x86/crypto/polyval-clmulni_glue.c |  8 ++++--
 4 files changed, 46 insertions(+), 21 deletions(-)

diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c
index 8ea5ab0f1ca7..f7dc9c563bb5 100644
--- a/arch/x86/crypto/nhpoly1305-avx2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c
@@ -13,6 +13,9 @@
 #include
 #include

+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 337 * 1024;
+
 asmlinkage void nh_avx2(const u32 *key, const u8 *message, size_t message_len,
                         u8 hash[NH_HASH_BYTES]);

@@ -26,18 +29,20 @@ static void _nh_avx2(const u32 *key, const u8 *message, size_t message_len,
 static int nhpoly1305_avx2_update(struct shash_desc *desc,
                                   const u8 *src, unsigned int srclen)
 {
+        BUILD_BUG_ON(bytes_per_fpu == 0);
+
         if (srclen < 64 || !crypto_simd_usable())
                 return crypto_nhpoly1305_update(desc, src, srclen);

-        do {
-                unsigned int n = min_t(unsigned int, srclen, SZ_4K);
+        while (srclen) {
+                unsigned int n = min(srclen, bytes_per_fpu);

                 kernel_fpu_begin();
                 crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
                 kernel_fpu_end();

                 src += n;
                 srclen -= n;
-        } while (srclen);
+        }

         return 0;
 }
diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c
index 2b353d42ed13..daffcc7019ad 100644
--- a/arch/x86/crypto/nhpoly1305-sse2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c
@@ -13,6 +13,9 @@
 #include
 #include

+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 199 * 1024;
+
 asmlinkage void nh_sse2(const u32 *key, const u8 *message, size_t message_len,
                         u8 hash[NH_HASH_BYTES]);

@@ -26,18 +29,20 @@ static void _nh_sse2(const u32 *key, const u8 *message, size_t message_len,
 static int nhpoly1305_sse2_update(struct shash_desc *desc,
                                   const u8 *src, unsigned int srclen)
 {
+        BUILD_BUG_ON(bytes_per_fpu == 0);
+
         if (srclen < 64 || !crypto_simd_usable())
                 return crypto_nhpoly1305_update(desc, src, srclen);

-        do {
-                unsigned int n = min_t(unsigned int, srclen, SZ_4K);
+        while (srclen) {
+                unsigned int n = min(srclen, bytes_per_fpu);

                 kernel_fpu_begin();
                 crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
                 kernel_fpu_end();

                 src += n;
                 srclen -= n;
-        } while (srclen);
+        }

         return 0;
 }
diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
index 1dfb8af48a3c..16831c036d71 100644
--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -15,20 +15,27 @@
 #include
 #include

+#define POLY1305_BLOCK_SIZE_MASK (~(POLY1305_BLOCK_SIZE - 1))
+
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 217 * 1024;
+
 asmlinkage void poly1305_init_x86_64(void *ctx,
                                      const u8 key[POLY1305_BLOCK_SIZE]);
 asmlinkage void poly1305_blocks_x86_64(void *ctx, const u8 *inp,
-                                       const size_t len, const u32 padbit);
+                                       const unsigned int len,
+                                       const u32 padbit);
 asmlinkage void poly1305_emit_x86_64(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
                                      const u32 nonce[4]);
 asmlinkage void poly1305_emit_avx(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
                                   const u32 nonce[4]);
-asmlinkage void poly1305_blocks_avx(void *ctx, const u8 *inp, const size_t len,
-                                    const u32 padbit);
-asmlinkage void poly1305_blocks_avx2(void *ctx, const u8 *inp, const size_t len,
-                                     const u32 padbit);
+asmlinkage void poly1305_blocks_avx(void *ctx, const u8 *inp,
+                                    const unsigned int len, const u32 padbit);
+asmlinkage void poly1305_blocks_avx2(void *ctx, const u8 *inp,
+                                     const unsigned int len, const u32 padbit);
 asmlinkage void poly1305_blocks_avx512(void *ctx, const u8 *inp,
-                                       const size_t len, const u32 padbit);
+                                       const unsigned int len,
+                                       const u32 padbit);

 static __ro_after_init DEFINE_STATIC_KEY_FALSE(poly1305_use_avx);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(poly1305_use_avx2);
@@ -86,14 +93,12 @@ static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_BLOCK_SIZE])
                 poly1305_init_x86_64(ctx, key);
 }

-static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
+static void poly1305_simd_blocks(void *ctx, const u8 *inp, unsigned int len,
                                  const u32 padbit)
 {
         struct poly1305_arch_internal *state = ctx;

-        /* SIMD disables preemption, so relax after processing each page. */
-        BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
-                     SZ_4K % POLY1305_BLOCK_SIZE);
+        BUILD_BUG_ON(bytes_per_fpu < POLY1305_BLOCK_SIZE);

         if (!static_branch_likely(&poly1305_use_avx) ||
             (len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
@@ -103,8 +108,14 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
                 return;
         }

-        do {
-                const size_t bytes = min_t(size_t, len, SZ_4K);
+        while (len) {
+                unsigned int bytes;
+
+                if (len < POLY1305_BLOCK_SIZE)
+                        bytes = len;
+                else
+                        bytes = min(len,
+                                    bytes_per_fpu & POLY1305_BLOCK_SIZE_MASK);

                 kernel_fpu_begin();
                 if (IS_ENABLED(CONFIG_AS_AVX512) &&
                     static_branch_likely(&poly1305_use_avx512))
@@ -117,7 +128,7 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,

                 len -= bytes;
                 inp += bytes;
-        } while (len);
+        }
 }

 static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
index b7664d018851..de1c908f7412 100644
--- a/arch/x86/crypto/polyval-clmulni_glue.c
+++ b/arch/x86/crypto/polyval-clmulni_glue.c
@@ -29,6 +29,9 @@

 #define NUM_KEY_POWERS 8

+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 393 * 1024;
+
 struct polyval_tfm_ctx {
         /*
          * These powers must be in the order h^8, ..., h^1.
@@ -107,6 +110,8 @@ static int polyval_x86_update(struct shash_desc *desc,
         unsigned int nblocks;
         unsigned int n;

+        BUILD_BUG_ON(bytes_per_fpu < POLYVAL_BLOCK_SIZE);
+
         if (dctx->bytes) {
                 n = min(srclen, dctx->bytes);
                 pos = dctx->buffer + POLYVAL_BLOCK_SIZE - dctx->bytes;
@@ -123,8 +128,7 @@ static int polyval_x86_update(struct shash_desc *desc,
         }

         while (srclen >= POLYVAL_BLOCK_SIZE) {
-                /* Allow rescheduling every 4K bytes. */
-                nblocks = min(srclen, 4096U) / POLYVAL_BLOCK_SIZE;
+                nblocks = min(srclen, bytes_per_fpu) / POLYVAL_BLOCK_SIZE;
                 internal_polyval_update(tctx, src, nblocks, dctx->buffer);
                 srclen -= nblocks * POLYVAL_BLOCK_SIZE;
                 src += nblocks * POLYVAL_BLOCK_SIZE;
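
The min()/min_t() point in the commit message can be seen in a small
sketch (next_chunk() and next_chunk_sized() are hypothetical helpers, not
from the patch):

#include <linux/minmax.h>
#include <linux/types.h>

static const unsigned int bytes_per_fpu = 4 * 1024;

static unsigned int next_chunk(unsigned int srclen)
{
        /* both operands are unsigned int, so plain min() compiles ... */
        return min(srclen, bytes_per_fpu);
}

static unsigned int next_chunk_sized(size_t srclen)
{
        /* ... whereas a size_t operand needs min_t(), which hides a cast */
        return min_t(unsigned int, srclen, bytes_per_fpu);
}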
From patchwork Thu Nov 3 04:27:34 2022
From: Robert Elliott
Subject: [PATCH v3 11/17] crypto: x86/sha - register all variations
Date: Wed, 2 Nov 2022 23:27:34 -0500
Message-Id: <20221103042740.6556-12-elliott@hpe.com>

Don't register and unregister each of the implementations from least-
to most-optimized (e.g., SSSE3, then AVX, then AVX2); register all
variations the CPU supports. This makes the other algorithms selectable
when needed, such as for testing with:

    modprobe tcrypt mode=300 alg=sha512-avx
    modprobe tcrypt mode=400 alg=sha512-avx

Suggested-by: Tim Chen
Suggested-by: Herbert Xu
Signed-off-by: Robert Elliott
---
v3 register all the variations, not just the best one, per Herbert's
   feedback. return -ENODEV if none are successful, 0 if any are
   successful
---
 arch/x86/crypto/sha1_ssse3_glue.c   | 135 +++++++++++++------------
 arch/x86/crypto/sha256_ssse3_glue.c | 146 ++++++++++++++--------------
 arch/x86/crypto/sha512_ssse3_glue.c | 108 ++++++++++----------
 3 files changed, 193 insertions(+), 196 deletions(-)

diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 4bc77c84b0fb..89aa5b787f2f 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -128,17 +128,17 @@ static struct shash_alg sha1_ssse3_alg = {
         }
 };

-static int register_sha1_ssse3(void)
-{
-        if (boot_cpu_has(X86_FEATURE_SSSE3))
-                return crypto_register_shash(&sha1_ssse3_alg);
-        return 0;
-}
+static bool sha1_ssse3_registered;
+static bool sha1_avx_registered;
+static bool sha1_avx2_registered;
+static bool sha1_ni_registered;

 static void unregister_sha1_ssse3(void)
 {
-        if (boot_cpu_has(X86_FEATURE_SSSE3))
+        if (sha1_ssse3_registered) {
                 crypto_unregister_shash(&sha1_ssse3_alg);
+                sha1_ssse3_registered = 0;
+        }
 }

 asmlinkage void sha1_transform_avx(struct sha1_state *state,
@@ -179,28 +179,12 @@ static struct shash_alg sha1_avx_alg = {
         }
 };

-static bool avx_usable(void)
-{
-        if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
-                if (boot_cpu_has(X86_FEATURE_AVX))
-                        pr_info("AVX detected but unusable.\n");
-                return false;
-        }
-
-        return true;
-}
-
-static int register_sha1_avx(void)
-{
-        if (avx_usable())
-                return crypto_register_shash(&sha1_avx_alg);
-        return 0;
-}
-
 static void unregister_sha1_avx(void)
 {
-        if (avx_usable())
+        if (sha1_avx_registered) {
                 crypto_unregister_shash(&sha1_avx_alg);
+                sha1_avx_registered = 0;
+        }
 }

 #define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */
@@ -208,16 +192,6 @@ static void unregister_sha1_avx(void)
 asmlinkage void sha1_transform_avx2(struct sha1_state *state,
                                     const u8 *data, int blocks);

-static bool avx2_usable(void)
-{
-        if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
-            && boot_cpu_has(X86_FEATURE_BMI1)
-            && boot_cpu_has(X86_FEATURE_BMI2))
-                return true;
-
-        return false;
-}
-
 static void sha1_apply_transform_avx2(struct sha1_state *state,
                                       const u8 *data, int blocks)
 {
@@ -263,17 +237,12 @@ static struct shash_alg sha1_avx2_alg = {
         }
 };

-static int register_sha1_avx2(void)
-{
-        if (avx2_usable())
-                return crypto_register_shash(&sha1_avx2_alg);
-        return 0;
-}
-
 static void unregister_sha1_avx2(void)
 {
-        if (avx2_usable())
+        if (sha1_avx2_registered) {
                 crypto_unregister_shash(&sha1_avx2_alg);
+                sha1_avx2_registered = 0;
+        }
 }

 #ifdef CONFIG_AS_SHA1_NI
@@ -315,49 +284,77 @@ static struct shash_alg sha1_ni_alg = {
         }
 };

-static int register_sha1_ni(void)
-{
-        if (boot_cpu_has(X86_FEATURE_SHA_NI))
-                return crypto_register_shash(&sha1_ni_alg);
-        return 0;
-}
-
 static void unregister_sha1_ni(void)
 {
-        if (boot_cpu_has(X86_FEATURE_SHA_NI))
+        if (sha1_ni_registered) {
                 crypto_unregister_shash(&sha1_ni_alg);
+                sha1_ni_registered = 0;
+        }
 }

 #else
-static inline int register_sha1_ni(void) { return 0; }
 static inline void unregister_sha1_ni(void) { }
 #endif

 static int __init sha1_ssse3_mod_init(void)
 {
-        if (register_sha1_ssse3())
-                goto fail;
+        const char *feature_name;
+        const char *driver_name = NULL;
+        int ret;
+
+#ifdef CONFIG_AS_SHA1_NI
+        /* SHA-NI */
+        if (boot_cpu_has(X86_FEATURE_SHA_NI)) {

-        if (register_sha1_avx()) {
-                unregister_sha1_ssse3();
-                goto fail;
+                ret = crypto_register_shash(&sha1_ni_alg);
+                if (!ret)
+                        sha1_ni_registered = 1;
+        }
+#endif
+
+        /* AVX2 */
+        if (boot_cpu_has(X86_FEATURE_AVX2)) {
+
+                if (boot_cpu_has(X86_FEATURE_BMI1) &&
+                    boot_cpu_has(X86_FEATURE_BMI2)) {
+
+                        ret = crypto_register_shash(&sha1_avx2_alg);
+                        if (!ret) {
+                                sha1_avx2_registered = 1;
+                                driver_name = sha1_avx2_alg.base.cra_driver_name;
+                        }
+                }
         }

-        if (register_sha1_avx2()) {
-                unregister_sha1_avx();
-                unregister_sha1_ssse3();
-                goto fail;
+        /* AVX */
+        if (boot_cpu_has(X86_FEATURE_AVX)) {
+
+                if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+                                      &feature_name)) {
+
+                        ret = crypto_register_shash(&sha1_avx_alg);
+                        if (!ret) {
+                                sha1_avx_registered = 1;
+                                driver_name = sha1_avx_alg.base.cra_driver_name;
+                        }
+                }
         }

-        if (register_sha1_ni()) {
-                unregister_sha1_avx2();
-                unregister_sha1_avx();
-                unregister_sha1_ssse3();
-                goto fail;
+        /* SSE3 */
+        if (boot_cpu_has(X86_FEATURE_SSSE3)) {
+                ret = crypto_register_shash(&sha1_ssse3_alg);
+                if (!ret) {
+                        sha1_ssse3_registered = 1;
+                        driver_name = sha1_ssse3_alg.base.cra_driver_name;
+                }
         }

-        return 0;
-fail:
+#ifdef CONFIG_AS_SHA1_NI
+        if (sha1_ni_registered)
+                return 0;
+#endif
+        if (sha1_avx2_registered || sha1_avx_registered || sha1_ssse3_registered)
+                return 0;
         return -ENODEV;
 }
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index cdcdf5a80ffe..de320973e473 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -156,19 +156,18 @@ static struct shash_alg sha256_ssse3_algs[] = { {
         }
 } };

-static int register_sha256_ssse3(void)
-{
-        if (boot_cpu_has(X86_FEATURE_SSSE3))
-                return crypto_register_shashes(sha256_ssse3_algs,
-                        ARRAY_SIZE(sha256_ssse3_algs));
-        return 0;
-}
+static bool sha256_ssse3_registered;
+static bool sha256_avx_registered;
+static bool sha256_avx2_registered;
+static bool sha256_ni_registered;

 static void unregister_sha256_ssse3(void)
 {
-        if (boot_cpu_has(X86_FEATURE_SSSE3))
+        if (sha256_ssse3_registered) {
                 crypto_unregister_shashes(sha256_ssse3_algs,
                         ARRAY_SIZE(sha256_ssse3_algs));
+                sha256_ssse3_registered = 0;
+        }
 }

 asmlinkage void sha256_transform_avx(struct sha256_state *state,
@@ -223,30 +222,13 @@ static struct shash_alg sha256_avx_algs[] = { {
         }
 } };

-static bool avx_usable(void)
-{
-        if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
-                if (boot_cpu_has(X86_FEATURE_AVX))
-                        pr_info("AVX detected but unusable.\n");
-                return false;
-        }
-
-        return true;
-}
-
-static int register_sha256_avx(void)
-{
-        if (avx_usable())
-                return crypto_register_shashes(sha256_avx_algs,
-                        ARRAY_SIZE(sha256_avx_algs));
-        return 0;
-}
-
 static void unregister_sha256_avx(void)
 {
-        if (avx_usable())
+        if (sha256_avx_registered) {
                 crypto_unregister_shashes(sha256_avx_algs,
                         ARRAY_SIZE(sha256_avx_algs));
+                sha256_avx_registered = 0;
+        }
 }

 asmlinkage void sha256_transform_rorx(struct sha256_state *state,
@@ -301,28 +283,13 @@ static struct shash_alg sha256_avx2_algs[] = { {
         }
 } };

-static bool avx2_usable(void)
-{
-        if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) &&
-            boot_cpu_has(X86_FEATURE_BMI2))
-                return true;
-
-        return false;
-}
-
-static int register_sha256_avx2(void)
-{
-        if (avx2_usable())
-                return crypto_register_shashes(sha256_avx2_algs,
-                        ARRAY_SIZE(sha256_avx2_algs));
-        return 0;
-}
-
 static void unregister_sha256_avx2(void)
 {
-        if (avx2_usable())
+        if (sha256_avx2_registered) {
                 crypto_unregister_shashes(sha256_avx2_algs,
                         ARRAY_SIZE(sha256_avx2_algs));
+                sha256_avx2_registered = 0;
+        }
 }

 #ifdef CONFIG_AS_SHA256_NI
@@ -378,51 +345,86 @@ static struct shash_alg sha256_ni_algs[] = { {
         }
 } };

-static int register_sha256_ni(void)
-{
-        if (boot_cpu_has(X86_FEATURE_SHA_NI))
-                return crypto_register_shashes(sha256_ni_algs,
-                        ARRAY_SIZE(sha256_ni_algs));
-        return 0;
-}
-
 static void unregister_sha256_ni(void)
 {
-        if (boot_cpu_has(X86_FEATURE_SHA_NI))
+        if (sha256_ni_registered) {
                 crypto_unregister_shashes(sha256_ni_algs,
                         ARRAY_SIZE(sha256_ni_algs));
+                sha256_ni_registered = 0;
+        }
 }

 #else
-static inline int register_sha256_ni(void) { return 0; }
 static inline void unregister_sha256_ni(void) { }
 #endif

 static int __init sha256_ssse3_mod_init(void)
 {
-        if (register_sha256_ssse3())
-                goto fail;
+        const char *feature_name;
+        const char *driver_name = NULL;
+        const char *driver_name2 = NULL;
+        int ret;

-        if (register_sha256_avx()) {
-                unregister_sha256_ssse3();
-                goto fail;
+#ifdef CONFIG_AS_SHA256_NI
+        /* SHA-NI */
+        if (boot_cpu_has(X86_FEATURE_SHA_NI)) {
+
+                ret = crypto_register_shashes(sha256_ni_algs,
+                                              ARRAY_SIZE(sha256_ni_algs));
+                if (!ret) {
+                        sha256_ni_registered = 1;
+                        driver_name = sha256_ni_algs[0].base.cra_driver_name;
+                        driver_name2 = sha256_ni_algs[1].base.cra_driver_name;
+                }
         }
+#endif

-        if (register_sha256_avx2()) {
-                unregister_sha256_avx();
-                unregister_sha256_ssse3();
-                goto fail;
+        /* AVX2 */
+        if (boot_cpu_has(X86_FEATURE_AVX2)) {
+
+                if (boot_cpu_has(X86_FEATURE_BMI2)) {
+                        ret = crypto_register_shashes(sha256_avx2_algs,
+                                                      ARRAY_SIZE(sha256_avx2_algs));
+                        if (!ret) {
+                                sha256_avx2_registered = 1;
+                                driver_name = sha256_avx2_algs[0].base.cra_driver_name;
+                                driver_name2 = sha256_avx2_algs[1].base.cra_driver_name;
+                        }
+                }
         }

-        if (register_sha256_ni()) {
-                unregister_sha256_avx2();
-                unregister_sha256_avx();
-                unregister_sha256_ssse3();
-                goto fail;
+        /* AVX */
+        if (boot_cpu_has(X86_FEATURE_AVX)) {
+
+                if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+                                      &feature_name)) {
+                        ret = crypto_register_shashes(sha256_avx_algs,
+                                                      ARRAY_SIZE(sha256_avx_algs));
+                        if (!ret) {
+                                sha256_avx_registered = 1;
+                                driver_name =
+                                        sha256_avx_algs[0].base.cra_driver_name;
+                                driver_name2 =
+                                        sha256_avx_algs[1].base.cra_driver_name;
+                        }
+                }
         }

-        return 0;
-fail:
+        /* SSE3 */
+        if (boot_cpu_has(X86_FEATURE_SSSE3)) {
+                ret = crypto_register_shashes(sha256_ssse3_algs,
+                                              ARRAY_SIZE(sha256_ssse3_algs));
+                if (!ret) {
+                        sha256_ssse3_registered = 1;
+                        driver_name = sha256_ssse3_algs[0].base.cra_driver_name;
+                        driver_name2 = sha256_ssse3_algs[1].base.cra_driver_name;
+                }
+        }
+
+#ifdef CONFIG_AS_SHA256_NI
+        if (sha256_ni_registered)
+                return 0;
+#endif
+        if (sha256_avx2_registered || sha256_avx_registered || sha256_ssse3_registered)
+                return 0;
         return -ENODEV;
 }
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index c7036cfe2a7e..3e96fe51f1a0 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -152,33 +152,21 @@ static struct shash_alg sha512_ssse3_algs[] = { {
         }
 } };

-static int register_sha512_ssse3(void)
-{
-        if (boot_cpu_has(X86_FEATURE_SSSE3))
-                return crypto_register_shashes(sha512_ssse3_algs,
-                        ARRAY_SIZE(sha512_ssse3_algs));
-        return 0;
-}
+static bool sha512_ssse3_registered;
+static bool sha512_avx_registered;
+static bool sha512_avx2_registered;

 static void unregister_sha512_ssse3(void)
 {
-        if (boot_cpu_has(X86_FEATURE_SSSE3))
+        if (sha512_ssse3_registered) {
                 crypto_unregister_shashes(sha512_ssse3_algs,
                         ARRAY_SIZE(sha512_ssse3_algs));
+                sha512_ssse3_registered = 0;
+        }
 }

 asmlinkage void sha512_transform_avx(struct sha512_state *state,
                                      const u8 *data, int blocks);
-static bool avx_usable(void)
-{
-        if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
-                if (boot_cpu_has(X86_FEATURE_AVX))
-                        pr_info("AVX detected but unusable.\n");
-                return false;
-        }
-
-        return true;
-}

 static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
@@ -230,19 +218,13 @@ static struct shash_alg sha512_avx_algs[] = { {
         }
 } };

-static int register_sha512_avx(void)
-{
-        if (avx_usable())
-                return crypto_register_shashes(sha512_avx_algs,
-                        ARRAY_SIZE(sha512_avx_algs));
-        return 0;
-}
-
 static void unregister_sha512_avx(void)
 {
-        if (avx_usable())
+        if (sha512_avx_registered) {
                 crypto_unregister_shashes(sha512_avx_algs,
                         ARRAY_SIZE(sha512_avx_algs));
+                sha512_avx_registered = 0;
+        }
 }

 asmlinkage void sha512_transform_rorx(struct sha512_state *state,
@@ -298,22 +280,6 @@ static struct shash_alg sha512_avx2_algs[] = { {
         }
 } };

-static bool avx2_usable(void)
-{
-        if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) &&
-            boot_cpu_has(X86_FEATURE_BMI2))
-                return true;
-
-        return false;
-}
-
-static int register_sha512_avx2(void)
-{
-        if (avx2_usable())
-                return crypto_register_shashes(sha512_avx2_algs,
-                        ARRAY_SIZE(sha512_avx2_algs));
-        return 0;
-}
 static const struct x86_cpu_id module_cpu_ids[] = {
         X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
         X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
@@ -324,32 +290,64 @@ MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

 static void unregister_sha512_avx2(void)
 {
-        if (avx2_usable())
+        if (sha512_avx2_registered) {
                 crypto_unregister_shashes(sha512_avx2_algs,
                         ARRAY_SIZE(sha512_avx2_algs));
+                sha512_avx2_registered = 0;
+        }
 }

 static int __init sha512_ssse3_mod_init(void)
 {
+        const char *feature_name;
+        const char *driver_name = NULL;
+        const char *driver_name2 = NULL;
+        int ret;
+
         if (!x86_match_cpu(module_cpu_ids))
                 return -ENODEV;

-        if (register_sha512_ssse3())
-                goto fail;
+        /* AVX2 */
+        if (boot_cpu_has(X86_FEATURE_AVX2)) {
+                if (boot_cpu_has(X86_FEATURE_BMI2)) {
+                        ret =
+                        crypto_register_shashes(sha512_avx2_algs,
+                                                ARRAY_SIZE(sha512_avx2_algs));
+                        if (!ret) {
+                                sha512_avx2_registered = 1;
+                                driver_name = sha512_avx2_algs[0].base.cra_driver_name;
+                                driver_name2 = sha512_avx2_algs[1].base.cra_driver_name;
+                        }
+                }
         }

-        if (register_sha512_avx()) {
-                unregister_sha512_ssse3();
-                goto fail;
+        /* AVX */
+        if (boot_cpu_has(X86_FEATURE_AVX)) {
+
+                if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+                                      &feature_name)) {
+                        ret = crypto_register_shashes(sha512_avx_algs,
+                                                      ARRAY_SIZE(sha512_avx_algs));
+                        if (!ret) {
+                                sha512_avx_registered = 1;
+                                driver_name = sha512_avx_algs[0].base.cra_driver_name;
+                                driver_name2 = sha512_avx_algs[1].base.cra_driver_name;
+                        }
+                }
         }

-        if (register_sha512_avx2()) {
-                unregister_sha512_avx();
-                unregister_sha512_ssse3();
-                goto fail;
+        /* SSE3 */
+        if (boot_cpu_has(X86_FEATURE_SSSE3)) {
+                ret = crypto_register_shashes(sha512_ssse3_algs,
+                                              ARRAY_SIZE(sha512_ssse3_algs));
+                if (!ret) {
+                        sha512_ssse3_registered = 1;
+                        driver_name = sha512_ssse3_algs[0].base.cra_driver_name;
+                        driver_name2 = sha512_ssse3_algs[1].base.cra_driver_name;
+                }
         }

-        return 0;
-fail:
+        if (sha512_avx2_registered || sha512_avx_registered || sha512_ssse3_registered)
+                return 0;
         return -ENODEV;
 }
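
Condensed to its essentials, the new registration policy shared by all
three glue files looks like this sketch (example_mod_init(), the
cpu_has_*() predicates, and the *_alg objects are hypothetical stand-ins):

static bool fast_registered, slow_registered;

static int __init example_mod_init(void)
{
        if (cpu_has_fast_feature())
                fast_registered = !crypto_register_shash(&fast_alg);

        if (cpu_has_slow_feature())
                slow_registered = !crypto_register_shash(&slow_alg);

        /* one failure no longer unregisters the others; succeed if
         * at least one variation made it in */
        return (fast_registered || slow_registered) ? 0 : -ENODEV;
}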
From patchwork Thu Nov 3 04:27:35 2022
From: Robert Elliott
Subject: [PATCH v3 12/17] crypto: x86/sha - minimize time in FPU context
Date: Wed, 2 Nov 2022 23:27:35 -0500
Message-Id: <20221103042740.6556-13-elliott@hpe.com>

Narrow the kernel_fpu_begin()/kernel_fpu_end() sections to wrap only the
assembly functions, not the surrounding C code (which includes several
memcpy() calls). This reduces unnecessary time in FPU context, during
which the scheduler is prevented from preempting and the RCU subsystem
is kept from doing its work.

Example results measuring a boot, in which SHA-512 is used to check all
module signatures using finup() calls:

Before:
     calls  maxcycles      bpf   update    finup  algorithm    module
  ========  =========  =======  =======  =======  ===========  ============
    168390    1233188    19456        0    19456  sha512-avx2  sha512_ssse3

After:
    182694    1007224    19456        0    19456  sha512-avx2  sha512_ssse3

That means it stayed in FPU context for 226k fewer clock cycles (102
microseconds on this system, an 18% reduction).
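
The mechanism is uniform across the three files: each asmlinkage
transform gains a thin C wrapper so that kernel_fpu_begin() and
kernel_fpu_end() enclose only the assembly call, not the sha*_base
padding and memcpy() logic. A representative sketch (example_transform
is a hypothetical stand-in for one of the real transforms):

asmlinkage void example_transform(struct sha512_state *state,
                                  const u8 *data, int blocks);

static void fpu_example_transform(struct sha512_state *state,
                                  const u8 *data, int blocks)
{
        /* own the FPU only for the duration of the assembly routine */
        kernel_fpu_begin();
        example_transform(state, data, blocks);
        kernel_fpu_end();
}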
Signed-off-by: Robert Elliott
---
 arch/x86/crypto/sha1_ssse3_glue.c   | 82 ++++++++++++++++++++---------
 arch/x86/crypto/sha256_ssse3_glue.c | 67 ++++++++++++++++++-----
 arch/x86/crypto/sha512_ssse3_glue.c | 48 ++++++++++++-----
 3 files changed, 145 insertions(+), 52 deletions(-)

diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 89aa5b787f2f..cd390083451f 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -34,6 +34,54 @@ static const unsigned int bytes_per_fpu_avx2 = 34 * 1024;
 static const unsigned int bytes_per_fpu_avx = 30 * 1024;
 static const unsigned int bytes_per_fpu_ssse3 = 26 * 1024;

+asmlinkage void sha1_transform_ssse3(struct sha1_state *state,
+                                     const u8 *data, int blocks);
+
+asmlinkage void sha1_transform_avx(struct sha1_state *state,
+                                   const u8 *data, int blocks);
+
+asmlinkage void sha1_transform_avx2(struct sha1_state *state,
+                                    const u8 *data, int blocks);
+
+#ifdef CONFIG_AS_SHA1_NI
+asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data,
+                                  int rounds);
+#endif
+
+static void fpu_sha1_transform_ssse3(struct sha1_state *state,
+                                     const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha1_transform_ssse3(state, data, blocks);
+        kernel_fpu_end();
+}
+
+static void fpu_sha1_transform_avx(struct sha1_state *state,
+                                   const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha1_transform_avx(state, data, blocks);
+        kernel_fpu_end();
+}
+
+static void fpu_sha1_transform_avx2(struct sha1_state *state,
+                                    const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha1_transform_avx2(state, data, blocks);
+        kernel_fpu_end();
+}
+
+#ifdef CONFIG_AS_SHA1_NI
+static void fpu_sha1_transform_shani(struct sha1_state *state,
+                                     const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha1_ni_transform(state, data, blocks);
+        kernel_fpu_end();
+}
+#endif
+
 static int sha1_update(struct shash_desc *desc, const u8 *data,
                        unsigned int len, unsigned int bytes_per_fpu,
                        sha1_block_fn *sha1_xform)
@@ -53,9 +101,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
         while (len) {
                 unsigned int chunk = min(len, bytes_per_fpu);

-                kernel_fpu_begin();
                 sha1_base_do_update(desc, data, chunk, sha1_xform);
-                kernel_fpu_end();

                 len -= chunk;
                 data += chunk;
@@ -74,36 +120,29 @@ static int sha1_finup(struct shash_desc *desc, const u8 *data,
         while (len) {
                 unsigned int chunk = min(len, bytes_per_fpu);

-                kernel_fpu_begin();
                 sha1_base_do_update(desc, data, chunk, sha1_xform);
-                kernel_fpu_end();

                 len -= chunk;
                 data += chunk;
         }

-        kernel_fpu_begin();
         sha1_base_do_finalize(desc, sha1_xform);
-        kernel_fpu_end();

         return sha1_base_finish(desc, out);
 }

-asmlinkage void sha1_transform_ssse3(struct sha1_state *state,
-                                     const u8 *data, int blocks);
-
 static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
 {
         return sha1_update(desc, data, len, bytes_per_fpu_ssse3,
-                           sha1_transform_ssse3);
+                           fpu_sha1_transform_ssse3);
 }

 static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data,
                             unsigned int len, u8 *out)
 {
         return sha1_finup(desc, data, len, bytes_per_fpu_ssse3, out,
-                          sha1_transform_ssse3);
+                          fpu_sha1_transform_ssse3);
 }

 /* Add padding and return the message digest. */
@@ -141,21 +180,18 @@ static void unregister_sha1_ssse3(void)
         }
 }

-asmlinkage void sha1_transform_avx(struct sha1_state *state,
-                                   const u8 *data, int blocks);
-
 static int sha1_avx_update(struct shash_desc *desc, const u8 *data,
                            unsigned int len)
 {
         return sha1_update(desc, data, len, bytes_per_fpu_avx,
-                           sha1_transform_avx);
+                           fpu_sha1_transform_avx);
 }

 static int sha1_avx_finup(struct shash_desc *desc, const u8 *data,
                           unsigned int len, u8 *out)
 {
         return sha1_finup(desc, data, len, bytes_per_fpu_avx, out,
-                          sha1_transform_avx);
+                          fpu_sha1_transform_avx);
 }

 static int sha1_avx_final(struct shash_desc *desc, u8 *out)
@@ -189,17 +225,14 @@ static void unregister_sha1_avx(void)

 #define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */

-asmlinkage void sha1_transform_avx2(struct sha1_state *state,
-                                    const u8 *data, int blocks);
-
 static void sha1_apply_transform_avx2(struct sha1_state *state,
                                       const u8 *data, int blocks)
 {
         /* Select the optimal transform based on data block size */
         if (blocks >= SHA1_AVX2_BLOCK_OPTSIZE)
-                sha1_transform_avx2(state, data, blocks);
+                fpu_sha1_transform_avx2(state, data, blocks);
         else
-                sha1_transform_avx(state, data, blocks);
+                fpu_sha1_transform_avx(state, data, blocks);
 }

 static int sha1_avx2_update(struct shash_desc *desc, const u8 *data,
@@ -246,21 +279,18 @@ static void unregister_sha1_avx2(void)
 }

 #ifdef CONFIG_AS_SHA1_NI
-asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data,
-                                  int rounds);
-
 static int sha1_ni_update(struct shash_desc *desc, const u8 *data,
                           unsigned int len)
 {
         return sha1_update(desc, data, len, bytes_per_fpu_shani,
-                           sha1_ni_transform);
+                           fpu_sha1_transform_shani);
 }

 static int sha1_ni_finup(struct shash_desc *desc, const u8 *data,
                          unsigned int len, u8 *out)
 {
         return sha1_finup(desc, data, len, bytes_per_fpu_shani, out,
-                          sha1_ni_transform);
+                          fpu_sha1_transform_shani);
 }

 static int sha1_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index de320973e473..692d6f010a4d 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -51,6 +51,51 @@ static const unsigned int bytes_per_fpu_ssse3 = 11 * 1024;
 asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
                                        const u8 *data, int blocks);

+asmlinkage void sha256_transform_avx(struct sha256_state *state,
+                                     const u8 *data, int blocks);
+
+asmlinkage void sha256_transform_rorx(struct sha256_state *state,
+                                      const u8 *data, int blocks);
+
+#ifdef CONFIG_AS_SHA256_NI
+asmlinkage void sha256_ni_transform(struct sha256_state *digest,
+                                    const u8 *data, int rounds);
+#endif
+
+static void fpu_sha256_transform_ssse3(struct sha256_state *state,
+                                       const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha256_transform_ssse3(state, data, blocks);
+        kernel_fpu_end();
+}
+
+static void fpu_sha256_transform_avx(struct sha256_state *state,
+                                     const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha256_transform_avx(state, data, blocks);
+        kernel_fpu_end();
+}
+
+static void fpu_sha256_transform_avx2(struct sha256_state *state,
+                                      const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha256_transform_rorx(state, data, blocks);
+        kernel_fpu_end();
+}
+
+#ifdef CONFIG_AS_SHA256_NI
+static void fpu_sha256_transform_shani(struct sha256_state *state,
+                                       const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha256_ni_transform(state, data, blocks);
+        kernel_fpu_end();
+}
+#endif
+
 static int _sha256_update(struct shash_desc *desc, const u8 *data,
                           unsigned int len, unsigned int bytes_per_fpu,
                           sha256_block_fn *sha256_xform)
@@ -70,9 +115,7 @@ static int _sha256_update(struct shash_desc *desc, const u8 *data,
         while (len) {
                 unsigned int chunk = min(len, bytes_per_fpu);

-                kernel_fpu_begin();
                 sha256_base_do_update(desc, data, chunk, sha256_xform);
-                kernel_fpu_end();

                 len -= chunk;
                 data += chunk;
@@ -91,17 +134,13 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data,
         while (len) {
                 unsigned int chunk = min(len, bytes_per_fpu);

-                kernel_fpu_begin();
                 sha256_base_do_update(desc, data, chunk, sha256_xform);
-                kernel_fpu_end();

                 len -= chunk;
                 data += chunk;
         }

-        kernel_fpu_begin();
         sha256_base_do_finalize(desc, sha256_xform);
-        kernel_fpu_end();

         return sha256_base_finish(desc, out);
 }
@@ -110,14 +149,14 @@ static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
                                unsigned int len)
 {
         return _sha256_update(desc, data, len, bytes_per_fpu_ssse3,
-                              sha256_transform_ssse3);
+                              fpu_sha256_transform_ssse3);
 }

 static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data,
                               unsigned int len, u8 *out)
 {
         return sha256_finup(desc, data, len, bytes_per_fpu_ssse3,
-                            out, sha256_transform_ssse3);
+                            out, fpu_sha256_transform_ssse3);
 }

 /* Add padding and return the message digest. */
@@ -177,14 +216,14 @@ static int sha256_avx_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
 {
         return _sha256_update(desc, data, len, bytes_per_fpu_avx,
-                              sha256_transform_avx);
+                              fpu_sha256_transform_avx);
 }

 static int sha256_avx_finup(struct shash_desc *desc, const u8 *data,
                             unsigned int len, u8 *out)
 {
         return sha256_finup(desc, data, len, bytes_per_fpu_avx,
-                            out, sha256_transform_avx);
+                            out, fpu_sha256_transform_avx);
 }

 static int sha256_avx_final(struct shash_desc *desc, u8 *out)
@@ -238,14 +277,14 @@ static int sha256_avx2_update(struct shash_desc *desc, const u8 *data,
                               unsigned int len)
 {
         return _sha256_update(desc, data, len, bytes_per_fpu_avx2,
-                              sha256_transform_rorx);
+                              fpu_sha256_transform_avx2);
 }

 static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data,
                              unsigned int len, u8 *out)
 {
         return sha256_finup(desc, data, len, bytes_per_fpu_avx2,
-                            out, sha256_transform_rorx);
+                            out, fpu_sha256_transform_avx2);
 }

 static int sha256_avx2_final(struct shash_desc *desc, u8 *out)
@@ -300,14 +339,14 @@ static int sha256_ni_update(struct shash_desc *desc, const u8 *data,
                             unsigned int len)
 {
         return _sha256_update(desc, data, len, bytes_per_fpu_shani,
-                              sha256_ni_transform);
+                              fpu_sha256_transform_shani);
 }

 static int sha256_ni_finup(struct shash_desc *desc, const u8 *data,
                            unsigned int len, u8 *out)
 {
         return sha256_finup(desc, data, len, bytes_per_fpu_shani,
-                            out, sha256_ni_transform);
+                            out, fpu_sha256_transform_shani);
 }

 static int sha256_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 3e96fe51f1a0..e2698545bf47 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -47,6 +47,36 @@ static const unsigned int bytes_per_fpu_ssse3 = 17 * 1024;
 asmlinkage void sha512_transform_ssse3(struct sha512_state *state,
                                        const u8 *data, int blocks);

+asmlinkage void sha512_transform_avx(struct sha512_state *state,
+                                     const u8 *data, int blocks);
+
+asmlinkage void sha512_transform_rorx(struct sha512_state *state,
+                                      const u8 *data, int blocks);
+
+static void fpu_sha512_transform_ssse3(struct sha512_state *state,
+                                       const u8 *data, int blocks)
+{
+        kernel_fpu_begin();
+        sha512_transform_ssse3(state, data, blocks);
+	kernel_fpu_end();
+}
+
+static void fpu_sha512_transform_avx(struct sha512_state *state,
+				     const u8 *data, int blocks)
+{
+	kernel_fpu_begin();
+	sha512_transform_avx(state, data, blocks);
+	kernel_fpu_end();
+}
+
+static void fpu_sha512_transform_avx2(struct sha512_state *state,
+				      const u8 *data, int blocks)
+{
+	kernel_fpu_begin();
+	sha512_transform_rorx(state, data, blocks);
+	kernel_fpu_end();
+}
+
 static int sha512_update(struct shash_desc *desc, const u8 *data,
 			 unsigned int len, unsigned int bytes_per_fpu,
 			 sha512_block_fn *sha512_xform)
@@ -66,9 +96,7 @@ static int sha512_update(struct shash_desc *desc, const u8 *data,
 	while (len) {
 		unsigned int chunk = min(len, bytes_per_fpu);
 
-		kernel_fpu_begin();
 		sha512_base_do_update(desc, data, chunk, sha512_xform);
-		kernel_fpu_end();
 
 		len -= chunk;
 		data += chunk;
@@ -87,17 +115,13 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data,
 	while (len) {
 		unsigned int chunk = min(len, bytes_per_fpu);
 
-		kernel_fpu_begin();
 		sha512_base_do_update(desc, data, chunk, sha512_xform);
-		kernel_fpu_end();
 
 		len -= chunk;
 		data += chunk;
 	}
 
-	kernel_fpu_begin();
 	sha512_base_do_finalize(desc, sha512_xform);
-	kernel_fpu_end();
 
 	return sha512_base_finish(desc, out);
 }
@@ -106,14 +130,14 @@ static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
 			       unsigned int len)
 {
 	return sha512_update(desc, data, len, bytes_per_fpu_ssse3,
-			     sha512_transform_ssse3);
+			     fpu_sha512_transform_ssse3);
 }
 
 static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
 			      unsigned int len, u8 *out)
 {
 	return sha512_finup(desc, data, len, bytes_per_fpu_ssse3,
-			    out, sha512_transform_ssse3);
+			    out, fpu_sha512_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -172,14 +196,14 @@ static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
 			     unsigned int len)
 {
 	return sha512_update(desc, data, len, bytes_per_fpu_avx,
-			     sha512_transform_avx);
+			     fpu_sha512_transform_avx);
 }
 
 static int sha512_avx_finup(struct shash_desc *desc, const u8 *data,
 			    unsigned int len, u8 *out)
 {
 	return sha512_finup(desc, data, len, bytes_per_fpu_avx,
-			    out, sha512_transform_avx);
+			    out, fpu_sha512_transform_avx);
 }
 
 /* Add padding and return the message digest. */
@@ -234,14 +258,14 @@ static int sha512_avx2_update(struct shash_desc *desc, const u8 *data,
 			      unsigned int len)
 {
 	return sha512_update(desc, data, len, bytes_per_fpu_avx2,
-			     sha512_transform_rorx);
+			     fpu_sha512_transform_avx2);
 }
 
 static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data,
 			     unsigned int len, u8 *out)
 {
 	return sha512_finup(desc, data, len, bytes_per_fpu_avx2,
-			    out, sha512_transform_rorx);
+			    out, fpu_sha512_transform_avx2);
 }
 
 /* Add padding and return the message digest. */
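The refactor is identical across the sha1, sha256, and sha512 glue files: kernel_fpu_begin()/kernel_fpu_end() move out of the shared update/finup loops into one small wrapper per assembly transform, so every block function passed around now brackets its own FPU section. A minimal standalone sketch of the pattern (my_transform_asm and its prototype are hypothetical stand-ins, not symbols from this series):

#include <linux/linkage.h>
#include <linux/types.h>
#include <asm/fpu/api.h>	/* kernel_fpu_begin()/kernel_fpu_end() */

/* Hypothetical assembly transform, standing in for sha*_transform_*(). */
asmlinkage void my_transform_asm(u32 *state, const u8 *data, int blocks);

static void fpu_my_transform(u32 *state, const u8 *data, int blocks)
{
	/* Take ownership of the FPU/SIMD registers; disables preemption. */
	kernel_fpu_begin();
	my_transform_asm(state, data, blocks);
	/* Release promptly; callers bound the bytes per FPU section. */
	kernel_fpu_end();
}

The update/finup loops still cap each call at bytes_per_fpu_* bytes, so preemption is never held off for longer than one bounded chunk.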
From patchwork Thu Nov 3 04:27:36 2022
From: Robert Elliott
Subject: [PATCH v3 13/17] crypto: x86/sha1, sha256 - load based on CPU features
Date: Wed, 2 Nov 2022 23:27:36 -0500
Message-Id: <20221103042740.6556-14-elliott@hpe.com>
Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU
features"), add module aliases based on CPU feature bits for the
x86-optimized sha1 and sha256 crypto modules, so udev gets a chance to
load them later in the boot process, when the filesystems are all
running.

Signed-off-by: Robert Elliott
---
v3 put device table SHA_NI entries inside CONFIG_SHAn_NI ifdefs,
   ensure builds properly with arch/x86/Kconfig.assembler changed
   to not set CONFIG_AS_SHA*_NI
---
 arch/x86/crypto/sha1_ssse3_glue.c   | 15 +++++++++++++++
 arch/x86/crypto/sha256_ssse3_glue.c | 15 +++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index cd390083451f..7269beaa9291 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <asm/cpu_device_id.h>
 #include
 
 /* avoid kernel_fpu_begin/end scheduler/rcu stalls */
@@ -326,12 +327,26 @@ static void unregister_sha1_ni(void)
 static inline void unregister_sha1_ni(void) { }
 #endif
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+#ifdef CONFIG_AS_SHA1_NI
+	X86_MATCH_FEATURE(X86_FEATURE_SHA_NI, NULL),
+#endif
+	X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int __init sha1_ssse3_mod_init(void)
 {
 	const char *feature_name;
 	const char *driver_name = NULL;
 	int ret;
 
+	if (!x86_match_cpu(module_cpu_ids))
+		return -ENODEV;
+
 #ifdef CONFIG_AS_SHA1_NI
 	/* SHA-NI */
 	if (boot_cpu_has(X86_FEATURE_SHA_NI)) {
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 692d6f010a4d..5ce42f1d228b 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include <asm/cpu_device_id.h>
 #include
 
 /* avoid kernel_fpu_begin/end scheduler/rcu stalls */
@@ -397,6 +398,17 @@ static void unregister_sha256_ni(void)
 static inline void unregister_sha256_ni(void) { }
 #endif
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+#ifdef CONFIG_AS_SHA256_NI
+	X86_MATCH_FEATURE(X86_FEATURE_SHA_NI, NULL),
+#endif
+	X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int __init sha256_ssse3_mod_init(void)
 {
 	const char *feature_name;
@@ -404,6 +416,9 @@ static int __init sha256_ssse3_mod_init(void)
 	const char *driver_name2 = NULL;
 	int ret;
 
+	if (!x86_match_cpu(module_cpu_ids))
+		return -ENODEV;
+
 #ifdef CONFIG_AS_SHA256_NI
 	/* SHA-NI */
 	if (boot_cpu_has(X86_FEATURE_SHA_NI)) {
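The mechanism this patch relies on, sketched under assumed names
(demo_mod_init, the license line, and the single SSSE3 entry are
illustrative, not taken from the patch): MODULE_DEVICE_TABLE(x86cpu, ...)
records the feature table as module aliases, udev compares those aliases
against the CPU's modalias reported through sysfs uevents, and modprobe
then loads the module with no dependency on when the filesystems come up.

#include <linux/module.h>
#include <asm/cpu_device_id.h>
#include <asm/cpufeatures.h>

static const struct x86_cpu_id module_cpu_ids[] = {
	X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
	{}
};
/* Emits cpu:type:x86,...feature:... aliases for udev autoloading. */
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

static int __init demo_mod_init(void)
{
	/* Re-check at init time; a bare -ENODEV keeps manual loads quiet. */
	if (!x86_match_cpu(module_cpu_ids))
		return -ENODEV;
	return 0;
}
module_init(demo_mod_init);

MODULE_LICENSE("GPL");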
From patchwork Thu Nov 3 04:27:37 2022
From: Robert Elliott
Subject: [PATCH v3 14/17] crypto: x86/crc - load based on CPU features
Date: Wed, 2 Nov 2022 23:27:37 -0500
Message-Id: <20221103042740.6556-15-elliott@hpe.com>

Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU
features"), these x86-optimized crypto modules already have module
aliases based on CPU feature bits: crc32, crc32c, and crct10dif.

Rename the unique device table data structure to a generic name so the
code has the same pattern in all the modules.

Remove the print on a device table mismatch from crc32; it is not
present in the other modules, and modules are not supposed to print
unless they are active.

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/crc32-pclmul_glue.c     | 9 +++------
 arch/x86/crypto/crc32c-intel_glue.c     | 6 +++---
 arch/x86/crypto/crct10dif-pclmul_glue.c | 6 +++---
 3 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c
index df3dbc754818..f6d8a933641f 100644
--- a/arch/x86/crypto/crc32-pclmul_glue.c
+++ b/arch/x86/crypto/crc32-pclmul_glue.c
@@ -182,20 +182,17 @@ static struct shash_alg alg = {
 	}
 };
 
-static const struct x86_cpu_id crc32pclmul_cpu_id[] = {
+static const struct x86_cpu_id module_cpu_ids[] = {
 	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL),
 	{}
 };
-MODULE_DEVICE_TABLE(x86cpu, crc32pclmul_cpu_id);
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
 
 static int __init crc32_pclmul_mod_init(void)
 {
-
-	if (!x86_match_cpu(crc32pclmul_cpu_id)) {
-		pr_info("PCLMULQDQ-NI instructions are not detected.\n");
+	if (!x86_match_cpu(module_cpu_ids))
 		return -ENODEV;
-	}
 	return crypto_register_shash(&alg);
 }
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
index f08ed68ec93d..aff132e925ea 100644
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -240,15 +240,15 @@ static struct shash_alg alg = {
 	}
 };
 
-static const struct x86_cpu_id crc32c_cpu_id[] = {
+static const struct x86_cpu_id module_cpu_ids[] = {
 	X86_MATCH_FEATURE(X86_FEATURE_XMM4_2, NULL),
 	{}
 };
-MODULE_DEVICE_TABLE(x86cpu, crc32c_cpu_id);
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
 
 static int __init crc32c_intel_mod_init(void)
 {
-	if (!x86_match_cpu(crc32c_cpu_id))
+	if (!x86_match_cpu(module_cpu_ids))
 		return -ENODEV;
 #ifdef CONFIG_X86_64
 	if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) {
diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
index 4f6b8c727d88..a26dbd27da96 100644
--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
+++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
@@ -139,15 +139,15 @@ static struct shash_alg alg = {
 	}
 };
 
-static const struct x86_cpu_id crct10dif_cpu_id[] = {
+static const struct x86_cpu_id module_cpu_ids[] = {
 	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL),
 	{}
 };
-MODULE_DEVICE_TABLE(x86cpu, crct10dif_cpu_id);
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
 
 static int __init crct10dif_intel_mod_init(void)
 {
-	if (!x86_match_cpu(crct10dif_cpu_id))
+	if (!x86_match_cpu(module_cpu_ids))
 		return -ENODEV;
 
 	return crypto_register_shash(&alg);
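A related detail of the API being standardized on here: x86_match_cpu()
returns a pointer to the first matching table entry rather than a plain
boolean, so the data argument of X86_MATCH_FEATURE() can carry per-entry
driver_data. The crc modules all pass NULL, but a module selecting among
implementations could use it like this (purely illustrative):

#include <linux/module.h>
#include <asm/cpu_device_id.h>
#include <asm/cpufeatures.h>

/* Entries are tried in order, so list the preferred feature first. */
static const struct x86_cpu_id module_cpu_ids[] = {
	X86_MATCH_FEATURE(X86_FEATURE_AVX2, (unsigned long)"avx2"),
	X86_MATCH_FEATURE(X86_FEATURE_SSSE3, (unsigned long)"ssse3"),
	{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

static int __init demo_mod_init(void)
{
	const struct x86_cpu_id *m = x86_match_cpu(module_cpu_ids);

	if (!m)
		return -ENODEV;	/* quiet failure, as in the crc modules */
	/* (const char *)m->driver_data now names the chosen variant. */
	return 0;
}
module_init(demo_mod_init);

MODULE_LICENSE("GPL");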
From patchwork Thu Nov 3 04:27:38 2022
From: Robert Elliott
Subject: [PATCH v3 15/17] crypto: x86/sm3 - load based on CPU features
Date: Wed, 2 Nov 2022 23:27:38 -0500
Message-Id: <20221103042740.6556-16-elliott@hpe.com>

Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU
features"), add a module alias based on CPU feature bits for the
x86-optimized sm3 crypto module, so udev gets a chance to load it later
in the boot process, when the filesystems are all running.

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/sm3_avx_glue.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c
index 483aaed996ba..26256cc0cbb6 100644
--- a/arch/x86/crypto/sm3_avx_glue.c
+++ b/arch/x86/crypto/sm3_avx_glue.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <asm/cpu_device_id.h>
 #include
 
 /* avoid kernel_fpu_begin/end scheduler/rcu stalls */
@@ -119,10 +120,19 @@ static struct shash_alg sm3_avx_alg = {
 	}
 };
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+	X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int __init sm3_avx_mod_init(void)
 {
 	const char *feature_name;
 
+	if (!x86_match_cpu(module_cpu_ids))
+		return -ENODEV;
+
 	if (!boot_cpu_has(X86_FEATURE_AVX)) {
 		pr_info("AVX instruction are not detected.\n");
 		return -ENODEV;
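The AVX check retained below the new x86_match_cpu() call is not fully
redundant: the device table only proves the CPU decodes AVX
instructions, while FPU-heavy code must also confirm the kernel
context-switches the relevant register state (the feature_name variable
in the context above feeds a cpu_has_xfeatures() check further down in
the upstream file). A sketch of that two-stage gate (avx_demo_init is an
illustrative name and the header choices are approximate):

#include <linux/module.h>
#include <asm/cpufeature.h>
#include <asm/fpu/api.h>	/* cpu_has_xfeatures() */

static int __init avx_demo_init(void)
{
	const char *feature_name;

	/* Stage 1: the CPU implements the AVX instructions. */
	if (!boot_cpu_has(X86_FEATURE_AVX))
		return -ENODEV;

	/* Stage 2: the OS saves/restores the SSE and YMM register state. */
	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
			       &feature_name))
		return -ENODEV;

	return 0;
}
module_init(avx_demo_init);

MODULE_LICENSE("GPL");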
From patchwork Thu Nov 3 04:27:39 2022
From: Robert Elliott
Subject: [PATCH v3 16/17] crypto: x86/ghash,polyval - load based on CPU features
Date: Wed, 2 Nov 2022 23:27:39 -0500
Message-Id: <20221103042740.6556-17-elliott@hpe.com>

Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU
features"), these x86-optimized crypto modules already have module
aliases based on CPU feature bits: ghash and polyval.

Rename the unique device table data structure to a generic name so the
code has the same pattern in all the modules.

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 6 +++---
 arch/x86/crypto/polyval-clmulni_glue.c     | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index 0f24c3b23fd2..d19a8e9b34a6 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -325,17 +325,17 @@ static struct ahash_alg ghash_async_alg = {
 	},
 };
 
-static const struct x86_cpu_id pcmul_cpu_id[] = {
+static const struct x86_cpu_id module_cpu_ids[] = {
 	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), /* Pickle-Mickle-Duck */
 	{}
 };
-MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id);
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
 
 static int __init ghash_pclmulqdqni_mod_init(void)
 {
 	int err;
 
-	if (!x86_match_cpu(pcmul_cpu_id))
+	if (!x86_match_cpu(module_cpu_ids))
 		return -ENODEV;
 
 	err = crypto_register_shash(&ghash_alg);
diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
index de1c908f7412..375c845695a4 100644
--- a/arch/x86/crypto/polyval-clmulni_glue.c
+++ b/arch/x86/crypto/polyval-clmulni_glue.c
@@ -176,15 +176,15 @@ static struct shash_alg polyval_alg = {
 	},
 };
 
-__maybe_unused static const struct x86_cpu_id pcmul_cpu_id[] = {
+__maybe_unused static const struct x86_cpu_id module_cpu_ids[] = {
 	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL),
 	{}
 };
-MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id);
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
 
 static int __init polyval_clmulni_mod_init(void)
 {
-	if (!x86_match_cpu(pcmul_cpu_id))
+	if (!x86_match_cpu(module_cpu_ids))
 		return -ENODEV;
 
 	if (!boot_cpu_has(X86_FEATURE_AVX))
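One subtlety these tables share, and which the multi-entry poly1305
table in the next patch exploits: each x86_cpu_id entry matches
independently, so several X86_MATCH_FEATURE() rows mean "load if the CPU
has any of these features", not all of them. A module with an AVX2 fast
path and a PCLMULQDQ fallback would express that as follows
(illustrative, not from the patch):

#include <linux/module.h>
#include <asm/cpu_device_id.h>
#include <asm/cpufeatures.h>

static const struct x86_cpu_id module_cpu_ids[] = {
	X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),	/* fast path */
	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL),	/* fallback */
	{}	/* terminator */
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

The init routine still decides which implementation to register once the
module has been loaded.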
From patchwork Thu Nov 3 04:27:40 2022
From: Robert Elliott
Subject: [PATCH v3 17/17] crypto: x86/nhpoly1305, poly1305 - load based on CPU features
Date: Wed, 2 Nov 2022 23:27:40 -0500
Message-Id: <20221103042740.6556-18-elliott@hpe.com>

Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU
features"), add module aliases based on CPU feature bits for the
x86-optimized nhpoly1305 and poly1305 crypto modules, so udev gets a
chance to load them later in the boot process, when the filesystems are
all running.

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/nhpoly1305-avx2-glue.c | 10 ++++++++++
 arch/x86/crypto/nhpoly1305-sse2-glue.c | 10 ++++++++++
 arch/x86/crypto/poly1305_glue.c        | 12 ++++++++++++
 3 files changed, 32 insertions(+)

diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c
index f7dc9c563bb5..15f98b53bfda 100644
--- a/arch/x86/crypto/nhpoly1305-avx2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <asm/cpu_device_id.h>
 #include
 
 /* avoid kernel_fpu_begin/end scheduler/rcu stalls */
@@ -60,8 +61,17 @@ static struct shash_alg nhpoly1305_alg = {
 	.descsize		= sizeof(struct nhpoly1305_state),
 };
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+	X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int __init nhpoly1305_mod_init(void)
 {
+	if (!x86_match_cpu(module_cpu_ids))
+		return -ENODEV;
+
 	if (!boot_cpu_has(X86_FEATURE_AVX2) ||
 	    !boot_cpu_has(X86_FEATURE_OSXSAVE))
 		return -ENODEV;
diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c
index daffcc7019ad..533db3e0e06f 100644
--- a/arch/x86/crypto/nhpoly1305-sse2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <asm/cpu_device_id.h>
 #include
 
 /* avoid kernel_fpu_begin/end scheduler/rcu stalls */
@@ -60,8 +61,17 @@ static struct shash_alg nhpoly1305_alg = {
 	.descsize		= sizeof(struct nhpoly1305_state),
 };
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+	X86_MATCH_FEATURE(X86_FEATURE_XMM2, NULL),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int __init nhpoly1305_mod_init(void)
 {
+	if (!x86_match_cpu(module_cpu_ids))
+		return -ENODEV;
+
 	if (!boot_cpu_has(X86_FEATURE_XMM2))
 		return -ENODEV;
diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
index 16831c036d71..2ff4358e4b3f 100644
--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <asm/cpu_device_id.h>
 #include
 #include
 
@@ -268,8 +269,19 @@ static struct shash_alg alg = {
 	},
 };
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+	X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_AVX512F, NULL),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int __init poly1305_simd_mod_init(void)
 {
+	if (!x86_match_cpu(module_cpu_ids))
+		return -ENODEV;
+
 	if (boot_cpu_has(X86_FEATURE_AVX) &&
 	    cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL))
 		static_branch_enable(&poly1305_use_avx);