From patchwork Wed Dec 6 19:43:35 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10096951
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 09/20] crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
Date: Wed, 6 Dec 2017 19:43:35 +0000
Message-Id: <20171206194346.24393-10-ard.biesheuvel@linaro.org>
In-Reply-To: <20171206194346.24393-1-ard.biesheuvel@linaro.org>
References: <20171206194346.24393-1-ard.biesheuvel@linaro.org>
Cc: Mark Rutland, herbert@gondor.apana.org.au, Ard Biesheuvel,
    Peter Zijlstra, Catalin Marinas, Sebastian Andrzej Siewior,
    Will Deacon, Russell King - ARM Linux, Steven Rostedt,
    Thomas Gleixner, Dave Martin, linux-arm-kernel@lists.infradead.org,
    linux-rt-users@vger.kernel.org

Tweak the SHA256 update routines to invoke the SHA256 block transform
block by block, to avoid excessive scheduling delays caused by the NEON
algorithm running with preemption disabled.

Also, remove a stale comment which no longer applies now that kernel
mode NEON is actually disallowed in some contexts.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/sha256-glue.c | 36 +++++++++++++-------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index b064d925fe2a..e8880ccdc71f 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -89,21 +89,32 @@ static struct shash_alg algs[] = { {
 static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
 			      unsigned int len)
 {
-	/*
-	 * Stacking and unstacking a substantial slice of the NEON register
-	 * file may significantly affect performance for small updates when
-	 * executing in interrupt context, so fall back to the scalar code
-	 * in that case.
-	 */
+	struct sha256_state *sctx = shash_desc_ctx(desc);
+
 	if (!may_use_simd())
 		return sha256_base_do_update(desc, data, len,
 				(sha256_block_fn *)sha256_block_data_order);
 
-	kernel_neon_begin();
-	sha256_base_do_update(desc, data, len,
-			      (sha256_block_fn *)sha256_block_neon);
-	kernel_neon_end();
+	while (len > 0) {
+		unsigned int chunk = len;
+
+		/*
+		 * Don't hog the CPU for the entire time it takes to process all
+		 * input when running on a preemptible kernel, but process the
+		 * data block by block instead.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT) &&
+		    chunk + sctx->count % SHA256_BLOCK_SIZE > SHA256_BLOCK_SIZE)
+			chunk = SHA256_BLOCK_SIZE -
+				sctx->count % SHA256_BLOCK_SIZE;
+
+		kernel_neon_begin();
+		sha256_base_do_update(desc, data, chunk,
+				      (sha256_block_fn *)sha256_block_neon);
+		kernel_neon_end();
+		data += chunk;
+		len -= chunk;
+	}
 	return 0;
 }
 
@@ -117,10 +128,9 @@ static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
 		sha256_base_do_finalize(desc,
 				(sha256_block_fn *)sha256_block_data_order);
 	} else {
-		kernel_neon_begin();
 		if (len)
-			sha256_base_do_update(desc, data, len,
-				(sha256_block_fn *)sha256_block_neon);
+			sha256_update_neon(desc, data, len);
+
+		kernel_neon_begin();
 		sha256_base_do_finalize(desc,
 				(sha256_block_fn *)sha256_block_neon);
 		kernel_neon_end();