From patchwork Mon Dec 4 12:26:36 2017
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 10090089
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH v2 10/19] crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
Date: Mon, 4 Dec 2017 12:26:36 +0000
Message-Id: <20171204122645.31535-11-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
References: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Tweak the SHA256 update routines to invoke the SHA256 block transform
block by block, to avoid excessive scheduling delays caused by the NEON
algorithm running with preemption disabled.

Also, remove a stale comment which no longer applies now that kernel mode
NEON is actually disallowed in some contexts.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/sha256-glue.c | 36 +++++++++++++-------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index b064d925fe2a..e8880ccdc71f 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -89,21 +89,32 @@ static struct shash_alg algs[] = { {
 static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
                               unsigned int len)
 {
-        /*
-         * Stacking and unstacking a substantial slice of the NEON register
-         * file may significantly affect performance for small updates when
-         * executing in interrupt context, so fall back to the scalar code
-         * in that case.
-         */
+        struct sha256_state *sctx = shash_desc_ctx(desc);
+
         if (!may_use_simd())
                 return sha256_base_do_update(desc, data, len,
                                 (sha256_block_fn *)sha256_block_data_order);
 
-        kernel_neon_begin();
-        sha256_base_do_update(desc, data, len,
-                              (sha256_block_fn *)sha256_block_neon);
-        kernel_neon_end();
+        while (len > 0) {
+                unsigned int chunk = len;
+
+                /*
+                 * Don't hog the CPU for the entire time it takes to process all
+                 * input when running on a preemptible kernel, but process the
+                 * data block by block instead.
+                 */
+                if (IS_ENABLED(CONFIG_PREEMPT) &&
+                    chunk + sctx->count % SHA256_BLOCK_SIZE > SHA256_BLOCK_SIZE)
+                        chunk = SHA256_BLOCK_SIZE -
+                                sctx->count % SHA256_BLOCK_SIZE;
+                kernel_neon_begin();
+                sha256_base_do_update(desc, data, chunk,
+                                      (sha256_block_fn *)sha256_block_neon);
+                kernel_neon_end();
+                data += chunk;
+                len -= chunk;
+        }
         return 0;
 }
 
@@ -117,10 +128,9 @@ static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
                 sha256_base_do_finalize(desc,
                                 (sha256_block_fn *)sha256_block_data_order);
         } else {
-                kernel_neon_begin();
                 if (len)
-                        sha256_base_do_update(desc, data, len,
-                                (sha256_block_fn *)sha256_block_neon);
+                        sha256_update_neon(desc, data, len);
+                kernel_neon_begin();
                 sha256_base_do_finalize(desc,
                                 (sha256_block_fn *)sha256_block_neon);
                 kernel_neon_end();
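
For illustration, the effect of the CONFIG_PREEMPT chunking above can be seen
in a minimal user-space sketch of the same arithmetic. The block size (64
bytes), the buffered byte count and the update length below are assumptions
chosen for the example; in the real code the running count lives in
sctx->count, sha256_base_do_update() advances it, and each chunk is bracketed
by kernel_neon_begin()/kernel_neon_end():

#include <stdio.h>

#define SHA256_BLOCK_SIZE 64

int main(void)
{
        /* Hypothetical state: 40 bytes already buffered, 300 new bytes. */
        unsigned long long count = 40; /* stand-in for sctx->count */
        unsigned int len = 300;        /* length of the incoming update */
        int preempt = 1;               /* stand-in for IS_ENABLED(CONFIG_PREEMPT) */

        while (len > 0) {
                unsigned int chunk = len;

                /* Limit the chunk so each NEON section ends at a block boundary. */
                if (preempt &&
                    chunk + count % SHA256_BLOCK_SIZE > SHA256_BLOCK_SIZE)
                        chunk = SHA256_BLOCK_SIZE - count % SHA256_BLOCK_SIZE;

                /* kernel_neon_begin()/kernel_neon_end() would bracket this. */
                printf("NEON section: %u bytes (block offset %llu)\n",
                       chunk, count % SHA256_BLOCK_SIZE);

                count += chunk; /* mirrors what sha256_base_do_update() does */
                len -= chunk;
        }
        return 0;
}

With 40 bytes already buffered and a 300-byte update, the loop first tops up
the partial block (24 bytes) and then handles one 64-byte block per NEON
section, with the trailing 20 bytes left for the next update, so preemption is
never disabled for longer than a single block transform.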