From patchwork Tue Apr 1 13:47:39 2014
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 3923371
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, nico@linaro.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v3 7/7] arm64/crypto: add voluntary preemption to Crypto Extensions GHASH
Date: Tue, 1 Apr 2014 15:47:39 +0200
Message-Id: <1396360059-31949-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1396360059-31949-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1396360059-31949-1-git-send-email-ard.biesheuvel@linaro.org>
The Crypto Extensions based GHASH implementation uses the NEON register
file, and hence runs with preemption disabled. This patch adds a
TIF_NEED_RESCHED check to its inner loop so we at least give up the CPU
voluntarily if we are running in process context and have been tagged
for preemption by the scheduler.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/ghash-ce-core.S | 10 ++++++----
 arch/arm64/crypto/ghash-ce-glue.c | 33 +++++++++++++++++++++++++--------
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index 1ca719ce9323..14240c6dc343 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -32,8 +32,9 @@
 	.align	3
 
 	/*
-	 * void pmull_ghash_update(int blocks, u64 dg[], const char *src,
-	 *			   struct ghash_key const *k, const char *head)
+	 * int pmull_ghash_update(int blocks, u64 dg[], const char *src,
+	 *			  struct ghash_key const *k, const char *head,
+	 *			  struct thread_info *ti)
 	 */
 ENTRY(pmull_ghash_update)
 	ld1	{DATA.16b}, [x1]
@@ -89,8 +90,9 @@ CPU_LE(	rev64	IN1.16b, IN1.16b	)
 	eor	T1.16b, T1.16b, T2.16b
 	eor	DATA.16b, DATA.16b, T1.16b
 
-	cbnz	w0, 0b
+	cbz	w0, 2f
+	b_if_no_resched	x5, x7, 0b
 
-	st1	{DATA.16b}, [x1]
+2:	st1	{DATA.16b}, [x1]
 	ret
 ENDPROC(pmull_ghash_update)

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index b92baf3f68c7..4df64832617d 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -33,8 +33,9 @@ struct ghash_desc_ctx {
 	u32 count;
 };
 
-asmlinkage void pmull_ghash_update(int blocks, u64 dg[], const char *src,
-				   struct ghash_key const *k, const char *head);
+asmlinkage int pmull_ghash_update(int blocks, u64 dg[], const char *src,
+				  struct ghash_key const *k, const char *head,
+				  struct thread_info *ti);
 
 static int ghash_init(struct shash_desc *desc)
 {
@@ -54,6 +55,7 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 
 	if ((partial + len) >= GHASH_BLOCK_SIZE) {
 		struct ghash_key *key = crypto_shash_ctx(desc->tfm);
+		struct thread_info *ti = NULL;
 		int blocks;
 
 		if (partial) {
@@ -64,14 +66,29 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 			len -= p;
 		}
 
+		/*
+		 * Pass current's thread info pointer to pmull_ghash_update()
+		 * below if we want it to play nice under preemption.
+		 */
+		if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
+		     IS_ENABLED(CONFIG_PREEMPT)) && !in_interrupt())
+			ti = current_thread_info();
+
 		blocks = len / GHASH_BLOCK_SIZE;
 		len %= GHASH_BLOCK_SIZE;
 
-		kernel_neon_begin_partial(6);
-		pmull_ghash_update(blocks, ctx->digest, src, key,
-				   partial ? ctx->buf : NULL);
-		kernel_neon_end();
-		src += blocks * GHASH_BLOCK_SIZE;
+		do {
+			int rem;
+
+			kernel_neon_begin_partial(6);
+			rem = pmull_ghash_update(blocks, ctx->digest, src, key,
+						 partial ? ctx->buf : NULL, ti);
+			kernel_neon_end();
+
+			src += (blocks - rem) * GHASH_BLOCK_SIZE;
+			blocks = rem;
+			partial = 0;
+		} while (unlikely(ti && blocks > 0));
 	}
 	if (len)
 		memcpy(ctx->buf + partial, src, len);
@@ -89,7 +106,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)
 		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
 
 		kernel_neon_begin_partial(6);
-		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL);
+		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL, NULL);
 		kernel_neon_end();
 	}
 	put_unaligned_be64(ctx->digest[1], dst);
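
A note for readers skimming the diff: the glue-code change amounts to doing a
bounded amount of work inside each kernel_neon_begin_partial()/kernel_neon_end()
section, having the asm routine report how many blocks it left unprocessed, and
looping until nothing remains, so a reschedule can happen between sections. The
user-space C sketch below mirrors only that loop shape; every name in it
(begin_fpu_section, process_blocks, CHUNK_BLOCKS, and so on) is invented for
illustration and does not exist in the kernel, and the early exit here is a
fixed chunk size rather than the TIF_NEED_RESCHED test that b_if_no_resched
performs in the real code.

/*
 * Minimal user-space analogue of the chunked-processing pattern above.
 * begin_fpu_section()/end_fpu_section() stand in for the non-preemptible
 * kernel_neon_begin_partial()/kernel_neon_end() region, and sched_yield()
 * stands in for the voluntary reschedule. All names are hypothetical.
 */
#include <sched.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE	16	/* plays the role of GHASH_BLOCK_SIZE */
#define CHUNK_BLOCKS	64	/* blocks handled per "non-preemptible" section */

static void begin_fpu_section(void) { /* kernel_neon_begin_partial(6) */ }
static void end_fpu_section(void)   { /* kernel_neon_end() */ }

/*
 * Returns the number of blocks left unprocessed, the way the patched
 * pmull_ghash_update() does when it bails out early.
 */
static int process_blocks(int blocks, const unsigned char *src,
			  unsigned char digest[BLOCK_SIZE])
{
	int done = 0;

	while (blocks > 0 && done < CHUNK_BLOCKS) {
		for (size_t i = 0; i < BLOCK_SIZE; i++)
			digest[i] ^= src[i];	/* placeholder for the real GHASH math */
		src += BLOCK_SIZE;
		blocks--;
		done++;
	}
	return blocks;	/* remaining blocks, 0 when finished */
}

int main(void)
{
	unsigned char buf[1024 * BLOCK_SIZE];
	unsigned char digest[BLOCK_SIZE] = { 0 };
	const unsigned char *src = buf;
	int blocks = sizeof(buf) / BLOCK_SIZE;

	memset(buf, 0xa5, sizeof(buf));

	/* Same shape as the do/while loop in ghash_update() above. */
	do {
		int rem;

		begin_fpu_section();
		rem = process_blocks(blocks, src, digest);
		end_fpu_section();

		src += (size_t)(blocks - rem) * BLOCK_SIZE;
		blocks = rem;

		if (blocks > 0)
			sched_yield();	/* give the scheduler a chance between chunks */
	} while (blocks > 0);

	printf("digest[0] = 0x%02x\n", digest[0]);
	return 0;
}

Having the asm routine return the remaining block count keeps all the buffer
and state bookkeeping in the C caller, which only has to advance the source
pointer by the number of blocks actually consumed before retrying.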