From patchwork Tue Oct 15 02:45:15 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 11189595
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: 
Eric Biggers
To: linux-crypto@vger.kernel.org, Herbert Xu
Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
    Markus Stockhausen, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 1/3] crypto: powerpc - don't unnecessarily use atomic scatterwalk
Date: Mon, 14 Oct 2019 19:45:15 -0700
Message-Id: <20191015024517.52790-2-ebiggers@kernel.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191015024517.52790-1-ebiggers@kernel.org>
References: <20191015024517.52790-1-ebiggers@kernel.org>

From: Eric Biggers

The PowerPC SPE implementations of AES modes only disable preemption
during the actual encryption/decryption, not during the scatterwalk
functions.  It's therefore unnecessary to request an atomic scatterwalk.
So don't do so.

Signed-off-by: Eric Biggers
---
 arch/powerpc/crypto/aes-spe-glue.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/arch/powerpc/crypto/aes-spe-glue.c b/arch/powerpc/crypto/aes-spe-glue.c
index 3a4ca7d32477..319f1dbb3a70 100644
--- a/arch/powerpc/crypto/aes-spe-glue.c
+++ b/arch/powerpc/crypto/aes-spe-glue.c
@@ -186,7 +186,6 @@ static int ppc_ecb_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	unsigned int ubytes;
 	int err;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -214,7 +213,6 @@ static int ppc_ecb_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	unsigned int ubytes;
 	int err;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -242,7 +240,6 @@ static int ppc_cbc_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	unsigned int ubytes;
 	int err;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -270,7 +267,6 @@ static int ppc_cbc_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	unsigned int ubytes;
 	int err;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -298,7 +294,6 @@ static int ppc_ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	unsigned int pbytes, ubytes;
 	int err;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
 
@@ -329,7 +324,6 @@ static int ppc_xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	int err;
 	u32 *twk;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 	twk = ctx->key_twk;
@@ -360,7 +354,6 @@ static int ppc_xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	int err;
 	u32 *twk;
 
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 	twk = ctx->key_twk;
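
For context on why the atomic walk is unneeded: the SPE glue code brackets only the low-level cipher calls, not the walk, in a preempt-disabled region. A from-memory sketch of that pattern (the spe_begin()/spe_end() helpers in aes-spe-glue.c; kernel code, not runnable standalone, and not part of this diff):

```c
/*
 * Sketch of the preemption bracketing in aes-spe-glue.c.  SPE register
 * state is only valid while preemption is off, so each encryption step
 * is wrapped in spe_begin()/spe_end(); the blkcipher_walk_*() calls run
 * outside this region and therefore remain free to sleep.
 */
static void spe_begin(void)
{
	/* disable preemption and make the SPE unit usable in kernel mode */
	preempt_disable();
	enable_kernel_spe();
}

static void spe_end(void)
{
	disable_kernel_spe();
	/* reenable preemption; the scatterwalk may sleep again */
	preempt_enable();
}
```

Since the walk itself never runs with preemption disabled, clearing CRYPTO_TFM_REQ_MAY_SLEEP (which forces the atomic walk) serves no purpose, which is what this patch removes.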