From patchwork Sat Dec 21 09:10:28 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 01/29] crypto: skcipher - document skcipher_walk_done() and rename some vars
Date: Sat, 21 Dec 2024 01:10:28 -0800
Message-ID: <20241221091056.282098-2-ebiggers@kernel.org>

skcipher_walk_done() has an unusual calling convention, and some of its
local variables have unclear names.  Document it and rename variables to
make it a bit clearer what is going on.  No change in behavior.
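As an illustration of that calling convention, here is a minimal sketch
(not taken from this patch) of the loop a typical cipher implementation
runs, where encrypt_blocks() is a hypothetical helper that returns the
number of bytes it actually processed:

	struct skcipher_walk walk;
	int err;

	err = skcipher_walk_virt(&walk, req, false);
	while (walk.nbytes != 0) {
		/* Process as much of this step's data as possible. */
		unsigned int done = encrypt_blocks(ctx,
						   walk.dst.virt.addr,
						   walk.src.virt.addr,
						   walk.nbytes);

		/* Hand back the number of bytes *not* processed. */
		err = skcipher_walk_done(&walk, walk.nbytes - done);
	}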
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c                  | 50 ++++++++++++++++++++----------
 include/crypto/internal/skcipher.h |  2 +-
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index d5fe0eca3826..8749c44f98a2 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -87,21 +87,39 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
-int skcipher_walk_done(struct skcipher_walk *walk, int err)
+/**
+ * skcipher_walk_done() - finish one step of a skcipher_walk
+ * @walk: the skcipher_walk
+ * @res: number of bytes *not* processed (>= 0) from walk->nbytes,
+ *	 or a -errno value to terminate the walk due to an error
+ *
+ * This function cleans up after one step of walking through the source and
+ * destination scatterlists, and advances to the next step if applicable.
+ * walk->nbytes is set to the number of bytes available in the next step,
+ * walk->total is set to the new total number of bytes remaining, and
+ * walk->{src,dst}.virt.addr is set to the next pair of data pointers.  If there
+ * is no more data, or if an error occurred (i.e. -errno return), then
+ * walk->nbytes and walk->total are set to 0 and all resources owned by the
+ * skcipher_walk are freed.
+ *
+ * Return: 0 or a -errno value.  If @res was a -errno value then it will be
+ *	   returned, but other errors may occur too.
+ */
+int skcipher_walk_done(struct skcipher_walk *walk, int res)
 {
-	unsigned int n = walk->nbytes;
-	unsigned int nbytes = 0;
+	unsigned int n = walk->nbytes; /* num bytes processed this step */
+	unsigned int total = 0; /* new total remaining */
 
 	if (!n)
 		goto finish;
 
-	if (likely(err >= 0)) {
-		n -= err;
-		nbytes = walk->total - n;
+	if (likely(res >= 0)) {
+		n -= res; /* subtract num bytes *not* processed */
+		total = walk->total - n;
 	}
 
 	if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW |
 				    SKCIPHER_WALK_COPY |
 				    SKCIPHER_WALK_DIFF)))) {
@@ -113,35 +131,35 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 	} else if (walk->flags & SKCIPHER_WALK_COPY) {
 		skcipher_map_dst(walk);
 		memcpy(walk->dst.virt.addr, walk->page, n);
 		skcipher_unmap_dst(walk);
 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
-		if (err > 0) {
+		if (res > 0) {
 			/*
 			 * Didn't process all bytes.  Either the algorithm is
 			 * broken, or this was the last step and it turned out
 			 * the message wasn't evenly divisible into blocks but
 			 * the algorithm requires it.
 			 */
-			err = -EINVAL;
-			nbytes = 0;
+			res = -EINVAL;
+			total = 0;
 		} else
 			n = skcipher_done_slow(walk, n);
 	}
 
-	if (err > 0)
-		err = 0;
+	if (res > 0)
+		res = 0;
 
-	walk->total = nbytes;
+	walk->total = total;
 	walk->nbytes = 0;
 
 	scatterwalk_advance(&walk->in, n);
 	scatterwalk_advance(&walk->out, n);
-	scatterwalk_done(&walk->in, 0, nbytes);
-	scatterwalk_done(&walk->out, 1, nbytes);
+	scatterwalk_done(&walk->in, 0, total);
+	scatterwalk_done(&walk->out, 1, total);
 
-	if (nbytes) {
+	if (total) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
 		return skcipher_walk_next(walk);
 	}
 
@@ -156,11 +174,11 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 		kfree(walk->buffer);
 	if (walk->page)
 		free_page((unsigned long)walk->page);
 
 out:
-	return err;
+	return res;
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 08d1e8c63afc..4f49621d3eb6 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -194,11 +194,11 @@ void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
 int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
 void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
 int lskcipher_register_instance(struct crypto_template *tmpl,
 				struct lskcipher_instance *inst);
 
-int skcipher_walk_done(struct skcipher_walk *walk, int err);
+int skcipher_walk_done(struct skcipher_walk *walk, int res);
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic);
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
 			       struct aead_request *req, bool atomic);
From patchwork Sat Dec 21 09:10:29 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 02/29] crypto: skcipher - remove unnecessary page alignment of bounce buffer
Date: Sat, 21 Dec 2024 01:10:29 -0800
Message-ID: <20241221091056.282098-3-ebiggers@kernel.org>

In the slow path of skcipher_walk where it uses a slab bounce buffer for
the data and/or IV, do not bother to avoid crossing a page boundary in
the part(s) of this buffer that are used, and do not bother to allocate
extra space in the buffer for that purpose.  The buffer is accessed only
by virtual address, so pages are irrelevant for it.

This logic may have been present due to the physical address support in
skcipher_walk, but that has now been removed.  Or it may have been
present to be consistent with the fast path that currently does not hand
back addresses that span pages, but that behavior is a side effect of
the pages being "mapped" one by one and is not actually a requirement.
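As a worked example of the simplified sizing (a sketch of the reasoning,
not part of the patch): kzalloc() already returns memory aligned to
crypto_tfm_ctx_alignment(), so only the alignment required beyond that
needs extra slack.

	/*
	 * E.g. with alignmask = 15 (16-byte alignment required) and
	 * crypto_tfm_ctx_alignment() = 8:
	 *
	 *	n = bsize + (15 & ~(8 - 1)) = bsize + 8
	 *
	 * The returned pointer is 8-byte aligned, so PTR_ALIGN(buffer, 16)
	 * advances it by at most 8 bytes, and bsize bytes still fit.
	 */
	n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
	buffer = kzalloc(n, skcipher_walk_gfp(walk));
	aligned = PTR_ALIGN(buffer, alignmask + 1);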
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 62 ++++++++++++-----------------------------
 1 file changed, 15 insertions(+), 47 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 8749c44f98a2..887cbce8f78d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -61,32 +61,20 @@ static inline void skcipher_unmap_dst(struct skcipher_walk *walk)
 
 static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
 {
 	return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
 }
 
-/* Get a spot of the specified length that does not straddle a page.
- * The caller needs to ensure that there is enough space for this operation.
- */
-static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
-{
-	u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) & PAGE_MASK);
-
-	return max(start, end_page);
-}
-
 static inline struct skcipher_alg *__crypto_skcipher_alg(
 	struct crypto_alg *alg)
 {
 	return container_of(alg, struct skcipher_alg, base);
 }
 
 static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
-	u8 *addr;
+	u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1);
 
-	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
-	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
 /**
@@ -181,37 +169,26 @@ EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
 	unsigned alignmask = walk->alignmask;
-	unsigned a;
 	unsigned n;
 	u8 *buffer;
 
 	if (!walk->buffer)
 		walk->buffer = walk->page;
 	buffer = walk->buffer;
-	if (buffer)
-		goto ok;
-
-	/* Start with the minimum alignment of kmalloc. */
-	a = crypto_tfm_ctx_alignment() - 1;
-	n = bsize;
-
-	/* Minimum size to align buffer by alignmask. */
-	n += alignmask & ~a;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	n += (bsize - 1) & ~(alignmask | a);
-
-	buffer = kzalloc(n, skcipher_walk_gfp(walk));
-	if (!buffer)
-		return skcipher_walk_done(walk, -ENOMEM);
-	walk->buffer = buffer;
-ok:
+	if (!buffer) {
+		/* Min size for a buffer of bsize bytes aligned to alignmask */
+		n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
+
+		buffer = kzalloc(n, skcipher_walk_gfp(walk));
+		if (!buffer)
+			return skcipher_walk_done(walk, -ENOMEM);
+		walk->buffer = buffer;
+	}
 	walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1);
-	walk->dst.virt.addr = skcipher_get_spot(walk->dst.virt.addr, bsize);
 	walk->src.virt.addr = walk->dst.virt.addr;
 
 	scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0);
 
 	walk->nbytes = bsize;
@@ -294,34 +271,25 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {
-	unsigned a = crypto_tfm_ctx_alignment() - 1;
 	unsigned alignmask = walk->alignmask;
 	unsigned ivsize = walk->ivsize;
-	unsigned bs = walk->stride;
-	unsigned aligned_bs;
+	unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1);
 	unsigned size;
 	u8 *iv;
 
-	aligned_bs = ALIGN(bs, alignmask + 1);
-
-	/* Minimum size to align buffer by alignmask. */
-	size = alignmask & ~a;
-
-	size += aligned_bs + ivsize;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	size += (bs - 1) & ~(alignmask | a);
+	/* Min size for a buffer of stride + ivsize, aligned to alignmask */
+	size = aligned_stride + ivsize +
+	       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
 
 	walk->buffer = kmalloc(size, skcipher_walk_gfp(walk));
 	if (!walk->buffer)
 		return -ENOMEM;
 
-	iv = PTR_ALIGN(walk->buffer, alignmask + 1);
-	iv = skcipher_get_spot(iv, bs) + aligned_bs;
+	iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride;
 
 	walk->iv = memcpy(iv, walk->iv, walk->ivsize);
 	return 0;
 }
From patchwork Sat Dec 21 09:10:30 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 03/29] crypto: skcipher - remove redundant clamping to page size
Date: Sat, 21 Dec 2024 01:10:30 -0800
Message-ID: <20241221091056.282098-4-ebiggers@kernel.org>

In the case where skcipher_walk_next() allocates a bounce page, that
page by definition has size PAGE_SIZE.  The number of bytes to copy 'n'
is guaranteed to fit in it, since earlier in the function it was
clamped to be at most a page.  Therefore remove the unnecessary logic
that tried to clamp 'n' again to fit in the bounce page.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 887cbce8f78d..c627e267b125 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -248,28 +248,24 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 			return skcipher_walk_done(walk, -EINVAL);
 
 slow_path:
 		return skcipher_next_slow(walk, bsize);
 	}
+	walk->nbytes = n;
 
 	if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) {
 		if (!walk->page) {
 			gfp_t gfp = skcipher_walk_gfp(walk);
 
 			walk->page = (void *)__get_free_page(gfp);
 			if (!walk->page)
 				goto slow_path;
 		}
-
-		walk->nbytes = min_t(unsigned, n,
-				     PAGE_SIZE - offset_in_page(walk->page));
 		walk->flags |= SKCIPHER_WALK_COPY;
 		return skcipher_next_copy(walk);
 	}
 
-	walk->nbytes = n;
-
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {
From patchwork Sat Dec 21 09:10:31 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 04/29] crypto: skcipher - remove redundant check for SKCIPHER_WALK_SLOW
Date: Sat, 21 Dec 2024 01:10:31 -0800
Message-ID: <20241221091056.282098-5-ebiggers@kernel.org>

In skcipher_walk_done(), remove the check for SKCIPHER_WALK_SLOW
because it is always true.  All other flags (and lack thereof) were
checked earlier in the function, leaving SKCIPHER_WALK_SLOW as the only
remaining possibility.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index c627e267b125..98606def1bf9 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -118,11 +118,11 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 		goto unmap_src;
 	} else if (walk->flags & SKCIPHER_WALK_COPY) {
 		skcipher_map_dst(walk);
 		memcpy(walk->dst.virt.addr, walk->page, n);
 		skcipher_unmap_dst(walk);
-	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+	} else { /* SKCIPHER_WALK_SLOW */
 		if (res > 0) {
 			/*
 			 * Didn't process all bytes.  Either the algorithm is
 			 * broken, or this was the last step and it turned out
 			 * the message wasn't evenly divisible into blocks but
 			 * the algorithm requires it.
 			 */

From patchwork Sat Dec 21 09:10:32 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 05/29] crypto: skcipher - fold skcipher_walk_skcipher() into skcipher_walk_virt()
Date: Sat, 21 Dec 2024 01:10:32 -0800
Message-ID: <20241221091056.282098-6-ebiggers@kernel.org>

Fold skcipher_walk_skcipher() into skcipher_walk_virt() which is its
only remaining caller.  No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 98606def1bf9..17f4bc79ca8b 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -304,23 +304,26 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 	walk->page = NULL;
 	return skcipher_walk_next(walk);
 }
 
-static int skcipher_walk_skcipher(struct skcipher_walk *walk,
-				  struct skcipher_request *req)
+int skcipher_walk_virt(struct skcipher_walk *walk,
+		       struct skcipher_request *req, bool atomic)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	int err = 0;
+
+	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
 
 	if (unlikely(!walk->total))
-		return 0;
+		goto out;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
 	walk->flags &= ~SKCIPHER_WALK_SLEEP;
@@ -334,22 +337,12 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
-	return skcipher_walk_first(walk);
-}
-
-int skcipher_walk_virt(struct skcipher_walk *walk,
-		       struct skcipher_request *req, bool atomic)
-{
-	int err;
-
-	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-
-	err = skcipher_walk_skcipher(walk, req);
-
+	err = skcipher_walk_first(walk);
+out:
 	walk->flags &= atomic ? ~SKCIPHER_WALK_SLEEP : ~0;
 
 	return err;
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
From patchwork Sat Dec 21 09:10:33 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 06/29] crypto: skcipher - clean up initialization of skcipher_walk::flags
Date: Sat, 21 Dec 2024 01:10:33 -0800
Message-ID: <20241221091056.282098-7-ebiggers@kernel.org>

- Initialize SKCIPHER_WALK_SLEEP in a consistent way, and check for
  atomic=true at the same time as CRYPTO_TFM_REQ_MAY_SLEEP.  Technically
  atomic=true only needs to apply after the first step, but it is very
  rarely used, so optimize for the common case and check 'atomic'
  alongside CRYPTO_TFM_REQ_MAY_SLEEP; this is also more efficient.

- Initialize flags other than SKCIPHER_WALK_SLEEP to 0 rather than
  preserving them.  No caller actually initializes the flags, which
  makes it impossible to use their original values for anything.
  Indeed, that does not happen and all meaningful flags get overridden
  anyway.  It may have been thought that just clearing one flag would
  be faster than clearing all flags, but that's not the case, as the
  former is a read-write operation whereas the latter is just a write.

- Move the explicit clearing of SKCIPHER_WALK_SLOW, SKCIPHER_WALK_COPY,
  and SKCIPHER_WALK_DIFF into skcipher_walk_done(), since it is now
  only needed on non-first steps.
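The difference between the two initialization styles, in outline (an
illustrative sketch, not literal patch code):

	/* Old: read-modify-write, preserving (never-set) unrelated bits */
	walk->flags &= ~SKCIPHER_WALK_SLEEP;
	walk->flags |= (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		       SKCIPHER_WALK_SLEEP : 0;

	/* New: plain store; no caller pre-sets flags, so nothing is lost */
	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
		walk->flags = SKCIPHER_WALK_SLEEP;
	else
		walk->flags = 0;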
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 17f4bc79ca8b..e54d1ad46566 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -146,10 +146,12 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 	scatterwalk_done(&walk->out, 1, total);
 
 	if (total) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
+				 SKCIPHER_WALK_DIFF);
 		return skcipher_walk_next(walk);
 	}
 
 finish:
 	/* Short-circuit for the common/fast path. */
@@ -233,13 +235,10 @@ static int skcipher_next_fast(struct skcipher_walk *walk)
 
 static int skcipher_walk_next(struct skcipher_walk *walk)
 {
 	unsigned int bsize;
 	unsigned int n;
 
-	walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
-			 SKCIPHER_WALK_DIFF);
-
 	n = walk->total;
 	bsize = min(walk->stride, max(n, walk->blocksize));
 	n = scatterwalk_clamp(&walk->in, n);
 	n = scatterwalk_clamp(&walk->out, n);
 
@@ -309,55 +308,53 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-	int err = 0;
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
-		goto out;
+		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->flags &= ~SKCIPHER_WALK_SLEEP;
-	walk->flags |= req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
-		       SKCIPHER_WALK_SLEEP : 0;
-
 	walk->blocksize = crypto_skcipher_blocksize(tfm);
 	walk->ivsize = crypto_skcipher_ivsize(tfm);
 	walk->alignmask = crypto_skcipher_alignmask(tfm);
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
-	err = skcipher_walk_first(walk);
-out:
-	walk->flags &= atomic ? ~SKCIPHER_WALK_SLEEP : ~0;
-
-	return err;
+	return skcipher_walk_first(walk);
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	int err;
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
@@ -367,26 +364,16 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	if (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)
-		walk->flags |= SKCIPHER_WALK_SLEEP;
-	else
-		walk->flags &= ~SKCIPHER_WALK_SLEEP;
-
 	walk->blocksize = crypto_aead_blocksize(tfm);
 	walk->stride = crypto_aead_chunksize(tfm);
 	walk->ivsize = crypto_aead_ivsize(tfm);
 	walk->alignmask = crypto_aead_alignmask(tfm);
 
-	err = skcipher_walk_first(walk);
-
-	if (atomic)
-		walk->flags &= ~SKCIPHER_WALK_SLEEP;
-
-	return err;
+	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
 			       struct aead_request *req, bool atomic)
 {
From patchwork Sat Dec 21 09:10:34 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 07/29] crypto: skcipher - optimize initializing skcipher_walk fields
Date: Sat, 21 Dec 2024 01:10:34 -0800
Message-ID: <20241221091056.282098-8-ebiggers@kernel.org>

The helper functions like crypto_skcipher_blocksize() take in a pointer
to a tfm object, but they actually return properties of the algorithm.
As the Linux kernel is compiled with -fno-strict-aliasing, the compiler
has to assume that the writes to struct skcipher_walk could clobber the
tfm's pointer to its algorithm.  Thus it gets repeatedly reloaded in
the generated code.  Therefore, replace the use of these helper
functions with straightforward accesses to the struct fields.
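A sketch of the effect (illustrative, not from the patch):

	/*
	 * With -fno-strict-aliasing, each store through 'walk' may alias
	 * the tfm's algorithm pointer, so the compiler reloads it for
	 * every helper call:
	 */
	walk->blocksize = crypto_skcipher_blocksize(tfm); /* load alg */
	walk->ivsize = crypto_skcipher_ivsize(tfm);	  /* reload alg */

	/* Reading the algorithm pointer into a local once avoids that: */
	const struct skcipher_alg *alg = crypto_skcipher_alg(tfm);

	walk->blocksize = alg->base.cra_blocksize;
	walk->ivsize = alg->co.ivsize;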
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index e54d1ad46566..7ef2e4ddf07a 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -306,12 +306,12 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 }
 
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	const struct skcipher_alg *alg =
+		crypto_skcipher_alg(crypto_skcipher_reqtfm(req));
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
@@ -326,13 +326,13 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->blocksize = crypto_skcipher_blocksize(tfm);
-	walk->ivsize = crypto_skcipher_ivsize(tfm);
-	walk->alignmask = crypto_skcipher_alignmask(tfm);
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->ivsize = alg->co.ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
@@ -342,11 +342,11 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	const struct aead_alg *alg = crypto_aead_alg(crypto_aead_reqtfm(req));
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
 	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
@@ -364,14 +364,14 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	walk->blocksize = crypto_aead_blocksize(tfm);
-	walk->stride = crypto_aead_chunksize(tfm);
-	walk->ivsize = crypto_aead_ivsize(tfm);
-	walk->alignmask = crypto_aead_alignmask(tfm);
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->stride = alg->chunksize;
+	walk->ivsize = alg->ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
From patchwork Sat Dec 21 09:10:35 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 08/29] crypto: skcipher - call cond_resched() directly
Date: Sat, 21 Dec 2024 01:10:35 -0800
Message-ID: <20241221091056.282098-9-ebiggers@kernel.org>

In skcipher_walk_done(), instead of calling crypto_yield() which
requires a translation between flags, just call cond_resched()
directly.  This has the same effect.
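For reference, crypto_yield() amounts to a conditional cond_resched()
(paraphrased from the crypto internals; details may vary by kernel
version):

	static inline void crypto_yield(u32 flags)
	{
		if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
			cond_resched();
	}

so translating SKCIPHER_WALK_SLEEP into CRYPTO_TFM_REQ_MAY_SLEEP just
so crypto_yield() can test it again is a needless round trip.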
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 7ef2e4ddf07a..441e1d254d36 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -144,12 +144,12 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 	scatterwalk_advance(&walk->out, n);
 	scatterwalk_done(&walk->in, 0, total);
 	scatterwalk_done(&walk->out, 1, total);
 
 	if (total) {
-		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
-			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+		if (walk->flags & SKCIPHER_WALK_SLEEP)
+			cond_resched();
 		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
 				 SKCIPHER_WALK_DIFF);
 		return skcipher_walk_next(walk);
 	}
 

From patchwork Sat Dec 21 09:10:36 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 09/29] crypto: omap - switch from scatter_walk to plain offset
Date: Sat, 21 Dec 2024 01:10:36 -0800
Message-ID: <20241221091056.282098-10-ebiggers@kernel.org>

The omap driver was using struct scatter_walk, but only to maintain an
offset, rather than iterating through the virtual addresses of the
data contained in the scatterlist, which is what scatter_walk is
intended for.  Make it just use a plain offset instead.  This is
simpler and avoids depending on functions that are planned to be
removed.
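The resulting PIO pattern, condensed from the patch (error handling
and register details elided), keeps a byte offset into the current sg
entry and hops to the next entry only when the current one is
consumed:

	u32 *src = sg_virt(dd->in_sg) + dd->in_sg_offset;

	for (i = 0; i < AES_BLOCK_WORDS; i++) {
		omap_aes_write(dd, AES_REG_DATA_N(dd, i), *src);
		dd->in_sg_offset += 4;
		if (dd->in_sg_offset == dd->in_sg->length) {
			/* Current entry fully consumed: move to the next. */
			dd->in_sg = sg_next(dd->in_sg);
			if (dd->in_sg) {
				dd->in_sg_offset = 0;
				src = sg_virt(dd->in_sg);
			}
		} else {
			src++;
		}
	}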
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/omap-aes.c | 34 ++++++++++++++-------------------
 drivers/crypto/omap-aes.h |  6 ++----
 drivers/crypto/omap-des.c | 40 ++++++++++++++++-----------------------
 3 files changed, 32 insertions(+), 48 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index e27b84616743..551dd32a8db0 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -16,11 +16,10 @@
 #include
 #include
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 #include
@@ -270,13 +269,13 @@ static int omap_aes_crypt_dma(struct omap_aes_dev *dd,
 	struct dma_async_tx_descriptor *tx_in, *tx_out = NULL, *cb_desc;
 	struct dma_slave_config cfg;
 	int ret;
 
 	if (dd->pio_only) {
-		scatterwalk_start(&dd->in_walk, dd->in_sg);
+		dd->in_sg_offset = 0;
 		if (out_sg_len)
-			scatterwalk_start(&dd->out_walk, dd->out_sg);
+			dd->out_sg_offset = 0;
 
 		/* Enable DATAIN interrupt and let it take care of the rest */
 		omap_aes_write(dd, AES_REG_IRQ_ENABLE(dd), 0x2);
 		return 0;
@@ -869,25 +868,22 @@ static irqreturn_t omap_aes_irq(int irq, void *dev_id)
 
 	if (status & AES_REG_IRQ_DATA_IN) {
 		omap_aes_write(dd, AES_REG_IRQ_ENABLE(dd), 0x0);
 
 		BUG_ON(!dd->in_sg);
-		BUG_ON(_calc_walked(in) > dd->in_sg->length);
+		BUG_ON(dd->in_sg_offset > dd->in_sg->length);
 
-		src = sg_virt(dd->in_sg) + _calc_walked(in);
+		src = sg_virt(dd->in_sg) + dd->in_sg_offset;
 
 		for (i = 0; i < AES_BLOCK_WORDS; i++) {
 			omap_aes_write(dd, AES_REG_DATA_N(dd, i), *src);
-
-			scatterwalk_advance(&dd->in_walk, 4);
-			if (dd->in_sg->length == _calc_walked(in)) {
+			dd->in_sg_offset += 4;
+			if (dd->in_sg_offset == dd->in_sg->length) {
 				dd->in_sg = sg_next(dd->in_sg);
 				if (dd->in_sg) {
-					scatterwalk_start(&dd->in_walk,
-							  dd->in_sg);
-					src = sg_virt(dd->in_sg) +
-					      _calc_walked(in);
+					dd->in_sg_offset = 0;
+					src = sg_virt(dd->in_sg);
 				}
 			} else {
 				src++;
 			}
 		}
@@ -902,24 +898,22 @@ static irqreturn_t omap_aes_irq(int irq, void *dev_id)
 	} else if (status & AES_REG_IRQ_DATA_OUT) {
 		omap_aes_write(dd, AES_REG_IRQ_ENABLE(dd), 0x0);
 
 		BUG_ON(!dd->out_sg);
-		BUG_ON(_calc_walked(out) > dd->out_sg->length);
+		BUG_ON(dd->out_sg_offset > dd->out_sg->length);
 
-		dst = sg_virt(dd->out_sg) + _calc_walked(out);
+		dst = sg_virt(dd->out_sg) + dd->out_sg_offset;
 
 		for (i = 0; i < AES_BLOCK_WORDS; i++) {
 			*dst = omap_aes_read(dd, AES_REG_DATA_N(dd, i));
-			scatterwalk_advance(&dd->out_walk, 4);
-			if (dd->out_sg->length == _calc_walked(out)) {
+			dd->out_sg_offset += 4;
+			if (dd->out_sg_offset == dd->out_sg->length) {
 				dd->out_sg = sg_next(dd->out_sg);
 				if (dd->out_sg) {
-					scatterwalk_start(&dd->out_walk,
-							  dd->out_sg);
-					dst = sg_virt(dd->out_sg) +
-					      _calc_walked(out);
+					dd->out_sg_offset = 0;
+					dst = sg_virt(dd->out_sg);
 				}
 			} else {
 				dst++;
 			}
 		}

diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h
index 0f35c9164764..41d67780fd45 100644
--- a/drivers/crypto/omap-aes.h
+++ b/drivers/crypto/omap-aes.h
@@ -12,12 +12,10 @@
 #include
 
 #define DST_MAXBURST			4
 #define DMA_MIN				(DST_MAXBURST * sizeof(u32))
 
-#define _calc_walked(inout) (dd->inout##_walk.offset - dd->inout##_sg->offset)
-
 /*
  * OMAP TRM gives bitfields as start:end, where start is the higher bit
 * number.  For example 7:0
 */
 #define FLD_MASK(start, end)	(((1 << ((start) - (end) + 1)) - 1) << (end))
@@ -184,12 +182,12 @@ struct omap_aes_dev {
 	/* Buffers for copying for unaligned cases */
 	struct scatterlist		in_sgl[2];
 	struct scatterlist		out_sgl;
 	struct scatterlist		*orig_out;
 
-	struct scatter_walk		in_walk;
-	struct scatter_walk		out_walk;
+	unsigned int			in_sg_offset;
+	unsigned int			out_sg_offset;
 	struct dma_chan			*dma_lch_in;
 	struct dma_chan			*dma_lch_out;
 	int				in_sg_len;
 	int				out_sg_len;
 	int				pio_only;

diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
index 498cbd585ed1..a099460d5f21 100644
--- a/drivers/crypto/omap-des.c
+++ b/drivers/crypto/omap-des.c
@@ -17,11 +17,10 @@
 #endif
 
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 #include
@@ -38,12 +37,10 @@
 
 #define DST_MAXBURST			2
 
 #define DES_BLOCK_WORDS		(DES_BLOCK_SIZE >> 2)
 
-#define _calc_walked(inout) (dd->inout##_walk.offset - dd->inout##_sg->offset)
-
 #define DES_REG_KEY(dd, x)		((dd)->pdata->key_ofs - \
 					 ((x ^ 0x01) * 0x04))
 
 #define DES_REG_IV(dd, x)		((dd)->pdata->iv_ofs + ((x) * 0x04))
 
@@ -150,12 +147,12 @@ struct omap_des_dev {
 	/* Buffers for copying for unaligned cases */
 	struct scatterlist		in_sgl;
 	struct scatterlist		out_sgl;
 	struct scatterlist		*orig_out;
 
-	struct scatter_walk		in_walk;
-	struct scatter_walk		out_walk;
+	unsigned int			in_sg_offset;
+	unsigned int			out_sg_offset;
 	struct dma_chan			*dma_lch_in;
 	struct dma_chan			*dma_lch_out;
 	int				in_sg_len;
 	int				out_sg_len;
 	int				pio_only;
@@ -377,12 +374,12 @@ static int omap_des_crypt_dma(struct crypto_tfm *tfm,
 	struct dma_async_tx_descriptor *tx_in, *tx_out;
 	struct dma_slave_config cfg;
 	int ret;
 
 	if (dd->pio_only) {
-		scatterwalk_start(&dd->in_walk, dd->in_sg);
-		scatterwalk_start(&dd->out_walk, dd->out_sg);
+		dd->in_sg_offset = 0;
+		dd->out_sg_offset = 0;
 
 		/* Enable DATAIN interrupt and let it take care of the rest */
 		omap_des_write(dd, DES_REG_IRQ_ENABLE(dd), 0x2);
 		return 0;
@@ -834,25 +831,22 @@ static irqreturn_t omap_des_irq(int irq, void *dev_id)
 
 	if (status & DES_REG_IRQ_DATA_IN) {
 		omap_des_write(dd, DES_REG_IRQ_ENABLE(dd), 0x0);
 
 		BUG_ON(!dd->in_sg);
-		BUG_ON(_calc_walked(in) > dd->in_sg->length);
+		BUG_ON(dd->in_sg_offset > dd->in_sg->length);
 
-		src = sg_virt(dd->in_sg) + _calc_walked(in);
+		src = sg_virt(dd->in_sg) + dd->in_sg_offset;
 
 		for (i = 0; i < DES_BLOCK_WORDS; i++) {
 			omap_des_write(dd, DES_REG_DATA_N(dd, i), *src);
-
-			scatterwalk_advance(&dd->in_walk, 4);
-			if (dd->in_sg->length == _calc_walked(in)) {
+			dd->in_sg_offset += 4;
+			if (dd->in_sg_offset == dd->in_sg->length) {
 				dd->in_sg = sg_next(dd->in_sg);
 				if (dd->in_sg) {
-					scatterwalk_start(&dd->in_walk,
-							  dd->in_sg);
-					src = sg_virt(dd->in_sg) +
-					      _calc_walked(in);
+					dd->in_sg_offset = 0;
+					src = sg_virt(dd->in_sg);
 				}
 			} else {
 				src++;
 			}
 		}
@@ -867,24 +861,22 @@ static irqreturn_t omap_des_irq(int irq, void *dev_id)
 	} else if (status & DES_REG_IRQ_DATA_OUT) {
 		omap_des_write(dd, DES_REG_IRQ_ENABLE(dd), 0x0);
 
 		BUG_ON(!dd->out_sg);
-		BUG_ON(_calc_walked(out) > dd->out_sg->length);
+		BUG_ON(dd->out_sg_offset > dd->out_sg->length);
 
-		dst = sg_virt(dd->out_sg) + _calc_walked(out);
+		dst = sg_virt(dd->out_sg) + dd->out_sg_offset;
 
 		for (i = 0; i < DES_BLOCK_WORDS; i++) {
 			*dst = omap_des_read(dd, DES_REG_DATA_N(dd, i));
-			scatterwalk_advance(&dd->out_walk, 4);
-			if (dd->out_sg->length == _calc_walked(out)) {
+			dd->out_sg_offset += 4;
+			if (dd->out_sg_offset == dd->out_sg->length) {
 				dd->out_sg = sg_next(dd->out_sg);
 				if (dd->out_sg) {
-					scatterwalk_start(&dd->out_walk,
-							  dd->out_sg);
-					dst = sg_virt(dd->out_sg) +
-					      _calc_walked(out);
+					dd->out_sg_offset = 0;
+					dst = sg_virt(dd->out_sg);
 				}
 			} else {
 				dst++;
 			}
 		}

From patchwork Sat Dec 21 09:10:37 2024
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: Christophe Leroy, Danny Tsen, Michael Ellerman, Naveen N Rao,
    Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 10/29] crypto: powerpc/p10-aes-gcm - simplify handling of linear associated data
Date: Sat, 21 Dec 2024 01:10:37 -0800
Message-ID: <20241221091056.282098-11-ebiggers@kernel.org>

p10_aes_gcm_crypt() is abusing the scatter_walk API to get the virtual
address for the first source scatterlist element.  But this code is
only built for PPC64, which is a !HIGHMEM platform, and it can read
past a page boundary from the address returned by scatterwalk_map(),
which means it already assumes the address is from the kernel's direct
map.  Thus, just use sg_virt() instead to get the same result in a
simpler way.
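Why the two are interchangeable here (a sketch of the reasoning,
assuming !HIGHMEM):

	/*
	 * For a freshly started walk on the first sg entry:
	 *
	 *	scatterwalk_start(&w, sg);
	 *	p = scatterwalk_map(&w);  // kmap()s sg_page(sg), adds offset
	 *
	 * With !HIGHMEM every page lives in the kernel direct map, so the
	 * mapping step is just page_address(sg_page(sg)) + sg->offset,
	 * which is exactly what sg_virt(sg) computes.
	 */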
Cc: Christophe Leroy
Cc: Danny Tsen
Cc: Michael Ellerman
Cc: Naveen N Rao
Cc: Nicholas Piggin
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Eric Biggers
---
This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org.

 arch/powerpc/crypto/aes-gcm-p10-glue.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/crypto/aes-gcm-p10-glue.c b/arch/powerpc/crypto/aes-gcm-p10-glue.c
index f37b3d13fc53..2862c3cf8e41 100644
--- a/arch/powerpc/crypto/aes-gcm-p10-glue.c
+++ b/arch/powerpc/crypto/aes-gcm-p10-glue.c
@@ -212,11 +212,10 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
 	struct p10_aes_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
 	u8 databuf[sizeof(struct gcm_ctx) + PPC_ALIGN];
 	struct gcm_ctx *gctx = PTR_ALIGN((void *)databuf, PPC_ALIGN);
 	u8 hashbuf[sizeof(struct Hash_ctx) + PPC_ALIGN];
 	struct Hash_ctx *hash = PTR_ALIGN((void *)hashbuf, PPC_ALIGN);
-	struct scatter_walk assoc_sg_walk;
 	struct skcipher_walk walk;
 	u8 *assocmem = NULL;
 	u8 *assoc;
 	unsigned int cryptlen = req->cryptlen;
 	unsigned char ivbuf[AES_BLOCK_SIZE+PPC_ALIGN];
@@ -232,12 +231,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
 	memset(ivbuf, 0, sizeof(ivbuf));
 	memcpy(iv, riv, GCM_IV_SIZE);

 	/* Linearize assoc, if not already linear */
 	if (req->src->length >= assoclen && req->src->length) {
-		scatterwalk_start(&assoc_sg_walk, req->src);
-		assoc = scatterwalk_map(&assoc_sg_walk);
+		assoc = sg_virt(req->src); /* ppc64 is !HIGHMEM */
 	} else {
 		gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 			      GFP_KERNEL : GFP_ATOMIC;

 		/* assoc can be any length, so must be on heap */
@@ -251,13 +249,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
 	vsx_begin();
 	gcmp10_init(gctx, iv, (unsigned char *) &ctx->enc_key, hash,
 		    assoc, assoclen);
 	vsx_end();

-	if (!assocmem)
-		scatterwalk_unmap(assoc);
-	else
+	if (assocmem)
 		kfree(assocmem);

 	if (enc)
 		ret = skcipher_walk_aead_encrypt(&walk, req, false);
 	else

From patchwork Sat Dec 21 09:10:38 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 11/29] crypto: scatterwalk - move to next sg entry just in time
Date: Sat, 21 Dec 2024 01:10:38 -0800
Message-ID: <20241221091056.282098-12-ebiggers@kernel.org>
In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org>
References: <20241221091056.282098-1-ebiggers@kernel.org>

The scatterwalk_* functions are designed to advance to the next sg entry only when there is more data from the request to process. Compared to the alternative of advancing after each step if !sg_is_last(sg), this has the advantage that it doesn't cause problems if users accidentally don't terminate their scatterlist with the end marker (which is an easy mistake to make, and there are examples of this).

Currently, the advance to the next sg entry happens in scatterwalk_done(), which is called after each "step" of the walk. It requires the caller to pass in a boolean 'more' that indicates whether there is more data. This works when the caller immediately knows whether there is more data, though it adds some complexity. However in the case of scatterwalk_copychunks() it's not immediately known whether there is more data, so the call to scatterwalk_done() has to happen higher up the stack. This is error-prone, and indeed the needed call to scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is sometimes called multiple times in a row. This causes a zero-length step to get added in some cases, which is unexpected and seems to work only by accident.

This patch begins the switch to a less error-prone approach where the advance to the next sg entry happens just in time instead. For now, that means just doing the advance in scatterwalk_clamp() if it's needed there. Initially this is redundant, but it's needed to keep the tree in a working state as later patches change things to the final state.

Later patches will similarly move the dcache flushing logic out of scatterwalk_done() and then remove scatterwalk_done() entirely.
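As a sketch of what this enables for callers (illustrative only; process() is a stand-in for whatever the caller does with each chunk, not a real API), a walk loop no longer needs to special-case the zero-length step that used to occur at an sg entry boundary:

	/* Walking 'total' bytes of a scatterlist, after this patch: */
	scatterwalk_start(&walk, sg);
	while (total) {
		/* clamp() now advances to the next sg entry first if the
		 * current one is exhausted, so n is never 0 here. */
		unsigned int n = scatterwalk_clamp(&walk, total);
		u8 *vaddr = scatterwalk_map(&walk);

		process(vaddr, n);
		scatterwalk_unmap(vaddr);
		scatterwalk_advance(&walk, n);
		total -= n;
		scatterwalk_done(&walk, 0, total);
	}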
Signed-off-by: Eric Biggers
---
 include/crypto/scatterwalk.h | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 32fc4473175b..924efbaefe67 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -24,22 +24,30 @@ static inline void scatterwalk_crypto_chain(struct scatterlist *head,
 		sg_chain(head, num, sg);
 	else
 		sg_mark_end(head);
 }

+static inline void scatterwalk_start(struct scatter_walk *walk,
+				     struct scatterlist *sg)
+{
+	walk->sg = sg;
+	walk->offset = sg->offset;
+}
+
 static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
 {
 	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
 	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
 	return len_this_page > len ? len : len_this_page;
 }

 static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 					     unsigned int nbytes)
 {
-	unsigned int len_this_page = scatterwalk_pagelen(walk);
-	return nbytes > len_this_page ? len_this_page : nbytes;
+	if (walk->offset >= walk->sg->offset + walk->sg->length)
+		scatterwalk_start(walk, sg_next(walk->sg));
+	return min(nbytes, scatterwalk_pagelen(walk));
 }

 static inline void scatterwalk_advance(struct scatter_walk *walk,
 				       unsigned int nbytes)
 {
@@ -54,17 +62,10 @@ static inline struct page *scatterwalk_page(struct scatter_walk *walk)
 static inline void scatterwalk_unmap(void *vaddr)
 {
 	kunmap_local(vaddr);
 }

-static inline void scatterwalk_start(struct scatter_walk *walk,
-				     struct scatterlist *sg)
-{
-	walk->sg = sg;
-	walk->offset = sg->offset;
-}
-
 static inline void *scatterwalk_map(struct scatter_walk *walk)
 {
 	return kmap_local_page(scatterwalk_page(walk)) +
 	       offset_in_page(walk->offset);
 }

From patchwork Sat Dec 21 09:10:39 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 12/29] crypto: scatterwalk - add new functions for skipping data
Date: Sat, 21 Dec 2024 01:10:39 -0800
Message-ID: <20241221091056.282098-13-ebiggers@kernel.org>
In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org>
References: <20241221091056.282098-1-ebiggers@kernel.org>

Add scatterwalk_skip() to skip the given number of bytes in a scatter_walk. Previously support for skipping was provided through scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was confusing and less efficient.

Also add scatterwalk_start_at_pos() which starts a scatter_walk at the given position, equivalent to scatterwalk_start() + scatterwalk_skip(). This addresses another common need in a more streamlined way.

Later patches will convert various users to use these functions.

Signed-off-by: Eric Biggers
---
 crypto/scatterwalk.c         | 15 +++++++++++++++
 include/crypto/scatterwalk.h | 18 ++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 16f6ba896fb6..af436ad02e3f 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -13,10 +13,25 @@
 #include
 #include
 #include
 #include

+void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes)
+{
+	struct scatterlist *sg = walk->sg;
+
+	nbytes += walk->offset - sg->offset;
+
+	while (nbytes > sg->length) {
+		nbytes -= sg->length;
+		sg = sg_next(sg);
+	}
+	walk->sg = sg;
+	walk->offset = sg->offset + nbytes;
+}
+EXPORT_SYMBOL_GPL(scatterwalk_skip);
+
 static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
 {
 	void *src = out ? buf : sgdata;
 	void *dst = out ? sgdata : buf;

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 924efbaefe67..5c7765f601e0 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -31,10 +31,26 @@ static inline void scatterwalk_start(struct scatter_walk *walk,
 {
 	walk->sg = sg;
 	walk->offset = sg->offset;
 }

+/*
+ * This is equivalent to scatterwalk_start(walk, sg) followed by
+ * scatterwalk_skip(walk, pos).
+ */
+static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
+					    struct scatterlist *sg,
+					    unsigned int pos)
+{
+	while (pos > sg->length) {
+		pos -= sg->length;
+		sg = sg_next(sg);
+	}
+	walk->sg = sg;
+	walk->offset = sg->offset + pos;
+}
+
 static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
 {
 	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
 	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
 	return len_this_page > len ? len : len_this_page;
@@ -90,10 +106,12 @@ static inline void scatterwalk_done(struct scatter_walk *walk, int out,
 	if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
 	    !(walk->offset & (PAGE_SIZE - 1)))
 		scatterwalk_pagedone(walk, out, more);
 }

+void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
+
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);
 void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 			      unsigned int start, unsigned int nbytes, int out);

From patchwork Sat Dec 21 09:10:40 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 13/29] crypto: scatterwalk - add new functions for iterating through data
Date: Sat, 21 Dec 2024 01:10:40 -0800
Message-ID: <20241221091056.282098-14-ebiggers@kernel.org>
In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org>
References: <20241221091056.282098-1-ebiggers@kernel.org>

Add scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map().
Also add scatterwalk_done_src() and scatterwalk_done_dst() which consolidate scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done() or scatterwalk_pagedone(). A later patch will remove scatterwalk_done() and scatterwalk_pagedone(). The new code eliminates the error-prone 'more' parameter. Advancing to the next sg entry now only happens just-in-time in scatterwalk_next(). The new code also pairs the dcache flush more closely with the actual write, similar to memcpy_to_page(). Previously it was paired with advancing to the next page. This is currently causing bugs where the dcache flush is incorrectly being skipped, usually due to scatterwalk_copychunks() being called without a following scatterwalk_done(). The dcache flush may have been placed where it was in order to not call flush_dcache_page() redundantly when visiting a page more than once. However, that case is rare in practice, and most architectures either do not implement flush_dcache_page() anyway or implement it lazily where it just clears a page flag. Another limitation of the old code was that by the time the flush happened, there was no way to tell if more than one page needed to be flushed. That has been sufficient because the code goes page by page, but I would like to optimize that on !HIGHMEM platforms. The new code makes this possible, and a later patch will implement this optimization. Signed-off-by: Eric Biggers --- include/crypto/scatterwalk.h | 64 ++++++++++++++++++++++++++++++++---- 1 file changed, 58 insertions(+), 6 deletions(-) diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h index 5c7765f601e0..8108478d6fbf 100644 --- a/include/crypto/scatterwalk.h +++ b/include/crypto/scatterwalk.h @@ -62,16 +62,10 @@ static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk, if (walk->offset >= walk->sg->offset + walk->sg->length) scatterwalk_start(walk, sg_next(walk->sg)); return min(nbytes, scatterwalk_pagelen(walk)); } -static inline void scatterwalk_advance(struct scatter_walk *walk, - unsigned int nbytes) -{ - walk->offset += nbytes; -} - static inline struct page *scatterwalk_page(struct scatter_walk *walk) { return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT); } @@ -84,10 +78,28 @@ static inline void *scatterwalk_map(struct scatter_walk *walk) { return kmap_local_page(scatterwalk_page(walk)) + offset_in_page(walk->offset); } +/** + * scatterwalk_next() - Get the next data buffer in a scatterlist walk + * @walk: the scatter_walk + * @total: the total number of bytes remaining, > 0 + * @nbytes_ret: (out) the next number of bytes available, <= @total + * + * Return: A virtual address for the next segment of data from the scatterlist. + * The caller must call scatterwalk_done_src() or scatterwalk_done_dst() + * when it is done using this virtual address. 
+ */
+static inline void *scatterwalk_next(struct scatter_walk *walk,
+				     unsigned int total,
+				     unsigned int *nbytes_ret)
+{
+	*nbytes_ret = scatterwalk_clamp(walk, total);
+	return scatterwalk_map(walk);
+}
+
 static inline void scatterwalk_pagedone(struct scatter_walk *walk, int out,
 					unsigned int more)
 {
 	if (out) {
 		struct page *page;
@@ -106,10 +118,50 @@ static inline void scatterwalk_done(struct scatter_walk *walk, int out,
 	if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
 	    !(walk->offset & (PAGE_SIZE - 1)))
 		scatterwalk_pagedone(walk, out, more);
 }

+static inline void scatterwalk_advance(struct scatter_walk *walk,
+				       unsigned int nbytes)
+{
+	walk->offset += nbytes;
+}
+
+/**
+ * scatterwalk_done_src() - Finish one step of a walk of source scatterlist
+ * @walk: the scatter_walk
+ * @vaddr: the address returned by scatterwalk_next()
+ * @nbytes: the number of bytes processed this step, less than or equal to the
+ *	    number of bytes that scatterwalk_next() returned.
+ *
+ * Use this if the @vaddr was not written to, i.e. it is source data.
+ */
+static inline void scatterwalk_done_src(struct scatter_walk *walk,
+					const void *vaddr, unsigned int nbytes)
+{
+	scatterwalk_unmap((void *)vaddr);
+	scatterwalk_advance(walk, nbytes);
+}
+
+/**
+ * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist
+ * @walk: the scatter_walk
+ * @vaddr: the address returned by scatterwalk_next()
+ * @nbytes: the number of bytes processed this step, less than or equal to the
+ *	    number of bytes that scatterwalk_next() returned.
+ *
+ * Use this if the @vaddr may have been written to, i.e. it is destination data.
+ */
+static inline void scatterwalk_done_dst(struct scatter_walk *walk,
+					void *vaddr, unsigned int nbytes)
+{
+	scatterwalk_unmap(vaddr);
+	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
+		flush_dcache_page(scatterwalk_page(walk));
+	scatterwalk_advance(walk, nbytes);
+}
+
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);

 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);

From patchwork Sat Dec 21 09:10:41 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 14/29] crypto: scatterwalk - add new functions for copying data
Date: Sat, 21 Dec 2024 01:10:41 -0800
Message-ID: <20241221091056.282098-15-ebiggers@kernel.org>
In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org>
References: <20241221091056.282098-1-ebiggers@kernel.org>

Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1 respectively. They follow the same argument order as memcpy_from_page() and memcpy_to_page() from <linux/highmem.h>. Note that in the case of memcpy_from_sglist(), this also happens to be the same argument order that scatterwalk_map_and_copy() uses.

The new code is also faster, mainly because it builds the scatter_walk directly without creating a temporary scatterlist. E.g., a 20% performance improvement is seen for copying the AES-GCM auth tag.

Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist() and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but there are a lot of them so they aren't all being updated right away.

Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk() which are similar but operate on a scatter_walk instead of a scatterlist. These will replace scatterwalk_copychunks() with the 'out' argument 0 and 1 respectively. Their behavior differs slightly from scatterwalk_copychunks() in that they automatically take care of flushing the dcache when needed, making them easier to use.

scatterwalk_copychunks() itself is left unchanged for now. It will be removed after its callers are updated to use other functions instead.
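For example, copying an AEAD authentication tag now reads as follows (a sketch; cryptlen and the 16-byte tag size are assumed from the surrounding request handling):

	u8 tag[16];

	/* Read the tag that follows the AAD and ciphertext in req->src: */
	memcpy_from_sglist(tag, req->src, req->assoclen + cryptlen, sizeof(tag));

	/* Or write a computed tag at the same position in req->dst: */
	memcpy_to_sglist(req->dst, req->assoclen + cryptlen, tag, sizeof(tag));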
Signed-off-by: Eric Biggers --- crypto/scatterwalk.c | 59 ++++++++++++++++++++++++++++++------ include/crypto/scatterwalk.h | 24 +++++++++++++-- 2 files changed, 72 insertions(+), 11 deletions(-) diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c index af436ad02e3f..2e7a532152d6 100644 --- a/crypto/scatterwalk.c +++ b/crypto/scatterwalk.c @@ -65,26 +65,67 @@ void scatterwalk_copychunks(void *buf, struct scatter_walk *walk, scatterwalk_pagedone(walk, out & 1, 1); } } EXPORT_SYMBOL_GPL(scatterwalk_copychunks); -void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg, - unsigned int start, unsigned int nbytes, int out) +inline void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk, + unsigned int nbytes) +{ + do { + const void *src_addr; + unsigned int to_copy; + + src_addr = scatterwalk_next(walk, nbytes, &to_copy); + memcpy(buf, src_addr, to_copy); + scatterwalk_done_src(walk, src_addr, to_copy); + buf += to_copy; + nbytes -= to_copy; + } while (nbytes); +} +EXPORT_SYMBOL_GPL(memcpy_from_scatterwalk); + +inline void memcpy_to_scatterwalk(struct scatter_walk *walk, const void *buf, + unsigned int nbytes) +{ + do { + void *dst_addr; + unsigned int to_copy; + + dst_addr = scatterwalk_next(walk, nbytes, &to_copy); + memcpy(dst_addr, buf, to_copy); + scatterwalk_done_dst(walk, dst_addr, to_copy); + buf += to_copy; + nbytes -= to_copy; + } while (nbytes); +} +EXPORT_SYMBOL_GPL(memcpy_to_scatterwalk); + +void memcpy_from_sglist(void *buf, struct scatterlist *sg, + unsigned int start, unsigned int nbytes) { struct scatter_walk walk; - struct scatterlist tmp[2]; - if (!nbytes) + if (unlikely(nbytes == 0)) /* in case sg == NULL */ return; - sg = scatterwalk_ffwd(tmp, sg, start); + scatterwalk_start_at_pos(&walk, sg, start); + memcpy_from_scatterwalk(buf, &walk, nbytes); +} +EXPORT_SYMBOL_GPL(memcpy_from_sglist); + +void memcpy_to_sglist(struct scatterlist *sg, unsigned int start, + const void *buf, unsigned int nbytes) +{ + struct scatter_walk walk; + + if (unlikely(nbytes == 0)) /* in case sg == NULL */ + return; - scatterwalk_start(&walk, sg); - scatterwalk_copychunks(buf, &walk, nbytes, out); - scatterwalk_done(&walk, out, 0); + scatterwalk_start_at_pos(&walk, sg, start); + memcpy_to_scatterwalk(&walk, buf, nbytes); } -EXPORT_SYMBOL_GPL(scatterwalk_map_and_copy); +EXPORT_SYMBOL_GPL(memcpy_to_sglist); struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2], struct scatterlist *src, unsigned int len) { diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h index 8108478d6fbf..5e12c07be89b 100644 --- a/include/crypto/scatterwalk.h +++ b/include/crypto/scatterwalk.h @@ -163,12 +163,32 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk, void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes); void scatterwalk_copychunks(void *buf, struct scatter_walk *walk, size_t nbytes, int out); -void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg, - unsigned int start, unsigned int nbytes, int out); +void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk, + unsigned int nbytes); + +void memcpy_to_scatterwalk(struct scatter_walk *walk, const void *buf, + unsigned int nbytes); + +void memcpy_from_sglist(void *buf, struct scatterlist *sg, + unsigned int start, unsigned int nbytes); + +void memcpy_to_sglist(struct scatterlist *sg, unsigned int start, + const void *buf, unsigned int nbytes); + +/* In new code, please use memcpy_{from,to}_sglist() directly instead. 
*/ +static inline void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg, + unsigned int start, + unsigned int nbytes, int out) +{ + if (out) + memcpy_to_sglist(sg, start, buf, nbytes); + else + memcpy_from_sglist(buf, sg, start, nbytes); +} struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2], struct scatterlist *src, unsigned int len); From patchwork Sat Dec 21 09:10:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917728 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ED7021F2361 for ; Sat, 21 Dec 2024 09:11:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; cv=none; b=Ydngat96nN5RwbofFjProG7S6ZEhnRs/cEf14q8vjbQ0X+Uw7zy006P1XxIgQ8ES5crYBHhViu1Iag+HyFYoSXfjQEnFyamDr4yP8DH5yGx3BxJN7qyU7Z/5VujnI8c1wsX4LGdKsbeSsBYfpNNagMqQsK9qI6Z4GK7HpMZ5TeQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; c=relaxed/simple; bh=gTqiJ83GC+y9ntGuuGLBowL4XoaSaAg3tom8EPVefuE=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=RmwJyuMtdjvMoR1ph75EscdoNM7HUsP4HlrFnjSkVwh5iAFAd2ULLiSG1RjHI5InARddFXSiUidDT44UmGzlbNOnHktGo2W4ktjpS0/DFYkwRHqTXdcGf8ocPy672kWSga9AZRgR9UGMrb+AwfFGlm7/CxWtFuF0m2iHMkgCw5g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Y8Id6Zul; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Y8Id6Zul" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C603AC4CEDD for ; Sat, 21 Dec 2024 09:11:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772293; bh=gTqiJ83GC+y9ntGuuGLBowL4XoaSaAg3tom8EPVefuE=; h=From:To:Subject:Date:In-Reply-To:References:From; b=Y8Id6Zulh0YG6d8I5noG2Nw/d2ppYHV4Qm8IoRNMsNfL0jFy7+SeZdFA+/cBC//Ab E2oanVBd6Ame730CCC6/Py8eDgNy2hLKPKAAupynslHTQGpAO+XmuUb9rbIFwK8Q7r aXLV2mZFZK0nOD2SOnCPpmvPlIgPLW3VpjKvd2DYmYELlpySs/OozpKhFcEwb/zYtW 9U3jF/p8rtMwF2H7BIkTvGeAJRkZmcYarxHhy0gRjsB0uiEk6okavzaXUd3Nim423T Fv0fJ2kDoGTAedZzj2CTcg5o/S3Ro881PD1cZ+sohw2JhDpVg3M+cruAB0rYeze9D4 lqcUbLRpEKJgA== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 15/29] crypto: skcipher - use scatterwalk_start_at_pos() Date: Sat, 21 Dec 2024 01:10:42 -0800 Message-ID: <20241221091056.282098-16-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2), and scatterwalk_done(). This is simpler and faster. 
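For instance (illustrative values), if req->src is built from two sg entries of 16 and 64 bytes and req->assoclen == 20:

	scatterwalk_start_at_pos(&walk->in, req->src, 20);
	/* walk->in.sg now points to the 64-byte entry, and
	 * walk->in.offset == walk->in.sg->offset + 4, i.e. the walk
	 * begins 4 bytes into the second entry, just past the AAD. */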
Signed-off-by: Eric Biggers --- crypto/skcipher.c | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 441e1d254d36..7abafe385fd5 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c @@ -355,18 +355,12 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk, walk->flags = 0; if (unlikely(!walk->total)) return 0; - scatterwalk_start(&walk->in, req->src); - scatterwalk_start(&walk->out, req->dst); - - scatterwalk_copychunks(NULL, &walk->in, req->assoclen, 2); - scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2); - - scatterwalk_done(&walk->in, 0, walk->total); - scatterwalk_done(&walk->out, 0, walk->total); + scatterwalk_start_at_pos(&walk->in, req->src, req->assoclen); + scatterwalk_start_at_pos(&walk->out, req->dst, req->assoclen); walk->blocksize = alg->base.cra_blocksize; walk->stride = alg->chunksize; walk->ivsize = alg->ivsize; walk->alignmask = alg->base.cra_alignmask; From patchwork Sat Dec 21 09:10:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917729 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 27C931F2369 for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; cv=none; b=OneiNwUKSo+tn7D60vNywp+vVSoPmmsdhrcptr9HxC7GjNYczhiH2A9eGfscOhGHJVS7Pndny+Q/Z+Pnpy4GJIyskQvk7e0pnQaWsEr0bkYn6oTi2T9sTcR6B2CMCFQ+UGPbxyLCAprhVEtxS5IYbk56mgDB3OO4e+1npqP84AU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; c=relaxed/simple; bh=ouuUySUoLhMrrlBcXOe8B+9HrJzjwC7w9DOOQhZHjCM=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YxHxw61zpB/Z9ncY+95TEKqT7FhX4MgqrXjecKyxBaUppb5LOtiwSS0ACLQh7vxKyWe4DrNrOFeP+b3JlfygnfJ1YZdhQGBNsNKAiXwEs5rm32JhApZiwhuT9k3emqOgKsehGeDavVVbPChiUDYoYxC/ZNnVt734fKbfV8DwffQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qV/4wEpA; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qV/4wEpA" Received: by smtp.kernel.org (Postfix) with ESMTPSA id F3816C4CED4 for ; Sat, 21 Dec 2024 09:11:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772294; bh=ouuUySUoLhMrrlBcXOe8B+9HrJzjwC7w9DOOQhZHjCM=; h=From:To:Subject:Date:In-Reply-To:References:From; b=qV/4wEpA+ibhPg11jbI9o4q2nunPZbAcT+wP9GN67Jq1/0x61BjIij5c95CNCjI7w 5rXZ+oycnkyiR4OFUaNAl7OlBOuoSZREQnMIpAQxJ5YgeLGQ41/pLu2+CEiH4ONZ7I ZNdYCHD8o4vIGi6zxUA4fYUxuaI0InDPptpqkv7SsVBtL8VEYwsoGQNSaIDXdCSLRc DcZkUl9so4ugjbrtdP1K30tpkUpC4VYHK1KIAgBl3nfMCMB+5GvuNH0iTyVFYEZfrQ gsjg4ch3NnYHzVy2ac5TJyDDhtuebmOpYLFWvFOMuK8xc4jTJPWhAXQhdEj6RTlV/B zpRfxB8wNatxg== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 16/29] crypto: aegis - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:43 -0800 Message-ID: <20241221091056.282098-17-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: 
<20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(), and use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Signed-off-by: Eric Biggers --- crypto/aegis128-core.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/crypto/aegis128-core.c b/crypto/aegis128-core.c index 6cbff298722b..15d64d836356 100644 --- a/crypto/aegis128-core.c +++ b/crypto/aegis128-core.c @@ -282,14 +282,14 @@ static void crypto_aegis128_process_ad(struct aegis_state *state, union aegis_block buf; unsigned int pos = 0; scatterwalk_start(&walk, sg_src); while (assoclen != 0) { - unsigned int size = scatterwalk_clamp(&walk, assoclen); + unsigned int size; + const u8 *mapped = scatterwalk_next(&walk, assoclen, &size); unsigned int left = size; - void *mapped = scatterwalk_map(&walk); - const u8 *src = (const u8 *)mapped; + const u8 *src = mapped; if (pos + size >= AEGIS_BLOCK_SIZE) { if (pos > 0) { unsigned int fill = AEGIS_BLOCK_SIZE - pos; memcpy(buf.bytes + pos, src, fill); @@ -306,13 +306,11 @@ static void crypto_aegis128_process_ad(struct aegis_state *state, memcpy(buf.bytes + pos, src, left); pos += left; assoclen -= size; - scatterwalk_unmap(mapped); - scatterwalk_advance(&walk, size); - scatterwalk_done(&walk, 0, assoclen); + scatterwalk_done_src(&walk, mapped, size); } if (pos > 0) { memset(buf.bytes + pos, 0, AEGIS_BLOCK_SIZE - pos); crypto_aegis128_update_a(state, &buf, do_simd); From patchwork Sat Dec 21 09:10:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917730 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 562921F2376 for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; cv=none; b=EwMYW8xCUteYhfxh+zw9QSsm4tQNokVgckvk2JgKQpJlygM6/v2XJp/hfIa6M+IfnD7xZthlJ33nJN/zqzDelyKyFBj6NPhPcES47pY464HlmsIsiQ3YwWm+HABjMqQ9toprtA5jvGrvrx6sRtKWXKzLhZxNASPC/PROoP2rM/8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; c=relaxed/simple; bh=flgH9kGUEUUdU5GGIoLcecbOKpZ/9P71he8YGiVnKlw=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kfIfrHZIXy/us/2pYjNQwVVJ32jAzmq7hIzzS2Low1s0IlB0CbsACA2lXx6lDjB3f+Mj1xnmfVFKwZ3k4E4RDNpyTVLIpqlZle6hAykalKs9JzOtrq0lGGlnXA54jUCg8YGncBo9jTV7d1OTQfdGgJ+78gnmlAPKRxSpjQETUt4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=t8fL2qOK; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="t8fL2qOK" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2BF4FC4CED6 for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=kernel.org; s=k20201202; t=1734772294; bh=flgH9kGUEUUdU5GGIoLcecbOKpZ/9P71he8YGiVnKlw=; h=From:To:Subject:Date:In-Reply-To:References:From; b=t8fL2qOKGbTnhMv9a04sUoTFR28WY+/BLHluuRKR37EYbGg53O2FD1WfWe3KQLxZc 7Dwvc0LB9ltuJBs43LTlmVOGagHJy9mTid+EqiR3CFecyFv/6e/h/1EPaa7RqGcU4J WMqZi5whEsL14ppW0zd2M4KWbaOZh4qYKIqb5KbWphrL+IpNhiW/+V3nURknPPf1gu llSv0csshe+x4l0Y8n4fO9xHdtQmGeb+9dQA2IgrePYPw1Z2PBLo+veq/ZBbIce3SN hdf/161PNFpzaYyO2M4uFzC9ciCB8ZtRCIhuNmVDcQcTEOzpd1UiFmitP1em1BiTra 5lljalwi2kAVQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 17/29] crypto: arm/ghash - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:44 -0800 Message-ID: <20241221091056.282098-18-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(), and use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Remove unnecessary code that seemed to be intended to advance to the next sg entry, which is already handled by the scatterwalk functions. Signed-off-by: Eric Biggers --- arch/arm/crypto/ghash-ce-glue.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-) diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c index 3af997082534..9613ffed84f9 100644 --- a/arch/arm/crypto/ghash-ce-glue.c +++ b/arch/arm/crypto/ghash-ce-glue.c @@ -457,30 +457,23 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len) int buf_count = 0; scatterwalk_start(&walk, req->src); do { - u32 n = scatterwalk_clamp(&walk, len); - u8 *p; + unsigned int n; + const u8 *p; - if (!n) { - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, len); - } - - p = scatterwalk_map(&walk); + p = scatterwalk_next(&walk, len, &n); gcm_update_mac(dg, p, n, buf, &buf_count, ctx); - scatterwalk_unmap(p); + scatterwalk_done_src(&walk, p, n); if (unlikely(len / SZ_4K > (len - n) / SZ_4K)) { kernel_neon_end(); kernel_neon_begin(); } len -= n; - scatterwalk_advance(&walk, n); - scatterwalk_done(&walk, 0, len); } while (len); if (buf_count) { memset(&buf[buf_count], 0, GHASH_BLOCK_SIZE - buf_count); pmull_ghash_update_p64(1, dg, buf, ctx->h, NULL); From patchwork Sat Dec 21 09:10:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917731 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7EBE51F2379 for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; cv=none; b=X/4LSy++BGesHsgAfJx0RrQWHkpmnecsS0VI9EmRMS8MfKH5z/8xDifv2/Zo2H6Apo9JIzzdRhC1l1d0b3XdwsWJqUG5iCtKQWVBXP4u3uCPankSHi7CvwQ5r++84MgHf6sKKUviO58Ct8rV7/1HC34DQq7mYfH5TjUNYJ3zohE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; c=relaxed/simple; 
bh=mSW8mWLKgO+z9pqafkyRDJs4ecTLipklsIRH/V4AmPg=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=doLJgFhTqSieiMVIeC6GEB9PNZKF7+zomUprUufXZ+YPvbKPEa0B1cW4dgl7W2MdJ4MWyg4n+WPsKoucIs78bN+FfYl+DDpYEh3HMwI8j4091vT8arc/RLitU8mGjRCNaxPxsp3cr52fshfA1Qone6G94QyDlW5bqtUME0iUH6k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=FqIHPQpO; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="FqIHPQpO" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 59390C4CED7 for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772294; bh=mSW8mWLKgO+z9pqafkyRDJs4ecTLipklsIRH/V4AmPg=; h=From:To:Subject:Date:In-Reply-To:References:From; b=FqIHPQpOXLPHJLxvBjN1aw3o4x+3607L406ANER6Rb1YI3jotUQBzQ1i3YWpJ0Zk0 gqhCdEET+cM+WL2HcxiZvu2+Y1UDxF141qAJMCEfhOlXJJ7kqnoNCz2C3hZQ3Fp5Ax G8B+uvtvUMVqu6itRVEzLrQHg0C/IqoJKoCRPc8XbcmhHW2w33NQg853MvysydSIfz BhMimxGfYhwlbg4wzqAHyl68VaUEHgo2X/nJ00yEDVqAxtGHZLarmdh0nGlNmulIGY Om90T8Lb/jpjQEPIDzIdj7CiJna6vGTmG3OAbzhNajCL+Pj/NVeePk5Bly6qmdEOGW 37yK++XQJEO/g== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 18/29] crypto: arm64 - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:45 -0800 Message-ID: <20241221091056.282098-19-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(), and use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Remove unnecessary code that seemed to be intended to advance to the next sg entry, which is already handled by the scatterwalk functions. Adjust variable naming slightly to keep things consistent. 
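The resulting pattern, common to all of these conversions, is roughly the following (a sketch; update_mac() is a hypothetical stand-in for the per-algorithm hash/MAC update):

	scatterwalk_start(&walk, req->src);
	do {
		unsigned int n;
		const u8 *p;

		p = scatterwalk_next(&walk, len, &n);
		update_mac(p, n);	/* stand-in for the real update */
		scatterwalk_done_src(&walk, p, n);
		len -= n;
	} while (len);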
Signed-off-by: Eric Biggers --- arch/arm64/crypto/aes-ce-ccm-glue.c | 17 ++++------------ arch/arm64/crypto/ghash-ce-glue.c | 16 ++++----------- arch/arm64/crypto/sm4-ce-ccm-glue.c | 27 ++++++++++--------------- arch/arm64/crypto/sm4-ce-gcm-glue.c | 31 ++++++++++++----------------- 4 files changed, 32 insertions(+), 59 deletions(-) diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c index a2b5d6f20f4d..1c29546983bf 100644 --- a/arch/arm64/crypto/aes-ce-ccm-glue.c +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c @@ -154,27 +154,18 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) macp = ce_aes_ccm_auth_data(mac, (u8 *)<ag, ltag.len, macp, ctx->key_enc, num_rounds(ctx)); scatterwalk_start(&walk, req->src); do { - u32 n = scatterwalk_clamp(&walk, len); - u8 *p; - - if (!n) { - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, len); - } - p = scatterwalk_map(&walk); + unsigned int n; + const u8 *p; + p = scatterwalk_next(&walk, len, &n); macp = ce_aes_ccm_auth_data(mac, p, n, macp, ctx->key_enc, num_rounds(ctx)); - + scatterwalk_done_src(&walk, p, n); len -= n; - - scatterwalk_unmap(p); - scatterwalk_advance(&walk, n); - scatterwalk_done(&walk, 0, len); } while (len); } static int ccm_encrypt(struct aead_request *req) { diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c index da7b7ec1a664..69d4fb78c30d 100644 --- a/arch/arm64/crypto/ghash-ce-glue.c +++ b/arch/arm64/crypto/ghash-ce-glue.c @@ -306,25 +306,17 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len) int buf_count = 0; scatterwalk_start(&walk, req->src); do { - u32 n = scatterwalk_clamp(&walk, len); - u8 *p; - - if (!n) { - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, len); - } - p = scatterwalk_map(&walk); + unsigned int n; + const u8 *p; + p = scatterwalk_next(&walk, len, &n); gcm_update_mac(dg, p, n, buf, &buf_count, ctx); + scatterwalk_done_src(&walk, p, n); len -= n; - - scatterwalk_unmap(p); - scatterwalk_advance(&walk, n); - scatterwalk_done(&walk, 0, len); } while (len); if (buf_count) { memset(&buf[buf_count], 0, GHASH_BLOCK_SIZE - buf_count); ghash_do_simd_update(1, dg, buf, &ctx->ghash_key, NULL, diff --git a/arch/arm64/crypto/sm4-ce-ccm-glue.c b/arch/arm64/crypto/sm4-ce-ccm-glue.c index 5e7e17bbec81..119f86eb7cc9 100644 --- a/arch/arm64/crypto/sm4-ce-ccm-glue.c +++ b/arch/arm64/crypto/sm4-ce-ccm-glue.c @@ -110,21 +110,16 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) crypto_xor(mac, (const u8 *)&aadlen, len); scatterwalk_start(&walk, req->src); do { - u32 n = scatterwalk_clamp(&walk, assoclen); - u8 *p, *ptr; + unsigned int n, orig_n; + const u8 *p, *orig_p; - if (!n) { - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, assoclen); - } - - p = ptr = scatterwalk_map(&walk); - assoclen -= n; - scatterwalk_advance(&walk, n); + orig_p = scatterwalk_next(&walk, assoclen, &orig_n); + p = orig_p; + n = orig_n; while (n > 0) { unsigned int l, nblocks; if (len == SM4_BLOCK_SIZE) { @@ -134,30 +129,30 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) len = 0; } else { nblocks = n / SM4_BLOCK_SIZE; sm4_ce_cbcmac_update(ctx->rkey_enc, - mac, ptr, nblocks); + mac, p, nblocks); - ptr += nblocks * SM4_BLOCK_SIZE; + p += nblocks * SM4_BLOCK_SIZE; n %= SM4_BLOCK_SIZE; continue; } } l = min(n, SM4_BLOCK_SIZE - len); if (l) { - crypto_xor(mac + len, ptr, l); + crypto_xor(mac + len, p, l); len += l; - 
ptr += l; + p += l; n -= l; } } - scatterwalk_unmap(p); - scatterwalk_done(&walk, 0, assoclen); + scatterwalk_done_src(&walk, orig_p, orig_n); + assoclen -= orig_n; } while (assoclen); } static int ccm_crypt(struct aead_request *req, struct skcipher_walk *walk, u32 *rkey_enc, u8 mac[], diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c index 73bfb6972d3a..2e27d7752d4f 100644 --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c @@ -80,53 +80,48 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[]) unsigned int buflen = 0; scatterwalk_start(&walk, req->src); do { - u32 n = scatterwalk_clamp(&walk, assoclen); - u8 *p, *ptr; + unsigned int n, orig_n; + const u8 *p, *orig_p; - if (!n) { - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, assoclen); - } - - p = ptr = scatterwalk_map(&walk); - assoclen -= n; - scatterwalk_advance(&walk, n); + orig_p = scatterwalk_next(&walk, assoclen, &orig_n); + p = orig_p; + n = orig_n; if (n + buflen < GHASH_BLOCK_SIZE) { - memcpy(&buffer[buflen], ptr, n); + memcpy(&buffer[buflen], p, n); buflen += n; } else { unsigned int nblocks; if (buflen) { unsigned int l = GHASH_BLOCK_SIZE - buflen; - memcpy(&buffer[buflen], ptr, l); - ptr += l; + memcpy(&buffer[buflen], p, l); + p += l; n -= l; pmull_ghash_update(ctx->ghash_table, ghash, buffer, 1); } nblocks = n / GHASH_BLOCK_SIZE; if (nblocks) { pmull_ghash_update(ctx->ghash_table, ghash, - ptr, nblocks); - ptr += nblocks * GHASH_BLOCK_SIZE; + p, nblocks); + p += nblocks * GHASH_BLOCK_SIZE; } buflen = n % GHASH_BLOCK_SIZE; if (buflen) - memcpy(&buffer[0], ptr, buflen); + memcpy(&buffer[0], p, buflen); } - scatterwalk_unmap(p); - scatterwalk_done(&walk, 0, assoclen); + scatterwalk_done_src(&walk, orig_p, orig_n); + assoclen -= orig_n; } while (assoclen); /* padding with '0' */ if (buflen) { memset(&buffer[buflen], 0, GHASH_BLOCK_SIZE - buflen); From patchwork Sat Dec 21 09:10:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917732 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE25D1F237C for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; cv=none; b=SAzqcCrKjdIKFwMN0bG5PHnihIUUjFsPu1KnkoeBdH7gaJ6+vd2HAYRqIKbv+zYV0kei3tp5hekmJ+IfB8P6A6RRNss/t0ijL2pQZqDkfx8oZo2tmLErRdDyS3TD7gdfDXomySYJrcjUsNEbDT/XQH7NbT8x5oY7mlwp4FvlrpM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772294; c=relaxed/simple; bh=Tid3uWR33zstKwWKrQjCdNjEr2JkFhqZE4IB1spq8hc=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dNK+t+3UMxo6nk4yiyYusprK/6vgaMouOge8CYE0KMaq7Dhab74n/xOSacmqoi7ea7nzKCaWL0Nxj11fizIlNgcBmtIadhcPOiCLW2b2KmMymuv1HFZJZ8RxSd6k2SFw+PfSYeuVfocsAk0M0NkIryVKho1OdbTxMJfLTko1UW0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=FkzPDXJx; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org 
header.i=@kernel.org header.b="FkzPDXJx" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 85D8BC4CEDD for ; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772294; bh=Tid3uWR33zstKwWKrQjCdNjEr2JkFhqZE4IB1spq8hc=; h=From:To:Subject:Date:In-Reply-To:References:From; b=FkzPDXJxo7GeSpcDcuxd3ohOIMj/pMgsrmdXVXtXXB9wP1P/bnA7sltQHWGUNJ+Ov lJBXL83cSvLa3P9L9CtLwPWv00Mj4s8e6ldBG3LgIzNbQnQMxU1qmk+gc38W3xpnYP 8eNbR8FTVZ4ZKGVeoHYWD0hx8lM1YmlIuVYus/FbQkenhUdNl68RNd/L1IJg6aU36s YfQtzT1HfG0KSFmcKIW08tE5PXumDaeGeZf9AnpeRjuWyIheAFIeSFAQvPqF9hsaB+ yUZD9M/1lQCyD5SS6/jD3y8dvWXmYBl0+9WxAaD7azqhxcTrnJwaL+0IJlR4p3OxxD w2QQYZXHFdF4A== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 19/29] crypto: keywrap - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:46 -0800 Message-ID: <20241221091056.282098-20-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Replace calls to the deprecated function scatterwalk_copychunks() with memcpy_{from,to}_scatterwalk(), or just memcpy_{from,to}_sglist(). Since scatterwalk_copychunks() was incorrectly being called without being followed by scatterwalk_done(), this also fixes a bug where the dcache of the destination page(s) was not being flushed on architectures that need that. Signed-off-by: Eric Biggers --- crypto/keywrap.c | 48 ++++++------------------------------------------ 1 file changed, 6 insertions(+), 42 deletions(-) diff --git a/crypto/keywrap.c b/crypto/keywrap.c index 5ec4f94d46bd..700b7b79a93d 100644 --- a/crypto/keywrap.c +++ b/crypto/keywrap.c @@ -92,37 +92,10 @@ struct crypto_kw_block { #define SEMIBSIZE 8 __be64 A; __be64 R; }; -/* - * Fast forward the SGL to the "end" length minus SEMIBSIZE. - * The start in the SGL defined by the fast-forward is returned with - * the walk variable - */ -static void crypto_kw_scatterlist_ff(struct scatter_walk *walk, - struct scatterlist *sg, - unsigned int end) -{ - unsigned int skip = 0; - - /* The caller should only operate on full SEMIBLOCKs. 
*/ - BUG_ON(end < SEMIBSIZE); - - skip = end - SEMIBSIZE; - while (sg) { - if (sg->length > skip) { - scatterwalk_start(walk, sg); - scatterwalk_advance(walk, skip); - break; - } - - skip -= sg->length; - sg = sg_next(sg); - } -} - static int crypto_kw_decrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_cipher *cipher = skcipher_cipher_simple(tfm); struct crypto_kw_block block; @@ -148,34 +121,27 @@ static int crypto_kw_decrypt(struct skcipher_request *req) */ src = req->src; dst = req->dst; for (i = 0; i < 6; i++) { - struct scatter_walk src_walk, dst_walk; unsigned int nbytes = req->cryptlen; while (nbytes) { - /* move pointer by nbytes in the SGL */ - crypto_kw_scatterlist_ff(&src_walk, src, nbytes); + nbytes -= SEMIBSIZE; + /* get the source block */ - scatterwalk_copychunks(&block.R, &src_walk, SEMIBSIZE, - false); + memcpy_from_sglist(&block.R, src, nbytes, SEMIBSIZE); /* perform KW operation: modify IV with counter */ block.A ^= cpu_to_be64(t); t--; /* perform KW operation: decrypt block */ crypto_cipher_decrypt_one(cipher, (u8 *)&block, (u8 *)&block); - /* move pointer by nbytes in the SGL */ - crypto_kw_scatterlist_ff(&dst_walk, dst, nbytes); /* Copy block->R into place */ - scatterwalk_copychunks(&block.R, &dst_walk, SEMIBSIZE, - true); - - nbytes -= SEMIBSIZE; + memcpy_to_sglist(dst, nbytes, &block.R, SEMIBSIZE); } /* we now start to operate on the dst SGL only */ src = req->dst; dst = req->dst; @@ -229,23 +195,21 @@ static int crypto_kw_encrypt(struct skcipher_request *req) scatterwalk_start(&src_walk, src); scatterwalk_start(&dst_walk, dst); while (nbytes) { /* get the source block */ - scatterwalk_copychunks(&block.R, &src_walk, SEMIBSIZE, - false); + memcpy_from_scatterwalk(&block.R, &src_walk, SEMIBSIZE); /* perform KW operation: encrypt block */ crypto_cipher_encrypt_one(cipher, (u8 *)&block, (u8 *)&block); /* perform KW operation: modify IV with counter */ block.A ^= cpu_to_be64(t); t++; /* Copy block->R into place */ - scatterwalk_copychunks(&block.R, &dst_walk, SEMIBSIZE, - true); + memcpy_to_scatterwalk(&dst_walk, &block.R, SEMIBSIZE); nbytes -= SEMIBSIZE; } /* we now start to operate on the dst SGL only */ From patchwork Sat Dec 21 09:10:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917733 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0DB3A1F2384 for ; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; cv=none; b=ZHv/6pNxY14tq9XXq35FzB+NLW9yLxDZlpGsafny0npjvFjsKJx5p55FmRuckv5hVq2vyHBZrxVQmTEh2kEXRYqpuYjJPwzVI/AdjsNgH5WaUO9QuH4x9w+XHRtMUHP2lwBNeDooDf3xw856/FB9JnorYVDo9F7HONdGrC3+KoY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; c=relaxed/simple; bh=FDzNrJCoVdAmqgX/rTeMmUPgE6IOzOA1GEP6JN6T8HU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ZZEFrodiiOugRfsGnPDfrpY3FThTwkdsOAnjTzHxO0ez44Zdg2EmLjWX8+pYbUV4bZSqL29i4vn2GHQCyFG0efJy198ca+hZymsZRY1OxfoP5TYB81s1qvkFAVcSZGvB++n4L8GCoRSKCRIxlI/F9E5syg88hcOcgelGtYMQMKM= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ec0NnNJ1; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ec0NnNJ1" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B272FC4CED4; Sat, 21 Dec 2024 09:11:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772294; bh=FDzNrJCoVdAmqgX/rTeMmUPgE6IOzOA1GEP6JN6T8HU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ec0NnNJ1pLsqD0aKoM7ZsJBvcVGVHOks5yPcwosrJ2Vja8d27my2ZED5IJlSTmptW OhsT11jQ+EvQoHc3sr/uIzJnnxLTl1SRzjauxWq8TBXSqUPcRowTxoCOU5h1G57cGd ilmzzVkQDlLeMZi4jkQDMnTUj85vYW72iBlQMSfTYqKh1VWz3Exs3E1RFc6kCopW26 thM56rmFasviCRGSo1/6XRtxxZaxfwoGeEk7xVJGvkgYeXWjvA2jAxFmIdS5jDIY77 hfdUNI/FvtcpuaYaoTiTOgm19vV01bX4a+6eChAZ5dHi6m2I3eqMMAH5Alglx5IO5T boSkPyFdDfrZw== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: Christophe Leroy , Madhavan Srinivasan , Michael Ellerman , Naveen N Rao , Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH 20/29] crypto: nx - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:47 -0800 Message-ID: <20241221091056.282098-21-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers - In nx_walk_and_build(), use scatterwalk_start_at_pos() instead of a more complex way to achieve the same result. - Also in nx_walk_and_build(), use the new functions scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(), and use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Remove unnecessary code that seemed to be intended to advance to the next sg entry, which is already handled by the scatterwalk functions. Note that nx_walk_and_build() does not actually read or write the mapped virtual address, and thus it is misusing the scatter_walk API. It really should just access the scatterlist directly. This patch does not try to address this existing issue. - In nx_gca(), use memcpy_from_sglist() instead of a more complex way to achieve the same result. - In various functions, replace calls to scatterwalk_map_and_copy() with memcpy_from_sglist() or memcpy_to_sglist() as appropriate. Note that this eliminates the confusing 'out' argument (which this driver had tried to work around by defining the missing constants for it...) Cc: Christophe Leroy Cc: Madhavan Srinivasan Cc: Michael Ellerman Cc: Naveen N Rao Cc: Nicholas Piggin Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. 
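To make the conversion pattern concrete, here is a minimal sketch of the loop shape that nx_walk_and_build() ends up with. The helper name sketch_walk_src() and the consume step are illustrative only, not code from the driver:

#include <crypto/scatterwalk.h>

/*
 * Walk @len bytes of @sg starting @start bytes in.  scatterwalk_next()
 * clamps to the current sg entry and maps it; scatterwalk_done_src()
 * unmaps and advances the walk position.
 */
static void sketch_walk_src(struct scatterlist *sg, unsigned int start,
			    unsigned int len)
{
	struct scatter_walk walk;

	scatterwalk_start_at_pos(&walk, sg, start);
	while (len) {
		unsigned int n;
		const void *src = scatterwalk_next(&walk, len, &n);

		/* ... consume the n contiguous bytes at src here ... */

		scatterwalk_done_src(&walk, src, n);
		len -= n;
	}
}

Note that advancing to the next sg entry needs no explicit handling in the caller; scatterwalk_next() takes care of it.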
drivers/crypto/nx/nx-aes-ccm.c | 16 ++++++---------- drivers/crypto/nx/nx-aes-gcm.c | 17 ++++++----------- drivers/crypto/nx/nx.c | 31 +++++-------------------------- drivers/crypto/nx/nx.h | 3 --- 4 files changed, 17 insertions(+), 50 deletions(-) diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c index c843f4c6f684..56a0b3a67c33 100644 --- a/drivers/crypto/nx/nx-aes-ccm.c +++ b/drivers/crypto/nx/nx-aes-ccm.c @@ -215,17 +215,15 @@ static int generate_pat(u8 *iv, */ if (b1) { memset(b1, 0, 16); if (assoclen <= 65280) { *(u16 *)b1 = assoclen; - scatterwalk_map_and_copy(b1 + 2, req->src, 0, - iauth_len, SCATTERWALK_FROM_SG); + memcpy_from_sglist(b1 + 2, req->src, 0, iauth_len); } else { *(u16 *)b1 = (u16)(0xfffe); *(u32 *)&b1[2] = assoclen; - scatterwalk_map_and_copy(b1 + 6, req->src, 0, - iauth_len, SCATTERWALK_FROM_SG); + memcpy_from_sglist(b1 + 6, req->src, 0, iauth_len); } } /* now copy any remaining AAD to scatterlist and call nx... */ if (!assoclen) { @@ -339,13 +337,12 @@ static int ccm_nx_decrypt(struct aead_request *req, spin_lock_irqsave(&nx_ctx->lock, irq_flags); nbytes -= authsize; /* copy out the auth tag to compare with later */ - scatterwalk_map_and_copy(priv->oauth_tag, - req->src, nbytes + req->assoclen, authsize, - SCATTERWALK_FROM_SG); + memcpy_from_sglist(priv->oauth_tag, req->src, nbytes + req->assoclen, + authsize); rc = generate_pat(iv, req, nx_ctx, authsize, nbytes, assoclen, csbcpb->cpb.aes_ccm.in_pat_or_b0); if (rc) goto out; @@ -463,13 +460,12 @@ static int ccm_nx_encrypt(struct aead_request *req, processed += to_process; } while (processed < nbytes); /* copy out the auth tag */ - scatterwalk_map_and_copy(csbcpb->cpb.aes_ccm.out_pat_or_mac, - req->dst, nbytes + req->assoclen, authsize, - SCATTERWALK_TO_SG); + memcpy_to_sglist(req->dst, nbytes + req->assoclen, + csbcpb->cpb.aes_ccm.out_pat_or_mac, authsize); out: spin_unlock_irqrestore(&nx_ctx->lock, irq_flags); return rc; } diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c index 4a796318b430..b7fe2de96d96 100644 --- a/drivers/crypto/nx/nx-aes-gcm.c +++ b/drivers/crypto/nx/nx-aes-gcm.c @@ -101,20 +101,17 @@ static int nx_gca(struct nx_crypto_ctx *nx_ctx, u8 *out, unsigned int assoclen) { int rc; struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead; - struct scatter_walk walk; struct nx_sg *nx_sg = nx_ctx->in_sg; unsigned int nbytes = assoclen; unsigned int processed = 0, to_process; unsigned int max_sg_len; if (nbytes <= AES_BLOCK_SIZE) { - scatterwalk_start(&walk, req->src); - scatterwalk_copychunks(out, &walk, nbytes, SCATTERWALK_FROM_SG); - scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0); + memcpy_from_sglist(out, req->src, 0, nbytes); return 0; } NX_CPB_FDM(csbcpb_aead) &= ~NX_FDM_CONTINUATION; @@ -389,23 +386,21 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc, } while (processed < nbytes); mac: if (enc) { /* copy out the auth tag */ - scatterwalk_map_and_copy( - csbcpb->cpb.aes_gcm.out_pat_or_mac, + memcpy_to_sglist( req->dst, req->assoclen + nbytes, - crypto_aead_authsize(crypto_aead_reqtfm(req)), - SCATTERWALK_TO_SG); + csbcpb->cpb.aes_gcm.out_pat_or_mac, + crypto_aead_authsize(crypto_aead_reqtfm(req))); } else { u8 *itag = nx_ctx->priv.gcm.iauth_tag; u8 *otag = csbcpb->cpb.aes_gcm.out_pat_or_mac; - scatterwalk_map_and_copy( + memcpy_from_sglist( itag, req->src, req->assoclen + nbytes, - crypto_aead_authsize(crypto_aead_reqtfm(req)), - SCATTERWALK_FROM_SG); + crypto_aead_authsize(crypto_aead_reqtfm(req))); rc = crypto_memneq(itag, otag, 
crypto_aead_authsize(crypto_aead_reqtfm(req))) ? -EBADMSG : 0; } out: diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c index 010e87d9da36..dd95e5361d88 100644 --- a/drivers/crypto/nx/nx.c +++ b/drivers/crypto/nx/nx.c @@ -151,44 +151,23 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *nx_dst, unsigned int start, unsigned int *src_len) { struct scatter_walk walk; struct nx_sg *nx_sg = nx_dst; - unsigned int n, offset = 0, len = *src_len; + unsigned int n, len = *src_len; char *dst; /* we need to fast forward through @start bytes first */ - for (;;) { - scatterwalk_start(&walk, sg_src); - - if (start < offset + sg_src->length) - break; - - offset += sg_src->length; - sg_src = sg_next(sg_src); - } - - /* start - offset is the number of bytes to advance in the scatterlist - * element we're currently looking at */ - scatterwalk_advance(&walk, start - offset); + scatterwalk_start_at_pos(&walk, sg_src, start); while (len && (nx_sg - nx_dst) < sglen) { - n = scatterwalk_clamp(&walk, len); - if (!n) { - /* In cases where we have scatterlist chain sg_next - * handles with it properly */ - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, len); - } - dst = scatterwalk_map(&walk); + dst = scatterwalk_next(&walk, len, &n); nx_sg = nx_build_sg_list(nx_sg, dst, &n, sglen - (nx_sg - nx_dst)); - len -= n; - scatterwalk_unmap(dst); - scatterwalk_advance(&walk, n); - scatterwalk_done(&walk, SCATTERWALK_FROM_SG, len); + scatterwalk_done_src(&walk, dst, n); + len -= n; } /* update to_process */ *src_len -= len; /* return the moved destination pointer */ diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h index 2697baebb6a3..e1b4b6927bec 100644 --- a/drivers/crypto/nx/nx.h +++ b/drivers/crypto/nx/nx.h @@ -187,9 +187,6 @@ extern struct shash_alg nx_shash_aes_xcbc_alg; extern struct shash_alg nx_shash_sha512_alg; extern struct shash_alg nx_shash_sha256_alg; extern struct nx_crypto_driver nx_driver; -#define SCATTERWALK_TO_SG 1 -#define SCATTERWALK_FROM_SG 0 - #endif From patchwork Sat Dec 21 09:10:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917734 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4C2911F238A; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; cv=none; b=g9oXL273n0hHcwlUB4kTTtHmSMmfrmwZoTYvNQba1lnTuXXCCRSMlpbwBVDTSs7Gjf02WzxwDnvvENNC1NVQj3+8WqyZpbQsNuM3Knao13ey3eiGb4hTQmbi0kpA3mI7ZKrEbgYoHL1mT9Jso7RQndgNZ+f0VJJxHHC9s0GLs3s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; c=relaxed/simple; bh=H979TZHDgKLaUbglwnxL0UQbpoLtqMQtPXY6pBrdkHY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=M7VhZalaUcsAhBLW/pj1orqEaHzb2Rq5zmfVMhiAPie3Tbqqcj/C4uJsC337dTlaj9GQSkEaXJZkkJt3sTlG/pC6yxuEDujQ4TTTysThkKfBtUKeQLjYFfH5Dq1hWGEGKdz/EMr1zgDDQMlmPNjceQrZIxXYE3o1ncyarGRAbVc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KjcPLqiH; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="KjcPLqiH" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 11E55C4CED7; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772295; bh=H979TZHDgKLaUbglwnxL0UQbpoLtqMQtPXY6pBrdkHY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KjcPLqiHfTy2/XRWS3CpUuGEGYQBpRM73/IBhcg1xQPXNH8tHsWjzre8sOAbDtZ72 z9Bnvt/nYMEpZTxz/6dsekP22L3wpPmKO7tQTLfm3SARRl0UoE8dZyjwhcaSTLLCjD i9+hGXTjCj8AqrMkMzk0RqS3D9B15ciuannRdSTUv7UBsndTL/LoWrN2K4KdNDsCwQ fdvEaMNfwT/eTqIr8pWglKV27o+aSxC3YSC27d8mPTpOluAfgW5j8bKLRleJu+yLf4 k1hObXjJH6ON8rtjpC3Jo57Vb28msuxETe18w7pGks/cl6/1llnl3DA5rOh7GTV5bK OFBBs2aYM6KUg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: Harald Freudenberger , Holger Dengler , linux-s390@vger.kernel.org Subject: [PATCH 21/29] crypto: s390/aes-gcm - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:48 -0800 Message-ID: <20241221091056.282098-22-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() and scatterwalk_done_dst() which consolidate scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Besides the new functions being a bit easier to use, this is necessary because scatterwalk_done() is planned to be removed. Cc: Harald Freudenberger Cc: Holger Dengler Cc: linux-s390@vger.kernel.org Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. 
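In sketch form, the direction dispatch that _gcm_sg_unmap_and_advance() gains looks like the following. This wrapper is illustrative only; the real helper also updates gw->walk_bytes_remain:

static void sketch_done(struct scatter_walk *walk, void *ptr,
			unsigned int nbytes, bool out)
{
	if (out)
		/* Unmap, flush the dcache where needed, and advance. */
		scatterwalk_done_dst(walk, ptr, nbytes);
	else
		/* Unmap and advance; no flush is needed for reads. */
		scatterwalk_done_src(walk, ptr, nbytes);
}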
arch/s390/crypto/aes_s390.c | 33 +++++++++++++-------------------- 1 file changed, 13 insertions(+), 20 deletions(-) diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index 9c46b1b630b1..7fd303df05ab 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -785,32 +785,25 @@ static void gcm_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg, scatterwalk_start(&gw->walk, sg); } static inline unsigned int _gcm_sg_clamp_and_map(struct gcm_sg_walk *gw) { - struct scatterlist *nextsg; - - gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain); - while (!gw->walk_bytes) { - nextsg = sg_next(gw->walk.sg); - if (!nextsg) - return 0; - scatterwalk_start(&gw->walk, nextsg); - gw->walk_bytes = scatterwalk_clamp(&gw->walk, - gw->walk_bytes_remain); - } - gw->walk_ptr = scatterwalk_map(&gw->walk); + if (gw->walk_bytes_remain == 0) + return 0; + gw->walk_ptr = scatterwalk_next(&gw->walk, gw->walk_bytes_remain, + &gw->walk_bytes); return gw->walk_bytes; } static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw, - unsigned int nbytes) + unsigned int nbytes, bool out) { gw->walk_bytes_remain -= nbytes; - scatterwalk_unmap(gw->walk_ptr); - scatterwalk_advance(&gw->walk, nbytes); - scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain); + if (out) + scatterwalk_done_dst(&gw->walk, gw->walk_ptr, nbytes); + else + scatterwalk_done_src(&gw->walk, gw->walk_ptr, nbytes); gw->walk_ptr = NULL; } static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) { @@ -842,11 +835,11 @@ static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) while (1) { n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes); memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n); gw->buf_bytes += n; - _gcm_sg_unmap_and_advance(gw, n); + _gcm_sg_unmap_and_advance(gw, n, false); if (gw->buf_bytes >= minbytesneeded) { gw->ptr = gw->buf; gw->nbytes = gw->buf_bytes; goto out; } @@ -902,11 +895,11 @@ static int gcm_in_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) memmove(gw->buf, gw->buf + bytesdone, n); gw->buf_bytes = n; } else gw->buf_bytes = 0; } else - _gcm_sg_unmap_and_advance(gw, bytesdone); + _gcm_sg_unmap_and_advance(gw, bytesdone, false); return bytesdone; } static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) @@ -920,14 +913,14 @@ static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) for (i = 0; i < bytesdone; i += n) { if (!_gcm_sg_clamp_and_map(gw)) return i; n = min(gw->walk_bytes, bytesdone - i); memcpy(gw->walk_ptr, gw->buf + i, n); - _gcm_sg_unmap_and_advance(gw, n); + _gcm_sg_unmap_and_advance(gw, n, true); } } else - _gcm_sg_unmap_and_advance(gw, bytesdone); + _gcm_sg_unmap_and_advance(gw, bytesdone, true); return bytesdone; } static int gcm_aes_crypt(struct aead_request *req, unsigned int flags) From patchwork Sat Dec 21 09:10:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917735 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 987911F2399; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1734772295; cv=none; b=lMyWwdDmrCBmAqHZiFBYFGrd/6yaRwXpRjsVYrWo90VQKce/dnXDJ8OjrHaeGYRjYNiQP96u5L/nXT0TVHdiZCIQ5UJqaPG+um1Aqnno98B8B3gsSNFFZJdS7Q29wJN1tKkO6tGLPo8jloLGDvAjHWjvm00pZE10IPteil+Swzw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; c=relaxed/simple; bh=dJNf7Ef27a6NAHmh6+QRwRI/qSPxeV2L1TYNFHLLkzw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=UGcBYZWC1RPNdqgngbPpGViNueYqk05vJjc3LQNVd6aIsECoGAL+kmYtoL9b/70RZJrb7vB1A3Vvk2T3r6MWh7NjQidwnGf3AWSZKCAlFb2YuRb+F96O+L+4tiGquqV0ckJ1Sb4cA+7zjHdaicO0F51hk4rc3rf9szAdu12V2gs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=rOByZ+Mm; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="rOByZ+Mm" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 53BFEC4CECE; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772295; bh=dJNf7Ef27a6NAHmh6+QRwRI/qSPxeV2L1TYNFHLLkzw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rOByZ+MmldENSL1r5dbGRpI8uFki0fQoDB9a+fNNVU/IrJANVztXQKsaGmTYReMCp nEfFDhX8gitMArDdEGE2TfTZJANLGAf8WztBbVQ3gOCS5FZ5dhK/Jm16SACnarcw4Q uqJGENwFJsn95GhUIJ0JLdYMgyaMemPXlLB6KhDAqa/FPq62XrBcBYH3IMzBdznaEZ D+TvpP0yfvtRP6YSLPR3xc8Yz85uvwdcr/4Q9EKPnK2Nfs1CEW4Lehx+8CWimu/Gni 3IN4cTsSpbR4jl7RlIS7NJ22DAyXY1FhphTx9siZur/Wuz4lRJxpdgCXo606XlABuQ mBlPcX2seKpXg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: Krzysztof Kozlowski , Vladimir Zapolskiy , linux-samsung-soc@vger.kernel.org Subject: [PATCH 22/29] crypto: s5p-sss - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:49 -0800 Message-ID: <20241221091056.282098-23-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers s5p_sg_copy_buf() open-coded a copy from/to a scatterlist using scatterwalk_* functions that are planned for removal. Replace it with the new functions memcpy_from_sglist() and memcpy_to_sglist() instead. Also take the opportunity to replace calls to scatterwalk_map_and_copy() in the same file; this eliminates the confusing 'out' argument. Cc: Krzysztof Kozlowski Cc: Vladimir Zapolskiy Cc: linux-samsung-soc@vger.kernel.org Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. 
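For reference, the argument mapping used throughout this conversion is as follows; this is a sketch with placeholder variables, not code from the driver:

#include <crypto/scatterwalk.h>

static void sketch_sglist_copies(struct scatterlist *sg, void *buf,
				 unsigned int start, unsigned int nbytes)
{
	/* was: scatterwalk_map_and_copy(buf, sg, start, nbytes, 0) */
	memcpy_from_sglist(buf, sg, start, nbytes);	/* sg -> buf */

	/* was: scatterwalk_map_and_copy(buf, sg, start, nbytes, 1) */
	memcpy_to_sglist(sg, start, buf, nbytes);	/* buf -> sg */
}

The direction moves from a trailing flag into the function name and argument order, which is what makes the call sites self-documenting.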
drivers/crypto/s5p-sss.c | 38 +++++++++++--------------------------- 1 file changed, 11 insertions(+), 27 deletions(-) diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c index 57ab237e899e..b4c3c14dafd5 100644 --- a/drivers/crypto/s5p-sss.c +++ b/drivers/crypto/s5p-sss.c @@ -456,34 +456,21 @@ static void s5p_free_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist **sg) kfree(*sg); *sg = NULL; } -static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg, - unsigned int nbytes, int out) -{ - struct scatter_walk walk; - - if (!nbytes) - return; - - scatterwalk_start(&walk, sg); - scatterwalk_copychunks(buf, &walk, nbytes, out); - scatterwalk_done(&walk, out, 0); -} - static void s5p_sg_done(struct s5p_aes_dev *dev) { struct skcipher_request *req = dev->req; struct s5p_aes_reqctx *reqctx = skcipher_request_ctx(req); if (dev->sg_dst_cpy) { dev_dbg(dev->dev, "Copying %d bytes of output data back to original place\n", dev->req->cryptlen); - s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst, - dev->req->cryptlen, 1); + memcpy_to_sglist(dev->req->dst, 0, sg_virt(dev->sg_dst_cpy), + dev->req->cryptlen); } s5p_free_sg_cpy(dev, &dev->sg_src_cpy); s5p_free_sg_cpy(dev, &dev->sg_dst_cpy); if (reqctx->mode & FLAGS_AES_CBC) memcpy_fromio(req->iv, dev->aes_ioaddr + SSS_REG_AES_IV_DATA(0), AES_BLOCK_SIZE); @@ -524,11 +511,11 @@ static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src, kfree(*dst); *dst = NULL; return -ENOMEM; } - s5p_sg_copy_buf(pages, src, dev->req->cryptlen, 0); + memcpy_from_sglist(pages, src, 0, dev->req->cryptlen); sg_init_table(*dst, 1); sg_set_buf(*dst, pages, len); return 0; @@ -1033,12 +1020,11 @@ static int s5p_hash_copy_sgs(struct s5p_hash_reqctx *ctx, } if (ctx->bufcnt) memcpy(buf, ctx->dd->xmit_buf, ctx->bufcnt); - scatterwalk_map_and_copy(buf + ctx->bufcnt, sg, ctx->skip, - new_len, 0); + memcpy_from_sglist(buf + ctx->bufcnt, sg, ctx->skip, new_len); sg_init_table(ctx->sgl, 1); sg_set_buf(ctx->sgl, buf, len); ctx->sg = ctx->sgl; ctx->sg_len = 1; ctx->bufcnt = 0; @@ -1227,12 +1213,11 @@ static int s5p_hash_prepare_request(struct ahash_request *req, bool update) int len = BUFLEN - ctx->bufcnt % BUFLEN; if (len > nbytes) len = nbytes; - scatterwalk_map_and_copy(ctx->buffer + ctx->bufcnt, req->src, - 0, len, 0); + memcpy_from_sglist(ctx->buffer + ctx->bufcnt, req->src, 0, len); ctx->bufcnt += len; nbytes -= len; ctx->skip = len; } else { ctx->skip = 0; @@ -1251,13 +1236,12 @@ static int s5p_hash_prepare_request(struct ahash_request *req, bool update) xmit_len -= xmit_len & (BUFLEN - 1); hash_later = ctx->total - xmit_len; /* copy hash_later bytes from end of req->src */ /* previous bytes are in xmit_buf, so no overwrite */ - scatterwalk_map_and_copy(ctx->buffer, req->src, - req->nbytes - hash_later, - hash_later, 0); + memcpy_from_sglist(ctx->buffer, req->src, + req->nbytes - hash_later, hash_later); } if (xmit_len > BUFLEN) { ret = s5p_hash_prepare_sgs(ctx, req->src, nbytes - hash_later, final); @@ -1265,12 +1249,12 @@ static int s5p_hash_prepare_request(struct ahash_request *req, bool update) return ret; } else { /* have buffered data only */ if (unlikely(!ctx->bufcnt)) { /* first update didn't fill up buffer */ - scatterwalk_map_and_copy(ctx->dd->xmit_buf, req->src, - 0, xmit_len, 0); + memcpy_from_sglist(ctx->dd->xmit_buf, req->src, + 0, xmit_len); } sg_init_table(ctx->sgl, 1); sg_set_buf(ctx->sgl, ctx->dd->xmit_buf, xmit_len); @@ -1504,12 +1488,12 @@ static int s5p_hash_update(struct ahash_request *req) if (!req->nbytes) return 0; if 
(ctx->bufcnt + req->nbytes <= BUFLEN) { - scatterwalk_map_and_copy(ctx->buffer + ctx->bufcnt, req->src, - 0, req->nbytes, 0); + memcpy_from_sglist(ctx->buffer + ctx->bufcnt, req->src, + 0, req->nbytes); ctx->bufcnt += req->nbytes; return 0; } return s5p_hash_enqueue(req, true); /* HASH_OP_UPDATE */ From patchwork Sat Dec 21 09:10:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917736 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBDCC1F2C20 for ; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; cv=none; b=VuVW9xJDRYHF7wMbWNGtVtg45RPymZqgd9zg/kH4kSKWzBcG0uyNtHm816Gsh1uR+yDRkcH1ckEEseiGXX3CeirBUlZFGqLZDwlhnhG7XOJ/HUR/i7RFdvj5PpDWd9PGifFJz/eQw89tn2RG72b0lO312MUOJgG7LiSdLVnkNw8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772295; c=relaxed/simple; bh=Zhg/KDiZCPgy+MF6pUjpqhVE/E/er2PToHEWYDXwwVE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=PQdJXHyR3GQL4UmL7GtZ6itNIulMbQAjjM4gxUnKugMb3tIMvxA6uByhsKln0YquUuwA1Xhs+gvXkFXyTcUPPzwbLTUniui5hR9P1t11DukioVwp7CiSIkk5vjxZyFnx8svCuajPXyg74NuSSs+msFTZZH9YF2wM7gI3rVWdA/c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=g+kScLD9; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="g+kScLD9" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 94532C4CED6; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772295; bh=Zhg/KDiZCPgy+MF6pUjpqhVE/E/er2PToHEWYDXwwVE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=g+kScLD9dkCdQQjdAXzak26Iro/ycGzhYTF2hfYDnAiuXENXVXBroulQKZQ0RdgPi gleM0ReXaWZQH5XqsN/0mwjLfF71TCANHGBUTpfAg3Qe4GvGOgUAs19uSyZmySXE0U DzwjrGQGsSWNNhTOUPwaVp7NEoelM13DrCxyUFGd1celhiKu/kDonwRQlXzYPxh1rW Y0BpsoeG4ncJAufi20b6iJwzE2pe72iRi/JPHEjFrKdNvCO7RkUsR/8pFiSMe5JI+G mfFYlkzlXsTp1hWbCPZI5ASfDdstR5JzOTkEZzMUpCYcteS++a2SM7oOmSnE96LqDt 6MsbvBtiSldeg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: Alexandre Torgue , Maxime Coquelin , =?utf-8?b?TWF4aW1lIE3DqXLDqQ==?= , Thomas Bourgoin , linux-stm32@st-md-mailman.stormreply.com Subject: [PATCH 23/29] crypto: stm32 - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:50 -0800 Message-ID: <20241221091056.282098-24-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Replace calls to the deprecated function scatterwalk_copychunks() with memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), scatterwalk_skip(), or scatterwalk_start_at_pos() as appropriate. 
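Sketched out, the three replacement rules this patch applies are the following; the fragment is illustrative and in, out, block, and n are placeholders:

#include <crypto/scatterwalk.h>

static void sketch_stm32_rules(struct scatter_walk *in,
			       struct scatter_walk *out,
			       void *block, unsigned int n)
{
	/* was: scatterwalk_copychunks(block, in, n, 0) */
	memcpy_from_scatterwalk(block, in, n);

	/* was: scatterwalk_copychunks(block, out, n, 1) */
	memcpy_to_scatterwalk(out, block, n);

	/*
	 * was: scatterwalk_copychunks(NULL, in, n, 2), i.e. copy nothing
	 * and just advance the walk
	 */
	scatterwalk_skip(in, n);
}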
Cc: Alexandre Torgue Cc: Maxime Coquelin Cc: Maxime Méré Cc: Thomas Bourgoin Cc: linux-stm32@st-md-mailman.stormreply.com Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. drivers/crypto/stm32/stm32-cryp.c | 34 +++++++++++++++---------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c index 14c6339c2e43..5ce88e7a8f65 100644 --- a/drivers/crypto/stm32/stm32-cryp.c +++ b/drivers/crypto/stm32/stm32-cryp.c @@ -664,11 +664,11 @@ static void stm32_cryp_write_ccm_first_header(struct stm32_cryp *cryp) len = 6; } written = min_t(size_t, AES_BLOCK_SIZE - len, alen); - scatterwalk_copychunks((char *)block + len, &cryp->in_walk, written, 0); + memcpy_from_scatterwalk((char *)block + len, &cryp->in_walk, written); writesl(cryp->regs + cryp->caps->din, block, AES_BLOCK_32); cryp->header_in -= written; @@ -991,11 +991,11 @@ static int stm32_cryp_header_dma_start(struct stm32_cryp *cryp) tx_in->callback_param = cryp; tx_in->callback = stm32_cryp_header_dma_callback; /* Advance scatterwalk to not DMA'ed data */ align_size = ALIGN_DOWN(cryp->header_in, cryp->hw_blocksize); - scatterwalk_copychunks(NULL, &cryp->in_walk, align_size, 2); + scatterwalk_skip(&cryp->in_walk, align_size); cryp->header_in -= align_size; ret = dma_submit_error(dmaengine_submit(tx_in)); if (ret < 0) { dev_err(cryp->dev, "DMA in submit failed\n"); @@ -1054,22 +1054,22 @@ static int stm32_cryp_dma_start(struct stm32_cryp *cryp) tx_out->callback = stm32_cryp_dma_callback; tx_out->callback_param = cryp; /* Advance scatterwalk to not DMA'ed data */ align_size = ALIGN_DOWN(cryp->payload_in, cryp->hw_blocksize); - scatterwalk_copychunks(NULL, &cryp->in_walk, align_size, 2); + scatterwalk_skip(&cryp->in_walk, align_size); cryp->payload_in -= align_size; ret = dma_submit_error(dmaengine_submit(tx_in)); if (ret < 0) { dev_err(cryp->dev, "DMA in submit failed\n"); return ret; } dma_async_issue_pending(cryp->dma_lch_in); /* Advance scatterwalk to not DMA'ed data */ - scatterwalk_copychunks(NULL, &cryp->out_walk, align_size, 2); + scatterwalk_skip(&cryp->out_walk, align_size); cryp->payload_out -= align_size; ret = dma_submit_error(dmaengine_submit(tx_out)); if (ret < 0) { dev_err(cryp->dev, "DMA out submit failed\n"); return ret; @@ -1735,13 +1735,13 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req, in_sg = areq->src; out_sg = areq->dst; scatterwalk_start(&cryp->in_walk, in_sg); - scatterwalk_start(&cryp->out_walk, out_sg); /* In output, jump after assoc data */ - scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->areq->assoclen, 2); + scatterwalk_start_at_pos(&cryp->out_walk, out_sg, + areq->assoclen); ret = stm32_cryp_hw_init(cryp); if (ret) return ret; @@ -1871,16 +1871,16 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp) if (is_encrypt(cryp)) { u32 out_tag[AES_BLOCK_32]; /* Get and write tag */ readsl(cryp->regs + cryp->caps->dout, out_tag, AES_BLOCK_32); - scatterwalk_copychunks(out_tag, &cryp->out_walk, cryp->authsize, 1); + memcpy_to_scatterwalk(&cryp->out_walk, out_tag, cryp->authsize); } else { /* Get and check tag */ u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32]; - scatterwalk_copychunks(in_tag, &cryp->in_walk, cryp->authsize, 0); + memcpy_from_scatterwalk(in_tag, &cryp->in_walk, cryp->authsize); readsl(cryp->regs + cryp->caps->dout, 
out_tag, AES_BLOCK_32); if (crypto_memneq(in_tag, out_tag, cryp->authsize)) ret = -EBADMSG; } @@ -1921,22 +1921,22 @@ static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp) static void stm32_cryp_irq_read_data(struct stm32_cryp *cryp) { u32 block[AES_BLOCK_32]; readsl(cryp->regs + cryp->caps->dout, block, cryp->hw_blocksize / sizeof(u32)); - scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_out), 1); + memcpy_to_scatterwalk(&cryp->out_walk, block, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out)); cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); } static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp) { u32 block[AES_BLOCK_32] = {0}; - scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_in), 0); + memcpy_from_scatterwalk(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize, + cryp->payload_in)); writesl(cryp->regs + cryp->caps->din, block, cryp->hw_blocksize / sizeof(u32)); cryp->payload_in -= min_t(size_t, cryp->hw_blocksize, cryp->payload_in); } static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) @@ -1979,12 +1979,12 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) * Same code as stm32_cryp_irq_read_data(), but we want to store * block value */ readsl(cryp->regs + cryp->caps->dout, block, cryp->hw_blocksize / sizeof(u32)); - scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_out), 1); + memcpy_to_scatterwalk(&cryp->out_walk, block, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out)); cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); /* d) change mode back to AES GCM */ cfg &= ~CR_ALGO_MASK; @@ -2077,12 +2077,12 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) * Same code as stm32_cryp_irq_read_data(), but we want to store * block value */ readsl(cryp->regs + cryp->caps->dout, block, cryp->hw_blocksize / sizeof(u32)); - scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_out), 1); + memcpy_to_scatterwalk(&cryp->out_walk, block, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out)); cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); /* d) Load again CRYP_CSGCMCCMxR */ for (i = 0; i < ARRAY_SIZE(cstmp2); i++) cstmp2[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4); @@ -2159,11 +2159,11 @@ static void stm32_cryp_irq_write_gcmccm_header(struct stm32_cryp *cryp) u32 block[AES_BLOCK_32] = {0}; size_t written; written = min_t(size_t, AES_BLOCK_SIZE, cryp->header_in); - scatterwalk_copychunks(block, &cryp->in_walk, written, 0); + memcpy_from_scatterwalk(block, &cryp->in_walk, written); writesl(cryp->regs + cryp->caps->din, block, AES_BLOCK_32); cryp->header_in -= written; From patchwork Sat Dec 21 09:10:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917737 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 164351F2C2A for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1734772296; cv=none; b=cnnHGM8JtODR0HmfNpO6x4MOgf1GLVDW3HoVcotypQCVoAQzFDqr7U36hplo49WfJ+G8ofA3Onmj79KbLLKOf2EJXwpkVlV2QvlDg/OXoLnJ2KwMPX6F6MAaVzIP83r4g7/BdmZxvHzDdvcE70ab9+noGFJOe456metZeB2t6Rk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; c=relaxed/simple; bh=O3Xdr0JpCB71MKHbMwJUz16l51VzXXpHu+RjIVWy0zA=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=EP2i0TffT2FS+WZXlJDLo1DpwTmz2Ilrg82wfHuFoz5KinkFzrwD8Y1HRmNBPR+mXTFbXnmiIsz1kz44pzrK2Nq5l1Xgn8ww4wvBc+168LRNBItvnrP4LMhSpdIgtBAyiIip+xkSiwWQRuqQmEvZgDpy+44mjLvpoQeUxSk4WM8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=WAbBMalS; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="WAbBMalS" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E1AEFC4CEDD for ; Sat, 21 Dec 2024 09:11:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772296; bh=O3Xdr0JpCB71MKHbMwJUz16l51VzXXpHu+RjIVWy0zA=; h=From:To:Subject:Date:In-Reply-To:References:From; b=WAbBMalSXw5biKS4eDF/+4xtBAo0gpP6lRKeIHsD2HZRt4nueUWVVreT9KqKkYy/v D7/3fkSOj43L9MSNah+K6O5UY1Ju71UXMPy66ebP2J2EyHK0KyzMbWm3ild0VHBcYh xL3qBEENqDfxP4QdLoPNF5eWoFgGN2fC3Th4HUx2Xzv+0aulBfg/nuJwUxHumy2Pg5 YNoB9NL72WP5+MASAQomvyoY21uxOZBkAe4xwWdZlZcG9lKzOjCOuz4W6wI2+cKThh 5j50J9istktOLGN3Kqdigmr/dN5ipCLI+qhzbfIUqHqYYFQwCu6BkV5E4Lr4DZSpwU s9yW1GdgodCkg== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 24/29] crypto: x86/aes-gcm - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:51 -0800 Message-ID: <20241221091056.282098-25-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In gcm_process_assoc(), use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Also rename some variables to avoid implying that anything is actually mapped (it's not), or that the loop is going page by page (it is for now, but nothing actually requires that to be the case). 
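The reason the original mapped pointer and step length are kept around is that the loop body advances its source cursor and shrinks the remaining step length while buffering partial 16-byte GHASH blocks, but scatterwalk_done_src() needs the address and length that were originally mapped. In sketch form (illustrative only; the real loop is in the diff below):

static void sketch_process_assoc(struct scatter_walk *walk,
				 unsigned int assoclen)
{
	while (assoclen) {
		unsigned int orig_len;
		const u8 *orig_src = scatterwalk_next(walk, assoclen,
						      &orig_len);

		/*
		 * ... hash up to orig_len bytes at orig_src, buffering any
		 * partial 16-byte block for the next step; a local cursor
		 * may advance, but orig_src and orig_len stay untouched ...
		 */

		scatterwalk_done_src(walk, orig_src, orig_len);
		assoclen -= orig_len;
	}
}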
Signed-off-by: Eric Biggers --- arch/x86/crypto/aesni-intel_glue.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index fbf43482e1f5..c65d44b037b5 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -1289,45 +1289,45 @@ static void gcm_process_assoc(const struct aes_gcm_key *key, u8 ghash_acc[16], memset(ghash_acc, 0, 16); scatterwalk_start(&walk, sg_src); while (assoclen) { - unsigned int len_this_page = scatterwalk_clamp(&walk, assoclen); - void *mapped = scatterwalk_map(&walk); - const void *src = mapped; + unsigned int orig_len_this_step; + const u8 *orig_src = scatterwalk_next(&walk, assoclen, + &orig_len_this_step); + unsigned int len_this_step = orig_len_this_step; unsigned int len; + const u8 *src = orig_src; - assoclen -= len_this_page; - scatterwalk_advance(&walk, len_this_page); if (unlikely(pos)) { - len = min(len_this_page, 16 - pos); + len = min(len_this_step, 16 - pos); memcpy(&buf[pos], src, len); pos += len; src += len; - len_this_page -= len; + len_this_step -= len; if (pos < 16) goto next; aes_gcm_aad_update(key, ghash_acc, buf, 16, flags); pos = 0; } - len = len_this_page; + len = len_this_step; if (unlikely(assoclen)) /* Not the last segment yet? */ len = round_down(len, 16); aes_gcm_aad_update(key, ghash_acc, src, len, flags); src += len; - len_this_page -= len; - if (unlikely(len_this_page)) { - memcpy(buf, src, len_this_page); - pos = len_this_page; + len_this_step -= len; + if (unlikely(len_this_step)) { + memcpy(buf, src, len_this_step); + pos = len_this_step; } next: - scatterwalk_unmap(mapped); - scatterwalk_pagedone(&walk, 0, assoclen); + scatterwalk_done_src(&walk, orig_src, orig_len_this_step); if (need_resched()) { kernel_fpu_end(); kernel_fpu_begin(); } + assoclen -= orig_len_this_step; } if (unlikely(pos)) aes_gcm_aad_update(key, ghash_acc, buf, pos, flags); } From patchwork Sat Dec 21 09:10:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917738 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 427121F2399 for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; cv=none; b=IvE8w4A2ubDvTNn3dAEdh+XP90TKlDH1st3Jzvllu6NPRDoRdV5HSxO2huni2KxivZRRhqu3XbNJZgYdFCoEjYJ47rhy1zConXqAYvv0HIMy7+SwybCy1NhrmPp0kkS+qvBasmrFg3rCkylR3CsNHx5MwAT4bzFD3vGpQsHEWLM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; c=relaxed/simple; bh=5pj/D1hOOx/ISCCP0hxDOq++HsD61KuEGNf//Yqj0yQ=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uH+4wJ0M0uLXjNtCCr0hl0zJkMUAzJCPfTYmf8zMXs8HTM0YEwvcJ7Sady+nYcWeKKwnX8YvVAJUv9Ou1+9NojhNanJdcmTsfgofdvrUBnEcNN1wv3ZqfKc8veQYwoGPXXFGffg4KloKU7AEUF9milodJ9rDEyhc76qmV5kBAso= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KekUY5XG; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit 
key) header.d=kernel.org header.i=@kernel.org header.b="KekUY5XG" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 19F41C4CECE for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772296; bh=5pj/D1hOOx/ISCCP0hxDOq++HsD61KuEGNf//Yqj0yQ=; h=From:To:Subject:Date:In-Reply-To:References:From; b=KekUY5XGZO3ebARzWGDgXwieVrb5pkBafoIE5HRjhSRTMKoSTbVVuh/pTv79bY1m/ P5Zjt2LSMD+TKcaVF/2NpSKVAIGs2R0p/yDyCD6wl7bH19n6vfsR8JzUu6ljYA35OO 7+SJHQnIbZQe9PQoMufFMyBNDBALkG84Qt2/JW59zPDyZm4lcpfpRU0o9bK/1xxwpL bxYjsnDycQQvRtwjlTuwdBax/xzVZHyp4+/Xuny+vCtYGrral3qxnDwJ2WwlzxLl0e G8zuzI7MYH8yEYeXqlHUbq5LUlngUlIof6q9loLfUNsbWJL4KQvKmZHq5/DEFR3IHi IxDlHMqiBkyPQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 25/29] crypto: x86/aegis - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:52 -0800 Message-ID: <20241221091056.282098-26-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In crypto_aegis128_aesni_process_ad(), use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Signed-off-by: Eric Biggers --- arch/x86/crypto/aegis128-aesni-glue.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index 01fa568dc5fc..1bd093d073ed 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -69,14 +69,14 @@ static void crypto_aegis128_aesni_process_ad( struct aegis_block buf; unsigned int pos = 0; scatterwalk_start(&walk, sg_src); while (assoclen != 0) { - unsigned int size = scatterwalk_clamp(&walk, assoclen); + unsigned int size; + const u8 *mapped = scatterwalk_next(&walk, assoclen, &size); unsigned int left = size; - void *mapped = scatterwalk_map(&walk); - const u8 *src = (const u8 *)mapped; + const u8 *src = mapped; if (pos + size >= AEGIS128_BLOCK_SIZE) { if (pos > 0) { unsigned int fill = AEGIS128_BLOCK_SIZE - pos; memcpy(buf.bytes + pos, src, fill); @@ -95,13 +95,11 @@ static void crypto_aegis128_aesni_process_ad( memcpy(buf.bytes + pos, src, left); pos += left; assoclen -= size; - scatterwalk_unmap(mapped); - scatterwalk_advance(&walk, size); - scatterwalk_done(&walk, 0, assoclen); + scatterwalk_done_src(&walk, mapped, size); } if (pos > 0) { memset(buf.bytes + pos, 0, AEGIS128_BLOCK_SIZE - pos); aegis128_aesni_ad(state, buf.bytes, AEGIS128_BLOCK_SIZE); From patchwork Sat Dec 21 09:10:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917739 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 873E21F2C38; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; cv=none; 
b=bBsj2d/pP4khvGu0xh9YW+jVh1ostzmU+BYLzqFqfmt15yMl/B/r3y0ib747ZCweD3mU+h9j8stu26Mj4bEv4a7AULJDZdQBZdJzrSHDjTJnt6IMU9p2eWZs9EnIC4510DKo01PtAa627xO8BdX30cY8gkhEh2tBX9fBr/1XyIo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; c=relaxed/simple; bh=xG4c1QDQkeqmFr3xpqJo4Ptz4ABndWUjgc6I+1XA6i0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Q13RyUNfasMh8rf4UaWXPtP8h4tVlzVTBZf1uG4HVnXSln9ItW+xjDLBXMZT0BXph2SC2KJqb5UE5f/g+ROTIlBhwJZsXEC7RM7btydTVWTOF0VNCkmk6J7qunSfn3gpJPC0N/JoLil/8F3URIYmZ/RUYBzSmJp9NIl3xj6SsFc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=JUY4X2ds; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="JUY4X2ds" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 46698C4CED7; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772296; bh=xG4c1QDQkeqmFr3xpqJo4Ptz4ABndWUjgc6I+1XA6i0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JUY4X2dsvUZCVkG3vD1om9Bjs10e0juTq7qRZNtdgmsiyKsZyfKEycU7VPSp4phFB oJw0HYdxyr00hIr82r5OraLaJolhjLjN3OiVvPXfZ/GfgQ3dqAWcdihu5moFGXmied NX5Lqq3P0OvqyPQjjoFXA8hybbTwJ+J1SoftNh3G/RbMBIOifTcmdt6e4oszUOfiTu CzxMEuToQZOj36wxN6rwXPBrHBnb9HRhT2/u1Nw26731jAo54twSPmgua5bDvu8CM6 8g0G2TPcpcJJOHqfpmi9cAyYwEuXzzWCq3sXW63dAD5rfsxedXNtU9axBb/rf1c8Kf JBpxg0uaCtpIQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: Boris Pismenny , Jakub Kicinski , John Fastabend , netdev@vger.kernel.org Subject: [PATCH 26/29] net/tls: use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:53 -0800 Message-ID: <20241221091056.282098-27-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Replace calls to the deprecated function scatterwalk_copychunks() with memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), or scatterwalk_skip() as appropriate. The new functions behave more as expected and eliminate the need to call scatterwalk_done() or scatterwalk_pagedone(). This was not always being done when needed, and therefore the old code appears to have also had a bug where the dcache of the destination page(s) was not always being flushed on architectures that need that. Cc: Boris Pismenny Cc: Jakub Kicinski Cc: John Fastabend Cc: netdev@vger.kernel.org Signed-off-by: Eric Biggers --- This patch is part of a long series touching many files, so I have limited the Cc list on the full series. If you want the full series and did not receive it, please retrieve it from lore.kernel.org. 
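The dcache bug mentioned above comes from the old calling convention: a write through scatterwalk_copychunks() only flushed the destination pages if it was followed by scatterwalk_done() or scatterwalk_pagedone() with out=1, which the fallback path did not always do. A sketch of the before/after for one record follows; the variables are placeholders, not the fallback code itself:

static void sketch_record_copy(struct scatter_walk *in,
			       struct scatter_walk *out,
			       void *buf, unsigned int len)
{
	/*
	 * was: scatterwalk_copychunks(buf, in, len, 0);
	 *      scatterwalk_pagedone(in, 0, 1);
	 */
	memcpy_from_scatterwalk(buf, in, len);

	/*
	 * was: scatterwalk_copychunks(buf, out, len, 1);
	 *      scatterwalk_pagedone(out, 1, 1);
	 * memcpy_to_scatterwalk() flushes the dcache of the written pages
	 * itself, so the flush can no longer be forgotten.
	 */
	memcpy_to_scatterwalk(out, buf, len);
}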
net/tls/tls_device_fallback.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c index f9e3d3d90dcf..ec7017c80b6a 100644 --- a/net/tls/tls_device_fallback.c +++ b/net/tls/tls_device_fallback.c @@ -67,20 +67,17 @@ static int tls_enc_record(struct aead_request *aead_req, DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable); buf_size = TLS_HEADER_SIZE + cipher_desc->iv; len = min_t(int, *in_len, buf_size); - scatterwalk_copychunks(buf, in, len, 0); - scatterwalk_copychunks(buf, out, len, 1); + memcpy_from_scatterwalk(buf, in, len); + memcpy_to_scatterwalk(out, buf, len); *in_len -= len; if (!*in_len) return 0; - scatterwalk_pagedone(in, 0, 1); - scatterwalk_pagedone(out, 1, 1); - len = buf[4] | (buf[3] << 8); len -= cipher_desc->iv; tls_make_aad(aad, len - cipher_desc->tag, (char *)&rcd_sn, buf[0], prot); @@ -108,14 +105,12 @@ static int tls_enc_record(struct aead_request *aead_req, *in_len = 0; } if (*in_len) { - scatterwalk_copychunks(NULL, in, len, 2); - scatterwalk_pagedone(in, 0, 1); - scatterwalk_copychunks(NULL, out, len, 2); - scatterwalk_pagedone(out, 1, 1); + scatterwalk_skip(in, len); + scatterwalk_skip(out, len); } len -= cipher_desc->tag; aead_request_set_crypt(aead_req, sg_in, sg_out, len, iv); @@ -160,13 +155,10 @@ static int tls_enc_records(struct aead_request *aead_req, cpu_to_be64(rcd_sn), &in, &out, &len, prot); rcd_sn++; } while (rc == 0 && len); - scatterwalk_done(&in, 0, 0); - scatterwalk_done(&out, 1, 0); - return rc; } /* Can't use icsk->icsk_af_ops->send_check here because the ip addresses * might have been changed by NAT. From patchwork Sat Dec 21 09:10:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917740 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BF4ED1F2C44 for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; cv=none; b=VuG4Amjgs1TR0XlCNYHlpbK2GqBcaXY+cWPuCj9zxx+N0NtqHUYWNwhmA+EWDqQ2Hv4YgLYINA/4B8IIbEPDqPUV4bcZI5nrZV1pKZZIdy7I/kRKRiSjHuDPwBgrTtII9u8LMLmg8anc11mZA43Vyj8q1h0YoPGNzd/XD4OBNIU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772296; c=relaxed/simple; bh=9+aJ/7IU1xoI3FM4cnTH4BXwVQl9uMeigli+J8RUP4E=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lQvUMegogU4APHbzLjeN/x2RdfFMxk3DJs005bYxLEURdPKZwPFwgm6qyOCSLFIwGPrPkjlai6YQvZm5Nv+7FYJhOdqwB4FaZi5p2N14dmYAM9EZhfoJPJW2PZps/0s2qP3lkoCTxLOCWMxLF5qSMFHth17/X416d9M9qCNNmHg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=lhBoTlRP; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="lhBoTlRP" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8DC64C4CED6 for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772296; 
bh=9+aJ/7IU1xoI3FM4cnTH4BXwVQl9uMeigli+J8RUP4E=; h=From:To:Subject:Date:In-Reply-To:References:From; b=lhBoTlRP9jpOHUstQWB5i0+2vxFrOn03vMLOskm2M53BK1wSNr6Hsn4kciTV2aQGh NVsejg5LnaT2mberCS30mt1NOEkjIwR3I0vLn1kgqjeMjcfp5P9J63TNN1BhBw30No gt1RvGR+grDCNWFOP8et/4ivOfn/RAwITGhsFx+qqxwVYAnk0H3dxmAFaSR+Bs5Mu0 /E53M00H2gYlhjQoEScCHlpuYsUi8gvKy6v4GjdxyHPDp0xYxRoZJCCNbL2unTZgy0 P27ZJIsvrjGPa6sg/L6UbBmydXFX04GM/EW/qmO3IMBLrG4kQBG0iBP9WCLTJGpv02 lCqXgjK9urxbQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 27/29] crypto: skcipher - use the new scatterwalk functions Date: Sat, 21 Dec 2024 01:10:54 -0800 Message-ID: <20241221091056.282098-28-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Convert skcipher_walk to use the new scatterwalk functions. This includes a few changes to exactly where the different parts of the iteration happen. For example the dcache flush that previously happened in scatterwalk_done() now happens in scatterwalk_done_dst() or in memcpy_to_scatterwalk(). Advancing to the next sg entry now happens just-in-time in scatterwalk_clamp() instead of in scatterwalk_done(). Signed-off-by: Eric Biggers --- crypto/skcipher.c | 51 ++++++++++++++++++----------------------------- 1 file changed, 19 insertions(+), 32 deletions(-) diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 7abafe385fd5..8f6b09377368 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c @@ -46,20 +46,10 @@ static inline void skcipher_map_src(struct skcipher_walk *walk) static inline void skcipher_map_dst(struct skcipher_walk *walk) { walk->dst.virt.addr = scatterwalk_map(&walk->out); } -static inline void skcipher_unmap_src(struct skcipher_walk *walk) -{ - scatterwalk_unmap(walk->src.virt.addr); -} - -static inline void skcipher_unmap_dst(struct skcipher_walk *walk) -{ - scatterwalk_unmap(walk->dst.virt.addr); -} - static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk) { return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC; } @@ -67,18 +57,10 @@ static inline struct skcipher_alg *__crypto_skcipher_alg( struct crypto_alg *alg) { return container_of(alg, struct skcipher_alg, base); } -static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize) -{ - u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1); - - scatterwalk_copychunks(addr, &walk->out, bsize, 1); - return 0; -} - /** * skcipher_walk_done() - finish one step of a skcipher_walk * @walk: the skcipher_walk * @res: number of bytes *not* processed (>= 0) from walk->nbytes, * or a -errno value to terminate the walk due to an error @@ -109,44 +91,45 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res) } if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY | SKCIPHER_WALK_DIFF)))) { -unmap_src: - skcipher_unmap_src(walk); + scatterwalk_advance(&walk->in, n); } else if (walk->flags & SKCIPHER_WALK_DIFF) { - skcipher_unmap_dst(walk); - goto unmap_src; + scatterwalk_unmap(walk->src.virt.addr); + scatterwalk_advance(&walk->in, n); } else if (walk->flags & SKCIPHER_WALK_COPY) { + scatterwalk_advance(&walk->in, n); skcipher_map_dst(walk); memcpy(walk->dst.virt.addr, walk->page, n); - skcipher_unmap_dst(walk); } else { /* SKCIPHER_WALK_SLOW */ if (res > 0) { /* * Didn't process all bytes.
Either the algorithm is * broken, or this was the last step and it turned out * the message wasn't evenly divisible into blocks but * the algorithm requires it. */ res = -EINVAL; total = 0; - } else - n = skcipher_done_slow(walk, n); + } else { + u8 *buf = PTR_ALIGN(walk->buffer, walk->alignmask + 1); + + memcpy_to_scatterwalk(&walk->out, buf, n); + } + goto dst_done; } + scatterwalk_done_dst(&walk->out, walk->dst.virt.addr, n); +dst_done: + if (res > 0) res = 0; walk->total = total; walk->nbytes = 0; - scatterwalk_advance(&walk->in, n); - scatterwalk_advance(&walk->out, n); - scatterwalk_done(&walk->in, 0, total); - scatterwalk_done(&walk->out, 1, total); - if (total) { if (walk->flags & SKCIPHER_WALK_SLEEP) cond_resched(); walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY | SKCIPHER_WALK_DIFF); @@ -189,11 +172,11 @@ static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize) walk->buffer = buffer; } walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1); walk->src.virt.addr = walk->dst.virt.addr; - scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0); + memcpy_from_scatterwalk(walk->src.virt.addr, &walk->in, bsize); walk->nbytes = bsize; walk->flags |= SKCIPHER_WALK_SLOW; return 0; @@ -203,11 +186,15 @@ static int skcipher_next_copy(struct skcipher_walk *walk) { u8 *tmp = walk->page; skcipher_map_src(walk); memcpy(tmp, walk->src.virt.addr, walk->nbytes); - skcipher_unmap_src(walk); + scatterwalk_unmap(walk->src.virt.addr); + /* + * walk->in is advanced later when the number of bytes actually + * processed (which might be less than walk->nbytes) is known. + */ walk->src.virt.addr = tmp; walk->dst.virt.addr = tmp; return 0; } From patchwork Sat Dec 21 09:10:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13917741 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E5D601F2C48 for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772297; cv=none; b=NjcInJrAtajGhKiTVGr6DQ5GJkBEQF08WeBYQlJeuTED+nFK1mCl9I/hjy2y/rnQIVaEOrrdmzwEv/jLCgNyvGli4plB5D4qXEySWwdkxXW0NdQuP7Sk8Nn2r9q5mW4A/pGh6Gh5p9w6zHxZLO8/p1c0eYWrGNh/8BZrc/WX+wo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734772297; c=relaxed/simple; bh=hzp1ie+wDuon/4DJgKw5g8w6X6lFOVe2M3urTd/ukOk=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=b+OTeDl9PB93esDxWJRb1E++WgJZ6jqdfDGCOlIccyR63RNzDlbCve/CDMacR8EEPFFG03I47kEAtc8UAtd0psL/HH7SkkXvNJzWYLNgkxg70wz0YEu8tqOXov2c7TE8Ipx1W7dg/5wfB7XElXJl7yLkxpGWkhNOOWczHskv86U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Zj46DLaq; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Zj46DLaq" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BB006C4CEDD for ; Sat, 21 Dec 2024 09:11:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734772296; 
bh=hzp1ie+wDuon/4DJgKw5g8w6X6lFOVe2M3urTd/ukOk=; h=From:To:Subject:Date:In-Reply-To:References:From; b=Zj46DLaqO+tsyB/yLtciLFmAeh0B8I38tRKncHTXwUNzxoQyajpkpSCZENfcKRuOD 5fYTGpjgsdIx22mw8kRY9uAhRKYpSMIC/NWhOlj2ku75XKgMvkvU+pILiVKEnyOT8a R+1vzgOX7F7SPnxzOspLR3KrUKqHK9MUhR103G/6xmStF6UeeuleXa22AWRUOuOild 9QUVk/RpAuLr/Z+A3fGdsZe6fo6ReZ9wcxU2TqceRL5+Zkd6z+UHWBuHKYg+v2dSB9 XSlRfARhfZm7V+HBDiFSzpU7iB5iywjoe1WUZTlzSnj4xX5PqvwcKGoLX0gVIQbqBD XzYNsOzNRrjoQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 28/29] crypto: scatterwalk - remove obsolete functions Date: Sat, 21 Dec 2024 01:10:55 -0800 Message-ID: <20241221091056.282098-29-ebiggers@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org> References: <20241221091056.282098-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Remove various functions that are no longer used. Signed-off-by: Eric Biggers --- crypto/scatterwalk.c | 37 ------------------------------------ include/crypto/scatterwalk.h | 25 ------------------------ 2 files changed, 62 deletions(-) diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c index 2e7a532152d6..87c080f565d4 100644 --- a/crypto/scatterwalk.c +++ b/crypto/scatterwalk.c @@ -28,47 +28,10 @@ void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes) walk->sg = sg; walk->offset = sg->offset + nbytes; } EXPORT_SYMBOL_GPL(scatterwalk_skip); -static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out) -{ - void *src = out ? buf : sgdata; - void *dst = out ? sgdata : buf; - - memcpy(dst, src, nbytes); -} - -void scatterwalk_copychunks(void *buf, struct scatter_walk *walk, - size_t nbytes, int out) -{ - for (;;) { - unsigned int len_this_page = scatterwalk_pagelen(walk); - u8 *vaddr; - - if (len_this_page > nbytes) - len_this_page = nbytes; - - if (out != 2) { - vaddr = scatterwalk_map(walk); - memcpy_dir(buf, vaddr, len_this_page, out); - scatterwalk_unmap(vaddr); - } - - scatterwalk_advance(walk, len_this_page); - - if (nbytes == len_this_page) - break; - - buf += len_this_page; - nbytes -= len_this_page; - - scatterwalk_pagedone(walk, out & 1, 1); - } -} -EXPORT_SYMBOL_GPL(scatterwalk_copychunks); - inline void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk, unsigned int nbytes) { do { const void *src_addr; diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h index 5e12c07be89b..b542ce69d0bb 100644 --- a/include/crypto/scatterwalk.h +++ b/include/crypto/scatterwalk.h @@ -96,32 +96,10 @@ static inline void *scatterwalk_next(struct scatter_walk *walk, { *nbytes_ret = scatterwalk_clamp(walk, total); return scatterwalk_map(walk); } -static inline void scatterwalk_pagedone(struct scatter_walk *walk, int out, - unsigned int more) -{ - if (out) { - struct page *page; - - page = sg_page(walk->sg) + ((walk->offset - 1) >> PAGE_SHIFT); - flush_dcache_page(page); - } - - if (more && walk->offset >= walk->sg->offset + walk->sg->length) - scatterwalk_start(walk, sg_next(walk->sg)); -} - -static inline void scatterwalk_done(struct scatter_walk *walk, int out, - int more) -{ - if (!more || walk->offset >= walk->sg->offset + walk->sg->length || - !(walk->offset & (PAGE_SIZE - 1))) - scatterwalk_pagedone(walk, out, more); -} - static inline void scatterwalk_advance(struct scatter_walk *walk, unsigned int nbytes) { walk->offset += nbytes; } @@ -160,13 +138,10 @@ static inline void 
From patchwork Sat Dec 21 09:10:56 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 29/29] crypto: scatterwalk - don't split at page boundaries when !HIGHMEM
Date: Sat, 21 Dec 2024 01:10:56 -0800
Message-ID: <20241221091056.282098-30-ebiggers@kernel.org>
In-Reply-To: <20241221091056.282098-1-ebiggers@kernel.org>
References: <20241221091056.282098-1-ebiggers@kernel.org>

When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous.

To improve performance, stop unnecessarily clamping data segments to
page boundaries in this case.  For now the segments are still limited
to PAGE_SIZE to prevent preemption from being disabled for too long
when SIMD is used, and to support the alignmask case which still uses a
page-sized bounce buffer.

Even so, this change still helps a lot in cases where messages cross a
page boundary.  For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes, which is less than PAGE_SIZE, but on the Rx
side over a third cross a page boundary.  These ended up being
processed in three parts, with the middle part going through
skcipher_next_slow which uses a 16-byte bounce buffer.  That was
causing a significant amount of overhead which unnecessarily reduced
the performance benefit of the new x86_64 AES-GCM assembly code.  This
change solves the problem; all these messages now get passed to the
assembly code in one part.

Signed-off-by: Eric Biggers
---
 crypto/skcipher.c            |  4 +-
 include/crypto/scatterwalk.h | 77 +++++++++++++++++++++++++-----------
 2 files changed, 57 insertions(+), 24 deletions(-)
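The core idea, in sketch form (this restates the scatterwalk_map() change in the diff below rather than adding code to it):

	/*
	 * !HIGHMEM: every page is permanently mapped and an sg entry's data
	 * is virtually contiguous in the kernel's direct map, so no per-page
	 * kmap_local_page() is needed and a returned segment may safely
	 * cross a page boundary.
	 */
	void *addr = page_address(sg_page(walk->sg)) + walk->offset;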
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 8f6b09377368..16db19663c3d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -203,12 +203,12 @@ static int skcipher_next_fast(struct skcipher_walk *walk)
 {
 	unsigned long diff;
 
 	diff = offset_in_page(walk->in.offset) -
 	       offset_in_page(walk->out.offset);
-	diff |= (u8 *)scatterwalk_page(&walk->in) -
-		(u8 *)scatterwalk_page(&walk->out);
+	diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) -
+		(u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT));
 
 	skcipher_map_src(walk);
 	walk->dst.virt.addr = walk->src.virt.addr;
 
 	if (diff) {
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index b542ce69d0bb..fbb5867545a6 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -47,39 +47,58 @@ static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
 	}
 	walk->sg = sg;
 	walk->offset = sg->offset + pos;
 }
 
-static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
-{
-	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
-	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
-	return len_this_page > len ? len : len_this_page;
-}
-
 static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 					     unsigned int nbytes)
 {
+	unsigned int len_this_sg;
+	unsigned int limit;
+
 	if (walk->offset >= walk->sg->offset + walk->sg->length)
 		scatterwalk_start(walk, sg_next(walk->sg));
-	return min(nbytes, scatterwalk_pagelen(walk));
-}
-
-static inline struct page *scatterwalk_page(struct scatter_walk *walk)
-{
-	return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
-}
+	len_this_sg = walk->sg->offset + walk->sg->length - walk->offset;
+
+	/*
+	 * HIGHMEM case: the page may have to be mapped into memory.  To avoid
+	 * the complexity of having to map multiple pages at once per sg entry,
+	 * clamp the returned length to not cross a page boundary.
+	 *
+	 * !HIGHMEM case: no mapping is needed; all pages of the sg entry are
+	 * already mapped contiguously in the kernel's direct map.  For improved
+	 * performance, allow the walker to return data segments that cross a
+	 * page boundary.  Do still cap the length to PAGE_SIZE, since some
+	 * users rely on that to avoid disabling preemption for too long when
+	 * using SIMD.  It's also needed for when skcipher_walk uses a bounce
+	 * page due to the data not being aligned to the algorithm's alignmask.
+	 */
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		limit = PAGE_SIZE - offset_in_page(walk->offset);
+	else
+		limit = PAGE_SIZE;
 
-static inline void scatterwalk_unmap(void *vaddr)
-{
-	kunmap_local(vaddr);
+	return min3(nbytes, len_this_sg, limit);
 }
 
 static inline void *scatterwalk_map(struct scatter_walk *walk)
 {
-	return kmap_local_page(scatterwalk_page(walk)) +
-	       offset_in_page(walk->offset);
+	struct page *base_page = sg_page(walk->sg);
+
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		return kmap_local_page(base_page + (walk->offset >> PAGE_SHIFT)) +
+		       offset_in_page(walk->offset);
+	/*
+	 * When !HIGHMEM we allow the walker to return segments that span a page
+	 * boundary; see scatterwalk_clamp().  To make it clear that in this
+	 * case we're working in the linear buffer of the whole sg entry in the
+	 * kernel's direct map rather than within the mapped buffer of a single
+	 * page, compute the address as an offset from the page_address() of the
+	 * first page of the sg entry.  Either way the result is the address in
+	 * the direct map, but this makes it clearer what is really going on.
+	 */
+	return page_address(base_page) + walk->offset;
 }
 
 /**
  * scatterwalk_next() - Get the next data buffer in a scatterlist walk
  * @walk: the scatter_walk
@@ -96,10 +115,16 @@ static inline void *scatterwalk_next(struct scatter_walk *walk,
 {
 	*nbytes_ret = scatterwalk_clamp(walk, total);
 	return scatterwalk_map(walk);
 }
 
+static inline void scatterwalk_unmap(const void *vaddr)
+{
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		kunmap_local(vaddr);
+}
+
 static inline void scatterwalk_advance(struct scatter_walk *walk,
 				       unsigned int nbytes)
 {
 	walk->offset += nbytes;
 }
@@ -114,11 +139,11 @@ static inline void scatterwalk_advance(struct scatter_walk *walk,
  * Use this if the @vaddr was not written to, i.e. it is source data.
  */
 static inline void scatterwalk_done_src(struct scatter_walk *walk,
 					const void *vaddr, unsigned int nbytes)
 {
-	scatterwalk_unmap((void *)vaddr);
+	scatterwalk_unmap(vaddr);
 	scatterwalk_advance(walk, nbytes);
 }
 
 /**
  * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist
@@ -131,12 +156,20 @@ static inline void scatterwalk_done_src(struct scatter_walk *walk,
  */
 static inline void scatterwalk_done_dst(struct scatter_walk *walk,
 					void *vaddr, unsigned int nbytes)
 {
 	scatterwalk_unmap(vaddr);
-	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
-		flush_dcache_page(scatterwalk_page(walk));
+	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
+		struct page *base_page, *start_page, *end_page, *page;
+
+		base_page = sg_page(walk->sg);
+		start_page = base_page + (walk->offset >> PAGE_SHIFT);
+		end_page = base_page + ((walk->offset + nbytes +
+					 PAGE_SIZE - 1) >> PAGE_SHIFT);
+		for (page = start_page; page < end_page; page++)
+			flush_dcache_page(page);
+	}
 	scatterwalk_advance(walk, nbytes);
 }
 
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
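Taken together, a consumer of the updated walker API looks roughly like the following sketch. It assumes the signatures shown above; sketch_consume() and process() are illustrative placeholders, not part of the patch:

	#include <crypto/scatterwalk.h>

	static void sketch_consume(struct scatterlist *sg, unsigned int total)
	{
		struct scatter_walk walk;

		scatterwalk_start(&walk, sg);
		while (total) {
			unsigned int n;
			void *addr = scatterwalk_next(&walk, total, &n);

			process(addr, n);	/* read n bytes at addr */
			scatterwalk_done_src(&walk, addr, n);
			total -= n;
		}
	}

On a !HIGHMEM kernel each step may now return up to PAGE_SIZE bytes even when that range crosses a page boundary, which is what lets the 1424-byte IPsec messages described above reach the AES-GCM assembly in one piece.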