From patchwork Wed Feb 19 18:23:23 2025
X-Patchwork-Id: 13982635
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 01/19] crypto: scatterwalk - move to next sg entry just in time
Date: Wed, 19 Feb 2025 10:23:23 -0800
Message-ID: <20250219182341.43961-2-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

The scatterwalk_* functions are designed to advance to the next sg entry
only when there is more data from the request to process.  Compared to
the alternative of advancing after each step if !sg_is_last(sg), this
has the advantage that it doesn't cause problems if users accidentally
don't terminate their scatterlist with the end marker (which is an easy
mistake to make, and there are examples of this).

Currently, the advance to the next sg entry happens in
scatterwalk_done(), which is called after each "step" of the walk.  It
requires the caller to pass in a boolean 'more' that indicates whether
there is more data.
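For illustration, a typical caller of the current API looks roughly like
this (a composite sketch, not taken verbatim from any one user;
process() stands in for the caller's actual per-chunk work):

	scatterwalk_start(&walk, sg);
	while (nbytes) {
		unsigned int n = scatterwalk_clamp(&walk, nbytes);
		void *vaddr = scatterwalk_map(&walk);

		process(vaddr, n);		/* hypothetical */
		scatterwalk_unmap(vaddr);
		scatterwalk_advance(&walk, n);
		nbytes -= n;
		scatterwalk_done(&walk, 0, nbytes); /* 'more' = remaining bytes */
	}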
This works when the caller immediately knows whether there is more
data, though it adds some complexity.  However, in the case of
scatterwalk_copychunks() it's not immediately known whether there is
more data, so the call to scatterwalk_done() has to happen higher up
the stack.  This is error-prone, and indeed the needed call to
scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is
sometimes called multiple times in a row.  This causes a zero-length
step to get added in some cases, which is unexpected and seems to work
only by accident.

This patch begins the switch to a less error-prone approach where the
advance to the next sg entry happens just in time instead.  For now,
that means just doing the advance in scatterwalk_clamp() if it's needed
there.  Initially this is redundant, but it's needed to keep the tree
in a working state as later patches change things to the final state.

Later patches will similarly move the dcache flushing logic out of
scatterwalk_done() and then remove scatterwalk_done() entirely.

Signed-off-by: Eric Biggers
---
 include/crypto/scatterwalk.h | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 32fc4473175b1..924efbaefe67a 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -24,22 +24,30 @@ static inline void scatterwalk_crypto_chain(struct scatterlist *head,
 		sg_chain(head, num, sg);
 	else
 		sg_mark_end(head);
 }
 
+static inline void scatterwalk_start(struct scatter_walk *walk,
+				     struct scatterlist *sg)
+{
+	walk->sg = sg;
+	walk->offset = sg->offset;
+}
+
 static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
 {
 	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
 	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
 	return len_this_page > len ? len : len_this_page;
 }
 
 static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 					     unsigned int nbytes)
 {
-	unsigned int len_this_page = scatterwalk_pagelen(walk);
-	return nbytes > len_this_page ? len_this_page : nbytes;
+	if (walk->offset >= walk->sg->offset + walk->sg->length)
+		scatterwalk_start(walk, sg_next(walk->sg));
+	return min(nbytes, scatterwalk_pagelen(walk));
 }
 
 static inline void scatterwalk_advance(struct scatter_walk *walk,
 					unsigned int nbytes)
 {
@@ -54,17 +62,10 @@ static inline struct page *scatterwalk_page(struct scatter_walk *walk)
 static inline void scatterwalk_unmap(void *vaddr)
 {
 	kunmap_local(vaddr);
 }
 
-static inline void scatterwalk_start(struct scatter_walk *walk,
-				     struct scatterlist *sg)
-{
-	walk->sg = sg;
-	walk->offset = sg->offset;
-}
-
 static inline void *scatterwalk_map(struct scatter_walk *walk)
 {
 	return kmap_local_page(scatterwalk_page(walk)) +
 	       offset_in_page(walk->offset);
 }

From patchwork Wed Feb 19 18:23:24 2025
X-Patchwork-Id: 13982637
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 02/19] crypto: scatterwalk - add new functions for skipping data
Date: Wed, 19 Feb 2025 10:23:24 -0800
Message-ID: <20250219182341.43961-3-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>
Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk.  Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which
was confusing and less efficient.

Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.

Later patches will convert various users to use these functions.

Signed-off-by: Eric Biggers
---
 crypto/scatterwalk.c         | 15 +++++++++++++++
 include/crypto/scatterwalk.h | 18 ++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 16f6ba896fb63..af436ad02e3ff 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -13,10 +13,25 @@
 #include
 #include
 #include
 #include
 
+void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes)
+{
+	struct scatterlist *sg = walk->sg;
+
+	nbytes += walk->offset - sg->offset;
+
+	while (nbytes > sg->length) {
+		nbytes -= sg->length;
+		sg = sg_next(sg);
+	}
+	walk->sg = sg;
+	walk->offset = sg->offset + nbytes;
+}
+EXPORT_SYMBOL_GPL(scatterwalk_skip);
+
 static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
 {
 	void *src = out ? buf : sgdata;
 	void *dst = out ? sgdata : buf;

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 924efbaefe67a..5c7765f601e0c 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -31,10 +31,26 @@ static inline void scatterwalk_start(struct scatter_walk *walk,
 {
 	walk->sg = sg;
 	walk->offset = sg->offset;
 }
 
+/*
+ * This is equivalent to scatterwalk_start(walk, sg) followed by
+ * scatterwalk_skip(walk, pos).
+ */
+static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
+					    struct scatterlist *sg,
+					    unsigned int pos)
+{
+	while (pos > sg->length) {
+		pos -= sg->length;
+		sg = sg_next(sg);
+	}
+	walk->sg = sg;
+	walk->offset = sg->offset + pos;
+}
+
 static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
 {
 	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
 	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
 	return len_this_page > len ? len : len_this_page;
@@ -90,10 +106,12 @@ static inline void scatterwalk_done(struct scatter_walk *walk, int out,
 	if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
 	    !(walk->offset & (PAGE_SIZE - 1)))
 		scatterwalk_pagedone(walk, out, more);
 }
 
+void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
+
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);
 
 void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 			      unsigned int start, unsigned int nbytes, int out);

From patchwork Wed Feb 19 18:23:25 2025
X-Patchwork-Id: 13982636
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 03/19] crypto: scatterwalk - add new functions for iterating through data
Date: Wed, 19 Feb 2025 10:23:25 -0800
Message-ID: <20250219182341.43961-4-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

Add scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map().
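For example, with scatterwalk_start_at_pos() from the previous patch, a
source walk is intended to look roughly like this (a sketch only;
process() is a hypothetical consumer, and scatterwalk_done_src() is
introduced below):

	scatterwalk_start_at_pos(&walk, req->src, req->assoclen);
	while (total) {
		unsigned int n;
		const u8 *p = scatterwalk_next(&walk, total, &n);

		process(p, n);		/* hypothetical per-chunk work */
		scatterwalk_done_src(&walk, p, n);
		total -= n;
	}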
Also add scatterwalk_done_src() and scatterwalk_done_dst() which
consolidate scatterwalk_unmap(), scatterwalk_advance(), and
scatterwalk_done() or scatterwalk_pagedone().  A later patch will
remove scatterwalk_done() and scatterwalk_pagedone().

The new code eliminates the error-prone 'more' parameter.  Advancing to
the next sg entry now only happens just-in-time in scatterwalk_next().

The new code also pairs the dcache flush more closely with the actual
write, similar to memcpy_to_page().  Previously it was paired with
advancing to the next page.  This is currently causing bugs where the
dcache flush is incorrectly being skipped, usually due to
scatterwalk_copychunks() being called without a following
scatterwalk_done().  The dcache flush may have been placed where it was
in order to not call flush_dcache_page() redundantly when visiting a
page more than once.  However, that case is rare in practice, and most
architectures either do not implement flush_dcache_page() anyway or
implement it lazily where it just clears a page flag.

Another limitation of the old code was that by the time the flush
happened, there was no way to tell if more than one page needed to be
flushed.  That has been sufficient because the code goes page by page,
but I would like to optimize that on !HIGHMEM platforms.  The new code
makes this possible, and a later patch will implement this
optimization.

Signed-off-by: Eric Biggers
---
 include/crypto/scatterwalk.h | 69 ++++++++++++++++++++++++++++++++----
 1 file changed, 63 insertions(+), 6 deletions(-)

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 5c7765f601e0c..8e83c43016c9d 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -62,16 +62,10 @@ static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 	if (walk->offset >= walk->sg->offset + walk->sg->length)
 		scatterwalk_start(walk, sg_next(walk->sg));
 	return min(nbytes, scatterwalk_pagelen(walk));
 }
 
-static inline void scatterwalk_advance(struct scatter_walk *walk,
-				       unsigned int nbytes)
-{
-	walk->offset += nbytes;
-}
-
 static inline struct page *scatterwalk_page(struct scatter_walk *walk)
 {
 	return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
 }
 
@@ -84,10 +78,28 @@ static inline void *scatterwalk_map(struct scatter_walk *walk)
 {
 	return kmap_local_page(scatterwalk_page(walk)) +
 	       offset_in_page(walk->offset);
 }
 
+/**
+ * scatterwalk_next() - Get the next data buffer in a scatterlist walk
+ * @walk: the scatter_walk
+ * @total: the total number of bytes remaining, > 0
+ * @nbytes_ret: (out) the next number of bytes available, <= @total
+ *
+ * Return: A virtual address for the next segment of data from the scatterlist.
+ *	   The caller must call scatterwalk_done_src() or scatterwalk_done_dst()
+ *	   when it is done using this virtual address.
+ */
+static inline void *scatterwalk_next(struct scatter_walk *walk,
+				     unsigned int total,
+				     unsigned int *nbytes_ret)
+{
+	*nbytes_ret = scatterwalk_clamp(walk, total);
+	return scatterwalk_map(walk);
+}
+
 static inline void scatterwalk_pagedone(struct scatter_walk *walk, int out,
 					unsigned int more)
 {
 	if (out) {
 		struct page *page;
 
@@ -106,10 +118,55 @@ static inline void scatterwalk_done(struct scatter_walk *walk, int out,
 	if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
 	    !(walk->offset & (PAGE_SIZE - 1)))
 		scatterwalk_pagedone(walk, out, more);
 }
 
+static inline void scatterwalk_advance(struct scatter_walk *walk,
+				       unsigned int nbytes)
+{
+	walk->offset += nbytes;
+}
+
+/**
+ * scatterwalk_done_src() - Finish one step of a walk of source scatterlist
+ * @walk: the scatter_walk
+ * @vaddr: the address returned by scatterwalk_next()
+ * @nbytes: the number of bytes processed this step, less than or equal to the
+ *	    number of bytes that scatterwalk_next() returned.
+ *
+ * Use this if the @vaddr was not written to, i.e. it is source data.
+ */
+static inline void scatterwalk_done_src(struct scatter_walk *walk,
+					const void *vaddr, unsigned int nbytes)
+{
+	scatterwalk_unmap((void *)vaddr);
+	scatterwalk_advance(walk, nbytes);
+}
+
+/**
+ * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist
+ * @walk: the scatter_walk
+ * @vaddr: the address returned by scatterwalk_next()
+ * @nbytes: the number of bytes processed this step, less than or equal to the
+ *	    number of bytes that scatterwalk_next() returned.
+ *
+ * Use this if the @vaddr may have been written to, i.e. it is destination data.
+ */
+static inline void scatterwalk_done_dst(struct scatter_walk *walk,
+					void *vaddr, unsigned int nbytes)
+{
+	scatterwalk_unmap(vaddr);
+	/*
+	 * Explicitly check ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE instead of just
+	 * relying on flush_dcache_page() being a no-op when not implemented,
+	 * since otherwise the BUG_ON in sg_page() does not get optimized out.
+	 */
+	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
+		flush_dcache_page(scatterwalk_page(walk));
+	scatterwalk_advance(walk, nbytes);
+}
+
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
 
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);

From patchwork Wed Feb 19 18:23:26 2025
X-Patchwork-Id: 13982638
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 04/19] crypto: scatterwalk - add new functions for copying data
Date: Wed, 19 Feb 2025 10:23:26 -0800
Message-ID: <20250219182341.43961-5-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively.  They follow the same argument order as
memcpy_from_page() and memcpy_to_page() from <linux/highmem.h>.  Note
that in the case of memcpy_from_sglist(), this also happens to be the
same argument order that scatterwalk_map_and_copy() uses.
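As a sketch of the calling convention (the request fields and the
16-byte tag length are illustrative only, not taken from this patch):

	u8 tag[16];

	/* read the auth tag from the end of req->src */
	memcpy_from_sglist(tag, req->src, req->cryptlen - sizeof(tag),
			   sizeof(tag));

	/* write the auth tag to the end of req->dst */
	memcpy_to_sglist(req->dst, req->cryptlen - sizeof(tag), tag,
			 sizeof(tag));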
The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist.  E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.

Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist().  Callers of scatterwalk_map_and_copy() should
be updated to call memcpy_from_sglist() or memcpy_to_sglist() directly,
but there are a lot of them so they aren't all being updated right away.

Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist.  These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively.  Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.

scatterwalk_copychunks() itself is left unchanged for now.  It will be
removed after its callers are updated to use other functions instead.

Signed-off-by: Eric Biggers
---
 crypto/scatterwalk.c         | 59 ++++++++++++++++++++++++++++++------
 include/crypto/scatterwalk.h | 24 +++++++++++++--
 2 files changed, 72 insertions(+), 11 deletions(-)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index af436ad02e3ff..2e7a532152d61 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -65,26 +65,67 @@ void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			scatterwalk_pagedone(walk, out & 1, 1);
 	}
 }
 EXPORT_SYMBOL_GPL(scatterwalk_copychunks);
 
-void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
-			      unsigned int start, unsigned int nbytes, int out)
+inline void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk,
+				    unsigned int nbytes)
+{
+	do {
+		const void *src_addr;
+		unsigned int to_copy;
+
+		src_addr = scatterwalk_next(walk, nbytes, &to_copy);
+		memcpy(buf, src_addr, to_copy);
+		scatterwalk_done_src(walk, src_addr, to_copy);
+		buf += to_copy;
+		nbytes -= to_copy;
+	} while (nbytes);
+}
+EXPORT_SYMBOL_GPL(memcpy_from_scatterwalk);
+
+inline void memcpy_to_scatterwalk(struct scatter_walk *walk, const void *buf,
+				  unsigned int nbytes)
+{
+	do {
+		void *dst_addr;
+		unsigned int to_copy;
+
+		dst_addr = scatterwalk_next(walk, nbytes, &to_copy);
+		memcpy(dst_addr, buf, to_copy);
+		scatterwalk_done_dst(walk, dst_addr, to_copy);
+		buf += to_copy;
+		nbytes -= to_copy;
+	} while (nbytes);
+}
+EXPORT_SYMBOL_GPL(memcpy_to_scatterwalk);
+
+void memcpy_from_sglist(void *buf, struct scatterlist *sg,
+			unsigned int start, unsigned int nbytes)
 {
 	struct scatter_walk walk;
-	struct scatterlist tmp[2];
 
-	if (!nbytes)
+	if (unlikely(nbytes == 0)) /* in case sg == NULL */
 		return;
 
-	sg = scatterwalk_ffwd(tmp, sg, start);
+	scatterwalk_start_at_pos(&walk, sg, start);
+	memcpy_from_scatterwalk(buf, &walk, nbytes);
+}
+EXPORT_SYMBOL_GPL(memcpy_from_sglist);
+
+void memcpy_to_sglist(struct scatterlist *sg, unsigned int start,
+		      const void *buf, unsigned int nbytes)
+{
+	struct scatter_walk walk;
+
+	if (unlikely(nbytes == 0)) /* in case sg == NULL */
+		return;
 
-	scatterwalk_start(&walk, sg);
-	scatterwalk_copychunks(buf, &walk, nbytes, out);
-	scatterwalk_done(&walk, out, 0);
+	scatterwalk_start_at_pos(&walk, sg, start);
+	memcpy_to_scatterwalk(&walk, buf, nbytes);
 }
-EXPORT_SYMBOL_GPL(scatterwalk_map_and_copy);
+EXPORT_SYMBOL_GPL(memcpy_to_sglist);
 
 struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
 				     struct scatterlist *src,
 				     unsigned int len)
 {

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 8e83c43016c9d..1689ecd7ddafa 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -168,12 +168,32 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
 
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);
 
-void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
-			      unsigned int start, unsigned int nbytes, int out);
+void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk,
+			     unsigned int nbytes);
+
+void memcpy_to_scatterwalk(struct scatter_walk *walk, const void *buf,
+			   unsigned int nbytes);
+
+void memcpy_from_sglist(void *buf, struct scatterlist *sg,
+			unsigned int start, unsigned int nbytes);
+
+void memcpy_to_sglist(struct scatterlist *sg, unsigned int start,
+		      const void *buf, unsigned int nbytes);
+
+/* In new code, please use memcpy_{from,to}_sglist() directly instead. */
+static inline void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
+					    unsigned int start,
+					    unsigned int nbytes, int out)
+{
+	if (out)
+		memcpy_to_sglist(sg, start, buf, nbytes);
+	else
+		memcpy_from_sglist(buf, sg, start, nbytes);
+}
 
 struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
 				     struct scatterlist *src,
 				     unsigned int len);

From patchwork Wed Feb 19 18:23:27 2025
X-Patchwork-Id: 13982640
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Boris Pismenny, Jakub Kicinski, John Fastabend
Subject: [PATCH v3 05/19] crypto: scatterwalk - add scatterwalk_get_sglist()
Date: Wed, 19 Feb 2025 10:23:27 -0800
Message-ID: <20250219182341.43961-6-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

Add a function that creates a scatterlist that represents the remaining
data in a walk.  This will be used to replace chain_to_walk() in
net/tls/tls_device_fallback.c so that it will no longer need to reach
into the internals of struct scatter_walk.

Cc: Boris Pismenny
Cc: Jakub Kicinski
Cc: John Fastabend
Signed-off-by: Eric Biggers
---
 include/crypto/scatterwalk.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 1689ecd7ddafa..f6262d05a3c75 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -67,10 +67,27 @@ static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 static inline struct page *scatterwalk_page(struct scatter_walk *walk)
 {
 	return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
 }
 
+/*
+ * Create a scatterlist that represents the remaining data in a walk.  Uses
+ * chaining to reference the original scatterlist, so this uses at most two
+ * entries in @sg_out regardless of the number of entries in the original list.
+ * Assumes that sg_init_table() was already done.
+ */
+static inline void scatterwalk_get_sglist(struct scatter_walk *walk,
+					  struct scatterlist sg_out[2])
+{
+	if (walk->offset >= walk->sg->offset + walk->sg->length)
+		scatterwalk_start(walk, sg_next(walk->sg));
+	sg_set_page(sg_out, sg_page(walk->sg),
+		    walk->sg->offset + walk->sg->length - walk->offset,
+		    walk->offset);
+	scatterwalk_crypto_chain(sg_out, sg_next(walk->sg), 2);
+}
+
 static inline void scatterwalk_unmap(void *vaddr)
 {
 	kunmap_local(vaddr);
 }

From patchwork Wed Feb 19 18:23:28 2025
X-Patchwork-Id: 13982639
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 06/19] crypto: skcipher - use scatterwalk_start_at_pos()
Date: Wed, 19 Feb 2025 10:23:28 -0800
Message-ID: <20250219182341.43961-7-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead
of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2),
and scatterwalk_done().  This is simpler and faster.
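That is, the old sequence (as removed by the diff below, shown here for
the 'in' walk only):

	scatterwalk_start(&walk->in, req->src);
	scatterwalk_copychunks(NULL, &walk->in, req->assoclen, 2);
	scatterwalk_done(&walk->in, 0, walk->total);

collapses into a single call:

	scatterwalk_start_at_pos(&walk->in, req->src, req->assoclen);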
Signed-off-by: Eric Biggers
---
 crypto/skcipher.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index e3751cc88b76e..33508d001f361 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -361,18 +361,12 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	walk->flags = 0;
 
 	if (unlikely(!walk->total))
 		return 0;
 
-	scatterwalk_start(&walk->in, req->src);
-	scatterwalk_start(&walk->out, req->dst);
-
-	scatterwalk_copychunks(NULL, &walk->in, req->assoclen, 2);
-	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
-
-	scatterwalk_done(&walk->in, 0, walk->total);
-	scatterwalk_done(&walk->out, 0, walk->total);
+	scatterwalk_start_at_pos(&walk->in, req->src, req->assoclen);
+	scatterwalk_start_at_pos(&walk->out, req->dst, req->assoclen);
 
 	/*
 	 * Accessing 'alg' directly generates better code than using the
 	 * crypto_aead_blocksize() and similar helper functions here, as it
 	 * prevents the algorithm pointer from being repeatedly reloaded.

From patchwork Wed Feb 19 18:23:29 2025
X-Patchwork-Id: 13982641
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 07/19] crypto: aegis - use the new scatterwalk functions
Date: Wed, 19 Feb 2025 10:23:29 -0800
Message-ID: <20250219182341.43961-8-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Signed-off-by: Eric Biggers
---
 crypto/aegis128-core.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/crypto/aegis128-core.c b/crypto/aegis128-core.c
index 6cbff298722b4..15d64d836356d 100644
--- a/crypto/aegis128-core.c
+++ b/crypto/aegis128-core.c
@@ -282,14 +282,14 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
 	union aegis_block buf;
 	unsigned int pos = 0;
 
 	scatterwalk_start(&walk, sg_src);
 	while (assoclen != 0) {
-		unsigned int size = scatterwalk_clamp(&walk, assoclen);
+		unsigned int size;
+		const u8 *mapped = scatterwalk_next(&walk, assoclen, &size);
 		unsigned int left = size;
-		void *mapped = scatterwalk_map(&walk);
-		const u8 *src = (const u8 *)mapped;
+		const u8 *src = mapped;
 
 		if (pos + size >= AEGIS_BLOCK_SIZE) {
 			if (pos > 0) {
 				unsigned int fill = AEGIS_BLOCK_SIZE - pos;
 				memcpy(buf.bytes + pos, src, fill);
@@ -306,13 +306,11 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
 
 		memcpy(buf.bytes + pos, src, left);
 		pos += left;
 		assoclen -= size;
 
-		scatterwalk_unmap(mapped);
-		scatterwalk_advance(&walk, size);
-		scatterwalk_done(&walk, 0, assoclen);
+		scatterwalk_done_src(&walk, mapped, size);
 	}
 
 	if (pos > 0) {
 		memset(buf.bytes + pos, 0, AEGIS_BLOCK_SIZE - pos);
 		crypto_aegis128_update_a(state, &buf, do_simd);

From patchwork Wed Feb 19 18:23:30 2025
X-Patchwork-Id: 13982642
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 08/19] crypto: arm/ghash - use the new scatterwalk functions
Date: Wed, 19 Feb 2025 10:23:30 -0800
Message-ID: <20250219182341.43961-9-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.

Signed-off-by: Eric Biggers
---
 arch/arm/crypto/ghash-ce-glue.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
index 3af9970825340..9613ffed84f93 100644
--- a/arch/arm/crypto/ghash-ce-glue.c
+++ b/arch/arm/crypto/ghash-ce-glue.c
@@ -457,30 +457,23 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len)
 	int buf_count = 0;
 
 	scatterwalk_start(&walk, req->src);
 
 	do {
-		u32 n = scatterwalk_clamp(&walk, len);
-		u8 *p;
+		unsigned int n;
+		const u8 *p;
 
-		if (!n) {
-			scatterwalk_start(&walk, sg_next(walk.sg));
-			n = scatterwalk_clamp(&walk, len);
-		}
-
-		p = scatterwalk_map(&walk);
+		p = scatterwalk_next(&walk, len, &n);
 
 		gcm_update_mac(dg, p, n, buf, &buf_count, ctx);
-		scatterwalk_unmap(p);
+		scatterwalk_done_src(&walk, p, n);
 
 		if (unlikely(len / SZ_4K > (len - n) / SZ_4K)) {
 			kernel_neon_end();
 			kernel_neon_begin();
 		}
 
 		len -= n;
-		scatterwalk_advance(&walk, n);
-		scatterwalk_done(&walk, 0, len);
 	} while (len);
 
 	if (buf_count) {
 		memset(&buf[buf_count], 0, GHASH_BLOCK_SIZE - buf_count);
 		pmull_ghash_update_p64(1, dg, buf, ctx->h, NULL);

From patchwork Wed Feb 19 18:23:31 2025
X-Patchwork-Id: 13982643
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 09/19] crypto: arm64 - use the new scatterwalk functions
Date: Wed, 19 Feb 2025 10:23:31 -0800
Message-ID: <20250219182341.43961-10-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.

Adjust variable naming slightly to keep things consistent.
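The resulting loop shape, mirrored across the four files below, is
roughly the following (a sketch; update() is a placeholder for the
per-algorithm MAC update):

	scatterwalk_start(&walk, req->src);
	do {
		unsigned int n;
		const u8 *p = scatterwalk_next(&walk, len, &n);

		update(p, n);		/* hypothetical */
		scatterwalk_done_src(&walk, p, n);
		len -= n;
	} while (len);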
Signed-off-by: Eric Biggers
---
 arch/arm64/crypto/aes-ce-ccm-glue.c | 17 ++++------------
 arch/arm64/crypto/ghash-ce-glue.c   | 16 ++++-----------
 arch/arm64/crypto/sm4-ce-ccm-glue.c | 27 ++++++++++---------------
 arch/arm64/crypto/sm4-ce-gcm-glue.c | 31 ++++++++++++-----------------
 4 files changed, 32 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index a2b5d6f20f4d1..1c29546983bfc 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -154,27 +154,18 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 		macp = ce_aes_ccm_auth_data(mac, (u8 *)&ltag, ltag.len, macp,
 					    ctx->key_enc, num_rounds(ctx));
 
 	scatterwalk_start(&walk, req->src);
 
 	do {
-		u32 n = scatterwalk_clamp(&walk, len);
-		u8 *p;
-
-		if (!n) {
-			scatterwalk_start(&walk, sg_next(walk.sg));
-			n = scatterwalk_clamp(&walk, len);
-		}
-		p = scatterwalk_map(&walk);
+		unsigned int n;
+		const u8 *p;
 
+		p = scatterwalk_next(&walk, len, &n);
 		macp = ce_aes_ccm_auth_data(mac, p, n, macp, ctx->key_enc,
 					    num_rounds(ctx));
-
+		scatterwalk_done_src(&walk, p, n);
 		len -= n;
-
-		scatterwalk_unmap(p);
-		scatterwalk_advance(&walk, n);
-		scatterwalk_done(&walk, 0, len);
 	} while (len);
 }
 
 static int ccm_encrypt(struct aead_request *req)
 {

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index da7b7ec1a664e..69d4fb78c30d7 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -306,25 +306,17 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len)
 	int buf_count = 0;
 
 	scatterwalk_start(&walk, req->src);
 
 	do {
-		u32 n = scatterwalk_clamp(&walk, len);
-		u8 *p;
-
-		if (!n) {
-			scatterwalk_start(&walk, sg_next(walk.sg));
-			n = scatterwalk_clamp(&walk, len);
-		}
-		p = scatterwalk_map(&walk);
+		unsigned int n;
+		const u8 *p;
 
+		p = scatterwalk_next(&walk, len, &n);
 		gcm_update_mac(dg, p, n, buf, &buf_count, ctx);
+		scatterwalk_done_src(&walk, p, n);
 		len -= n;
-
-		scatterwalk_unmap(p);
-		scatterwalk_advance(&walk, n);
-		scatterwalk_done(&walk, 0, len);
 	} while (len);
 
 	if (buf_count) {
 		memset(&buf[buf_count], 0, GHASH_BLOCK_SIZE - buf_count);
 		ghash_do_simd_update(1, dg, buf, &ctx->ghash_key, NULL,

diff --git a/arch/arm64/crypto/sm4-ce-ccm-glue.c b/arch/arm64/crypto/sm4-ce-ccm-glue.c
index 5e7e17bbec81e..119f86eb7cc98 100644
--- a/arch/arm64/crypto/sm4-ce-ccm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-ccm-glue.c
@@ -110,21 +110,16 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 		crypto_xor(mac, (const u8 *)&aadlen, len);
 
 	scatterwalk_start(&walk, req->src);
 
 	do {
-		u32 n = scatterwalk_clamp(&walk, assoclen);
-		u8 *p, *ptr;
+		unsigned int n, orig_n;
+		const u8 *p, *orig_p;
 
-		if (!n) {
-			scatterwalk_start(&walk, sg_next(walk.sg));
-			n = scatterwalk_clamp(&walk, assoclen);
-		}
-
-		p = ptr = scatterwalk_map(&walk);
-		assoclen -= n;
-		scatterwalk_advance(&walk, n);
+		orig_p = scatterwalk_next(&walk, assoclen, &orig_n);
+		p = orig_p;
+		n = orig_n;
 
 		while (n > 0) {
 			unsigned int l, nblocks;
 
 			if (len == SM4_BLOCK_SIZE) {
@@ -134,30 +129,30 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 					len = 0;
 				} else {
 					nblocks = n / SM4_BLOCK_SIZE;
 					sm4_ce_cbcmac_update(ctx->rkey_enc,
-							     mac, ptr, nblocks);
+							     mac, p, nblocks);
 
-					ptr += nblocks * SM4_BLOCK_SIZE;
+					p += nblocks * SM4_BLOCK_SIZE;
 					n %= SM4_BLOCK_SIZE;
 
 					continue;
 				}
 			}
 
 			l = min(n, SM4_BLOCK_SIZE - len);
 			if (l) {
-				crypto_xor(mac + len, ptr, l);
+				crypto_xor(mac + len, p, l);
 				len += l;
-				ptr += l;
+				p += l;
 				n -= l;
 			}
 		}
 
-		scatterwalk_unmap(p);
-		scatterwalk_done(&walk, 0, assoclen);
+		scatterwalk_done_src(&walk, orig_p, orig_n);
+		assoclen -= orig_n;
 	} while (assoclen);
 }
 
 static int ccm_crypt(struct aead_request *req, struct skcipher_walk *walk,
 		     u32 *rkey_enc, u8 mac[],

diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index 73bfb6972d3a3..2e27d7752d4f5 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -80,53 +80,48 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[])
 	unsigned int buflen = 0;
 
 	scatterwalk_start(&walk, req->src);
 
 	do {
-		u32 n = scatterwalk_clamp(&walk, assoclen);
-		u8 *p, *ptr;
+		unsigned int n, orig_n;
+		const u8 *p, *orig_p;
 
-		if (!n) {
-			scatterwalk_start(&walk, sg_next(walk.sg));
-			n = scatterwalk_clamp(&walk, assoclen);
-		}
-
-		p = ptr = scatterwalk_map(&walk);
-		assoclen -= n;
-		scatterwalk_advance(&walk, n);
+		orig_p = scatterwalk_next(&walk, assoclen, &orig_n);
+		p = orig_p;
+		n = orig_n;
 
 		if (n + buflen < GHASH_BLOCK_SIZE) {
-			memcpy(&buffer[buflen], ptr, n);
+			memcpy(&buffer[buflen], p, n);
 			buflen += n;
 		} else {
 			unsigned int nblocks;
 
 			if (buflen) {
 				unsigned int l = GHASH_BLOCK_SIZE - buflen;
 
-				memcpy(&buffer[buflen], ptr, l);
-				ptr += l;
+				memcpy(&buffer[buflen], p, l);
+				p += l;
 				n -= l;
 
 				pmull_ghash_update(ctx->ghash_table, ghash,
 						   buffer, 1);
 			}
 
 			nblocks = n / GHASH_BLOCK_SIZE;
 			if (nblocks) {
 				pmull_ghash_update(ctx->ghash_table, ghash,
-						   ptr, nblocks);
-				ptr += nblocks * GHASH_BLOCK_SIZE;
+						   p, nblocks);
+				p += nblocks * GHASH_BLOCK_SIZE;
 			}
 
 			buflen = n % GHASH_BLOCK_SIZE;
 			if (buflen)
-				memcpy(&buffer[0], ptr, buflen);
+				memcpy(&buffer[0], p, buflen);
 		}
 
-		scatterwalk_unmap(p);
-		scatterwalk_done(&walk, 0, assoclen);
+		scatterwalk_done_src(&walk, orig_p, orig_n);
+		assoclen -= orig_n;
 	} while (assoclen);
 
 	/* padding with '0' */
 	if (buflen) {
 		memset(&buffer[buflen], 0, GHASH_BLOCK_SIZE - buflen);

From patchwork Wed Feb 19 18:23:32 2025
X-Patchwork-Id: 13982644
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Christophe Leroy, Madhavan Srinivasan, Michael Ellerman, Naveen N Rao, Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 10/19] crypto: nx - use the new scatterwalk functions
Date: Wed, 19 Feb 2025 10:23:32 -0800
Message-ID: <20250219182341.43961-11-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

- In nx_walk_and_build(), use scatterwalk_start_at_pos() instead of a
  more complex way to achieve the same result.

- Also in nx_walk_and_build(), use the new functions scatterwalk_next()
  which consolidates scatterwalk_clamp() and scatterwalk_map(), and use
  scatterwalk_done_src() which consolidates scatterwalk_unmap(),
  scatterwalk_advance(), and scatterwalk_done().  Remove unnecessary
  code that seemed to be intended to advance to the next sg entry,
  which is already handled by the scatterwalk functions.

  Note that nx_walk_and_build() does not actually read or write the
  mapped virtual address, and thus it is misusing the scatter_walk API.
  It really should just access the scatterlist directly.  This patch
  does not try to address this existing issue.

- In nx_gca(), use memcpy_from_sglist() instead of a more complex way
  to achieve the same result (see the sketch after this list).

- In various functions, replace calls to scatterwalk_map_and_copy()
  with memcpy_from_sglist() or memcpy_to_sglist() as appropriate.  Note
  that this eliminates the confusing 'out' argument (which this driver
  had tried to work around by defining the missing constants for it...)
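For instance, the nx_gca() change below replaces this sequence:

	scatterwalk_start(&walk, req->src);
	scatterwalk_copychunks(out, &walk, nbytes, SCATTERWALK_FROM_SG);
	scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0);

with a single call:

	memcpy_from_sglist(out, req->src, 0, nbytes);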
Cc: Christophe Leroy Cc: Madhavan Srinivasan Cc: Michael Ellerman Cc: Naveen N Rao Cc: Nicholas Piggin Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Eric Biggers --- drivers/crypto/nx/nx-aes-ccm.c | 16 ++++++---------- drivers/crypto/nx/nx-aes-gcm.c | 17 ++++++----------- drivers/crypto/nx/nx.c | 31 +++++-------------------------- drivers/crypto/nx/nx.h | 3 --- 4 files changed, 17 insertions(+), 50 deletions(-) diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c index c843f4c6f684d..56a0b3a67c330 100644 --- a/drivers/crypto/nx/nx-aes-ccm.c +++ b/drivers/crypto/nx/nx-aes-ccm.c @@ -215,17 +215,15 @@ static int generate_pat(u8 *iv, */ if (b1) { memset(b1, 0, 16); if (assoclen <= 65280) { *(u16 *)b1 = assoclen; - scatterwalk_map_and_copy(b1 + 2, req->src, 0, - iauth_len, SCATTERWALK_FROM_SG); + memcpy_from_sglist(b1 + 2, req->src, 0, iauth_len); } else { *(u16 *)b1 = (u16)(0xfffe); *(u32 *)&b1[2] = assoclen; - scatterwalk_map_and_copy(b1 + 6, req->src, 0, - iauth_len, SCATTERWALK_FROM_SG); + memcpy_from_sglist(b1 + 6, req->src, 0, iauth_len); } } /* now copy any remaining AAD to scatterlist and call nx... */ if (!assoclen) { @@ -339,13 +337,12 @@ static int ccm_nx_decrypt(struct aead_request *req, spin_lock_irqsave(&nx_ctx->lock, irq_flags); nbytes -= authsize; /* copy out the auth tag to compare with later */ - scatterwalk_map_and_copy(priv->oauth_tag, - req->src, nbytes + req->assoclen, authsize, - SCATTERWALK_FROM_SG); + memcpy_from_sglist(priv->oauth_tag, req->src, nbytes + req->assoclen, + authsize); rc = generate_pat(iv, req, nx_ctx, authsize, nbytes, assoclen, csbcpb->cpb.aes_ccm.in_pat_or_b0); if (rc) goto out; @@ -463,13 +460,12 @@ static int ccm_nx_encrypt(struct aead_request *req, processed += to_process; } while (processed < nbytes); /* copy out the auth tag */ - scatterwalk_map_and_copy(csbcpb->cpb.aes_ccm.out_pat_or_mac, - req->dst, nbytes + req->assoclen, authsize, - SCATTERWALK_TO_SG); + memcpy_to_sglist(req->dst, nbytes + req->assoclen, + csbcpb->cpb.aes_ccm.out_pat_or_mac, authsize); out: spin_unlock_irqrestore(&nx_ctx->lock, irq_flags); return rc; } diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c index 4a796318b4306..b7fe2de96d962 100644 --- a/drivers/crypto/nx/nx-aes-gcm.c +++ b/drivers/crypto/nx/nx-aes-gcm.c @@ -101,20 +101,17 @@ static int nx_gca(struct nx_crypto_ctx *nx_ctx, u8 *out, unsigned int assoclen) { int rc; struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead; - struct scatter_walk walk; struct nx_sg *nx_sg = nx_ctx->in_sg; unsigned int nbytes = assoclen; unsigned int processed = 0, to_process; unsigned int max_sg_len; if (nbytes <= AES_BLOCK_SIZE) { - scatterwalk_start(&walk, req->src); - scatterwalk_copychunks(out, &walk, nbytes, SCATTERWALK_FROM_SG); - scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0); + memcpy_from_sglist(out, req->src, 0, nbytes); return 0; } NX_CPB_FDM(csbcpb_aead) &= ~NX_FDM_CONTINUATION; @@ -389,23 +386,21 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc, } while (processed < nbytes); mac: if (enc) { /* copy out the auth tag */ - scatterwalk_map_and_copy( - csbcpb->cpb.aes_gcm.out_pat_or_mac, + memcpy_to_sglist( req->dst, req->assoclen + nbytes, - crypto_aead_authsize(crypto_aead_reqtfm(req)), - SCATTERWALK_TO_SG); + csbcpb->cpb.aes_gcm.out_pat_or_mac, + crypto_aead_authsize(crypto_aead_reqtfm(req))); } else { u8 *itag = nx_ctx->priv.gcm.iauth_tag; u8 *otag = csbcpb->cpb.aes_gcm.out_pat_or_mac; - scatterwalk_map_and_copy( + memcpy_from_sglist( itag, req->src, 
req->assoclen + nbytes, - crypto_aead_authsize(crypto_aead_reqtfm(req)), - SCATTERWALK_FROM_SG); + crypto_aead_authsize(crypto_aead_reqtfm(req))); rc = crypto_memneq(itag, otag, crypto_aead_authsize(crypto_aead_reqtfm(req))) ? -EBADMSG : 0; } out: diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c index 010e87d9da36b..dd95e5361d88c 100644 --- a/drivers/crypto/nx/nx.c +++ b/drivers/crypto/nx/nx.c @@ -151,44 +151,23 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *nx_dst, unsigned int start, unsigned int *src_len) { struct scatter_walk walk; struct nx_sg *nx_sg = nx_dst; - unsigned int n, offset = 0, len = *src_len; + unsigned int n, len = *src_len; char *dst; /* we need to fast forward through @start bytes first */ - for (;;) { - scatterwalk_start(&walk, sg_src); - - if (start < offset + sg_src->length) - break; - - offset += sg_src->length; - sg_src = sg_next(sg_src); - } - - /* start - offset is the number of bytes to advance in the scatterlist - * element we're currently looking at */ - scatterwalk_advance(&walk, start - offset); + scatterwalk_start_at_pos(&walk, sg_src, start); while (len && (nx_sg - nx_dst) < sglen) { - n = scatterwalk_clamp(&walk, len); - if (!n) { - /* In cases where we have scatterlist chain sg_next - * handles with it properly */ - scatterwalk_start(&walk, sg_next(walk.sg)); - n = scatterwalk_clamp(&walk, len); - } - dst = scatterwalk_map(&walk); + dst = scatterwalk_next(&walk, len, &n); nx_sg = nx_build_sg_list(nx_sg, dst, &n, sglen - (nx_sg - nx_dst)); - len -= n; - scatterwalk_unmap(dst); - scatterwalk_advance(&walk, n); - scatterwalk_done(&walk, SCATTERWALK_FROM_SG, len); + scatterwalk_done_src(&walk, dst, n); + len -= n; } /* update to_process */ *src_len -= len; /* return the moved destination pointer */ diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h index 2697baebb6a35..e1b4b6927bec3 100644 --- a/drivers/crypto/nx/nx.h +++ b/drivers/crypto/nx/nx.h @@ -187,9 +187,6 @@ extern struct shash_alg nx_shash_aes_xcbc_alg; extern struct shash_alg nx_shash_sha512_alg; extern struct shash_alg nx_shash_sha256_alg; extern struct nx_crypto_driver nx_driver; -#define SCATTERWALK_TO_SG 1 -#define SCATTERWALK_FROM_SG 0 - #endif From patchwork Wed Feb 19 18:23:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982645 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AF15B217F33; Wed, 19 Feb 2025 18:24:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989470; cv=none; b=tKMAyaltsISTmLAttVfAeKKZ6Qw8zsdR+W6Y8RLj0TDhD06vLxx1piMYggNetodWgpu2a4qgd2ZNR6VbfbuBg6vEfKha8r5fvYvf1yool2ne96MNf2jlZNZY+2RBaHmhvbG91A9l/8WLiYPmiRtxAe52ajhcBwQSr10T8Pc25eY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989470; c=relaxed/simple; bh=qzT0MJ9mKrHr5wUabQxbT1zAY3hHrks2Yh9qtZdKQD8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=KLAFfOjg4Mpi+U6usi0BWYT/JRMbQpFfkDSdCcw2+b+CzSm6OTNR++LIC08JXncW+MhJyjb/DtqT4FdXKiw91EKIxfAfwQWgLbH1A4YglORjgm0Vl5Jco5mHt8Z5XXnqDTG1D0HwRa8yH+7tRDBb8b2GrQcm7ba7YVCXivGddDM= ARC-Authentication-Results: i=1; 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=i1T/Ts0k; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="i1T/Ts0k" Received: by smtp.kernel.org (Postfix) with ESMTPSA id EDD44C4CEEC; Wed, 19 Feb 2025 18:24:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989470; bh=qzT0MJ9mKrHr5wUabQxbT1zAY3hHrks2Yh9qtZdKQD8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=i1T/Ts0kwZ+vSYuzOse9KsmIzaVpXGAhwtksjtcvf+Q+Cbj5YF0PE9KBn7dRMXMTy 4zJYEmrqjSbLKfQS5NNQkgGrBSSX0QyfeY9AAV9sWmDjzHGD5TRLdi2FHhI+iUvQDD rV6M6FEoQQM7UEuS5naXQfaFdoCDKPqdfi0Hg4tJrblV8nakTq2rA0j9bLALofYvqR SJz4Xh+hlbwGEwg2rF14tbN/aUgttIREgbj8pzHuz5rDHHHTsfzMNwUMsupRB/F6Jy APMfjNTgNsFDPZdyy9JpoqJ6s0QV0ji1aKUs+8uXAk56Hk1rEqH7op9Kr28eIne+Ti w5FN6NjUHTLFg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Holger Dengler , linux-s390@vger.kernel.org, Harald Freudenberger Subject: [PATCH v3 11/19] crypto: s390/aes-gcm - use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:33 -0800 Message-ID: <20250219182341.43961-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() and scatterwalk_done_dst() which consolidate scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Besides the new functions being a bit easier to use, this is necessary because scatterwalk_done() is planned to be removed. 
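For reference, one full pass over a source walk with the new API has roughly this shape (a sketch with placeholder names, not driver code):

    /* Consume 'remain' bytes from a source scatter_walk. */
    static void consume_src(struct scatter_walk *walk, unsigned int remain)
    {
            while (remain) {
                    unsigned int n;
                    const u8 *p = scatterwalk_next(walk, remain, &n); /* clamp + map */

                    /* ... read the n bytes at p ... */

                    scatterwalk_done_src(walk, p, n); /* unmap + advance */
                    remain -= n;
            }
            /*
             * A destination walk would use scatterwalk_done_dst() instead,
             * which also does the dcache flushing that the old
             * scatterwalk_done() was responsible for.
             */
    }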
Reviewed-by: Harald Freudenberger Tested-by: Harald Freudenberger Cc: Holger Dengler Cc: linux-s390@vger.kernel.org Signed-off-by: Eric Biggers --- arch/s390/crypto/aes_s390.c | 33 +++++++++++++-------------------- 1 file changed, 13 insertions(+), 20 deletions(-) diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index 9c46b1b630b1a..7fd303df05abd 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -785,32 +785,25 @@ static void gcm_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg, scatterwalk_start(&gw->walk, sg); } static inline unsigned int _gcm_sg_clamp_and_map(struct gcm_sg_walk *gw) { - struct scatterlist *nextsg; - - gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain); - while (!gw->walk_bytes) { - nextsg = sg_next(gw->walk.sg); - if (!nextsg) - return 0; - scatterwalk_start(&gw->walk, nextsg); - gw->walk_bytes = scatterwalk_clamp(&gw->walk, - gw->walk_bytes_remain); - } - gw->walk_ptr = scatterwalk_map(&gw->walk); + if (gw->walk_bytes_remain == 0) + return 0; + gw->walk_ptr = scatterwalk_next(&gw->walk, gw->walk_bytes_remain, + &gw->walk_bytes); return gw->walk_bytes; } static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw, - unsigned int nbytes) + unsigned int nbytes, bool out) { gw->walk_bytes_remain -= nbytes; - scatterwalk_unmap(gw->walk_ptr); - scatterwalk_advance(&gw->walk, nbytes); - scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain); + if (out) + scatterwalk_done_dst(&gw->walk, gw->walk_ptr, nbytes); + else + scatterwalk_done_src(&gw->walk, gw->walk_ptr, nbytes); gw->walk_ptr = NULL; } static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) { @@ -842,11 +835,11 @@ static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) while (1) { n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes); memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n); gw->buf_bytes += n; - _gcm_sg_unmap_and_advance(gw, n); + _gcm_sg_unmap_and_advance(gw, n, false); if (gw->buf_bytes >= minbytesneeded) { gw->ptr = gw->buf; gw->nbytes = gw->buf_bytes; goto out; } @@ -902,11 +895,11 @@ static int gcm_in_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) memmove(gw->buf, gw->buf + bytesdone, n); gw->buf_bytes = n; } else gw->buf_bytes = 0; } else - _gcm_sg_unmap_and_advance(gw, bytesdone); + _gcm_sg_unmap_and_advance(gw, bytesdone, false); return bytesdone; } static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) @@ -920,14 +913,14 @@ static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone) for (i = 0; i < bytesdone; i += n) { if (!_gcm_sg_clamp_and_map(gw)) return i; n = min(gw->walk_bytes, bytesdone - i); memcpy(gw->walk_ptr, gw->buf + i, n); - _gcm_sg_unmap_and_advance(gw, n); + _gcm_sg_unmap_and_advance(gw, n, true); } } else - _gcm_sg_unmap_and_advance(gw, bytesdone); + _gcm_sg_unmap_and_advance(gw, bytesdone, true); return bytesdone; } static int gcm_aes_crypt(struct aead_request *req, unsigned int flags) From patchwork Wed Feb 19 18:23:34 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982646 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DCF0121858A; Wed, 19 Feb 2025 
18:24:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; cv=none; b=CWi+9/9f7EIvcfayAbMl2YIfoI3d3pfRTFZoKk8elXeNVQjoZrKvIfr/dWpxxqFg4S+KRLBOgjbOuu1rcBIwR+DiiRAhfWmGYHtKm8Xa4WeAPPGAApPByzyOlKDxv3kvpdyyS8NuX6sRAUy9Aqt9QYtDnBUQQvRFafRPNWylLTo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; c=relaxed/simple; bh=HOiG+O4t1HfyKMWrjgRzX33QT2jP4GmEeXBmoU0A50Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GblpZumGVTWtQV4sLnVPNZx6K2cl9+CEsu8bsRMbGm+tFgBYHT7fpIL/pNWHp8lwB9fJDJZli5b+vLkgFd7L5zLIjGsUjgbSGZhm1hLcWUqhpvjndqNK5gcDLt/TANJFOLrLLEb6g54vtumkK5GlvFwaaMStM4r9iCYel1adxIs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=IPJrzJKy; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="IPJrzJKy" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 49A27C4CEE7; Wed, 19 Feb 2025 18:24:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989470; bh=HOiG+O4t1HfyKMWrjgRzX33QT2jP4GmEeXBmoU0A50Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IPJrzJKyTjzzW5651CV6n/FmWiHbbrYHJPlpLGR8jJjga6vogmJH80C1lBCe9WRr7 taoB9Ut8ch1UHalGw161IFk3yGolguEinnhPeMhWbJtwKNdpcZOf03QY8v2VWCKy18 fl9Q/nm2nnPj0/DD/ZB4oR57O6iSB9SowtWLX0rIpnVgQ5y2xZtzJaEvBzspy0mDh3 rOiZ3dJvv+NI6cLgfUQdyvb9NY5PRGzCVQhXqFllHugz5unoeIhlQ2jRJT4KaQx+dw WU+BFK+3dM71cjDW6FctN6r6mBtLf7y4Ef3xztvb9h3hbC1sWkUzoaBIapd+sobwoE xDuzG1poSqe7A== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Krzysztof Kozlowski , Vladimir Zapolskiy , linux-samsung-soc@vger.kernel.org Subject: [PATCH v3 12/19] crypto: s5p-sss - use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:34 -0800 Message-ID: <20250219182341.43961-13-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers s5p_sg_copy_buf() open-coded a copy from/to a scatterlist using scatterwalk_* functions that are planned for removal. Replace it with the new functions memcpy_from_sglist() and memcpy_to_sglist() instead. Also take the opportunity to replace calls to scatterwalk_map_and_copy() in the same file; this eliminates the confusing 'out' argument. 
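As a usage sketch (placeholder variables, mirroring the hash path below): the 'start' parameter lets the caller copy from an arbitrary byte position in the scatterlist, with no walk object and no 'out' flag:

    u8 tail[64];

    /* Copy the final 'hash_later' bytes of req->src into 'tail'. */
    memcpy_from_sglist(tail, req->src, req->nbytes - hash_later, hash_later);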
Cc: Krzysztof Kozlowski Cc: Vladimir Zapolskiy Cc: linux-samsung-soc@vger.kernel.org Signed-off-by: Eric Biggers --- drivers/crypto/s5p-sss.c | 38 +++++++++++--------------------------- 1 file changed, 11 insertions(+), 27 deletions(-) diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c index 57ab237e899e3..b4c3c14dafd5c 100644 --- a/drivers/crypto/s5p-sss.c +++ b/drivers/crypto/s5p-sss.c @@ -456,34 +456,21 @@ static void s5p_free_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist **sg) kfree(*sg); *sg = NULL; } -static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg, - unsigned int nbytes, int out) -{ - struct scatter_walk walk; - - if (!nbytes) - return; - - scatterwalk_start(&walk, sg); - scatterwalk_copychunks(buf, &walk, nbytes, out); - scatterwalk_done(&walk, out, 0); -} - static void s5p_sg_done(struct s5p_aes_dev *dev) { struct skcipher_request *req = dev->req; struct s5p_aes_reqctx *reqctx = skcipher_request_ctx(req); if (dev->sg_dst_cpy) { dev_dbg(dev->dev, "Copying %d bytes of output data back to original place\n", dev->req->cryptlen); - s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst, - dev->req->cryptlen, 1); + memcpy_to_sglist(dev->req->dst, 0, sg_virt(dev->sg_dst_cpy), + dev->req->cryptlen); } s5p_free_sg_cpy(dev, &dev->sg_src_cpy); s5p_free_sg_cpy(dev, &dev->sg_dst_cpy); if (reqctx->mode & FLAGS_AES_CBC) memcpy_fromio(req->iv, dev->aes_ioaddr + SSS_REG_AES_IV_DATA(0), AES_BLOCK_SIZE); @@ -524,11 +511,11 @@ static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src, kfree(*dst); *dst = NULL; return -ENOMEM; } - s5p_sg_copy_buf(pages, src, dev->req->cryptlen, 0); + memcpy_from_sglist(pages, src, 0, dev->req->cryptlen); sg_init_table(*dst, 1); sg_set_buf(*dst, pages, len); return 0; @@ -1033,12 +1020,11 @@ static int s5p_hash_copy_sgs(struct s5p_hash_reqctx *ctx, } if (ctx->bufcnt) memcpy(buf, ctx->dd->xmit_buf, ctx->bufcnt); - scatterwalk_map_and_copy(buf + ctx->bufcnt, sg, ctx->skip, - new_len, 0); + memcpy_from_sglist(buf + ctx->bufcnt, sg, ctx->skip, new_len); sg_init_table(ctx->sgl, 1); sg_set_buf(ctx->sgl, buf, len); ctx->sg = ctx->sgl; ctx->sg_len = 1; ctx->bufcnt = 0; @@ -1227,12 +1213,11 @@ static int s5p_hash_prepare_request(struct ahash_request *req, bool update) int len = BUFLEN - ctx->bufcnt % BUFLEN; if (len > nbytes) len = nbytes; - scatterwalk_map_and_copy(ctx->buffer + ctx->bufcnt, req->src, - 0, len, 0); + memcpy_from_sglist(ctx->buffer + ctx->bufcnt, req->src, 0, len); ctx->bufcnt += len; nbytes -= len; ctx->skip = len; } else { ctx->skip = 0; @@ -1251,13 +1236,12 @@ static int s5p_hash_prepare_request(struct ahash_request *req, bool update) xmit_len -= xmit_len & (BUFLEN - 1); hash_later = ctx->total - xmit_len; /* copy hash_later bytes from end of req->src */ /* previous bytes are in xmit_buf, so no overwrite */ - scatterwalk_map_and_copy(ctx->buffer, req->src, - req->nbytes - hash_later, - hash_later, 0); + memcpy_from_sglist(ctx->buffer, req->src, + req->nbytes - hash_later, hash_later); } if (xmit_len > BUFLEN) { ret = s5p_hash_prepare_sgs(ctx, req->src, nbytes - hash_later, final); @@ -1265,12 +1249,12 @@ static int s5p_hash_prepare_request(struct ahash_request *req, bool update) return ret; } else { /* have buffered data only */ if (unlikely(!ctx->bufcnt)) { /* first update didn't fill up buffer */ - scatterwalk_map_and_copy(ctx->dd->xmit_buf, req->src, - 0, xmit_len, 0); + memcpy_from_sglist(ctx->dd->xmit_buf, req->src, + 0, xmit_len); } sg_init_table(ctx->sgl, 1); sg_set_buf(ctx->sgl, ctx->dd->xmit_buf, 
xmit_len); @@ -1504,12 +1488,12 @@ static int s5p_hash_update(struct ahash_request *req) if (!req->nbytes) return 0; if (ctx->bufcnt + req->nbytes <= BUFLEN) { - scatterwalk_map_and_copy(ctx->buffer + ctx->bufcnt, req->src, - 0, req->nbytes, 0); + memcpy_from_sglist(ctx->buffer + ctx->bufcnt, req->src, + 0, req->nbytes); ctx->bufcnt += req->nbytes; return 0; } return s5p_hash_enqueue(req, true); /* HASH_OP_UPDATE */ From patchwork Wed Feb 19 18:23:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982647 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0152D2185B8; Wed, 19 Feb 2025 18:24:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; cv=none; b=IvzL+n8VvJEAb+eqEJSEXTM4QE+yz1VisaLSR8AXVeORqy9JnxYpLhyqq32tJTQ7xRAFmTZhIqcsHUyOOLSeBCA+bAke7HoqSRnMcAW1dT7GUFDO+zcGRNdPGPXkQnm060L4z9ISAfhT/PZBJojCzecVtYF9/5pD9dcJOaXsfV0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; c=relaxed/simple; bh=4rt8U10q2E5aveL8U8OSMVVCSPSkb8/hTdHib2bvw+c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=ZTPDNMplKpOd3PSu16tPOcwkVhQ4yghlVdwzrOFn9OoF5Zjxo5pdUAAvzNky8ZJc713lft6xsA+osVe6RI4F1gIU2vMakxskNAtZzFfnTxZ12RTAO37RdHKxIlZBmqCxBSOYMivKdql+sPzSLMLIvkV3gv3Crx0h3+zPsMzs5O0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=t4YclHx8; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="t4YclHx8" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A1C90C4CEEB; Wed, 19 Feb 2025 18:24:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989470; bh=4rt8U10q2E5aveL8U8OSMVVCSPSkb8/hTdHib2bvw+c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=t4YclHx8RkftLzOpCdZakGuM2bVU88+tmdWM9jn+VZz9odi7oJTrY0SRF6P9JA9u1 M/wpSSuB89ToRT4yiITDcg9RqbHoH77qST94FabHswTnJ2T/+LCwXulgoyouQShSBx I0+ex+IcD3tcU4QbL/VI3coGMNLrYL4S9IIsFWh7DvioZpfSRONFT6vdCS467ivZAc oaxh2u+3t2LG+PQOn/x40VHrEriiZ0Fd9GjCD1K6RjVKCjO7ARGqp01IxmR466fQGE QUYmCdFaziA+v7+yVh1kLrYrTWfJefRwxyrgt6SEncLZTuwUQ8rAXpPxa/hiw2CzMt sYv7a3NHnUf7w== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Alexandre Torgue , Maxime Coquelin , =?utf-8?b?TWF4aW1lIE3DqXLDqQ==?= , Thomas Bourgoin , linux-stm32@st-md-mailman.stormreply.com Subject: [PATCH v3 13/19] crypto: stm32 - use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:35 -0800 Message-ID: <20250219182341.43961-14-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Replace calls to the deprecated function scatterwalk_copychunks() with memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), 
scatterwalk_skip(), or scatterwalk_start_at_pos() as appropriate. Cc: Alexandre Torgue Cc: Maxime Coquelin Cc: Maxime Méré Cc: Thomas Bourgoin Cc: linux-stm32@st-md-mailman.stormreply.com Signed-off-by: Eric Biggers --- drivers/crypto/stm32/stm32-cryp.c | 34 +++++++++++++++---------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c index 14c6339c2e43c..5ce88e7a8f657 100644 --- a/drivers/crypto/stm32/stm32-cryp.c +++ b/drivers/crypto/stm32/stm32-cryp.c @@ -664,11 +664,11 @@ static void stm32_cryp_write_ccm_first_header(struct stm32_cryp *cryp) len = 6; } written = min_t(size_t, AES_BLOCK_SIZE - len, alen); - scatterwalk_copychunks((char *)block + len, &cryp->in_walk, written, 0); + memcpy_from_scatterwalk((char *)block + len, &cryp->in_walk, written); writesl(cryp->regs + cryp->caps->din, block, AES_BLOCK_32); cryp->header_in -= written; @@ -991,11 +991,11 @@ static int stm32_cryp_header_dma_start(struct stm32_cryp *cryp) tx_in->callback_param = cryp; tx_in->callback = stm32_cryp_header_dma_callback; /* Advance scatterwalk to not DMA'ed data */ align_size = ALIGN_DOWN(cryp->header_in, cryp->hw_blocksize); - scatterwalk_copychunks(NULL, &cryp->in_walk, align_size, 2); + scatterwalk_skip(&cryp->in_walk, align_size); cryp->header_in -= align_size; ret = dma_submit_error(dmaengine_submit(tx_in)); if (ret < 0) { dev_err(cryp->dev, "DMA in submit failed\n"); @@ -1054,22 +1054,22 @@ static int stm32_cryp_dma_start(struct stm32_cryp *cryp) tx_out->callback = stm32_cryp_dma_callback; tx_out->callback_param = cryp; /* Advance scatterwalk to not DMA'ed data */ align_size = ALIGN_DOWN(cryp->payload_in, cryp->hw_blocksize); - scatterwalk_copychunks(NULL, &cryp->in_walk, align_size, 2); + scatterwalk_skip(&cryp->in_walk, align_size); cryp->payload_in -= align_size; ret = dma_submit_error(dmaengine_submit(tx_in)); if (ret < 0) { dev_err(cryp->dev, "DMA in submit failed\n"); return ret; } dma_async_issue_pending(cryp->dma_lch_in); /* Advance scatterwalk to not DMA'ed data */ - scatterwalk_copychunks(NULL, &cryp->out_walk, align_size, 2); + scatterwalk_skip(&cryp->out_walk, align_size); cryp->payload_out -= align_size; ret = dma_submit_error(dmaengine_submit(tx_out)); if (ret < 0) { dev_err(cryp->dev, "DMA out submit failed\n"); return ret; @@ -1735,13 +1735,13 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req, in_sg = areq->src; out_sg = areq->dst; scatterwalk_start(&cryp->in_walk, in_sg); - scatterwalk_start(&cryp->out_walk, out_sg); /* In output, jump after assoc data */ - scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->areq->assoclen, 2); + scatterwalk_start_at_pos(&cryp->out_walk, out_sg, + areq->assoclen); ret = stm32_cryp_hw_init(cryp); if (ret) return ret; @@ -1871,16 +1871,16 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp) if (is_encrypt(cryp)) { u32 out_tag[AES_BLOCK_32]; /* Get and write tag */ readsl(cryp->regs + cryp->caps->dout, out_tag, AES_BLOCK_32); - scatterwalk_copychunks(out_tag, &cryp->out_walk, cryp->authsize, 1); + memcpy_to_scatterwalk(&cryp->out_walk, out_tag, cryp->authsize); } else { /* Get and check tag */ u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32]; - scatterwalk_copychunks(in_tag, &cryp->in_walk, cryp->authsize, 0); + memcpy_from_scatterwalk(in_tag, &cryp->in_walk, cryp->authsize); readsl(cryp->regs + cryp->caps->dout, out_tag, AES_BLOCK_32); if (crypto_memneq(in_tag, out_tag, cryp->authsize)) ret = -EBADMSG; } @@ -1921,22 +1921,22 @@ static void 
stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp) static void stm32_cryp_irq_read_data(struct stm32_cryp *cryp) { u32 block[AES_BLOCK_32]; readsl(cryp->regs + cryp->caps->dout, block, cryp->hw_blocksize / sizeof(u32)); - scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_out), 1); + memcpy_to_scatterwalk(&cryp->out_walk, block, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out)); cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); } static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp) { u32 block[AES_BLOCK_32] = {0}; - scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_in), 0); + memcpy_from_scatterwalk(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize, + cryp->payload_in)); writesl(cryp->regs + cryp->caps->din, block, cryp->hw_blocksize / sizeof(u32)); cryp->payload_in -= min_t(size_t, cryp->hw_blocksize, cryp->payload_in); } static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) @@ -1979,12 +1979,12 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) * Same code as stm32_cryp_irq_read_data(), but we want to store * block value */ readsl(cryp->regs + cryp->caps->dout, block, cryp->hw_blocksize / sizeof(u32)); - scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_out), 1); + memcpy_to_scatterwalk(&cryp->out_walk, block, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out)); cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); /* d) change mode back to AES GCM */ cfg &= ~CR_ALGO_MASK; @@ -2077,12 +2077,12 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) * Same code as stm32_cryp_irq_read_data(), but we want to store * block value */ readsl(cryp->regs + cryp->caps->dout, block, cryp->hw_blocksize / sizeof(u32)); - scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, - cryp->payload_out), 1); + memcpy_to_scatterwalk(&cryp->out_walk, block, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out)); cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); /* d) Load again CRYP_CSGCMCCMxR */ for (i = 0; i < ARRAY_SIZE(cstmp2); i++) cstmp2[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4); @@ -2159,11 +2159,11 @@ static void stm32_cryp_irq_write_gcmccm_header(struct stm32_cryp *cryp) u32 block[AES_BLOCK_32] = {0}; size_t written; written = min_t(size_t, AES_BLOCK_SIZE, cryp->header_in); - scatterwalk_copychunks(block, &cryp->in_walk, written, 0); + memcpy_from_scatterwalk(block, &cryp->in_walk, written); writesl(cryp->regs + cryp->caps->din, block, AES_BLOCK_32); cryp->header_in -= written; From patchwork Wed Feb 19 18:23:36 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982648 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9C82721A455; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; cv=none; 
b=NfddZh4AOZl9isUgWm0FTKcYCsdUEvYH2REnGwzF7scMVpHmF/CXhwdoc604L6b8vXh4mCejQbPG+MBgxD2eIV/0ILFlSjWccEwS2B7plyaTxjgcWa5OuUUGPooUH4lYDw4A+TX392ypa+uvXQX9C2Vfa5E3HwHcuf7TmSGsP18= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; c=relaxed/simple; bh=DMl5gHfD8qUZ3YL0b5yRXo2ontnZm0E6uzFbcU4n97Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=hvQWXVj0/lcCV7qLF/E7kJOm10n9XXqDhT65oqyyD7edzGfEk9cN4D2tdvAOk/zlgXQ0PmyEk407EHLC5Cr9OxlCBPVCbdJYPCTxuVYB0gAI6k1+Vo1rLgIlFMRgaI2U9q+T/CLgY1wruCw1/qA1DJ/iAho16aJ6n8GbolGOuas= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=pm5aLEls; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="pm5aLEls" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 08CECC4CEDD; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989471; bh=DMl5gHfD8qUZ3YL0b5yRXo2ontnZm0E6uzFbcU4n97Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pm5aLElsFC9f8IRhcPWIG7PYWCKWJe5LAjTvmEo+WqLtfMWdqKMTvpte5ibgOSdhW d9GIcEHvqXx8VCOvVzkt5/RwQ5VRaNHA+c92FDEX5jRm+c4EteQp5lgtm89OkV6v8m czhZWMA/4mYa+9izPUDyJ1p0B9fs6WKMAXzSsXc3dEhn1CoL0pQa6AjwIztVVC+7AA yaEBGcYSnfifqZtYo6NbX3SNWFADfupAEGA7L7kuBQzLT9QypC11vgkxzTQgubKV9h g+ln6hlW8NpYyBkhWMb+VbgWoxDsN2nlSDdoTrMngr+kvFaoEf5k4yzQ0u69rWJtfw HTPivI50Kxevw== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH v3 14/19] crypto: x86/aes-gcm - use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:36 -0800 Message-ID: <20250219182341.43961-15-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In gcm_process_assoc(), use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Also rename some variables to avoid implying that anything is actually mapped (it's not), or that the loop is going page by page (it is for now, but nothing actually requires that to be the case). 
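The renaming supports a common pattern with scatterwalk_done_src(): keep the originally returned pointer and length for the 'done' call, and advance local copies while consuming the segment. Roughly (a sketch, not the exact function):

    static void process_assoc_sketch(struct scatter_walk *walk,
                                     unsigned int assoclen)
    {
            while (assoclen) {
                    unsigned int orig_len;
                    const u8 *orig_src = scatterwalk_next(walk, assoclen,
                                                          &orig_len);
                    const u8 *src = orig_src;
                    unsigned int len = orig_len;

                    /* ... consume the segment, advancing 'src' and reducing
                     * 'len', possibly buffering a partial block ... */

                    scatterwalk_done_src(walk, orig_src, orig_len);
                    assoclen -= orig_len;
            }
    }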
Signed-off-by: Eric Biggers --- arch/x86/crypto/aesni-intel_glue.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 3e0cc15050f32..f963f5c04006d 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -1279,45 +1279,45 @@ static void gcm_process_assoc(const struct aes_gcm_key *key, u8 ghash_acc[16], memset(ghash_acc, 0, 16); scatterwalk_start(&walk, sg_src); while (assoclen) { - unsigned int len_this_page = scatterwalk_clamp(&walk, assoclen); - void *mapped = scatterwalk_map(&walk); - const void *src = mapped; + unsigned int orig_len_this_step; + const u8 *orig_src = scatterwalk_next(&walk, assoclen, + &orig_len_this_step); + unsigned int len_this_step = orig_len_this_step; unsigned int len; + const u8 *src = orig_src; - assoclen -= len_this_page; - scatterwalk_advance(&walk, len_this_page); if (unlikely(pos)) { - len = min(len_this_page, 16 - pos); + len = min(len_this_step, 16 - pos); memcpy(&buf[pos], src, len); pos += len; src += len; - len_this_page -= len; + len_this_step -= len; if (pos < 16) goto next; aes_gcm_aad_update(key, ghash_acc, buf, 16, flags); pos = 0; } - len = len_this_page; + len = len_this_step; if (unlikely(assoclen)) /* Not the last segment yet? */ len = round_down(len, 16); aes_gcm_aad_update(key, ghash_acc, src, len, flags); src += len; - len_this_page -= len; - if (unlikely(len_this_page)) { - memcpy(buf, src, len_this_page); - pos = len_this_page; + len_this_step -= len; + if (unlikely(len_this_step)) { + memcpy(buf, src, len_this_step); + pos = len_this_step; } next: - scatterwalk_unmap(mapped); - scatterwalk_pagedone(&walk, 0, assoclen); + scatterwalk_done_src(&walk, orig_src, orig_len_this_step); if (need_resched()) { kernel_fpu_end(); kernel_fpu_begin(); } + assoclen -= orig_len_this_step; } if (unlikely(pos)) aes_gcm_aad_update(key, ghash_acc, buf, pos, flags); } From patchwork Wed Feb 19 18:23:37 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982650 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E8D7321B9CE; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989472; cv=none; b=CaBzQJm8CTlT8mBLqE/W09bfgUa+liGL5cTaqTtgID/W/i0uAbdDS0hf7CRZ8bBCSSyHD0kSAX/nPQuBq5lrfGHOIlzy4ykCQ3A4dmbO1CBE6ZvsmybJnon/siCBUJiLKlsYlwwlHRIqjQrP5T1mXCDtcn9A5IBvok25DBRuIxQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989472; c=relaxed/simple; bh=e6jH75bm56vh0co03y8l2mrU5UpBPrA2ekMFTLfJovw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VkaJAJo5LO47Lidt1m9BloWNapA9065ROTEonIGwpn4tnDDsErj1u2H50g2E0S80ln1+bcUk3dyZG9y+dimYjBf8+2j4EiCboZjJfkaFYb8xMXxxXOS8fxznzbKXC2GLjOfVJCJmDde4JBm8cKTy804MSdWm3A7p5ws8WLzl+s0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=szreaeud; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit 
key) header.d=kernel.org header.i=@kernel.org header.b="szreaeud" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 43D3CC4CED1; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989471; bh=e6jH75bm56vh0co03y8l2mrU5UpBPrA2ekMFTLfJovw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=szreaeudSyGlyXEDsUCwHo8DEQpRXBjNhy9pD92PJoyBb/e3GJkH6AB+a4+3oRrxt UXXn3jZeFiqiMBA9WG47io0uf6U6coNDD3dIhCBWaQ+a+/1VhurQmuKMrFa5H16kwP MqONiVwbHOLwPqjxV56LVK3BEmnb5XjwVHc9+av6MLDk9GIZ8U7SINvRDaow7JibjQ vAq2zkMC6bNIlsNpkAg1d4hnDz57Q2ypV6c2B+gBtiz7AgWoUxq8tCMGnujxmk5XiM EVUR7A6UZ4vlrmlNXuhgOowEK0R2+JtITaAahHORHuLxo28tJAYg6eOseqK6UR+76D 6/sQXnCH/QU4A== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH v3 15/19] crypto: x86/aegis - use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:37 -0800 Message-ID: <20250219182341.43961-16-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In crypto_aegis128_aesni_process_ad(), use scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Use scatterwalk_done_src() which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done(). Signed-off-by: Eric Biggers --- arch/x86/crypto/aegis128-aesni-glue.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index 01fa568dc5fc4..1bd093d073ed6 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -69,14 +69,14 @@ static void crypto_aegis128_aesni_process_ad( struct aegis_block buf; unsigned int pos = 0; scatterwalk_start(&walk, sg_src); while (assoclen != 0) { - unsigned int size = scatterwalk_clamp(&walk, assoclen); + unsigned int size; + const u8 *mapped = scatterwalk_next(&walk, assoclen, &size); unsigned int left = size; - void *mapped = scatterwalk_map(&walk); - const u8 *src = (const u8 *)mapped; + const u8 *src = mapped; if (pos + size >= AEGIS128_BLOCK_SIZE) { if (pos > 0) { unsigned int fill = AEGIS128_BLOCK_SIZE - pos; memcpy(buf.bytes + pos, src, fill); @@ -95,13 +95,11 @@ static void crypto_aegis128_aesni_process_ad( memcpy(buf.bytes + pos, src, left); pos += left; assoclen -= size; - scatterwalk_unmap(mapped); - scatterwalk_advance(&walk, size); - scatterwalk_done(&walk, 0, assoclen); + scatterwalk_done_src(&walk, mapped, size); } if (pos > 0) { memset(buf.bytes + pos, 0, AEGIS128_BLOCK_SIZE - pos); aegis128_aesni_ad(state, buf.bytes, AEGIS128_BLOCK_SIZE); From patchwork Wed Feb 19 18:23:38 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982649 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CB2B421B1B5; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1739989471; cv=none; b=U25LTH5DjJnbGI2GJ9KR4NpJTbGg2UpYjnheSSKM5inriUHM46Hgqp8hVuOEuQzllhyBALTZlwS01PdQdcqdeRl0d2DbXNoixiB5WaSuWeFjzopIL1uQ0z5ALEzr8abrtVStPlUyyJX3zOtpp24A8ErhEAKQLhpvxR/lPwRI4BM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989471; c=relaxed/simple; bh=cXUqK6gzrQ6Jb8f2XQHgstHPltOXwTTStbk2H1bPmow=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Xygc26FoTw8ZRlkCkKjGWW2cyu/KGdZtMqufiblvilP58y+068vsKhcOrXCf+arC5xDYgcipidgJjgwHMXDuGrcTx0t/75ZDGA0/79YYrF0ZBiDSKx4fu51whM4+EHmXuGzp+3pliIbOcwALEAMHCiDWkb/dFLIOzwjZ8N0NC1M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=kBs4SFo1; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="kBs4SFo1" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7F8FDC4CEE9; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989471; bh=cXUqK6gzrQ6Jb8f2XQHgstHPltOXwTTStbk2H1bPmow=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kBs4SFo1yJD+EOXqO74HzIV7aOWP8Zs4387sigrp++bBLIaVkEQxhnBDdW+6dEGh1 AlrcIasD9/EKoOd+GGwmKXWPq1yFkUkv/4/LjCpv8sGPBZSQ9BdBVncrONvEKO7Y4p sga/5WKolx5qUc5EJUO/wYsK3nRygi4L9W8NPfUoIeQx/ryqCHRBMCgJGxBaQkv0MC Umj1Q5PtHGD8INYDOVPMA6BZBzRFqSxy1abrg9k7WsPvv2uXqlr7YHRxWyeI4xByh4 7RfW1OxrI3s8FVHx48kjN+dnVt4kYd67egfiD2N0USjGVVYV5e/qe9zL+ahymajH0t BYu9hHiyjDqWw== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Boris Pismenny , Jakub Kicinski , John Fastabend Subject: [PATCH v3 16/19] net/tls: use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:38 -0800 Message-ID: <20250219182341.43961-17-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Replace calls to the deprecated function scatterwalk_copychunks() with memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), or scatterwalk_skip() as appropriate. The new functions generally behave more as expected and eliminate the need to call scatterwalk_done() or scatterwalk_pagedone(). However, the new functions intentionally do not advance to the next sg entry right away, which would have broken chain_to_walk() which is accessing the fields of struct scatter_walk directly. To avoid this, replace chain_to_walk() with scatterwalk_get_sglist() which supports the needed functionality. 
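Usage, as in the hunk below (a sketch; the array size is illustrative): scatterwalk_get_sglist() fills in up to two sg entries describing the rest of the walk, chaining to the remainder of the original scatterlist:

    struct scatterlist sg_in[3]; /* 1 for the AAD + 2 for the chained remainder */

    sg_init_table(sg_in, ARRAY_SIZE(sg_in));
    sg_set_buf(sg_in, aad, TLS_AAD_SPACE_SIZE);
    scatterwalk_get_sglist(in, sg_in + 1); /* was: chain_to_walk(sg_in + 1, in) */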
Cc: Boris Pismenny Cc: Jakub Kicinski Cc: John Fastabend Signed-off-by: Eric Biggers --- net/tls/tls_device_fallback.c | 31 ++++++------------------------- 1 file changed, 6 insertions(+), 25 deletions(-) diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c index f9e3d3d90dcf5..03d508a45aaee 100644 --- a/net/tls/tls_device_fallback.c +++ b/net/tls/tls_device_fallback.c @@ -35,21 +35,10 @@ #include #include #include "tls.h" -static void chain_to_walk(struct scatterlist *sg, struct scatter_walk *walk) -{ - struct scatterlist *src = walk->sg; - int diff = walk->offset - src->offset; - - sg_set_page(sg, sg_page(src), - src->length - diff, walk->offset); - - scatterwalk_crypto_chain(sg, sg_next(src), 2); -} - static int tls_enc_record(struct aead_request *aead_req, struct crypto_aead *aead, char *aad, char *iv, __be64 rcd_sn, struct scatter_walk *in, struct scatter_walk *out, int *in_len, @@ -67,20 +56,17 @@ static int tls_enc_record(struct aead_request *aead_req, DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable); buf_size = TLS_HEADER_SIZE + cipher_desc->iv; len = min_t(int, *in_len, buf_size); - scatterwalk_copychunks(buf, in, len, 0); - scatterwalk_copychunks(buf, out, len, 1); + memcpy_from_scatterwalk(buf, in, len); + memcpy_to_scatterwalk(out, buf, len); *in_len -= len; if (!*in_len) return 0; - scatterwalk_pagedone(in, 0, 1); - scatterwalk_pagedone(out, 1, 1); - len = buf[4] | (buf[3] << 8); len -= cipher_desc->iv; tls_make_aad(aad, len - cipher_desc->tag, (char *)&rcd_sn, buf[0], prot); @@ -88,12 +74,12 @@ static int tls_enc_record(struct aead_request *aead_req, sg_init_table(sg_in, ARRAY_SIZE(sg_in)); sg_init_table(sg_out, ARRAY_SIZE(sg_out)); sg_set_buf(sg_in, aad, TLS_AAD_SPACE_SIZE); sg_set_buf(sg_out, aad, TLS_AAD_SPACE_SIZE); - chain_to_walk(sg_in + 1, in); - chain_to_walk(sg_out + 1, out); + scatterwalk_get_sglist(in, sg_in + 1); + scatterwalk_get_sglist(out, sg_out + 1); *in_len -= len; if (*in_len < 0) { *in_len += cipher_desc->tag; /* the input buffer doesn't contain the entire record. @@ -108,14 +94,12 @@ static int tls_enc_record(struct aead_request *aead_req, *in_len = 0; } if (*in_len) { - scatterwalk_copychunks(NULL, in, len, 2); - scatterwalk_pagedone(in, 0, 1); - scatterwalk_copychunks(NULL, out, len, 2); - scatterwalk_pagedone(out, 1, 1); + scatterwalk_skip(in, len); + scatterwalk_skip(out, len); } len -= cipher_desc->tag; aead_request_set_crypt(aead_req, sg_in, sg_out, len, iv); @@ -160,13 +144,10 @@ static int tls_enc_records(struct aead_request *aead_req, cpu_to_be64(rcd_sn), &in, &out, &len, prot); rcd_sn++; } while (rc == 0 && len); - scatterwalk_done(&in, 0, 0); - scatterwalk_done(&out, 1, 0); - return rc; } /* Can't use icsk->icsk_af_ops->send_check here because the ip addresses * might have been changed by NAT. 
From patchwork Wed Feb 19 18:23:39 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982651 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1266F21B9DE; Wed, 19 Feb 2025 18:24:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989472; cv=none; b=lAAGTYsFo35m7+EqhsgaFz8VmwEAaxOTT75DotCcaQEtDHmv/FPrt1c/oSnj/TLPPtPwRS+PybCeHz+BPUA5s4r4ulPj1F65L4ev1y13FOqcKZdOqQ1lj+EJafLtp/XSPCeL4VFzAHZpc31mYkzkoCW3FXoJjozfpIuZznx7bic= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989472; c=relaxed/simple; bh=O7RPYfjnYfLaOxt45Eob56Z/zyh6ZSMPzVlUtammgZM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=E1H75hAhY27FiU6ARb0fmkTcjXz2SuF/WKmgHoKtK63oblDfu4XpBNa5mea8zO3LcS4nP+YMYVAYqx4wbjP/tJ/vRhFbMBsaJ4i5HFLwvKNUohXLPG5krAw4eeSzEp6x6sxnDDmFyQs0B0Xy21tjKWzOYWWu5+pB0hwD5KoWFy4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=H9UEEboq; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="H9UEEboq" Received: by smtp.kernel.org (Postfix) with ESMTPSA id CE845C4CEE0; Wed, 19 Feb 2025 18:24:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989471; bh=O7RPYfjnYfLaOxt45Eob56Z/zyh6ZSMPzVlUtammgZM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H9UEEboq4ARdzWf1aAgGNsYR0NOLiOiwrmEv50htPRV1RE+WhWZJsoJAtFG1cAYZ5 6vde5N4B/P0tUwgAv1RQ98ORJMVBaS4S7oHG7OyXHsDXHRjbK0cL8owJ+g9GS3H8zP fRSI0JOuURqk37CdqSJ5A+0TfZYIn4bPtVfWWI5xq5abogtvemgp+rIyFyr4l9HPbI +QfJi4169ZZd0Jhr7uYZeJlcpk4OLCwQdXqRTHFyw8zpEzRUwvf5OcUBVqR7TOipPS A8W5I8bu+y7wtsxkVcc1CNnY2G2TtN7zYxJkMBCikdlcu60ywnvtnLzpYAvNSMc7QA 9ccan5zgiIT6A== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH v3 17/19] crypto: skcipher - use the new scatterwalk functions Date: Wed, 19 Feb 2025 10:23:39 -0800 Message-ID: <20250219182341.43961-18-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Convert skcipher_walk to use the new scatterwalk functions. This includes a few changes to exactly where the different parts of the iteration happen. For example, the dcache flush that previously happened in scatterwalk_done() now happens in scatterwalk_done_dst() or in memcpy_to_scatterwalk(). Advancing to the next sg entry now happens just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().
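The just-in-time advance now lives in scatterwalk_clamp() itself, which does roughly the following before clamping (simplified from the scatterwalk_clamp() hunk in the final patch of this series):

    /* Advance to the next sg entry only once the current one is used up. */
    if (walk->offset >= walk->sg->offset + walk->sg->length)
            scatterwalk_start(walk, sg_next(walk->sg));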
Signed-off-by: Eric Biggers --- crypto/skcipher.c | 51 ++++++++++++++++++----------------------------- 1 file changed, 19 insertions(+), 32 deletions(-) diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 33508d001f361..0a78a96d8583d 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c @@ -47,20 +47,10 @@ static inline void skcipher_map_src(struct skcipher_walk *walk) static inline void skcipher_map_dst(struct skcipher_walk *walk) { walk->dst.virt.addr = scatterwalk_map(&walk->out); } -static inline void skcipher_unmap_src(struct skcipher_walk *walk) -{ - scatterwalk_unmap(walk->src.virt.addr); -} - -static inline void skcipher_unmap_dst(struct skcipher_walk *walk) -{ - scatterwalk_unmap(walk->dst.virt.addr); -} - static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk) { return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC; } @@ -68,18 +58,10 @@ static inline struct skcipher_alg *__crypto_skcipher_alg( struct crypto_alg *alg) { return container_of(alg, struct skcipher_alg, base); } -static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize) -{ - u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1); - - scatterwalk_copychunks(addr, &walk->out, bsize, 1); - return 0; -} - /** * skcipher_walk_done() - finish one step of a skcipher_walk * @walk: the skcipher_walk * @res: number of bytes *not* processed (>= 0) from walk->nbytes, * or a -errno value to terminate the walk due to an error @@ -110,44 +92,45 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res) } if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY | SKCIPHER_WALK_DIFF)))) { -unmap_src: - skcipher_unmap_src(walk); + scatterwalk_advance(&walk->in, n); } else if (walk->flags & SKCIPHER_WALK_DIFF) { - skcipher_unmap_dst(walk); - goto unmap_src; + scatterwalk_unmap(walk->src.virt.addr); + scatterwalk_advance(&walk->in, n); } else if (walk->flags & SKCIPHER_WALK_COPY) { + scatterwalk_advance(&walk->in, n); skcipher_map_dst(walk); memcpy(walk->dst.virt.addr, walk->page, n); - skcipher_unmap_dst(walk); } else { /* SKCIPHER_WALK_SLOW */ if (res > 0) { /* * Didn't process all bytes. Either the algorithm is * broken, or this was the last step and it turned out * the message wasn't evenly divisible into blocks but * the algorithm requires it. 
*/ res = -EINVAL; total = 0; - } else - n = skcipher_done_slow(walk, n); + } else { + u8 *buf = PTR_ALIGN(walk->buffer, walk->alignmask + 1); + + memcpy_to_scatterwalk(&walk->out, buf, n); + } + goto dst_done; } + scatterwalk_done_dst(&walk->out, walk->dst.virt.addr, n); +dst_done: + if (res > 0) res = 0; walk->total = total; walk->nbytes = 0; - scatterwalk_advance(&walk->in, n); - scatterwalk_advance(&walk->out, n); - scatterwalk_done(&walk->in, 0, total); - scatterwalk_done(&walk->out, 1, total); - if (total) { if (walk->flags & SKCIPHER_WALK_SLEEP) cond_resched(); walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY | SKCIPHER_WALK_DIFF); @@ -190,11 +173,11 @@ static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize) walk->buffer = buffer; } walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1); walk->src.virt.addr = walk->dst.virt.addr; - scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0); + memcpy_from_scatterwalk(walk->src.virt.addr, &walk->in, bsize); walk->nbytes = bsize; walk->flags |= SKCIPHER_WALK_SLOW; return 0; @@ -204,11 +187,15 @@ static int skcipher_next_copy(struct skcipher_walk *walk) { u8 *tmp = walk->page; skcipher_map_src(walk); memcpy(tmp, walk->src.virt.addr, walk->nbytes); - skcipher_unmap_src(walk); + scatterwalk_unmap(walk->src.virt.addr); + /* + * walk->in is advanced later when the number of bytes actually + * processed (which might be less than walk->nbytes) is known. + */ walk->src.virt.addr = tmp; walk->dst.virt.addr = tmp; return 0; } From patchwork Wed Feb 19 18:23:40 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13982652 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5268621C9FA; Wed, 19 Feb 2025 18:24:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989472; cv=none; b=U42j79IASLxAxZltGBVf5OxyAoGSz7tL+IkNIwkq0OXBOyn6eFpftHFI8Gx686Pv7Oxdri0K55Qd9crXlZfoifiAyVpYXeAzU/+DtQ/Kj+veUIHziP53yLgbQtNnS3I6RGF46G0nH0ROrhjOMoRptDc+TqNTbY9k6n6AXzpoVXo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1739989472; c=relaxed/simple; bh=4lMjZPPR5tAt3RCT7E5nLuW/hwq2SL9e/KRXA8qjeXs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=JDNkbwiAW4HaSQzdShZkmL+IdEoRp1W2ajDJudVqb9/a0Zv7LbXIIlTMSi38aKl02JJzjfrxAOt40SkbU+xju/bH2UIGKAa5xhS2zFQVIeSdHqtD51rZdiqyMbWH625+NN0ceBSeERQmfIU+mKn5C4SKRPVlj3CtzMcq02OkliU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=bl9Z7AZM; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="bl9Z7AZM" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 14EB5C4CEEB; Wed, 19 Feb 2025 18:24:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739989472; bh=4lMjZPPR5tAt3RCT7E5nLuW/hwq2SL9e/KRXA8qjeXs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bl9Z7AZM1wlXC9fPoO45VIQY87igC9vxRgXSke8uCaQeZYNTiTzqp2+WWyXogYmdq 
2QHv7M4z+VAvoW4lgPcW5B6Ejw5ydbKYWrunmuZTDq/srEW6AVzvFl4Kjg2008gNkr QBc7mUnkW/aw6Cnu/k98SCi1BjW7m5JhCY37A9jNNjKqiL2+BsvXxJ/jmWrOsx7Fys qwUPOZfN/WhAjwpnvOI+PLhtJS5YEehiGG7wlxFWTIvOBtewyPgwZVoJlXpPK+Lnth gLFOaCA8vYTx53EHEgKy0W0U1ZcCaOQcsRU1pffLufUX1wGhgvpybQ5xawDpCJWriX wJ2FTSNJxPRBA== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH v3 18/19] crypto: scatterwalk - remove obsolete functions Date: Wed, 19 Feb 2025 10:23:40 -0800 Message-ID: <20250219182341.43961-19-ebiggers@kernel.org> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org> References: <20250219182341.43961-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Remove various functions that are no longer used. Signed-off-by: Eric Biggers --- crypto/scatterwalk.c | 37 ------------------------------------ include/crypto/scatterwalk.h | 25 ------------------------ 2 files changed, 62 deletions(-) diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c index 2e7a532152d61..87c080f565d45 100644 --- a/crypto/scatterwalk.c +++ b/crypto/scatterwalk.c @@ -28,47 +28,10 @@ void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes) walk->sg = sg; walk->offset = sg->offset + nbytes; } EXPORT_SYMBOL_GPL(scatterwalk_skip); -static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out) -{ - void *src = out ? buf : sgdata; - void *dst = out ? sgdata : buf; - - memcpy(dst, src, nbytes); -} - -void scatterwalk_copychunks(void *buf, struct scatter_walk *walk, - size_t nbytes, int out) -{ - for (;;) { - unsigned int len_this_page = scatterwalk_pagelen(walk); - u8 *vaddr; - - if (len_this_page > nbytes) - len_this_page = nbytes; - - if (out != 2) { - vaddr = scatterwalk_map(walk); - memcpy_dir(buf, vaddr, len_this_page, out); - scatterwalk_unmap(vaddr); - } - - scatterwalk_advance(walk, len_this_page); - - if (nbytes == len_this_page) - break; - - buf += len_this_page; - nbytes -= len_this_page; - - scatterwalk_pagedone(walk, out & 1, 1); - } -} -EXPORT_SYMBOL_GPL(scatterwalk_copychunks); - inline void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk, unsigned int nbytes) { do { const void *src_addr; diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h index f6262d05a3c75..ac03fdf88b2a0 100644 --- a/include/crypto/scatterwalk.h +++ b/include/crypto/scatterwalk.h @@ -113,32 +113,10 @@ static inline void *scatterwalk_next(struct scatter_walk *walk, { *nbytes_ret = scatterwalk_clamp(walk, total); return scatterwalk_map(walk); } -static inline void scatterwalk_pagedone(struct scatter_walk *walk, int out, - unsigned int more) -{ - if (out) { - struct page *page; - - page = sg_page(walk->sg) + ((walk->offset - 1) >> PAGE_SHIFT); - flush_dcache_page(page); - } - - if (more && walk->offset >= walk->sg->offset + walk->sg->length) - scatterwalk_start(walk, sg_next(walk->sg)); -} - -static inline void scatterwalk_done(struct scatter_walk *walk, int out, - int more) -{ - if (!more || walk->offset >= walk->sg->offset + walk->sg->length || - !(walk->offset & (PAGE_SIZE - 1))) - scatterwalk_pagedone(walk, out, more); -} - static inline void scatterwalk_advance(struct scatter_walk *walk, unsigned int nbytes) { walk->offset += nbytes; } @@ -182,13 +160,10 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk, scatterwalk_advance(walk, nbytes); } void scatterwalk_skip(struct 
From patchwork Wed Feb 19 18:23:41 2025
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13982653
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v3 19/19] crypto: scatterwalk - don't split at page boundaries when !HIGHMEM
Date: Wed, 19 Feb 2025 10:23:41 -0800
Message-ID: <20250219182341.43961-20-ebiggers@kernel.org>
In-Reply-To: <20250219182341.43961-1-ebiggers@kernel.org>
References: <20250219182341.43961-1-ebiggers@kernel.org>

From: Eric Biggers

When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous.  To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.
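
As a sketch of the property relied on here (illustrative only, not part
of the patch): an sg entry describes physically contiguous data, and
without HIGHMEM every page is permanently mapped, so the whole entry can
be addressed through the direct map without any per-page kmap:

#include <linux/highmem.h>
#include <linux/scatterlist.h>

#ifndef CONFIG_HIGHMEM
/* Same address that kmap_local_page(sg_page(sg)) + sg->offset yields. */
static const void *sg_entry_vaddr(struct scatterlist *sg)
{
	return page_address(sg_page(sg)) + sg->offset;
}
#endif
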
For now, still limit segments to PAGE_SIZE.  This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.

Even so, this change still helps a lot in cases where messages cross a
page boundary.  For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes, which is less than PAGE_SIZE, but on the Rx
side over a third cross a page boundary.  Those messages ended up being
processed in three parts, with the middle part going through
skcipher_next_slow(), which uses a 16-byte bounce buffer.  That was
causing a significant amount of overhead which unnecessarily reduced
the performance benefit of the new x86_64 AES-GCM assembly code.  This
change solves the problem; all these messages now get passed to the
assembly code in one part.

Signed-off-by: Eric Biggers
---
 crypto/skcipher.c            |  4 +-
 include/crypto/scatterwalk.h | 79 ++++++++++++++++++++++++++----------
 2 files changed, 59 insertions(+), 24 deletions(-)
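
A worked example of the new scatterwalk_clamp() behavior from the diff
below, using hypothetical numbers (4096-byte pages):

/*
 * sg entry: offset = 3000, length = 6000; walk->offset = 3000;
 * the caller requests nbytes = 8192.
 *
 *   len_this_sg = 3000 + 6000 - 3000 = 6000
 *
 * HIGHMEM:  limit = 4096 - offset_in_page(3000) = 1096
 *           min3(8192, 6000, 1096) = 1096  (stops at the page boundary)
 *
 * !HIGHMEM: limit = 4096
 *           min3(8192, 6000, 4096) = 4096  (crosses the page boundary)
 */
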
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 0a78a96d8583d..7506a46cf8e0d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -204,12 +204,12 @@ static int skcipher_next_fast(struct skcipher_walk *walk)
 {
 	unsigned long diff;
 
 	diff = offset_in_page(walk->in.offset) -
 	       offset_in_page(walk->out.offset);
-	diff |= (u8 *)scatterwalk_page(&walk->in) -
-		(u8 *)scatterwalk_page(&walk->out);
+	diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) -
+		(u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT));
 
 	skcipher_map_src(walk);
 	walk->dst.virt.addr = walk->src.virt.addr;
 
 	if (diff) {
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index ac03fdf88b2a0..3024adbdd443b 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -47,28 +47,39 @@ static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
 	}
 	walk->sg = sg;
 	walk->offset = sg->offset + pos;
 }
 
-static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
-{
-	unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
-	unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
-	return len_this_page > len ? len : len_this_page;
-}
-
 static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
 					     unsigned int nbytes)
 {
+	unsigned int len_this_sg;
+	unsigned int limit;
+
 	if (walk->offset >= walk->sg->offset + walk->sg->length)
 		scatterwalk_start(walk, sg_next(walk->sg));
-	return min(nbytes, scatterwalk_pagelen(walk));
-}
+	len_this_sg = walk->sg->offset + walk->sg->length - walk->offset;
 
-static inline struct page *scatterwalk_page(struct scatter_walk *walk)
-{
-	return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
+	/*
+	 * HIGHMEM case: the page may have to be mapped into memory.  To avoid
+	 * the complexity of having to map multiple pages at once per sg entry,
+	 * clamp the returned length to not cross a page boundary.
+	 *
+	 * !HIGHMEM case: no mapping is needed; all pages of the sg entry are
+	 * already mapped contiguously in the kernel's direct map.  For
+	 * improved performance, allow the walker to return data segments that
+	 * cross a page boundary.  Do still cap the length to PAGE_SIZE, since
+	 * some users rely on that to avoid disabling preemption for too long
+	 * when using SIMD.  It's also needed for when skcipher_walk uses a
+	 * bounce page due to the data not being aligned to the algorithm's
+	 * alignmask.
+	 */
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		limit = PAGE_SIZE - offset_in_page(walk->offset);
+	else
+		limit = PAGE_SIZE;
+
+	return min3(nbytes, len_this_sg, limit);
 }
 
 /*
  * Create a scatterlist that represents the remaining data in a walk.  Uses
  * chaining to reference the original scatterlist, so this uses at most two
@@ -84,19 +95,27 @@ static inline void scatterwalk_get_sglist(struct scatter_walk *walk,
 			    walk->sg->offset + walk->sg->length - walk->offset,
 			    walk->offset);
 	scatterwalk_crypto_chain(sg_out, sg_next(walk->sg), 2);
 }
 
-static inline void scatterwalk_unmap(void *vaddr)
-{
-	kunmap_local(vaddr);
-}
-
 static inline void *scatterwalk_map(struct scatter_walk *walk)
 {
-	return kmap_local_page(scatterwalk_page(walk)) +
-	       offset_in_page(walk->offset);
+	struct page *base_page = sg_page(walk->sg);
+
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		return kmap_local_page(base_page + (walk->offset >> PAGE_SHIFT)) +
+		       offset_in_page(walk->offset);
+	/*
+	 * When !HIGHMEM we allow the walker to return segments that span a
+	 * page boundary; see scatterwalk_clamp().  To make it clear that in
+	 * this case we're working in the linear buffer of the whole sg entry
+	 * in the kernel's direct map rather than within the mapped buffer of
+	 * a single page, compute the address as an offset from the
+	 * page_address() of the first page of the sg entry.  Either way the
+	 * result is the address in the direct map, but this makes it clearer
+	 * what is really going on.
+	 */
+	return page_address(base_page) + walk->offset;
 }
 
 /**
  * scatterwalk_next() - Get the next data buffer in a scatterlist walk
  * @walk: the scatter_walk
@@ -113,10 +132,16 @@ static inline void *scatterwalk_next(struct scatter_walk *walk,
 {
 	*nbytes_ret = scatterwalk_clamp(walk, total);
 	return scatterwalk_map(walk);
 }
 
+static inline void scatterwalk_unmap(const void *vaddr)
+{
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		kunmap_local(vaddr);
+}
+
 static inline void scatterwalk_advance(struct scatter_walk *walk,
 				       unsigned int nbytes)
 {
 	walk->offset += nbytes;
 }
@@ -131,11 +156,11 @@ static inline void scatterwalk_advance(struct scatter_walk *walk,
  * Use this if the @vaddr was not written to, i.e. it is source data.
  */
 static inline void scatterwalk_done_src(struct scatter_walk *walk,
 					const void *vaddr, unsigned int nbytes)
 {
-	scatterwalk_unmap((void *)vaddr);
+	scatterwalk_unmap(vaddr);
 	scatterwalk_advance(walk, nbytes);
 }
 
 /**
  * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist
@@ -152,13 +177,23 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
 	scatterwalk_unmap(vaddr);
 	/*
 	 * Explicitly check ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE instead of just
 	 * relying on flush_dcache_page() being a no-op when not implemented,
 	 * since otherwise the BUG_ON in sg_page() does not get optimized out.
+	 * This also avoids having to consider whether the loop would get
+	 * reliably optimized out or not.
 	 */
-	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
-		flush_dcache_page(scatterwalk_page(walk));
+	if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
+		struct page *base_page, *start_page, *end_page, *page;
+
+		base_page = sg_page(walk->sg);
+		start_page = base_page + (walk->offset >> PAGE_SHIFT);
+		end_page = base_page + ((walk->offset + nbytes +
+					 PAGE_SIZE - 1) >> PAGE_SHIFT);
+		for (page = start_page; page < end_page; page++)
+			flush_dcache_page(page);
+	}
 	scatterwalk_advance(walk, nbytes);
 }
 
 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes);
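
With the series complete, a walk over source data reduces to a simple
loop.  A minimal sketch, assuming the scatterwalk_next(walk, total,
&nbytes) parameter order implied by its body above; the hashing caller
is hypothetical:

#include <crypto/hash.h>
#include <crypto/scatterwalk.h>

/* Hypothetical user: feed 'total' bytes of 'sg' into a shash. */
static int example_hash_sg(struct shash_desc *desc,
			   struct scatterlist *sg, unsigned int total)
{
	struct scatter_walk walk;

	scatterwalk_start(&walk, sg);
	while (total) {
		unsigned int nbytes;
		/* Maps (HIGHMEM) or just addresses (!HIGHMEM) a segment. */
		const void *addr = scatterwalk_next(&walk, total, &nbytes);
		int err = crypto_shash_update(desc, addr, nbytes);

		/* Unmaps if needed and advances the walk. */
		scatterwalk_done_src(&walk, addr, nbytes);
		if (err)
			return err;
		total -= nbytes;
	}
	return 0;
}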