From patchwork Thu Nov 5 21:52:13 2020
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 11885329
From: Douglas Gilbert <dgilbert@interlog.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: axboe@kernel.dk, martin.petersen@oracle.com, bostroesser@gmail.com,
    bvanassche@acm.org, ddiss@suse.de
Subject: [PATCH v4 1/4] sgl_alloc_order: remove 4 GiB limit, sgl_free() warning
Date: Thu, 5 Nov 2020 16:52:13 -0500
Message-Id: <20201105215216.23304-2-dgilbert@interlog.com>
In-Reply-To: <20201105215216.23304-1-dgilbert@interlog.com>
References: <20201105215216.23304-1-dgilbert@interlog.com>

This patch fixes a check done by sgl_alloc_order() before it starts any
allocations. The comment in the original said: "Check for integer overflow"
but the check itself contained an integer overflow! The right hand side
(rhs) of the expression in the condition is resolved as u32, so it cannot
exceed UINT32_MAX (4 GiB), which means 'length' cannot exceed that value
either. If that was the intention then the comment above it could be dropped
and the condition rewritten more clearly as:
    if (length > UINT32_MAX) <>;

Get around the integer overflow problem in the rhs of the original check by
taking ilog2() of both sides.

This function may be used to replace vmalloc(unsigned long) for a large
allocation (e.g. a ramdisk). vmalloc() has no 4 GiB limit, so it seems
unreasonable that:
    sgl_alloc_order(unsigned long long length, ....)
does. sgl_s made with sgl_alloc_order() have equally sized segments placed
in a scatter gather array. That allows O(1) navigation around a big sgl
using some simple integer arithmetic.

Revise some of this function's kernel-doc description to more accurately
reflect what the function does.
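
For illustration only (not part of this patch): with the corrected check, a
single call can now cover more than 4 GiB. The ramdisk-style helpers and the
8 GiB figure below are hypothetical; only sgl_alloc_order() and
sgl_free_order() are the real API.

  #include <linux/scatterlist.h>

  /* Hypothetical sketch: back an 8 GiB ramdisk with one sgl whose elements
   * are 64 KiB each (order 4 with a 4 KiB PAGE_SIZE). Before this patch the
   * overflowing check made such a request fail (return NULL).
   */
  static struct scatterlist *rd_sgl;
  static unsigned int rd_nents;

  static int rd_alloc_backing(void)
  {
          rd_sgl = sgl_alloc_order(8ULL << 30, 4, false, GFP_KERNEL,
                                   &rd_nents);
          return rd_sgl ? 0 : -ENOMEM;
  }

  static void rd_free_backing(void)
  {
          sgl_free_order(rd_sgl, 4);      /* not sgl_free(): order > 0 */
  }
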
An earlier patch fixed a memory leak in sgl_alloc_order() due to the misuse
of sgl_free(). Take the opportunity to put a one line comment above
sgl_free()'s declaration warning that it is not suitable when order > 0.

Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
---
 include/linux/scatterlist.h |  1 +
 lib/scatterlist.c           | 14 ++++++++------
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 36c47e7e66a2..d9443ebd0a8e 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -308,6 +308,7 @@ struct scatterlist *sgl_alloc(unsigned long long length, gfp_t gfp,
 			      unsigned int *nent_p);
 void sgl_free_n_order(struct scatterlist *sgl, int nents, int order);
 void sgl_free_order(struct scatterlist *sgl, int order);
+/* Only use sgl_free() when order is 0 */
 void sgl_free(struct scatterlist *sgl);
 #endif /* CONFIG_SGL_ALLOC */
 
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a59778946404..4986545beef9 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -554,13 +554,15 @@ EXPORT_SYMBOL(sg_alloc_table_from_pages);
 #ifdef CONFIG_SGL_ALLOC
 
 /**
- * sgl_alloc_order - allocate a scatterlist and its pages
+ * sgl_alloc_order - allocate a scatterlist with equally sized elements
  * @length: Length in bytes of the scatterlist. Must be at least one
- * @order: Second argument for alloc_pages()
+ * @order: Second argument for alloc_pages(). Each sgl element size will
+ *	   be (PAGE_SIZE*2^order) bytes
  * @chainable: Whether or not to allocate an extra element in the scatterlist
- *	for scatterlist chaining purposes
+ *	       for scatterlist chaining purposes
  * @gfp: Memory allocation flags
- * @nent_p: [out] Number of entries in the scatterlist that have pages
+ * @nent_p: [out] Number of entries in the scatterlist that have pages.
+ *	    Ignored if NULL is given.
  *
  * Returns: A pointer to an initialized scatterlist or %NULL upon failure.
  */
@@ -574,8 +576,8 @@ struct scatterlist *sgl_alloc_order(unsigned long long length,
 	u32 elem_len;
 
 	nent = round_up(length, PAGE_SIZE << order) >> (PAGE_SHIFT + order);
-	/* Check for integer overflow */
-	if (length > (nent << (PAGE_SHIFT + order)))
+	/* Integer overflow if: length > nent*2^(PAGE_SHIFT+order) */
+	if (ilog2(length) > ilog2(nent) + PAGE_SHIFT + order)
 		return NULL;
 	nalloc = nent;
 	if (chainable) {

From patchwork Thu Nov 5 21:52:14 2020
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 11885331
From: Douglas Gilbert <dgilbert@interlog.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: axboe@kernel.dk, martin.petersen@oracle.com, bostroesser@gmail.com,
    bvanassche@acm.org, ddiss@suse.de
Subject: [PATCH v4 2/4] scatterlist: add sgl_copy_sgl() function
Date: Thu, 5 Nov 2020 16:52:14 -0500
Message-Id: <20201105215216.23304-3-dgilbert@interlog.com>
In-Reply-To: <20201105215216.23304-1-dgilbert@interlog.com>
References: <20201105215216.23304-1-dgilbert@interlog.com>

Both the SCSI and NVMe subsystems receive user data from the block layer in
scatterlist_s (aka scatter gather lists (sgl), which are often arrays). If
drivers in those subsystems represent storage (e.g. a ramdisk) or cache
"hot" user data then they may also choose to use scatterlist_s. Currently
there are no sgl-to-sgl operations in the kernel. Start with an sgl-to-sgl
copy. Copying stops when the first of these is exhausted: the requested
number of bytes, the source sgl, or the destination sgl. So the destination
sgl will _not_ grow.
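
For illustration only (not part of this patch), a hypothetical caller that
already holds a source and a destination sgl might use the new helper as
below; the names, offsets and sizes are made up:

  /* Hypothetical sketch: copy 512 bytes starting at byte offset 4096 of
   * src_sgl to the start of dst_sgl. The return value is the number of
   * bytes actually copied, which is less than requested if either sgl is
   * exhausted first.
   */
  size_t copied;

  copied = sgl_copy_sgl(dst_sgl, dst_nents, 0,      /* destination, d_skip */
                        src_sgl, src_nents, 4096,   /* source, s_skip */
                        512);                       /* n_bytes */
  if (copied < 512)
          pr_warn("short sgl copy: %zu bytes\n", copied);
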
Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
---
 include/linux/scatterlist.h |  4 ++
 lib/scatterlist.c           | 74 +++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index d9443ebd0a8e..f2922a34b140 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -327,6 +327,10 @@ size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
 size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
 		       size_t buflen, off_t skip);
 
+size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
+		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
+		    size_t n_bytes);
+
 /*
  * Maximum number of entries that will be allocated in one piece, if
  * a list larger than this is required then chaining will be utilized.
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 4986545beef9..af9cd7b9dc19 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1060,3 +1060,77 @@ size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
 	return offset;
 }
 EXPORT_SYMBOL(sg_zero_buffer);
+
+/**
+ * sgl_copy_sgl - Copy over a destination sgl from a source sgl
+ * @d_sgl: Destination sgl
+ * @d_nents: Number of SG entries in destination sgl
+ * @d_skip: Number of bytes to skip in destination before starting
+ * @s_sgl: Source sgl
+ * @s_nents: Number of SG entries in source sgl
+ * @s_skip: Number of bytes to skip in source before starting
+ * @n_bytes: The (maximum) number of bytes to copy
+ *
+ * Returns:
+ *   The number of copied bytes.
+ *
+ * Notes:
+ *   Destination arguments appear before the source arguments, as with memcpy().
+ *
+ *   Stops copying if either d_sgl, s_sgl or n_bytes is exhausted.
+ *
+ *   Since memcpy() is used, overlapping copies (where d_sgl and s_sgl belong
+ *   to the same sgl and the copy regions overlap) are not supported.
+ *
+ *   Large copies are broken into copy segments whose sizes may vary. Those
+ *   copy segment sizes are chosen by the min3() statement in the code below.
+ *   Since SG_MITER_ATOMIC is used for both sides, each copy segment is started
+ *   with kmap_atomic() [in sg_miter_next()] and completed with kunmap_atomic()
+ *   [in sg_miter_stop()]. This means pre-emption is inhibited for relatively
+ *   short periods even in very large copies.
+ *
+ *   If d_skip is large, potentially spanning multiple d_nents, then some
+ *   integer arithmetic to adjust d_sgl may improve performance. For example
+ *   if d_sgl is built using sgl_alloc_order(chainable=false) then the sgl
+ *   will be an array with equally sized segments facilitating that
+ *   arithmetic. The suggestion applies to s_skip, s_sgl and s_nents as well.
+ *
+ **/
+size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
+		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
+		    size_t n_bytes)
+{
+	size_t len;
+	size_t offset = 0;
+	struct sg_mapping_iter d_iter, s_iter;
+
+	if (n_bytes == 0)
+		return 0;
+	sg_miter_start(&s_iter, s_sgl, s_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	sg_miter_start(&d_iter, d_sgl, d_nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);
+	if (!sg_miter_skip(&s_iter, s_skip))
+		goto fini;
+	if (!sg_miter_skip(&d_iter, d_skip))
+		goto fini;
+
+	while (offset < n_bytes) {
+		if (!sg_miter_next(&s_iter))
+			break;
+		if (!sg_miter_next(&d_iter))
+			break;
+		len = min3(d_iter.length, s_iter.length, n_bytes - offset);
+
+		memcpy(d_iter.addr, s_iter.addr, len);
+		offset += len;
+		/* LIFO order (stop d_iter before s_iter) needed with SG_MITER_ATOMIC */
+		d_iter.consumed = len;
+		sg_miter_stop(&d_iter);
+		s_iter.consumed = len;
+		sg_miter_stop(&s_iter);
+	}
+fini:
+	sg_miter_stop(&d_iter);
+	sg_miter_stop(&s_iter);
+	return offset;
+}
+EXPORT_SYMBOL(sgl_copy_sgl);

From patchwork Thu Nov 5 21:52:15 2020
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 11885335
From: Douglas Gilbert <dgilbert@interlog.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: axboe@kernel.dk, martin.petersen@oracle.com, bostroesser@gmail.com,
    bvanassche@acm.org, ddiss@suse.de
Subject: [PATCH v4 3/4] scatterlist: add sgl_compare_sgl() function
Date: Thu, 5 Nov 2020 16:52:15 -0500
Message-Id: <20201105215216.23304-4-dgilbert@interlog.com>
In-Reply-To: <20201105215216.23304-1-dgilbert@interlog.com>
References: <20201105215216.23304-1-dgilbert@interlog.com>

After enabling copies between scatter gather lists (sgl_s), another
storage-related
operation is to compare two sgl_s. This new function is modelled on NVMe's
Compare command and the SCSI VERIFY(BYTCHK=1) command. Like memcmp(), this
function stops comparing at the first miscompare and returns false.

A helper function called sgl_compare_sgl_idx() is added. It takes an
additional parameter (miscompare_idx) which is a pointer. If that pointer is
non-NULL and a miscompare is detected (i.e. the function returns false) then
the byte index of the first miscompare is written to *miscompare_idx.
Knowing the location of the first miscompare is needed to implement the
SCSI COMPARE AND WRITE command properly.

Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
---
 include/linux/scatterlist.h |   8 +++
 lib/scatterlist.c           | 109 ++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index f2922a34b140..0f6d59bf66cb 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -331,6 +331,14 @@ size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_ski
 		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
 		    size_t n_bytes);
 
+bool sgl_compare_sgl(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+		     struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+		     size_t n_bytes);
+
+bool sgl_compare_sgl_idx(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+			 struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+			 size_t n_bytes, size_t *miscompare_idx);
+
 /*
  * Maximum number of entries that will be allocated in one piece, if
  * a list larger than this is required then chaining will be utilized.
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index af9cd7b9dc19..9332365e7eb6 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1134,3 +1134,112 @@ size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_ski
 	return offset;
 }
 EXPORT_SYMBOL(sgl_copy_sgl);
+
+/**
+ * sgl_compare_sgl_idx - Compare x and y (both sgl_s)
+ * @x_sgl: x (left) sgl
+ * @x_nents: Number of SG entries in x (left) sgl
+ * @x_skip: Number of bytes to skip in x (left) before starting
+ * @y_sgl: y (right) sgl
+ * @y_nents: Number of SG entries in y (right) sgl
+ * @y_skip: Number of bytes to skip in y (right) before starting
+ * @n_bytes: The (maximum) number of bytes to compare
+ * @miscompare_idx: if return is false, index of first miscompare written
+ *		    to this pointer (if non-NULL). Value will be < n_bytes
+ *
+ * Returns:
+ *   true if x and y compare equal before x, y or n_bytes is exhausted.
+ *   Otherwise on a miscompare, returns false (and stops comparing). If return
+ *   is false and miscompare_idx is non-NULL, then index of first miscompared
+ *   byte written to *miscompare_idx.
+ *
+ * Notes:
+ *   x and y are symmetrical: they can be swapped and the result is the same.
+ *
+ *   Implementation is based on memcmp(). x and y segments may overlap.
+ *
+ *   The notes in sgl_copy_sgl() about large sgl_s apply here as well.
+ *
+ **/
+bool sgl_compare_sgl_idx(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+			 struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+			 size_t n_bytes, size_t *miscompare_idx)
+{
+	bool equ = true;
+	size_t len;
+	size_t offset = 0;
+	struct sg_mapping_iter x_iter, y_iter;
+
+	if (n_bytes == 0)
+		return true;
+	sg_miter_start(&x_iter, x_sgl, x_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	sg_miter_start(&y_iter, y_sgl, y_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	if (!sg_miter_skip(&x_iter, x_skip))
+		goto fini;
+	if (!sg_miter_skip(&y_iter, y_skip))
+		goto fini;
+
+	while (offset < n_bytes) {
+		if (!sg_miter_next(&x_iter))
+			break;
+		if (!sg_miter_next(&y_iter))
+			break;
+		len = min3(x_iter.length, y_iter.length, n_bytes - offset);
+
+		equ = !memcmp(x_iter.addr, y_iter.addr, len);
+		if (!equ)
+			goto fini;
+		offset += len;
+		/* LIFO order is important when SG_MITER_ATOMIC is used */
+		y_iter.consumed = len;
+		sg_miter_stop(&y_iter);
+		x_iter.consumed = len;
+		sg_miter_stop(&x_iter);
+	}
+fini:
+	if (miscompare_idx && !equ) {
+		u8 *xp = x_iter.addr;
+		u8 *yp = y_iter.addr;
+		u8 *x_endp;
+
+		for (x_endp = xp + len ; xp < x_endp; ++xp, ++yp) {
+			if (*xp != *yp)
+				break;
+		}
+		*miscompare_idx = offset + len - (x_endp - xp);
+	}
+	sg_miter_stop(&y_iter);
+	sg_miter_stop(&x_iter);
+	return equ;
+}
+EXPORT_SYMBOL(sgl_compare_sgl_idx);
+
+/**
+ * sgl_compare_sgl - Compare x and y (both sgl_s)
+ * @x_sgl: x (left) sgl
+ * @x_nents: Number of SG entries in x (left) sgl
+ * @x_skip: Number of bytes to skip in x (left) before starting
+ * @y_sgl: y (right) sgl
+ * @y_nents: Number of SG entries in y (right) sgl
+ * @y_skip: Number of bytes to skip in y (right) before starting
+ * @n_bytes: The (maximum) number of bytes to compare
+ *
+ * Returns:
+ *   true if x and y compare equal before x, y or n_bytes is exhausted.
+ *   Otherwise on a miscompare, returns false (and stops comparing).
+ *
+ * Notes:
+ *   x and y are symmetrical: they can be swapped and the result is the same.
+ *
+ *   Implementation is based on memcmp(). x and y segments may overlap.
+ *
+ *   The notes in sgl_copy_sgl() about large sgl_s apply here as well.
+ *
+ **/
+bool sgl_compare_sgl(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+		     struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+		     size_t n_bytes)
+{
+	return sgl_compare_sgl_idx(x_sgl, x_nents, x_skip, y_sgl, y_nents, y_skip, n_bytes, NULL);
+}
+EXPORT_SYMBOL(sgl_compare_sgl);

From patchwork Thu Nov 5 21:52:16 2020
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 11885333
From: Douglas Gilbert <dgilbert@interlog.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: axboe@kernel.dk, martin.petersen@oracle.com, bostroesser@gmail.com,
    bvanassche@acm.org, ddiss@suse.de
Subject: [PATCH v4 4/4] scatterlist: add sgl_memset()
Date: Thu, 5 Nov 2020 16:52:16 -0500
Message-Id: <20201105215216.23304-5-dgilbert@interlog.com>
In-Reply-To: <20201105215216.23304-1-dgilbert@interlog.com>
References: <20201105215216.23304-1-dgilbert@interlog.com>

The existing sg_zero_buffer() function is a bit restrictive. For example,
protection information (PI) blocks are usually initialized to 0xff bytes.
As its name suggests, sgl_memset() is modelled on memset(). One difference
is the type of the val argument, which is u8 rather than int. Plus it
returns the number of bytes (over)written.

Change the implementation of sg_zero_buffer() to call this new function.
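
For illustration only (not part of this patch): initializing a protection
information buffer to 0xff bytes, which sg_zero_buffer() cannot express.
The pi_sgl/pi_nents names are hypothetical:

  /* Hypothetical sketch: fill an already-built PI sgl with 0xff bytes.
   * Passing SIZE_MAX as n_bytes writes val through to the end of the sgl;
   * the return value is the number of bytes actually written.
   */
  size_t written;

  written = sgl_memset(pi_sgl, pi_nents, 0 /* skip */, 0xff, SIZE_MAX);
  pr_debug("initialized %zu bytes of PI to 0xff\n", written);
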
Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
---
 include/linux/scatterlist.h |  3 ++
 lib/scatterlist.c           | 65 +++++++++++++++++++++++++------------
 2 files changed, 48 insertions(+), 20 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 0f6d59bf66cb..8e4c050e6237 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -339,6 +339,9 @@ bool sgl_compare_sgl_idx(struct scatterlist *x_sgl, unsigned int x_nents, off_t
 			 struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
 			 size_t n_bytes, size_t *miscompare_idx);
 
+size_t sgl_memset(struct scatterlist *sgl, unsigned int nents, off_t skip,
+		  u8 val, size_t n_bytes);
+
 /*
  * Maximum number of entries that will be allocated in one piece, if
  * a list larger than this is required then chaining will be utilized.
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 9332365e7eb6..f06614a880c8 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1038,26 +1038,7 @@ EXPORT_SYMBOL(sg_pcopy_to_buffer);
 size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
 		       size_t buflen, off_t skip)
 {
-	unsigned int offset = 0;
-	struct sg_mapping_iter miter;
-	unsigned int sg_flags = SG_MITER_ATOMIC | SG_MITER_TO_SG;
-
-	sg_miter_start(&miter, sgl, nents, sg_flags);
-
-	if (!sg_miter_skip(&miter, skip))
-		return false;
-
-	while (offset < buflen && sg_miter_next(&miter)) {
-		unsigned int len;
-
-		len = min(miter.length, buflen - offset);
-		memset(miter.addr, 0, len);
-
-		offset += len;
-	}
-
-	sg_miter_stop(&miter);
-	return offset;
+	return sgl_memset(sgl, nents, skip, 0, buflen);
 }
 EXPORT_SYMBOL(sg_zero_buffer);
 
@@ -1243,3 +1224,47 @@ bool sgl_compare_sgl(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_sk
 	return sgl_compare_sgl_idx(x_sgl, x_nents, x_skip, y_sgl, y_nents, y_skip, n_bytes, NULL);
 }
 EXPORT_SYMBOL(sgl_compare_sgl);
+
+/**
+ * sgl_memset - set byte 'val' up to n_bytes times on SG list
+ * @sgl: The SG list
+ * @nents: Number of SG entries in sgl
+ * @skip: Number of bytes to skip before starting
+ * @val: byte value to write to sgl
+ * @n_bytes: The (maximum) number of bytes to modify
+ *
+ * Returns:
+ *   The number of bytes written.
+ *
+ * Notes:
+ *   Stops writing if either sgl or n_bytes is exhausted. If n_bytes is
+ *   set to SIZE_MAX then val will be written to each byte until the end
+ *   of sgl.
+ *
+ *   The notes in sgl_copy_sgl() about large sgl_s apply here as well.
+ *
+ **/
+size_t sgl_memset(struct scatterlist *sgl, unsigned int nents, off_t skip,
+		  u8 val, size_t n_bytes)
+{
+	size_t offset = 0;
+	size_t len;
+	struct sg_mapping_iter miter;
+
+	if (n_bytes == 0)
+		return 0;
+	sg_miter_start(&miter, sgl, nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);
+	if (!sg_miter_skip(&miter, skip))
+		goto fini;
+
+	while ((offset < n_bytes) && sg_miter_next(&miter)) {
+		len = min(miter.length, n_bytes - offset);
+		memset(miter.addr, val, len);
+		offset += len;
+	}
+fini:
+	sg_miter_stop(&miter);
+	return offset;
+}
+EXPORT_SYMBOL(sgl_memset);