From patchwork Wed Mar 29 14:13:08 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192668
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner, Chuck Lever III, Linus Torvalds, netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nfs@vger.kernel.org
Subject: [RFC PATCH v2 02/48] iov_iter: Remove last_offset member
Date: Wed, 29 Mar 2023 15:13:08 +0100
Message-Id: <20230329141354.516864-3-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>

With the removal of ITER_PIPE, the last_offset member of struct iov_iter is no longer used, so remove it and un-unionise the remaining member.
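[Illustrative aside, not part of the posted diff] The surviving iov_offset member keeps its existing meaning, the byte offset into the current segment, so call sites are unaffected by the un-unioning. A minimal sketch of the usual setup pattern, using the existing iov_iter_kvec() and iov_iter_advance() helpers (the function name example_kvec_iter is made up for this example):

	#include <linux/uio.h>

	/* Illustrative only: iov_offset is now a plain size_t, but the way an
	 * iterator is set up and advanced does not change. */
	static void example_kvec_iter(void *buf, size_t len)
	{
		struct kvec kv = { .iov_base = buf, .iov_len = len };
		struct iov_iter iter;

		iov_iter_kvec(&iter, ITER_SOURCE, &kv, 1, len);
		iov_iter_advance(&iter, len / 2); /* advances iter.iov_offset within the kvec */
	}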
Signed-off-by: David Howells cc: Jens Axboe cc: Matthew Wilcox cc: Alexander Viro cc: Jeff Layton cc: linux-nfs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: netdev@vger.kernel.org --- include/linux/uio.h | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/include/linux/uio.h b/include/linux/uio.h index 74598426edb4..2d8a70cb9b26 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -43,10 +43,7 @@ struct iov_iter { bool nofault; bool data_source; bool user_backed; - union { - size_t iov_offset; - int last_offset; - }; + size_t iov_offset; size_t count; union { const struct iovec *iov;
From patchwork Wed Mar 29 14:13:09 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192594
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner, Chuck Lever III, Linus Torvalds, netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Trond Myklebust, Anna Schumaker, linux-nfs@vger.kernel.org
Subject: [RFC PATCH v2 03/48] iov_iter: Add an iterator-of-iterators
Date: Wed, 29 Mar 2023 15:13:09 +0100
Message-Id: <20230329141354.516864-4-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>

Add a new I/O iterator type, ITER_ITERLIST, that allows iteration over a series of I/O iterators, provided the iterators are all the same direction (all ITER_SOURCE or all ITER_DEST) and none of them are themselves ITER_ITERLIST (the handling functions are recursive).

To make reversion possible, I've added an 'orig_count' member into the iov_iter struct so that reversion of an ITER_ITERLIST knows when to move backwards through the iter list. It might make more sense to make the iterator list element, say:

	struct itervec {
		struct iov_iter iter;
		size_t orig_count;
	};

rather than expanding struct iov_iter itself and have iov_iter_iterlist() set vec[i].orig_count from vec[i].iter->count.

Also, for the moment, I've only permitted its use with source iterators (e.g. sendmsg). To use this, you allocate an array of iterators and point the list iterator at it, e.g.:

	struct iov_iter iters[3];
	struct msghdr msg;

	iov_iter_bvec(&iters[0], ITER_SOURCE, &head_bv, 1, sizeof(marker) + head->iov_len);
	iov_iter_xarray(&iters[1], ITER_SOURCE, xdr->pages, xdr->page_fpos, xdr->page_len);
	iov_iter_kvec(&iters[2], ITER_SOURCE, &tail_kv, 1, tail->iov_len);
	iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, 3, size);

This can be used by network filesystem protocols, such as sunrpc, to glue a header and a trailer on to some data to form a message and then dump the entire message onto the socket in a single go.

[!] Note: I'm not entirely sure that this is a good idea: the problem is that it's reasonably common practice to copy an iterator by direct assignment - and that works for the existing iterators... but not this one. With the iterator-of-iterators, the list of iterators has to be modified if we recurse. It's probably fine just for calling sendmsg() from network filesystems, but I'm not 100% sure of that.

Suggested-by: Trond Myklebust Signed-off-by: David Howells cc: Trond Myklebust cc: Anna Schumaker cc: Chuck Lever cc: Jens Axboe cc: Matthew Wilcox cc: Alexander Viro cc: Jeff Layton cc: "David S.
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-nfs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: netdev@vger.kernel.org --- include/linux/uio.h | 13 ++- lib/iov_iter.c | 254 ++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 260 insertions(+), 7 deletions(-) diff --git a/include/linux/uio.h b/include/linux/uio.h index 2d8a70cb9b26..6c75c94566b8 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -27,6 +27,7 @@ enum iter_type { ITER_XARRAY, ITER_DISCARD, ITER_UBUF, + ITER_ITERLIST, }; #define ITER_SOURCE 1 // == WRITE @@ -45,12 +46,14 @@ struct iov_iter { bool user_backed; size_t iov_offset; size_t count; + size_t orig_count; union { const struct iovec *iov; const struct kvec *kvec; const struct bio_vec *bvec; struct xarray *xarray; void __user *ubuf; + struct iov_iter *iterlist; }; union { unsigned long nr_segs; @@ -101,6 +104,11 @@ static inline bool iov_iter_is_xarray(const struct iov_iter *i) return iov_iter_type(i) == ITER_XARRAY; } +static inline bool iov_iter_is_iterlist(const struct iov_iter *i) +{ + return iov_iter_type(i) == ITER_ITERLIST; +} + static inline unsigned char iov_iter_rw(const struct iov_iter *i) { return i->data_source ? WRITE : READ; @@ -235,6 +243,8 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_ void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray, loff_t start, size_t count); +void iov_iter_iterlist(struct iov_iter *i, unsigned int direction, struct iov_iter *iterlist, + unsigned long nr_segs, size_t count); ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages, size_t maxsize, unsigned maxpages, size_t *start, iov_iter_extraction_t extraction_flags); @@ -342,7 +352,8 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction, .user_backed = true, .data_source = direction, .ubuf = buf, - .count = count + .count = count, + .orig_count = count, }; } /* Flags for iov_iter_get/extract_pages*() */ diff --git a/lib/iov_iter.c b/lib/iov_iter.c index fad95e4cf372..8a9ae4af45fc 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -282,7 +282,8 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction, .iov = iov, .nr_segs = nr_segs, .iov_offset = 0, - .count = count + .count = count, + .orig_count = count, }; } EXPORT_SYMBOL(iov_iter_init); @@ -364,6 +365,26 @@ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i) if (WARN_ON_ONCE(!i->data_source)) return 0; + if (unlikely(iov_iter_is_iterlist(i))) { + size_t copied = 0; + + while (bytes && i->count) { + size_t part = min(bytes, i->iterlist->count), n; + + if (part > 0) + n = _copy_from_iter(addr, part, i->iterlist); + addr += n; + copied += n; + bytes -= n; + i->count -= n; + if (n < part || !bytes) + break; + i->iterlist++; + i->nr_segs--; + } + return copied; + } + if (user_backed_iter(i)) might_fault(); iterate_and_advance(i, bytes, base, len, off, @@ -380,6 +401,27 @@ size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i) if (WARN_ON_ONCE(!i->data_source)) return 0; + if (unlikely(iov_iter_is_iterlist(i))) { + size_t copied = 0; + + while (bytes && i->count) { + size_t part = min(bytes, i->iterlist->count), n; + + if (part > 0) + n = _copy_from_iter_nocache(addr, part, + i->iterlist); + addr += n; + copied += n; + bytes -= n; + i->count -= n; + if (n < part || !bytes) + break; + i->iterlist++; + i->nr_segs--; + } + 
return copied; + } + iterate_and_advance(i, bytes, base, len, off, __copy_from_user_inatomic_nocache(addr + off, base, len), memcpy(addr + off, base, len) @@ -411,6 +453,27 @@ size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i) if (WARN_ON_ONCE(!i->data_source)) return 0; + if (unlikely(iov_iter_is_iterlist(i))) { + size_t copied = 0; + + while (bytes && i->count) { + size_t part = min(bytes, i->iterlist->count), n; + + if (part > 0) + n = _copy_from_iter_flushcache(addr, part, + i->iterlist); + addr += n; + copied += n; + bytes -= n; + i->count -= n; + if (n < part || !bytes) + break; + i->iterlist++; + i->nr_segs--; + } + return copied; + } + iterate_and_advance(i, bytes, base, len, off, __copy_from_user_flushcache(addr + off, base, len), memcpy_flushcache(addr + off, base, len) @@ -514,7 +577,31 @@ EXPORT_SYMBOL(iov_iter_zero); size_t copy_page_from_iter_atomic(struct page *page, unsigned offset, size_t bytes, struct iov_iter *i) { - char *kaddr = kmap_atomic(page), *p = kaddr + offset; + char *kaddr, *p; + + if (unlikely(iov_iter_is_iterlist(i))) { + size_t copied = 0; + + while (bytes && i->count) { + size_t part = min(bytes, i->iterlist->count), n; + + if (part > 0) + n = copy_page_from_iter_atomic(page, offset, part, + i->iterlist); + offset += n; + copied += n; + bytes -= n; + i->count -= n; + if (n < part || !bytes) + break; + i->iterlist++; + i->nr_segs--; + } + return copied; + } + + kaddr = kmap_atomic(page); + p = kaddr + offset; if (!page_copy_sane(page, offset, bytes)) { kunmap_atomic(kaddr); return 0; @@ -585,19 +672,49 @@ void iov_iter_advance(struct iov_iter *i, size_t size) iov_iter_bvec_advance(i, size); } else if (iov_iter_is_discard(i)) { i->count -= size; + }else if (iov_iter_is_iterlist(i)) { + i->count -= size; + for (;;) { + size_t part = min(size, i->iterlist->count); + + if (part > 0) + iov_iter_advance(i->iterlist, part); + size -= part; + if (!size) + break; + i->iterlist++; + i->nr_segs--; + } } } EXPORT_SYMBOL(iov_iter_advance); +static void iov_iter_revert_iterlist(struct iov_iter *i, size_t unroll) +{ + for (;;) { + size_t part = min(unroll, i->iterlist->orig_count - i->iterlist->count); + + if (part > 0) + iov_iter_revert(i->iterlist, part); + unroll -= part; + if (!unroll) + break; + i->iterlist--; + i->nr_segs++; + } +} + void iov_iter_revert(struct iov_iter *i, size_t unroll) { if (!unroll) return; - if (WARN_ON(unroll > MAX_RW_COUNT)) + if (WARN_ON(unroll > i->orig_count - i->count)) return; i->count += unroll; if (unlikely(iov_iter_is_discard(i))) return; + if (unlikely(iov_iter_is_iterlist(i))) + return iov_iter_revert_iterlist(i, unroll); if (unroll <= i->iov_offset) { i->iov_offset -= unroll; return; @@ -641,6 +758,8 @@ EXPORT_SYMBOL(iov_iter_revert); */ size_t iov_iter_single_seg_count(const struct iov_iter *i) { + if (iov_iter_is_iterlist(i)) + i = i->iterlist; if (i->nr_segs > 1) { if (likely(iter_is_iovec(i) || iov_iter_is_kvec(i))) return min(i->count, i->iov->iov_len - i->iov_offset); @@ -662,7 +781,8 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, .kvec = kvec, .nr_segs = nr_segs, .iov_offset = 0, - .count = count + .count = count, + .orig_count = count, }; } EXPORT_SYMBOL(iov_iter_kvec); @@ -678,7 +798,8 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction, .bvec = bvec, .nr_segs = nr_segs, .iov_offset = 0, - .count = count + .count = count, + .orig_count = count, }; } EXPORT_SYMBOL(iov_iter_bvec); @@ -706,6 +827,7 @@ void iov_iter_xarray(struct iov_iter *i, unsigned int direction, 
.xarray = xarray, .xarray_start = start, .count = count, + .orig_count = count, .iov_offset = 0 }; } @@ -727,11 +849,47 @@ void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count) .iter_type = ITER_DISCARD, .data_source = false, .count = count, + .orig_count = count, .iov_offset = 0 }; } EXPORT_SYMBOL(iov_iter_discard); +/** + * iov_iter_iterlist - Initialise an I/O iterator that is a list of iterators + * @iter: The iterator to initialise. + * @direction: The direction of the transfer. + * @iterlist: The list of iterators + * @nr_segs: The number of elements in the list + * @count: The size of the I/O buffer in bytes. + * + * Set up an I/O iterator that iterates, in turn, over a list of subordinate + * iterators. It's only available as a source iterator (for WRITE); all the + * iterators in the list must have the same direction and none of them can be + * of ITER_ITERLIST type. + */ +void iov_iter_iterlist(struct iov_iter *iter, unsigned int direction, + struct iov_iter *iterlist, unsigned long nr_segs, + size_t count) +{ + unsigned long i; + + BUG_ON(direction != WRITE); + for (i = 0; i < nr_segs; i++) { + BUG_ON(iterlist[i].iter_type == ITER_ITERLIST); + BUG_ON(!iterlist[i].data_source); + } + + *iter = (struct iov_iter){ + .iter_type = ITER_ITERLIST, + .data_source = true, + .count = count, + .orig_count = count, + .iterlist = iterlist, + .nr_segs = nr_segs, + }; +} +EXPORT_SYMBOL(iov_iter_iterlist); + static bool iov_iter_aligned_iovec(const struct iov_iter *i, unsigned addr_mask, unsigned len_mask) { @@ -879,6 +1037,15 @@ unsigned long iov_iter_alignment(const struct iov_iter *i) if (iov_iter_is_xarray(i)) return (i->xarray_start + i->iov_offset) | i->count; + if (iov_iter_is_iterlist(i)) { + unsigned long align = 0; + unsigned int j; + + for (j = 0; j < i->nr_segs; j++) + align |= iov_iter_alignment(&i->iterlist[j]); + return align; + } + return 0; } EXPORT_SYMBOL(iov_iter_alignment); @@ -1078,6 +1245,18 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i, } if (iov_iter_is_xarray(i)) return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); + if (iov_iter_is_iterlist(i)) { + ssize_t size; + + while (!i->iterlist->count) { + i->iterlist++; + i->nr_segs--; + } + size = __iov_iter_get_pages_alloc(i->iterlist, pages, maxsize, maxpages, + start, extraction_flags); + i->count -= size; + return size; + } return -EFAULT; } @@ -1126,6 +1305,31 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i, } EXPORT_SYMBOL(iov_iter_get_pages_alloc2); +static size_t csum_and_copy_from_iterlist(void *addr, size_t bytes, __wsum *csum, + struct iov_iter *i) +{ + size_t copied = 0, n; + + while (i->count && i->nr_segs) { + struct iov_iter *j = i->iterlist; + + if (j->count == 0) { + i->iterlist++; + i->nr_segs--; + continue; + } + + n = csum_and_copy_from_iter(addr, bytes - copied, csum, j); + addr += n; + copied += n; + i->count -= n; + if (n == 0) + break; + } + + return copied; +} + size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i) { @@ -1133,6 +1337,8 @@ size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, sum = *csum; if (WARN_ON_ONCE(!i->data_source)) return 0; + if (iov_iter_is_iterlist(i)) + return csum_and_copy_from_iterlist(addr, bytes, csum, i); iterate_and_advance(i, bytes, base, len, off, ({ next = csum_and_copy_from_user(base, addr + off, len); @@ -1236,6 +1442,21 @@ static int bvec_npages(const struct iov_iter *i, int maxpages) return npages; } +static int iterlist_npages(const struct iov_iter *i, int maxpages) +{ + 
ssize_t size = i->count; + const struct iov_iter *p; + int npages = 0; + + for (p = i->iterlist; size; p++) { + size -= p->count; + npages += iov_iter_npages(p, maxpages - npages); + if (unlikely(npages >= maxpages)) + return maxpages; + } + return npages; +} + int iov_iter_npages(const struct iov_iter *i, int maxpages) { if (unlikely(!i->count)) @@ -1255,6 +1476,8 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages) int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE); return min(npages, maxpages); } + if (iov_iter_is_iterlist(i)) + return iterlist_npages(i, maxpages); return 0; } EXPORT_SYMBOL(iov_iter_npages); @@ -1266,11 +1489,14 @@ const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags) return new->bvec = kmemdup(new->bvec, new->nr_segs * sizeof(struct bio_vec), flags); - else if (iov_iter_is_kvec(new) || iter_is_iovec(new)) + if (iov_iter_is_kvec(new) || iter_is_iovec(new)) /* iovec and kvec have identical layout */ return new->iov = kmemdup(new->iov, new->nr_segs * sizeof(struct iovec), flags); + if (WARN_ON_ONCE(iov_iter_is_iterlist(old))) + /* Don't allow dup'ing of iterlist as the cleanup is complicated */ + return NULL; return NULL; } EXPORT_SYMBOL(dup_iter); @@ -1759,6 +1985,22 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i, return iov_iter_extract_xarray_pages(i, pages, maxsize, maxpages, extraction_flags, offset0); + if (iov_iter_is_iterlist(i)) { + ssize_t size; + + while (i->nr_segs && !i->iterlist->count) { + i->iterlist++; + i->nr_segs--; + } + if (!i->nr_segs) { + WARN_ON_ONCE(i->count); + return 0; + } + size = iov_iter_extract_pages(i->iterlist, pages, maxsize, maxpages, + extraction_flags, offset0); + i->count -= size; + return size; + } return -EFAULT; } EXPORT_SYMBOL_GPL(iov_iter_extract_pages);
From patchwork Wed Mar 29 14:13:46 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192648
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner, Chuck Lever III, Linus Torvalds, netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Trond Myklebust, Anna Schumaker, linux-nfs@vger.kernel.org
Subject: [RFC PATCH v2 40/48] sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage
Date: Wed, 29 Mar 2023 15:13:46 +0100
Message-Id: <20230329141354.516864-41-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>

When transmitting data, call down into TCP using a single sendmsg with MSG_SPLICE_PAGES to indicate that content should be spliced rather than performing several sendmsg and sendpage calls to transmit header, data pages and trailer.

To make this work, the data is assembled in a bio_vec array and attached to a BVEC-type iterator. The header and trailer are copied into page fragments so that they can be freed with put_page and attached to iterators of their own. An iterator-of-iterators is then created to bridge all three iterators (headers, data, trailer) and that is passed to sendmsg to pass the entire message in a single call.

Signed-off-by: David Howells cc: Trond Myklebust cc: Anna Schumaker cc: Chuck Lever cc: Jeff Layton cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: linux-nfs@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Chuck Lever Tested-by: Daire Byrne --- include/linux/sunrpc/svc.h | 11 +++-- net/sunrpc/svcsock.c | 89 +++++++++++++++----------------------- 2 files changed, 40 insertions(+), 60 deletions(-) diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 877891536c2f..456ae554aa11 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv) extern u32 svc_max_payload(const struct svc_rqst *rqstp); /* - * RPC Requsts and replies are stored in one or more pages. + * RPC Requests and replies are stored in one or more pages. * We maintain an array of pages for each server thread. * Requests are copied into these pages as they arrive. Remaining * pages are available to write the reply into. * - * Pages are sent using ->sendpage so each server thread needs to - * allocate more to replace those used in sending. To help keep track - * of these pages we have a receive list where all pages initialy live, - * and a send list where pages are moved to when there are to be part - * of a reply.
+ * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread + * needs to allocate more to replace those used in sending. To help keep track + * of these pages we have a receive list where all pages initialy live, and a + * send list where pages are moved to when there are to be part of a reply. * * We use xdr_buf for holding responses as it fits well with NFS * read responses (that have a header, and some data pages, and possibly diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 03a4f5615086..f1cc53aad6e0 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1060,16 +1060,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp) return 0; /* record not complete */ } -static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec, - int flags) -{ - return kernel_sendpage(sock, virt_to_page(vec->iov_base), - offset_in_page(vec->iov_base), - vec->iov_len, flags); -} - /* - * kernel_sendpage() is used exclusively to reduce the number of + * MSG_SPLICE_PAGES is used exclusively to reduce the number of * copy operations in this path. Therefore the caller must ensure * that the pages backing @xdr are unchanging. * @@ -1081,65 +1073,54 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, { const struct kvec *head = xdr->head; const struct kvec *tail = xdr->tail; - struct kvec rm = { - .iov_base = &marker, - .iov_len = sizeof(marker), - }; + struct iov_iter iters[3]; + struct bio_vec head_bv, tail_bv; struct msghdr msg = { - .msg_flags = 0, + .msg_flags = MSG_SPLICE_PAGES, }; - int ret; + void *m, *t; + int ret, n = 2, size; *sentp = 0; ret = xdr_alloc_bvec(xdr, GFP_KERNEL); if (ret < 0) return ret; - ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len); - if (ret < 0) - return ret; - *sentp += ret; - if (ret != rm.iov_len) - return -EAGAIN; + m = page_frag_alloc(NULL, sizeof(marker) + head->iov_len + tail->iov_len, + GFP_KERNEL); + if (!m) + return -ENOMEM; - ret = svc_tcp_send_kvec(sock, head, 0); - if (ret < 0) - return ret; - *sentp += ret; - if (ret != head->iov_len) - goto out; + memcpy(m, &marker, sizeof(marker)); + if (head->iov_len) + memcpy(m + sizeof(marker), head->iov_base, head->iov_len); + bvec_set_virt(&head_bv, m, sizeof(marker) + head->iov_len); + iov_iter_bvec(&iters[0], ITER_SOURCE, &head_bv, 1, + sizeof(marker) + head->iov_len); - if (xdr->page_len) { - unsigned int offset, len, remaining; - struct bio_vec *bvec; - - bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT); - offset = offset_in_page(xdr->page_base); - remaining = xdr->page_len; - while (remaining > 0) { - len = min(remaining, bvec->bv_len - offset); - ret = kernel_sendpage(sock, bvec->bv_page, - bvec->bv_offset + offset, - len, 0); - if (ret < 0) - return ret; - *sentp += ret; - if (ret != len) - goto out; - remaining -= len; - offset = 0; - bvec++; - } - } + iov_iter_bvec(&iters[1], ITER_SOURCE, xdr->bvec, + xdr_buf_pagecount(xdr), xdr->page_len); if (tail->iov_len) { - ret = svc_tcp_send_kvec(sock, tail, 0); - if (ret < 0) - return ret; - *sentp += ret; + t = page_frag_alloc(NULL, tail->iov_len, GFP_KERNEL); + if (!t) + return -ENOMEM; + memcpy(t, tail->iov_base, tail->iov_len); + bvec_set_virt(&tail_bv, t, tail->iov_len); + iov_iter_bvec(&iters[2], ITER_SOURCE, &tail_bv, 1, tail->iov_len); + n++; } -out: + size = sizeof(marker) + head->iov_len + xdr->page_len + tail->iov_len; + iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, n, size); + + ret = sock_sendmsg(sock, &msg); + if (ret < 0) + return ret; + if (ret > 0) + *sentp = ret; + if (ret != 
size) + return -EAGAIN; return 0; }
From patchwork Wed Mar 29 14:13:47 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192649
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner, Chuck Lever III, Linus Torvalds, netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Trond Myklebust, Anna Schumaker, linux-nfs@vger.kernel.org
Subject: [RFC PATCH v2 41/48] sunrpc: Rely on TCP sendmsg + MSG_SPLICE_PAGES to copy unspliceable data
Date: Wed, 29 Mar 2023 15:13:47 +0100
Message-Id: <20230329141354.516864-42-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>

Rather than copying data in svc_tcp_sendmsg() into page fragments, just hand in ITER_KVEC iterators as part of the ITER_ITERLIST and rely on TCP to copy them if the pages they're residing on belong to the slab or have a zero refcount.
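[Illustrative aside, not part of the posted diff] The test the commit message alludes to (copy rather than splice when the backing page is a slab page or has a zero refcount) already exists in the networking core as sendpage_ok() in include/linux/net.h. The sketch below only illustrates that check; the function name can_splice_page() is made up for the example:

	#include <linux/net.h>
	#include <linux/mm.h>

	/* Illustrative only: a MSG_SPLICE_PAGES-aware transport must copy data
	 * whose backing page is a slab page or has a zero refcount, because
	 * such a page cannot safely be referenced from an skb fragment. */
	static bool can_splice_page(struct page *page)
	{
		return sendpage_ok(page); /* !PageSlab(page) && page_count(page) >= 1 */
	}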
Signed-off-by: David Howells cc: Trond Myklebust cc: Anna Schumaker cc: Chuck Lever cc: Jeff Layton cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: linux-nfs@vger.kernel.org cc: netdev@vger.kernel.org --- net/sunrpc/svcsock.c | 44 ++++++++++++-------------------------- 1 file changed, 12 insertions(+), 32 deletions(-) diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index f1cc53aad6e0..c1421f6fe57a 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1071,47 +1071,27 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp) static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, rpc_fraghdr marker, unsigned int *sentp) { - const struct kvec *head = xdr->head; - const struct kvec *tail = xdr->tail; - struct iov_iter iters[3]; - struct bio_vec head_bv, tail_bv; - struct msghdr msg = { - .msg_flags = MSG_SPLICE_PAGES, - }; - void *m, *t; - int ret, n = 2, size; + struct iov_iter iters[4]; + struct kvec marker_kv; + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES, }; + int ret, n = 0, size; *sentp = 0; ret = xdr_alloc_bvec(xdr, GFP_KERNEL); if (ret < 0) return ret; - m = page_frag_alloc(NULL, sizeof(marker) + head->iov_len + tail->iov_len, - GFP_KERNEL); - if (!m) - return -ENOMEM; - - memcpy(m, &marker, sizeof(marker)); - if (head->iov_len) - memcpy(m + sizeof(marker), head->iov_base, head->iov_len); - bvec_set_virt(&head_bv, m, sizeof(marker) + head->iov_len); - iov_iter_bvec(&iters[0], ITER_SOURCE, &head_bv, 1, - sizeof(marker) + head->iov_len); - - iov_iter_bvec(&iters[1], ITER_SOURCE, xdr->bvec, + marker_kv.iov_base = &marker; + marker_kv.iov_len = sizeof(marker); + iov_iter_kvec(&iters[n++], ITER_SOURCE, &marker_kv, 1, sizeof(marker)); + iov_iter_kvec(&iters[n++], ITER_SOURCE, xdr->head, 1, xdr->head->iov_len); + iov_iter_bvec(&iters[n++], ITER_SOURCE, xdr->bvec, xdr_buf_pagecount(xdr), xdr->page_len); - if (tail->iov_len) { - t = page_frag_alloc(NULL, tail->iov_len, GFP_KERNEL); - if (!t) - return -ENOMEM; - memcpy(t, tail->iov_base, tail->iov_len); - bvec_set_virt(&tail_bv, t, tail->iov_len); - iov_iter_bvec(&iters[2], ITER_SOURCE, &tail_bv, 1, tail->iov_len); - n++; - } + if (xdr->tail->iov_len) + iov_iter_kvec(&iters[n++], ITER_SOURCE, xdr->tail, 1, xdr->tail->iov_len); - size = sizeof(marker) + head->iov_len + xdr->page_len + tail->iov_len; + size = sizeof(marker) + xdr->head->iov_len + xdr->page_len + xdr->tail->iov_len; iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, n, size); ret = sock_sendmsg(sock, &msg);
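[Illustrative aside, not part of the posted diffs] A condensed, standalone sketch of the pattern the two sunrpc patches converge on, assuming iov_iter_iterlist() and MSG_SPLICE_PAGES from this series are available; send_glued_message() is a made-up name for the example:

	#include <linux/bvec.h>
	#include <linux/net.h>
	#include <linux/socket.h>
	#include <linux/uio.h>

	/* Glue a kvec-backed header and a bvec-backed payload into one message
	 * with iov_iter_iterlist() and emit it with a single sendmsg call. */
	static int send_glued_message(struct socket *sock, struct kvec *hdr,
				      struct bio_vec *pages, unsigned int npages,
				      size_t payload_len)
	{
		struct iov_iter iters[2];
		struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES, };

		iov_iter_kvec(&iters[0], ITER_SOURCE, hdr, 1, hdr->iov_len);
		iov_iter_bvec(&iters[1], ITER_SOURCE, pages, npages, payload_len);
		iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, 2,
				  hdr->iov_len + payload_len);

		return sock_sendmsg(sock, &msg);
	}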