From patchwork Thu Feb 9 10:29:43 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134339
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand,
    Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    syzbot+a440341a59e3b7142895@syzkaller.appspotmail.com,
    Christoph Hellwig, John Hubbard
Subject: [PATCH v13 01/12] splice: Fix O_DIRECT file read splice to avoid reversion of ITER_PIPE
Date: Thu, 9 Feb 2023 10:29:43 +0000
Message-Id: <20230209102954.528942-2-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>

With the upcoming iov_iter_extract_pages() function, pages extracted from a
non-user-backed iterator such as ITER_PIPE aren't pinned.
__iomap_dio_rw(), however, calls iov_iter_revert() to shorten the iterator
to just the bufferage it is going to use - which has the side-effect of
freeing the excess pipe buffers, even though they're attached to a bio and
may get written to by DMA (thanks to Hillf Danton for spotting this[1]).

This then causes memory corruption that is particularly noticeable when the
syzbot test[2] is run.  The test boils down to:

	out = creat(argv[1], 0666);
	ftruncate(out, 0x800);
	lseek(out, 0x200, SEEK_SET);
	in = open(argv[1], O_RDONLY | O_DIRECT | O_NOFOLLOW);
	sendfile(out, in, NULL, 0x1dd00);

run repeatedly in parallel.  What I think is happening is that ftruncate()
occasionally shortens the DIO read that's about to be made by sendfile's
splice core by reducing i_size.

Fix this by splitting the handling of a splice from an O_DIRECT file fd off
from that of non-DIO and, in this case, replacing the use of an ITER_PIPE
iterator with an ITER_BVEC iterator for which reversion won't free the
buffers.  The DIO-specific code bulk-allocates all the buffers it thinks it
is going to use in advance, does the read synchronously and only then trims
the buffer down.  The pages we did use get pushed into the pipe.

This should be more efficient for DIO read by virtue of doing a bulk page
allocation, but slightly less efficient by ignoring any partial page in the
pipe.
Fixes: 920756a3306a ("block: Convert bio_iov_iter_get_pages to use iov_iter_extract_pages")
Reported-by: syzbot+a440341a59e3b7142895@syzkaller.appspotmail.com
Signed-off-by: David Howells
cc: Jens Axboe
cc: Christoph Hellwig
cc: Al Viro
cc: David Hildenbrand
cc: John Hubbard
cc: linux-mm@kvack.org
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/20230207094731.1390-1-hdanton@sina.com/ [1]
Link: https://lore.kernel.org/r/000000000000b0b3c005f3a09383@google.com/ [2]
---

Notes:
    ver #13)
     - Don't completely replace generic_file_splice_read(), but rather only
       use this if we're doing a splicing from an O_DIRECT file fd.

 fs/splice.c | 96 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/fs/splice.c b/fs/splice.c
index 5969b7a1d353..b4be6fc314a1 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -282,6 +282,99 @@ void splice_shrink_spd(struct splice_pipe_desc *spd)
 	kfree(spd->partial);
 }
 
+/*
+ * Splice data from an O_DIRECT file into pages and then add them to the output
+ * pipe.
+ */
+static ssize_t generic_file_direct_splice_read(struct file *in, loff_t *ppos,
+					       struct pipe_inode_info *pipe,
+					       size_t len, unsigned int flags)
+{
+	LIST_HEAD(pages);
+	struct iov_iter to;
+	struct bio_vec *bv;
+	struct kiocb kiocb;
+	struct page *page;
+	unsigned int head;
+	ssize_t ret;
+	size_t used, npages, chunk, remain, reclaim;
+	int i;
+
+	/* Work out how much data we can actually add into the pipe */
+	used = pipe_occupancy(pipe->head, pipe->tail);
+	npages = max_t(ssize_t, pipe->max_usage - used, 0);
+	len = min_t(size_t, len, npages * PAGE_SIZE);
+	npages = DIV_ROUND_UP(len, PAGE_SIZE);
+
+	bv = kmalloc(array_size(npages, sizeof(bv[0])), GFP_KERNEL);
+	if (!bv)
+		return -ENOMEM;
+
+	npages = alloc_pages_bulk_list(GFP_USER, npages, &pages);
+	if (!npages) {
+		kfree(bv);
+		return -ENOMEM;
+	}
+
+	remain = len = min_t(size_t, len, npages * PAGE_SIZE);
+
+	for (i = 0; i < npages; i++) {
+		chunk = min_t(size_t, PAGE_SIZE, remain);
+		page = list_first_entry(&pages, struct page, lru);
+		list_del_init(&page->lru);
+		bv[i].bv_page = page;
+		bv[i].bv_offset = 0;
+		bv[i].bv_len = chunk;
+		remain -= chunk;
+	}
+
+	/* Do the I/O */
+	iov_iter_bvec(&to, ITER_DEST, bv, npages, len);
+	init_sync_kiocb(&kiocb, in);
+	kiocb.ki_pos = *ppos;
+	ret = call_read_iter(in, &kiocb, &to);
+
+	reclaim = npages * PAGE_SIZE;
+	remain = 0;
+	if (ret > 0) {
+		reclaim -= ret;
+		remain = ret;
+		*ppos = kiocb.ki_pos;
+		file_accessed(in);
+	} else if (ret < 0) {
+		/*
+		 * callers of ->splice_read() expect -EAGAIN on
+		 * "can't put anything in there", rather than -EFAULT.
+		 */
+		if (ret == -EFAULT)
+			ret = -EAGAIN;
+	}
+
+	/* Free any pages that didn't get touched at all. */
+	for (; reclaim >= PAGE_SIZE; reclaim -= PAGE_SIZE)
+		__free_page(bv[--npages].bv_page);
+
+	/* Push the remaining pages into the pipe. */
+	head = pipe->head;
+	for (i = 0; i < npages; i++) {
+		struct pipe_buffer *buf = &pipe->bufs[head & (pipe->ring_size - 1)];
+
+		chunk = min_t(size_t, remain, PAGE_SIZE);
+		*buf = (struct pipe_buffer) {
+			.ops	= &default_pipe_buf_ops,
+			.page	= bv[i].bv_page,
+			.offset	= 0,
+			.len	= chunk,
+		};
+		head++;
+		remain -= chunk;
+	}
+	pipe->head = head;
+
+	kfree(bv);
+	return ret;
+}
+
 /**
  * generic_file_splice_read - splice data from file to a pipe
  * @in:		file to splice from
@@ -303,6 +396,9 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
 	struct kiocb kiocb;
 	int ret;
 
+	if (in->f_flags & O_DIRECT)
+		return generic_file_direct_splice_read(in, ppos, pipe, len, flags);
+
 	iov_iter_pipe(&to, ITER_DEST, pipe, len);
 	init_sync_kiocb(&kiocb, in);
 	kiocb.ki_pos = *ppos;

From patchwork Thu Feb 9 10:29:44 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134340
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand,
    Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Christoph Hellwig, John Hubbard
Subject: [PATCH v13 02/12] mm: Pass info, not iter, into filemap_get_pages() and unstatic it
Date: Thu, 9 Feb 2023 10:29:44 +0000
Message-Id: <20230209102954.528942-3-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>

filemap_get_pages() and a number of functions that it calls take an
iterator to provide two things: the number of bytes to be got from the file
specified and whether partially uptodate pages are allowed.  Change these
functions so that this information is passed in directly.  This allows them
to be called without having an iterator to hand.
Also make filemap_get_pages() available so that it can be used by a later
patch to fix splicing from a buffered file.

Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
cc: Jens Axboe
cc: Christoph Hellwig
cc: Matthew Wilcox
cc: Al Viro
cc: David Hildenbrand
cc: John Hubbard
cc: linux-mm@kvack.org
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
---
 include/linux/pagemap.h |  2 ++
 mm/filemap.c            | 31 ++++++++++++++++++-------------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 29e1f9e76eb6..3a7bdb35acff 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -748,6 +748,8 @@ struct page *read_cache_page(struct address_space *, pgoff_t index,
 		filler_t *filler, struct file *file);
 extern struct page * read_cache_page_gfp(struct address_space *mapping,
 		pgoff_t index, gfp_t gfp_mask);
+int filemap_get_pages(struct kiocb *iocb, size_t count,
+		      struct folio_batch *fbatch, bool need_uptodate);
 
 static inline struct page *read_mapping_page(struct address_space *mapping,
 				pgoff_t index, struct file *file)
diff --git a/mm/filemap.c b/mm/filemap.c
index c4d4ace9cc70..b31168a9bafd 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2440,21 +2440,19 @@ static int filemap_read_folio(struct file *file, filler_t filler,
 }
 
 static bool filemap_range_uptodate(struct address_space *mapping,
-		loff_t pos, struct iov_iter *iter, struct folio *folio)
+		loff_t pos, size_t count, struct folio *folio,
+		bool need_uptodate)
 {
-	int count;
-
 	if (folio_test_uptodate(folio))
 		return true;
 	/* pipes can't handle partially uptodate pages */
-	if (iov_iter_is_pipe(iter))
+	if (need_uptodate)
 		return false;
 	if (!mapping->a_ops->is_partially_uptodate)
 		return false;
 	if (mapping->host->i_blkbits >= folio_shift(folio))
 		return false;
 
-	count = iter->count;
 	if (folio_pos(folio) > pos) {
 		count -= folio_pos(folio) - pos;
 		pos = 0;
@@ -2466,8 +2464,8 @@ static bool filemap_range_uptodate(struct address_space *mapping,
 }
 
 static int filemap_update_page(struct kiocb *iocb,
-		struct address_space *mapping, struct iov_iter *iter,
-		struct folio *folio)
+		struct address_space *mapping, size_t count,
+		struct folio *folio, bool need_uptodate)
 {
 	int error;
 
@@ -2501,7 +2499,8 @@ static int filemap_update_page(struct kiocb *iocb,
 		goto unlock;
 
 	error = 0;
-	if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, folio))
+	if (filemap_range_uptodate(mapping, iocb->ki_pos, count, folio,
+				   need_uptodate))
 		goto unlock;
 
 	error = -EAGAIN;
@@ -2577,8 +2576,12 @@ static int filemap_readahead(struct kiocb *iocb, struct file *file,
 	return 0;
 }
 
-static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
-		struct folio_batch *fbatch)
+/*
+ * Extract some folios from the pagecache of a file, reading those pages from
+ * the backing store if necessary and waiting for them.
+ */
+int filemap_get_pages(struct kiocb *iocb, size_t count,
+		      struct folio_batch *fbatch, bool need_uptodate)
 {
 	struct file *filp = iocb->ki_filp;
 	struct address_space *mapping = filp->f_mapping;
@@ -2588,7 +2591,7 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 	struct folio *folio;
 	int err = 0;
 
-	last_index = DIV_ROUND_UP(iocb->ki_pos + iter->count, PAGE_SIZE);
+	last_index = DIV_ROUND_UP(iocb->ki_pos + count, PAGE_SIZE);
retry:
 	if (fatal_signal_pending(current))
 		return -EINTR;
@@ -2621,7 +2624,8 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 		if ((iocb->ki_flags & IOCB_WAITQ) &&
 		    folio_batch_count(fbatch) > 1)
 			iocb->ki_flags |= IOCB_NOWAIT;
-		err = filemap_update_page(iocb, mapping, iter, folio);
+		err = filemap_update_page(iocb, mapping, count, folio,
+					  need_uptodate);
 		if (err)
 			goto err;
 	}
@@ -2691,7 +2695,8 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
 		if (unlikely(iocb->ki_pos >= i_size_read(inode)))
 			break;
 
-		error = filemap_get_pages(iocb, iter, &fbatch);
+		error = filemap_get_pages(iocb, iter->count, &fbatch,
+					  iov_iter_is_pipe(iter));
 		if (error < 0)
 			break;

From patchwork Thu Feb 9 10:29:45 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134342
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand,
    Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Christoph Hellwig, John Hubbard
Subject: [PATCH v13 03/12] splice: Do splice read from a buffered file without using ITER_PIPE
Date: Thu, 9 Feb 2023 10:29:45 +0000
Message-Id: <20230209102954.528942-4-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>

Provide a function to do splice read from a buffered file, pulling the
folios out of the pagecache directly by calling filemap_get_pages() to do
any required reading and then pasting the returned folios into the pipe.

A helper function is provided to do the actual folio pasting and will
handle multipage folios by splicing as many of the relevant subpages as
will fit into the pipe.
The ITER_BVEC-based splicing previously added is then only used for splicing from O_DIRECT files. The code is loosely based on filemap_read() and might belong in mm/filemap.c with that as it needs to use filemap_get_pages(). With this, ITER_PIPE is no longer used. Signed-off-by: David Howells cc: Jens Axboe cc: Christoph Hellwig cc: Al Viro cc: David Hildenbrand cc: John Hubbard cc: linux-mm@kvack.org cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org --- fs/splice.c | 159 ++++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 135 insertions(+), 24 deletions(-) diff --git a/fs/splice.c b/fs/splice.c index b4be6fc314a1..963cbf20abc8 100644 --- a/fs/splice.c +++ b/fs/splice.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -375,6 +376,135 @@ static ssize_t generic_file_direct_splice_read(struct file *in, loff_t *ppos, return ret; } +/* + * Splice subpages from a folio into a pipe. + */ +static size_t splice_folio_into_pipe(struct pipe_inode_info *pipe, + struct folio *folio, + loff_t fpos, size_t size) +{ + struct page *page; + size_t spliced = 0, offset = offset_in_folio(folio, fpos); + + page = folio_page(folio, offset / PAGE_SIZE); + size = min(size, folio_size(folio) - offset); + offset %= PAGE_SIZE; + + while (spliced < size && + !pipe_full(pipe->head, pipe->tail, pipe->max_usage)) { + struct pipe_buffer *buf = &pipe->bufs[pipe->head & (pipe->ring_size - 1)]; + size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced); + + *buf = (struct pipe_buffer) { + .ops = &page_cache_pipe_buf_ops, + .page = page, + .offset = offset, + .len = part, + }; + folio_get(folio); + pipe->head++; + page++; + spliced += part; + offset = 0; + } + + return spliced; +} + +/* + * Splice folios from the pagecache of a buffered (ie. non-O_DIRECT) file into + * a pipe. 
+ */
+static ssize_t generic_file_buffered_splice_read(struct file *in, loff_t *ppos,
+						 struct pipe_inode_info *pipe,
+						 size_t len,
+						 unsigned int flags)
+{
+	struct folio_batch fbatch;
+	size_t total_spliced = 0, used, npages;
+	loff_t isize, end_offset;
+	bool writably_mapped;
+	int i, error = 0;
+
+	struct kiocb iocb = {
+		.ki_filp = in,
+		.ki_pos = *ppos,
+	};
+
+	/* Work out how much data we can actually add into the pipe */
+	used = pipe_occupancy(pipe->head, pipe->tail);
+	npages = max_t(ssize_t, pipe->max_usage - used, 0);
+	len = min_t(size_t, len, npages * PAGE_SIZE);
+
+	folio_batch_init(&fbatch);
+
+	do {
+		cond_resched();
+
+		if (*ppos >= i_size_read(file_inode(in)))
+			break;
+
+		iocb.ki_pos = *ppos;
+		error = filemap_get_pages(&iocb, len, &fbatch, true);
+		if (error < 0)
+			break;
+
+		/*
+		 * i_size must be checked after we know the pages are Uptodate.
+		 *
+		 * Checking i_size after the check allows us to calculate
+		 * the correct value for "nr", which means the zero-filled
+		 * part of the page is not copied back to userspace (unless
+		 * another truncate extends the file - this is desired though).
+		 */
+		isize = i_size_read(file_inode(in));
+		if (unlikely(*ppos >= isize))
+			break;
+		end_offset = min_t(loff_t, isize, *ppos + len);
+
+		/*
+		 * Once we start copying data, we don't want to be touching any
+		 * cachelines that might be contended:
+		 */
+		writably_mapped = mapping_writably_mapped(in->f_mapping);
+
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
+			size_t n;
+
+			if (folio_pos(folio) >= end_offset)
+				goto out;
+			folio_mark_accessed(folio);
+
+			/*
+			 * If users can be writing to this folio using arbitrary
+			 * virtual addresses, take care of potential aliasing
+			 * before reading the folio on the kernel side.
+			 */
+			if (writably_mapped)
+				flush_dcache_folio(folio);
+
+			n = splice_folio_into_pipe(pipe, folio, *ppos, len);
+			if (!n)
+				goto out;
+			len -= n;
+			total_spliced += n;
+			*ppos += n;
+			in->f_ra.prev_pos = *ppos;
+			if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+				goto out;
+		}
+
+		folio_batch_release(&fbatch);
+	} while (len);
+
+out:
+	folio_batch_release(&fbatch);
+	file_accessed(in);
+
+	return total_spliced ? total_spliced : error;
+}
+
 /**
  * generic_file_splice_read - splice data from file to a pipe
  * @in:		file to splice from
@@ -392,32 +522,13 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
 				 struct pipe_inode_info *pipe, size_t len,
 				 unsigned int flags)
 {
-	struct iov_iter to;
-	struct kiocb kiocb;
-	int ret;
-
+	if (unlikely(*ppos >= file_inode(in)->i_sb->s_maxbytes))
+		return 0;
+	if (unlikely(!len))
+		return 0;
 	if (in->f_flags & O_DIRECT)
 		return generic_file_direct_splice_read(in, ppos, pipe, len, flags);
-
-	iov_iter_pipe(&to, ITER_DEST, pipe, len);
-	init_sync_kiocb(&kiocb, in);
-	kiocb.ki_pos = *ppos;
-	ret = call_read_iter(in, &kiocb, &to);
-	if (ret > 0) {
-		*ppos = kiocb.ki_pos;
-		file_accessed(in);
-	} else if (ret < 0) {
-		/* free what was emitted */
-		pipe_discard_from(pipe, to.start_head);
-		/*
-		 * callers of ->splice_read() expect -EAGAIN on
-		 * "can't put anything in there", rather than -EFAULT.
-		 */
-		if (ret == -EFAULT)
-			ret = -EAGAIN;
-	}
-
-	return ret;
+	return generic_file_buffered_splice_read(in, ppos, pipe, len, flags);
 }
 EXPORT_SYMBOL(generic_file_splice_read);

From patchwork Thu Feb 9 10:29:46 2023
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig, John Hubbard
Subject: [PATCH v13 04/12] iov_iter: Kill ITER_PIPE
Date: Thu, 9 Feb 2023 10:29:46 +0000
Message-Id: <20230209102954.528942-5-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
The ITER_PIPE-type iterator was only used for generic_file_splice_read(), but that has now been switched to either pull pages directly from the pagecache for buffered file splice-reads or to use ITER_BVEC instead for O_DIRECT file splice-reads. This leaves ITER_PIPE unused - so remove it.
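With the pipe iterator type gone, consumers such as the cifs hunk in this patch lose their pipe special case and call one generic copy routine for every remaining iterator type. A toy userspace model of that uniform-copy shape (none of these names are kernel API; the struct and function are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy iterator: only the types that survive this patch. */
enum toy_iter_type { TOY_KVEC, TOY_BVEC };

struct toy_iter {
	enum toy_iter_type type;
	char *buf;		/* destination, already mapped */
	size_t count;		/* space remaining */
	size_t copied;
};

/* One code path for all remaining types - the rough analogue of
 * calling copy_page_to_iter() unconditionally, as the cifs hunk now
 * does, instead of branching on iov_iter_is_pipe(). */
static size_t toy_copy_to_iter(const char *src, size_t len,
                               struct toy_iter *i)
{
	if (len > i->count)
		len = i->count;	/* short copy when the iterator is full */
	memcpy(i->buf + i->copied, src, len);
	i->copied += len;
	i->count -= len;
	return len;
}
```

The point of the model is only that the caller no longer cares which variant it holds; capacity limiting and advancement behave the same way for each.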
Signed-off-by: David Howells
cc: Jens Axboe
cc: Christoph Hellwig
cc: Al Viro
cc: David Hildenbrand
cc: John Hubbard
cc: linux-mm@kvack.org
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Christoph Hellwig
---
 fs/cifs/file.c      |   8 +-
 include/linux/uio.h |  14 --
 lib/iov_iter.c      | 435 +-------------------------------------------
 mm/filemap.c        |   3 +-
 4 files changed, 5 insertions(+), 455 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 22dfc1f8b4f1..57ca4eea69dd 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -3806,13 +3806,7 @@ cifs_readdata_to_iov(struct cifs_readdata *rdata, struct iov_iter *iter)
 		size_t copy = min_t(size_t, remaining, PAGE_SIZE);
 		size_t written;

-		if (unlikely(iov_iter_is_pipe(iter))) {
-			void *addr = kmap_atomic(page);
-
-			written = copy_to_iter(addr, copy, iter);
-			kunmap_atomic(addr);
-		} else
-			written = copy_page_to_iter(page, 0, copy, iter);
+		written = copy_page_to_iter(page, 0, copy, iter);
 		remaining -= written;
 		if (written < copy && iov_iter_count(iter) > 0)
 			break;
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 9f158238edba..dcc0ca5ef491 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -11,7 +11,6 @@
 #include

 struct page;
-struct pipe_inode_info;

 struct kvec {
 	void *iov_base; /* and that should *never* hold a userland pointer */
@@ -23,7 +22,6 @@ enum iter_type {
 	ITER_IOVEC,
 	ITER_KVEC,
 	ITER_BVEC,
-	ITER_PIPE,
 	ITER_XARRAY,
 	ITER_DISCARD,
 	ITER_UBUF,
@@ -53,15 +51,10 @@ struct iov_iter {
 		const struct kvec *kvec;
 		const struct bio_vec *bvec;
 		struct xarray *xarray;
-		struct pipe_inode_info *pipe;
 		void __user *ubuf;
 	};
 	union {
 		unsigned long nr_segs;
-		struct {
-			unsigned int head;
-			unsigned int start_head;
-		};
 		loff_t xarray_start;
 	};
 };
@@ -99,11 +92,6 @@ static inline bool iov_iter_is_bvec(const struct iov_iter *i)
 {
 	return iov_iter_type(i) == ITER_BVEC;
 }

-static inline bool iov_iter_is_pipe(const struct iov_iter *i)
-{
-	return iov_iter_type(i) == ITER_PIPE;
-}
-
 static
inline bool iov_iter_is_discard(const struct iov_iter *i) { return iov_iter_type(i) == ITER_DISCARD; @@ -245,8 +233,6 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec unsigned long nr_segs, size_t count); void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec, unsigned long nr_segs, size_t count); -void iov_iter_pipe(struct iov_iter *i, unsigned int direction, struct pipe_inode_info *pipe, - size_t count); void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray, loff_t start, size_t count); diff --git a/lib/iov_iter.c b/lib/iov_iter.c index f9a3ff37ecd1..adc5e8aa8ae8 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -14,8 +14,6 @@ #include #include -#define PIPE_PARANOIA /* for now */ - /* covers ubuf and kbuf alike */ #define iterate_buf(i, n, base, len, off, __p, STEP) { \ size_t __maybe_unused off = 0; \ @@ -186,156 +184,6 @@ static int copyin(void *to, const void __user *from, size_t n) return res; } -static inline struct pipe_buffer *pipe_buf(const struct pipe_inode_info *pipe, - unsigned int slot) -{ - return &pipe->bufs[slot & (pipe->ring_size - 1)]; -} - -#ifdef PIPE_PARANOIA -static bool sanity(const struct iov_iter *i) -{ - struct pipe_inode_info *pipe = i->pipe; - unsigned int p_head = pipe->head; - unsigned int p_tail = pipe->tail; - unsigned int p_occupancy = pipe_occupancy(p_head, p_tail); - unsigned int i_head = i->head; - unsigned int idx; - - if (i->last_offset) { - struct pipe_buffer *p; - if (unlikely(p_occupancy == 0)) - goto Bad; // pipe must be non-empty - if (unlikely(i_head != p_head - 1)) - goto Bad; // must be at the last buffer... - - p = pipe_buf(pipe, i_head); - if (unlikely(p->offset + p->len != abs(i->last_offset))) - goto Bad; // ... 
at the end of segment - } else { - if (i_head != p_head) - goto Bad; // must be right after the last buffer - } - return true; -Bad: - printk(KERN_ERR "idx = %d, offset = %d\n", i_head, i->last_offset); - printk(KERN_ERR "head = %d, tail = %d, buffers = %d\n", - p_head, p_tail, pipe->ring_size); - for (idx = 0; idx < pipe->ring_size; idx++) - printk(KERN_ERR "[%p %p %d %d]\n", - pipe->bufs[idx].ops, - pipe->bufs[idx].page, - pipe->bufs[idx].offset, - pipe->bufs[idx].len); - WARN_ON(1); - return false; -} -#else -#define sanity(i) true -#endif - -static struct page *push_anon(struct pipe_inode_info *pipe, unsigned size) -{ - struct page *page = alloc_page(GFP_USER); - if (page) { - struct pipe_buffer *buf = pipe_buf(pipe, pipe->head++); - *buf = (struct pipe_buffer) { - .ops = &default_pipe_buf_ops, - .page = page, - .offset = 0, - .len = size - }; - } - return page; -} - -static void push_page(struct pipe_inode_info *pipe, struct page *page, - unsigned int offset, unsigned int size) -{ - struct pipe_buffer *buf = pipe_buf(pipe, pipe->head++); - *buf = (struct pipe_buffer) { - .ops = &page_cache_pipe_buf_ops, - .page = page, - .offset = offset, - .len = size - }; - get_page(page); -} - -static inline int last_offset(const struct pipe_buffer *buf) -{ - if (buf->ops == &default_pipe_buf_ops) - return buf->len; // buf->offset is 0 for those - else - return -(buf->offset + buf->len); -} - -static struct page *append_pipe(struct iov_iter *i, size_t size, - unsigned int *off) -{ - struct pipe_inode_info *pipe = i->pipe; - int offset = i->last_offset; - struct pipe_buffer *buf; - struct page *page; - - if (offset > 0 && offset < PAGE_SIZE) { - // some space in the last buffer; add to it - buf = pipe_buf(pipe, pipe->head - 1); - size = min_t(size_t, size, PAGE_SIZE - offset); - buf->len += size; - i->last_offset += size; - i->count -= size; - *off = offset; - return buf->page; - } - // OK, we need a new buffer - *off = 0; - size = min_t(size_t, size, PAGE_SIZE); - if 
(pipe_full(pipe->head, pipe->tail, pipe->max_usage)) - return NULL; - page = push_anon(pipe, size); - if (!page) - return NULL; - i->head = pipe->head - 1; - i->last_offset = size; - i->count -= size; - return page; -} - -static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t bytes, - struct iov_iter *i) -{ - struct pipe_inode_info *pipe = i->pipe; - unsigned int head = pipe->head; - - if (unlikely(bytes > i->count)) - bytes = i->count; - - if (unlikely(!bytes)) - return 0; - - if (!sanity(i)) - return 0; - - if (offset && i->last_offset == -offset) { // could we merge it? - struct pipe_buffer *buf = pipe_buf(pipe, head - 1); - if (buf->page == page) { - buf->len += bytes; - i->last_offset -= bytes; - i->count -= bytes; - return bytes; - } - } - if (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) - return 0; - - push_page(pipe, page, offset, bytes); - i->last_offset = -(offset + bytes); - i->head = head; - i->count -= bytes; - return bytes; -} - /* * fault_in_iov_iter_readable - fault in iov iterator for reading * @i: iterator @@ -439,46 +287,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction, } EXPORT_SYMBOL(iov_iter_init); -// returns the offset in partial buffer (if any) -static inline unsigned int pipe_npages(const struct iov_iter *i, int *npages) -{ - struct pipe_inode_info *pipe = i->pipe; - int used = pipe->head - pipe->tail; - int off = i->last_offset; - - *npages = max((int)pipe->max_usage - used, 0); - - if (off > 0 && off < PAGE_SIZE) { // anon and not full - (*npages)++; - return off; - } - return 0; -} - -static size_t copy_pipe_to_iter(const void *addr, size_t bytes, - struct iov_iter *i) -{ - unsigned int off, chunk; - - if (unlikely(bytes > i->count)) - bytes = i->count; - if (unlikely(!bytes)) - return 0; - - if (!sanity(i)) - return 0; - - for (size_t n = bytes; n; n -= chunk) { - struct page *page = append_pipe(i, n, &off); - chunk = min_t(size_t, n, PAGE_SIZE - off); - if (!page) - return bytes - n; - 
memcpy_to_page(page, off, addr, chunk); - addr += chunk; - } - return bytes; -} - static __wsum csum_and_memcpy(void *to, const void *from, size_t len, __wsum sum, size_t off) { @@ -486,44 +294,10 @@ static __wsum csum_and_memcpy(void *to, const void *from, size_t len, return csum_block_add(sum, next, off); } -static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes, - struct iov_iter *i, __wsum *sump) -{ - __wsum sum = *sump; - size_t off = 0; - unsigned int chunk, r; - - if (unlikely(bytes > i->count)) - bytes = i->count; - if (unlikely(!bytes)) - return 0; - - if (!sanity(i)) - return 0; - - while (bytes) { - struct page *page = append_pipe(i, bytes, &r); - char *p; - - if (!page) - break; - chunk = min_t(size_t, bytes, PAGE_SIZE - r); - p = kmap_local_page(page); - sum = csum_and_memcpy(p + r, addr + off, chunk, sum, off); - kunmap_local(p); - off += chunk; - bytes -= chunk; - } - *sump = sum; - return off; -} - size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i) { if (WARN_ON_ONCE(i->data_source)) return 0; - if (unlikely(iov_iter_is_pipe(i))) - return copy_pipe_to_iter(addr, bytes, i); if (user_backed_iter(i)) might_fault(); iterate_and_advance(i, bytes, base, len, off, @@ -545,42 +319,6 @@ static int copyout_mc(void __user *to, const void *from, size_t n) return n; } -static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes, - struct iov_iter *i) -{ - size_t xfer = 0; - unsigned int off, chunk; - - if (unlikely(bytes > i->count)) - bytes = i->count; - if (unlikely(!bytes)) - return 0; - - if (!sanity(i)) - return 0; - - while (bytes) { - struct page *page = append_pipe(i, bytes, &off); - unsigned long rem; - char *p; - - if (!page) - break; - chunk = min_t(size_t, bytes, PAGE_SIZE - off); - p = kmap_local_page(page); - rem = copy_mc_to_kernel(p + off, addr + xfer, chunk); - chunk -= rem; - kunmap_local(p); - xfer += chunk; - bytes -= chunk; - if (rem) { - iov_iter_revert(i, rem); - break; - } - } - return xfer; -} 
- /** * _copy_mc_to_iter - copy to iter with source memory error exception handling * @addr: source kernel address @@ -600,9 +338,8 @@ static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes, * alignment and poison alignment assumptions to avoid re-triggering * hardware exceptions. * - * * ITER_KVEC, ITER_PIPE, and ITER_BVEC can return short copies. - * Compare to copy_to_iter() where only ITER_IOVEC attempts might return - * a short copy. + * * ITER_KVEC and ITER_BVEC can return short copies. Compare to + * copy_to_iter() where only ITER_IOVEC attempts might return a short copy. * * Return: number of bytes copied (may be %0) */ @@ -610,8 +347,6 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i) { if (WARN_ON_ONCE(i->data_source)) return 0; - if (unlikely(iov_iter_is_pipe(i))) - return copy_mc_pipe_to_iter(addr, bytes, i); if (user_backed_iter(i)) might_fault(); __iterate_and_advance(i, bytes, base, len, off, @@ -717,8 +452,6 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes, return 0; if (WARN_ON_ONCE(i->data_source)) return 0; - if (unlikely(iov_iter_is_pipe(i))) - return copy_page_to_iter_pipe(page, offset, bytes, i); page += offset / PAGE_SIZE; // first subpage offset %= PAGE_SIZE; while (1) { @@ -767,36 +500,8 @@ size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes, } EXPORT_SYMBOL(copy_page_from_iter); -static size_t pipe_zero(size_t bytes, struct iov_iter *i) -{ - unsigned int chunk, off; - - if (unlikely(bytes > i->count)) - bytes = i->count; - if (unlikely(!bytes)) - return 0; - - if (!sanity(i)) - return 0; - - for (size_t n = bytes; n; n -= chunk) { - struct page *page = append_pipe(i, n, &off); - char *p; - - if (!page) - return bytes - n; - chunk = min_t(size_t, n, PAGE_SIZE - off); - p = kmap_local_page(page); - memset(p + off, 0, chunk); - kunmap_local(p); - } - return bytes; -} - size_t iov_iter_zero(size_t bytes, struct iov_iter *i) { - if 
(unlikely(iov_iter_is_pipe(i))) - return pipe_zero(bytes, i); iterate_and_advance(i, bytes, base, len, count, clear_user(base, len), memset(base, 0, len) @@ -827,32 +532,6 @@ size_t copy_page_from_iter_atomic(struct page *page, unsigned offset, size_t byt } EXPORT_SYMBOL(copy_page_from_iter_atomic); -static void pipe_advance(struct iov_iter *i, size_t size) -{ - struct pipe_inode_info *pipe = i->pipe; - int off = i->last_offset; - - if (!off && !size) { - pipe_discard_from(pipe, i->start_head); // discard everything - return; - } - i->count -= size; - while (1) { - struct pipe_buffer *buf = pipe_buf(pipe, i->head); - if (off) /* make it relative to the beginning of buffer */ - size += abs(off) - buf->offset; - if (size <= buf->len) { - buf->len = size; - i->last_offset = last_offset(buf); - break; - } - size -= buf->len; - i->head++; - off = 0; - } - pipe_discard_from(pipe, i->head + 1); // discard everything past this one -} - static void iov_iter_bvec_advance(struct iov_iter *i, size_t size) { const struct bio_vec *bvec, *end; @@ -904,8 +583,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size) iov_iter_iovec_advance(i, size); } else if (iov_iter_is_bvec(i)) { iov_iter_bvec_advance(i, size); - } else if (iov_iter_is_pipe(i)) { - pipe_advance(i, size); } else if (iov_iter_is_discard(i)) { i->count -= size; } @@ -919,26 +596,6 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll) if (WARN_ON(unroll > MAX_RW_COUNT)) return; i->count += unroll; - if (unlikely(iov_iter_is_pipe(i))) { - struct pipe_inode_info *pipe = i->pipe; - unsigned int head = pipe->head; - - while (head > i->start_head) { - struct pipe_buffer *b = pipe_buf(pipe, --head); - if (unroll < b->len) { - b->len -= unroll; - i->last_offset = last_offset(b); - i->head = head; - return; - } - unroll -= b->len; - pipe_buf_release(pipe, b); - pipe->head--; - } - i->last_offset = 0; - i->head = head; - return; - } if (unlikely(iov_iter_is_discard(i))) return; if (unroll <= i->iov_offset) { @@ 
-1026,24 +683,6 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction, } EXPORT_SYMBOL(iov_iter_bvec); -void iov_iter_pipe(struct iov_iter *i, unsigned int direction, - struct pipe_inode_info *pipe, - size_t count) -{ - BUG_ON(direction != READ); - WARN_ON(pipe_full(pipe->head, pipe->tail, pipe->ring_size)); - *i = (struct iov_iter){ - .iter_type = ITER_PIPE, - .data_source = false, - .pipe = pipe, - .head = pipe->head, - .start_head = pipe->head, - .last_offset = 0, - .count = count - }; -} -EXPORT_SYMBOL(iov_iter_pipe); - /** * iov_iter_xarray - Initialise an I/O iterator to use the pages in an xarray * @i: The iterator to initialise. @@ -1168,19 +807,6 @@ bool iov_iter_is_aligned(const struct iov_iter *i, unsigned addr_mask, if (iov_iter_is_bvec(i)) return iov_iter_aligned_bvec(i, addr_mask, len_mask); - if (iov_iter_is_pipe(i)) { - size_t size = i->count; - - if (size & len_mask) - return false; - if (size && i->last_offset > 0) { - if (i->last_offset & addr_mask) - return false; - } - - return true; - } - if (iov_iter_is_xarray(i)) { if (i->count & len_mask) return false; @@ -1250,14 +876,6 @@ unsigned long iov_iter_alignment(const struct iov_iter *i) if (iov_iter_is_bvec(i)) return iov_iter_alignment_bvec(i); - if (iov_iter_is_pipe(i)) { - size_t size = i->count; - - if (size && i->last_offset > 0) - return size | i->last_offset; - return size; - } - if (iov_iter_is_xarray(i)) return (i->xarray_start + i->iov_offset) | i->count; @@ -1309,36 +927,6 @@ static int want_pages_array(struct page ***res, size_t size, return count; } -static ssize_t pipe_get_pages(struct iov_iter *i, - struct page ***pages, size_t maxsize, unsigned maxpages, - size_t *start) -{ - unsigned int npages, count, off, chunk; - struct page **p; - size_t left; - - if (!sanity(i)) - return -EFAULT; - - *start = off = pipe_npages(i, &npages); - if (!npages) - return -EFAULT; - count = want_pages_array(pages, maxsize, off, min(npages, maxpages)); - if (!count) - return -ENOMEM; - p = 
*pages; - for (npages = 0, left = maxsize ; npages < count; npages++, left -= chunk) { - struct page *page = append_pipe(i, left, &off); - if (!page) - break; - chunk = min_t(size_t, left, PAGE_SIZE - off); - get_page(*p++ = page); - } - if (!npages) - return -EFAULT; - return maxsize - left; -} - static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa, pgoff_t index, unsigned int nr_pages) { @@ -1486,8 +1074,6 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i, } return maxsize; } - if (iov_iter_is_pipe(i)) - return pipe_get_pages(i, pages, maxsize, maxpages, start); if (iov_iter_is_xarray(i)) return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); return -EFAULT; @@ -1577,9 +1163,7 @@ size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate, } sum = csum_shift(csstate->csum, csstate->off); - if (unlikely(iov_iter_is_pipe(i))) - bytes = csum_and_copy_to_pipe_iter(addr, bytes, i, &sum); - else iterate_and_advance(i, bytes, base, len, off, ({ + iterate_and_advance(i, bytes, base, len, off, ({ next = csum_and_copy_to_user(addr + off, base, len); sum = csum_block_add(sum, next, off); next ? 
 			0 : len;
@@ -1664,15 +1248,6 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
 		return iov_npages(i, maxpages);
 	if (iov_iter_is_bvec(i))
 		return bvec_npages(i, maxpages);
-	if (iov_iter_is_pipe(i)) {
-		int npages;
-
-		if (!sanity(i))
-			return 0;
-
-		pipe_npages(i, &npages);
-		return min(npages, maxpages);
-	}
 	if (iov_iter_is_xarray(i)) {
 		unsigned offset = (i->xarray_start + i->iov_offset) % PAGE_SIZE;
 		int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE);
@@ -1685,10 +1260,6 @@ EXPORT_SYMBOL(iov_iter_npages);
 const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags)
 {
 	*new = *old;
-	if (unlikely(iov_iter_is_pipe(new))) {
-		WARN_ON(1);
-		return NULL;
-	}
 	if (iov_iter_is_bvec(new))
 		return new->bvec = kmemdup(new->bvec,
 				new->nr_segs * sizeof(struct bio_vec),
diff --git a/mm/filemap.c b/mm/filemap.c
index b31168a9bafd..6970be64a3e0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2695,8 +2695,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
 		if (unlikely(iocb->ki_pos >= i_size_read(inode)))
 			break;

-		error = filemap_get_pages(iocb, iter->count, &fbatch,
-					  iov_iter_is_pipe(iter));
+		error = filemap_get_pages(iocb, iter->count, &fbatch, false);
 		if (error < 0)
 			break;

From patchwork Thu Feb 9 10:29:47 2023
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig, John Hubbard
Subject: [PATCH v13 05/12] iov_iter: Define flags to qualify page extraction
Date: Thu, 9 Feb 2023 10:29:47 +0000
Message-Id: <20230209102954.528942-6-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
Define flags to qualify page extraction to pass into iov_iter_*_pages*()
rather than passing in FOLL_* flags.

For now only a flag to allow peer-to-peer DMA is supported.

Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
cc: Al Viro
cc: Jens Axboe
cc: Logan Gunthorpe
cc: linux-fsdevel@vger.kernel.org
cc: linux-block@vger.kernel.org
---

Notes:
    ver #12)
     - Use __bitwise for the extraction flags typedef.
    ver #11)
     - Use __bitwise for the extraction flags.
    ver #9)
     - Change extract_flags to extraction_flags.
    ver #7)
     - Don't use FOLL_* as a parameter, but rather define constants
       specifically to use with iov_iter_*_pages*().
     - Drop the I/O direction constants for now.

 block/bio.c         |  6 +++---
 block/blk-map.c     |  8 ++++----
 include/linux/uio.h | 10 ++++++++--
 lib/iov_iter.c      | 14 ++++++++------
 4 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index ab59a491a883..b97f3991c904 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1245,11 +1245,11 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
  */
 static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
+	iov_iter_extraction_t extraction_flags = 0;
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
 	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
-	unsigned int gup_flags = 0;
 	ssize_t size, left;
 	unsigned len, i = 0;
 	size_t offset, trim;
@@ -1264,7 +1264,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
 	if (bio->bi_bdev && blk_queue_pci_p2pdma(bio->bi_bdev->bd_disk->queue))
-		gup_flags |= FOLL_PCI_P2PDMA;
+		extraction_flags |= ITER_ALLOW_P2PDMA;
 
 	/*
 	 * Each segment in the iov is required to be a block size multiple.
@@ -1275,7 +1275,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	 */
 	size = iov_iter_get_pages(iter, pages,
 				  UINT_MAX - bio->bi_iter.bi_size,
-				  nr_pages, &offset, gup_flags);
+				  nr_pages, &offset, extraction_flags);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
 
diff --git a/block/blk-map.c b/block/blk-map.c
index 19940c978c73..080dd60485be 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -265,9 +265,9 @@ static struct bio *blk_rq_map_bio_alloc(struct request *rq,
 static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		gfp_t gfp_mask)
 {
+	iov_iter_extraction_t extraction_flags = 0;
 	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
 	unsigned int nr_vecs = iov_iter_npages(iter, BIO_MAX_VECS);
-	unsigned int gup_flags = 0;
 	struct bio *bio;
 	int ret;
 	int j;
@@ -280,7 +280,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		return -ENOMEM;
 
 	if (blk_queue_pci_p2pdma(rq->q))
-		gup_flags |= FOLL_PCI_P2PDMA;
+		extraction_flags |= ITER_ALLOW_P2PDMA;
 
 	while (iov_iter_count(iter)) {
 		struct page **pages, *stack_pages[UIO_FASTIOV];
@@ -291,10 +291,10 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		if (nr_vecs <= ARRAY_SIZE(stack_pages)) {
 			pages = stack_pages;
 			bytes = iov_iter_get_pages(iter, pages, LONG_MAX,
-						   nr_vecs, &offs, gup_flags);
+						   nr_vecs, &offs, extraction_flags);
 		} else {
 			bytes = iov_iter_get_pages_alloc(iter, &pages,
-						LONG_MAX, &offs, gup_flags);
+						LONG_MAX, &offs, extraction_flags);
 		}
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
diff --git a/include/linux/uio.h b/include/linux/uio.h
index dcc0ca5ef491..af70e4c9ea27 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -12,6 +12,8 @@
 
 struct page;
 
+typedef unsigned int __bitwise iov_iter_extraction_t;
+
 struct kvec {
 	void *iov_base; /* and that should *never* hold a userland pointer */
 	size_t iov_len;
@@ -238,12 +240,12 @@ void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *
 		     loff_t start, size_t count);
 ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
 		size_t maxsize, unsigned maxpages, size_t *start,
-		unsigned gup_flags);
+		iov_iter_extraction_t extraction_flags);
 ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages,
 		size_t maxsize, unsigned maxpages, size_t *start);
 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 		struct page ***pages, size_t maxsize, size_t *start,
-		unsigned gup_flags);
+		iov_iter_extraction_t extraction_flags);
 ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 		struct page ***pages, size_t maxsize, size_t *start);
 int iov_iter_npages(const struct iov_iter *i, int maxpages);
@@ -346,4 +348,8 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 	};
 }
 
+/* Flags for iov_iter_get/extract_pages*() */
+/* Allow P2PDMA on the extracted pages */
+#define ITER_ALLOW_P2PDMA	((__force iov_iter_extraction_t)0x01)
+
 #endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index adc5e8aa8ae8..34ee3764d0fa 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1020,9 +1020,9 @@ static struct page *first_bvec_segment(const struct iov_iter *i,
 static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 		   struct page ***pages, size_t maxsize,
 		   unsigned int maxpages, size_t *start,
-		   unsigned int gup_flags)
+		   iov_iter_extraction_t extraction_flags)
 {
-	unsigned int n;
+	unsigned int n, gup_flags = 0;
 
 	if (maxsize > i->count)
 		maxsize = i->count;
@@ -1030,6 +1030,8 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 		return 0;
 	if (maxsize > MAX_RW_COUNT)
 		maxsize = MAX_RW_COUNT;
+	if (extraction_flags & ITER_ALLOW_P2PDMA)
+		gup_flags |= FOLL_PCI_P2PDMA;
 
 	if (likely(user_backed_iter(i))) {
 		unsigned long addr;
@@ -1081,14 +1083,14 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 
 ssize_t iov_iter_get_pages(struct iov_iter *i,
 		   struct page **pages, size_t maxsize, unsigned maxpages,
-		   size_t *start, unsigned gup_flags)
+		   size_t *start, iov_iter_extraction_t extraction_flags)
 {
 	if (!maxpages)
 		return 0;
 	BUG_ON(!pages);
 
 	return __iov_iter_get_pages_alloc(i, &pages, maxsize, maxpages,
-					  start, gup_flags);
+					  start, extraction_flags);
 }
 EXPORT_SYMBOL_GPL(iov_iter_get_pages);
@@ -1101,14 +1103,14 @@ EXPORT_SYMBOL(iov_iter_get_pages2);
 
 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 		   struct page ***pages, size_t maxsize,
-		   size_t *start, unsigned gup_flags)
+		   size_t *start, iov_iter_extraction_t extraction_flags)
 {
 	ssize_t len;
 
 	*pages = NULL;
 
 	len = __iov_iter_get_pages_alloc(i, pages, maxsize, ~0U, start,
-					 gup_flags);
+					 extraction_flags);
 	if (len <= 0) {
 		kvfree(*pages);
 		*pages = NULL;

From patchwork Thu Feb 9 10:29:48 2023
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand,
    Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig,
    John Hubbard
Subject: [PATCH v13 06/12] iov_iter: Add a function to extract a page list
 from an iterator
Date: Thu, 9 Feb 2023 10:29:48 +0000
Message-Id: <20230209102954.528942-7-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
MIME-Version: 1.0

Add a
function, iov_iter_extract_pages(), to extract a list of pages from an
iterator.  The pages may be returned with a pin added or nothing,
depending on the type of iterator.

Add a second function, iov_iter_extract_will_pin(), to determine how the
cleanup should be done.

There are two cases:

 (1) ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have pins (FOLL_PIN) obtained on them so that a
     concurrent fork() will forcibly copy the page so that DMA is done
     to/from the parent's buffer and is unavailable to/unaffected by the
     child process.

     iov_iter_extract_will_pin() will return true for this case.  The
     caller should use something like unpin_user_page() to dispose of the
     page.

 (2) Any other sort of iterator.

     No refs or pins are obtained on the page; the assumption is made that
     the caller will manage page retention.

     iov_iter_extract_will_pin() will return false.  The pages don't need
     additional disposal.

Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
cc: Al Viro
cc: John Hubbard
cc: David Hildenbrand
cc: Matthew Wilcox
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---

Notes:
    ver #12)
     - ITER_PIPE is gone, so drop related bits.
     - Don't specify FOLL_PIN as that's implied by pin_user_pages_fast().
    ver #11)
     - Fix iov_iter_extract_kvec_pages() to include the offset into the
       page in the returned starting offset.
     - Use __bitwise for the extraction flags.
    ver #10)
     - Fix use of i->kvec in iov_iter_extract_bvec_pages() to be i->bvec.
    ver #9)
     - Rename iov_iter_extract_mode() to iov_iter_extract_will_pin() and
       make it return true/false not FOLL_PIN/0 as FOLL_PIN is going to be
       made private to mm/.
     - Change extract_flags to extraction_flags.
    ver #8)
     - It seems that all DIO is supposed to be done under FOLL_PIN now, and
       not FOLL_GET, so switch to only using pin_user_pages() for
       user-backed iters.
     - Wrap an argument in brackets in the iov_iter_extract_mode() macro.
     - Drop the extract_flags argument to iov_iter_extract_mode() for now
       [hch].
    ver #7)
     - Switch to passing in iter-specific flags rather than FOLL_* flags.
     - Drop the direction flags for now.
     - Use ITER_ALLOW_P2PDMA to request FOLL_PCI_P2PDMA.
     - Disallow use of ITER_ALLOW_P2PDMA with non-user-backed iter.
     - Add support for extraction from KVEC-type iters.
     - Use iov_iter_advance() rather than open-coding it.
     - Make BVEC- and KVEC-type skip over initial empty vectors.
    ver #6)
     - Add back the function to indicate the cleanup mode.
     - Drop the cleanup_mode return arg to iov_iter_extract_pages().
     - Pass FOLL_SOURCE/DEST_BUF in gup_flags.  Check this against the
       iter data_source.
    ver #4)
     - Use ITER_SOURCE/DEST instead of WRITE/READ.
     - Allow additional FOLL_* flags, such as FOLL_PCI_P2PDMA to be passed
       in.
    ver #3)
     - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party
       access to get/pin_user_pages_fast()[1].

 include/linux/uio.h |  27 ++++-
 lib/iov_iter.c      | 264 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 290 insertions(+), 1 deletion(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index af70e4c9ea27..cf6658066736 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -347,9 +347,34 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 		.count = count
 	};
 }
-
 /* Flags for iov_iter_get/extract_pages*() */
 /* Allow P2PDMA on the extracted pages */
 #define ITER_ALLOW_P2PDMA	((__force iov_iter_extraction_t)0x01)
 
+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+			       size_t maxsize, unsigned int maxpages,
+			       iov_iter_extraction_t extraction_flags,
+			       size_t *offset0);
+
+/**
+ * iov_iter_extract_will_pin - Indicate how pages from the iterator will be retained
+ * @iter: The iterator
+ *
+ * Examine the iterator and indicate by returning true or false as to how, if
+ * at all, pages extracted from the iterator will be retained by the extraction
+ * function.
+ *
+ * %true indicates that the pages will have a pin placed in them that the
+ * caller must unpin.  This must be done for DMA/async DIO to force fork()
+ * to forcibly copy a page for the child (the parent must retain the original
+ * page).
+ *
+ * %false indicates that no measures are taken and that it's up to the caller
+ * to retain the pages.
+ */
+static inline bool iov_iter_extract_will_pin(const struct iov_iter *iter)
+{
+	return user_backed_iter(iter);
+}
+
 #endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 34ee3764d0fa..8d34b6552179 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1487,3 +1487,267 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
 	i->iov -= state->nr_segs - i->nr_segs;
 	i->nr_segs = state->nr_segs;
 }
+
+/*
+ * Extract a list of contiguous pages from an ITER_XARRAY iterator.  This
+ * does not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_xarray_pages(struct iov_iter *i,
+					     struct page ***pages, size_t maxsize,
+					     unsigned int maxpages,
+					     iov_iter_extraction_t extraction_flags,
+					     size_t *offset0)
+{
+	struct page *page, **p;
+	unsigned int nr = 0, offset;
+	loff_t pos = i->xarray_start + i->iov_offset;
+	pgoff_t index = pos >> PAGE_SHIFT;
+	XA_STATE(xas, i->xarray, index);
+
+	offset = pos & ~PAGE_MASK;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	rcu_read_lock();
+	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+		if (xas_retry(&xas, page))
+			continue;
+
+		/* Has the page moved or been split? */
+		if (unlikely(page != xas_reload(&xas))) {
+			xas_reset(&xas);
+			continue;
+		}
+
+		p[nr++] = find_subpage(page, xas.xa_index);
+		if (nr == maxpages)
+			break;
+	}
+	rcu_read_unlock();
+
+	maxsize = min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_BVEC iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   iov_iter_extraction_t extraction_flags,
+					   size_t *offset0)
+{
+	struct page **p, *page;
+	size_t skip = i->iov_offset, offset;
+	int k;
+
+	for (;;) {
+		if (i->nr_segs == 0)
+			return 0;
+		maxsize = min(maxsize, i->bvec->bv_len - skip);
+		if (maxsize)
+			break;
+		i->iov_offset = 0;
+		i->nr_segs--;
+		i->bvec++;
+		skip = 0;
+	}
+
+	skip += i->bvec->bv_offset;
+	page = i->bvec->bv_page + skip / PAGE_SIZE;
+	offset = skip % PAGE_SIZE;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+	for (k = 0; k < maxpages; k++)
+		p[k] = page + k;
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of virtually contiguous pages from an ITER_KVEC iterator.
+ * This does not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   iov_iter_extraction_t extraction_flags,
+					   size_t *offset0)
+{
+	struct page **p, *page;
+	const void *kaddr;
+	size_t skip = i->iov_offset, offset, len;
+	int k;
+
+	for (;;) {
+		if (i->nr_segs == 0)
+			return 0;
+		maxsize = min(maxsize, i->kvec->iov_len - skip);
+		if (maxsize)
+			break;
+		i->iov_offset = 0;
+		i->nr_segs--;
+		i->kvec++;
+		skip = 0;
+	}
+
+	kaddr = i->kvec->iov_base + skip;
+	offset = (unsigned long)kaddr & ~PAGE_MASK;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	kaddr -= offset;
+	len = offset + maxsize;
+	for (k = 0; k < maxpages; k++) {
+		size_t seg = min_t(size_t, len, PAGE_SIZE);
+
+		if (is_vmalloc_or_module_addr(kaddr))
+			page = vmalloc_to_page(kaddr);
+		else
+			page = virt_to_page(kaddr);
+
+		p[k] = page;
+		len -= seg;
+		kaddr += PAGE_SIZE;
+	}
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get a pin on
+ * each of them.  This should only be used if the iterator is user-backed
+ * (IOBUF/UBUF).
+ *
+ * It does not get refs on the pages, but the pages must be unpinned by the
+ * caller once the transfer is complete.
+ *
+ * This is safe to be used where background IO/DMA *is* going to be modifying
+ * the buffer; using a pin rather than a ref forces fork() to give the child
+ * a copy of the page.
+ */
+static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
+					   struct page ***pages,
+					   size_t maxsize,
+					   unsigned int maxpages,
+					   iov_iter_extraction_t extraction_flags,
+					   size_t *offset0)
+{
+	unsigned long addr;
+	unsigned int gup_flags = 0;
+	size_t offset;
+	int res;
+
+	if (i->data_source == ITER_DEST)
+		gup_flags |= FOLL_WRITE;
+	if (extraction_flags & ITER_ALLOW_P2PDMA)
+		gup_flags |= FOLL_PCI_P2PDMA;
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = first_iovec_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/**
+ * iov_iter_extract_pages - Extract a list of contiguous pages from an iterator
+ * @i: The iterator to extract from
+ * @pages: Where to return the list of pages
+ * @maxsize: The maximum amount of iterator to extract
+ * @maxpages: The maximum size of the list of pages
+ * @extraction_flags: Flags to qualify request
+ * @offset0: Where to return the starting offset into (*@pages)[0]
+ *
+ * Extract a list of contiguous pages from the current point of the iterator,
+ * advancing the iterator.  The maximum number of pages and the maximum amount
+ * of page contents can be set.
+ *
+ * If *@pages is NULL, a page list will be allocated to the required size and
+ * *@pages will be set to its base.  If *@pages is not NULL, it will be assumed
+ * that the caller allocated a page list at least @maxpages in size and this
+ * will be filled in.
+ *
+ * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA
+ * be allowed on the pages extracted.
+ *
+ * The iov_iter_extract_will_pin() function can be used to query how cleanup
+ * should be performed.
+ *
+ * Extra refs or pins on the pages may be obtained as follows:
+ *
+ *  (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF), pins will be
+ *      added to the pages, but refs will not be taken.
+ *      iov_iter_extract_will_pin() will return true.
+ *
+ *  (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are
+ *      merely listed; no extra refs or pins are obtained.
+ *      iov_iter_extract_will_pin() will return false.
+ *
+ * Note also:
+ *
+ *  (*) Use with ITER_DISCARD is not supported as that has no content.
+ *
+ * On success, the function sets *@pages to the new pagelist, if allocated, and
+ * sets *offset0 to the offset into the first page.
+ *
+ * It may also return -ENOMEM and -EFAULT.
+ */
+ssize_t iov_iter_extract_pages(struct iov_iter *i,
+			       struct page ***pages,
+			       size_t maxsize,
+			       unsigned int maxpages,
+			       iov_iter_extraction_t extraction_flags,
+			       size_t *offset0)
+{
+	maxsize = min_t(size_t, min_t(size_t, maxsize, i->count), MAX_RW_COUNT);
+	if (!maxsize)
+		return 0;
+
+	if (likely(user_backed_iter(i)))
+		return iov_iter_extract_user_pages(i, pages, maxsize,
+						   maxpages, extraction_flags,
+						   offset0);
+	if (iov_iter_is_kvec(i))
+		return iov_iter_extract_kvec_pages(i, pages, maxsize,
+						   maxpages, extraction_flags,
+						   offset0);
+	if (iov_iter_is_bvec(i))
+		return iov_iter_extract_bvec_pages(i, pages, maxsize,
+						   maxpages, extraction_flags,
+						   offset0);
+	if (iov_iter_is_xarray(i))
+		return iov_iter_extract_xarray_pages(i, pages, maxsize,
+						     maxpages, extraction_flags,
+						     offset0);
+	return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(iov_iter_extract_pages);
ESMTP id C6B90C05027 for ; Thu, 9 Feb 2023 10:30:32 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E51D46B007D; Thu, 9 Feb 2023 05:30:29 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E00886B007E; Thu, 9 Feb 2023 05:30:29 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BDE006B0080; Thu, 9 Feb 2023 05:30:29 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id B048B6B007D for ; Thu, 9 Feb 2023 05:30:29 -0500 (EST) Received: from smtpin21.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 86BBDC046E for ; Thu, 9 Feb 2023 10:30:29 +0000 (UTC) X-FDA: 80447384178.21.9585DB6 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf06.hostedemail.com (Postfix) with ESMTP id BB2BB18000F for ; Thu, 9 Feb 2023 10:30:27 +0000 (UTC) Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="Z0mlKeO/"; spf=pass (imf06.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1675938627; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=AA3SDxqAw6tV+jeEhwhhXCCPQx7iw6NiGu/cAPT0aQk=; b=29+VBy0Rdck5elu5UjLKwe5D971cjp0YyjFGwpiy6hCTnddI9dihq5hDeuvWZcrjF8aMdG RAYo6niXPshfXxYvWWVeAQ5VFkbNxjbAZKmAHxAdzdcLkOmFWnH4jPNCRUcG+KQDsZY2gr WzvtLwObj4Vy7skV5z3tArGoppMC59Y= ARC-Authentication-Results: i=1; 
imf06.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="Z0mlKeO/"; spf=pass (imf06.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1675938627; a=rsa-sha256; cv=none; b=CQH2pSaFP6KhCCQ3JpY7xiN8gXotxW58KZspzEmPWNWpMJ8Giy8Tj1sX77Z2QBLv7x4fLk jq608ln8iE8nJ1O8d1V69V+u77V8e7tjQEOLGz7PSG00bQMEw50hZYfKvvOkZ/WW4t5Si8 988vBrp6mdw2kg8p9UnkmgkiFrwYWzo= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675938627; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=AA3SDxqAw6tV+jeEhwhhXCCPQx7iw6NiGu/cAPT0aQk=; b=Z0mlKeO/EjZSVCXGvuR3N+a7yjIuBqjPQWtZY0am8frZiUxlV2MRnYTk36uvcc8xDGKnmv rHZ+D2bMz2DD9Uh882VX7DXegSs+eTswPn142xevXn1R1vM9NN2N92dvA+yU4pxE+b4f6I lyMtK7qhNcR0Gb4HiMmAYaD0R9htn+E= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-480-gCLZSpzUOyqIbLen8hMYnQ-1; Thu, 09 Feb 2023 05:30:23 -0500 X-MC-Unique: gCLZSpzUOyqIbLen8hMYnQ-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7E0DF18E005F; Thu, 9 Feb 2023 10:30:17 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.33.36.24]) by smtp.corp.redhat.com (Postfix) with ESMTP id 986F41121314; Thu, 9 Feb 2023 10:30:15 +0000 (UTC) From: David Howells To: Jens Axboe , Al Viro , Christoph Hellwig Cc: David Howells , Matthew Wilcox , Jan Kara , Jeff Layton , David Hildenbrand , 
    Jason Gunthorpe, Logan Gunthorpe, Hillf Danton, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, John Hubbard
Subject: [PATCH v13 07/12] iomap: Don't get a reference on ZERO_PAGE for direct I/O block zeroing
Date: Thu, 9 Feb 2023 10:29:49 +0000
Message-Id: <20230209102954.528942-8-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
ZERO_PAGE can't go away, no need to hold an extra reference.

Signed-off-by: David Howells
Reviewed-by: David Hildenbrand
Reviewed-by: John Hubbard
cc: Al Viro
cc: David Hildenbrand
cc: linux-fsdevel@vger.kernel.org
---
 fs/iomap/direct-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 9804714b1751..47db4ead1e74 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -202,7 +202,7 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
 	bio->bi_private = dio;
 	bio->bi_end_io = iomap_dio_bio_end_io;
 
-	get_page(page);
+	bio_set_flag(bio, BIO_NO_PAGE_REF);
 	__bio_add_page(bio, page, len, 0);
 	iomap_dio_submit_bio(iter, dio, bio, pos);
 }

From patchwork Thu Feb 9 10:29:50 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134345
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig, John Hubbard
Subject: [PATCH v13 08/12] block: Fix bio_flagged() so that gcc can better optimise it
Date: Thu, 9 Feb 2023 10:29:50 +0000
Message-Id: <20230209102954.528942-9-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
Fix bio_flagged() so that multiple instances of it, such as:

	if (bio_flagged(bio, BIO_PAGE_REFFED) ||
	    bio_flagged(bio, BIO_PAGE_PINNED))

can be combined by the gcc optimiser into a single test in assembly
(arguably, this is a compiler optimisation issue[1]).

The missed optimisation stems from bio_flagged() comparing the result of
the bitwise-AND to zero.  This results in an out-of-line bio_release_page()
being compiled to something like:

   <+0>:  mov    0x14(%rdi),%eax
   <+3>:  test   $0x1,%al
   <+5>:  jne    0xffffffff816dac53
   <+7>:  test   $0x2,%al
   <+9>:  je     0xffffffff816dac5c
   <+11>: movzbl %sil,%esi
   <+15>: jmp    0xffffffff816daba1 <__bio_release_pages>
   <+20>: jmp    0xffffffff81d0b800 <__x86_return_thunk>

However, the test is superfluous as the return type is bool.  Removing it
results in:

   <+0>:  testb  $0x3,0x14(%rdi)
   <+4>:  je     0xffffffff816e4af4
   <+6>:  movzbl %sil,%esi
   <+10>: jmp    0xffffffff816dab7c <__bio_release_pages>
   <+15>: jmp    0xffffffff81d0b7c0 <__x86_return_thunk>

instead.

Also, the MOVZBL instruction looks unnecessary[2] - I think it's just
're-booling' the mark_dirty parameter.
Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
cc: Jens Axboe
cc: linux-block@vger.kernel.org
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108370 [1]
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108371 [2]
Link: https://lore.kernel.org/r/167391056756.2311931.356007731815807265.stgit@warthog.procyon.org.uk/ # v6
---
 include/linux/bio.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index c1da63f6c808..10366b8bdb13 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -227,7 +227,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
 
 static inline bool bio_flagged(struct bio *bio, unsigned int bit)
 {
-	return (bio->bi_flags & (1U << bit)) != 0;
+	return bio->bi_flags & (1U << bit);
 }
 
 static inline void bio_set_flag(struct bio *bio, unsigned int bit)

From patchwork Thu Feb 9 10:29:51 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134347
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig, John Hubbard
Subject: [PATCH v13 09/12] block: Replace BIO_NO_PAGE_REF with BIO_PAGE_REFFED with inverted logic
Date: Thu, 9 Feb 2023 10:29:51 +0000
Message-Id: <20230209102954.528942-10-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
From: Christoph Hellwig

Replace BIO_NO_PAGE_REF with a BIO_PAGE_REFFED flag that has the inverted
meaning and is only set when a page reference has been acquired that needs
to be released by bio_release_pages().
Signed-off-by: Christoph Hellwig
Signed-off-by: David Howells
Reviewed-by: John Hubbard
cc: Al Viro
cc: Jens Axboe
cc: Jan Kara
cc: Matthew Wilcox
cc: Logan Gunthorpe
cc: linux-block@vger.kernel.org
---

Notes:
    ver #8)
     - Split out from another patch [hch].
     - Don't default to BIO_PAGE_REFFED [hch].
    ver #5)
     - Split from patch that uses iov_iter_extract_pages().

 block/bio.c               | 2 +-
 block/blk-map.c           | 1 +
 fs/direct-io.c            | 2 ++
 fs/iomap/direct-io.c      | 1 -
 include/linux/bio.h       | 2 +-
 include/linux/blk_types.h | 2 +-
 6 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index b97f3991c904..bf9bf53232be 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1198,7 +1198,6 @@ void bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
 	bio->bi_io_vec = (struct bio_vec *)iter->bvec;
 	bio->bi_iter.bi_bvec_done = iter->iov_offset;
 	bio->bi_iter.bi_size = size;
-	bio_set_flag(bio, BIO_NO_PAGE_REF);
 	bio_set_flag(bio, BIO_CLONED);
 }
 
@@ -1343,6 +1342,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 		return 0;
 	}
 
+	bio_set_flag(bio, BIO_PAGE_REFFED);
 	do {
 		ret = __bio_iov_iter_get_pages(bio, iter);
 	} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
diff --git a/block/blk-map.c b/block/blk-map.c
index 080dd60485be..f1f70b50388d 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -282,6 +282,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 	if (blk_queue_pci_p2pdma(rq->q))
 		extraction_flags |= ITER_ALLOW_P2PDMA;
 
+	bio_set_flag(bio, BIO_PAGE_REFFED);
 	while (iov_iter_count(iter)) {
 		struct page **pages, *stack_pages[UIO_FASTIOV];
 		ssize_t bytes;
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 03d381377ae1..07810465fc9d 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -403,6 +403,8 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 		bio->bi_end_io = dio_bio_end_aio;
 	else
 		bio->bi_end_io = dio_bio_end_io;
+	/* for now require references for all pages */
+	bio_set_flag(bio, BIO_PAGE_REFFED);
 	sdio->bio = bio;
 	sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
 }
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 47db4ead1e74..c0e75900e754 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -202,7 +202,6 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
 	bio->bi_private = dio;
 	bio->bi_end_io = iomap_dio_bio_end_io;
 
-	bio_set_flag(bio, BIO_NO_PAGE_REF);
 	__bio_add_page(bio, page, len, 0);
 	iomap_dio_submit_bio(iter, dio, bio, pos);
 }
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 10366b8bdb13..805957c99147 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -484,7 +484,7 @@ void zero_fill_bio(struct bio *bio);
 
 static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
 {
-	if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+	if (bio_flagged(bio, BIO_PAGE_REFFED))
 		__bio_release_pages(bio, mark_dirty);
 }
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 99be590f952f..7daa261f4f98 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -318,7 +318,7 @@ struct bio {
  * bio flags
  */
 enum {
-	BIO_NO_PAGE_REF,	/* don't put release vec pages */
+	BIO_PAGE_REFFED,	/* put pages in bio_release_pages() */
 	BIO_CLONED,		/* doesn't own data */
 	BIO_BOUNCED,		/* bio is a bounce bio */
 	BIO_QUIET,		/* Make BIO Quiet */

From patchwork Thu Feb 9 10:29:52 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134348
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton, David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Christoph Hellwig, John Hubbard
Subject: [PATCH v13 10/12] block: Add BIO_PAGE_PINNED and associated infrastructure
Date: Thu, 9 Feb 2023 10:29:52 +0000
Message-Id: <20230209102954.528942-11-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
Add BIO_PAGE_PINNED to indicate that the pages in a bio are pinned
(FOLL_PIN) and that the pin will need removing.

Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
cc: Al Viro
cc: Jens Axboe
cc: Jan Kara
cc: Matthew Wilcox
cc: Logan Gunthorpe
cc: linux-block@vger.kernel.org
---

Notes:
    ver #10)
     - Drop bio_set_cleanup_mode(), open coding it instead.
    ver #9)
     - Only consider pinning in bio_set_cleanup_mode().  Ref'ing pages in
       struct bio is going away.
     - page_put_unpin() is removed; call unpin_user_page() and put_page()
       directly.
     - Use bio_release_page() in __bio_release_pages().
     - BIO_PAGE_PINNED and BIO_PAGE_REFFED can't both be set, so use if-else
       when testing both of them.
    ver #8)
     - Move the infrastructure to clean up pinned pages to this patch [hch].
     - Put BIO_PAGE_PINNED before BIO_PAGE_REFFED as the latter should
       probably be removed at some point.  FOLL_PIN can then be renumbered
       first.

 block/bio.c               |  6 +++---
 block/blk.h               | 12 ++++++++++++
 include/linux/bio.h       |  3 ++-
 include/linux/blk_types.h |  1 +
 4 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index bf9bf53232be..547e38883934 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1176,7 +1176,7 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
 	bio_for_each_segment_all(bvec, bio, iter_all) {
 		if (mark_dirty && !PageCompound(bvec->bv_page))
 			set_page_dirty_lock(bvec->bv_page);
-		put_page(bvec->bv_page);
+		bio_release_page(bio, bvec->bv_page);
 	}
 }
 EXPORT_SYMBOL_GPL(__bio_release_pages);
@@ -1496,8 +1496,8 @@ void bio_set_pages_dirty(struct bio *bio)
  * the BIO and re-dirty the pages in process context.
  *
  * It is expected that bio_check_pages_dirty() will wholly own the BIO from
- * here on.  It will run one put_page() against each page and will run one
- * bio_put() against the BIO.
+ * here on.  It will unpin each page and will run one bio_put() against the
+ * BIO.
  */
 
 static void bio_dirty_fn(struct work_struct *work);
diff --git a/block/blk.h b/block/blk.h
index 4c3b3325219a..f02381405311 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -425,6 +425,18 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page);
 
+/*
+ * Clean up a page appropriately, where the page may be pinned, may have a
+ * ref taken on it or neither.
+ */
+static inline void bio_release_page(struct bio *bio, struct page *page)
+{
+	if (bio_flagged(bio, BIO_PAGE_PINNED))
+		unpin_user_page(page);
+	else if (bio_flagged(bio, BIO_PAGE_REFFED))
+		put_page(page);
+}
+
 struct request_queue *blk_alloc_queue(int node_id);
 
 int disk_scan_partitions(struct gendisk *disk, fmode_t mode, void *owner);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 805957c99147..b2c09997d79c 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -484,7 +484,8 @@ void zero_fill_bio(struct bio *bio);
 
 static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
 {
-	if (bio_flagged(bio, BIO_PAGE_REFFED))
+	if (bio_flagged(bio, BIO_PAGE_REFFED) ||
+	    bio_flagged(bio, BIO_PAGE_PINNED))
 		__bio_release_pages(bio, mark_dirty);
 }
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 7daa261f4f98..a0e339ff3d09 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -318,6 +318,7 @@ struct bio {
  * bio flags
  */
 enum {
+	BIO_PAGE_PINNED,	/* Unpin pages in bio_release_pages() */
 	BIO_PAGE_REFFED,	/* put pages in bio_release_pages() */
 	BIO_CLONED,		/* doesn't own data */
 	BIO_BOUNCED,		/* bio is a bounce bio */

From patchwork Thu Feb 9 10:29:53 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134350
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton,
    David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Christoph Hellwig, John Hubbard
Subject: [PATCH v13 11/12] block: Convert bio_iov_iter_get_pages to use
 iov_iter_extract_pages
Date: Thu, 9 Feb 2023 10:29:53 +0000
Message-Id: <20230209102954.528942-12-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
MIME-Version: 1.0

This will pin pages or leave them unaltered rather than getting a ref on
them as appropriate to the iterator.

The pages need to be pinned for DIO rather than having refs taken on them
to prevent VM copy-on-write from malfunctioning during a concurrent fork()
(the result of the I/O could otherwise end up being affected by/visible to
the child process).

Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
cc: Al Viro
cc: Jens Axboe
cc: Jan Kara
cc: Matthew Wilcox
cc: Logan Gunthorpe
cc: linux-block@vger.kernel.org
---
Notes:
    ver #10)
     - Drop bio_set_cleanup_mode(), open coding it instead.

    ver #8)
     - Split the patch up a bit [hch].
     - We should only be using pinned/non-pinned pages and not ref'd pages,
       so adjust the comments appropriately.

    ver #7)
     - Don't treat BIO_PAGE_REFFED/PINNED as being the same as FOLL_GET/PIN.

    ver #5)
     - Transcribed the FOLL_* flags returned by iov_iter_extract_pages() to
       BIO_* flags and got rid of bi_cleanup_mode.
     - Replaced BIO_NO_PAGE_REF with BIO_PAGE_REFFED in the preceding patch.
 block/bio.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 547e38883934..fc57f0aa098e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1212,7 +1212,7 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
 	}

 	if (same_page)
-		put_page(page);
+		bio_release_page(bio, page);
 	return 0;
 }

@@ -1226,7 +1226,7 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
 			queue_max_zone_append_sectors(q), &same_page) != len)
 		return -EINVAL;
 	if (same_page)
-		put_page(page);
+		bio_release_page(bio, page);
 	return 0;
 }

@@ -1237,10 +1237,10 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
  * @bio: bio to add pages to
  * @iter: iov iterator describing the region to be mapped
  *
- * Pins pages from *iter and appends them to @bio's bvec array.  The
- * pages will have to be released using put_page() when done.
- * For multi-segment *iter, this function only adds pages from the
- * next non-empty segment of the iov iterator.
+ * Extracts pages from *iter and appends them to @bio's bvec array.  The pages
+ * will have to be cleaned up in the way indicated by the BIO_PAGE_PINNED flag.
+ * For a multi-segment *iter, this function only adds pages from the next
+ * non-empty segment of the iov iterator.
  */
 static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
@@ -1272,9 +1272,9 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	 * result to ensure the bio's total size is correct. The remainder of
 	 * the iov data will be picked up in the next bio iteration.
 	 */
-	size = iov_iter_get_pages(iter, pages,
-				  UINT_MAX - bio->bi_iter.bi_size,
-				  nr_pages, &offset, extraction_flags);
+	size = iov_iter_extract_pages(iter, &pages,
+				      UINT_MAX - bio->bi_iter.bi_size,
+				      nr_pages, extraction_flags, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
@@ -1307,7 +1307,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 		iov_iter_revert(iter, left);
 out:
 	while (i < nr_pages)
-		put_page(pages[i++]);
+		bio_release_page(bio, pages[i++]);
 	return ret;
 }

@@ -1342,7 +1342,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 		return 0;
 	}

-	bio_set_flag(bio, BIO_PAGE_REFFED);
+	if (iov_iter_extract_will_pin(iter))
+		bio_set_flag(bio, BIO_PAGE_PINNED);
 	do {
 		ret = __bio_iov_iter_get_pages(bio, iter);
 	} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));

From patchwork Thu Feb 9 10:29:54 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13134349
From: David Howells
To: Jens Axboe, Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jan Kara, Jeff Layton,
    David Hildenbrand, Jason Gunthorpe, Logan Gunthorpe, Hillf Danton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Christoph Hellwig, John Hubbard
Subject: [PATCH v13 12/12] block: convert bio_map_user_iov to use
 iov_iter_extract_pages
Date: Thu, 9 Feb 2023 10:29:54 +0000
Message-Id: <20230209102954.528942-13-dhowells@redhat.com>
In-Reply-To: <20230209102954.528942-1-dhowells@redhat.com>
References: <20230209102954.528942-1-dhowells@redhat.com>
MIME-Version: 1.0

This will pin pages or leave them unaltered rather than getting a ref on
them as appropriate to the iterator.
The pages need to be pinned for DIO rather than having refs taken on them
to prevent VM copy-on-write from malfunctioning during a concurrent fork()
(the result of the I/O could otherwise end up being visible to/affected by
the child process).

Signed-off-by: David Howells
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
cc: Al Viro
cc: Jens Axboe
cc: Jan Kara
cc: Matthew Wilcox
cc: Logan Gunthorpe
cc: linux-block@vger.kernel.org
---
Notes:
    ver #10)
     - Drop bio_set_cleanup_mode(), open coding it instead.

    ver #8)
     - Split the patch up a bit [hch].
     - We should only be using pinned/non-pinned pages and not ref'd pages,
       so adjust the comments appropriately.

    ver #7)
     - Don't treat BIO_PAGE_REFFED/PINNED as being the same as FOLL_GET/PIN.

    ver #5)
     - Transcribed the FOLL_* flags returned by iov_iter_extract_pages() to
       BIO_* flags and got rid of bi_cleanup_mode.
     - Replaced BIO_NO_PAGE_REF with BIO_PAGE_REFFED in the preceding patch.

 block/blk-map.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index f1f70b50388d..0f1593e144da 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -281,22 +281,21 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 	if (blk_queue_pci_p2pdma(rq->q))
 		extraction_flags |= ITER_ALLOW_P2PDMA;
+	if (iov_iter_extract_will_pin(iter))
+		bio_set_flag(bio, BIO_PAGE_PINNED);

-	bio_set_flag(bio, BIO_PAGE_REFFED);
 	while (iov_iter_count(iter)) {
-		struct page **pages, *stack_pages[UIO_FASTIOV];
+		struct page *stack_pages[UIO_FASTIOV];
+		struct page **pages = stack_pages;
 		ssize_t bytes;
 		size_t offs;
 		int npages;

-		if (nr_vecs <= ARRAY_SIZE(stack_pages)) {
-			pages = stack_pages;
-			bytes = iov_iter_get_pages(iter, pages, LONG_MAX,
-						   nr_vecs, &offs, extraction_flags);
-		} else {
-			bytes = iov_iter_get_pages_alloc(iter, &pages,
-						LONG_MAX, &offs, extraction_flags);
-		}
+		if (nr_vecs > ARRAY_SIZE(stack_pages))
+			pages = NULL;
+
+		bytes = iov_iter_extract_pages(iter, &pages, LONG_MAX,
+					       nr_vecs, extraction_flags, &offs);
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
 			goto out_unmap;
 		}
@@ -318,7 +317,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 			if (!bio_add_hw_page(rq->q, bio, page, n, offs,
 					     max_sectors, &same_page)) {
 				if (same_page)
-					put_page(page);
+					bio_release_page(bio, page);
 				break;
 			}

@@ -330,7 +329,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		 * release the pages we didn't map into the bio, if any
 		 */
 		while (j < npages)
-			put_page(pages[j++]);
+			bio_release_page(bio, pages[j++]);
 		if (pages != stack_pages)
 			kvfree(pages);
 		/* couldn't stuff something into bio? */