From patchwork Wed Nov 6 12:35:44 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13864788
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
	Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
	Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
	linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 20/33] netfs: Add functions to build/clean a buffer in a folio_queue
Date: Wed, 6 Nov 2024 12:35:44 +0000
Message-ID: <20241106123559.724888-21-dhowells@redhat.com>
In-Reply-To: <20241106123559.724888-1-dhowells@redhat.com>
References: <20241106123559.724888-1-dhowells@redhat.com>
MIME-Version: 1.0
Add two netfslib functions to build up or clean up a buffer in a
folio_queue.

The first, netfs_alloc_folioq_buffer(), adds folios to a buffer, extending
it to at least the given size.  If it can, it will add multipage folios.
The folios optionally have the mapping set and have their index set
according to the distance from the front of the folio queue.

The second, netfs_free_folioq_buffer(), frees up a folio queue and puts
any folios in the queue that have the first mark set.

The netfs_folio tracepoint is also altered to cope with folios that have a
NULL mapping, and the folios being added/put will have trace lines emitted
and will be accounted in the stats.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: Marc Dionne
cc: netfs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/misc.c              | 95 ++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h        |  6 +++
 include/trace/events/netfs.h |  6 +--
 3 files changed, 103 insertions(+), 4 deletions(-)

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 4249715f4171..01a6ba0e2f82 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,101 @@
 #include <linux/swap.h>
 #include "internal.h"
 
+/**
+ * netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue
+ * @mapping: Address space to set on the folio (or NULL).
+ * @_buffer: Pointer to the folio queue to add to (may point to a NULL; updated).
+ * @_cur_size: Current size of the buffer (updated).
+ * @size: Target size of the buffer.
+ * @gfp: The allocation constraints.
+ */
+int netfs_alloc_folioq_buffer(struct address_space *mapping,
+			      struct folio_queue **_buffer,
+			      size_t *_cur_size, ssize_t size, gfp_t gfp)
+{
+	struct folio_queue *tail = *_buffer, *p;
+
+	size = round_up(size, PAGE_SIZE);
+	if (*_cur_size >= size)
+		return 0;
+
+	if (tail)
+		while (tail->next)
+			tail = tail->next;
+
+	do {
+		struct folio *folio;
+		int order = 0, slot;
+
+		if (!tail || folioq_full(tail)) {
+			p = netfs_folioq_alloc(0, GFP_NOFS, netfs_trace_folioq_alloc_buffer);
+			if (!p)
+				return -ENOMEM;
+			if (tail) {
+				tail->next = p;
+				p->prev = tail;
+			} else {
+				*_buffer = p;
+			}
+			tail = p;
+		}
+
+		if (size - *_cur_size > PAGE_SIZE)
+			order = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,
+				     MAX_PAGECACHE_ORDER);
+
+		folio = folio_alloc(gfp, order);
+		if (!folio && order > 0)
+			folio = folio_alloc(gfp, 0);
+		if (!folio)
+			return -ENOMEM;
+
+		folio->mapping = mapping;
+		folio->index = *_cur_size / PAGE_SIZE;
+		trace_netfs_folio(folio, netfs_folio_trace_alloc_buffer);
+		slot = folioq_append_mark(tail, folio);
+		*_cur_size += folioq_folio_size(tail, slot);
+	} while (*_cur_size < size);
+
+	return 0;
+}
+EXPORT_SYMBOL(netfs_alloc_folioq_buffer);
+
+/**
+ * netfs_free_folioq_buffer - Free a folio queue.
+ * @fq: The start of the folio queue to free
+ *
+ * Free up a chain of folio_queues and, if marked, the marked folios they point
+ * to.
+ */
+void netfs_free_folioq_buffer(struct folio_queue *fq)
+{
+	struct folio_queue *next;
+	struct folio_batch fbatch;
+
+	folio_batch_init(&fbatch);
+
+	for (; fq; fq = next) {
+		for (int slot = 0; slot < folioq_count(fq); slot++) {
+			struct folio *folio = folioq_folio(fq, slot);
+			if (!folio ||
+			    !folioq_is_marked(fq, slot))
+				continue;
+
+			trace_netfs_folio(folio, netfs_folio_trace_put);
+			if (folio_batch_add(&fbatch, folio))
+				folio_batch_release(&fbatch);
+		}
+
+		netfs_stat_d(&netfs_n_folioq);
+		next = fq->next;
+		kfree(fq);
+	}
+
+	folio_batch_release(&fbatch);
+}
+EXPORT_SYMBOL(netfs_free_folioq_buffer);
+
 /*
  * Reset the subrequest iterator to refer just to the region remaining to be
  * read.  The iterator may or may not have been advanced by socket ops or
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 738c9c8763f0..921cfcfc62f1 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -458,6 +458,12 @@ struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
 void netfs_folioq_free(struct folio_queue *folioq,
 		       unsigned int /*enum netfs_trace_folioq*/ trace);
 
+/* Buffer wrangling helpers API. */
+int netfs_alloc_folioq_buffer(struct address_space *mapping,
+			      struct folio_queue **_buffer,
+			      size_t *_cur_size, ssize_t size, gfp_t gfp);
+void netfs_free_folioq_buffer(struct folio_queue *fq);
+
 /**
  * netfs_inode - Get the netfs inode context from the inode
  * @inode: The inode to query
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 7c3c866ae183..167c89bc62e0 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -155,6 +155,7 @@
 	EM(netfs_streaming_filled_page,		"mod-streamw-f")	\
 	EM(netfs_streaming_cont_filled_page,	"mod-streamw-f+")	\
 	EM(netfs_folio_trace_abandon,		"abandon")		\
+	EM(netfs_folio_trace_alloc_buffer,	"alloc-buf")		\
 	EM(netfs_folio_trace_cancel_copy,	"cancel-copy")		\
 	EM(netfs_folio_trace_cancel_store,	"cancel-store")		\
 	EM(netfs_folio_trace_clear,		"clear")		\
@@ -195,10 +196,7 @@
 	E_(netfs_trace_donate_to_deferred_next,	"defer-next")
 
 #define netfs_folioq_traces \
-	EM(netfs_trace_folioq_alloc_append_folio, "alloc-apf") \
-	EM(netfs_trace_folioq_alloc_read_prep,	"alloc-r-prep") \
-	EM(netfs_trace_folioq_alloc_read_prime,	"alloc-r-prime") \
-	EM(netfs_trace_folioq_alloc_read_sing,	"alloc-r-sing") \
+	EM(netfs_trace_folioq_alloc_buffer,	"alloc-buf") \
 	EM(netfs_trace_folioq_clear,		"clear") \
 	EM(netfs_trace_folioq_delete,		"delete") \
 	EM(netfs_trace_folioq_make_space,	"make-space") \
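
[Editorial note: the sketch below is not part of the patch.  It is a minimal,
hypothetical caller-side example of how the two helpers declared above in
include/linux/netfs.h are intended to be paired; the function name
my_build_and_drop_buffer and the choice of GFP_NOFS are illustrative only.]

/* Hypothetical usage sketch: build an anonymous folio_queue buffer covering
 * at least `size` bytes, then tear it down again.  Only the two helpers
 * added by this patch are assumed.
 */
#include <linux/netfs.h>

static int my_build_and_drop_buffer(size_t size)
{
	struct folio_queue *buffer = NULL;	/* helper allocates the first segment */
	size_t cur_size = 0;			/* running buffer size, updated by the helper */
	int ret;

	/* Grow the queue until it covers at least `size` bytes.  With a NULL
	 * mapping the folios stay anonymous; each folio->index is set to its
	 * byte offset in the buffer divided by PAGE_SIZE.
	 */
	ret = netfs_alloc_folioq_buffer(NULL, &buffer, &cur_size, size, GFP_NOFS);
	if (ret < 0) {
		/* Frees whatever was built before the failure (NULL is handled). */
		netfs_free_folioq_buffer(buffer);
		return ret;
	}

	/* ... use the buffer as backing store for I/O here ... */

	/* Free the folio_queue segments and put the folios: the allocator
	 * appended them with folioq_append_mark(), so each carries the first
	 * mark that netfs_free_folioq_buffer() checks before putting.
	 */
	netfs_free_folioq_buffer(buffer);
	return 0;
}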