From patchwork Mon Dec 16 20:40:53 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13910424
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton,
    Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 03/32] netfs: Use folio_queue allocation and free functions
Date: Mon, 16 Dec 2024 20:40:53 +0000
Message-ID: <20241216204124.3752367-4-dhowells@redhat.com>
In-Reply-To: <20241216204124.3752367-1-dhowells@redhat.com>
References: <20241216204124.3752367-1-dhowells@redhat.com>

Provide and use folio_queue allocation and free functions to combine the
allocation, initialisation and stat (un)accounting steps that are repeated
in several places.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/buffered_read.c | 12 +++---------
 fs/netfs/misc.c          | 38 ++++++++++++++++++++++++++++++++++----
 include/linux/netfs.h    |  5 +++++
 3 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 4dc9b8286355..78b04763bed6 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -131,11 +131,9 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)
 	struct folio_queue *tail = rreq->buffer_tail, *new;
 	size_t added;
 
-	new = kmalloc(sizeof(*new), GFP_NOFS);
+	new = netfs_folioq_alloc(GFP_NOFS);
 	if (!new)
 		return -ENOMEM;
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(new);
 	new->prev = tail;
 	tail->next = new;
 	rreq->buffer_tail = new;
@@ -363,11 +361,9 @@ static int netfs_prime_buffer(struct netfs_io_request *rreq)
 	struct folio_batch put_batch;
 	size_t added;
 
-	folioq = kmalloc(sizeof(*folioq), GFP_KERNEL);
+	folioq = netfs_folioq_alloc(GFP_KERNEL);
 	if (!folioq)
 		return -ENOMEM;
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(folioq);
 	rreq->buffer = folioq;
 	rreq->buffer_tail = folioq;
 	rreq->submitted = rreq->start;
@@ -440,12 +436,10 @@ static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct fo
 {
 	struct folio_queue *folioq;
 
-	folioq = kmalloc(sizeof(*folioq), GFP_KERNEL);
+	folioq = netfs_folioq_alloc(GFP_KERNEL);
 	if (!folioq)
 		return -ENOMEM;
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(folioq);
 
 	folioq_append(folioq, folio);
 	BUG_ON(folioq_folio(folioq, 0) != folio);
 	BUG_ON(folioq_folio_order(folioq, 0) != folio_order(folio));
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 78fe5796b2b2..6cd7e1ee7a14 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,38 @@
 #include <linux/swap.h>
 #include "internal.h"
 
+/**
+ * netfs_folioq_alloc - Allocate a folio_queue struct
+ * @gfp: Allocation constraints
+ *
+ * Allocate, initialise and account the folio_queue struct.
+ */
+struct folio_queue *netfs_folioq_alloc(gfp_t gfp)
+{
+	struct folio_queue *fq;
+
+	fq = kmalloc(sizeof(*fq), gfp);
+	if (fq) {
+		netfs_stat(&netfs_n_folioq);
+		folioq_init(fq);
+	}
+	return fq;
+}
+EXPORT_SYMBOL(netfs_folioq_alloc);
+
+/**
+ * netfs_folioq_free - Free a folio_queue struct
+ * @folioq: The object to free
+ *
+ * Free and unaccount the folio_queue struct.
+ */
+void netfs_folioq_free(struct folio_queue *folioq)
+{
+	netfs_stat_d(&netfs_n_folioq);
+	kfree(folioq);
+}
+EXPORT_SYMBOL(netfs_folioq_free);
+
 /*
  * Make sure there's space in the rolling queue.
  */
@@ -87,8 +119,7 @@ struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq)
 
 	if (next)
 		next->prev = NULL;
-	netfs_stat_d(&netfs_n_folioq);
-	kfree(head);
+	netfs_folioq_free(head);
 	wreq->buffer = next;
 	return next;
 }
@@ -111,8 +142,7 @@ void netfs_clear_buffer(struct netfs_io_request *rreq)
 				folio_put(folio);
 			}
 		}
-		netfs_stat_d(&netfs_n_folioq);
-		kfree(p);
+		netfs_folioq_free(p);
 	}
 }
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index ecdd5ced16a8..c69e0f02c30f 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -21,6 +21,7 @@ enum netfs_sreq_ref_trace;
 typedef struct mempool_s mempool_t;
+struct folio_queue;
 
 /**
  * folio_start_private_2 - Start an fscache write on a folio. [DEPRECATED]
@@ -453,6 +454,10 @@ void netfs_end_io_write(struct inode *inode);
 int netfs_start_io_direct(struct inode *inode);
 void netfs_end_io_direct(struct inode *inode);
 
+/* Miscellaneous APIs. */
+struct folio_queue *netfs_folioq_alloc(gfp_t gfp);
+void netfs_folioq_free(struct folio_queue *folioq);
+
 /**
  * netfs_inode - Get the netfs inode context from the inode
  * @inode: The inode to query
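
For illustration only, not part of the patch: a minimal caller sketch, using
hypothetical example_extend_buffer() and example_pop_tail() helpers, showing
how the two new functions pair up. It mirrors the netfs_prepare_read_iterator()
conversion above: netfs_folioq_alloc() rolls the kmalloc(), folioq_init() and
netfs_stat(&netfs_n_folioq) steps into one call, and netfs_folioq_free() pairs
the kfree() with the matching netfs_stat_d() decrement.

#include <linux/netfs.h>

/*
 * Hypothetical example: append one extra folio_queue segment to a
 * request's rolling buffer using the new helper.
 */
static int example_extend_buffer(struct netfs_io_request *rreq)
{
	struct folio_queue *tail = rreq->buffer_tail, *new;

	/* Allocates, initialises and accounts the segment in one call. */
	new = netfs_folioq_alloc(GFP_NOFS);
	if (!new)
		return -ENOMEM;

	/* Link the new segment onto the end of the rolling queue. */
	new->prev = tail;
	tail->next = new;
	rreq->buffer_tail = new;
	return 0;
}

/*
 * Hypothetical counterpart: unhook the tail segment, then unaccount
 * (netfs_stat_d) and kfree() it via the single free helper.
 */
static void example_pop_tail(struct netfs_io_request *rreq)
{
	struct folio_queue *tail = rreq->buffer_tail;

	rreq->buffer_tail = tail->prev;
	if (tail->prev)
		tail->prev->next = NULL;
	netfs_folioq_free(tail);
}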