From patchwork Fri Oct 13 16:03:51 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421164
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 22/53] netfs: Prep to use folio->private for write grouping and streaming write
Date: Fri, 13 Oct 2023 17:03:51 +0100
Message-ID: <20231013160423.2218093-23-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
MIME-Version: 1.0
Prepare to use folio->private to hold information for write grouping and
streaming write.  These are implemented in the same commit as they both make
use of folio->private and will both be checked at the same time in several
places.

"Write grouping" involves ordering the writeback of groups of writes, such as
is needed for ceph snaps.  A group is represented by a filesystem-supplied
object which must contain a netfs_group struct.  This contains just a refcount
and a pointer to a destructor.
"Streaming write" is the storage of data in folios that are marked dirty, but
not uptodate, to avoid unnecessary reads of data.  This is represented by a
netfs_folio struct.  This contains the offset and length of the modified
region plus the otherwise displaced write grouping pointer.

The way folio->private is multiplexed is:

 (1) If private is NULL then neither is in operation on a dirty folio.

 (2) If private is set, with bit 0 clear, then this points to a group.

 (3) If private is set, with bit 0 set, then this points to a netfs_folio
     struct (with bit 0 AND'ed out).

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/internal.h   | 28 ++++++++++++++++++++++++++
 fs/netfs/misc.c       | 46 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h | 41 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 115 insertions(+)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 46183dad4d50..83418a918ee1 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -147,6 +147,34 @@ static inline bool netfs_is_cache_enabled(struct netfs_inode *ctx)
 #endif
 }
 
+/*
+ * Get a ref on a netfs group attached to a dirty page (e.g. a ceph snap).
+ */
+static inline struct netfs_group *netfs_get_group(struct netfs_group *netfs_group)
+{
+	if (netfs_group)
+		refcount_inc(&netfs_group->ref);
+	return netfs_group;
+}
+
+/*
+ * Dispose of a netfs group attached to a dirty page (e.g. a ceph snap).
+ */
+static inline void netfs_put_group(struct netfs_group *netfs_group)
+{
+	if (netfs_group && refcount_dec_and_test(&netfs_group->ref))
+		netfs_group->free(netfs_group);
+}
+
+/*
+ * Dispose of refs on a netfs group attached to dirty pages (e.g. a ceph snap).
+ */
+static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
+{
+	if (netfs_group && refcount_sub_and_test(nr, &netfs_group->ref))
+		netfs_group->free(netfs_group);
+}
+
 /*****************************************************************************/
 /*
  * debug tracing
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index c70f856f3129..8a2a56f1f623 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -159,9 +159,55 @@ void netfs_clear_buffer(struct xarray *buffer)
  */
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 {
+	struct netfs_folio *finfo = NULL;
+	size_t flen = folio_size(folio);
+
 	_enter("{%lx},%zx,%zx", folio_index(folio), offset, length);
 
 	folio_wait_fscache(folio);
+
+	if (!folio_test_private(folio))
+		return;
+
+	finfo = netfs_folio_info(folio);
+
+	if (offset == 0 && length >= flen)
+		goto erase_completely;
+
+	if (finfo) {
+		/* We have a partially uptodate page from a streaming write. */
+		unsigned int fstart = finfo->dirty_offset;
+		unsigned int fend = fstart + finfo->dirty_len;
+		unsigned int end = offset + length;
+
+		if (offset >= fend)
+			return;
+		if (end <= fstart)
+			return;
+		if (offset <= fstart && end >= fend)
+			goto erase_completely;
+		if (offset <= fstart && end > fstart)
+			goto reduce_len;
+		if (offset > fstart && end >= fend)
+			goto move_start;
+		/* A partial write was split.  The caller has already zeroed
+		 * it, so just absorb the hole.
+		 */
+	}
+	return;
+
+erase_completely:
+	netfs_put_group(netfs_folio_group(folio));
+	folio_detach_private(folio);
+	folio_clear_uptodate(folio);
+	kfree(finfo);
+	return;
+reduce_len:
+	finfo->dirty_len = offset + length - finfo->dirty_offset;
+	return;
+move_start:
+	finfo->dirty_len -= offset - finfo->dirty_offset;
+	finfo->dirty_offset = offset;
 }
 EXPORT_SYMBOL(netfs_invalidate_folio);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 39b3eeefa03c..11a073506f98 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -142,6 +142,47 @@ struct netfs_inode {
 #define NETFS_ICTX_ODIRECT	0		/* The file has DIO in progress */
 };
 
+/*
+ * A netfs group - for instance a ceph snap.  This is marked on dirty pages and
+ * pages marked with a group must be flushed before they can be written under
+ * the domain of another group.
+ */
+struct netfs_group {
+	refcount_t		ref;
+	void (*free)(struct netfs_group *netfs_group);
+};
+
+/*
+ * Information about a dirty page (attached only if necessary).
+ * folio->private
+ */
+struct netfs_folio {
+	struct netfs_group	*netfs_group;	/* Filesystem's grouping marker (or NULL). */
+	unsigned int		dirty_offset;	/* Write-streaming dirty data offset */
+	unsigned int		dirty_len;	/* Write-streaming dirty data length */
+};
+#define NETFS_FOLIO_INFO	0x1UL	/* OR'd with folio->private. */
+
+static inline struct netfs_folio *netfs_folio_info(struct folio *folio)
+{
+	void *priv = folio_get_private(folio);
+
+	if ((unsigned long)priv & NETFS_FOLIO_INFO)
+		return (struct netfs_folio *)((unsigned long)priv & ~NETFS_FOLIO_INFO);
+	return NULL;
+}
+
+static inline struct netfs_group *netfs_folio_group(struct folio *folio)
+{
+	struct netfs_folio *finfo;
+	void *priv = folio_get_private(folio);
+
+	finfo = netfs_folio_info(folio);
+	if (finfo)
+		return finfo->netfs_group;
+	return priv;
+}
+
 /*
  * Resources required to do operations on a cache.
  */