From patchwork Fri Oct 13 16:03:30 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421143
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 01/53] netfs: Add a procfile to list in-progress requests
Date: Fri, 13 Oct 2023 17:03:30 +0100
Message-ID: <20231013160423.2218093-2-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Add a procfile, /proc/fs/netfs/requests, to list in-progress netfslib I/O
requests.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/internal.h   | 22 +++++++++++
 fs/netfs/main.c       | 91 +++++++++++++++++++++++++++++++++++++++++++
 fs/netfs/objects.c    |  4 +-
 include/linux/netfs.h |  6 ++-
 4 files changed, 121 insertions(+), 2 deletions(-)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 43fac1b14e40..1f067aa96c50 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -29,6 +29,28 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync);
  * main.c
  */
 extern unsigned int netfs_debug;
+extern struct list_head netfs_io_requests;
+extern spinlock_t netfs_proc_lock;
+
+#ifdef CONFIG_PROC_FS
+static inline void netfs_proc_add_rreq(struct netfs_io_request *rreq)
+{
+	spin_lock(&netfs_proc_lock);
+	list_add_tail_rcu(&rreq->proc_link, &netfs_io_requests);
+	spin_unlock(&netfs_proc_lock);
+}
+static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq)
+{
+	if (!list_empty(&rreq->proc_link)) {
+		spin_lock(&netfs_proc_lock);
+		list_del_rcu(&rreq->proc_link);
+		spin_unlock(&netfs_proc_lock);
+	}
+}
+#else
+static inline void netfs_proc_add_rreq(struct netfs_io_request *rreq) {}
+static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
+#endif

 /*
  * objects.c
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 068568702957..21f814eee6af 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -7,6 +7,8 @@
 #include <linux/module.h>
 #include <linux/export.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
 #include "internal.h"
 #define CREATE_TRACE_POINTS
 #include <trace/events/netfs.h>
@@ -18,3 +20,92 @@ MODULE_LICENSE("GPL");
 unsigned netfs_debug;
 module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
 MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");
+
+#ifdef CONFIG_PROC_FS
+LIST_HEAD(netfs_io_requests);
+DEFINE_SPINLOCK(netfs_proc_lock);
+
+static const char *netfs_origins[] = {
+	[NETFS_READAHEAD]	= "RA",
+	[NETFS_READPAGE]	= "RP",
+	[NETFS_READ_FOR_WRITE]	= "RW",
+};
+
+/*
+ * Generate a list of I/O requests in /proc/fs/netfs/requests
+ */
+static int netfs_requests_seq_show(struct seq_file *m, void *v)
+{
+	struct netfs_io_request *rreq;
+
+	if (v == &netfs_io_requests) {
+		seq_puts(m,
+			 "REQUEST  OR REF FL ERR  OPS COVERAGE\n"
+			 "======== == === == ==== === =========\n"
+			 );
+		return 0;
+	}
+
+	rreq = list_entry(v, struct netfs_io_request, proc_link);
+	seq_printf(m,
+		   "%08x %s %3d %2lx %4d %3d @%04llx %zx/%zx",
+		   rreq->debug_id,
+		   netfs_origins[rreq->origin],
+		   refcount_read(&rreq->ref),
+		   rreq->flags,
+		   rreq->error,
+		   atomic_read(&rreq->nr_outstanding),
+		   rreq->start, rreq->submitted, rreq->len);
+	seq_putc(m, '\n');
+	return 0;
+}
+
+static void *netfs_requests_seq_start(struct seq_file *m, loff_t *_pos)
+	__acquires(rcu)
+{
+	rcu_read_lock();
+	return seq_list_start_head(&netfs_io_requests, *_pos);
+}
+
+static void *netfs_requests_seq_next(struct seq_file *m, void *v, loff_t *_pos)
+{
+	return seq_list_next(v, &netfs_io_requests, _pos);
+}
+
+static void netfs_requests_seq_stop(struct seq_file *m, void *v)
+	__releases(rcu)
+{
+	rcu_read_unlock();
+}
+
+static const struct seq_operations netfs_requests_seq_ops = {
+	.start  = netfs_requests_seq_start,
+	.next   = netfs_requests_seq_next,
+	.stop   = netfs_requests_seq_stop,
+	.show   = netfs_requests_seq_show,
+};
+#endif /* CONFIG_PROC_FS */
+
+static int __init netfs_init(void)
+{
+	if (!proc_mkdir("fs/netfs", NULL))
+		goto error;
+
+	if (!proc_create_seq("fs/netfs/requests", S_IFREG | 0444, NULL,
+			     &netfs_requests_seq_ops))
+		goto error_proc;
+
+	return 0;
+
+error_proc:
+	remove_proc_entry("fs/netfs", NULL);
+error:
+	return -ENOMEM;
+}
+fs_initcall(netfs_init);
+
+static void __exit netfs_exit(void)
+{
+	remove_proc_entry("fs/netfs", NULL);
+}
+module_exit(netfs_exit);
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index e17cdf53f6a7..85f428fc52e6 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -45,6 +45,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 		}
 	}

+	netfs_proc_add_rreq(rreq);
 	netfs_stat(&netfs_n_rh_rreq);
 	return rreq;
 }
@@ -76,12 +77,13 @@ static void netfs_free_request(struct work_struct *work)
 		container_of(work, struct netfs_io_request, work);

 	trace_netfs_rreq(rreq, netfs_rreq_trace_free);
+	netfs_proc_del_rreq(rreq);
 	netfs_clear_subrequests(rreq, false);
 	if (rreq->netfs_ops->free_request)
 		rreq->netfs_ops->free_request(rreq);
 	if (rreq->cache_resources.ops)
 		rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
-	kfree(rreq);
+	kfree_rcu(rreq, rcu);
 	netfs_stat_d(&netfs_n_rh_rreq);
 }
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b11a84f6c32b..b447cb67f599 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -175,10 +175,14 @@ enum netfs_io_origin {
  * operations to a variety of data stores and then stitch the result together.
  */
 struct netfs_io_request {
-	struct work_struct work;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 	struct inode		*inode;		/* The file being accessed */
 	struct address_space	*mapping;	/* The mapping being accessed */
 	struct netfs_cache_resources cache_resources;
+	struct list_head	proc_link;	/* Link in netfs_iorequests */
 	struct list_head	subrequests;	/* Contributory I/O operations */
 	void			*netfs_priv;	/* Private data for the netfs */
 	unsigned int		debug_id;

From patchwork Fri Oct 13 16:03:31 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421144
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Subject: [RFC PATCH 02/53] netfs: Track the fpos above which the server has no data
Date: Fri, 13 Oct 2023 17:03:31 +0100
Message-ID: <20231013160423.2218093-3-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Track the file position above which the server is not expected to have any
data and preemptively assume that we can simply fill blocks with zeroes
locally rather than attempting to download them - even if we've written data
back to the server.  Assume that any data that was written back above that
position is held in the local cache.  Call this the "zero point".

Make use of this to optimise away some reads from the server.  We need to
set the zero point in the following circumstances:

 (1) When we see an extant remote inode and have no cache for it, we set
     the zero_point to i_size.

 (2) On local inode creation, we set zero_point to 0.

 (3) On local truncation down, we reduce zero_point to the new i_size if
     the new i_size is lower.

 (4) On local truncation up, we don't change zero_point.

 (5) On local modification, we don't change zero_point.

 (6) On remote invalidation, we set zero_point to the new i_size.

 (7) If stored data is culled from the local cache, we must set zero_point
     above that if the data also got written to the server.

 (8) If dirty data is written back to the server, but not the local cache,
     we must set zero_point above that.

Assuming the above, any read from the server at or above the zero_point
position will return all zeroes.

The zero_point value can be stored in the cache, provided the above rules
are applied to it by any code that culls part of the local cache.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/afs/inode.c           | 13 +++++++------
 fs/netfs/buffered_read.c | 40 +++++++++++++++++++++++++---------------
 include/linux/netfs.h    |  5 +++++
 3 files changed, 37 insertions(+), 21 deletions(-)

diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index 1c794a1896aa..46bc5574d6f5 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -252,6 +252,7 @@ static void afs_apply_status(struct afs_operation *op,
 	vnode->netfs.remote_i_size = status->size;
 	if (change_size) {
 		afs_set_i_size(vnode, status->size);
+		vnode->netfs.zero_point = status->size;
 		inode_set_ctime_to_ts(inode, t);
 		inode->i_atime = t;
 	}
@@ -865,17 +866,17 @@ static void afs_setattr_success(struct afs_operation *op)
 static void afs_setattr_edit_file(struct afs_operation *op)
 {
 	struct afs_vnode_param *vp = &op->file[0];
-	struct inode *inode = &vp->vnode->netfs.inode;
+	struct afs_vnode *vnode = vp->vnode;

 	if (op->setattr.attr->ia_valid & ATTR_SIZE) {
 		loff_t size = op->setattr.attr->ia_size;
 		loff_t i_size = op->setattr.old_i_size;

-		if (size < i_size)
-			truncate_pagecache(inode, size);
-		if (size != i_size)
-			fscache_resize_cookie(afs_vnode_cache(vp->vnode),
-					      vp->scb.status.size);
+		if (size != i_size) {
+			truncate_pagecache(&vnode->netfs.inode, size);
+			netfs_resize_file(&vnode->netfs, size);
+			fscache_resize_cookie(afs_vnode_cache(vnode), size);
+		}
 	}
 }
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 2cd3ccf4c439..a2852fa64ad0 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -147,6 +147,22 @@ static void netfs_rreq_expand(struct netfs_io_request *rreq,
 	}
 }

+/*
+ * Begin an operation, and fetch the stored zero point value from the cookie if
+ * available.
+ */
+static int netfs_begin_cache_operation(struct netfs_io_request *rreq,
+				       struct netfs_inode *ctx)
+{
+	int ret = -ENOBUFS;
+
+	if (ctx->ops->begin_cache_operation) {
+		ret = ctx->ops->begin_cache_operation(rreq);
+		/* TODO: Get the zero point value from the cache */
+	}
+	return ret;
+}
+
 /**
  * netfs_readahead - Helper to manage a read request
  * @ractl: The description of the readahead request
@@ -180,11 +196,9 @@ void netfs_readahead(struct readahead_control *ractl)
 	if (IS_ERR(rreq))
 		return;

-	if (ctx->ops->begin_cache_operation) {
-		ret = ctx->ops->begin_cache_operation(rreq);
-		if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
-			goto cleanup_free;
-	}
+	ret = netfs_begin_cache_operation(rreq, ctx);
+	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
+		goto cleanup_free;

 	netfs_stat(&netfs_n_rh_readahead);
 	trace_netfs_read(rreq, readahead_pos(ractl), readahead_length(ractl),
@@ -238,11 +252,9 @@ int netfs_read_folio(struct file *file, struct folio *folio)
 		goto alloc_error;
 	}

-	if (ctx->ops->begin_cache_operation) {
-		ret = ctx->ops->begin_cache_operation(rreq);
-		if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
-			goto discard;
-	}
+	ret = netfs_begin_cache_operation(rreq, ctx);
+	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
+		goto discard;

 	netfs_stat(&netfs_n_rh_readpage);
 	trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);
@@ -390,11 +402,9 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	rreq->no_unlock_folio	= folio_index(folio);
 	__set_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags);

-	if (ctx->ops->begin_cache_operation) {
-		ret = ctx->ops->begin_cache_operation(rreq);
-		if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
-			goto error_put;
-	}
+	ret = netfs_begin_cache_operation(rreq, ctx);
+	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
+		goto error_put;

 	netfs_stat(&netfs_n_rh_write_begin);
 	trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b447cb67f599..282511090ead 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -129,6 +129,8 @@ struct netfs_inode {
 	struct fscache_cookie	*cache;
 #endif
 	loff_t			remote_i_size;	/* Size of the remote file */
+	loff_t			zero_point;	/* Size after which we assume there's
+						 * no data on the server */
 };

 /*
@@ -330,6 +332,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
 {
 	ctx->ops = ops;
 	ctx->remote_i_size = i_size_read(&ctx->inode);
+	ctx->zero_point = ctx->remote_i_size;
 #if IS_ENABLED(CONFIG_FSCACHE)
 	ctx->cache = NULL;
 #endif
@@ -345,6 +348,8 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
 static inline void netfs_resize_file(struct netfs_inode *ctx, loff_t new_i_size)
 {
 	ctx->remote_i_size = new_i_size;
+	if (new_i_size < ctx->zero_point)
+		ctx->zero_point = new_i_size;
 }

 /**

From patchwork Fri Oct 13 16:03:32 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421146
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Subject: [RFC PATCH 03/53] netfs: Note nonblockingness in the netfs_io_request struct
Date: Fri, 13 Oct 2023 17:03:32 +0100
Message-ID: <20231013160423.2218093-4-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Allow O_NONBLOCK to be noted in the netfs_io_request struct.  Also add a
flag, NETFS_RREQ_BLOCKED, to record if we did block.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/objects.c    | 2 ++
 include/linux/netfs.h | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 85f428fc52e6..e41f9fc9bdd2 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -37,6 +37,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	INIT_LIST_HEAD(&rreq->subrequests);
 	refcount_set(&rreq->ref, 1);
 	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+	if (file && file->f_flags & O_NONBLOCK)
+		__set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags);
 	if (rreq->netfs_ops->init_request) {
 		ret = rreq->netfs_ops->init_request(rreq, file);
 		if (ret < 0) {
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 282511090ead..b92e982ac4a0 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -205,6 +205,8 @@ struct netfs_io_request {
 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS	3	/* Don't unlock the folios on completion */
 #define NETFS_RREQ_FAILED		4	/* The request failed */
 #define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
+#define NETFS_RREQ_NONBLOCK		6	/* Don't block if possible (O_NONBLOCK) */
+#define NETFS_RREQ_BLOCKED		7	/* We blocked */
 	const struct netfs_request_ops *netfs_ops;
 };

From patchwork Fri Oct 13 16:03:33 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421145
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 04/53] netfs: Allow the netfs to make the io (sub)request alloc larger
Date: Fri, 13 Oct 2023 17:03:33 +0100
Message-ID: <20231013160423.2218093-5-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Allow the network filesystem to specify extra space to be allocated on the end of the io (sub)request. This allows cifs, for example, to use this space rather than allocating its own cifs_readdata struct.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/objects.c    | 7 +++++--
 include/linux/netfs.h | 2 ++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index e41f9fc9bdd2..2f1865ff7cce 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -22,7 +22,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	struct netfs_io_request *rreq;
 	int ret;
 
-	rreq = kzalloc(sizeof(struct netfs_io_request), GFP_KERNEL);
+	rreq = kzalloc(ctx->ops->io_request_size ?: sizeof(struct netfs_io_request),
+		       GFP_KERNEL);
 	if (!rreq)
 		return ERR_PTR(-ENOMEM);
 
@@ -116,7 +117,9 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq
 {
 	struct netfs_io_subrequest *subreq;
 
-	subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL);
+	subreq = kzalloc(rreq->netfs_ops->io_subrequest_size ?:
+			 sizeof(struct netfs_io_subrequest),
+			 GFP_KERNEL);
 	if (subreq) {
 		INIT_LIST_HEAD(&subreq->rreq_link);
 		refcount_set(&subreq->ref, 2);

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b92e982ac4a0..6942b8cf03dc 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -214,6 +214,8 @@ struct netfs_io_request {
  * Operations the network filesystem can/must provide to the helpers.
  */
 struct netfs_request_ops {
+	unsigned int	io_request_size;	/* Alloc size for netfs_io_request struct */
+	unsigned int	io_subrequest_size;	/* Alloc size for netfs_io_subrequest struct */
 	int (*init_request)(struct netfs_io_request *rreq, struct file *file);
 	void (*free_request)(struct netfs_io_request *rreq);
 	int (*begin_cache_operation)(struct netfs_io_request *rreq);

From patchwork Fri Oct 13 16:03:34 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421147
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 05/53] netfs: Add a ->free_subrequest() op
Date: Fri, 13 Oct 2023 17:03:34 +0100
Message-ID: <20231013160423.2218093-6-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Add a ->free_subrequest() op so that the netfs can clean up data attached to a subrequest.
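The hook follows the usual optional-callback pattern: the core still frees the object, but first gives the filesystem a chance to release anything it attached. A minimal userspace sketch of that pattern (the struct and function names here are illustrative stand-ins, not the kernel API):

```c
#include <stdlib.h>

/* Stand-ins for the kernel structures; names are illustrative only. */
struct sketch_subreq {
	void *fs_private;	/* data a filesystem hung off the subrequest */
};

struct sketch_ops {
	void (*free_subrequest)(struct sketch_subreq *subreq); /* optional, may be NULL */
};

static int cleanups_run;	/* counts callback invocations for the demo */

/* A sample filesystem cleanup callback. */
static void sample_cleanup(struct sketch_subreq *subreq)
{
	free(subreq->fs_private);
	subreq->fs_private = NULL;
	cleanups_run++;
}

/* Mirrors the shape of netfs_free_subrequest() after this patch: call the
 * op if the filesystem provided one, then free the object itself. */
static void sketch_free_subreq(const struct sketch_ops *ops,
			       struct sketch_subreq *subreq)
{
	if (ops->free_subrequest)
		ops->free_subrequest(subreq);
	free(subreq);
}
```

Filesystems that don't attach anything simply leave the pointer NULL and pay nothing; this is why the op is a new field rather than a mandatory method.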
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/objects.c    | 2 ++
 include/linux/netfs.h | 1 +
 2 files changed, 3 insertions(+)

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 2f1865ff7cce..8e92b8401aaa 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -147,6 +147,8 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
 	struct netfs_io_request *rreq = subreq->rreq;
 
 	trace_netfs_sreq(subreq, netfs_sreq_trace_free);
+	if (rreq->netfs_ops->free_subrequest)
+		rreq->netfs_ops->free_subrequest(subreq);
 	kfree(subreq);
 	netfs_stat_d(&netfs_n_rh_sreq);
 	netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq);

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 6942b8cf03dc..ed64d1034afa 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -218,6 +218,7 @@ struct netfs_request_ops {
 	unsigned int	io_subrequest_size;	/* Alloc size for netfs_io_subrequest struct */
 	int (*init_request)(struct netfs_io_request *rreq, struct file *file);
 	void (*free_request)(struct netfs_io_request *rreq);
+	void (*free_subrequest)(struct netfs_io_subrequest *rreq);
 	int (*begin_cache_operation)(struct netfs_io_request *rreq);
 	void (*expand_readahead)(struct netfs_io_request *rreq);

From patchwork Fri Oct 13 16:03:35 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421148
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 06/53] afs: Don't use folio->private to record partial modification
Date: Fri, 13 Oct 2023 17:03:35 +0100
Message-ID: <20231013160423.2218093-7-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
AFS currently uses folio->private to store the range of bytes within a folio that have been modified - the idea being that if we have, say, a 2MiB folio and someone writes a single byte, we only have to write back that single page and not the whole 2MiB folio - thereby saving on network bandwidth.

Remove this, at least for now, and accept the extra network load (which doesn't matter in the common case of writing a whole file at a time from beginning to end).

This makes folio->private available for netfslib to use.

Signed-off-by: David Howells
cc: Marc Dionne
cc: Jeff Layton
cc: linux-afs@lists.infradead.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/afs/file.c              |  67 -------
 fs/afs/internal.h          |  56 -----
 fs/afs/write.c             | 188 ++++++++-----------------------
 include/trace/events/afs.h |  16 +---
 4 files changed, 42 insertions(+), 285 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index d37dd201752b..0c49b3b6f214 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -405,63 +405,6 @@ int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
 	return 0;
 }
 
-/*
- * Adjust the dirty region of the page on truncation or full invalidation,
- * getting rid of the markers altogether if the region is entirely invalidated.
- */
-static void afs_invalidate_dirty(struct folio *folio, size_t offset,
-				 size_t length)
-{
-	struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
-	unsigned long priv;
-	unsigned int f, t, end = offset + length;
-
-	priv = (unsigned long)folio_get_private(folio);
-
-	/* we clean up only if the entire page is being invalidated */
-	if (offset == 0 && length == folio_size(folio))
-		goto full_invalidate;
-
-	/* If the page was dirtied by page_mkwrite(), the PTE stays writable
-	 * and we don't get another notification to tell us to expand it
-	 * again.
-	 */
-	if (afs_is_folio_dirty_mmapped(priv))
-		return;
-
-	/* We may need to shorten the dirty region */
-	f = afs_folio_dirty_from(folio, priv);
-	t = afs_folio_dirty_to(folio, priv);
-
-	if (t <= offset || f >= end)
-		return; /* Doesn't overlap */
-
-	if (f < offset && t > end)
-		return; /* Splits the dirty region - just absorb it */
-
-	if (f >= offset && t <= end)
-		goto undirty;
-
-	if (f < offset)
-		t = offset;
-	else
-		f = end;
-	if (f == t)
-		goto undirty;
-
-	priv = afs_folio_dirty(folio, f, t);
-	folio_change_private(folio, (void *)priv);
-	trace_afs_folio_dirty(vnode, tracepoint_string("trunc"), folio);
-	return;
-
-undirty:
-	trace_afs_folio_dirty(vnode, tracepoint_string("undirty"), folio);
-	folio_clear_dirty_for_io(folio);
-full_invalidate:
-	trace_afs_folio_dirty(vnode, tracepoint_string("inval"), folio);
-	folio_detach_private(folio);
-}
-
 /*
  * invalidate part or all of a page
  * - release a page and clean up its private data if offset is 0 (indicating
@@ -472,11 +415,6 @@ static void afs_invalidate_folio(struct folio *folio, size_t offset,
 {
 	_enter("{%lu},%zu,%zu", folio->index, offset, length);
 
-	BUG_ON(!folio_test_locked(folio));
-
-	if (folio_get_private(folio))
-		afs_invalidate_dirty(folio, offset, length);
-
 	folio_wait_fscache(folio);
 	_leave("");
 }
@@ -504,11 +442,6 @@ static bool afs_release_folio(struct folio *folio, gfp_t gfp)
 	fscache_note_page_release(afs_vnode_cache(vnode));
 #endif
 
-	if (folio_test_private(folio)) {
-		trace_afs_folio_dirty(vnode, tracepoint_string("rel"), folio);
-		folio_detach_private(folio);
-	}
-
 	/* Indicate that the folio can be released */
 	_leave(" = T");
 	return true;

diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 469a717467a4..03fed7ecfab9 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -892,62 +892,6 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
 			   i_size_read(&vnode->netfs.inode), flags);
 }
 
-/*
- * We use folio->private to hold the amount of the folio that we've written to,
- * splitting the field into two parts.  However, we need to represent a range
- * 0...FOLIO_SIZE, so we reduce the resolution if the size of the folio
- * exceeds what we can encode.
- */
-#ifdef CONFIG_64BIT
-#define __AFS_FOLIO_PRIV_MASK		0x7fffffffUL
-#define __AFS_FOLIO_PRIV_SHIFT		32
-#define __AFS_FOLIO_PRIV_MMAPPED	0x80000000UL
-#else
-#define __AFS_FOLIO_PRIV_MASK		0x7fffUL
-#define __AFS_FOLIO_PRIV_SHIFT		16
-#define __AFS_FOLIO_PRIV_MMAPPED	0x8000UL
-#endif
-
-static inline unsigned int afs_folio_dirty_resolution(struct folio *folio)
-{
-	int shift = folio_shift(folio) - (__AFS_FOLIO_PRIV_SHIFT - 1);
-	return (shift > 0) ? shift : 0;
-}
-
-static inline size_t afs_folio_dirty_from(struct folio *folio, unsigned long priv)
-{
-	unsigned long x = priv & __AFS_FOLIO_PRIV_MASK;
-
-	/* The lower bound is inclusive */
-	return x << afs_folio_dirty_resolution(folio);
-}
-
-static inline size_t afs_folio_dirty_to(struct folio *folio, unsigned long priv)
-{
-	unsigned long x = (priv >> __AFS_FOLIO_PRIV_SHIFT) & __AFS_FOLIO_PRIV_MASK;
-
-	/* The upper bound is immediately beyond the region */
-	return (x + 1) << afs_folio_dirty_resolution(folio);
-}
-
-static inline unsigned long afs_folio_dirty(struct folio *folio, size_t from, size_t to)
-{
-	unsigned int res = afs_folio_dirty_resolution(folio);
-	from >>= res;
-	to = (to - 1) >> res;
-	return (to << __AFS_FOLIO_PRIV_SHIFT) | from;
-}
-
-static inline unsigned long afs_folio_dirty_mmapped(unsigned long priv)
-{
-	return priv | __AFS_FOLIO_PRIV_MMAPPED;
-}
-
-static inline bool afs_is_folio_dirty_mmapped(unsigned long priv)
-{
-	return priv & __AFS_FOLIO_PRIV_MMAPPED;
-}
-
 #include 
 
 /*****************************************************************************/

diff --git a/fs/afs/write.c b/fs/afs/write.c
index e1c45341719b..cdb1391ec46e 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -16,7 +16,8 @@
 
 static int afs_writepages_region(struct address_space *mapping,
 				 struct writeback_control *wbc,
-				 loff_t start, loff_t end, loff_t *_next,
+				 unsigned long long start,
+				 unsigned long long end, loff_t *_next,
 				 bool max_one_loop);
 
 static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
@@ -43,25 +44,6 @@ static void afs_folio_start_fscache(bool caching, struct folio *folio)
 }
 #endif
 
-/*
- * Flush out a conflicting write.  This may extend the write to the surrounding
- * pages if also dirty and contiguous to the conflicting region..
- */
-static int afs_flush_conflicting_write(struct address_space *mapping,
-				       struct folio *folio)
-{
-	struct writeback_control wbc = {
-		.sync_mode	= WB_SYNC_ALL,
-		.nr_to_write	= LONG_MAX,
-		.range_start	= folio_pos(folio),
-		.range_end	= LLONG_MAX,
-	};
-	loff_t next;
-
-	return afs_writepages_region(mapping, &wbc, folio_pos(folio), LLONG_MAX,
-				     &next, true);
-}
-
 /*
  * prepare to perform part of a write to a page
  */
@@ -71,10 +53,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
 {
 	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
 	struct folio *folio;
-	unsigned long priv;
-	unsigned f, from;
-	unsigned t, to;
-	pgoff_t index;
 	int ret;
 
 	_enter("{%llx:%llu},%llx,%x",
@@ -88,49 +66,20 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
 	if (ret < 0)
 		return ret;
 
-	index = folio_index(folio);
-	from = pos - index * PAGE_SIZE;
-	to = from + len;
-
try_again:
 	/* See if this page is already partially written in a way that we can
 	 * merge the new write with.
 	 */
-	if (folio_test_private(folio)) {
-		priv = (unsigned long)folio_get_private(folio);
-		f = afs_folio_dirty_from(folio, priv);
-		t = afs_folio_dirty_to(folio, priv);
-		ASSERTCMP(f, <=, t);
-
-		if (folio_test_writeback(folio)) {
-			trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
-			folio_unlock(folio);
-			goto wait_for_writeback;
-		}
-		/* If the file is being filled locally, allow inter-write
-		 * spaces to be merged into writes.  If it's not, only write
-		 * back what the user gives us.
-		 */
-		if (!test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags) &&
-		    (to < f || from > t))
-			goto flush_conflicting_write;
+	if (folio_test_writeback(folio)) {
+		trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
+		folio_unlock(folio);
+		goto wait_for_writeback;
 	}
 
 	*_page = folio_file_page(folio, pos / PAGE_SIZE);
 	_leave(" = 0");
 	return 0;
 
-	/* The previous write and this write aren't adjacent or overlapping, so
-	 * flush the page out.
-	 */
-flush_conflicting_write:
-	trace_afs_folio_dirty(vnode, tracepoint_string("confl"), folio);
-	folio_unlock(folio);
-
-	ret = afs_flush_conflicting_write(mapping, folio);
-	if (ret < 0)
-		goto error;
-
wait_for_writeback:
 	ret = folio_wait_writeback_killable(folio);
 	if (ret < 0)
@@ -156,9 +105,6 @@ int afs_write_end(struct file *file, struct address_space *mapping,
 {
 	struct folio *folio = page_folio(subpage);
 	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
-	unsigned long priv;
-	unsigned int f, from = offset_in_folio(folio, pos);
-	unsigned int t, to = from + copied;
 	loff_t i_size, write_end_pos;
 
 	_enter("{%llx:%llu},{%lx}",
@@ -188,23 +134,6 @@ int afs_write_end(struct file *file, struct address_space *mapping,
 		fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos);
 	}
 
-	if (folio_test_private(folio)) {
-		priv = (unsigned long)folio_get_private(folio);
-		f = afs_folio_dirty_from(folio, priv);
-		t = afs_folio_dirty_to(folio, priv);
-		if (from < f)
-			f = from;
-		if (to > t)
-			t = to;
-		priv = afs_folio_dirty(folio, f, t);
-		folio_change_private(folio, (void *)priv);
-		trace_afs_folio_dirty(vnode, tracepoint_string("dirty+"), folio);
-	} else {
-		priv = afs_folio_dirty(folio, from, to);
-		folio_attach_private(folio, (void *)priv);
-		trace_afs_folio_dirty(vnode, tracepoint_string("dirty"), folio);
-	}
-
 	if (folio_mark_dirty(folio))
 		_debug("dirtied %lx", folio_index(folio));
 
@@ -309,7 +238,6 @@ static void afs_pages_written_back(struct afs_vnode *vnode, loff_t start, unsign
 		}
 
 		trace_afs_folio_dirty(vnode, tracepoint_string("clear"), folio);
-		folio_detach_private(folio);
 		folio_end_writeback(folio);
 	}
 
@@ -463,17 +391,12 @@ static void afs_extend_writeback(struct address_space *mapping,
 				 long *_count,
 				 loff_t start,
 				 loff_t max_len,
-				 bool new_content,
 				 bool caching,
-				 unsigned int *_len)
+				 size_t *_len)
 {
 	struct folio_batch fbatch;
 	struct folio *folio;
-	unsigned long priv;
-	unsigned int psize, filler = 0;
-	unsigned int f, t;
-	loff_t len = *_len;
-	pgoff_t index = (start + len) / PAGE_SIZE;
+	pgoff_t index = (start + *_len) / PAGE_SIZE;
 	bool stop = true;
 	unsigned int i;
 
@@ -501,7 +424,7 @@ static void afs_extend_writeback(struct address_space *mapping,
 			continue;
 		}
 
-		/* Has the page moved or been split? */
+		/* Has the folio moved or been split? */
 		if (unlikely(folio != xas_reload(&xas))) {
 			folio_put(folio);
 			break;
@@ -519,24 +442,13 @@ static void afs_extend_writeback(struct address_space *mapping,
 			break;
 		}
 
-		psize = folio_size(folio);
-		priv = (unsigned long)folio_get_private(folio);
-		f = afs_folio_dirty_from(folio, priv);
-		t = afs_folio_dirty_to(folio, priv);
-		if (f != 0 && !new_content) {
-			folio_unlock(folio);
-			folio_put(folio);
-			break;
-		}
-
-		len += filler + t;
-		filler = psize - t;
-		if (len >= max_len || *_count <= 0)
+		index += folio_nr_pages(folio);
+		*_count -= folio_nr_pages(folio);
+		*_len += folio_size(folio);
+		stop = false;
+		if (*_len >= max_len || *_count <= 0)
 			stop = true;
-		else if (t == psize || new_content)
-			stop = false;
-		index += folio_nr_pages(folio);
 		if (!folio_batch_add(&fbatch, folio))
 			break;
 		if (stop)
@@ -562,16 +474,12 @@ static void afs_extend_writeback(struct address_space *mapping,
 			if (folio_start_writeback(folio))
 				BUG();
 			afs_folio_start_fscache(caching, folio);
-
-			*_count -= folio_nr_pages(folio);
 			folio_unlock(folio);
 		}
 
 		folio_batch_release(&fbatch);
 		cond_resched();
 	} while (!stop);
-
-	*_len = len;
 }
 
 /*
@@ -581,14 +489,13 @@ static void afs_extend_writeback(struct address_space *mapping,
 static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 						struct writeback_control *wbc,
 						struct folio *folio,
-						loff_t start, loff_t end)
+						unsigned long long start,
+						unsigned long long end)
 {
 	struct afs_vnode *vnode = AFS_FS_I(mapping->host);
 	struct iov_iter iter;
-	unsigned long priv;
-	unsigned int offset, to, len, max_len;
-	loff_t i_size = i_size_read(&vnode->netfs.inode);
-	bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
+	unsigned long long i_size = i_size_read(&vnode->netfs.inode);
+	size_t len, max_len;
 	bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
 	long count = wbc->nr_to_write;
 	int ret;
@@ -606,13 +513,9 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 	 * immediately lockable, is not dirty or is missing, or we reach the
 	 * end of the range.
 	 */
-	priv = (unsigned long)folio_get_private(folio);
-	offset = afs_folio_dirty_from(folio, priv);
-	to = afs_folio_dirty_to(folio, priv);
 	trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
 
-	len = to - offset;
-	start += offset;
+	len = folio_size(folio);
 	if (start < i_size) {
 		/* Trim the write to the EOF; the extra data is ignored.  Also
 		 * put an upper limit on the size of a single storedata op.
@@ -621,12 +524,10 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 		max_len = min_t(unsigned long long, max_len, end - start + 1);
 		max_len = min_t(unsigned long long, max_len, i_size - start);
 
-		if (len < max_len &&
-		    (to == folio_size(folio) || new_content))
+		if (len < max_len)
 			afs_extend_writeback(mapping, vnode, &count,
-					     start, max_len, new_content,
-					     caching, &len);
-		len = min_t(loff_t, len, max_len);
+					     start, max_len, caching, &len);
+		len = min_t(unsigned long long, len, i_size - start);
 	}
 
 	/* We now have a contiguous set of dirty pages, each with writeback
@@ -636,7 +537,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 	folio_unlock(folio);
 
 	if (start < i_size) {
-		_debug("write back %x @%llx [%llx]", len, start, i_size);
+		_debug("write back %zx @%llx [%llx]", len, start, i_size);
 
 		/* Speculatively write to the cache.  We have to fix this up
 		 * later if the store fails.
@@ -646,7 +547,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping, iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len); ret = afs_store_data(vnode, &iter, start, false); } else { - _debug("write discard %x @%llx [%llx]", len, start, i_size); + _debug("write discard %zx @%llx [%llx]", len, start, i_size); /* The dirty region was entirely beyond the EOF. */ fscache_clear_page_bits(mapping, start, len, caching); @@ -702,7 +603,8 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping, */ static int afs_writepages_region(struct address_space *mapping, struct writeback_control *wbc, - loff_t start, loff_t end, loff_t *_next, + unsigned long long start, + unsigned long long end, loff_t *_next, bool max_one_loop) { struct folio *folio; @@ -914,7 +816,6 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) struct inode *inode = file_inode(file); struct afs_vnode *vnode = AFS_FS_I(inode); struct afs_file *af = file->private_data; - unsigned long priv; vm_fault_t ret = VM_FAULT_RETRY; _enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, folio_index(folio)); @@ -938,24 +839,15 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) if (folio_lock_killable(folio) < 0) goto out; - /* We mustn't change folio->private until writeback is complete as that - * details the portion of the page we need to write back and we might - * need to redirty the page if there's a problem. 
-        */
        if (folio_wait_writeback_killable(folio) < 0) {
                folio_unlock(folio);
                goto out;
        }

-       priv = afs_folio_dirty(folio, 0, folio_size(folio));
-       priv = afs_folio_dirty_mmapped(priv);
-       if (folio_test_private(folio)) {
-               folio_change_private(folio, (void *)priv);
+       if (folio_test_dirty(folio))
                trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite+"), folio);
-       } else {
-               folio_attach_private(folio, (void *)priv);
+       else
                trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio);
-       }

        file_update_time(file);
        ret = VM_FAULT_LOCKED;
@@ -1000,30 +892,26 @@ int afs_launder_folio(struct folio *folio)
        struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
        struct iov_iter iter;
        struct bio_vec bv;
-       unsigned long priv;
-       unsigned int f, t;
+       unsigned long long fend, i_size = vnode->netfs.inode.i_size;
+       size_t len;
        int ret = 0;

        _enter("{%lx}", folio->index);

-       priv = (unsigned long)folio_get_private(folio);
-       if (folio_clear_dirty_for_io(folio)) {
-               f = 0;
-               t = folio_size(folio);
-               if (folio_test_private(folio)) {
-                       f = afs_folio_dirty_from(folio, priv);
-                       t = afs_folio_dirty_to(folio, priv);
-               }
+       if (folio_clear_dirty_for_io(folio) && folio_pos(folio) < i_size) {
+               len = folio_size(folio);
+               fend = folio_pos(folio) + len;
+               if (vnode->netfs.inode.i_size < fend)
+                       len = fend - i_size;

-               bvec_set_folio(&bv, folio, t - f, f);
-               iov_iter_bvec(&iter, ITER_SOURCE, &bv, 1, bv.bv_len);
+               bvec_set_folio(&bv, folio, len, 0);
+               iov_iter_bvec(&iter, WRITE, &bv, 1, len);
                trace_afs_folio_dirty(vnode, tracepoint_string("launder"), folio);
-               ret = afs_store_data(vnode, &iter, folio_pos(folio) + f, true);
+               ret = afs_store_data(vnode, &iter, folio_pos(folio), true);
        }

        trace_afs_folio_dirty(vnode, tracepoint_string("laundered"), folio);
-       folio_detach_private(folio);
        folio_wait_fscache(folio);
        return ret;
 }
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index 597677acc6b1..08506680350c 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -846,26 +846,18 @@ TRACE_EVENT(afs_folio_dirty,
                    __field(struct afs_vnode *, vnode)
                    __field(const char *,       where)
                    __field(pgoff_t,            index)
-                   __field(unsigned long,      from)
-                   __field(unsigned long,      to)
+                   __field(size_t,             size)
            ),

            TP_fast_assign(
-                   unsigned long priv = (unsigned long)folio_get_private(folio);
                    __entry->vnode = vnode;
                    __entry->where = where;
                    __entry->index = folio_index(folio);
-                   __entry->from = afs_folio_dirty_from(folio, priv);
-                   __entry->to = afs_folio_dirty_to(folio, priv);
-                   __entry->to |= (afs_is_folio_dirty_mmapped(priv) ?
-                                   (1UL << (BITS_PER_LONG - 1)) : 0);
+                   __entry->size = folio_size(folio);
            ),

-           TP_printk("vn=%p %lx %s %lx-%lx%s",
-                     __entry->vnode, __entry->index, __entry->where,
-                     __entry->from,
-                     __entry->to & ~(1UL << (BITS_PER_LONG - 1)),
-                     __entry->to & (1UL << (BITS_PER_LONG - 1)) ? " M" : "")
+           TP_printk("vn=%p ix=%05lx s=%05lx %s",
+                     __entry->vnode, __entry->index, __entry->size, __entry->where)
            );

 TRACE_EVENT(afs_call_state,

From patchwork Fri Oct 13 16:03:36 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421149
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 07/53] netfs: Provide invalidate_folio and release_folio calls
Date: Fri, 13 Oct 2023 17:03:36 +0100
Message-ID: <20231013160423.2218093-8-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Provide default invalidate_folio and release_folio calls. These will need
to interact with invalidation correctly at some point.
They will be needed if netfslib is to make use of folio->private for its
own purposes.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Reviewed-by: Jeff Layton
---
 fs/9p/vfs_addr.c      | 33 ++-------------------------
 fs/afs/file.c         | 53 ++++---------------------------------------
 fs/ceph/addr.c        | 24 ++------------------
 fs/netfs/Makefile     |  1 +
 fs/netfs/misc.c       | 51 +++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |  6 +++--
 6 files changed, 64 insertions(+), 104 deletions(-)
 create mode 100644 fs/netfs/misc.c

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 8a635999a7d6..18a666c43e4a 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -104,35 +104,6 @@ const struct netfs_request_ops v9fs_req_ops = {
        .issue_read = v9fs_issue_read,
 };

-/**
- * v9fs_release_folio - release the private state associated with a folio
- * @folio: The folio to be released
- * @gfp: The caller's allocation restrictions
- *
- * Returns true if the page can be released, false otherwise.
- */
-
-static bool v9fs_release_folio(struct folio *folio, gfp_t gfp)
-{
-       if (folio_test_private(folio))
-               return false;
-#ifdef CONFIG_9P_FSCACHE
-       if (folio_test_fscache(folio)) {
-               if (current_is_kswapd() || !(gfp & __GFP_FS))
-                       return false;
-               folio_wait_fscache(folio);
-       }
-       fscache_note_page_release(v9fs_inode_cookie(V9FS_I(folio_inode(folio))));
-#endif
-       return true;
-}
-
-static void v9fs_invalidate_folio(struct folio *folio, size_t offset,
-                                 size_t length)
-{
-       folio_wait_fscache(folio);
-}
-
 #ifdef CONFIG_9P_FSCACHE
 static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
                                     bool was_async)
@@ -355,8 +326,8 @@ const struct address_space_operations v9fs_addr_operations = {
        .writepage = v9fs_vfs_writepage,
        .write_begin = v9fs_write_begin,
        .write_end = v9fs_write_end,
-       .release_folio = v9fs_release_folio,
-       .invalidate_folio = v9fs_invalidate_folio,
+       .release_folio = netfs_release_folio,
+       .invalidate_folio = netfs_invalidate_folio,
        .launder_folio = v9fs_launder_folio,
        .direct_IO = v9fs_direct_IO,
 };
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 0c49b3b6f214..3fea5cd8ef13 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -20,9 +20,6 @@
 static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
 static int afs_symlink_read_folio(struct file *file, struct folio *folio);
-static void afs_invalidate_folio(struct folio *folio, size_t offset,
-                                size_t length);
-static bool afs_release_folio(struct folio *folio, gfp_t gfp_flags);
 static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
 static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
@@ -57,8 +54,8 @@ const struct address_space_operations afs_file_aops = {
        .readahead      = netfs_readahead,
        .dirty_folio    = afs_dirty_folio,
        .launder_folio  = afs_launder_folio,
-       .release_folio  = afs_release_folio,
-       .invalidate_folio = afs_invalidate_folio,
+       .release_folio  = netfs_release_folio,
+       .invalidate_folio = netfs_invalidate_folio,
        .write_begin    = afs_write_begin,
        .write_end      = afs_write_end,
        .writepages     = afs_writepages,
@@ -67,8 +64,8 @@ const struct address_space_operations afs_file_aops = {

 const struct address_space_operations afs_symlink_aops = {
        .read_folio     = afs_symlink_read_folio,
-       .release_folio  = afs_release_folio,
-       .invalidate_folio = afs_invalidate_folio,
+       .release_folio  = netfs_release_folio,
+       .invalidate_folio = netfs_invalidate_folio,
        .migrate_folio  = filemap_migrate_folio,
 };
@@ -405,48 +402,6 @@ int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
        return 0;
 }

-/*
- * invalidate part or all of a page
- * - release a page and clean up its private data if offset is 0 (indicating
- *   the entire page)
- */
-static void afs_invalidate_folio(struct folio *folio, size_t offset,
-                                size_t length)
-{
-       _enter("{%lu},%zu,%zu", folio->index, offset, length);
-
-       folio_wait_fscache(folio);
-       _leave("");
-}
-
-/*
- * release a page and clean up its private state if it's not busy
- * - return true if the page can now be released, false if not
- */
-static bool afs_release_folio(struct folio *folio, gfp_t gfp)
-{
-       struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
-
-       _enter("{{%llx:%llu}[%lu],%lx},%x",
-              vnode->fid.vid, vnode->fid.vnode, folio_index(folio), folio->flags,
-              gfp);
-
-       /* deny if folio is being written to the cache and the caller hasn't
-        * elected to wait */
-#ifdef CONFIG_AFS_FSCACHE
-       if (folio_test_fscache(folio)) {
-               if (current_is_kswapd() || !(gfp & __GFP_FS))
-                       return false;
-               folio_wait_fscache(folio);
-       }
-       fscache_note_page_release(afs_vnode_cache(vnode));
-#endif
-
-       /* Indicate that the folio can be released */
-       _leave(" = T");
-       return true;
-}
-
 static void afs_add_open_mmap(struct afs_vnode *vnode)
 {
        if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) {
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index f4863078f7fe..ced19ff08988 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -160,27 +160,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
                ceph_put_snap_context(snapc);
        }

-       folio_wait_fscache(folio);
-}
-
-static bool ceph_release_folio(struct folio *folio, gfp_t gfp)
-{
-       struct inode *inode = folio->mapping->host;
-
-       dout("%llx:%llx release_folio idx %lu (%sdirty)\n",
-            ceph_vinop(inode),
-            folio->index, folio_test_dirty(folio) ? "" : "not ");
-
-       if (folio_test_private(folio))
-               return false;
-
-       if (folio_test_fscache(folio)) {
-               if (current_is_kswapd() || !(gfp & __GFP_FS))
-                       return false;
-               folio_wait_fscache(folio);
-       }
-       ceph_fscache_note_page_release(inode);
-       return true;
+       netfs_invalidate_folio(folio, offset, length);
 }

 static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)
@@ -1563,7 +1543,7 @@ const struct address_space_operations ceph_aops = {
        .write_end = ceph_write_end,
        .dirty_folio = ceph_dirty_folio,
        .invalidate_folio = ceph_invalidate_folio,
-       .release_folio = ceph_release_folio,
+       .release_folio = netfs_release_folio,
        .direct_IO = noop_direct_IO,
 };
diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index 386d6fb92793..cd22554d9048 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -5,6 +5,7 @@ netfs-y := \
        io.o \
        iterator.o \
        main.o \
+       misc.o \
        objects.o

 netfs-$(CONFIG_NETFS_STATS) += stats.o
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
new file mode 100644
index 000000000000..c3baf2b247d9
--- /dev/null
+++ b/fs/netfs/misc.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Miscellaneous routines.
+ *
+ * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include
+#include "internal.h"
+
+/**
+ * netfs_invalidate_folio - Invalidate or partially invalidate a folio
+ * @folio: Folio proposed for release
+ * @offset: Offset of the invalidated region
+ * @length: Length of the invalidated region
+ *
+ * Invalidate part or all of a folio for a network filesystem.  The folio will
+ * be removed afterwards if the invalidated region covers the entire folio.
+ */
+void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
+{
+       _enter("{%lx},%zx,%zx", folio_index(folio), offset, length);
+
+       folio_wait_fscache(folio);
+}
+EXPORT_SYMBOL(netfs_invalidate_folio);
+
+/**
+ * netfs_release_folio - Try to release a folio
+ * @folio: Folio proposed for release
+ * @gfp: Flags qualifying the release
+ *
+ * Request release of a folio and clean up its private state if it's not busy.
+ * Returns true if the folio can now be released, false if not
+ */
+bool netfs_release_folio(struct folio *folio, gfp_t gfp)
+{
+       struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
+
+       if (folio_test_private(folio))
+               return false;
+       if (folio_test_fscache(folio)) {
+               if (current_is_kswapd() || !(gfp & __GFP_FS))
+                       return false;
+               folio_wait_fscache(folio);
+       }
+
+       fscache_note_page_release(netfs_i_cookie(ctx));
+       return true;
+}
+EXPORT_SYMBOL(netfs_release_folio);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index ed64d1034afa..daa431c4148d 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -299,8 +299,10 @@ struct readahead_control;
 void netfs_readahead(struct readahead_control *);
 int netfs_read_folio(struct file *, struct folio *);
 int netfs_write_begin(struct netfs_inode *, struct file *,
-               struct address_space *, loff_t pos, unsigned int len,
-               struct folio **, void **fsdata);
+               struct address_space *, loff_t pos, unsigned int len,
+               struct folio **, void **fsdata);
+void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
+bool netfs_release_folio(struct folio *folio, gfp_t gfp);

 void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
 void netfs_get_subrequest(struct netfs_io_subrequest *subreq,

From patchwork Fri Oct 13 16:03:37 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421150
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 08/53] netfs: Add rsize to netfs_io_request
Date: Fri, 13 Oct 2023 17:03:37 +0100
Message-ID: <20231013160423.2218093-9-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Add an rsize parameter to netfs_io_request to be filled in by the network
filesystem when the request is initialised. This indicates the maximum size
of a read request that the netfs will honour in that region.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/afs/file.c         | 1 +
 fs/ceph/addr.c        | 2 ++
 include/linux/netfs.h | 1 +
 3 files changed, 4 insertions(+)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index 3fea5cd8ef13..3d2e1913ea27 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -360,6 +360,7 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio)
 static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
 {
        rreq->netfs_priv = key_get(afs_file_key(file));
+       rreq->rsize = 4 * 1024 * 1024;
        return 0;
 }
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index ced19ff08988..92a5ddcd9a76 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -419,6 +419,8 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
        struct ceph_netfs_request_data *priv;
        int ret = 0;

+       rreq->rsize = 1024 * 1024;
+
        if (rreq->origin != NETFS_READAHEAD)
                return 0;
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index daa431c4148d..02e888c170da 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -188,6 +188,7 @@ struct netfs_io_request {
        struct list_head subrequests;   /* Contributory I/O operations */
        void *netfs_priv;               /* Private data for the netfs */
        unsigned int debug_id;
+       unsigned int rsize;             /* Maximum read size (0 for none) */
        atomic_t nr_outstanding;        /* Number of ops in progress */
        atomic_t nr_copy_ops;           /* Number of copy-to-cache ops in progress */
        size_t submitted;               /* Amount submitted for I/O so far */

From patchwork Fri Oct 13 16:03:38 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421153
From: David Howells
To: Jeff Layton, Steve French
Subject: [RFC PATCH 09/53] netfs: Implement unbuffered/DIO vs buffered I/O locking
Date: Fri, 13 Oct 2023 17:03:38 +0100
Message-ID: <20231013160423.2218093-10-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Borrow NFS's direct-vs-buffered I/O locking into netfslib. Similar code is
also used in ceph. Modify it to have the correct checker annotations for
i_rwsem lock acquisition/release and to return -ERESTARTSYS if waits are
interrupted.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/Makefile     |   1 +
 fs/netfs/locking.c    | 209 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |  10 ++
 3 files changed, 220 insertions(+)
 create mode 100644 fs/netfs/locking.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index cd22554d9048..647ce1935674 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -4,6 +4,7 @@ netfs-y := \
 	buffered_read.o \
 	io.o \
 	iterator.o \
+	locking.o \
 	main.o \
 	misc.o \
 	objects.o
diff --git a/fs/netfs/locking.c b/fs/netfs/locking.c
new file mode 100644
index 000000000000..fecca8ea6322
--- /dev/null
+++ b/fs/netfs/locking.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * I/O and data path helper functionality.
+ *
+ * Borrowed from NFS Copyright (c) 2016 Trond Myklebust
+ */
+
+#include
+#include
+
+/*
+ * inode_dio_wait_interruptible - wait for outstanding DIO requests to finish
+ * @inode: inode to wait for
+ *
+ * Waits for all pending direct I/O requests to finish so that we can
+ * proceed with a truncate or equivalent operation.
+ *
+ * Must be called under a lock that serializes taking new references
+ * to i_dio_count, usually by inode->i_mutex.
+ */
+static int inode_dio_wait_interruptible(struct inode *inode)
+{
+	if (!atomic_read(&inode->i_dio_count))
+		return 0;
+
+	wait_queue_head_t *wq = bit_waitqueue(&inode->i_state, __I_DIO_WAKEUP);
+	DEFINE_WAIT_BIT(q, &inode->i_state, __I_DIO_WAKEUP);
+
+	for (;;) {
+		prepare_to_wait(wq, &q.wq_entry, TASK_INTERRUPTIBLE);
+		if (!atomic_read(&inode->i_dio_count))
+			break;
+		if (signal_pending(current))
+			break;
+		schedule();
+	}
+	finish_wait(wq, &q.wq_entry);
+
+	return atomic_read(&inode->i_dio_count) ? -ERESTARTSYS : 0;
+}
+
+/* Call with exclusively locked inode->i_rwsem */
+static int netfs_block_o_direct(struct netfs_inode *ictx)
+{
+	if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags))
+		return 0;
+	clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
+	return inode_dio_wait_interruptible(&ictx->inode);
+}
+
+/**
+ * netfs_start_io_read - declare the file is being used for buffered reads
+ * @inode: file inode
+ *
+ * Declare that a buffered read operation is about to start, and ensure
+ * that we block all direct I/O.
+ * On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is unset,
+ * and holds a shared lock on inode->i_rwsem to ensure that the flag
+ * cannot be changed.
+ * In practice, this means that buffered read operations are allowed to
+ * execute in parallel, thanks to the shared lock, whereas direct I/O
+ * operations need to wait to grab an exclusive lock in order to set
+ * NETFS_ICTX_ODIRECT.
+ * Note that buffered writes and truncates both take a write lock on
+ * inode->i_rwsem, meaning that those are serialised w.r.t. the reads.
+ */
+int netfs_start_io_read(struct inode *inode)
+	__acquires(inode->i_rwsem)
+{
+	struct netfs_inode *ictx = netfs_inode(inode);
+
+	/* Be an optimist! */
+	if (down_read_interruptible(&inode->i_rwsem) < 0)
+		return -ERESTARTSYS;
+	if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) == 0)
+		return 0;
+	up_read(&inode->i_rwsem);
+
+	/* Slow path.... */
+	if (down_write_killable(&inode->i_rwsem) < 0)
+		return -ERESTARTSYS;
+	if (netfs_block_o_direct(ictx) < 0) {
+		up_write(&inode->i_rwsem);
+		return -ERESTARTSYS;
+	}
+	downgrade_write(&inode->i_rwsem);
+	return 0;
+}
+
+/**
+ * netfs_end_io_read - declare that the buffered read operation is done
+ * @inode: file inode
+ *
+ * Declare that a buffered read operation is done, and release the shared
+ * lock on inode->i_rwsem.
+ */
+void netfs_end_io_read(struct inode *inode)
+	__releases(inode->i_rwsem)
+{
+	up_read(&inode->i_rwsem);
+}
+
+/**
+ * netfs_start_io_write - declare the file is being used for buffered writes
+ * @inode: file inode
+ *
+ * Declare that a buffered write operation is about to start, and ensure
+ * that we block all direct I/O.
+ */
+int netfs_start_io_write(struct inode *inode)
+	__acquires(inode->i_rwsem)
+{
+	struct netfs_inode *ictx = netfs_inode(inode);
+
+	if (down_write_killable(&inode->i_rwsem) < 0)
+		return -ERESTARTSYS;
+	if (netfs_block_o_direct(ictx) < 0) {
+		up_write(&inode->i_rwsem);
+		return -ERESTARTSYS;
+	}
+	return 0;
+}
+
+/**
+ * netfs_end_io_write - declare that the buffered write operation is done
+ * @inode: file inode
+ *
+ * Declare that a buffered write operation is done, and release the
+ * lock on inode->i_rwsem.
+ */
+void netfs_end_io_write(struct inode *inode)
+	__releases(inode->i_rwsem)
+{
+	up_write(&inode->i_rwsem);
+}
+
+/* Call with exclusively locked inode->i_rwsem */
+static int netfs_block_buffered(struct inode *inode)
+{
+	struct netfs_inode *ictx = netfs_inode(inode);
+	int ret;
+
+	if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags)) {
+		set_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
+		if (inode->i_mapping->nrpages != 0) {
+			unmap_mapping_range(inode->i_mapping, 0, 0, 0);
+			ret = filemap_fdatawait(inode->i_mapping);
+			if (ret < 0) {
+				clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
+				return ret;
+			}
+		}
+	}
+	return 0;
+}
+
+/**
+ * netfs_start_io_direct - declare the file is being used for direct i/o
+ * @inode: file inode
+ *
+ * Declare that a direct I/O operation is about to start, and ensure
+ * that we block all buffered I/O.
+ * On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is set,
+ * and holds a shared lock on inode->i_rwsem to ensure that the flag
+ * cannot be changed.
+ * In practice, this means that direct I/O operations are allowed to
+ * execute in parallel, thanks to the shared lock, whereas buffered I/O
+ * operations need to wait to grab an exclusive lock in order to clear
+ * NETFS_ICTX_ODIRECT.
+ * Note that buffered writes and truncates both take a write lock on
+ * inode->i_rwsem, meaning that those are serialised w.r.t. O_DIRECT.
+ */
+int netfs_start_io_direct(struct inode *inode)
+	__acquires(inode->i_rwsem)
+{
+	struct netfs_inode *ictx = netfs_inode(inode);
+	int ret;
+
+	/* Be an optimist! */
+	if (down_read_interruptible(&inode->i_rwsem) < 0)
+		return -ERESTARTSYS;
+	if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) != 0)
+		return 0;
+	up_read(&inode->i_rwsem);
+
+	/* Slow path.... */
+	if (down_write_killable(&inode->i_rwsem) < 0)
+		return -ERESTARTSYS;
+	ret = netfs_block_buffered(inode);
+	if (ret < 0) {
+		up_write(&inode->i_rwsem);
+		return ret;
+	}
+	downgrade_write(&inode->i_rwsem);
+	return 0;
+}
+
+/**
+ * netfs_end_io_direct - declare that the direct i/o operation is done
+ * @inode: file inode
+ *
+ * Declare that a direct I/O operation is done, and release the shared
+ * lock on inode->i_rwsem.
+ */
+void netfs_end_io_direct(struct inode *inode)
+	__releases(inode->i_rwsem)
+{
+	up_read(&inode->i_rwsem);
+}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 02e888c170da..33d4487a91e9 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -131,6 +131,8 @@ struct netfs_inode {
 	loff_t			remote_i_size;	/* Size of the remote file */
 	loff_t			zero_point;	/* Size after which we assume there's no data
 						 * on the server */
+	unsigned long		flags;
+#define NETFS_ICTX_ODIRECT	0		/* The file has DIO in progress */
 };
 
 /*
@@ -315,6 +317,13 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
 				struct iov_iter *new,
 				iov_iter_extraction_t extraction_flags);
 
+int netfs_start_io_read(struct inode *inode);
+void netfs_end_io_read(struct inode *inode);
+int netfs_start_io_write(struct inode *inode);
+void netfs_end_io_write(struct inode *inode);
+int netfs_start_io_direct(struct inode *inode);
+void netfs_end_io_direct(struct inode *inode);
+
 /**
  * netfs_inode - Get the netfs inode context from the inode
  * @inode: The inode to query
@@ -341,6 +350,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
 	ctx->ops = ops;
 	ctx->remote_i_size = i_size_read(&ctx->inode);
 	ctx->zero_point = ctx->remote_i_size;
+	ctx->flags = 0;
 #if IS_ENABLED(CONFIG_FSCACHE)
 	ctx->cache = NULL;
 #endif

From patchwork Fri Oct 13 16:03:39 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421151
From: David Howells
To: Jeff Layton, Steve French
Subject: [RFC PATCH 10/53] netfs: Add iov_iters to (sub)requests to describe various buffers
Date: Fri, 13 Oct 2023 17:03:39 +0100
Message-ID: <20231013160423.2218093-11-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Add three iov_iter structs:

 (1) Add an iov_iter (->iter) to the I/O request to describe the
     unencrypted-side buffer.

 (2) Add an iov_iter (->io_iter) to the I/O request to describe the
     encrypted-side I/O buffer. This may be a different size to the buffer
     in (1).

 (3) Add an iov_iter (->io_iter) to the I/O subrequest to describe the part
     of the I/O buffer for that subrequest.
This will allow future patches to point to a bounce buffer instead for
purposes of handling oversize writes, decryption (where we want to save the
encrypted data to the cache) and decompression.

These iov_iters persist for the lifetime of the (sub)request, and so can be
accessed multiple times without worrying about them being deallocated upon
return to the caller. The network filesystem must appropriately advance the
iterator before terminating the request.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/afs/file.c            |  6 +---
 fs/netfs/buffered_read.c | 13 ++++++++
 fs/netfs/io.c            | 69 +++++++++++++++++++++++++++++-----------
 include/linux/netfs.h    |  3 ++
 4 files changed, 67 insertions(+), 24 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index 3d2e1913ea27..3e39a2ebcad6 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -323,11 +323,7 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
 	fsreq->len	= subreq->len   - subreq->transferred;
 	fsreq->key	= key_get(subreq->rreq->netfs_priv);
 	fsreq->vnode	= vnode;
-	fsreq->iter	= &fsreq->def_iter;
-
-	iov_iter_xarray(&fsreq->def_iter, ITER_DEST,
-			&fsreq->vnode->netfs.inode.i_mapping->i_pages,
-			fsreq->pos, fsreq->len);
+	fsreq->iter	= &subreq->io_iter;
 
 	afs_fetch_data(fsreq->vnode, fsreq);
 	afs_put_read(fsreq);
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index a2852fa64ad0..3b7eb706f2fe 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -206,6 +206,10 @@ void netfs_readahead(struct readahead_control *ractl)
 
 	netfs_rreq_expand(rreq, ractl);
 
+	/* Set up the output buffer */
+	iov_iter_xarray(&rreq->iter, ITER_DEST, &ractl->mapping->i_pages,
+			rreq->start, rreq->len);
+
 	/* Drop the refs on the folios here rather than in the cache or
 	 * filesystem. The locks will be dropped in netfs_rreq_unlock().
 	 */
@@ -258,6 +262,11 @@ int netfs_read_folio(struct file *file, struct folio *folio)
 
 	netfs_stat(&netfs_n_rh_readpage);
 	trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);
+
+	/* Set up the output buffer */
+	iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,
+			rreq->start, rreq->len);
+
 	return netfs_begin_read(rreq, true);
 
 discard:
@@ -415,6 +424,10 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	ractl._nr_pages = folio_nr_pages(folio);
 	netfs_rreq_expand(rreq, &ractl);
 
+	/* Set up the output buffer */
+	iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,
+			rreq->start, rreq->len);
+
 	/* We hold the folio locks, so we can drop the references */
 	folio_get(folio);
 	while (readahead_folio(&ractl))
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index 7f753380e047..e9d408e211b8 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -21,12 +21,7 @@
  */
 static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
 {
-	struct iov_iter iter;
-
-	iov_iter_xarray(&iter, ITER_DEST, &subreq->rreq->mapping->i_pages,
-			subreq->start + subreq->transferred,
-			subreq->len   - subreq->transferred);
-	iov_iter_zero(iov_iter_count(&iter), &iter);
+	iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter);
 }
 
 static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,
@@ -46,14 +41,9 @@ static void netfs_read_from_cache(struct netfs_io_request *rreq,
 				  enum netfs_read_from_hole read_hole)
 {
 	struct netfs_cache_resources *cres = &rreq->cache_resources;
-	struct iov_iter iter;
 
 	netfs_stat(&netfs_n_rh_read);
-	iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages,
-			subreq->start + subreq->transferred,
-			subreq->len   - subreq->transferred);
-
-	cres->ops->read(cres, subreq->start, &iter, read_hole,
+	cres->ops->read(cres, subreq->start, &subreq->io_iter, read_hole,
 			netfs_cache_read_terminated, subreq);
 }
 
@@ -88,6 +78,11 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
 				   struct netfs_io_subrequest *subreq)
 {
 	netfs_stat(&netfs_n_rh_download);
+	if (iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred)
+		pr_warn("R=%08x[%u] ITER PRE-MISMATCH %zx != %zx-%zx %lx\n",
+			rreq->debug_id, subreq->debug_index,
+			iov_iter_count(&subreq->io_iter), subreq->len,
+			subreq->transferred, subreq->flags);
 	rreq->netfs_ops->issue_read(subreq);
 }
 
@@ -259,6 +254,30 @@ static void netfs_rreq_short_read(struct netfs_io_request *rreq,
 	netfs_read_from_server(rreq, subreq);
 }
 
+/*
+ * Reset the subrequest iterator prior to resubmission.
+ */
+static void netfs_reset_subreq_iter(struct netfs_io_request *rreq,
+				    struct netfs_io_subrequest *subreq)
+{
+	size_t remaining = subreq->len - subreq->transferred;
+	size_t count = iov_iter_count(&subreq->io_iter);
+
+	if (count == remaining)
+		return;
+
+	_debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x\n",
+	       rreq->debug_id, subreq->debug_index,
+	       iov_iter_count(&subreq->io_iter), subreq->transferred,
+	       subreq->len, rreq->i_size,
+	       subreq->io_iter.iter_type);
+
+	if (count < remaining)
+		iov_iter_revert(&subreq->io_iter, remaining - count);
+	else
+		iov_iter_advance(&subreq->io_iter, count - remaining);
+}
+
 /*
  * Resubmit any short or failed operations. Returns true if we got the rreq
  * ref back.
 */
@@ -287,6 +306,7 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
 			trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead);
 			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
 			atomic_inc(&rreq->nr_outstanding);
+			netfs_reset_subreq_iter(rreq, subreq);
 			netfs_read_from_server(rreq, subreq);
 		} else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) {
 			netfs_rreq_short_read(rreq, subreq);
@@ -399,9 +419,9 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
 	struct netfs_io_request *rreq = subreq->rreq;
 	int u;
 
-	_enter("[%u]{%llx,%lx},%zd",
-	       subreq->debug_index, subreq->start, subreq->flags,
-	       transferred_or_error);
+	_enter("R=%x[%x]{%llx,%lx},%zd",
+	       rreq->debug_id, subreq->debug_index,
+	       subreq->start, subreq->flags, transferred_or_error);
 
 	switch (subreq->source) {
 	case NETFS_READ_FROM_CACHE:
@@ -501,7 +521,8 @@ static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_subrequest
  */
 static enum netfs_io_source
 netfs_rreq_prepare_read(struct netfs_io_request *rreq,
-			struct netfs_io_subrequest *subreq)
+			struct netfs_io_subrequest *subreq,
+			struct iov_iter *io_iter)
 {
 	enum netfs_io_source source;
 
@@ -528,9 +549,14 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
 		}
 	}
 
-	if (WARN_ON(subreq->len == 0))
+	if (WARN_ON(subreq->len == 0)) {
 		source = NETFS_INVALID_READ;
+		goto out;
+	}
 
+	subreq->io_iter = *io_iter;
+	iov_iter_truncate(&subreq->io_iter, subreq->len);
+	iov_iter_advance(io_iter, subreq->len);
 out:
 	subreq->source = source;
 	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
@@ -541,6 +567,7 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
  * Slice off a piece of a read request and submit an I/O request for it.
 */
 static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
+				    struct iov_iter *io_iter,
 				    unsigned int *_debug_index)
 {
 	struct netfs_io_subrequest *subreq;
@@ -565,7 +592,7 @@ static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
 	 * (the starts must coincide), in which case, we go around the loop
 	 * again and ask it to download the next piece.
 	 */
-	source = netfs_rreq_prepare_read(rreq, subreq);
+	source = netfs_rreq_prepare_read(rreq, subreq, io_iter);
 	if (source == NETFS_INVALID_READ)
 		goto subreq_failed;
 
@@ -603,6 +630,7 @@ static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
  */
 int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 {
+	struct iov_iter io_iter;
 	unsigned int debug_index = 0;
 	int ret;
 
@@ -615,6 +643,8 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 		return -EIO;
 	}
 
+	rreq->io_iter = rreq->iter;
+
 	INIT_WORK(&rreq->work, netfs_rreq_work);
 
 	if (sync)
@@ -624,8 +654,9 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 	 * want and submit each one.
 	 */
 	atomic_set(&rreq->nr_outstanding, 1);
+	io_iter = rreq->io_iter;
 	do {
-		if (!netfs_rreq_submit_slice(rreq, &debug_index))
+		if (!netfs_rreq_submit_slice(rreq, &io_iter, &debug_index))
 			break;
 	} while (rreq->submitted < rreq->len);
 
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 33d4487a91e9..bd0437088f0e 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -152,6 +152,7 @@ struct netfs_cache_resources {
 struct netfs_io_subrequest {
 	struct netfs_io_request *rreq;		/* Supervising I/O request */
 	struct list_head	rreq_link;	/* Link in rreq->subrequests */
+	struct iov_iter		io_iter;	/* Iterator for this subrequest */
 	loff_t			start;		/* Where to start the I/O */
 	size_t			len;		/* Size of the I/O */
 	size_t			transferred;	/* Amount of data transferred */
@@ -188,6 +189,8 @@ struct netfs_io_request {
 	struct netfs_cache_resources cache_resources;
 	struct list_head	proc_link;	/* Link in netfs_iorequests */
 	struct list_head	subrequests;	/* Contributory I/O operations */
+	struct iov_iter		iter;		/* Unencrypted-side iterator */
+	struct iov_iter		io_iter;	/* I/O (Encrypted-side) iterator */
 	void			*netfs_priv;	/* Private data for the netfs */
 	unsigned int		debug_id;
 	unsigned int		rsize;		/* Maximum read size (0 for none) */

From patchwork Fri Oct 13 16:03:40 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421156
From: David Howells
To: Jeff Layton, Steve French
Subject: [RFC PATCH 11/53] netfs: Add support for DIO buffering
Date: Fri, 13 Oct 2023 17:03:40 +0100
Message-ID: <20231013160423.2218093-12-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Add a bvec array pointer
and an iterator to netfs_io_request for either holding a copy of a DIO
iterator or a list of all the bits of buffer pointed to by a DIO iterator.

There are two problems: Firstly, if an iovec-class iov_iter is passed to
->read_iter() or ->write_iter(), this cannot be passed directly to
kernel_sendmsg() or kernel_recvmsg() as that may cause locking recursion if
a fault is generated, so we need to keep track of the pages involved
separately.

Secondly, if the I/O is asynchronous, we must copy the iov_iter describing
the buffer before returning to the caller as it may be immediately
deallocated.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/objects.c    | 10 ++++++++++
 include/linux/netfs.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 8e92b8401aaa..4396318081bf 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -78,6 +78,7 @@ static void netfs_free_request(struct work_struct *work)
 {
 	struct netfs_io_request *rreq =
 		container_of(work, struct netfs_io_request, work);
+	unsigned int i;
 
 	trace_netfs_rreq(rreq, netfs_rreq_trace_free);
 	netfs_proc_del_rreq(rreq);
@@ -86,6 +87,15 @@ static void netfs_free_request(struct work_struct *work)
 		rreq->netfs_ops->free_request(rreq);
 	if (rreq->cache_resources.ops)
 		rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
+	if (rreq->direct_bv) {
+		for (i = 0; i < rreq->direct_bv_count; i++) {
+			if (rreq->direct_bv[i].bv_page) {
+				if (rreq->direct_bv_unpin)
+					unpin_user_page(rreq->direct_bv[i].bv_page);
+			}
+		}
+		kvfree(rreq->direct_bv);
+	}
 	kfree_rcu(rreq, rcu);
 	netfs_stat_d(&netfs_n_rh_rreq);
 }

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index bd0437088f0e..66479a61ad00 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -191,7 +191,9 @@ struct netfs_io_request {
 	struct list_head subrequests;		/* Contributory I/O operations */
 	struct iov_iter iter;			/* Unencrypted-side iterator */
 	struct iov_iter io_iter;		/* I/O (Encrypted-side) iterator */
+	struct bio_vec *direct_bv;		/* DIO buffer list (when handling iovec-iter) */
 	void *netfs_priv;			/* Private data for the netfs */
+	unsigned int direct_bv_count;		/* Number of elements in bv[] */
 	unsigned int debug_id;
 	unsigned int rsize;			/* Maximum read size (0 for none) */
 	atomic_t nr_outstanding;		/* Number of ops in progress */
@@ -200,6 +202,7 @@ struct netfs_io_request {
 	size_t len;				/* Length of the request */
 	short error;				/* 0 or error that occurred */
 	enum netfs_io_origin origin;		/* Origin of the request */
+	bool direct_bv_unpin;			/* T if direct_bv[] must be unpinned */
 	loff_t i_size;				/* Size of the file */
 	loff_t start;				/* Start position */
 	pgoff_t no_unlock_folio;		/* Don't unlock this folio after read */

From patchwork Fri Oct 13 16:03:41 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421152
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 12/53] netfs: Provide tools to create a buffer in an xarray
Date: Fri, 13 Oct 2023 17:03:41 +0100
Message-ID: <20231013160423.2218093-13-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Provide tools to create a buffer in an xarray, with a function to add new
folios with a mark.  This will be used to create a bounce buffer, and can more
easily be used to create a list of folios whose span would require more than a
page's worth of bio_vec structs.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/internal.h   |  16 +++++
 fs/netfs/misc.c       | 140 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |   4 ++
 3 files changed, 160 insertions(+)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 1f067aa96c50..00e01278316f 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -52,6 +52,22 @@ static inline void netfs_proc_add_rreq(struct netfs_io_request *rreq) {}
 static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
 #endif
 
+/*
+ * misc.c
+ */
+int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index,
+			    struct folio *folio, bool put_mark,
+			    bool pagecache_mark, gfp_t gfp_mask);
+int netfs_add_folios_to_buffer(struct xarray *buffer,
+			       struct address_space *mapping,
+			       pgoff_t index, pgoff_t to, gfp_t gfp_mask);
+int netfs_set_up_buffer(struct xarray *buffer,
+			struct address_space *mapping,
+			struct readahead_control *ractl,
+			struct folio *keep,
+			pgoff_t have_index, unsigned int have_folios);
+void netfs_clear_buffer(struct xarray *buffer);
+
 /*
  * objects.c
  */
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index c3baf2b247d9..c70f856f3129 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,146 @@
 #include
 #include "internal.h"
 
+/*
+ * Attach a folio to the buffer and maybe set marks on it to say that we need
+ * to put the folio later and twiddle the pagecache flags.
+ */
+int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index,
+			    struct folio *folio, bool put_mark,
+			    bool pagecache_mark, gfp_t gfp_mask)
+{
+	XA_STATE_ORDER(xas, xa, index, folio_order(folio));
+
+retry:
+	xas_lock(&xas);
+	for (;;) {
+		xas_store(&xas, folio);
+		if (!xas_error(&xas))
+			break;
+		xas_unlock(&xas);
+		if (!xas_nomem(&xas, gfp_mask))
+			return xas_error(&xas);
+		goto retry;
+	}
+
+	if (put_mark)
+		xas_set_mark(&xas, NETFS_BUF_PUT_MARK);
+	if (pagecache_mark)
+		xas_set_mark(&xas, NETFS_BUF_PAGECACHE_MARK);
+	xas_unlock(&xas);
+	return xas_error(&xas);
+}
+
+/*
+ * Create the specified range of folios in the buffer attached to the read
+ * request.  The folios are marked with NETFS_BUF_PUT_MARK so that we know
+ * that these need freeing later.
+ */
+int netfs_add_folios_to_buffer(struct xarray *buffer,
+			       struct address_space *mapping,
+			       pgoff_t index, pgoff_t to, gfp_t gfp_mask)
+{
+	struct folio *folio;
+	int ret;
+
+	if (to + 1 == index) /* Page range is inclusive */
+		return 0;
+
+	do {
+		/* TODO: Figure out what order folio can be allocated here */
+		folio = filemap_alloc_folio(readahead_gfp_mask(mapping), 0);
+		if (!folio)
+			return -ENOMEM;
+		folio->index = index;
+		ret = netfs_xa_store_and_mark(buffer, index, folio,
+					      true, false, gfp_mask);
+		if (ret < 0) {
+			folio_put(folio);
+			return ret;
+		}
+
+		index += folio_nr_pages(folio);
+	} while (index <= to && index != 0);
+
+	return 0;
+}
+
+/*
+ * Set up a buffer into which data will be read or decrypted/decompressed.
+ * The folios to be read into are attached to this buffer and the gaps filled
+ * in to form a continuous region.
+ */
+int netfs_set_up_buffer(struct xarray *buffer,
+			struct address_space *mapping,
+			struct readahead_control *ractl,
+			struct folio *keep,
+			pgoff_t have_index, unsigned int have_folios)
+{
+	struct folio *folio;
+	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	unsigned int want_folios = have_folios;
+	pgoff_t want_index = have_index;
+	int ret;
+
+	ret = netfs_add_folios_to_buffer(buffer, mapping, want_index,
+					 have_index - 1, gfp_mask);
+	if (ret < 0)
+		return ret;
+	have_folios += have_index - want_index;
+
+	ret = netfs_add_folios_to_buffer(buffer, mapping,
+					 have_index + have_folios,
+					 want_index + want_folios - 1,
+					 gfp_mask);
+	if (ret < 0)
+		return ret;
+
+	/* Transfer the folios proposed by the VM into the buffer and take refs
+	 * on them.  The locks will be dropped in netfs_rreq_unlock().
+	 */
+	if (ractl) {
+		while ((folio = readahead_folio(ractl))) {
+			folio_get(folio);
+			if (folio == keep)
+				folio_get(folio);
+			ret = netfs_xa_store_and_mark(buffer, folio->index, folio,
+						      true, true, gfp_mask);
+			if (ret < 0) {
+				if (folio != keep)
+					folio_unlock(folio);
+				folio_put(folio);
+				return ret;
+			}
+		}
+	} else {
+		folio_get(keep);
+		ret = netfs_xa_store_and_mark(buffer, keep->index, keep,
+					      true, true, gfp_mask);
+		if (ret < 0) {
+			folio_put(keep);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+/*
+ * Clear an xarray buffer, putting a ref on the folios that have
+ * NETFS_BUF_PUT_MARK set.
+ */
+void netfs_clear_buffer(struct xarray *buffer)
+{
+	struct folio *folio;
+	XA_STATE(xas, buffer, 0);
+
+	rcu_read_lock();
+	xas_for_each_marked(&xas, folio, ULONG_MAX, NETFS_BUF_PUT_MARK) {
+		folio_put(folio);
+	}
+	rcu_read_unlock();
+	xa_destroy(buffer);
+}
+
 /**
  * netfs_invalidate_folio - Invalidate or partially invalidate a folio
  * @folio: Folio proposed for release
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 66479a61ad00..e8d702ac6968 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -109,6 +109,10 @@ static inline int wait_on_page_fscache_killable(struct page *page)
 	return folio_wait_private_2_killable(page_folio(page));
 }
 
+/* Marks used on xarray-based buffers */
+#define NETFS_BUF_PUT_MARK	XA_MARK_0	/* - Page needs putting */
+#define NETFS_BUF_PAGECACHE_MARK XA_MARK_1	/* - Page needs wb/dirty flag wrangling */
+
 enum netfs_io_source {
 	NETFS_FILL_WITH_ZEROES,
 	NETFS_DOWNLOAD_FROM_SERVER,

From patchwork Fri Oct 13 16:03:42 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421155
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 13/53] netfs: Add bounce buffering support
Date: Fri, 13 Oct 2023 17:03:42 +0100
Message-ID: <20231013160423.2218093-14-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Add a second xarray struct to netfs_io_request for the purpose of holding a
bounce buffer for when we have to deal with encrypted/compressed data, or
when we have to up/download data in blocks larger than we were asked for.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/io.c         | 6 +++++-
 fs/netfs/objects.c    | 3 +++
 include/linux/netfs.h | 2 ++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index e9d408e211b8..d8e9cd6ce338 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -643,7 +643,11 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 		return -EIO;
 	}
 
-	rreq->io_iter = rreq->iter;
+	if (test_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags))
+		iov_iter_xarray(&rreq->io_iter, ITER_DEST, &rreq->bounce,
+				rreq->start, rreq->len);
+	else
+		rreq->io_iter = rreq->iter;
 
 	INIT_WORK(&rreq->work, netfs_rreq_work);

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 4396318081bf..0782a284dda8 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -35,6 +35,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	rreq->inode	= inode;
 	rreq->i_size	= i_size_read(inode);
 	rreq->debug_id	= atomic_inc_return(&debug_ids);
+	xa_init(&rreq->bounce);
 	INIT_LIST_HEAD(&rreq->subrequests);
 	refcount_set(&rreq->ref, 1);
 	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
@@ -43,6 +44,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	if (rreq->netfs_ops->init_request) {
 		ret = rreq->netfs_ops->init_request(rreq, file);
 		if (ret < 0) {
+			xa_destroy(&rreq->bounce);
 			kfree(rreq);
 			return ERR_PTR(ret);
 		}
@@ -96,6 +98,7 @@ static void netfs_free_request(struct work_struct *work)
 		}
 		kvfree(rreq->direct_bv);
 	}
+	netfs_clear_buffer(&rreq->bounce);
 	kfree_rcu(rreq, rcu);
 	netfs_stat_d(&netfs_n_rh_rreq);
 }

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index e8d702ac6968..a7220e906287 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -196,6 +196,7 @@ struct netfs_io_request {
 	struct iov_iter iter;			/* Unencrypted-side iterator */
 	struct iov_iter io_iter;		/* I/O (Encrypted-side) iterator */
 	struct bio_vec *direct_bv;		/* DIO buffer list (when handling iovec-iter) */
+	struct xarray bounce;			/* Bounce buffer (eg. for crypto/compression) */
 	void *netfs_priv;			/* Private data for the netfs */
 	unsigned int direct_bv_count;		/* Number of elements in bv[] */
 	unsigned int debug_id;
@@ -220,6 +221,7 @@ struct netfs_io_request {
 #define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
 #define NETFS_RREQ_NONBLOCK		6	/* Don't block if possible (O_NONBLOCK) */
 #define NETFS_RREQ_BLOCKED		7	/* We blocked */
+#define NETFS_RREQ_USE_BOUNCE_BUFFER	8	/* Use bounce buffer */
 	const struct netfs_request_ops *netfs_ops;
 };

From patchwork Fri Oct 13 16:03:43 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421154
[170.10.129.124]) by imf08.hostedemail.com (Postfix) with ESMTP id 66202160039 for ; Fri, 13 Oct 2023 16:05:17 +0000 (UTC) Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=RjLD6FcZ; spf=pass (imf08.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1697213117; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=ontDGl26Z2fbsTnGmpaZePgZN/TS57PItn3SJDCoVuk=; b=Q0aAFQjzk9jv/IsReR8Y2jk6STzA9QfcZAvc/preUAFO3/wO5ht0ALQwsXpidPjwf6hy8a 56H6JqGuYoi/N/Yprp+Y95TlWIBhuLEUcEQE4b6MGs0pkiarDeSHINADIfRkUg7WPc+iHT QGbnf0YlC9CZ+vBJ4oRPNpxWmLvNSaQ= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1697213117; a=rsa-sha256; cv=none; b=uATjWCLTwPK8Q5P/hKGSQx/LihJsMqOlX/XnudQrIh/pi0Iwhbf6JKKx0akK1VowXhbmue +887SskOqJUl7OVhKFZ5N5IJIaoc5V7SVOWj6ykTgcQJy2CYIddFtQKiw3/9Y8W4of9JDF tb8DuSZ5NO2HnkmyPzAiqrPJu3CIkDk= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=RjLD6FcZ; spf=pass (imf08.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1697213116; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ontDGl26Z2fbsTnGmpaZePgZN/TS57PItn3SJDCoVuk=; 
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Subject: [RFC PATCH 14/53] netfs: Add func to calculate pagecount/size-limited span of an iterator
Date: Fri, 13 Oct 2023 17:03:43 +0100
Message-ID: <20231013160423.2218093-15-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
Add a function to work out how much of an ITER_BVEC or ITER_XARRAY iterator
we can use in a pagecount-limited and size-limited span.  This will be used,
for example, to limit the number of segments in a subrequest to the maximum
number of elements that an RDMA transfer can handle.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/iterator.c   | 97 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |  2 +
 2 files changed, 99 insertions(+)

diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 2ff07ba655a0..b781bbbf1d8d 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -101,3 +101,100 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
 	return npages;
 }
 EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
+
+/*
+ * Select the span of a bvec iterator we're going to use.  Limit it by both
+ * maximum size and maximum number of segments.  Returns the size of the span
+ * in bytes.
+ */
+static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
+			       size_t max_size, size_t max_segs)
+{
+	const struct bio_vec *bvecs = iter->bvec;
+	unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
+	size_t len, span = 0, n = iter->count;
+	size_t skip = iter->iov_offset + start_offset;
+
+	if (WARN_ON(!iov_iter_is_bvec(iter)) ||
+	    WARN_ON(start_offset > n) ||
+	    n == 0)
+		return 0;
+
+	while (n && ix < nbv && skip) {
+		len = bvecs[ix].bv_len;
+		if (skip < len)
+			break;
+		skip -= len;
+		n -= len;
+		ix++;
+	}
+
+	while (n && ix < nbv) {
+		len = min3(n, bvecs[ix].bv_len - skip, max_size);
+		span += len;
+		nsegs++;
+		ix++;
+		if (span >= max_size || nsegs >= max_segs)
+			break;
+		skip = 0;
+		n -= len;
+	}
+
+	return min(span, max_size);
+}
+
+/*
+ * Select the span of an xarray iterator we're going to use.  Limit it by both
+ * maximum size and maximum number of segments.  It is assumed that segments
+ * can be larger than a page in size, provided they're physically contiguous.
+ * Returns the size of the span in bytes.
+ */
+static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
+				 size_t max_size, size_t max_segs)
+{
+	struct folio *folio;
+	unsigned int nsegs = 0;
+	loff_t pos = iter->xarray_start + iter->iov_offset;
+	pgoff_t index = pos / PAGE_SIZE;
+	size_t span = 0, n = iter->count;
+
+	XA_STATE(xas, iter->xarray, index);
+
+	if (WARN_ON(!iov_iter_is_xarray(iter)) ||
+	    WARN_ON(start_offset > n) ||
+	    n == 0)
+		return 0;
+	max_size = min(max_size, n - start_offset);
+
+	rcu_read_lock();
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		size_t offset, flen, len;
+		if (xas_retry(&xas, folio))
+			continue;
+		if (WARN_ON(xa_is_value(folio)))
+			break;
+		if (WARN_ON(folio_test_hugetlb(folio)))
+			break;
+
+		flen = folio_size(folio);
+		offset = offset_in_folio(folio, pos);
+		len = min(max_size, flen - offset);
+		span += len;
+		nsegs++;
+		if (span >= max_size || nsegs >= max_segs)
+			break;
+	}
+
+	rcu_read_unlock();
+	return min(span, max_size);
+}
+
+size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
+			size_t max_size, size_t max_segs)
+{
+	if (iov_iter_is_bvec(iter))
+		return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
+	if (iov_iter_is_xarray(iter))
+		return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
+	BUG();
+}
+EXPORT_SYMBOL(netfs_limit_iter);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a7220e906287..2b5e04ea4db2 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -328,6 +328,8 @@ void netfs_stats_show(struct seq_file *);
 ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
 				struct iov_iter *new,
 				iov_iter_extraction_t extraction_flags);
+size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
+			size_t max_size, size_t max_segs);
 int netfs_start_io_read(struct inode *inode);
 void netfs_end_io_read(struct inode *inode);

From patchwork Fri Oct 13 16:03:44 2023
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Subject: [RFC PATCH 15/53] netfs: Limit subrequest by size or number of segments
Date: Fri, 13 Oct 2023 17:03:44 +0100
Message-ID: <20231013160423.2218093-16-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
Limit a subrequest to a maximum size and/or a maximum number of contiguous
physical regions.  This permits, for instance, a subreq's iterator to be
limited to the number of DMA'able segments that a large RDMA request can
handle.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/io.c                | 18 ++++++++++++++++++
 include/linux/netfs.h        |  1 +
 include/trace/events/netfs.h |  1 +
 3 files changed, 20 insertions(+)

diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index d8e9cd6ce338..c80b8eed1209 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -525,6 +525,7 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
 		       struct iov_iter *io_iter)
 {
 	enum netfs_io_source source;
+	size_t lsize;
 
 	_enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size);
 
@@ -547,13 +548,30 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
 			source = NETFS_INVALID_READ;
 			goto out;
 		}
+
+		if (subreq->max_nr_segs) {
+			lsize = netfs_limit_iter(io_iter, 0, subreq->len,
+						 subreq->max_nr_segs);
+			if (subreq->len > lsize) {
+				subreq->len = lsize;
+				trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+			}
+		}
 	}
 
+	if (subreq->len > rreq->len)
+		pr_warn("R=%08x[%u] SREQ>RREQ %zx > %zx\n",
+			rreq->debug_id, subreq->debug_index,
+			subreq->len, rreq->len);
+
 	if (WARN_ON(subreq->len == 0)) {
 		source = NETFS_INVALID_READ;
 		goto out;
 	}
 
+	subreq->source = source;
+	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
+
 	subreq->io_iter = *io_iter;
 	iov_iter_truncate(&subreq->io_iter, subreq->len);
 	iov_iter_advance(io_iter, subreq->len);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 2b5e04ea4db2..aaf1c1d4de51 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -163,6 +163,7 @@ struct netfs_io_subrequest {
 	refcount_t		ref;
 	short			error;		/* 0 or error that occurred */
 	unsigned short		debug_index;	/* Index in list (for debugging output) */
+	unsigned int		max_nr_segs;	/* 0 or max number of segments in an iterator */
 	enum netfs_io_source	source;		/* Where to read from/write to */
 	unsigned long		flags;
 #define NETFS_SREQ_COPY_TO_CACHE	0	/* Set if should copy the data to the cache */
diff --git
a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index beec534cbaab..fce6d0bc78e5 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -44,6 +44,7 @@
 #define netfs_sreq_traces					\
 	EM(netfs_sreq_trace_download_instead,	"RDOWN")	\
 	EM(netfs_sreq_trace_free,		"FREE ")	\
+	EM(netfs_sreq_trace_limited,		"LIMIT")	\
 	EM(netfs_sreq_trace_prepare,		"PREP ")	\
 	EM(netfs_sreq_trace_resubmit_short,	"SHORT")	\
 	EM(netfs_sreq_trace_submit,		"SUBMT")	\

From patchwork Fri Oct 13 16:03:45 2023
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Subject: [RFC PATCH 16/53] netfs: Export netfs_put_subrequest() and some tracepoints
Date: Fri, 13 Oct 2023 17:03:45 +0100
Message-ID: <20231013160423.2218093-17-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
Export netfs_put_subrequest() and the netfs_rreq and netfs_sreq tracepoints.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/main.c    | 3 +++
 fs/netfs/objects.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 21f814eee6af..0f0c6e70aa44 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -17,6 +17,9 @@ MODULE_DESCRIPTION("Network fs support");
 MODULE_AUTHOR("Red Hat, Inc.");
 MODULE_LICENSE("GPL");
 
+EXPORT_TRACEPOINT_SYMBOL(netfs_rreq);
+EXPORT_TRACEPOINT_SYMBOL(netfs_sreq);
+
 unsigned netfs_debug;
 module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
 MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 0782a284dda8..9b965a509e5a 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -180,3 +180,4 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async,
 	if (dead)
 		netfs_free_subrequest(subreq, was_async);
 }
+EXPORT_SYMBOL(netfs_put_subrequest);

From patchwork Fri Oct 13 16:03:46 2023
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Subject: [RFC PATCH 17/53] netfs: Extend the netfs_io_*request structs to handle writes
Date: Fri, 13 Oct 2023 17:03:46 +0100
Message-ID: <20231013160423.2218093-18-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>

Modify the netfs_io_request struct to act as a point around which writes can
be coordinated.
It represents and pins a range of pages that need writing and a list of
regions of dirty data in that range of pages.

If RMW is required, the original data can be downloaded into the bounce
buffer, decrypted if necessary, the modifications made, then the modified
data can be reencrypted/recompressed and sent back to the server.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/internal.h          |  6 ++++++
 fs/netfs/main.c              |  3 ++-
 fs/netfs/objects.c           |  6 ++++++
 fs/netfs/stats.c             | 18 ++++++++++++++----
 include/linux/netfs.h        | 15 ++++++++++++++-
 include/trace/events/netfs.h |  8 ++++++--
 6 files changed, 48 insertions(+), 8 deletions(-)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 00e01278316f..46183dad4d50 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -109,6 +109,12 @@ extern atomic_t netfs_n_rh_write_begin;
 extern atomic_t netfs_n_rh_write_done;
 extern atomic_t netfs_n_rh_write_failed;
 extern atomic_t netfs_n_rh_write_zskip;
+extern atomic_t netfs_n_wh_upload;
+extern atomic_t netfs_n_wh_upload_done;
+extern atomic_t netfs_n_wh_upload_failed;
+extern atomic_t netfs_n_wh_write;
+extern atomic_t netfs_n_wh_write_done;
+extern atomic_t netfs_n_wh_write_failed;
 
 static inline void netfs_stat(atomic_t *stat)
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 0f0c6e70aa44..e990738c2213 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -28,10 +28,11 @@ MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");
 LIST_HEAD(netfs_io_requests);
 DEFINE_SPINLOCK(netfs_proc_lock);
 
-static const char *netfs_origins[] = {
+static const char *netfs_origins[nr__netfs_io_origin] = {
 	[NETFS_READAHEAD]	= "RA",
 	[NETFS_READPAGE]	= "RP",
 	[NETFS_READ_FOR_WRITE]	= "RW",
+	[NETFS_WRITEBACK]	= "WB",
 };
 
 /*
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 9b965a509e5a..30ec42566966 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -20,6 +20,7 @@ struct
netfs_io_request *netfs_alloc_request(struct address_space *mapping, struct inode *inode = file ? file_inode(file) : mapping->host; struct netfs_inode *ctx = netfs_inode(inode); struct netfs_io_request *rreq; + bool cached = netfs_is_cache_enabled(ctx); int ret; rreq = kzalloc(ctx->ops->io_request_size ?: sizeof(struct netfs_io_request), @@ -38,7 +39,10 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, xa_init(&rreq->bounce); INIT_LIST_HEAD(&rreq->subrequests); refcount_set(&rreq->ref, 1); + __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); + if (cached) + __set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags); if (file && file->f_flags & O_NONBLOCK) __set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags); if (rreq->netfs_ops->init_request) { @@ -50,6 +54,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, } } + trace_netfs_rreq_ref(rreq->debug_id, 1, netfs_rreq_trace_new); netfs_proc_add_rreq(rreq); netfs_stat(&netfs_n_rh_rreq); return rreq; @@ -134,6 +139,7 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq sizeof(struct netfs_io_subrequest), GFP_KERNEL); if (subreq) { + INIT_WORK(&subreq->work, NULL); INIT_LIST_HEAD(&subreq->rreq_link); refcount_set(&subreq->ref, 2); subreq->rreq = rreq; diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c index 5510a7a14a40..ce2a1a983280 100644 --- a/fs/netfs/stats.c +++ b/fs/netfs/stats.c @@ -27,6 +27,12 @@ atomic_t netfs_n_rh_write_begin; atomic_t netfs_n_rh_write_done; atomic_t netfs_n_rh_write_failed; atomic_t netfs_n_rh_write_zskip; +atomic_t netfs_n_wh_upload; +atomic_t netfs_n_wh_upload_done; +atomic_t netfs_n_wh_upload_failed; +atomic_t netfs_n_wh_write; +atomic_t netfs_n_wh_write_done; +atomic_t netfs_n_wh_write_failed; void netfs_stats_show(struct seq_file *m) { @@ -50,9 +56,13 @@ void netfs_stats_show(struct seq_file *m) atomic_read(&netfs_n_rh_read), atomic_read(&netfs_n_rh_read_done), atomic_read(&netfs_n_rh_read_failed)); - seq_printf(m, 
"RdHelp : WR=%u ws=%u wf=%u\n", - atomic_read(&netfs_n_rh_write), - atomic_read(&netfs_n_rh_write_done), - atomic_read(&netfs_n_rh_write_failed)); + seq_printf(m, "WrHelp : UL=%u us=%u uf=%u\n", + atomic_read(&netfs_n_wh_upload), + atomic_read(&netfs_n_wh_upload_done), + atomic_read(&netfs_n_wh_upload_failed)); + seq_printf(m, "WrHelp : WR=%u ws=%u wf=%u\n", + atomic_read(&netfs_n_wh_write), + atomic_read(&netfs_n_wh_write_done), + atomic_read(&netfs_n_wh_write_failed)); } EXPORT_SYMBOL(netfs_stats_show); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index aaf1c1d4de51..4115274e3129 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -118,6 +118,9 @@ enum netfs_io_source { NETFS_DOWNLOAD_FROM_SERVER, NETFS_READ_FROM_CACHE, NETFS_INVALID_READ, + NETFS_UPLOAD_TO_SERVER, + NETFS_WRITE_TO_CACHE, + NETFS_INVALID_WRITE, } __mode(byte); typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error, @@ -151,9 +154,14 @@ struct netfs_cache_resources { }; /* - * Descriptor for a single component subrequest. + * Descriptor for a single component subrequest. Each operation represents an + * individual read/write from/to a server, a cache, a journal, etc.. + * + * The buffer iterator is persistent for the life of the subrequest struct and + * the pages it points to can be relied on to exist for the duration. 
*/ struct netfs_io_subrequest { + struct work_struct work; struct netfs_io_request *rreq; /* Supervising I/O request */ struct list_head rreq_link; /* Link in rreq->subrequests */ struct iov_iter io_iter; /* Iterator for this subrequest */ @@ -178,6 +186,8 @@ enum netfs_io_origin { NETFS_READAHEAD, /* This read was triggered by readahead */ NETFS_READPAGE, /* This read is a synchronous read */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ + NETFS_WRITEBACK, /* This write was triggered by writepages */ + nr__netfs_io_origin } __mode(byte); /* @@ -202,6 +212,7 @@ struct netfs_io_request { unsigned int direct_bv_count; /* Number of elements in bv[] */ unsigned int debug_id; unsigned int rsize; /* Maximum read size (0 for none) */ + unsigned int subreq_counter; /* Next subreq->debug_index */ atomic_t nr_outstanding; /* Number of ops in progress */ atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */ size_t submitted; /* Amount submitted for I/O so far */ @@ -223,6 +234,8 @@ struct netfs_io_request { #define NETFS_RREQ_NONBLOCK 6 /* Don't block if possible (O_NONBLOCK) */ #define NETFS_RREQ_BLOCKED 7 /* We blocked */ #define NETFS_RREQ_USE_BOUNCE_BUFFER 8 /* Use bounce buffer */ +#define NETFS_RREQ_WRITE_TO_CACHE 9 /* Need to write to the cache */ +#define NETFS_RREQ_UPLOAD_TO_SERVER 10 /* Need to write to the server */ const struct netfs_request_ops *netfs_ops; }; diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index fce6d0bc78e5..4ea4e34d279f 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -24,7 +24,8 @@ #define netfs_rreq_origins \ EM(NETFS_READAHEAD, "RA") \ EM(NETFS_READPAGE, "RP") \ - E_(NETFS_READ_FOR_WRITE, "RW") + EM(NETFS_READ_FOR_WRITE, "RW") \ + E_(NETFS_WRITEBACK, "WB") #define netfs_rreq_traces \ EM(netfs_rreq_trace_assess, "ASSESS ") \ @@ -39,7 +40,10 @@ EM(NETFS_FILL_WITH_ZEROES, "ZERO") \ EM(NETFS_DOWNLOAD_FROM_SERVER, "DOWN") \ EM(NETFS_READ_FROM_CACHE, "READ") \ - 
E_(NETFS_INVALID_READ, "INVL") \ + EM(NETFS_INVALID_READ, "INVL") \ + EM(NETFS_UPLOAD_TO_SERVER, "UPLD") \ + EM(NETFS_WRITE_TO_CACHE, "WRIT") \ + E_(NETFS_INVALID_WRITE, "INVL") #define netfs_sreq_traces \ EM(netfs_sreq_trace_download_instead, "RDOWN") \ From patchwork Fri Oct 13 16:03:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13421159 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FF6ACDB47E for ; Fri, 13 Oct 2023 16:05:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9AA686B019B; Fri, 13 Oct 2023 12:05:34 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 959FC6B019C; Fri, 13 Oct 2023 12:05:34 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 822126B019D; Fri, 13 Oct 2023 12:05:34 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 7042E6B019B for ; Fri, 13 Oct 2023 12:05:34 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 4F9FC4031B for ; Fri, 13 Oct 2023 16:05:34 +0000 (UTC) X-FDA: 81340913388.02.498B4D7 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf09.hostedemail.com (Postfix) with ESMTP id 52A2B140032 for ; Fri, 13 Oct 2023 16:05:32 +0000 (UTC) Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=WbE6O7f1; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf09.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as 
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 18/53] netfs: Add a hook to allow tell the netfs to update its i_size
Date: Fri, 13 Oct 2023 17:03:47 +0100
Message-ID: <20231013160423.2218093-19-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Add a hook for netfslib's write helpers to call to tell the network
filesystem that it should update its i_size.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/netfs.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 4115274e3129..39b3eeefa03c 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -250,6 +250,7 @@ struct netfs_request_ops {
 	void (*free_subrequest)(struct netfs_io_subrequest *rreq);
 	int (*begin_cache_operation)(struct netfs_io_request *rreq);
 
+	/* Read request handling */
 	void (*expand_readahead)(struct netfs_io_request *rreq);
 	bool (*clamp_length)(struct netfs_io_subrequest *subreq);
 	void (*issue_read)(struct netfs_io_subrequest *subreq);
@@ -257,6 +258,9 @@ struct netfs_request_ops {
 	int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
 				 struct folio **foliop, void **_fsdata);
 	void (*done)(struct netfs_io_request *rreq);
+
+	/* Modification handling */
+	void (*update_i_size)(struct inode *inode, loff_t i_size);
 };

From patchwork Fri Oct 13 16:03:48 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 19/53] netfs: Make netfs_put_request() handle a NULL pointer
Date: Fri, 13 Oct 2023 17:03:48 +0100
Message-ID: <20231013160423.2218093-20-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Make netfs_put_request() just return if given a NULL request pointer.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/objects.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 30ec42566966..7a78c1665bc9 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -111,19 +111,22 @@ static void netfs_free_request(struct work_struct *work)
 void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
 		       enum netfs_rreq_ref_trace what)
 {
-	unsigned int debug_id = rreq->debug_id;
+	unsigned int debug_id;
 	bool dead;
 	int r;
 
-	dead = __refcount_dec_and_test(&rreq->ref, &r);
-	trace_netfs_rreq_ref(debug_id, r - 1, what);
-	if (dead) {
-		if (was_async) {
-			rreq->work.func = netfs_free_request;
-			if (!queue_work(system_unbound_wq, &rreq->work))
-				BUG();
-		} else {
-			netfs_free_request(&rreq->work);
+	if (rreq) {
+		debug_id = rreq->debug_id;
+		dead = __refcount_dec_and_test(&rreq->ref, &r);
+		trace_netfs_rreq_ref(debug_id, r - 1, what);
+		if (dead) {
+			if (was_async) {
+				rreq->work.func = netfs_free_request;
+				if (!queue_work(system_unbound_wq, &rreq->work))
+					BUG();
+			} else {
+				netfs_free_request(&rreq->work);
+			}
 		}
 	}
 }

From patchwork Fri Oct 13 16:03:49 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 20/53] fscache: Add a function to begin an cache op from a netfslib request
Date: Fri, 13 Oct 2023 17:03:49 +0100
Message-ID: <20231013160423.2218093-21-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add a function to begin an cache read or write operation from a netfslib I/O request. This function can then be pointed to directly by the network filesystem's netfs_request_ops::begin_cache_operation op pointer. Ideally, netfslib would just call into fscache directly, but that would cause dependency cycles as fscache calls into netfslib directly. Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/9p/vfs_addr.c | 18 ++---------------- fs/afs/file.c | 14 +------------- fs/ceph/addr.c | 2 +- fs/ceph/cache.h | 12 ------------ fs/fscache/io.c | 42 +++++++++++++++++++++++++++++++++++++++++ include/linux/fscache.h | 6 ++++++ 6 files changed, 52 insertions(+), 42 deletions(-) diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index 18a666c43e4a..516572bad412 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -82,25 +83,10 @@ static void v9fs_free_request(struct netfs_io_request *rreq) p9_fid_put(fid); } -/** - * v9fs_begin_cache_operation - Begin a cache operation for a read - * @rreq: The read request - */ -static int v9fs_begin_cache_operation(struct netfs_io_request *rreq) -{ -#ifdef CONFIG_9P_FSCACHE - struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(rreq->inode)); - - return fscache_begin_read_operation(&rreq->cache_resources, cookie); -#else - return -ENOBUFS; -#endif -} - const struct netfs_request_ops v9fs_req_ops = { .init_request = v9fs_init_request, .free_request = v9fs_free_request, - .begin_cache_operation = v9fs_begin_cache_operation, + .begin_cache_operation = fscache_begin_cache_operation, .issue_read = v9fs_issue_read, }; diff --git a/fs/afs/file.c b/fs/afs/file.c index 3e39a2ebcad6..5bb78d874292 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -360,18 +360,6 @@ static int afs_init_request(struct 
netfs_io_request *rreq, struct file *file)
 	return 0;
 }
 
-static int afs_begin_cache_operation(struct netfs_io_request *rreq)
-{
-#ifdef CONFIG_AFS_FSCACHE
-	struct afs_vnode *vnode = AFS_FS_I(rreq->inode);
-
-	return fscache_begin_read_operation(&rreq->cache_resources,
-					    afs_vnode_cache(vnode));
-#else
-	return -ENOBUFS;
-#endif
-}
-
 static int afs_check_write_begin(struct file *file, loff_t pos, unsigned len,
 				 struct folio **foliop, void **_fsdata)
 {
@@ -388,7 +376,7 @@ static void afs_free_request(struct netfs_io_request *rreq)
 const struct netfs_request_ops afs_req_ops = {
 	.init_request		= afs_init_request,
 	.free_request		= afs_free_request,
-	.begin_cache_operation	= afs_begin_cache_operation,
+	.begin_cache_operation	= fscache_begin_cache_operation,
 	.check_write_begin	= afs_check_write_begin,
 	.issue_read		= afs_issue_read,
 };
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 92a5ddcd9a76..4841b06df78c 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -488,7 +488,7 @@ static void ceph_netfs_free_request(struct netfs_io_request *rreq)
 const struct netfs_request_ops ceph_netfs_ops = {
 	.init_request		= ceph_init_request,
 	.free_request		= ceph_netfs_free_request,
-	.begin_cache_operation	= ceph_begin_cache_operation,
+	.begin_cache_operation	= fscache_begin_cache_operation,
 	.issue_read		= ceph_netfs_issue_read,
 	.expand_readahead	= ceph_netfs_expand_readahead,
 	.clamp_length		= ceph_netfs_clamp_length,
diff --git a/fs/ceph/cache.h b/fs/ceph/cache.h
index dc502daac49a..b804f1094764 100644
--- a/fs/ceph/cache.h
+++ b/fs/ceph/cache.h
@@ -57,13 +57,6 @@ static inline int ceph_fscache_dirty_folio(struct address_space *mapping,
 	return fscache_dirty_folio(mapping, folio, ceph_fscache_cookie(ci));
 }
 
-static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
-{
-	struct fscache_cookie *cookie = ceph_fscache_cookie(ceph_inode(rreq->inode));
-
-	return fscache_begin_read_operation(&rreq->cache_resources, cookie);
-}
-
 static inline bool ceph_is_cache_enabled(struct inode *inode)
 {
 	return fscache_cookie_enabled(ceph_fscache_cookie(ceph_inode(inode)));
@@ -135,11 +128,6 @@ static inline bool ceph_is_cache_enabled(struct inode *inode)
 	return false;
 }
 
-static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
-{
-	return -ENOBUFS;
-}
-
 static inline void ceph_fscache_note_page_release(struct inode *inode)
 {
 }
diff --git a/fs/fscache/io.c b/fs/fscache/io.c
index 0d2b8dec8f82..cb602dd651e6 100644
--- a/fs/fscache/io.c
+++ b/fs/fscache/io.c
@@ -158,6 +158,48 @@ int __fscache_begin_write_operation(struct netfs_cache_resources *cres,
 }
 EXPORT_SYMBOL(__fscache_begin_write_operation);
 
+/**
+ * fscache_begin_cache_operation - Begin a cache op for netfslib
+ * @rreq: The netfs request that wants to access the cache.
+ *
+ * Begin an I/O operation on behalf of the netfs helper library, read or write.
+ * @rreq indicates the netfs operation that wishes to access the cache.
+ *
+ * This is intended to be pointed to directly by the ->begin_cache_operation()
+ * netfs lib operation for the network filesystem.
+ *
+ * @cres->inval_counter is set from @cookie->inval_counter for comparison at
+ * the end of the operation.  This allows invalidation during the operation to
+ * be detected by the caller.
+ *
+ * Returns:
+ * * 0		- Success
+ * * -ENOBUFS	- No caching available
+ * * Other error code from the cache, such as -ENOMEM.
+ */
+int fscache_begin_cache_operation(struct netfs_io_request *rreq)
+{
+	struct netfs_inode *ctx = netfs_inode(rreq->inode);
+
+	switch (rreq->origin) {
+	case NETFS_READAHEAD:
+	case NETFS_READPAGE:
+	case NETFS_READ_FOR_WRITE:
+		return fscache_begin_operation(&rreq->cache_resources,
+					       netfs_i_cookie(ctx),
+					       FSCACHE_WANT_PARAMS,
+					       fscache_access_io_read);
+	case NETFS_WRITEBACK:
+		return fscache_begin_operation(&rreq->cache_resources,
+					       netfs_i_cookie(ctx),
+					       FSCACHE_WANT_PARAMS,
+					       fscache_access_io_write);
+	default:
+		return -ENOBUFS;
+	}
+}
+EXPORT_SYMBOL(fscache_begin_cache_operation);
+
 /**
  * fscache_dirty_folio - Mark folio dirty and pin a cache object for writeback
  * @mapping: The mapping the folio belongs to.
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 8e312c8323a8..9c389adaf286 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -177,6 +177,12 @@ extern void __fscache_write_to_cache(struct fscache_cookie *, struct address_spa
 						 bool);
 extern void __fscache_clear_page_bits(struct address_space *, loff_t, size_t);
 
+#if __fscache_available
+extern int fscache_begin_cache_operation(struct netfs_io_request *rreq);
+#else
+#define fscache_begin_cache_operation NULL
+#endif
+
 /**
  * fscache_acquire_volume - Register a volume as desiring caching services
  * @volume_key: An identification string for the volume

From patchwork Fri Oct 13 16:03:50 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 21/53] netfs: Make the refcounting of netfs_begin_read() easier to use
Date: Fri, 13 Oct 2023 17:03:50 +0100
Message-ID: <20231013160423.2218093-22-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Make the refcounting of netfs_begin_read() easier to use by not eating the
caller's ref on the netfs_io_request it's given.  This makes it easier to use
when we need to look in the request struct afterwards.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_read.c     |  6 +++++-
 fs/netfs/io.c                | 28 +++++++++++++---------------
 include/trace/events/netfs.h |  9 +++++----
 3 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 3b7eb706f2fe..05824f73cfc7 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -217,6 +217,7 @@ void netfs_readahead(struct readahead_control *ractl)
 		;
 
 	netfs_begin_read(rreq, false);
+	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
 	return;
 
 cleanup_free:
@@ -267,7 +268,9 @@ int netfs_read_folio(struct file *file, struct folio *folio)
 	iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,
 			rreq->start, rreq->len);
 
-	return netfs_begin_read(rreq, true);
+	ret = netfs_begin_read(rreq, true);
+	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
+	return ret;
 
 discard:
 	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
@@ -436,6 +439,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	ret = netfs_begin_read(rreq, true);
 	if (ret < 0)
 		goto error;
+	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
 
 have_folio:
 	ret = folio_wait_fscache_killable(folio);
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index c80b8eed1209..1795f8679be9 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -362,6 +362,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
 	netfs_rreq_unlock_folios(rreq);
 
+	trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
 	clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
 	wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
 
@@ -657,7 +658,6 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 
 	if (rreq->len == 0) {
 		pr_err("Zero-sized read [R=%x]\n", rreq->debug_id);
-		netfs_put_request(rreq, false, netfs_rreq_trace_put_zero_len);
 		return -EIO;
 	}
 
@@ -669,12 +669,10 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 
 	INIT_WORK(&rreq->work, netfs_rreq_work);
 
-	if (sync)
-		netfs_get_request(rreq, netfs_rreq_trace_get_hold);
-
 	/* Chop the read into slices according to what the cache and the netfs
 	 * want and submit each one.
 	 */
+	netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding);
 	atomic_set(&rreq->nr_outstanding, 1);
 	io_iter = rreq->io_iter;
 	do {
@@ -684,25 +682,25 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 	} while (rreq->submitted < rreq->len);
 
 	if (sync) {
-		/* Keep nr_outstanding incremented so that the ref always belongs to
-		 * us, and the service code isn't punted off to a random thread pool to
-		 * process.
+		/* Keep nr_outstanding incremented so that the ref always
+		 * belongs to us, and the service code isn't punted off to a
+		 * random thread pool to process.  Note that this might start
+		 * further work, such as writing to the cache.
 		 */
-		for (;;) {
-			wait_var_event(&rreq->nr_outstanding,
-				       atomic_read(&rreq->nr_outstanding) == 1);
+		wait_var_event(&rreq->nr_outstanding,
+			       atomic_read(&rreq->nr_outstanding) == 1);
+		if (atomic_dec_and_test(&rreq->nr_outstanding))
 			netfs_rreq_assess(rreq, false);
-			if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
-				break;
-			cond_resched();
-		}
+
+		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip);
+		wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS,
+			    TASK_UNINTERRUPTIBLE);
 
 		ret = rreq->error;
 		if (ret == 0 && rreq->submitted < rreq->len) {
 			trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
 			ret = -EIO;
 		}
-		netfs_put_request(rreq, false, netfs_rreq_trace_put_hold);
 	} else {
 		/* If we decrement nr_outstanding to 0, the ref belongs to us. */
 		if (atomic_dec_and_test(&rreq->nr_outstanding))
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 4ea4e34d279f..6daadf2aac8a 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -34,7 +34,9 @@
 	EM(netfs_rreq_trace_free,		"FREE   ")	\
 	EM(netfs_rreq_trace_resubmit,		"RESUBMT")	\
 	EM(netfs_rreq_trace_unlock,		"UNLOCK ")	\
-	E_(netfs_rreq_trace_unmark,		"UNMARK ")
+	EM(netfs_rreq_trace_unmark,		"UNMARK ")	\
+	EM(netfs_rreq_trace_wait_ip,		"WAIT-IP")	\
+	E_(netfs_rreq_trace_wake_ip,		"WAKE-IP")
 
 #define netfs_sreq_sources					\
 	EM(NETFS_FILL_WITH_ZEROES,	"ZERO")		\
@@ -65,14 +67,13 @@
 	E_(netfs_fail_prepare_write,		"prep-write")
 
 #define netfs_rreq_ref_traces					\
-	EM(netfs_rreq_trace_get_hold,		"GET HOLD   ")	\
+	EM(netfs_rreq_trace_get_for_outstanding,"GET OUTSTND")	\
 	EM(netfs_rreq_trace_get_subreq,		"GET SUBREQ ")	\
 	EM(netfs_rreq_trace_put_complete,	"PUT COMPLT ")	\
 	EM(netfs_rreq_trace_put_discard,	"PUT DISCARD")	\
 	EM(netfs_rreq_trace_put_failed,		"PUT FAILED ")	\
-	EM(netfs_rreq_trace_put_hold,		"PUT HOLD   ")	\
+	EM(netfs_rreq_trace_put_return,		"PUT RETURN ")	\
 	EM(netfs_rreq_trace_put_subreq,		"PUT SUBREQ ")	\
-	EM(netfs_rreq_trace_put_zero_len,	"PUT ZEROLEN")	\
 	E_(netfs_rreq_trace_new,		"NEW        ")
 #define netfs_sreq_ref_traces					\

From patchwork Fri Oct 13 16:03:51 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 22/53] netfs: Prep to use folio->private for write grouping and streaming write
Date: Fri, 13 Oct 2023 17:03:51 +0100
Message-ID: <20231013160423.2218093-23-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Prepare to use folio->private to hold information about write grouping and
streaming write.  These are implemented in the same commit as they both make
use of folio->private and will both be checked at the same time in several
places.

"Write grouping" involves ordering the writeback of groups of writes, such as
is needed for ceph snaps.  A group is represented by a filesystem-supplied
object which must contain a netfs_group struct.  This contains just a
refcount and a pointer to a destructor.
"Streaming write" is the storage of data in folios that are marked dirty, but
not uptodate, to avoid unnecessary reads of data.  This is represented by a
netfs_folio struct.  This contains the offset and length of the modified
region plus the otherwise displaced write grouping pointer.

The way folio->private is multiplexed is:

 (1) If private is NULL then neither is in operation on a dirty folio.

 (2) If private is set, with bit 0 clear, then this points to a group.

 (3) If private is set, with bit 0 set, then this points to a netfs_folio
     struct (with bit 0 AND'ed out).

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/internal.h   | 28 ++++++++++++++++++++++++++
 fs/netfs/misc.c       | 46 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h | 41 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 115 insertions(+)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 46183dad4d50..83418a918ee1 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -147,6 +147,34 @@ static inline bool netfs_is_cache_enabled(struct netfs_inode *ctx)
 #endif
 }
 
+/*
+ * Get a ref on a netfs group attached to a dirty page (e.g. a ceph snap).
+ */
+static inline struct netfs_group *netfs_get_group(struct netfs_group *netfs_group)
+{
+	if (netfs_group)
+		refcount_inc(&netfs_group->ref);
+	return netfs_group;
+}
+
+/*
+ * Dispose of a netfs group attached to a dirty page (e.g. a ceph snap).
+ */
+static inline void netfs_put_group(struct netfs_group *netfs_group)
+{
+	if (netfs_group && refcount_dec_and_test(&netfs_group->ref))
+		netfs_group->free(netfs_group);
+}
+
+/*
+ * Dispose of a netfs group attached to a dirty page (e.g. a ceph snap).
+ */
+static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
+{
+	if (netfs_group && refcount_sub_and_test(nr, &netfs_group->ref))
+		netfs_group->free(netfs_group);
+}
+
 /*****************************************************************************/
 /*
  * debug tracing
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index c70f856f3129..8a2a56f1f623 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -159,9 +159,55 @@ void netfs_clear_buffer(struct xarray *buffer)
  */
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 {
+	struct netfs_folio *finfo = NULL;
+	size_t flen = folio_size(folio);
+
 	_enter("{%lx},%zx,%zx", folio_index(folio), offset, length);
 
 	folio_wait_fscache(folio);
+
+	if (!folio_test_private(folio))
+		return;
+
+	finfo = netfs_folio_info(folio);
+
+	if (offset == 0 && length >= flen)
+		goto erase_completely;
+
+	if (finfo) {
+		/* We have a partially uptodate page from a streaming write. */
+		unsigned int fstart = finfo->dirty_offset;
+		unsigned int fend = fstart + finfo->dirty_len;
+		unsigned int end = offset + length;
+
+		if (offset >= fend)
+			return;
+		if (end <= fstart)
+			return;
+		if (offset <= fstart && end >= fend)
+			goto erase_completely;
+		if (offset <= fstart && end > fstart)
+			goto reduce_len;
+		if (offset > fstart && end >= fend)
+			goto move_start;
+		/* A partial write was split.  The caller has already zeroed
+		 * it, so just absorb the hole.
+		 */
+	}
+	return;
+
+erase_completely:
+	netfs_put_group(netfs_folio_group(folio));
+	folio_detach_private(folio);
+	folio_clear_uptodate(folio);
+	kfree(finfo);
+	return;
+reduce_len:
+	finfo->dirty_len = offset + length - finfo->dirty_offset;
+	return;
+move_start:
+	finfo->dirty_len -= offset - finfo->dirty_offset;
+	finfo->dirty_offset = offset;
 }
 EXPORT_SYMBOL(netfs_invalidate_folio);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 39b3eeefa03c..11a073506f98 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -142,6 +142,47 @@ struct netfs_inode {
 #define NETFS_ICTX_ODIRECT	0		/* The file has DIO in progress */
 };
 
+/*
+ * A netfs group - for instance a ceph snap.  This is marked on dirty pages and
+ * pages marked with a group must be flushed before they can be written under
+ * the domain of another group.
+ */
+struct netfs_group {
+	refcount_t		ref;
+	void (*free)(struct netfs_group *netfs_group);
+};
+
+/*
+ * Information about a dirty page (attached only if necessary).
+ * folio->private
+ */
+struct netfs_folio {
+	struct netfs_group	*netfs_group;	/* Filesystem's grouping marker (or NULL). */
+	unsigned int		dirty_offset;	/* Write-streaming dirty data offset */
+	unsigned int		dirty_len;	/* Write-streaming dirty data length */
+};
+#define NETFS_FOLIO_INFO	0x1UL	/* OR'd with folio->private. */
+
+static inline struct netfs_folio *netfs_folio_info(struct folio *folio)
+{
+	void *priv = folio_get_private(folio);
+
+	if ((unsigned long)priv & NETFS_FOLIO_INFO)
+		return (struct netfs_folio *)((unsigned long)priv & ~NETFS_FOLIO_INFO);
+	return NULL;
+}
+
+static inline struct netfs_group *netfs_folio_group(struct folio *folio)
+{
+	struct netfs_folio *finfo;
+	void *priv = folio_get_private(folio);
+
+	finfo = netfs_folio_info(folio);
+	if (finfo)
+		return finfo->netfs_group;
+	return priv;
+}
+
 /*
  * Resources required to do operations on a cache.
 */

From patchwork Fri Oct 13 16:03:52 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 23/53] netfs: Dispatch write requests to process a writeback slice
Date: Fri, 13 Oct 2023 17:03:52 +0100
Message-ID: <20231013160423.2218093-24-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Dispatch one or more write requests to process a writeback slice, where a slice is tailored more to logical block divisions within the file (such as crypto blocks, an object layout or cache granules) than the protocol RPC maximum capacity. The dispatch doesn't happen until throttling allows, at which point the entire writeback slice is processed and queued.
A slice may be written to multiple destinations (one or more servers and the local cache) and the writes to each destination might be split up along different lines. The writeback slice holds the required folios pinned. An iov_iter is provided in netfs_write_request that describes the buffer to be used. This may be part of the pagecache, may have auxiliary padding pages attached or may be a bounce buffer resulting from crypto or compression. Consequently, the filesystem must not twiddle the folio markings directly.

The following API is available to the filesystem:

 (1) The ->create_write_requests() method is called to ask the filesystem to create the requests it needs. This is passed the writeback slice to be processed.

 (2) The filesystem should then call netfs_create_write_request() to create the requests it needs.

 (3) Once a request is initialised, netfs_queue_write_request() can be called to dispatch it asynchronously, if not completed immediately.

 (4) netfs_write_request_completed() should be called to note the completion of a request.

 (5) netfs_get_write_request() and netfs_put_write_request() are provided to refcount a request. These take constants from the netfs_wreq_trace enum for logging into ftrace.

 (6) The ->free_write_request() method is called to ask the filesystem to clean up a request.
Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/Makefile | 3 +- fs/netfs/internal.h | 6 + fs/netfs/output.c | 366 +++++++++++++++++++++++++++++++++++ include/linux/netfs.h | 13 ++ include/trace/events/netfs.h | 50 ++++- 5 files changed, 435 insertions(+), 3 deletions(-) create mode 100644 fs/netfs/output.c diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index 647ce1935674..ce1197713276 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -7,7 +7,8 @@ netfs-y := \ locking.o \ main.o \ misc.o \ - objects.o + objects.o \ + output.o netfs-$(CONFIG_NETFS_STATS) += stats.o diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 83418a918ee1..30ec8949ebcd 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -87,6 +87,12 @@ static inline void netfs_see_request(struct netfs_io_request *rreq, trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what); } +/* + * output.c + */ +int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait, + enum netfs_write_trace what); + /* * stats.c */ diff --git a/fs/netfs/output.c b/fs/netfs/output.c new file mode 100644 index 000000000000..e93453f4372d --- /dev/null +++ b/fs/netfs/output.c @@ -0,0 +1,366 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Network filesystem high-level write support. + * + * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include +#include +#include "internal.h" + +/** + * netfs_create_write_request - Create a write operation. + * @wreq: The write request this is storing from. + * @dest: The destination type + * @start: Start of the region this write will modify + * @len: Length of the modification + * @worker: The worker function to handle the write(s) + * + * Allocate a write operation, set it up and add it to the list on a write + * request. 
+ */ +struct netfs_io_subrequest *netfs_create_write_request(struct netfs_io_request *wreq, + enum netfs_io_source dest, + loff_t start, size_t len, + work_func_t worker) +{ + struct netfs_io_subrequest *subreq; + + subreq = netfs_alloc_subrequest(wreq); + if (subreq) { + INIT_WORK(&subreq->work, worker); + subreq->source = dest; + subreq->start = start; + subreq->len = len; + subreq->debug_index = wreq->subreq_counter++; + + switch (subreq->source) { + case NETFS_UPLOAD_TO_SERVER: + netfs_stat(&netfs_n_wh_upload); + break; + case NETFS_WRITE_TO_CACHE: + netfs_stat(&netfs_n_wh_write); + break; + default: + BUG(); + } + + subreq->io_iter = wreq->io_iter; + iov_iter_advance(&subreq->io_iter, subreq->start - wreq->start); + iov_iter_truncate(&subreq->io_iter, subreq->len); + + trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, + refcount_read(&subreq->ref), + netfs_sreq_trace_new); + atomic_inc(&wreq->nr_outstanding); + list_add_tail(&subreq->rreq_link, &wreq->subrequests); + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); + } + + return subreq; +} +EXPORT_SYMBOL(netfs_create_write_request); + +/* + * Process a completed write request once all the component operations have + * been completed. + */ +static void netfs_write_terminated(struct netfs_io_request *wreq, bool was_async) +{ + struct netfs_io_subrequest *subreq; + struct netfs_inode *ctx = netfs_inode(wreq->inode); + + _enter("R=%x[]", wreq->debug_id); + + trace_netfs_rreq(wreq, netfs_rreq_trace_write_done); + + list_for_each_entry(subreq, &wreq->subrequests, rreq_link) { + if (!subreq->error) + continue; + switch (subreq->source) { + case NETFS_UPLOAD_TO_SERVER: + /* Depending on the type of failure, this may prevent + * writeback completion unless we're in disconnected + * mode. + */ + if (!wreq->error) + wreq->error = subreq->error; + break; + + case NETFS_WRITE_TO_CACHE: + /* Failure doesn't prevent writeback completion unless + * we're in disconnected mode. 
+ */ + if (subreq->error != -ENOBUFS) + ctx->ops->invalidate_cache(wreq); + break; + + default: + WARN_ON_ONCE(1); + if (!wreq->error) + wreq->error = -EIO; + return; + } + } + + wreq->cleanup(wreq); + + _debug("finished"); + trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip); + clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags); + wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS); + + netfs_clear_subrequests(wreq, was_async); + netfs_put_request(wreq, was_async, netfs_rreq_trace_put_complete); +} + +/* + * Deal with the completion of writing the data to the cache. + */ +void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error, + bool was_async) +{ + struct netfs_io_subrequest *subreq = _op; + struct netfs_io_request *wreq = subreq->rreq; + unsigned int u; + + _enter("%x[%x] %zd", wreq->debug_id, subreq->debug_index, transferred_or_error); + + switch (subreq->source) { + case NETFS_UPLOAD_TO_SERVER: + netfs_stat(&netfs_n_wh_upload_done); + break; + case NETFS_WRITE_TO_CACHE: + netfs_stat(&netfs_n_wh_write_done); + break; + case NETFS_INVALID_WRITE: + break; + default: + BUG(); + } + + if (IS_ERR_VALUE(transferred_or_error)) { + subreq->error = transferred_or_error; + trace_netfs_failure(wreq, subreq, transferred_or_error, + netfs_fail_write); + goto failed; + } + + if (WARN(transferred_or_error > subreq->len - subreq->transferred, + "Subreq excess write: R%x[%x] %zd > %zu - %zu", + wreq->debug_id, subreq->debug_index, + transferred_or_error, subreq->len, subreq->transferred)) + transferred_or_error = subreq->len - subreq->transferred; + + subreq->error = 0; + subreq->transferred += transferred_or_error; + + if (iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred) + pr_warn("R=%08x[%u] ITER POST-MISMATCH %zx != %zx-%zx %x\n", + wreq->debug_id, subreq->debug_index, + iov_iter_count(&subreq->io_iter), subreq->len, + subreq->transferred, subreq->io_iter.iter_type); + + if (subreq->transferred < subreq->len) + goto incomplete; + 
+ __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); +out: + trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); + + /* If we decrement nr_outstanding to 0, the ref belongs to us. */ + u = atomic_dec_return(&wreq->nr_outstanding); + if (u == 0) + netfs_write_terminated(wreq, was_async); + else if (u == 1) + wake_up_var(&wreq->nr_outstanding); + + netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); + return; + +incomplete: + if (transferred_or_error == 0) { + if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { + subreq->error = -ENODATA; + goto failed; + } + } else { + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + } + + __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); + set_bit(NETFS_RREQ_INCOMPLETE_IO, &wreq->flags); + goto out; + +failed: + switch (subreq->source) { + case NETFS_WRITE_TO_CACHE: + netfs_stat(&netfs_n_wh_write_failed); + set_bit(NETFS_RREQ_INCOMPLETE_IO, &wreq->flags); + break; + case NETFS_UPLOAD_TO_SERVER: + netfs_stat(&netfs_n_wh_upload_failed); + set_bit(NETFS_RREQ_FAILED, &wreq->flags); + wreq->error = subreq->error; + break; + default: + break; + } + goto out; +} +EXPORT_SYMBOL(netfs_write_subrequest_terminated); + +static void netfs_write_to_cache_op(struct netfs_io_subrequest *subreq) +{ + struct netfs_io_request *wreq = subreq->rreq; + struct netfs_cache_resources *cres = &wreq->cache_resources; + + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + + cres->ops->write(cres, subreq->start, &subreq->io_iter, + netfs_write_subrequest_terminated, subreq); +} + +static void netfs_write_to_cache_op_worker(struct work_struct *work) +{ + struct netfs_io_subrequest *subreq = + container_of(work, struct netfs_io_subrequest, work); + + netfs_write_to_cache_op(subreq); + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_work); +} + +/** + * netfs_queue_write_request - Queue a write request for attention + * @subreq: The write request to be queued + * + * Queue the specified write request for 
processing by a worker thread. We + * pass the caller's ref on the request to the worker thread. + */ +void netfs_queue_write_request(struct netfs_io_subrequest *subreq) +{ + if (!queue_work(system_unbound_wq, &subreq->work)) + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_wip); +} +EXPORT_SYMBOL(netfs_queue_write_request); + +/* + * Set up a op for writing to the cache. + */ +static void netfs_set_up_write_to_cache(struct netfs_io_request *wreq) +{ + struct netfs_cache_resources *cres; + struct netfs_io_subrequest *subreq; + struct netfs_inode *ctx = netfs_inode(wreq->inode); + struct fscache_cookie *cookie = netfs_i_cookie(ctx); + loff_t start = wreq->start; + size_t len = wreq->len; + int ret; + + if (!fscache_cookie_enabled(cookie)) { + clear_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags); + return; + } + + _debug("write to cache"); + subreq = netfs_create_write_request(wreq, NETFS_WRITE_TO_CACHE, start, len, + netfs_write_to_cache_op_worker); + if (!subreq) + return; + + cres = &wreq->cache_resources; + ret = -ENOBUFS; + if (wreq->netfs_ops->begin_cache_operation) + ret = wreq->netfs_ops->begin_cache_operation(wreq); + if (ret < 0) { + netfs_write_subrequest_terminated(subreq, ret, false); + return; + } + + ret = cres->ops->prepare_write(cres, &start, &len, i_size_read(wreq->inode), + true); + if (ret < 0) { + netfs_write_subrequest_terminated(subreq, ret, false); + return; + } + + netfs_queue_write_request(subreq); +} + +/* + * Begin the process of writing out a chunk of data. + * + * We are given a write request that holds a series of dirty regions and + * (partially) covers a sequence of folios, all of which are present. The + * pages must have been marked as writeback as appropriate. + * + * We need to perform the following steps: + * + * (1) If encrypting, create an output buffer and encrypt each block of the + * data into it, otherwise the output buffer will point to the original + * folios. 
+ * + * (2) If the data is to be cached, set up a write op for the entire output + * buffer to the cache, if the cache wants to accept it. + * + * (3) If the data is to be uploaded (ie. not merely cached): + * + * (a) If the data is to be compressed, create a compression buffer and + * compress the data into it. + * + * (b) For each destination we want to upload to, set up write ops to write + * to that destination. We may need multiple writes if the data is not + * contiguous or the span exceeds wsize for a server. + */ +int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait, + enum netfs_write_trace what) +{ + struct netfs_inode *ctx = netfs_inode(wreq->inode); + + _enter("R=%x %llx-%llx f=%lx", + wreq->debug_id, wreq->start, wreq->start + wreq->len - 1, + wreq->flags); + + trace_netfs_write(wreq, what); + if (wreq->len == 0 || wreq->iter.count == 0) { + pr_err("Zero-sized write [R=%x]\n", wreq->debug_id); + return -EIO; + } + + wreq->io_iter = wreq->iter; + + /* ->outstanding > 0 carries a ref */ + netfs_get_request(wreq, netfs_rreq_trace_get_for_outstanding); + atomic_set(&wreq->nr_outstanding, 1); + + /* Start the encryption/compression going. We can do that in the + * background whilst we generate a list of write ops that we want to + * perform. + */ + // TODO: Encrypt or compress the region as appropriate + + /* We need to write all of the region to the cache */ + if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags)) + netfs_set_up_write_to_cache(wreq); + + /* However, we don't necessarily write all of the region to the server. + * Caching of reads is being managed this way also. 
+ */ + if (test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags)) + ctx->ops->create_write_requests(wreq, wreq->start, wreq->len); + + if (atomic_dec_and_test(&wreq->nr_outstanding)) + netfs_write_terminated(wreq, false); + + if (!may_wait) + return -EIOCBQUEUED; + + wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, + TASK_UNINTERRUPTIBLE); + return wreq->error; +} diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 11a073506f98..333c1ad44598 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -253,6 +253,7 @@ struct netfs_io_request { unsigned int direct_bv_count; /* Number of elements in bv[] */ unsigned int debug_id; unsigned int rsize; /* Maximum read size (0 for none) */ + unsigned int wsize; /* Maximum write size (0 for none) */ unsigned int subreq_counter; /* Next subreq->debug_index */ atomic_t nr_outstanding; /* Number of ops in progress */ atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */ @@ -278,6 +279,7 @@ struct netfs_io_request { #define NETFS_RREQ_WRITE_TO_CACHE 9 /* Need to write to the cache */ #define NETFS_RREQ_UPLOAD_TO_SERVER 10 /* Need to write to the server */ const struct netfs_request_ops *netfs_ops; + void (*cleanup)(struct netfs_io_request *req); }; /* @@ -302,6 +304,11 @@ struct netfs_request_ops { /* Modification handling */ void (*update_i_size)(struct inode *inode, loff_t i_size); + + /* Write request handling */ + void (*create_write_requests)(struct netfs_io_request *wreq, + loff_t start, size_t len); + void (*invalidate_cache)(struct netfs_io_request *wreq); }; /* @@ -389,6 +396,12 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len, iov_iter_extraction_t extraction_flags); size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset, size_t max_size, size_t max_segs); +struct netfs_io_subrequest *netfs_create_write_request( + struct netfs_io_request *wreq, enum netfs_io_source dest, + loff_t start, size_t len, work_func_t worker); +void 
netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error, + bool was_async); +void netfs_queue_write_request(struct netfs_io_subrequest *subreq); int netfs_start_io_read(struct inode *inode); void netfs_end_io_read(struct inode *inode); diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 6daadf2aac8a..e03635172760 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -21,6 +21,11 @@ EM(netfs_read_trace_readpage, "READPAGE ") \ E_(netfs_read_trace_write_begin, "WRITEBEGN") +#define netfs_write_traces \ + EM(netfs_write_trace_dio_write, "DIO-WRITE") \ + EM(netfs_write_trace_unbuffered_write, "UNB-WRITE") \ + E_(netfs_write_trace_writeback, "WRITEBACK") + #define netfs_rreq_origins \ EM(NETFS_READAHEAD, "RA") \ EM(NETFS_READPAGE, "RP") \ @@ -32,11 +37,13 @@ EM(netfs_rreq_trace_copy, "COPY ") \ EM(netfs_rreq_trace_done, "DONE ") \ EM(netfs_rreq_trace_free, "FREE ") \ + EM(netfs_rreq_trace_redirty, "REDIRTY") \ EM(netfs_rreq_trace_resubmit, "RESUBMT") \ EM(netfs_rreq_trace_unlock, "UNLOCK ") \ EM(netfs_rreq_trace_unmark, "UNMARK ") \ EM(netfs_rreq_trace_wait_ip, "WAIT-IP") \ - E_(netfs_rreq_trace_wake_ip, "WAKE-IP") + EM(netfs_rreq_trace_wake_ip, "WAKE-IP") \ + E_(netfs_rreq_trace_write_done, "WR-DONE") #define netfs_sreq_sources \ EM(NETFS_FILL_WITH_ZEROES, "ZERO") \ @@ -64,7 +71,8 @@ EM(netfs_fail_copy_to_cache, "copy-to-cache") \ EM(netfs_fail_read, "read") \ EM(netfs_fail_short_read, "short-read") \ - E_(netfs_fail_prepare_write, "prep-write") + EM(netfs_fail_prepare_write, "prep-write") \ + E_(netfs_fail_write, "write") #define netfs_rreq_ref_traces \ EM(netfs_rreq_trace_get_for_outstanding,"GET OUTSTND") \ @@ -74,6 +82,8 @@ EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \ EM(netfs_rreq_trace_put_return, "PUT RETURN ") \ EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \ + EM(netfs_rreq_trace_put_work, "PUT WORK ") \ + EM(netfs_rreq_trace_see_work, "SEE WORK ") \ E_(netfs_rreq_trace_new, "NEW ") 
#define netfs_sreq_ref_traces \ @@ -82,9 +92,12 @@ EM(netfs_sreq_trace_get_short_read, "GET SHORTRD") \ EM(netfs_sreq_trace_new, "NEW ") \ EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \ + EM(netfs_sreq_trace_put_discard, "PUT DISCARD") \ EM(netfs_sreq_trace_put_failed, "PUT FAILED ") \ EM(netfs_sreq_trace_put_merged, "PUT MERGED ") \ EM(netfs_sreq_trace_put_no_copy, "PUT NO COPY") \ + EM(netfs_sreq_trace_put_wip, "PUT WIP ") \ + EM(netfs_sreq_trace_put_work, "PUT WORK ") \ E_(netfs_sreq_trace_put_terminated, "PUT TERM ") #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY @@ -96,6 +109,7 @@ #define E_(a, b) a enum netfs_read_trace { netfs_read_traces } __mode(byte); +enum netfs_write_trace { netfs_write_traces } __mode(byte); enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte); enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte); enum netfs_failure { netfs_failures } __mode(byte); @@ -113,6 +127,7 @@ enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte); #define E_(a, b) TRACE_DEFINE_ENUM(a); netfs_read_traces; +netfs_write_traces; netfs_rreq_origins; netfs_rreq_traces; netfs_sreq_sources; @@ -320,6 +335,37 @@ TRACE_EVENT(netfs_sreq_ref, __entry->ref) ); +TRACE_EVENT(netfs_write, + TP_PROTO(const struct netfs_io_request *wreq, + enum netfs_write_trace what), + + TP_ARGS(wreq, what), + + TP_STRUCT__entry( + __field(unsigned int, wreq ) + __field(unsigned int, cookie ) + __field(enum netfs_write_trace, what ) + __field(unsigned long long, start ) + __field(size_t, len ) + ), + + TP_fast_assign( + struct netfs_inode *__ctx = netfs_inode(wreq->inode); + struct fscache_cookie *__cookie = netfs_i_cookie(__ctx); + __entry->wreq = wreq->debug_id; + __entry->cookie = __cookie ? 
__cookie->debug_id : 0; + __entry->what = what; + __entry->start = wreq->start; + __entry->len = wreq->len; + ), + + TP_printk("R=%08x %s c=%08x by=%llx-%llx", + __entry->wreq, + __print_symbolic(__entry->what, netfs_write_traces), + __entry->cookie, + __entry->start, __entry->start + __entry->len - 1) + ); + #undef EM #undef E_ #endif /* _TRACE_NETFS_H */

From patchwork Fri Oct 13 16:03:53 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421166
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 24/53] netfs: Provide func to copy data to pagecache for buffered write
Date: Fri, 13 Oct 2023 17:03:53 +0100
Message-ID: <20231013160423.2218093-25-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
MIME-Version: 1.0
Provide a netfs write helper, netfs_perform_write(), to buffer data to be written in the pagecache and mark the modified folios dirty. It will perform "streaming writes" for folios that aren't currently resident, if possible, storing data in partially modified folios that are marked dirty, but not uptodate. It will also tag pages as belonging to fs-specific write groups if so directed by the filesystem.
This is derived from generic_perform_write(), but doesn't use ->write_begin() and ->write_end(), having that logic rolled in instead. Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/Makefile | 2 + fs/netfs/buffered_read.c | 48 +++++ fs/netfs/buffered_write.c | 327 +++++++++++++++++++++++++++++++++++ fs/netfs/internal.h | 2 + include/linux/netfs.h | 5 + include/trace/events/netfs.h | 70 ++++++++ 6 files changed, 454 insertions(+) create mode 100644 fs/netfs/buffered_write.c diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index ce1197713276..5c450db29932 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -2,6 +2,8 @@ netfs-y := \ buffered_read.o \ + buffered_write.o \ + crypto.o \ io.o \ iterator.o \ locking.o \ diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 05824f73cfc7..2f06344bba21 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -461,3 +461,51 @@ int netfs_write_begin(struct netfs_inode *ctx, return ret; } EXPORT_SYMBOL(netfs_write_begin); + +/* + * Preload the data into a page we're proposing to write into. 
+ */ +int netfs_prefetch_for_write(struct file *file, struct folio *folio, + size_t offset, size_t len) +{ + struct netfs_io_request *rreq; + struct address_space *mapping = folio_file_mapping(folio); + struct netfs_inode *ctx = netfs_inode(mapping->host); + unsigned long long start = folio_pos(folio); + size_t flen = folio_size(folio); + int ret; + + _enter("%zx @%llx", flen, start); + + ret = -ENOMEM; + + rreq = netfs_alloc_request(mapping, file, start, flen, + NETFS_READ_FOR_WRITE); + if (IS_ERR(rreq)) { + ret = PTR_ERR(rreq); + goto error; + } + + rreq->no_unlock_folio = folio_index(folio); + __set_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags); + ret = netfs_begin_cache_operation(rreq, ctx); + if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) + goto error_put; + + netfs_stat(&netfs_n_rh_write_begin); + trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write); + + /* Set up the output buffer */ + iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, + rreq->start, rreq->len); + + ret = netfs_begin_read(rreq, true); + netfs_put_request(rreq, false, netfs_rreq_trace_put_return); + return ret; + +error_put: + netfs_put_request(rreq, false, netfs_rreq_trace_put_discard); +error: + _leave(" = %d", ret); + return ret; +} diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c new file mode 100644 index 000000000000..406c3f3666fa --- /dev/null +++ b/fs/netfs/buffered_write.c @@ -0,0 +1,327 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Network filesystem high-level write support. + * + * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include +#include +#include "internal.h" + +/* + * Determined write method. Adjust netfs_folio_traces if this is changed. 
+ */ +enum netfs_how_to_modify { + NETFS_FOLIO_IS_UPTODATE, /* Folio is uptodate already */ + NETFS_JUST_PREFETCH, /* We have to read the folio anyway */ + NETFS_WHOLE_FOLIO_MODIFY, /* We're going to overwrite the whole folio */ + NETFS_MODIFY_AND_CLEAR, /* We can assume there is no data to be downloaded. */ + NETFS_STREAMING_WRITE, /* Store incomplete data in non-uptodate page. */ + NETFS_STREAMING_WRITE_CONT, /* Continue streaming write. */ + NETFS_FLUSH_CONTENT, /* Flush incompatible content. */ +}; + +static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) +{ + if (netfs_group && !folio_get_private(folio)) + folio_attach_private(folio, netfs_get_group(netfs_group)); +} + +/* + * Decide how we should modify a folio. We might be attempting to do + * write-streaming, in which case we don't want to a local RMW cycle if we can + * avoid it. If we're doing local caching or content crypto, we award that + * priority over avoiding RMW. If the file is open readably, then we also + * assume that we may want to read what we wrote. + */ +static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx, + struct file *file, + struct folio *folio, + void *netfs_group, + size_t flen, + size_t offset, + size_t len, + bool maybe_trouble) +{ + struct netfs_folio *finfo = netfs_folio_info(folio); + loff_t pos = folio_file_pos(folio); + + _enter("z=%llx", ctx->zero_point); + + if (netfs_folio_group(folio) != netfs_group) + return NETFS_FLUSH_CONTENT; + + if (folio_test_uptodate(folio)) + return NETFS_FOLIO_IS_UPTODATE; + + if (pos >= ctx->zero_point) + return NETFS_MODIFY_AND_CLEAR; + + if (!maybe_trouble && offset == 0 && len >= flen) + return NETFS_WHOLE_FOLIO_MODIFY; + + if (file->f_mode & FMODE_READ) + return NETFS_JUST_PREFETCH; + + if (netfs_is_cache_enabled(ctx)) + return NETFS_JUST_PREFETCH; + + if (!finfo) + return NETFS_STREAMING_WRITE; + + /* We can continue a streaming write only if it continues on from the + * previous. 
If it overlaps, we must flush lest we suffer a partial + * copy and disjoint dirty regions. + */ + if (offset == finfo->dirty_offset + finfo->dirty_len) + return NETFS_STREAMING_WRITE_CONT; + return NETFS_FLUSH_CONTENT; +} + +/* + * Grab a folio for writing and lock it. + */ +static struct folio *netfs_grab_folio_for_write(struct address_space *mapping, + loff_t pos, size_t part) +{ + pgoff_t index = pos / PAGE_SIZE; + + return __filemap_get_folio(mapping, index, FGP_WRITEBEGIN, + mapping_gfp_mask(mapping)); +} + +/** + * netfs_perform_write - Copy data into the pagecache. + * @iocb: The operation parameters + * @iter: The source buffer + * @netfs_group: Grouping for dirty pages (eg. ceph snaps). + * + * Copy data into pagecache pages attached to the inode specified by @iocb. + * The caller must hold appropriate inode locks. + * + * Dirty pages are tagged with a netfs_folio struct if they're not up to date + * to indicate the range modified. Dirty pages may also be tagged with a + * netfs-specific grouping such that data from an old group gets flushed before + * a new one is started. + */ +ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, + struct netfs_group *netfs_group) +{ + struct file *file = iocb->ki_filp; + struct inode *inode = file_inode(file); + struct address_space *mapping = inode->i_mapping; + struct netfs_inode *ctx = netfs_inode(inode); + struct netfs_folio *finfo; + struct folio *folio; + enum netfs_how_to_modify howto; + enum netfs_folio_trace trace; + unsigned int bdp_flags = (iocb->ki_flags & IOCB_SYNC) ? 
0: BDP_ASYNC; + ssize_t written = 0, ret; + loff_t i_size, pos = iocb->ki_pos, from, to; + size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER; + bool maybe_trouble = false; + + do { + size_t flen; + size_t offset; /* Offset into pagecache folio */ + size_t part; /* Bytes to write to folio */ + size_t copied; /* Bytes copied from user */ + + ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags); + if (unlikely(ret < 0)) + break; + + offset = pos & (max_chunk - 1); + part = min(max_chunk - offset, iov_iter_count(iter)); + + /* Bring in the user pages that we will copy from _first_ lest + * we hit a nasty deadlock on copying from the same page as + * we're writing to, without it being marked uptodate. + * + * Not only is this an optimisation, but it is also required to + * check that the address is actually valid, when atomic + * usercopies are used below. + * + * We rely on the page being held onto long enough by the LRU + * that we can grab it below if this causes it to be read. + */ + ret = -EFAULT; + if (unlikely(fault_in_iov_iter_readable(iter, part) == part)) + break; + + ret = -ENOMEM; + folio = netfs_grab_folio_for_write(mapping, pos, part); + if (!folio) + break; + + flen = folio_size(folio); + offset = pos & (flen - 1); + part = min_t(size_t, flen - offset, part); + + if (signal_pending(current)) { + ret = written ? -EINTR : -ERESTARTSYS; + goto error_folio_unlock; + } + + /* See if we need to prefetch the area we're going to modify. + * We need to do this before we get a lock on the folio in case + * there's more than one writer competing for the same cache + * block. 
+ */ + howto = netfs_how_to_modify(ctx, file, folio, netfs_group, + flen, offset, part, maybe_trouble); + _debug("howto %u", howto); + switch (howto) { + case NETFS_JUST_PREFETCH: + ret = netfs_prefetch_for_write(file, folio, offset, part); + if (ret < 0) { + _debug("prefetch = %zd", ret); + goto error_folio_unlock; + } + break; + case NETFS_FOLIO_IS_UPTODATE: + case NETFS_WHOLE_FOLIO_MODIFY: + case NETFS_STREAMING_WRITE_CONT: + break; + case NETFS_MODIFY_AND_CLEAR: + zero_user_segment(&folio->page, 0, offset); + break; + case NETFS_STREAMING_WRITE: + ret = -EIO; + if (WARN_ON(folio_get_private(folio))) + goto error_folio_unlock; + break; + case NETFS_FLUSH_CONTENT: + trace_netfs_folio(folio, netfs_flush_content); + from = folio_pos(folio); + to = from + folio_size(folio) - 1; + folio_unlock(folio); + folio_put(folio); + ret = filemap_write_and_wait_range(mapping, from, to); + if (ret < 0) + goto error_folio_unlock; + continue; + } + + if (mapping_writably_mapped(mapping)) + flush_dcache_folio(folio); + + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + + flush_dcache_folio(folio); + + /* Deal with a (partially) failed copy */ + if (copied == 0) { + ret = -EFAULT; + goto error_folio_unlock; + } + + trace = (enum netfs_folio_trace)howto; + switch (howto) { + case NETFS_FOLIO_IS_UPTODATE: + case NETFS_JUST_PREFETCH: + netfs_set_group(folio, netfs_group); + break; + case NETFS_MODIFY_AND_CLEAR: + zero_user_segment(&folio->page, offset + copied, flen); + netfs_set_group(folio, netfs_group); + folio_mark_uptodate(folio); + break; + case NETFS_WHOLE_FOLIO_MODIFY: + if (unlikely(copied < part)) { + maybe_trouble = true; + iov_iter_revert(iter, copied); + copied = 0; + goto retry; + } + netfs_set_group(folio, netfs_group); + folio_mark_uptodate(folio); + break; + case NETFS_STREAMING_WRITE: + if (offset == 0 && copied == flen) { + netfs_set_group(folio, netfs_group); + folio_mark_uptodate(folio); + trace = netfs_streaming_filled_page; + break; + } + finfo 
= kzalloc(sizeof(*finfo), GFP_KERNEL); + if (!finfo) { + iov_iter_revert(iter, copied); + ret = -ENOMEM; + goto error_folio_unlock; + } + finfo->netfs_group = netfs_get_group(netfs_group); + finfo->dirty_offset = offset; + finfo->dirty_len = copied; + folio_attach_private(folio, (void *)((unsigned long)finfo | + NETFS_FOLIO_INFO)); + break; + case NETFS_STREAMING_WRITE_CONT: + finfo = netfs_folio_info(folio); + finfo->dirty_len += copied; + if (finfo->dirty_offset == 0 && finfo->dirty_len == flen) { + folio_change_private(folio, finfo->netfs_group); + folio_mark_uptodate(folio); + kfree(finfo); + trace = netfs_streaming_filled_page; + } + break; + default: + WARN(true, "Unexpected modify type %u ix=%lx\n", + howto, folio_index(folio)); + ret = -EIO; + goto error_folio_unlock; + } + + trace_netfs_folio(folio, trace); + + /* Update the inode size if we moved the EOF marker */ + i_size = i_size_read(inode); + pos += copied; + if (pos > i_size) { + if (ctx->ops->update_i_size) { + ctx->ops->update_i_size(inode, pos); + } else { + i_size_write(inode, pos); +#if IS_ENABLED(CONFIG_FSCACHE) + fscache_update_cookie(ctx->cache, NULL, &pos); +#endif + } + } + written += copied; + + folio_mark_dirty(folio); + retry: + folio_unlock(folio); + folio_put(folio); + folio = NULL; + + cond_resched(); + } while (iov_iter_count(iter)); + +out: + if (likely(written)) { + /* Flush and wait for a write that requires immediate synchronisation. */ + if (iocb->ki_flags & (IOCB_DSYNC | IOCB_SYNC)) { + _debug("dsync"); + ret = filemap_fdatawait_range(mapping, iocb->ki_pos, + iocb->ki_pos + written); + } + + iocb->ki_pos += written; + } + + _leave(" = %zd [%zd]", written, ret); + return written ? 
written : ret; + +error_folio_unlock: + folio_unlock(folio); + folio_put(folio); + goto out; +} +EXPORT_SYMBOL(netfs_perform_write); diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 30ec8949ebcd..6f79823261f7 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -19,6 +19,8 @@ * buffered_read.c */ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq); +int netfs_prefetch_for_write(struct file *file, struct folio *folio, + size_t offset, size_t len); /* * io.c diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 333c1ad44598..8a4aee547c6d 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -376,6 +376,11 @@ struct netfs_cache_ops { loff_t *_data_start, size_t *_data_len); }; +/* High-level write API */ +ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, + struct netfs_group *netfs_group); + +/* Address operations API */ struct readahead_control; void netfs_readahead(struct readahead_control *); int netfs_read_folio(struct file *, struct folio *); diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index e03635172760..94793f842000 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -19,6 +19,7 @@ EM(netfs_read_trace_expanded, "EXPANDED ") \ EM(netfs_read_trace_readahead, "READAHEAD") \ EM(netfs_read_trace_readpage, "READPAGE ") \ + EM(netfs_read_trace_prefetch_for_write, "PREFETCHW") \ E_(netfs_read_trace_write_begin, "WRITEBEGN") #define netfs_write_traces \ @@ -100,6 +101,28 @@ EM(netfs_sreq_trace_put_work, "PUT WORK ") \ E_(netfs_sreq_trace_put_terminated, "PUT TERM ") +#define netfs_folio_traces \ + /* The first few correspond to enum netfs_how_to_modify */ \ + EM(netfs_folio_is_uptodate, "mod-uptodate") \ + EM(netfs_just_prefetch, "mod-prefetch") \ + EM(netfs_whole_folio_modify, "mod-whole-f") \ + EM(netfs_modify_and_clear, "mod-n-clear") \ + EM(netfs_streaming_write, "mod-streamw") \ + EM(netfs_streaming_write_cont, "mod-streamw+") \ + 
EM(netfs_flush_content, "flush") \ + EM(netfs_streaming_filled_page, "mod-streamw-f") \ + /* The rest are for writeback */ \ + EM(netfs_folio_trace_clear, "clear") \ + EM(netfs_folio_trace_clear_s, "clear-s") \ + EM(netfs_folio_trace_clear_g, "clear-g") \ + EM(netfs_folio_trace_kill, "kill") \ + EM(netfs_folio_trace_mkwrite, "mkwrite") \ + EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ + EM(netfs_folio_trace_redirty, "redirty") \ + EM(netfs_folio_trace_redirtied, "redirtied") \ + EM(netfs_folio_trace_store, "store") \ + E_(netfs_folio_trace_store_plus, "store+") + #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY @@ -115,6 +138,7 @@ enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte); enum netfs_failure { netfs_failures } __mode(byte); enum netfs_rreq_ref_trace { netfs_rreq_ref_traces } __mode(byte); enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte); +enum netfs_folio_trace { netfs_folio_traces } __mode(byte); #endif @@ -135,6 +159,7 @@ netfs_sreq_traces; netfs_failures; netfs_rreq_ref_traces; netfs_sreq_ref_traces; +netfs_folio_traces; /* * Now redefine the EM() and E_() macros to map the enums to the strings that @@ -335,6 +360,51 @@ TRACE_EVENT(netfs_sreq_ref, __entry->ref) ); +TRACE_EVENT(netfs_folio, + TP_PROTO(struct folio *folio, enum netfs_folio_trace why), + + TP_ARGS(folio, why), + + TP_STRUCT__entry( + __field(ino_t, ino) + __field(pgoff_t, index) + __field(unsigned int, nr) + __field(enum netfs_folio_trace, why) + ), + + TP_fast_assign( + __entry->ino = folio->mapping->host->i_ino; + __entry->why = why; + __entry->index = folio_index(folio); + __entry->nr = folio_nr_pages(folio); + ), + + TP_printk("i=%05lx ix=%05lx-%05lx %s", + __entry->ino, __entry->index, __entry->index + __entry->nr - 1, + __print_symbolic(__entry->why, netfs_folio_traces)) + ); + +TRACE_EVENT(netfs_write_iter, + TP_PROTO(const struct kiocb *iocb, const struct iov_iter *from), + + TP_ARGS(iocb, from), + + 
TP_STRUCT__entry( + __field(unsigned long long, start ) + __field(size_t, len ) + __field(unsigned int, flags ) + ), + + TP_fast_assign( + __entry->start = iocb->ki_pos; + __entry->len = iov_iter_count(from); + __entry->flags = iocb->ki_flags; + ), + + TP_printk("WRITE-ITER s=%llx l=%zx f=%x", + __entry->start, __entry->len, __entry->flags) + ); + TRACE_EVENT(netfs_write, TP_PROTO(const struct netfs_io_request *wreq, enum netfs_write_trace what), From patchwork Fri Oct 13 16:03:54 2023 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13421167
From: David Howells To: Jeff Layton , Steve French Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com Subject: [RFC PATCH 25/53] netfs: Make netfs_read_folio() handle streaming-write pages Date: Fri, 13 Oct 2023 17:03:54 +0100 Message-ID: <20231013160423.2218093-26-dhowells@redhat.com> In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com> References: <20231013160423.2218093-1-dhowells@redhat.com>
netfs_read_folio() needs to handle partially-valid pages that are marked dirty, but not uptodate, in the event that someone tries to read a page that was used to cache data by a streaming write. In such a case, make netfs_read_folio() set up a bvec iterator that points to the parts of the folio that need filling and to a sink page for the data that should be discarded, and use that instead of i_pages as the iterator to be written to.
This requires netfs_rreq_unlock_folios() to convert the page into a normal dirty uptodate page, getting rid of the partial write record and bumping the group pointer over to folio->private. Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/buffered_read.c | 61 ++++++++++++++++++++++++++++++++++-- include/trace/events/netfs.h | 2 ++ 2 files changed, 60 insertions(+), 3 deletions(-) diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 2f06344bba21..374707df6575 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -16,6 +16,7 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq) { struct netfs_io_subrequest *subreq; + struct netfs_folio *finfo; struct folio *folio; pgoff_t start_page = rreq->start / PAGE_SIZE; pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; @@ -86,6 +87,15 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq) if (!pg_failed) { flush_dcache_folio(folio); + finfo = netfs_folio_info(folio); + if (finfo) { + trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); + if (finfo->netfs_group) + folio_change_private(folio, finfo->netfs_group); + else + folio_detach_private(folio); + kfree(finfo); + } folio_mark_uptodate(folio); } @@ -245,6 +255,7 @@ int netfs_read_folio(struct file *file, struct folio *folio) struct address_space *mapping = folio_file_mapping(folio); struct netfs_io_request *rreq; struct netfs_inode *ctx = netfs_inode(mapping->host); + struct folio *sink = NULL; int ret; _enter("%lx", folio_index(folio)); @@ -265,12 +276,56 @@ int netfs_read_folio(struct file *file, struct folio *folio) trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage); /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); + if (folio_test_dirty(folio)) { + /* Handle someone trying to read from an unflushed streaming + * 
write. We fiddle the buffer so that a gap at the beginning + * and/or a gap at the end get copied to, but the middle is + * discarded. + */ + struct netfs_folio *finfo = netfs_folio_info(folio); + struct bio_vec *bvec; + unsigned int from = finfo->dirty_offset; + unsigned int to = from + finfo->dirty_len; + unsigned int off = 0, i = 0; + size_t flen = folio_size(folio); + size_t nr_bvec = flen / PAGE_SIZE + 2; + size_t part; + + ret = -ENOMEM; + bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_KERNEL); + if (!bvec) + goto discard; + + sink = folio_alloc(GFP_KERNEL, 0); + if (!sink) + goto discard; + + trace_netfs_folio(folio, netfs_folio_trace_read_gaps); + + rreq->direct_bv = bvec; + rreq->direct_bv_count = nr_bvec; + if (from > 0) { + bvec_set_folio(&bvec[i++], folio, from, 0); + off = from; + } + while (off < to) { + part = min_t(size_t, to - off, PAGE_SIZE); + bvec_set_folio(&bvec[i++], sink, part, 0); + off += part; + } + if (to < flen) + bvec_set_folio(&bvec[i++], folio, flen - to, to); + iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); + } else { + iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, + rreq->start, rreq->len); + } ret = netfs_begin_read(rreq, true); + if (sink) + folio_put(sink); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); - return ret; + return ret < 0 ? 
ret : 0; discard: netfs_put_request(rreq, false, netfs_rreq_trace_put_discard); diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 94793f842000..b7426f455086 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -115,9 +115,11 @@ EM(netfs_folio_trace_clear, "clear") \ EM(netfs_folio_trace_clear_s, "clear-s") \ EM(netfs_folio_trace_clear_g, "clear-g") \ + EM(netfs_folio_trace_filled_gaps, "filled-gaps") \ EM(netfs_folio_trace_kill, "kill") \ EM(netfs_folio_trace_mkwrite, "mkwrite") \ EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ + EM(netfs_folio_trace_read_gaps, "read-gaps") \ EM(netfs_folio_trace_redirty, "redirty") \ EM(netfs_folio_trace_redirtied, "redirtied") \ EM(netfs_folio_trace_store, "store") \ From patchwork Fri Oct 13 16:03:55 2023 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13421168
From: David Howells To: Jeff Layton , Steve French Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com Subject: [RFC PATCH 26/53] netfs: Allocate multipage folios in the writepath Date: Fri, 13 Oct 2023 17:03:55 +0100 Message-ID: <20231013160423.2218093-27-dhowells@redhat.com> In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com> References: <20231013160423.2218093-1-dhowells@redhat.com>
Allocate a multipage folio when copying data into the pagecache, if possible and there's sufficient data to warrant it.
Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/buffered_write.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 406c3f3666fa..4de6a12149e4 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -84,14 +84,19 @@ static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx, } /* - * Grab a folio for writing and lock it. + * Grab a folio for writing and lock it. Attempt to allocate as large a folio + * as possible to hold as much of the remaining length as possible in one go. */ static struct folio *netfs_grab_folio_for_write(struct address_space *mapping, loff_t pos, size_t part) { pgoff_t index = pos / PAGE_SIZE; + fgf_t fgp_flags = FGP_WRITEBEGIN; - return __filemap_get_folio(mapping, index, FGP_WRITEBEGIN, + if (mapping_large_folio_support(mapping)) + fgp_flags |= fgf_set_order(pos % PAGE_SIZE + part); + + return __filemap_get_folio(mapping, index, fgp_flags, mapping_gfp_mask(mapping)); } From patchwork Fri Oct 13 16:03:56 2023 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13421170
Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id BCEE080020 for ; Fri, 13 Oct 2023 12:06:17 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 5144E160394 for ; Fri, 13 Oct 2023 16:06:17 +0000 (UTC) X-FDA: 81340915194.04.B5A4987 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf05.hostedemail.com (Postfix) with ESMTP id 1D97E100010 for ; Fri, 13 Oct 2023 16:06:14 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=d3CGsqcs; spf=pass (imf05.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1697213175; a=rsa-sha256; cv=none; b=esPzm4K/oO5NJFlZB1uZP8pAqaeFpFyQ+Ts1tvunBrFE7KokroNdxEoCBWEaRVn8iASua+ rl0fsNrYM+wEhYipEsslNQJZIPDC/G7DriaT2GKmNmJMP6CWEi7QZMfrp6Sf2G7zZzaaCr ll11e93h8x3REdhGoEKLPTy0Y0QkPJY= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=d3CGsqcs; spf=pass (imf05.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1697213175; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=SkhAI4Iz7b9wcyqLsaiQccQhRokb/RKYkC1y6MYpYKI=; b=5d0eUAMZFSL63PAZUYIQAKQrtr2GFn11s3So1lmSK9AS5RUQKIHCXUMJjF01RznLrH8ifW 
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 27/53] netfs: Implement support for unbuffered/DIO read
Date: Fri, 13 Oct 2023 17:03:56 +0100
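The DIO read support named in the subject above issues subrequests in file order but lets them complete in any order; once all are done, only the bytes up to the first failed, empty or short subrequest can be claimed. That accounting rule can be sketched standalone as follows (the struct and function names are illustrative stand-ins for netfs_io_subrequest and netfs_rreq_assess_dio()):

```c
#include <assert.h>
#include <stddef.h>

/* One slice of a direct-I/O read (illustrative, not the kernel's struct). */
struct model_subreq {
	int	error;		/* 0 on success */
	size_t	len;		/* amount asked for */
	size_t	transferred;	/* amount actually read */
};

/*
 * Tally how much of the read may be reported as transferred.  Slices are in
 * file order, so counting stops at the first failed or empty slice; a short
 * slice is counted but nothing beyond it, as that would leave a hole in the
 * middle of the returned data.
 */
size_t model_tally_transferred(const struct model_subreq *sub, size_t n)
{
	size_t i, transferred = 0;

	for (i = 0; i < n; i++) {
		if (sub[i].error || sub[i].transferred == 0)
			break;
		transferred += sub[i].transferred;
		if (sub[i].transferred < sub[i].len)
			break;
	}
	return transferred;
}
```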
Message-ID: <20231013160423.2218093-28-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Implement support for unbuffered and DIO reads in the netfs library,
utilising the existing read helper code to do block
splitting and individual queuing.  The code also handles extraction of the
destination buffer from the supplied iterator, allowing async unbuffered reads
to take place.

The read will be split up according to the rsize setting and, if supplied, the
->clamp_length() method.  Note that the next subrequest will be issued as soon
as issue_op returns, without waiting for previous ones to finish.  The network
filesystem needs to pause or handle queuing them if it doesn't want to fire
them all at the server simultaneously.

Once all the subrequests have finished, the state will be assessed and the
amount of data to be indicated as having been obtained will be determined.  As
the subrequests may finish in any order, if an intermediate subrequest is
short, any further subrequests may be copied into the buffer and then
abandoned.

In the future, this will also take care of doing an unbuffered read from
encrypted content, with the decryption being done by the library.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/Makefile            |   2 +-
 fs/netfs/direct_read.c       | 252 +++++++++++++++++++++++++++++++++++
 fs/netfs/internal.h          |   1 +
 fs/netfs/io.c                |  78 +++++++++--
 fs/netfs/main.c              |   1 +
 fs/netfs/objects.c           |   3 +-
 fs/netfs/stats.c             |   4 +-
 include/linux/netfs.h        |   6 +
 include/trace/events/netfs.h |   7 +-
 9 files changed, 342 insertions(+), 12 deletions(-)
 create mode 100644 fs/netfs/direct_read.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index 5c450db29932..27643557b443 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -3,7 +3,7 @@
 netfs-y := \
 	buffered_read.o \
 	buffered_write.o \
-	crypto.o \
+	direct_read.o \
 	io.o \
 	iterator.o \
 	locking.o \
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
new file mode 100644
index 000000000000..1d26468aafd9
--- /dev/null
+++ b/fs/netfs/direct_read.c
@@ -0,0 +1,252 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Direct I/O support.
+ *
+ * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "internal.h"
+
+/*
+ * Copy all of the data from the folios in the source xarray into the
+ * destination iterator.  We cannot step through and kmap the dest iterator if
+ * it's an iovec, so we have to step through the xarray and drop the RCU lock
+ * each time.
+ */
+static int netfs_copy_xarray_to_iter(struct netfs_io_request *rreq,
+				     struct xarray *xa, struct iov_iter *dst,
+				     unsigned long long start, size_t avail)
+{
+	struct folio *folio;
+	void *base;
+	pgoff_t index = start / PAGE_SIZE;
+	size_t len, copied, count = min(avail, iov_iter_count(dst));
+
+	XA_STATE(xas, xa, index);
+
+	_enter("%zx", count);
+
+	if (!count) {
+		trace_netfs_failure(rreq, NULL, -EIO, netfs_fail_dio_read_zero);
+		return -EIO;
+	}
+
+	len = PAGE_SIZE - offset_in_page(start);
+	rcu_read_lock();
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		size_t offset;
+
+		if (xas_retry(&xas, folio))
+			continue;
+
+		/* There shouldn't be a need to call xas_pause() as no one else
+		 * should be modifying the xarray we're iterating over.
+		 * Really, we only need the RCU readlock to keep lockdep happy
+		 * inside xas_for_each().
+		 */
+		rcu_read_unlock();
+
+		offset = offset_in_folio(folio, start);
+		kdebug("folio %lx +%zx [%llx]", folio->index, offset, start);
+
+		while (offset < folio_size(folio)) {
+			len = min(count, len);
+
+			base = kmap_local_folio(folio, offset);
+			copied = copy_to_iter(base, len, dst);
+			kunmap_local(base);
+			if (copied != len)
+				goto out;
+			count -= len;
+			if (count == 0)
+				goto out;
+
+			start += len;
+			offset += len;
+			len = PAGE_SIZE;
+		}
+
+		rcu_read_lock();
+	}
+
+	rcu_read_unlock();
+out:
+	_leave(" = %zx", count);
+	return count ? -EFAULT : 0;
+}
+
+/*
+ * If we did a direct read to a bounce buffer (say we needed to decrypt it),
+ * copy the data obtained to the destination iterator.
+ */
+static int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq)
+{
+	struct iov_iter *dest_iter = &rreq->iter;
+	struct kiocb *iocb = rreq->iocb;
+	unsigned long long start = rreq->start;
+
+	_enter("%zx/%zx @%llx", rreq->transferred, rreq->len, start);
+
+	if (!test_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags))
+		return 0;
+
+	if (start < iocb->ki_pos) {
+		if (rreq->transferred <= iocb->ki_pos - start) {
+			trace_netfs_failure(rreq, NULL, -EIO, netfs_fail_dio_read_short);
+			return -EIO;
+		}
+		rreq->len = rreq->transferred;
+		rreq->transferred -= iocb->ki_pos - start;
+	}
+
+	if (rreq->transferred > iov_iter_count(dest_iter))
+		rreq->transferred = iov_iter_count(dest_iter);
+
+	_debug("xfer %zx/%zx @%llx", rreq->transferred, rreq->len, iocb->ki_pos);
+	return netfs_copy_xarray_to_iter(rreq, &rreq->bounce, dest_iter,
+					 iocb->ki_pos, rreq->transferred);
+}
+
+/**
+ * netfs_unbuffered_read_iter_locked - Perform an unbuffered or direct I/O read
+ * @iocb: The I/O control descriptor describing the read
+ * @iter: The output buffer (also specifies read length)
+ *
+ * Perform an unbuffered I/O or direct I/O from the file in @iocb to the
+ * output buffer.  No use is made of the pagecache.
+ *
+ * The caller must hold any appropriate locks.
+ */
+static ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *iter)
+{
+	struct netfs_io_request *rreq;
+	struct netfs_inode *ctx;
+	unsigned long long start, end;
+	unsigned int min_bsize;
+	pgoff_t first, last;
+	ssize_t ret;
+	size_t orig_count = iov_iter_count(iter);
+	bool async = !is_sync_kiocb(iocb);
+
+	_enter("");
+
+	if (!orig_count)
+		return 0; /* Don't update atime */
+
+	ret = kiocb_write_and_wait(iocb, orig_count);
+	if (ret < 0)
+		return ret;
+	file_accessed(iocb->ki_filp);
+
+	rreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
+				   iocb->ki_pos, orig_count,
+				   NETFS_DIO_READ);
+	if (IS_ERR(rreq))
+		return PTR_ERR(rreq);
+
+	ctx = netfs_inode(rreq->inode);
+	netfs_stat(&netfs_n_rh_dio_read);
+	trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_dio_read);
+
+	/* If this is an async op, we have to keep track of the destination
+	 * buffer for ourselves as the caller's iterator will be trashed when
+	 * we return.
+	 *
+	 * In such a case, extract an iterator to represent as much of the
+	 * output buffer as we can manage.  Note that the extraction might not
+	 * be able to allocate a sufficiently large bvec array and may shorten
+	 * the request.
+	 */
+	if (user_backed_iter(iter)) {
+		ret = netfs_extract_user_iter(iter, rreq->len, &rreq->iter, 0);
+		if (ret < 0)
+			goto out;
+		rreq->direct_bv = (struct bio_vec *)rreq->iter.bvec;
+		rreq->direct_bv_count = ret;
+		rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
+		rreq->len = iov_iter_count(&rreq->iter);
+	} else {
+		rreq->iter = *iter;
+		rreq->len = orig_count;
+		rreq->direct_bv_unpin = false;
+		iov_iter_advance(iter, orig_count);
+	}
+
+	/* If we're going to use a bounce buffer, we need to set it up.  We
+	 * will then need to pad the request out to the minimum block size.
+	 */
+	if (test_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags)) {
+		start = rreq->start;
+		end = min_t(unsigned long long,
+			    round_up(rreq->start + rreq->len, min_bsize),
+			    ctx->remote_i_size);
+
+		rreq->start = start;
+		rreq->len = end - start;
+		first = start / PAGE_SIZE;
+		last = (end - 1) / PAGE_SIZE;
+		_debug("bounce %llx-%llx %lx-%lx",
+		       rreq->start, end, first, last);
+
+		ret = netfs_add_folios_to_buffer(&rreq->bounce, rreq->mapping,
+						 first, last, GFP_KERNEL);
+		if (ret < 0)
+			goto out;
+	}
+
+	if (async)
+		rreq->iocb = iocb;
+
+	ret = netfs_begin_read(rreq, is_sync_kiocb(iocb));
+	if (ret < 0)
+		goto out; /* May be -EIOCBQUEUED */
+	if (!async) {
+		ret = netfs_dio_copy_bounce_to_dest(rreq);
+		if (ret == 0) {
+			iocb->ki_pos += rreq->transferred;
+			ret = rreq->transferred;
+		}
+	}
+
+out:
+	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
+	if (ret > 0)
+		orig_count -= ret;
+	if (ret != -EIOCBQUEUED)
+		iov_iter_revert(iter, orig_count - iov_iter_count(iter));
+	return ret;
+}
+
+/**
+ * netfs_unbuffered_read_iter - Perform an unbuffered or direct I/O read
+ * @iocb: The I/O control descriptor describing the read
+ * @iter: The output buffer (also specifies read length)
+ *
+ * Perform an unbuffered I/O or direct I/O from the file in @iocb to the
+ * output buffer.  No use is made of the pagecache.
+ */
+ssize_t netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+	ssize_t ret;
+
+	if (!iter->count)
+		return 0; /* Don't update atime */
+
+	ret = netfs_start_io_direct(inode);
+	if (ret == 0) {
+		ret = netfs_unbuffered_read_iter_locked(iocb, iter);
+		netfs_end_io_direct(inode);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(netfs_unbuffered_read_iter);
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 6f79823261f7..0fe9aa5c6114 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -99,6 +99,7 @@ int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
  * stats.c
  */
 #ifdef CONFIG_NETFS_STATS
+extern atomic_t netfs_n_rh_dio_read;
 extern atomic_t netfs_n_rh_readahead;
 extern atomic_t netfs_n_rh_readpage;
 extern atomic_t netfs_n_rh_rreq;
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index 1795f8679be9..921daecf5fde 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -78,7 +78,9 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
 				   struct netfs_io_subrequest *subreq)
 {
 	netfs_stat(&netfs_n_rh_download);
-	if (iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred)
+
+	if (rreq->origin != NETFS_DIO_READ &&
+	    iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred)
 		pr_warn("R=%08x[%u] ITER PRE-MISMATCH %zx != %zx-%zx %lx\n",
 			rreq->debug_id, subreq->debug_index,
 			iov_iter_count(&subreq->io_iter), subreq->len,
@@ -340,6 +342,42 @@ static void netfs_rreq_is_still_valid(struct netfs_io_request *rreq)
 	}
 }
 
+/*
+ * Determine how much we can admit to having read from a DIO read.
+ */
+static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
+{
+	struct netfs_io_subrequest *subreq;
+	unsigned int i;
+	size_t transferred = 0;
+
+	for (i = 0; i < rreq->direct_bv_count; i++)
+		flush_dcache_page(rreq->direct_bv[i].bv_page);
+
+	list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
+		if (subreq->error || subreq->transferred == 0)
+			break;
+		transferred += subreq->transferred;
+		if (subreq->transferred < subreq->len)
+			break;
+	}
+
+	for (i = 0; i < rreq->direct_bv_count; i++)
+		flush_dcache_page(rreq->direct_bv[i].bv_page);
+
+	rreq->transferred = transferred;
+	task_io_account_read(transferred);
+
+	if (rreq->iocb) {
+		rreq->iocb->ki_pos += transferred;
+		if (rreq->iocb->ki_complete)
+			rreq->iocb->ki_complete(
+				rreq->iocb, rreq->error ? rreq->error : transferred);
+	}
+	if (rreq->netfs_ops->done)
+		rreq->netfs_ops->done(rreq);
+}
+
 /*
  * Assess the state of a read request and decide what to do next.
  *
@@ -360,7 +398,10 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
 		return;
 	}
 
-	netfs_rreq_unlock_folios(rreq);
+	if (rreq->origin != NETFS_DIO_READ)
+		netfs_rreq_unlock_folios(rreq);
+	else
+		netfs_rreq_assess_dio(rreq);
 
 	trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
 	clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
@@ -525,14 +566,16 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
 			struct netfs_io_subrequest *subreq,
 			struct iov_iter *io_iter)
 {
-	enum netfs_io_source source;
+	enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER;
 	size_t lsize;
 
 	_enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size);
 
-	source = netfs_cache_prepare_read(subreq, rreq->i_size);
-	if (source == NETFS_INVALID_READ)
-		goto out;
+	if (rreq->origin != NETFS_DIO_READ) {
+		source = netfs_cache_prepare_read(subreq, rreq->i_size);
+		if (source == NETFS_INVALID_READ)
+			goto out;
+	}
 
 	if (source == NETFS_DOWNLOAD_FROM_SERVER) {
 		/* Call out to the netfs to let it shrink the request to fit
@@ -543,6 +586,8 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
 		 */
 		if (subreq->len > rreq->i_size - subreq->start)
 			subreq->len = rreq->i_size - subreq->start;
+		if (rreq->rsize && subreq->len > rreq->rsize)
+			subreq->len = rreq->rsize;
 
 		if (rreq->netfs_ops->clamp_length &&
 		    !rreq->netfs_ops->clamp_length(subreq)) {
@@ -676,11 +721,25 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 	atomic_set(&rreq->nr_outstanding, 1);
 	io_iter = rreq->io_iter;
 	do {
+		_debug("submit %llx + %zx >= %llx",
+		       rreq->start, rreq->submitted, rreq->i_size);
+		if (rreq->origin == NETFS_DIO_READ &&
+		    rreq->start + rreq->submitted >= rreq->i_size)
+			break;
 		if (!netfs_rreq_submit_slice(rreq, &io_iter, &debug_index))
 			break;
+		if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) &&
+		    test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags))
+			break;
 	} while (rreq->submitted < rreq->len);
 
+	if (!rreq->submitted) {
+		netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit);
+		ret = 0;
+		goto out;
+	}
+
 	if (sync) {
 		/* Keep nr_outstanding incremented so that the ref always
 		 * belongs to us, and the service code isn't punted off to a
@@ -697,7 +756,8 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 				    TASK_UNINTERRUPTIBLE);
 
 		ret = rreq->error;
-		if (ret == 0 && rreq->submitted < rreq->len) {
+		if (ret == 0 && rreq->submitted < rreq->len &&
+		    rreq->origin != NETFS_DIO_READ) {
 			trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
 			ret = -EIO;
 		}
@@ -705,7 +765,9 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 		/* If we decrement nr_outstanding to 0, the ref belongs to us. */
 		if (atomic_dec_and_test(&rreq->nr_outstanding))
 			netfs_rreq_assess(rreq, false);
-		ret = 0;
+		ret = -EIOCBQUEUED;
 	}
+
+out:
 	return ret;
 }
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index e990738c2213..d0eb6654efa3 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
 	[NETFS_READPAGE]	= "RP",
 	[NETFS_READ_FOR_WRITE]	= "RW",
 	[NETFS_WRITEBACK]	= "WB",
+	[NETFS_DIO_READ]	= "DR",
 };
 
 /*
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 7a78c1665bc9..d46e957812a6 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -20,7 +20,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	struct inode *inode = file ? file_inode(file) : mapping->host;
 	struct netfs_inode *ctx = netfs_inode(inode);
 	struct netfs_io_request *rreq;
-	bool cached = netfs_is_cache_enabled(ctx);
+	bool is_dio = (origin == NETFS_DIO_READ);
+	bool cached = is_dio && netfs_is_cache_enabled(ctx);
 	int ret;
 
 	rreq = kzalloc(ctx->ops->io_request_size ?: sizeof(struct netfs_io_request),
diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index ce2a1a983280..545f0505a91d 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -9,6 +9,7 @@
 #include
 #include "internal.h"
 
+atomic_t netfs_n_rh_dio_read;
 atomic_t netfs_n_rh_readahead;
 atomic_t netfs_n_rh_readpage;
 atomic_t netfs_n_rh_rreq;
@@ -36,7 +37,8 @@ atomic_t netfs_n_wh_write_failed;
 
 void netfs_stats_show(struct seq_file *m)
 {
-	seq_printf(m, "RdHelp : RA=%u RP=%u WB=%u WBZ=%u rr=%u sr=%u\n",
+	seq_printf(m, "RdHelp : DR=%u RA=%u RP=%u WB=%u WBZ=%u rr=%u sr=%u\n",
+		   atomic_read(&netfs_n_rh_dio_read),
 		   atomic_read(&netfs_n_rh_readahead),
 		   atomic_read(&netfs_n_rh_readpage),
 		   atomic_read(&netfs_n_rh_write_begin),
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 8a4aee547c6d..1d7e44d3c915 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -228,6 +228,7 @@ enum netfs_io_origin {
 	NETFS_READPAGE,			/* This read is a synchronous read */
 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+	NETFS_DIO_READ,			/* This is a direct I/O read */
 	nr__netfs_io_origin
 } __mode(byte);
 
@@ -242,6 +243,7 @@ struct netfs_io_request {
 	};
 	struct inode		*inode;		/* The file being accessed */
 	struct address_space	*mapping;	/* The mapping being accessed */
+	struct kiocb		*iocb;		/* AIO completion vector */
 	struct netfs_cache_resources cache_resources;
 	struct list_head	proc_link;	/* Link in netfs_iorequests */
 	struct list_head	subrequests;	/* Contributory I/O operations */
@@ -259,6 +261,7 @@ struct netfs_io_request {
 	atomic_t		nr_copy_ops;	/* Number of copy-to-cache ops in progress */
 	size_t			submitted;	/* Amount submitted for I/O so far */
 	size_t			len;		/* Length of the request */
+	size_t			transferred;	/* Amount to be indicated as transferred */
 	short			error;		/* 0 or error that occurred */
 	enum netfs_io_origin	origin;		/* Origin of the request */
 	bool			direct_bv_unpin; /* T if direct_bv[] must be unpinned */
@@ -376,6 +379,9 @@ struct netfs_cache_ops {
 			     loff_t *_data_start, size_t *_data_len);
 };
 
+/* High-level read API. */
+ssize_t netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter);
+
 /* High-level write API */
 ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 			    struct netfs_group *netfs_group);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index b7426f455086..cc7cb55f3420 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -16,6 +16,7 @@
  * Define enums for tracing information.
  */
 #define netfs_read_traces					\
+	EM(netfs_read_trace_dio_read,		"DIO-READ ")	\
 	EM(netfs_read_trace_expanded,		"EXPANDED ")	\
 	EM(netfs_read_trace_readahead,		"READAHEAD")	\
 	EM(netfs_read_trace_readpage,		"READPAGE ")	\
@@ -31,7 +32,8 @@
 	EM(NETFS_READAHEAD,			"RA")		\
 	EM(NETFS_READPAGE,			"RP")		\
 	EM(NETFS_READ_FOR_WRITE,		"RW")		\
-	E_(NETFS_WRITEBACK,			"WB")
+	EM(NETFS_WRITEBACK,			"WB")		\
+	E_(NETFS_DIO_READ,			"DR")
 
 #define netfs_rreq_traces					\
 	EM(netfs_rreq_trace_assess,		"ASSESS ")	\
@@ -70,6 +72,8 @@
 #define netfs_failures						\
 	EM(netfs_fail_check_write_begin,	"check-write-begin") \
 	EM(netfs_fail_copy_to_cache,		"copy-to-cache") \
+	EM(netfs_fail_dio_read_short,		"dio-read-short") \
+	EM(netfs_fail_dio_read_zero,		"dio-read-zero") \
 	EM(netfs_fail_read,			"read") \
 	EM(netfs_fail_short_read,		"short-read") \
 	EM(netfs_fail_prepare_write,		"prep-write") \
@@ -81,6 +85,7 @@
 	EM(netfs_rreq_trace_put_complete,	"PUT COMPLT ") \
 	EM(netfs_rreq_trace_put_discard,	"PUT DISCARD") \
 	EM(netfs_rreq_trace_put_failed,		"PUT FAILED ") \
+	EM(netfs_rreq_trace_put_no_submit,	"PUT NO-SUBM") \
 	EM(netfs_rreq_trace_put_return,		"PUT RETURN ") \
 	EM(netfs_rreq_trace_put_subreq,		"PUT SUBREQ ") \
 	EM(netfs_rreq_trace_put_work,		"PUT WORK ") \

From patchwork Fri Oct 13 16:03:57 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421173
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 28/53] netfs: Implement unbuffered/DIO write support
Date: Fri, 13 Oct 2023 17:03:57 +0100
Message-ID: <20231013160423.2218093-29-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Implement support for unbuffered writes and direct
I/O writes. If the write is misaligned with respect to the fscrypt block size, then RMW cycles are performed if necessary. DIO writes are a special case of unbuffered writes with extra restriction imposed, such as block size alignment requirements. Also provide a field that can tell the code to add some extra space onto the bounce buffer for use by the filesystem in the case of a content-encrypted file. Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/afs/inode.c | 2 +- fs/netfs/Makefile | 1 + fs/netfs/direct_write.c | 159 +++++++++++++++++++++++++++++++++++ fs/netfs/internal.h | 6 ++ fs/netfs/io.c | 2 +- fs/netfs/main.c | 12 +-- fs/netfs/objects.c | 6 +- fs/netfs/output.c | 24 ++++++ include/linux/netfs.h | 4 + include/trace/events/netfs.h | 4 +- 10 files changed, 210 insertions(+), 10 deletions(-) create mode 100644 fs/netfs/direct_write.c diff --git a/fs/afs/inode.c b/fs/afs/inode.c index 46bc5574d6f5..a8f4301aca9a 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -250,7 +250,7 @@ static void afs_apply_status(struct afs_operation *op, * what's on the server. */ vnode->netfs.remote_i_size = status->size; - if (change_size) { + if (change_size || status->size > i_size_read(inode)) { afs_set_i_size(vnode, status->size); vnode->netfs.zero_point = status->size; inode_set_ctime_to_ts(inode, t); diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index 27643557b443..d5c2809fc029 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -4,6 +4,7 @@ netfs-y := \ buffered_read.o \ buffered_write.o \ direct_read.o \ + direct_write.o \ io.o \ iterator.o \ locking.o \ diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c new file mode 100644 index 000000000000..b1a4921ac4a2 --- /dev/null +++ b/fs/netfs/direct_write.c @@ -0,0 +1,159 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Unbuffered and direct write support. + * + * Copyright (C) 2023 Red Hat, Inc. 
All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include "internal.h" + +static void netfs_cleanup_dio_write(struct netfs_io_request *wreq) +{ + struct inode *inode = wreq->inode; + unsigned long long end = wreq->start + wreq->len; + + if (!wreq->error && + i_size_read(inode) < end) { + if (wreq->netfs_ops->update_i_size) + wreq->netfs_ops->update_i_size(inode, end); + else + i_size_write(inode, end); + } +} + +/* + * Perform an unbuffered write where we may have to do an RMW operation on an + * encrypted file. This can also be used for direct I/O writes. + */ +ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *iter, + struct netfs_group *netfs_group) +{ + struct netfs_io_request *wreq; + unsigned long long start = iocb->ki_pos; + unsigned long long end = start + iov_iter_count(iter); + ssize_t ret, n; + bool async = !is_sync_kiocb(iocb); + + _enter(""); + + /* We're going to need a bounce buffer if what we transmit is going to + * be different in some way to the source buffer, e.g. because it gets + * encrypted/compressed or because it needs expanding to a block size. + */ + // TODO + + _debug("uw %llx-%llx", start, end); + + wreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp, + start, end - start, + iocb->ki_flags & IOCB_DIRECT ? + NETFS_DIO_WRITE : NETFS_UNBUFFERED_WRITE); + if (IS_ERR(wreq)) + return PTR_ERR(wreq); + + { + /* If this is an async op and we're not using a bounce buffer, + * we have to save the source buffer as the iterator is only + * good until we return. In such a case, extract an iterator + * to represent as much of the the output buffer as we can + * manage. Note that the extraction might not be able to + * allocate a sufficiently large bvec array and may shorten the + * request. 
+ */ + if (async || user_backed_iter(iter)) { + n = netfs_extract_user_iter(iter, wreq->len, &wreq->iter, 0); + if (n < 0) { + ret = n; + goto out; + } + wreq->direct_bv = (struct bio_vec *)wreq->iter.bvec; + wreq->direct_bv_count = n; + wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter); + wreq->len = iov_iter_count(&wreq->iter); + } else { + wreq->iter = *iter; + } + + wreq->io_iter = wreq->iter; + } + + /* Copy the data into the bounce buffer and encrypt it. */ + // TODO + + /* Dispatch the write. */ + __set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); + if (async) + wreq->iocb = iocb; + wreq->cleanup = netfs_cleanup_dio_write; + ret = netfs_begin_write(wreq, is_sync_kiocb(iocb), + iocb->ki_flags & IOCB_DIRECT ? + netfs_write_trace_dio_write : + netfs_write_trace_unbuffered_write); + if (ret < 0) { + _debug("begin = %zd", ret); + goto out; + } + + if (!async) { + trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip); + wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, + TASK_UNINTERRUPTIBLE); + + ret = wreq->error; + _debug("waited = %zd", ret); + if (ret == 0) { + ret = wreq->transferred; + iocb->ki_pos += ret; + } + } else { + ret = -EIOCBQUEUED; + } + +out: + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); + return ret; +} + +/** + * netfs_unbuffered_write_iter - Unbuffered write to a file + * @iocb: IO state structure + * @from: iov_iter with data to write + * + * Do an unbuffered write to a file, writing the data directly to the server + * and not lodging the data in the pagecache. 
+ * + * Return: + * * Negative error code if no data has been written at all or + * vfs_fsync_range() failed for a synchronous write + * * Number of bytes written, even for truncated writes + */ +ssize_t netfs_unbuffered_write_iter(struct kiocb *iocb, struct iov_iter *from) +{ + struct file *file = iocb->ki_filp; + struct inode *inode = file->f_mapping->host; + ssize_t ret; + + _enter("%llx,%zx,%llx", iocb->ki_pos, iov_iter_count(from), i_size_read(inode)); + + trace_netfs_write_iter(iocb, from); + + ret = netfs_start_io_direct(inode); + if (ret < 0) + return ret; + ret = generic_write_checks(iocb, from); + if (ret < 0) + goto out; + ret = file_remove_privs(file); + if (ret < 0) + goto out; + ret = file_update_time(file); + if (ret < 0) + goto out; + ret = netfs_unbuffered_write_iter_locked(iocb, from, NULL); +out: + netfs_end_io_direct(inode); + return ret; +} +EXPORT_SYMBOL(netfs_unbuffered_write_iter); diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 0fe9aa5c6114..6a67abdf71c8 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -22,6 +22,12 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq); int netfs_prefetch_for_write(struct file *file, struct folio *folio, size_t offset, size_t len); +/* + * direct_write.c + */ +ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *iter, + struct netfs_group *netfs_group); + /* * io.c */ diff --git a/fs/netfs/io.c b/fs/netfs/io.c index 921daecf5fde..36a3f720193a 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -643,7 +643,7 @@ static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq, subreq->debug_index = (*_debug_index)++; subreq->start = rreq->start + rreq->submitted; - subreq->len = rreq->len - rreq->submitted; + subreq->len = io_iter->count; _debug("slice %llx,%zx,%zx", subreq->start, subreq->len, rreq->submitted); list_add_tail(&subreq->rreq_link, &rreq->subrequests); diff --git a/fs/netfs/main.c b/fs/netfs/main.c index d0eb6654efa3..1cf10f9c4c1f
100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -29,11 +29,13 @@ LIST_HEAD(netfs_io_requests); DEFINE_SPINLOCK(netfs_proc_lock); static const char *netfs_origins[nr__netfs_io_origin] = { - [NETFS_READAHEAD] = "RA", - [NETFS_READPAGE] = "RP", - [NETFS_READ_FOR_WRITE] = "RW", - [NETFS_WRITEBACK] = "WB", - [NETFS_DIO_READ] = "DR", + [NETFS_READAHEAD] = "RA", + [NETFS_READPAGE] = "RP", + [NETFS_READ_FOR_WRITE] = "RW", + [NETFS_WRITEBACK] = "WB", + [NETFS_UNBUFFERED_WRITE] = "UW", + [NETFS_DIO_READ] = "DR", + [NETFS_DIO_WRITE] = "DW", }; /* diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index d46e957812a6..c1218b183197 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -20,8 +20,10 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, struct inode *inode = file ? file_inode(file) : mapping->host; struct netfs_inode *ctx = netfs_inode(inode); struct netfs_io_request *rreq; - bool is_dio = (origin == NETFS_DIO_READ); - bool cached = is_dio && netfs_is_cache_enabled(ctx); + bool is_unbuffered = (origin == NETFS_UNBUFFERED_WRITE || + origin == NETFS_DIO_READ || + origin == NETFS_DIO_WRITE); + bool cached = !is_unbuffered && netfs_is_cache_enabled(ctx); int ret; rreq = kzalloc(ctx->ops->io_request_size ?: sizeof(struct netfs_io_request), diff --git a/fs/netfs/output.c b/fs/netfs/output.c index e93453f4372d..bb42789c7a24 100644 --- a/fs/netfs/output.c +++ b/fs/netfs/output.c @@ -74,11 +74,21 @@ static void netfs_write_terminated(struct netfs_io_request *wreq, bool was_async { struct netfs_io_subrequest *subreq; struct netfs_inode *ctx = netfs_inode(wreq->inode); + size_t transferred = 0; _enter("R=%x[]", wreq->debug_id); trace_netfs_rreq(wreq, netfs_rreq_trace_write_done); + list_for_each_entry(subreq, &wreq->subrequests, rreq_link) { + if (subreq->error || subreq->transferred == 0) + break; + transferred += subreq->transferred; + if (subreq->transferred < subreq->len) + break; + } + wreq->transferred = transferred; + 
list_for_each_entry(subreq, &wreq->subrequests, rreq_link) { if (!subreq->error) continue; @@ -110,11 +120,25 @@ static void netfs_write_terminated(struct netfs_io_request *wreq, bool was_async wreq->cleanup(wreq); + if (wreq->origin == NETFS_DIO_WRITE && + wreq->mapping->nrpages) { + pgoff_t first = wreq->start >> PAGE_SHIFT; + pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT; + invalidate_inode_pages2_range(wreq->mapping, first, last); + } + _debug("finished"); trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip); clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags); wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS); + if (wreq->iocb) { + wreq->iocb->ki_pos += transferred; + if (wreq->iocb->ki_complete) + wreq->iocb->ki_complete( + wreq->iocb, wreq->error ? wreq->error : transferred); + } + netfs_clear_subrequests(wreq, was_async); netfs_put_request(wreq, was_async, netfs_rreq_trace_put_complete); } diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 1d7e44d3c915..052d62625796 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -140,6 +140,7 @@ struct netfs_inode { * on the server */ unsigned long flags; #define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */ +#define NETFS_ICTX_UNBUFFERED 1 /* I/O should not use the pagecache */ }; /* @@ -228,7 +229,9 @@ enum netfs_io_origin { NETFS_READPAGE, /* This read is a synchronous read */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ NETFS_WRITEBACK, /* This write was triggered by writepages */ + NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ NETFS_DIO_READ, /* This is a direct I/O read */ + NETFS_DIO_WRITE, /* This is a direct I/O write */ nr__netfs_io_origin } __mode(byte); @@ -385,6 +388,7 @@ ssize_t netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter); /* High-level write API */ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, struct netfs_group *netfs_group); +ssize_t netfs_unbuffered_write_iter(struct 
kiocb *iocb, struct iov_iter *from); /* Address operations API */ struct readahead_control; diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index cc7cb55f3420..60f98c99fe21 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -33,7 +33,9 @@ EM(NETFS_READPAGE, "RP") \ EM(NETFS_READ_FOR_WRITE, "RW") \ EM(NETFS_WRITEBACK, "WB") \ - E_(NETFS_DIO_READ, "DR") + EM(NETFS_UNBUFFERED_WRITE, "UW") \ + EM(NETFS_DIO_READ, "DR") \ + E_(NETFS_DIO_WRITE, "DW") #define netfs_rreq_traces \ EM(netfs_rreq_trace_assess, "ASSESS ") \
From patchwork Fri Oct 13 16:03:58 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421169
From: David Howells To: Jeff Layton , Steve French Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 29/53] netfs: Implement buffered write API
Date: Fri, 13 Oct 2023 17:03:58 +0100
Message-ID: <20231013160423.2218093-30-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Institute a netfs write helper, netfs_file_write_iter(), to be pointed at by the network filesystem ->write_iter() call. Make it handle buffered writes by calling the previously defined netfs_perform_write() to copy the source data into the pagecache.
Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/buffered_write.c | 83 +++++++++++++++++++++++++++++++++++++++ include/linux/netfs.h | 3 ++ 2 files changed, 86 insertions(+) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 4de6a12149e4..60e7da53cbd2 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -330,3 +330,86 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, goto out; } EXPORT_SYMBOL(netfs_perform_write); + +/** + * netfs_buffered_write_iter_locked - write data to a file + * @iocb: IO state structure (file, offset, etc.) + * @from: iov_iter with data to write + * @netfs_group: Grouping for dirty pages (eg. ceph snaps). + * + * This function does all the work needed for actually writing data to a + * file. It does all basic checks, removes SUID from the file, updates + * modification times and calls proper subroutines depending on whether we + * do direct IO or a standard buffered write. + * + * The caller must hold appropriate locks around this function and have called + * generic_write_checks() already. The caller is also responsible for doing + * any necessary syncing afterwards. + * + * This function does *not* take care of syncing data in case of O_SYNC write. + * A caller has to handle it. This is mainly due to the fact that we want to + * avoid syncing under i_rwsem. 
+ * + * Return: + * * number of bytes written, even for truncated writes + * * negative error code if no data has been written at all + */ +ssize_t netfs_buffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *from, + struct netfs_group *netfs_group) +{ + struct file *file = iocb->ki_filp; + ssize_t ret; + + trace_netfs_write_iter(iocb, from); + + ret = file_remove_privs(file); + if (ret) + return ret; + + ret = file_update_time(file); + if (ret) + return ret; + + return netfs_perform_write(iocb, from, netfs_group); +} +EXPORT_SYMBOL(netfs_buffered_write_iter_locked); + +/** + * netfs_file_write_iter - write data to a file + * @iocb: IO state structure + * @from: iov_iter with data to write + * + * Perform a write to a file, writing into the pagecache if possible and doing + * an unbuffered write instead if not. + * + * Return: + * * Negative error code if no data has been written at all or + * vfs_fsync_range() failed for a synchronous write + * * Number of bytes written, even for truncated writes + */ +ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) +{ + struct file *file = iocb->ki_filp; + struct inode *inode = file->f_mapping->host; + struct netfs_inode *ictx = netfs_inode(inode); + ssize_t ret; + + _enter("%llx,%zx,%llx", iocb->ki_pos, iov_iter_count(from), i_size_read(inode)); + + if ((iocb->ki_flags & IOCB_DIRECT) || + test_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags)) + return netfs_unbuffered_write_iter(iocb, from); + + ret = netfs_start_io_write(inode); + if (ret < 0) + return ret; + + ret = generic_write_checks(iocb, from); + if (ret > 0) + ret = netfs_buffered_write_iter_locked(iocb, from, NULL); + netfs_end_io_write(inode); + if (ret > 0) + ret = generic_write_sync(iocb, ret); + return ret; +} +EXPORT_SYMBOL(netfs_file_write_iter); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 052d62625796..d1dc7ba62f17 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -388,7 +388,10 @@ ssize_t
netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter); /* High-level write API */ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, struct netfs_group *netfs_group); +ssize_t netfs_buffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *from, + struct netfs_group *netfs_group); ssize_t netfs_unbuffered_write_iter(struct kiocb *iocb, struct iov_iter *from); +ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from); /* Address operations API */ struct readahead_control;
From patchwork Fri Oct 13 16:03:59 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421171
From: David Howells To: Jeff Layton , Steve French Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 30/53] netfs: Allow buffered shared-writeable mmap through netfs_page_mkwrite()
Date: Fri, 13 Oct 2023 17:03:59 +0100
Message-ID: <20231013160423.2218093-31-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Provide an entry point to delegate a filesystem's ->page_mkwrite() to. This checks for conflicting writes, then attaches any netfs-specific group marking (e.g. ceph snap) to the page to be considered dirty.
Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/buffered_write.c | 59 +++++++++++++++++++++++++++++++++++++++ include/linux/netfs.h | 4 +++ 2 files changed, 63 insertions(+) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 60e7da53cbd2..3c1f26f32351 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -413,3 +413,62 @@ ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) return ret; } EXPORT_SYMBOL(netfs_file_write_iter); + +/* + * Notification that a previously read-only page is about to become writable. + * Note that the caller indicates a single page of a multipage folio. + */ +vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group) +{ + struct folio *folio = page_folio(vmf->page); + struct file *file = vmf->vma->vm_file; + struct inode *inode = file_inode(file); + vm_fault_t ret = VM_FAULT_RETRY; + int err; + + _enter("%lx", folio->index); + + sb_start_pagefault(inode->i_sb); + + if (folio_wait_writeback_killable(folio)) + goto out; + + if (folio_lock_killable(folio) < 0) + goto out; + + /* Can we see a streaming write here? 
*/ + if (WARN_ON(!folio_test_uptodate(folio))) { + ret = VM_FAULT_SIGBUS | VM_FAULT_LOCKED; + goto out; + } + + if (netfs_folio_group(folio) != netfs_group) { + folio_unlock(folio); + err = filemap_fdatawait_range(inode->i_mapping, + folio_pos(folio), + folio_pos(folio) + folio_size(folio)); + switch (err) { + case 0: + ret = VM_FAULT_RETRY; + goto out; + case -ENOMEM: + ret = VM_FAULT_OOM; + goto out; + default: + ret = VM_FAULT_SIGBUS; + goto out; + } + } + + if (folio_test_dirty(folio)) + trace_netfs_folio(folio, netfs_folio_trace_mkwrite_plus); + else + trace_netfs_folio(folio, netfs_folio_trace_mkwrite); + netfs_set_group(folio, netfs_group); + file_update_time(file); + ret = VM_FAULT_LOCKED; +out: + sb_end_pagefault(inode->i_sb); + return ret; +} +EXPORT_SYMBOL(netfs_page_mkwrite); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index d1dc7ba62f17..e2a5a441b7fc 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -403,6 +403,10 @@ int netfs_write_begin(struct netfs_inode *, struct file *, void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length); bool netfs_release_folio(struct folio *folio, gfp_t gfp); +/* VMA operations API. */ +vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group); + +/* (Sub)request management API. 
*/ void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); void netfs_get_subrequest(struct netfs_io_subrequest *subreq, enum netfs_sreq_ref_trace what); From patchwork Fri Oct 13 16:04:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13421172 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 37068CDB488 for ; Fri, 13 Oct 2023 16:06:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DC94580024; Fri, 13 Oct 2023 12:06:26 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D7CDB80022; Fri, 13 Oct 2023 12:06:26 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BF37980024; Fri, 13 Oct 2023 12:06:26 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id AA6F680022 for ; Fri, 13 Oct 2023 12:06:26 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 84A5FB4E83 for ; Fri, 13 Oct 2023 16:06:26 +0000 (UTC) X-FDA: 81340915572.13.2FE21B9 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf07.hostedemail.com (Postfix) with ESMTP id CF9B440032 for ; Fri, 13 Oct 2023 16:06:24 +0000 (UTC) Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=aOXeFzim; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf07.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Message-Signature: i=1; 
a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1697213184; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=xgDLRrlPZ+fbYBYQqYIYXWKYaa9HMgYZfbqZFNilvm4=; b=nCbw+s+g9uyBKSJU5T/6kGBM+gpU8/4DQWZ70DDkCkR1pT6ZWW64EU6sDhlRTyYPJH7fBn PRYwJ5BD7BNTEYt+XzPE3IPMc1fEEGeJefMsipAJ+3oQrzoyD4wVJq0UBjt2Tho7jltmJQ AfNz52mAd0nlEuSOirnr4KYgFgYhyy8= ARC-Authentication-Results: i=1; imf07.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=aOXeFzim; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf07.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1697213184; a=rsa-sha256; cv=none; b=Zn4PnCZNd6lW6pHsVTjLRponQlmTluQstQkIkggVZW4MmBKQhXcmuXB1Uy/HJDgXdQUEdB 8knkPnLn7zaIKa599gUXqsOEoF1rYT1GleNVeI+Ef/wjvCRvFl3Evi/e5YuQOQvpeuwu5t IfP7nks7Exsa8bvPhdVOqftYMN90OYU= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1697213184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=xgDLRrlPZ+fbYBYQqYIYXWKYaa9HMgYZfbqZFNilvm4=; b=aOXeFzimSlzHPJLIwGF9D+tniD8dw4taE4M3RhlztwAz2+0TlM35ec5JZUmqJFGaENpwI7 TtyEGUdjIhBegkIM312eUYioFfm2y89IxWM4Y1Wzz00t3uZ8H1jf6pMjnGhNo9IOh7ZVJq eWdsCGWP9tCtKm4ulvRXqb+ICc2RMGU= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-128-YyJ4fOlmM3q6kor9W6-rTQ-1; Fri, 13 Oct 2023 12:06:16 -0400 X-MC-Unique: YyJ4fOlmM3q6kor9W6-rTQ-1 Received: from 
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 31/53] netfs: Provide netfs_file_read_iter()
Date: Fri, 13 Oct 2023 17:04:00 +0100
Message-ID: <20231013160423.2218093-32-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Provide a top-level-ish function that can be pointed to directly by
->read_iter file op.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_read.c | 33 +++++++++++++++++++++++++++++++++
 include/linux/netfs.h    |  1 +
 2 files changed, 34 insertions(+)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 374707df6575..ab9f8e123245 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -564,3 +564,36 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 	_leave(" = %d", ret);
 	return ret;
 }
+
+/**
+ * netfs_file_read_iter - Generic filesystem read routine
+ * @iocb: kernel I/O control block
+ * @iter: destination for the data read
+ *
+ * This is the ->read_iter() routine for all filesystems that can use the page
+ * cache directly.
+ *
+ * The IOCB_NOWAIT flag in iocb->ki_flags indicates that -EAGAIN shall be
+ * returned when no data can be read without waiting for I/O requests to
+ * complete; it doesn't prevent readahead.
+ *
+ * The IOCB_NOIO flag in iocb->ki_flags indicates that no new I/O requests
+ * shall be made for the read or for readahead.  When no data can be read,
+ * -EAGAIN shall be returned.  When readahead would be triggered, a partial,
+ * possibly empty read shall be returned.
+ *
+ * Return:
+ * * number of bytes copied, even for partial reads
+ * * negative error code (or 0 if IOCB_NOIO) if nothing was read
+ */
+ssize_t netfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+{
+	struct netfs_inode *ictx = netfs_inode(iocb->ki_filp->f_mapping->host);
+
+	if ((iocb->ki_flags & IOCB_DIRECT) ||
+	    test_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags))
+		return netfs_unbuffered_read_iter(iocb, iter);
+
+	return filemap_read(iocb, iter, 0);
+}
+EXPORT_SYMBOL(netfs_file_read_iter);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index e2a5a441b7fc..6e02a68a51f7 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -384,6 +384,7 @@ struct netfs_cache_ops {
 
 /* High-level read API. */
 ssize_t netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter);
+ssize_t netfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
 
 /* High-level write API */
 ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
From patchwork Fri Oct 13 16:04:01 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421174
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 32/53] netfs: Provide a writepages implementation
Date: Fri, 13 Oct 2023 17:04:01 +0100
Message-ID: <20231013160423.2218093-33-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Provide an implementation of writepages for network filesystems to
delegate to.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_write.c | 627 ++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h     |   2 +
 2 files changed, 629 insertions(+)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 3c1f26f32351..d5a5a315fbd3 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -32,6 +32,18 @@ static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group
 		folio_attach_private(folio, netfs_get_group(netfs_group));
 }
 
+#if IS_ENABLED(CONFIG_FSCACHE)
+static void netfs_folio_start_fscache(bool caching, struct folio *folio)
+{
+	if (caching)
+		folio_start_fscache(folio);
+}
+#else
+static void netfs_folio_start_fscache(bool caching, struct folio *folio)
+{
+}
+#endif
+
 /*
  * Decide how we should modify a folio.  We might be attempting to do
  * write-streaming, in which case we don't want to a local RMW cycle if we can
@@ -472,3 +484,618 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 	return ret;
 }
 EXPORT_SYMBOL(netfs_page_mkwrite);
+
+/*
+ * Kill all the pages in the given range
+ */
+static void netfs_kill_pages(struct address_space *mapping,
+			     loff_t start, loff_t len)
+{
+	struct folio *folio;
+	pgoff_t index = start / PAGE_SIZE;
+	pgoff_t last = (start + len - 1) / PAGE_SIZE, next;
+
+	_enter("%llx-%llx", start, start + len - 1);
+
+	do {
+		_debug("kill %lx (to %lx)", index, last);
+
+		folio = filemap_get_folio(mapping, index);
+		if (IS_ERR(folio)) {
+			next = index + 1;
+			continue;
+		}
+
+		next = folio_next_index(folio);
+
+		folio_clear_uptodate(folio);
+		folio_end_writeback(folio);
+		folio_lock(folio);
+		trace_netfs_folio(folio, netfs_folio_trace_kill);
+		generic_error_remove_page(mapping, &folio->page);
+		folio_unlock(folio);
+		folio_put(folio);
+
+	} while (index = next, index <= last);
+
+	_leave("");
+}
+
+/*
+ * Redirty all the pages in a given range.
+ */
+static void netfs_redirty_pages(struct address_space *mapping,
+				loff_t start, loff_t len)
+{
+	struct folio *folio;
+	pgoff_t index = start / PAGE_SIZE;
+	pgoff_t last = (start + len - 1) / PAGE_SIZE, next;
+
+	_enter("%llx-%llx", start, start + len - 1);
+
+	do {
+		_debug("redirty %llx @%llx", len, start);
+
+		folio = filemap_get_folio(mapping, index);
+		if (IS_ERR(folio)) {
+			next = index + 1;
+			continue;
+		}
+
+		next = folio_next_index(folio);
+		trace_netfs_folio(folio, netfs_folio_trace_redirty);
+		filemap_dirty_folio(mapping, folio);
+		folio_end_writeback(folio);
+		folio_put(folio);
+	} while (index = next, index <= last);
+
+	balance_dirty_pages_ratelimited(mapping);
+
+	_leave("");
+}
+
+/*
+ * Completion of write to server
+ */
+static void netfs_pages_written_back(struct netfs_io_request *wreq)
+{
+	struct address_space *mapping = wreq->mapping;
+	struct netfs_folio *finfo;
+	struct netfs_group *group = NULL;
+	struct folio *folio;
+	pgoff_t last;
+	int gcount = 0;
+
+	XA_STATE(xas, &mapping->i_pages, wreq->start / PAGE_SIZE);
+
+	_enter("%llx-%llx", wreq->start, wreq->start + wreq->len);
+
+	rcu_read_lock();
+
+	last = (wreq->start + wreq->len - 1) / PAGE_SIZE;
+	xas_for_each(&xas, folio, last) {
+		WARN(!folio_test_writeback(folio),
+		     "bad %zx @%llx page %lx %lx\n",
+		     wreq->len, wreq->start, folio_index(folio), last);
+
+		if ((finfo = netfs_folio_info(folio))) {
+			/* Streaming writes cannot be redirtied whilst under
+			 * writeback, so discard the streaming record.
+			 */
+			folio_detach_private(folio);
+			group = finfo->netfs_group;
+			gcount++;
+			trace_netfs_folio(folio, netfs_folio_trace_clear_s);
+		} else if ((group = netfs_folio_group(folio))) {
+			/* Need to detach the group pointer if the page didn't
+			 * get redirtied.  If it has been redirtied, then it
+			 * must be within the same group.
+			 */
+			if (folio_test_dirty(folio)) {
+				trace_netfs_folio(folio, netfs_folio_trace_redirtied);
+				goto end_wb;
+			}
+			if (folio_trylock(folio)) {
+				if (!folio_test_dirty(folio)) {
+					folio_detach_private(folio);
+					gcount++;
+					trace_netfs_folio(folio, netfs_folio_trace_clear_g);
+				} else {
+					trace_netfs_folio(folio, netfs_folio_trace_redirtied);
+				}
+				folio_unlock(folio);
+				goto end_wb;
+			}
+
+			xas_pause(&xas);
+			rcu_read_unlock();
+			folio_lock(folio);
+			if (!folio_test_dirty(folio)) {
+				folio_detach_private(folio);
+				gcount++;
+				trace_netfs_folio(folio, netfs_folio_trace_clear_g);
+			} else {
+				trace_netfs_folio(folio, netfs_folio_trace_redirtied);
+			}
+			folio_unlock(folio);
+			rcu_read_lock();
+		} else {
+			trace_netfs_folio(folio, netfs_folio_trace_clear);
+		}
+	end_wb:
+		folio_end_writeback(folio);
+	}
+
+	rcu_read_unlock();
+	netfs_put_group_many(group, gcount);
+	_leave("");
+}
+
+/*
+ * Deal with the disposition of the folios that are under writeback to close
+ * out the operation.
+ */
+static void netfs_cleanup_buffered_write(struct netfs_io_request *wreq)
+{
+	struct address_space *mapping = wreq->mapping;
+
+	_enter("");
+
+	switch (wreq->error) {
+	case 0:
+		netfs_pages_written_back(wreq);
+		break;
+
+	default:
+		pr_notice("R=%08x Unexpected error %d\n", wreq->debug_id, wreq->error);
+		fallthrough;
+	case -EACCES:
+	case -EPERM:
+	case -ENOKEY:
+	case -EKEYEXPIRED:
+	case -EKEYREJECTED:
+	case -EKEYREVOKED:
+	case -ENETRESET:
+	case -EDQUOT:
+	case -ENOSPC:
+		netfs_redirty_pages(mapping, wreq->start, wreq->len);
+		break;
+
+	case -EROFS:
+	case -EIO:
+	case -EREMOTEIO:
+	case -EFBIG:
+	case -ENOENT:
+	case -ENOMEDIUM:
+	case -ENXIO:
+		netfs_kill_pages(mapping, wreq->start, wreq->len);
+		break;
+	}
+
+	if (wreq->error)
+		mapping_set_error(mapping, wreq->error);
+	if (wreq->netfs_ops->done)
+		wreq->netfs_ops->done(wreq);
+}
+
+/*
+ * Extend the region to be written back to include subsequent contiguously
+ * dirty pages if possible, but don't sleep while doing so.
+ *
+ * If this page holds new content, then we can include filler zeros in the
+ * writeback.
+ */
+static void netfs_extend_writeback(struct address_space *mapping,
+				   struct netfs_group *group,
+				   struct xa_state *xas,
+				   long *_count,
+				   loff_t start,
+				   loff_t max_len,
+				   bool caching,
+				   size_t *_len)
+{
+	struct netfs_folio *finfo;
+	struct folio_batch fbatch;
+	struct folio *folio;
+	unsigned int i;
+	pgoff_t index = (start + *_len) / PAGE_SIZE;
+	size_t len;
+	void *priv;
+	bool stop = true;
+
+	folio_batch_init(&fbatch);
+
+	do {
+		/* Firstly, we gather up a batch of contiguous dirty pages
+		 * under the RCU read lock - but we can't clear the dirty flags
+		 * there if any of those pages are mapped.
+		 */
+		rcu_read_lock();
+
+		xas_for_each(xas, folio, ULONG_MAX) {
+			stop = true;
+			if (xas_retry(xas, folio))
+				continue;
+			if (xa_is_value(folio))
+				break;
+			if (folio_index(folio) != index)
+				break;
+
+			priv = rcu_dereference(*(__force void __rcu **)&folio->private);
+			if ((const struct netfs_group *)priv != group) {
+				finfo = (void *)((unsigned long)priv & ~NETFS_FOLIO_INFO);
+				if (finfo->netfs_group != group)
+					break;
+				if (finfo->dirty_offset > 0)
+					break;
+			}
+
+			if (!folio_try_get_rcu(folio)) {
+				xas_reset(xas);
+				continue;
+			}
+
+			/* Has the folio moved or been split? */
+			if (unlikely(folio != xas_reload(xas))) {
+				folio_put(folio);
+				break;
+			}
+
+			if (!folio_trylock(folio)) {
+				folio_put(folio);
+				break;
+			}
+			if (!folio_test_dirty(folio) ||
+			    folio_test_writeback(folio) ||
+			    folio_test_fscache(folio)) {
+				folio_unlock(folio);
+				folio_put(folio);
+				break;
+			}
+
+			stop = false;
+			len = folio_size(folio);
+			priv = folio->private;
+			if ((const struct netfs_group *)priv != group) {
+				stop = true;
+				finfo = (void *)((unsigned long)priv & ~NETFS_FOLIO_INFO);
+				if (finfo->netfs_group != group ||
+				    finfo->dirty_offset > 0) {
+					folio_unlock(folio);
+					folio_put(folio);
+					break;
+				}
+				len = finfo->dirty_len;
+			}
+
+			index += folio_nr_pages(folio);
+			*_count -= folio_nr_pages(folio);
+			*_len += len;
+			if (*_len >= max_len || *_count <= 0)
+				stop = true;
+
+			if (!folio_batch_add(&fbatch, folio))
+				break;
+			if (stop)
+				break;
+		}
+
+		xas_pause(xas);
+		rcu_read_unlock();
+
+		/* Now, if we obtained any folios, we can shift them to being
+		 * writable and mark them for caching.
+		 */
+		if (!folio_batch_count(&fbatch))
+			break;
+
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			folio = fbatch.folios[i];
+			trace_netfs_folio(folio, netfs_folio_trace_store_plus);
+
+			if (!folio_clear_dirty_for_io(folio))
+				BUG();
+			if (folio_start_writeback(folio))
+				BUG();
+			netfs_folio_start_fscache(caching, folio);
+			folio_unlock(folio);
+		}
+
+		folio_batch_release(&fbatch);
+		cond_resched();
+	} while (!stop);
+}
+
+/*
+ * Synchronously write back the locked page and any subsequent non-locked dirty
+ * pages.
+ */
+static ssize_t netfs_write_back_from_locked_folio(struct address_space *mapping,
+						  struct writeback_control *wbc,
+						  struct netfs_group *group,
+						  struct xa_state *xas,
+						  struct folio *folio,
+						  unsigned long long start,
+						  unsigned long long end)
+{
+	struct netfs_io_request *wreq;
+	struct netfs_folio *finfo;
+	struct netfs_inode *ctx = netfs_inode(mapping->host);
+	unsigned long long i_size = i_size_read(&ctx->inode);
+	size_t len, max_len;
+	bool caching = netfs_is_cache_enabled(ctx);
+	long count = wbc->nr_to_write;
+	int ret;
+
+	_enter(",%lx,%llx-%llx,%u", folio_index(folio), start, end, caching);
+
+	wreq = netfs_alloc_request(mapping, NULL, start, folio_size(folio),
+				   NETFS_WRITEBACK);
+	if (IS_ERR(wreq))
+		return PTR_ERR(wreq);
+
+	if (!folio_clear_dirty_for_io(folio))
+		BUG();
+	if (folio_start_writeback(folio))
+		BUG();
+	netfs_folio_start_fscache(caching, folio);
+
+	count -= folio_nr_pages(folio);
+
+	/* Find all consecutive lockable dirty pages that have contiguous
+	 * written regions, stopping when we find a page that is not
+	 * immediately lockable, is not dirty or is missing, or we reach the
+	 * end of the range.
+	 */
+	trace_netfs_folio(folio, netfs_folio_trace_store);
+
+	len = folio_size(folio);
+	finfo = netfs_folio_info(folio);
+	if (finfo) {
+		start += finfo->dirty_offset;
+		if (finfo->dirty_offset + finfo->dirty_len != len) {
+			len = finfo->dirty_len;
+			goto cant_expand;
+		}
+		len = finfo->dirty_len;
+	}
+
+	if (start < i_size) {
+		/* Trim the write to the EOF; the extra data is ignored.  Also
+		 * put an upper limit on the size of a single storedata op.
+		 */
+		max_len = 65536 * 4096;
+		max_len = min_t(unsigned long long, max_len, end - start + 1);
+		max_len = min_t(unsigned long long, max_len, i_size - start);
+
+		if (len < max_len)
+			netfs_extend_writeback(mapping, group, xas, &count,
+					       start, max_len, caching, &len);
+	}
+
+cant_expand:
+	len = min_t(unsigned long long, len, i_size - start);
+
+	/* We now have a contiguous set of dirty pages, each with writeback
+	 * set; the first page is still locked at this point, but all the rest
+	 * have been unlocked.
+	 */
+	folio_unlock(folio);
+	wreq->start = start;
+	wreq->len = len;
+
+	if (start < i_size) {
+		_debug("write back %zx @%llx [%llx]", len, start, i_size);
+
+		/* Speculatively write to the cache.  We have to fix this up
+		 * later if the store fails.
+		 */
+		wreq->cleanup = netfs_cleanup_buffered_write;
+
+		iov_iter_xarray(&wreq->iter, ITER_SOURCE, &mapping->i_pages, start, len);
+		__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
+		ret = netfs_begin_write(wreq, true, netfs_write_trace_writeback);
+		if (ret == 0 || ret == -EIOCBQUEUED)
+			wbc->nr_to_write -= len / PAGE_SIZE;
+	} else {
+		_debug("write discard %zx @%llx [%llx]", len, start, i_size);
+
+		/* The dirty region was entirely beyond the EOF. */
+		fscache_clear_page_bits(mapping, start, len, caching);
+		netfs_pages_written_back(wreq);
+		ret = 0;
+	}
+
+	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
+	_leave(" = 0");
+	return 0;
+}
+
+/*
+ * Write a region of pages back to the server
+ */
+static ssize_t netfs_writepages_begin(struct address_space *mapping,
+				      struct writeback_control *wbc,
+				      struct netfs_group *group,
+				      struct xa_state *xas,
+				      unsigned long long *_start,
+				      unsigned long long end)
+{
+	const struct netfs_folio *finfo;
+	struct folio *folio;
+	unsigned long long start = *_start;
+	ssize_t ret;
+	void *priv;
+	int skips = 0;
+
+	_enter("%llx,%llx,", start, end);
+
+search_again:
+	/* Find the first dirty page in the group. */
+	rcu_read_lock();
+
+	for (;;) {
+		folio = xas_find_marked(xas, end / PAGE_SIZE, PAGECACHE_TAG_DIRTY);
+		if (xas_retry(xas, folio) || xa_is_value(folio))
+			continue;
+		if (!folio)
+			break;
+
+		/* Skip any dirty folio that's not in the group of interest. */
+		priv = rcu_dereference(*(__force void __rcu **)&folio->private);
+		if ((const struct netfs_group *)priv != group) {
+			finfo = (void *)((unsigned long)priv & ~NETFS_FOLIO_INFO);
+			if (finfo->netfs_group != group)
+				continue;
+		}
+
+		if (!folio_try_get_rcu(folio)) {
+			xas_reset(xas);
+			continue;
+		}
+
+		if (unlikely(folio != xas_reload(xas))) {
+			folio_put(folio);
+			xas_reset(xas);
+			continue;
+		}
+
+		xas_pause(xas);
+		break;
+	}
+	rcu_read_unlock();
+	if (!folio)
+		return 0;
+
+	start = folio_pos(folio); /* May regress with THPs */
+
+	_debug("wback %lx", folio_index(folio));
+
+	/* At this point we hold neither the i_pages lock nor the page lock:
+	 * the page may be truncated or invalidated (changing page->mapping to
+	 * NULL), or even swizzled back from swapper_space to tmpfs file
+	 * mapping
+	 */
+lock_again:
+	if (wbc->sync_mode != WB_SYNC_NONE) {
+		ret = folio_lock_killable(folio);
+		if (ret < 0)
+			return ret;
+	} else {
+		if (!folio_trylock(folio))
+			goto search_again;
+	}
+
+	if (folio->mapping != mapping ||
+	    !folio_test_dirty(folio)) {
+		start += folio_size(folio);
+		folio_unlock(folio);
+		goto search_again;
+	}
+
+	if (folio_test_writeback(folio) ||
+	    folio_test_fscache(folio)) {
+		folio_unlock(folio);
+		if (wbc->sync_mode != WB_SYNC_NONE) {
+			folio_wait_writeback(folio);
+#ifdef CONFIG_NETFS_FSCACHE
+			folio_wait_fscache(folio);
+#endif
+			goto lock_again;
+		}
+
+		start += folio_size(folio);
+		if (wbc->sync_mode == WB_SYNC_NONE) {
+			if (skips >= 5 || need_resched()) {
+				ret = 0;
+				goto out;
+			}
+			skips++;
+		}
+		goto search_again;
+	}
+
+	ret = netfs_write_back_from_locked_folio(mapping, wbc, group, xas,
+						 folio, start, end);
+out:
+	if (ret > 0)
+		*_start = start + ret;
+	_leave(" = %zd [%llx]", ret, *_start);
+	return ret;
+}
+
+/*
+ * Write a region of pages back to the server
+ */
+static int netfs_writepages_region(struct address_space *mapping,
+				   struct writeback_control *wbc,
+				   struct netfs_group *group,
+				   unsigned long long *_start,
+				   unsigned long long end)
+{
+	ssize_t ret;
+
+	XA_STATE(xas, &mapping->i_pages, *_start / PAGE_SIZE);
+
+	do {
+		ret = netfs_writepages_begin(mapping, wbc, group, &xas,
+					     _start, end);
+		if (ret > 0 && wbc->nr_to_write > 0)
+			cond_resched();
+	} while (ret > 0 && wbc->nr_to_write > 0);
+
+	return ret > 0 ? 0 : ret;
+}
+
+/*
+ * write some of the pending data back to the server
+ */
+int netfs_writepages(struct address_space *mapping,
+		     struct writeback_control *wbc)
+{
+	struct netfs_group *group = NULL;
+	loff_t start, end;
+	int ret;
+
+	_enter("");
+
+	/* We have to be careful as we can end up racing with setattr()
+	 * truncating the pagecache since the caller doesn't take a lock here
+	 * to prevent it.
+	 */
+
+	if (wbc->range_cyclic && mapping->writeback_index) {
+		start = mapping->writeback_index * PAGE_SIZE;
+		ret = netfs_writepages_region(mapping, wbc, group,
+					      &start, LLONG_MAX);
+		if (ret < 0)
+			goto out;
+
+		if (wbc->nr_to_write <= 0) {
+			mapping->writeback_index = start / PAGE_SIZE;
+			goto out;
+		}
+
+		start = 0;
+		end = mapping->writeback_index * PAGE_SIZE;
+		mapping->writeback_index = 0;
+		ret = netfs_writepages_region(mapping, wbc, group, &start, end);
+		if (ret == 0)
+			mapping->writeback_index = start / PAGE_SIZE;
+	} else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
+		start = 0;
+		ret = netfs_writepages_region(mapping, wbc, group,
+					      &start, LLONG_MAX);
+		if (wbc->nr_to_write > 0 && ret == 0)
+			mapping->writeback_index = start / PAGE_SIZE;
+	} else {
+		start = wbc->range_start;
+		ret = netfs_writepages_region(mapping, wbc, group,
+					      &start, wbc->range_end);
+	}
+
+out:
+	_leave(" = %d", ret);
+	return ret;
+}
+EXPORT_SYMBOL(netfs_writepages);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 6e02a68a51f7..fb4f4f826b93 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -401,6 +401,8 @@ int netfs_read_folio(struct file *, struct folio *);
 int netfs_write_begin(struct netfs_inode *, struct file *,
 		      struct address_space *, loff_t pos, unsigned int len,
 		      struct folio **, void **fsdata);
+int netfs_writepages(struct address_space *mapping,
+		     struct writeback_control *wbc);
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
 bool netfs_release_folio(struct folio *folio, gfp_t gfp);
From patchwork Fri Oct 13 16:04:02 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421176
GownP5JWiDDw9k76/vxUMRJy1Sbifh/Qio8qom8YgPZCxQTX94gVLjqpZc0rK0BtkeJ8Zs J0nYFYeAGF0MZewdx+cLMvcX0aDqazg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-640-oOqr7qHpOt-wp8G5blDJHw-1; Fri, 13 Oct 2023 12:06:23 -0400 X-MC-Unique: oOqr7qHpOt-wp8G5blDJHw-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 259D588B7B8; Fri, 13 Oct 2023 16:06:22 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.42.28.226]) by smtp.corp.redhat.com (Postfix) with ESMTP id 61A841C060DF; Fri, 13 Oct 2023 16:06:19 +0000 (UTC) From: David Howells To: Jeff Layton , Steve French Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com Subject: [RFC PATCH 33/53] netfs: Provide minimum blocksize parameter Date: Fri, 13 Oct 2023 17:04:02 +0100 Message-ID: <20231013160423.2218093-34-dhowells@redhat.com> In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com> References: <20231013160423.2218093-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.7 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 96C9E2001D X-Stat-Signature: 856npsbk9coj69fssqwfg15o9i3c8csf X-Rspam-User: X-HE-Tag: 1697213197-916396 X-HE-Meta: 
Add a parameter for minimum blocksize in the netfs_i_context struct. This can be used, for instance, to force I/O alignment for content encryption. It also requires the use of an RMW cycle if a write we want to do doesn't meet the block alignment requirements.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_read.c  | 26 ++++++++++++++++++++++----
 fs/netfs/buffered_write.c |  3 ++-
 fs/netfs/direct_read.c    |  3 ++-
 include/linux/netfs.h     |  2 ++
 4 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index ab9f8e123245..e06461ef0bfa 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -527,14 +527,26 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 	struct address_space *mapping = folio_file_mapping(folio);
 	struct netfs_inode *ctx = netfs_inode(mapping->host);
 	unsigned long long start = folio_pos(folio);
-	size_t flen = folio_size(folio);
+	unsigned long long i_size, rstart, end;
+	size_t rlen;
 	int ret;

-	_enter("%zx @%llx", flen, start);
+	DEFINE_READAHEAD(ractl, file, NULL, mapping, folio_index(folio));
+
+	_enter("%zx @%llx", len, start);

 	ret = -ENOMEM;

-	rreq = netfs_alloc_request(mapping, file, start, flen,
+	i_size = i_size_read(mapping->host);
+	end = round_up(start + len, 1U << ctx->min_bshift);
+	if (end > i_size) {
+		unsigned long long limit = round_up(start + len, PAGE_SIZE);
+		end = max(limit, round_up(i_size, PAGE_SIZE));
+	}
+	rstart = round_down(start, 1U << ctx->min_bshift);
+	rlen   = end - rstart;
+
+	rreq = netfs_alloc_request(mapping, file, rstart, rlen,
 				   NETFS_READ_FOR_WRITE);
 	if (IS_ERR(rreq)) {
 		ret = PTR_ERR(rreq);
@@ -548,7 +560,13 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 		goto error_put;

 	netfs_stat(&netfs_n_rh_write_begin);
-	trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write);
+	trace_netfs_read(rreq, rstart, rlen, netfs_read_trace_prefetch_for_write);
+
+	/* Expand the request to meet caching requirements and download
+	 * preferences.
+	 */
+	ractl._nr_pages = folio_nr_pages(folio);
+	netfs_rreq_expand(rreq, &ractl);

 	/* Set up the output buffer */
 	iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index d5a5a315fbd3..7163fcc05206 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -80,7 +80,8 @@ static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx,
 	if (file->f_mode & FMODE_READ)
 		return NETFS_JUST_PREFETCH;

-	if (netfs_is_cache_enabled(ctx))
+	if (netfs_is_cache_enabled(ctx) ||
+	    ctx->min_bshift > 0)
 		return NETFS_JUST_PREFETCH;

 	if (!finfo)

diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 1d26468aafd9..52ad8fa66dd5 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -185,7 +185,8 @@ static ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_
	 * will then need to pad the request out to the minimum block size.
	 */
	if (test_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags)) {
-		start = rreq->start;
+		min_bsize = 1ULL << ctx->min_bshift;
+		start = round_down(rreq->start, min_bsize);
		end = min_t(unsigned long long,
			    round_up(rreq->start + rreq->len, min_bsize),
			    ctx->remote_i_size);

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index fb4f4f826b93..6244f7a9a44a 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -141,6 +141,7 @@ struct netfs_inode {
	unsigned long		flags;
#define NETFS_ICTX_ODIRECT	0	/* The file has DIO in progress */
#define NETFS_ICTX_UNBUFFERED	1	/* I/O should not use the pagecache */
+	unsigned char		min_bshift;	/* log2 min block size for bounding box or 0 */
 };

@@ -462,6 +463,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
	ctx->remote_i_size = i_size_read(&ctx->inode);
	ctx->zero_point = ctx->remote_i_size;
	ctx->flags = 0;
+	ctx->min_bshift = 0;
#if IS_ENABLED(CONFIG_FSCACHE)
	ctx->cache = NULL;
#endif

From patchwork Fri Oct 13 16:04:03 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421175
From: David Howells
To: Jeff Layton, Steve French
Subject: [RFC PATCH 34/53] netfs: Make netfs_skip_folio_read() take account of blocksize
Date: Fri, 13 Oct 2023 17:04:03 +0100
Message-ID: <20231013160423.2218093-35-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Make netfs_skip_folio_read() take account of blocksize such as crypto blocksize.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_read.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index e06461ef0bfa..de696aaaefbd 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -337,6 +337,7 @@ EXPORT_SYMBOL(netfs_read_folio);

 /*
  * Prepare a folio for writing without reading first
+ * @ctx: File context
  * @folio: The folio being prepared
  * @pos: starting position for the write
  * @len: length of write
@@ -350,32 +351,41 @@ EXPORT_SYMBOL(netfs_read_folio);
  * If any of these criteria are met, then zero out the unwritten parts
  * of the folio and return true. Otherwise, return false.
  */
-static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
-				  bool always_fill)
+static bool netfs_skip_folio_read(struct netfs_inode *ctx, struct folio *folio,
+				  loff_t pos, size_t len, bool always_fill)
 {
 	struct inode *inode = folio_inode(folio);
-	loff_t i_size = i_size_read(inode);
+	loff_t i_size = i_size_read(inode), low, high;
 	size_t offset = offset_in_folio(folio, pos);
 	size_t plen = folio_size(folio);
+	size_t min_bsize = 1UL << ctx->min_bshift;
+
+	if (likely(min_bsize == 1)) {
+		low = folio_file_pos(folio);
+		high = low + plen;
+	} else {
+		low = round_down(pos, min_bsize);
+		high = round_up(pos + len, min_bsize);
+	}

 	if (unlikely(always_fill)) {
-		if (pos - offset + len <= i_size)
-			return false; /* Page entirely before EOF */
+		if (low < i_size)
+			return false; /* Some part of the block before EOF */
 		zero_user_segment(&folio->page, 0, plen);
 		folio_mark_uptodate(folio);
 		return true;
 	}

-	/* Full folio write */
-	if (offset == 0 && len >= plen)
+	/* Full page write */
+	if (pos == low && high == pos + len)
 		return true;

-	/* Page entirely beyond the end of the file */
-	if (pos - offset >= i_size)
+	/* pos beyond last page in the file */
+	if (low >= i_size)
 		goto zero_out;

 	/* Write that covers from the start of the folio to EOF or beyond */
-	if (offset == 0 && (pos + len) >= i_size)
+	if (pos == low && (pos + len) >= i_size)
 		goto zero_out;

 	return false;
@@ -454,7 +464,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
	 * to preload the granule.
	 */
	if (!netfs_is_cache_enabled(ctx) &&
-	    netfs_skip_folio_read(folio, pos, len, false)) {
+	    netfs_skip_folio_read(ctx, folio, pos, len, false)) {
		netfs_stat(&netfs_n_rh_write_zskip);
		goto have_folio_no_wait;
	}

From patchwork Fri Oct 13 16:04:04 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421179
From: David Howells
To: Jeff Layton, Steve French
Subject: [RFC PATCH 35/53] netfs: Perform content encryption
Date: Fri, 13 Oct 2023 17:04:04 +0100
Message-ID: <20231013160423.2218093-36-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
When dealing with an encrypted file, we gather together sufficient pages from the pagecache to constitute a logical crypto block, allocate a bounce buffer and then ask the filesystem to encrypt between the buffers. The bounce buffer is then passed to the filesystem to upload.

The network filesystem must set a flag to indicate what service is desired and what the logical blocksize will be.

The netfs library iterates through each block to be processed, providing a pair of scatterlists to describe the start and end buffers.
Note that it should be possible in future to encrypt DIO writes also by this same mechanism.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/Makefile            |  1 +
 fs/netfs/buffered_write.c    |  3 +-
 fs/netfs/crypto.c            | 89 ++++++++++++++++++++++++++++++++++++
 fs/netfs/internal.h          |  5 ++
 fs/netfs/objects.c           |  2 +
 fs/netfs/output.c            |  7 ++-
 include/linux/netfs.h        | 11 +++++
 include/trace/events/netfs.h |  2 +
 8 files changed, 118 insertions(+), 2 deletions(-)
 create mode 100644 fs/netfs/crypto.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index d5c2809fc029..5ea852ac276c 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -3,6 +3,7 @@
 netfs-y := \
	buffered_read.o \
	buffered_write.o \
+	crypto.o \
	direct_read.o \
	direct_write.o \
	io.o \

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 7163fcc05206..b81d807f89f0 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -77,7 +77,8 @@ static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx,
	if (!maybe_trouble && offset == 0 && len >= flen)
		return NETFS_WHOLE_FOLIO_MODIFY;

-	if (file->f_mode & FMODE_READ)
+	if (file->f_mode & FMODE_READ ||
+	    test_bit(NETFS_ICTX_ENCRYPTED, &ctx->flags))
		return NETFS_JUST_PREFETCH;

	if (netfs_is_cache_enabled(ctx) ||

diff --git a/fs/netfs/crypto.c b/fs/netfs/crypto.c
new file mode 100644
index 000000000000..943d01f430e2
--- /dev/null
+++ b/fs/netfs/crypto.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Network filesystem content encryption support.
+ *
+ * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include "internal.h"
+
+/*
+ * Populate a scatterlist from the next bufferage of an I/O iterator.
+ */
+static int netfs_iter_to_sglist(const struct iov_iter *iter, size_t len,
+				struct scatterlist *sg, unsigned int n_sg)
+{
+	struct iov_iter tmp_iter = *iter;
+	struct sg_table sgtable = { .sgl = sg };
+	ssize_t ret;
+
+	_enter("%zx/%zx", len, iov_iter_count(iter));
+
+	sg_init_table(sg, n_sg);
+	ret = extract_iter_to_sg(&tmp_iter, len, &sgtable, n_sg, 0);
+	if (ret < 0)
+		return ret;
+	sg_mark_end(&sg[sgtable.nents - 1]);
+	return sgtable.nents;
+}
+
+/*
+ * Prepare a write request for writing.  We encrypt in/into the bounce buffer.
+ */
+bool netfs_encrypt(struct netfs_io_request *wreq)
+{
+	struct netfs_inode *ctx = netfs_inode(wreq->inode);
+	struct scatterlist source_sg[16], dest_sg[16];
+	unsigned int n_dest;
+	size_t n, chunk, bsize = 1UL << ctx->crypto_bshift;
+	loff_t pos;
+	int ret;
+
+	_enter("");
+
+	trace_netfs_rreq(wreq, netfs_rreq_trace_encrypt);
+
+	pos = wreq->start;
+	n = wreq->len;
+	_debug("ENCRYPT %llx-%llx", pos, pos + n - 1);
+
+	for (; n > 0; n -= chunk, pos += chunk) {
+		chunk = min(n, bsize);
+
+		ret = netfs_iter_to_sglist(&wreq->io_iter, chunk,
+					   dest_sg, ARRAY_SIZE(dest_sg));
+		if (ret < 0)
+			goto error;
+		n_dest = ret;
+
+		if (test_bit(NETFS_RREQ_CRYPT_IN_PLACE, &wreq->flags)) {
+			ret = ctx->ops->encrypt_block(wreq, pos, chunk,
+						      dest_sg, n_dest,
+						      dest_sg, n_dest);
+		} else {
+			ret = netfs_iter_to_sglist(&wreq->iter, chunk,
+						   source_sg, ARRAY_SIZE(source_sg));
+			if (ret < 0)
+				goto error;
+			ret = ctx->ops->encrypt_block(wreq, pos, chunk,
+						      source_sg, ret,
+						      dest_sg, n_dest);
+		}
+
+		if (ret < 0)
+			goto error_failed;
+	}
+
+	return true;
+
+error_failed:
+	trace_netfs_failure(wreq, NULL, ret, netfs_fail_encryption);
+error:
+	wreq->error = ret;
+	return false;
+}

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 6a67abdf71c8..3f4e64968623 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -22,6 +22,11 @@
 void netfs_rreq_unlock_folios(struct netfs_io_request *rreq);
 int netfs_prefetch_for_write(struct file *file, struct folio *folio,
			      size_t offset, size_t len);

+/*
+ * crypto.c
+ */
+bool netfs_encrypt(struct netfs_io_request *wreq);
+
 /*
  * direct_write.c
  */

diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index c1218b183197..6bf3b3f51499 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -44,6 +44,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
	refcount_set(&rreq->ref, 1);
	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+	if (test_bit(NETFS_ICTX_ENCRYPTED, &ctx->flags))
+		__set_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags);
	if (cached)
		__set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
	if (file && file->f_flags & O_NONBLOCK)

diff --git a/fs/netfs/output.c b/fs/netfs/output.c
index bb42789c7a24..2d2530dc9507 100644
--- a/fs/netfs/output.c
+++ b/fs/netfs/output.c
@@ -366,7 +366,11 @@ int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
	 * background whilst we generate a list of write ops that we want to
	 * perform.
	 */
-	// TODO: Encrypt or compress the region as appropriate
+	if (test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &wreq->flags) &&
+	    !netfs_encrypt(wreq)) {
+		may_wait = true;
+		goto out;
+	}

	/* We need to write all of the region to the cache */
	if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags))
@@ -378,6 +382,7 @@ int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
	if (test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
		ctx->ops->create_write_requests(wreq, wreq->start, wreq->len);

+out:
	if (atomic_dec_and_test(&wreq->nr_outstanding))
		netfs_write_terminated(wreq, false);

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 6244f7a9a44a..cdb471938225 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -19,6 +19,7 @@
 #include
 #include

+struct scatterlist;
 enum netfs_sreq_ref_trace;

 /*
@@ -141,7 +142,9 @@ struct netfs_inode {
	unsigned long		flags;
#define NETFS_ICTX_ODIRECT	0	/* The file has DIO in progress */
#define NETFS_ICTX_UNBUFFERED	1	/* I/O should not use the pagecache */
+#define NETFS_ICTX_ENCRYPTED	2	/* The file contents are encrypted */
	unsigned char		min_bshift;	/* log2 min block size for bounding box or 0 */
+	unsigned char		crypto_bshift;	/* log2 of crypto block size */
 };

@@ -285,6 +288,8 @@ struct netfs_io_request {
#define NETFS_RREQ_USE_BOUNCE_BUFFER	8	/* Use bounce buffer */
#define NETFS_RREQ_WRITE_TO_CACHE	9	/* Need to write to the cache */
#define NETFS_RREQ_UPLOAD_TO_SERVER	10	/* Need to write to the server */
+#define NETFS_RREQ_CONTENT_ENCRYPTION	11	/* Content encryption is in use */
+#define NETFS_RREQ_CRYPT_IN_PLACE	12	/* Enc/dec in place in ->io_iter */
	const struct netfs_request_ops *netfs_ops;
	void (*cleanup)(struct netfs_io_request *req);
};

@@ -316,6 +321,11 @@ struct netfs_request_ops {
	void (*create_write_requests)(struct netfs_io_request *wreq,
				      loff_t start, size_t len);
	void (*invalidate_cache)(struct netfs_io_request *wreq);
+
+	/* Content encryption */
+	int (*encrypt_block)(struct netfs_io_request *wreq, loff_t pos, size_t len,
+			     struct scatterlist *source_sg, unsigned int n_source,
+			     struct scatterlist *dest_sg, unsigned int n_dest);
};

@@ -464,6 +474,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
	ctx->zero_point = ctx->remote_i_size;
	ctx->flags = 0;
	ctx->min_bshift = 0;
+	ctx->crypto_bshift = 0;
#if IS_ENABLED(CONFIG_FSCACHE)
	ctx->cache = NULL;
#endif

diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 60f98c99fe21..70e2f9a48f24 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -41,6 +41,7 @@
	EM(netfs_rreq_trace_assess,		"ASSESS ")	\
	EM(netfs_rreq_trace_copy,		"COPY   ")	\
	EM(netfs_rreq_trace_done,		"DONE   ")	\
+	EM(netfs_rreq_trace_encrypt,		"ENCRYPT")	\
	EM(netfs_rreq_trace_free,		"FREE   ")	\
	EM(netfs_rreq_trace_redirty,		"REDIRTY")	\
	EM(netfs_rreq_trace_resubmit,		"RESUBMT")	\
@@ -76,6 +77,7 @@
	EM(netfs_fail_copy_to_cache,		"copy-to-cache")	\
	EM(netfs_fail_dio_read_short,		"dio-read-short")	\
	EM(netfs_fail_dio_read_zero,		"dio-read-zero")	\
+	EM(netfs_fail_encryption,		"encryption")		\
	EM(netfs_fail_read,			"read")			\
	EM(netfs_fail_short_read,		"short-read")		\
	EM(netfs_fail_prepare_write,		"prep-write")		\

From patchwork Fri Oct 13 16:04:05 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421181
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 36/53] netfs: Decrypt encrypted content
Date: Fri, 13 Oct 2023 17:04:05 +0100
Message-ID: <20231013160423.2218093-37-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Implement a facility to provide decryption for encrypted content to a whole
read-request in one go (which might have been stitched together from
disparate sources with divisions that don't match page boundaries).

Note that this doesn't necessarily gain the best throughput if the crypto
block size is equal to or less than the size of a page (in which case we
might do better to decrypt pages as they are read), but it will handle
crypto blocks larger than the size of a page.
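The whole-request decryption described above boils down to walking the buffer in crypto-block-sized chunks, clamped to i_size. The sketch below mirrors that loop shape in plain userspace C; `toy_decrypt_block()` is an assumed stand-in for a filesystem's `->decrypt_block()` op and flat byte arrays stand in for the request iterators — only the clamping and the `1 << crypto_bshift` chunking come from the patch.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for ->decrypt_block(): XOR every byte with a fixed key. */
static void toy_decrypt_block(unsigned char *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		buf[i] ^= 0x5a;
}

/*
 * Walk a read buffer in crypto-block-sized chunks, clamping the total to
 * i_size, mirroring the loop in netfs_decrypt().  bshift is log2 of the
 * crypto block size, as in ctx->crypto_bshift.  Returns bytes processed.
 */
static size_t decrypt_buffer(unsigned char *buf, size_t buf_len,
			     size_t start, size_t i_size,
			     unsigned int bshift)
{
	size_t bsize = (size_t)1 << bshift;
	size_t n, chunk, done = 0;

	if (start >= i_size)		/* nothing of the file in the buffer */
		return 0;
	n = buf_len < i_size - start ? buf_len : i_size - start;

	for (; n > 0; n -= chunk, done += chunk) {
		chunk = n < bsize ? n : bsize;	/* final chunk may be short */
		toy_decrypt_block(buf + done, chunk);
	}
	return done;
}
```

With a 4-byte block (`bshift = 2`), a 10-byte file is processed as chunks of 4, 4 and 2, and a buffer starting beyond i_size is left untouched.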
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/crypto.c            | 59 ++++++++++++++++++++++++++++++++++++
 fs/netfs/internal.h          |  1 +
 fs/netfs/io.c                |  6 +++-
 include/linux/netfs.h        |  3 ++
 include/trace/events/netfs.h |  2 ++
 5 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/crypto.c b/fs/netfs/crypto.c
index 943d01f430e2..6729bcda4f47 100644
--- a/fs/netfs/crypto.c
+++ b/fs/netfs/crypto.c
@@ -87,3 +87,62 @@ bool netfs_encrypt(struct netfs_io_request *wreq)
 	wreq->error = ret;
 	return false;
 }
+
+/*
+ * Decrypt the result of a read request.
+ */
+void netfs_decrypt(struct netfs_io_request *rreq)
+{
+	struct netfs_inode *ctx = netfs_inode(rreq->inode);
+	struct scatterlist source_sg[16], dest_sg[16];
+	unsigned int n_source;
+	size_t n, chunk, bsize = 1UL << ctx->crypto_bshift;
+	loff_t pos;
+	int ret;
+
+	trace_netfs_rreq(rreq, netfs_rreq_trace_decrypt);
+	if (rreq->start >= rreq->i_size)
+		return;
+
+	n = min_t(unsigned long long, rreq->len, rreq->i_size - rreq->start);
+
+	_debug("DECRYPT %llx-%llx f=%lx",
+	       rreq->start, rreq->start + n, rreq->flags);
+
+	pos = rreq->start;
+	for (; n > 0; n -= chunk, pos += chunk) {
+		chunk = min(n, bsize);
+
+		ret = netfs_iter_to_sglist(&rreq->io_iter, chunk,
+					   source_sg, ARRAY_SIZE(source_sg));
+		if (ret < 0)
+			goto error;
+		n_source = ret;
+
+		if (test_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags)) {
+			ret = ctx->ops->decrypt_block(rreq, pos, chunk,
+						      source_sg, n_source,
+						      source_sg, n_source);
+		} else {
+			ret = netfs_iter_to_sglist(&rreq->iter, chunk,
+						   dest_sg, ARRAY_SIZE(dest_sg));
+			if (ret < 0)
+				goto error;
+			ret = ctx->ops->decrypt_block(rreq, pos, chunk,
+						      source_sg, n_source,
+						      dest_sg, ret);
+		}
+
+		if (ret < 0)
+			goto error_failed;
+	}
+
+	return;
+
+error_failed:
+	trace_netfs_failure(rreq, NULL, ret, netfs_fail_decryption);
+error:
+	rreq->error = ret;
+	set_bit(NETFS_RREQ_FAILED, &rreq->flags);
+	return;
+}
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 3f4e64968623..8dc68a75d6cd 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -26,6 +26,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
  * crypto.c
  */
 bool netfs_encrypt(struct netfs_io_request *wreq);
+void netfs_decrypt(struct netfs_io_request *rreq);

 /*
  * direct_write.c
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index 36a3f720193a..9887b22e4cb3 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -398,6 +398,9 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
 		return;
 	}

+	if (!test_bit(NETFS_RREQ_FAILED, &rreq->flags) &&
+	    test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags))
+		netfs_decrypt(rreq);
 	if (rreq->origin != NETFS_DIO_READ)
 		netfs_rreq_unlock_folios(rreq);
 	else
@@ -427,7 +430,8 @@ static void netfs_rreq_work(struct work_struct *work)
 static void netfs_rreq_terminated(struct netfs_io_request *rreq,
 				  bool was_async)
 {
-	if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) &&
+	if ((test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) ||
+	     test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags)) &&
 	    was_async) {
 		if (!queue_work(system_unbound_wq, &rreq->work))
 			BUG();
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index cdb471938225..524e6f5ff3fd 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -326,6 +326,9 @@ struct netfs_request_ops {
 	int (*encrypt_block)(struct netfs_io_request *wreq, loff_t pos, size_t len,
 			     struct scatterlist *source_sg, unsigned int n_source,
 			     struct scatterlist *dest_sg, unsigned int n_dest);
+	int (*decrypt_block)(struct netfs_io_request *rreq, loff_t pos, size_t len,
+			     struct scatterlist *source_sg, unsigned int n_source,
+			     struct scatterlist *dest_sg, unsigned int n_dest);
 };

 /*
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 70e2f9a48f24..2f35057602fa 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -40,6 +40,7 @@
 #define netfs_rreq_traces					\
 	EM(netfs_rreq_trace_assess,		"ASSESS ")	\
 	EM(netfs_rreq_trace_copy,		"COPY   ")	\
+	EM(netfs_rreq_trace_decrypt,		"DECRYPT")	\
 	EM(netfs_rreq_trace_done,		"DONE   ")	\
 	EM(netfs_rreq_trace_encrypt,		"ENCRYPT")	\
 	EM(netfs_rreq_trace_free,		"FREE   ")	\
@@ -75,6 +76,7 @@
 #define netfs_failures						\
 	EM(netfs_fail_check_write_begin,	"check-write-begin")	\
 	EM(netfs_fail_copy_to_cache,		"copy-to-cache")	\
+	EM(netfs_fail_decryption,		"decryption")		\
 	EM(netfs_fail_dio_read_short,		"dio-read-short")	\
 	EM(netfs_fail_dio_read_zero,		"dio-read-zero")	\
 	EM(netfs_fail_encryption,		"encryption")		\

From patchwork Fri Oct 13 16:04:06 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421177
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 37/53] netfs: Support decryption on unbuffered/DIO read
Date: Fri, 13 Oct 2023 17:04:06 +0100
Message-ID: <20231013160423.2218093-38-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Support unbuffered and direct I/O reads from an encrypted file.  This may
require making a larger read than is required into a bounce buffer and
copying out the required bits.  We don't decrypt in place in the user
buffer lest userspace interfere and muck up the decryption.
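The "larger read than is required" amounts to rounding the requested extent out to crypto-block boundaries and remembering how far into the bounce buffer the caller's data starts. A minimal sketch of that arithmetic follows; `struct bounce_extent` and `expand_for_crypto()` are illustrative names, not part of the patch.

```c
#include <assert.h>

/* Illustrative descriptor for an expanded bounce-buffer read. */
struct bounce_extent {
	unsigned long long bstart;	/* rounded-down start of the read */
	unsigned long long blen;	/* rounded-up length to read */
	unsigned long long skip;	/* bytes to skip when copying out */
};

/*
 * Expand a requested [pos, pos + count) extent to whole crypto blocks,
 * where crypto_bshift is log2 of the crypto block size (as in
 * ctx->crypto_bshift).
 */
static struct bounce_extent expand_for_crypto(unsigned long long pos,
					      unsigned long long count,
					      unsigned int crypto_bshift)
{
	unsigned long long bsize = 1ULL << crypto_bshift;
	unsigned long long mask = bsize - 1;
	struct bounce_extent e;

	e.bstart = pos & ~mask;				/* round down */
	e.blen = ((pos + count + mask) & ~mask) - e.bstart; /* round up */
	e.skip = pos - e.bstart;
	return e;
}
```

A 16-byte read at offset 0x1003 with 4KiB crypto blocks becomes a 4KiB bounce read at 0x1000, from which the 16 wanted bytes are copied out at offset 3.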
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/direct_read.c | 10 ++++++++++
 fs/netfs/internal.h    | 17 +++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 52ad8fa66dd5..158719b56900 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -181,6 +181,16 @@ static ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_
 		iov_iter_advance(iter, orig_count);
 	}

+	/* If we're going to do decryption or decompression, we're going to
+	 * need a bounce buffer - and if the data is misaligned for the crypto
+	 * algorithm, we decrypt in place and then copy.
+	 */
+	if (test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags)) {
+		if (!netfs_is_crypto_aligned(rreq, iter))
+			__set_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags);
+		__set_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags);
+	}
+
 	/* If we're going to use a bounce buffer, we need to set it up.  We
 	 * will then need to pad the request out to the minimum block size.
 	 */
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 8dc68a75d6cd..7dd37d3aff3f 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -196,6 +196,23 @@ static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
 		netfs_group->free(netfs_group);
 }

+/*
+ * Check to see if a buffer aligns with the crypto unit block size.  If it
+ * doesn't the crypto layer is going to copy all the data - in which case
+ * relying on the crypto op for a free copy is pointless.
+ */
+static inline bool netfs_is_crypto_aligned(struct netfs_io_request *rreq,
+					   struct iov_iter *iter)
+{
+	struct netfs_inode *ctx = netfs_inode(rreq->inode);
+	unsigned long align, mask = (1UL << ctx->min_bshift) - 1;
+
+	if (!ctx->min_bshift)
+		return true;
+	align = iov_iter_alignment(iter);
+	return (align & mask) == 0;
+}
+
 /*****************************************************************************/
 /*
  * debug tracing

From patchwork Fri Oct 13 16:04:07 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421178
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 38/53] netfs: Support encryption on Unbuffered/DIO write
Date: Fri, 13 Oct 2023 17:04:07 +0100
Message-ID: <20231013160423.2218093-39-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Support unbuffered and direct I/O writes to an encrypted file.  This may
require making an RMW cycle if the write is not appropriately aligned with
respect to the crypto blocks.
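Whether an RMW cycle is needed comes down to two mask computations on the minimum block size. `compute_gaps()` below is an illustrative wrapper around the two expressions the patch uses (`start & bmask` and `(min_bsize - end) & bmask`), not a kernel function; a write needs RMW only when either gap is nonzero.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Compute the head and tail gaps an unaligned unbuffered write leaves
 * within its surrounding minimum-size blocks, mirroring the setup in
 * netfs_unbuffered_write_iter_locked().  min_bshift is log2 of the
 * minimum block size, as in ctx->min_bshift; [start, end) is the write.
 */
static void compute_gaps(unsigned long long start, unsigned long long end,
			 unsigned int min_bshift,
			 size_t *gap_before, size_t *gap_after)
{
	size_t min_bsize = (size_t)1 << min_bshift;
	size_t bmask = min_bsize - 1;

	*gap_before = start & bmask;		   /* bytes below start */
	*gap_after = (min_bsize - end) & bmask;	   /* bytes above end  */
}
```

For example, with 512-byte blocks a write of [100, 700) has a 100-byte head gap and a 324-byte tail gap, so the padded extent [0, 1024) covers exactly two blocks; a block-aligned write has both gaps zero and skips the RMW path.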
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/direct_read.c       |   2 +-
 fs/netfs/direct_write.c      | 210 ++++++++++++++++++++++++++++++++++-
 fs/netfs/internal.h          |   8 ++
 fs/netfs/io.c                | 117 +++++++++++++++++++
 fs/netfs/main.c              |   1 +
 include/linux/netfs.h        |   4 +
 include/trace/events/netfs.h |   1 +
 7 files changed, 337 insertions(+), 6 deletions(-)

diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 158719b56900..c01cbe42db8a 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -88,7 +88,7 @@ static int netfs_copy_xarray_to_iter(struct netfs_io_request *rreq,
  * If we did a direct read to a bounce buffer (say we needed to decrypt it),
  * copy the data obtained to the destination iterator.
  */
-static int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq)
+int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq)
 {
 	struct iov_iter *dest_iter = &rreq->iter;
 	struct kiocb *iocb = rreq->iocb;
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index b1a4921ac4a2..f9dea801d6dd 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -23,6 +23,100 @@ static void netfs_cleanup_dio_write(struct netfs_io_request *wreq)
 	}
 }

+/*
+ * Allocate a bunch of pages and add them into the xarray buffer starting at
+ * the given index.
+ */
+static int netfs_alloc_buffer(struct xarray *xa, pgoff_t index, unsigned int nr_pages)
+{
+	struct page *page;
+	unsigned int n;
+	int ret = 0;
+	LIST_HEAD(list);
+
+	n = alloc_pages_bulk_list(GFP_NOIO, nr_pages, &list);
+
+	while ((page = list_first_entry_or_null(&list, struct page, lru))) {
+		list_del(&page->lru);
+		page->index = index;
+		ret = xa_insert(xa, index++, page, GFP_NOIO);
+		if (ret < 0)
+			break;
+	}
+
+	while ((page = list_first_entry_or_null(&list, struct page, lru))) {
+		list_del(&page->lru);
+		__free_page(page);
+	}
+
+	/* A short bulk allocation must fail the request. */
+	if (ret == 0 && n < nr_pages)
+		ret = -ENOMEM;
+	return ret;
+}
+
+/*
+ * Copy all of the data from the source iterator into folios in the destination
+ * xarray.  We cannot step through and kmap the source iterator if it's an
+ * iovec, so we have to step through the xarray and drop the RCU lock each
+ * time.
+ */
+static int netfs_copy_iter_to_xarray(struct iov_iter *src, struct xarray *xa,
+				     unsigned long long start)
+{
+	struct folio *folio;
+	void *base;
+	pgoff_t index = start / PAGE_SIZE;
+	size_t len, copied, count = iov_iter_count(src);
+
+	XA_STATE(xas, xa, index);
+
+	_enter("%zx", count);
+
+	if (!count)
+		return -EIO;
+
+	len = PAGE_SIZE - offset_in_page(start);
+	rcu_read_lock();
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		size_t offset;
+
+		if (xas_retry(&xas, folio))
+			continue;
+
+		/* There shouldn't be a need to call xas_pause() as no one else
+		 * can see the xarray we're iterating over.
+		 */
+		rcu_read_unlock();
+
+		offset = offset_in_folio(folio, start);
+		_debug("folio %lx +%zx [%llx]", folio->index, offset, start);
+
+		while (offset < folio_size(folio)) {
+			len = min(count, len);
+
+			base = kmap_local_folio(folio, offset);
+			copied = copy_from_iter(base, len, src);
+			kunmap_local(base);
+			if (copied != len)
+				goto out;
+			count -= len;
+			if (count == 0)
+				goto out;
+
+			start += len;
+			offset += len;
+			len = PAGE_SIZE;
+		}
+
+		rcu_read_lock();
+	}
+
+	rcu_read_unlock();
+out:
+	_leave(" = %zx", count);
+	return count ? -EIO : 0;
+}
+
 /*
  * Perform an unbuffered write where we may have to do an RMW operation on an
  * encrypted file.  This can also be used for direct I/O writes.
@@ -31,20 +125,47 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
 					   struct netfs_group *netfs_group)
 {
 	struct netfs_io_request *wreq;
+	struct netfs_inode *ctx = netfs_inode(file_inode(iocb->ki_filp));
+	unsigned long long real_size = ctx->remote_i_size;
 	unsigned long long start = iocb->ki_pos;
 	unsigned long long end = start + iov_iter_count(iter);
 	ssize_t ret, n;
-	bool async = !is_sync_kiocb(iocb);
+	size_t min_bsize = 1UL << ctx->min_bshift;
+	size_t bmask = min_bsize - 1;
+	size_t gap_before = start & bmask;
+	size_t gap_after = (min_bsize - end) & bmask;
+	bool use_bounce, async = !is_sync_kiocb(iocb);
+	enum {
+		DIRECT_IO, COPY_TO_BOUNCE, ENC_TO_BOUNCE, COPY_THEN_ENC,
+	} buffering;

 	_enter("");

+	/* The real size must be rounded out to the crypto block size plus
+	 * any trailer we might want to attach.
+	 */
+	if (real_size && ctx->crypto_bshift) {
+		size_t cmask = (1UL << ctx->crypto_bshift) - 1;
+
+		if (real_size < ctx->crypto_trailer)
+			return -EIO;
+		if ((real_size - ctx->crypto_trailer) & cmask)
+			return -EIO;
+		real_size -= ctx->crypto_trailer;
+	}
+
 	/* We're going to need a bounce buffer if what we transmit is going to
 	 * be different in some way to the source buffer, e.g.
	 * because it gets encrypted/compressed or because it needs expanding
 	 * to a block size.
 	 */
-	// TODO
+	use_bounce = test_bit(NETFS_ICTX_ENCRYPTED, &ctx->flags);
+	if (gap_before || gap_after) {
+		if (iocb->ki_flags & IOCB_DIRECT)
+			return -EINVAL;
+		use_bounce = true;
+	}

-	_debug("uw %llx-%llx", start, end);
+	_debug("uw %llx-%llx +%zx,%zx", start, end, gap_before, gap_after);

 	wreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
 				   start, end - start,
@@ -53,7 +174,57 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
 	if (IS_ERR(wreq))
 		return PTR_ERR(wreq);

-	{
+	if (use_bounce) {
+		unsigned long long bstart = start - gap_before;
+		unsigned long long bend = end + gap_after;
+		pgoff_t first = bstart / PAGE_SIZE;
+		pgoff_t last = (bend - 1) / PAGE_SIZE;
+
+		_debug("bounce %llx-%llx %lx-%lx", bstart, bend, first, last);
+
+		ret = netfs_alloc_buffer(&wreq->bounce, first, last - first + 1);
+		if (ret < 0)
+			goto out;
+
+		iov_iter_xarray(&wreq->io_iter, READ, &wreq->bounce,
+				bstart, bend - bstart);
+
+		if (gap_before || gap_after)
+			async = false; /* We may have to repeat the RMW cycle */
+	}
+
+repeat_rmw_cycle:
+	if (use_bounce) {
+		/* If we're going to need to do an RMW cycle, fill in the gaps
+		 * at the ends of the buffer.
+		 */
+		if (gap_before || gap_after) {
+			struct iov_iter buffer = wreq->io_iter;
+
+			if ((gap_before && start - gap_before < real_size) ||
+			    (gap_after && end < real_size)) {
+				ret = netfs_rmw_read(wreq, iocb->ki_filp,
+						     start - gap_before, gap_before,
+						     end, end < real_size ?
gap_after : 0); + if (ret < 0) + goto out; + } + + if (gap_before && start - gap_before >= real_size) + iov_iter_zero(gap_before, &buffer); + if (gap_after && end >= real_size) { + iov_iter_advance(&buffer, end - start); + iov_iter_zero(gap_after, &buffer); + } + } + + if (!test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &wreq->flags)) + buffering = COPY_TO_BOUNCE; + else if (!gap_before && !gap_after && netfs_is_crypto_aligned(wreq, iter)) + buffering = ENC_TO_BOUNCE; + else + buffering = COPY_THEN_ENC; + } else { /* If this is an async op and we're not using a bounce buffer, * we have to save the source buffer as the iterator is only * good until we return. In such a case, extract an iterator @@ -77,10 +248,25 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter * } wreq->io_iter = wreq->iter; + buffering = DIRECT_IO; } /* Copy the data into the bounce buffer and encrypt it. */ - // TODO + if (buffering == COPY_TO_BOUNCE || + buffering == COPY_THEN_ENC) { + ret = netfs_copy_iter_to_xarray(iter, &wreq->bounce, wreq->start); + if (ret < 0) + goto out; + wreq->iter = wreq->io_iter; + wreq->start -= gap_before; + wreq->len += gap_before + gap_after; + } + + if (buffering == COPY_THEN_ENC || + buffering == ENC_TO_BOUNCE) { + if (!netfs_encrypt(wreq)) + goto out; + } /* Dispatch the write. */ __set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); @@ -101,6 +287,20 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter * wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE); + /* See if the write failed due to a 3rd party race when doing + * an RMW on a partially modified block in an encrypted file. 
+ */ + if (test_and_clear_bit(NETFS_RREQ_REPEAT_RMW, &wreq->flags)) { + netfs_clear_subrequests(wreq, false); + iov_iter_revert(iter, end - start); + wreq->error = 0; + wreq->start = start; + wreq->len = end - start; + wreq->transferred = 0; + wreq->submitted = 0; + goto repeat_rmw_cycle; + } + ret = wreq->error; _debug("waited = %zd", ret); if (ret == 0) { diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 7dd37d3aff3f..a25adbe7ec72 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -28,6 +28,11 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio, bool netfs_encrypt(struct netfs_io_request *wreq); void netfs_decrypt(struct netfs_io_request *rreq); +/* + * direct_read.c + */ +int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq); + /* * direct_write.c */ @@ -38,6 +43,9 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter * * io.c */ int netfs_begin_read(struct netfs_io_request *rreq, bool sync); +ssize_t netfs_rmw_read(struct netfs_io_request *wreq, struct file *file, + unsigned long long start1, size_t len1, + unsigned long long start2, size_t len2); /* * main.c diff --git a/fs/netfs/io.c b/fs/netfs/io.c index 9887b22e4cb3..14a9f3312d3b 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -775,3 +775,120 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync) out: return ret; } + +static bool netfs_rmw_read_one(struct netfs_io_request *rreq, + unsigned long long start, size_t len) +{ + struct netfs_inode *ctx = netfs_inode(rreq->inode); + struct iov_iter io_iter; + unsigned long long pstart, end = start + len; + pgoff_t first, last; + ssize_t ret; + size_t min_bsize = 1UL << ctx->min_bshift; + + /* Determine the block we need to load. */ + end = round_up(end, min_bsize); + start = round_down(start, min_bsize); + + /* Determine the folios we need to insert. 
*/ + pstart = round_down(start, PAGE_SIZE); + first = pstart / PAGE_SIZE; + last = DIV_ROUND_UP(end, PAGE_SIZE); + + ret = netfs_add_folios_to_buffer(&rreq->bounce, rreq->mapping, + first, last, GFP_NOFS); + if (ret < 0) { + rreq->error = ret; + return false; + } + + rreq->start = start; + rreq->len = len; + rreq->submitted = 0; + iov_iter_xarray(&rreq->io_iter, ITER_DEST, &rreq->bounce, start, len); + + io_iter = rreq->io_iter; + do { + _debug("submit %llx + %zx >= %llx", + rreq->start, rreq->submitted, rreq->i_size); + if (rreq->start + rreq->submitted >= rreq->i_size) + break; + if (!netfs_rreq_submit_slice(rreq, &io_iter, &rreq->subreq_counter)) + break; + } while (rreq->submitted < rreq->len); + + if (rreq->submitted < rreq->len) { + netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); + return false; + } + + return true; +} + +/* + * Begin the process of reading in one or two chunks of data for use by + * unbuffered write to perform an RMW cycle. We don't read directly into the + * write buffer as this may get called to redo the read in the case that a + * conditional write fails due to conflicting 3rd-party modifications. + */ +ssize_t netfs_rmw_read(struct netfs_io_request *wreq, struct file *file, + unsigned long long start1, size_t len1, + unsigned long long start2, size_t len2) +{ + struct netfs_io_request *rreq; + ssize_t ret; + + rreq = netfs_alloc_request(wreq->mapping, file, + start1, start2 - start1 + len2, NETFS_RMW_READ); + if (IS_ERR(rreq)) + return PTR_ERR(rreq); + + _enter("RMW:R=%x %llx-%llx %llx-%llx", + rreq->debug_id, start1, start1 + len1 - 1, start2, start2 + len2 - 1); + + INIT_WORK(&rreq->work, netfs_rreq_work); + + rreq->iter = wreq->io_iter; + __set_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags); + __set_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags); + + /* Chop the reads into slices according to what the netfs wants and + * submit each one.
+ */ + netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding); + atomic_set(&rreq->nr_outstanding, 1); + if (len1 && !netfs_rmw_read_one(rreq, start1, len1)) + goto wait; + if (len2) + netfs_rmw_read_one(rreq, start2, len2); + +wait: + /* Keep nr_outstanding incremented so that the ref always belongs to us + * and the service code isn't punted off to a random thread pool to + * process. + */ + for (;;) { + wait_var_event(&rreq->nr_outstanding, + atomic_read(&rreq->nr_outstanding) == 1); + netfs_rreq_assess(rreq, false); + if (atomic_read(&rreq->nr_outstanding) == 1) + break; + cond_resched(); + } + + trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip); + wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, + TASK_UNINTERRUPTIBLE); + + ret = rreq->error; + if (ret == 0 && rreq->submitted < rreq->len) { + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); + ret = -EIO; + } + + if (ret == 0) + ret = netfs_dio_copy_bounce_to_dest(rreq); + + netfs_put_request(rreq, false, netfs_rreq_trace_put_return); + return ret; +} diff --git a/fs/netfs/main.c b/fs/netfs/main.c index 1cf10f9c4c1f..b335e6a50f9c 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = { [NETFS_READPAGE] = "RP", [NETFS_READ_FOR_WRITE] = "RW", [NETFS_WRITEBACK] = "WB", + [NETFS_RMW_READ] = "RM", [NETFS_UNBUFFERED_WRITE] = "UW", [NETFS_DIO_READ] = "DR", [NETFS_DIO_WRITE] = "DW", diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 524e6f5ff3fd..9661ae24120f 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -145,6 +145,7 @@ struct netfs_inode { #define NETFS_ICTX_ENCRYPTED 2 /* The file contents are encrypted */ unsigned char min_bshift; /* log2 min block size for bounding box or 0 */ unsigned char crypto_bshift; /* log2 of crypto block size */ + unsigned char crypto_trailer; /* Size of crypto trailer */ }; /* @@ -233,6 +234,7 @@ enum netfs_io_origin { NETFS_READPAGE, /* This read is a synchronous 
read */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ NETFS_WRITEBACK, /* This write was triggered by writepages */ + NETFS_RMW_READ, /* This is an unbuffered read for RMW */ NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ NETFS_DIO_READ, /* This is a direct I/O read */ NETFS_DIO_WRITE, /* This is a direct I/O write */ @@ -290,6 +292,7 @@ struct netfs_io_request { #define NETFS_RREQ_UPLOAD_TO_SERVER 10 /* Need to write to the server */ #define NETFS_RREQ_CONTENT_ENCRYPTION 11 /* Content encryption is in use */ #define NETFS_RREQ_CRYPT_IN_PLACE 12 /* Enc/dec in place in ->io_iter */ +#define NETFS_RREQ_REPEAT_RMW 13 /* Need to repeat RMW cycle */ const struct netfs_request_ops *netfs_ops; void (*cleanup)(struct netfs_io_request *req); }; @@ -478,6 +481,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx, ctx->flags = 0; ctx->min_bshift = 0; ctx->crypto_bshift = 0; + ctx->crypto_trailer = 0; #if IS_ENABLED(CONFIG_FSCACHE) ctx->cache = NULL; #endif diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 2f35057602fa..825946f510ee 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -33,6 +33,7 @@ EM(NETFS_READPAGE, "RP") \ EM(NETFS_READ_FOR_WRITE, "RW") \ EM(NETFS_WRITEBACK, "WB") \ + EM(NETFS_RMW_READ, "RM") \ EM(NETFS_UNBUFFERED_WRITE, "UW") \ EM(NETFS_DIO_READ, "DR") \ E_(NETFS_DIO_WRITE, "DW")

From patchwork Fri Oct 13 16:04:08 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421180
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 39/53] netfs: Provide a launder_folio implementation
Date: Fri, 13 Oct 2023 17:04:08 +0100
Message-ID: <20231013160423.2218093-40-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Provide a launder_folio implementation for netfslib.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_write.c    | 71 ++++++++++++++++++++++++++++++++++++
 fs/netfs/main.c              |  1 +
 include/linux/netfs.h        |  2 +
 include/trace/events/netfs.h |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index b81d807f89f0..5695bc3acf6c 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -1101,3 +1101,74 @@ int netfs_writepages(struct address_space *mapping, return ret; } EXPORT_SYMBOL(netfs_writepages); + +/* + * Deal with the disposition of a laundered folio. + */ +static void netfs_cleanup_launder_folio(struct netfs_io_request *wreq) +{ + if (wreq->error) { + pr_notice("R=%08x Laundering error %d\n", wreq->debug_id, wreq->error); + mapping_set_error(wreq->mapping, wreq->error); + } +} + +/** + * netfs_launder_folio - Clean up a dirty folio that's being invalidated + * @folio: The folio to clean + * + * This is called to write back a folio that's being invalidated when an inode + * is getting torn down. Ideally, writepages would be used instead.
+ */ +int netfs_launder_folio(struct folio *folio) +{ + struct netfs_io_request *wreq; + struct address_space *mapping = folio->mapping; + struct netfs_folio *finfo; + struct bio_vec bvec; + unsigned long long i_size = i_size_read(mapping->host); + unsigned long long start = folio_pos(folio); + size_t offset = 0, len; + int ret = 0; + + finfo = netfs_folio_info(folio); + if (finfo) { + offset = finfo->dirty_offset; + start += offset; + len = finfo->dirty_len; + } else { + len = folio_size(folio); + } + len = min_t(unsigned long long, len, i_size - start); + + wreq = netfs_alloc_request(mapping, NULL, start, len, NETFS_LAUNDER_WRITE); + if (IS_ERR(wreq)) { + ret = PTR_ERR(wreq); + goto out; + } + + if (!folio_clear_dirty_for_io(folio)) + goto out_put; + + trace_netfs_folio(folio, netfs_folio_trace_launder); + + _debug("launder %llx-%llx", start, start + len - 1); + + /* Speculatively write to the cache. We have to fix this up later if + * the store fails. + */ + wreq->cleanup = netfs_cleanup_launder_folio; + + bvec_set_folio(&bvec, folio, len, offset); + iov_iter_bvec(&wreq->iter, ITER_SOURCE, &bvec, 1, len); + __set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); + ret = netfs_begin_write(wreq, true, netfs_write_trace_launder); + +out_put: + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); +out: + folio_wait_fscache(folio); + _leave(" = %d", ret); + return ret; +} +EXPORT_SYMBOL(netfs_launder_folio); diff --git a/fs/netfs/main.c b/fs/netfs/main.c index b335e6a50f9c..577c8a9fc0f2 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = { [NETFS_READPAGE] = "RP", [NETFS_READ_FOR_WRITE] = "RW", [NETFS_WRITEBACK] = "WB", + [NETFS_LAUNDER_WRITE] = "LW", [NETFS_RMW_READ] = "RM", [NETFS_UNBUFFERED_WRITE] = "UW", [NETFS_DIO_READ] = "DR", diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 9661ae24120f..d4a1073cc541 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ 
-234,6 +234,7 @@ enum netfs_io_origin { NETFS_READPAGE, /* This read is a synchronous read */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ NETFS_WRITEBACK, /* This write was triggered by writepages */ + NETFS_LAUNDER_WRITE, /* This is triggered by ->launder_folio() */ NETFS_RMW_READ, /* This is an unbuffered read for RMW */ NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ NETFS_DIO_READ, /* This is a direct I/O read */ @@ -422,6 +423,7 @@ int netfs_writepages(struct address_space *mapping, struct writeback_control *wbc); void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length); bool netfs_release_folio(struct folio *folio, gfp_t gfp); +int netfs_launder_folio(struct folio *folio); /* VMA operations API. */ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group); diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 825946f510ee..54b2d781d3a9 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -25,6 +25,7 @@ #define netfs_write_traces \ EM(netfs_write_trace_dio_write, "DIO-WRITE") \ + EM(netfs_write_trace_launder, "LAUNDER ") \ EM(netfs_write_trace_unbuffered_write, "UNB-WRITE") \ E_(netfs_write_trace_writeback, "WRITEBACK") @@ -33,6 +34,7 @@ EM(NETFS_READPAGE, "RP") \ EM(NETFS_READ_FOR_WRITE, "RW") \ EM(NETFS_WRITEBACK, "WB") \ + EM(NETFS_LAUNDER_WRITE, "LW") \ EM(NETFS_RMW_READ, "RM") \ EM(NETFS_UNBUFFERED_WRITE, "UW") \ EM(NETFS_DIO_READ, "DR") \ @@ -129,6 +131,7 @@ EM(netfs_folio_trace_clear_g, "clear-g") \ EM(netfs_folio_trace_filled_gaps, "filled-gaps") \ EM(netfs_folio_trace_kill, "kill") \ + EM(netfs_folio_trace_launder, "launder") \ EM(netfs_folio_trace_mkwrite, "mkwrite") \ EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ EM(netfs_folio_trace_read_gaps, "read-gaps") \

From patchwork Fri Oct 13 16:04:09 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421182
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 40/53] netfs: Implement a write-through caching option
Date: Fri, 13 Oct 2023 17:04:09 +0100
Message-ID: <20231013160423.2218093-41-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Provide a flag whereby a filesystem may request that netfs_perform_write() perform write-through caching. This involves putting pages directly into writeback rather than dirty and attaching them to a write operation as we go. Further, the writes being made are limited to the byte range being written rather than whole folios being written. This can be used by cifs, for example, to deal with strict byte-range locking. This can't be used with content encryption as that may require expansion of the write RPC beyond the write being made.
This doesn't affect writes via mmap - those are written back in the normal way; similarly failed writethrough writes are marked dirty and left to writeback to retry. Another option would be to simply invalidate them, but the contents can be simultaneously accessed by read() and through mmap. Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/buffered_write.c | 66 ++++++++++++++++++++++---- fs/netfs/internal.h | 3 ++ fs/netfs/main.c | 1 + fs/netfs/objects.c | 1 + fs/netfs/output.c | 90 ++++++++++++++++++++++++++++++++++++ include/linux/netfs.h | 2 + include/trace/events/netfs.h | 8 +++- 7 files changed, 159 insertions(+), 12 deletions(-) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 5695bc3acf6c..6657dbd07b9d 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -26,6 +26,8 @@ enum netfs_how_to_modify { NETFS_FLUSH_CONTENT, /* Flush incompatible content. 
*/ }; +static void netfs_cleanup_buffered_write(struct netfs_io_request *wreq); + static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) { if (netfs_group && !folio_get_private(folio)) @@ -135,6 +137,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, struct inode *inode = file_inode(file); struct address_space *mapping = inode->i_mapping; struct netfs_inode *ctx = netfs_inode(inode); + struct writeback_control wbc = { + .sync_mode = WB_SYNC_NONE, + .for_sync = true, + .nr_to_write = LONG_MAX, + .range_start = iocb->ki_pos, + .range_end = iocb->ki_pos + iter->count, + }; + struct netfs_io_request *wreq = NULL; struct netfs_folio *finfo; struct folio *folio; enum netfs_how_to_modify howto; @@ -145,6 +155,30 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER; bool maybe_trouble = false; + if (unlikely(test_bit(NETFS_ICTX_WRITETHROUGH, &ctx->flags) || + iocb->ki_flags & (IOCB_DSYNC | IOCB_SYNC)) + ) { + if (pos < i_size_read(inode)) { + ret = filemap_write_and_wait_range(mapping, pos, pos + iter->count); + if (ret < 0) { + goto out; + } + } + + wbc_attach_fdatawrite_inode(&wbc, mapping->host); + + wreq = netfs_begin_writethrough(iocb, iter->count); + if (IS_ERR(wreq)) { + wbc_detach_inode(&wbc); + ret = PTR_ERR(wreq); + wreq = NULL; + goto out; + } + if (!is_sync_kiocb(iocb)) + wreq->iocb = iocb; + wreq->cleanup = netfs_cleanup_buffered_write; + } + do { size_t flen; size_t offset; /* Offset into pagecache folio */ @@ -314,7 +348,22 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, } written += copied; - folio_mark_dirty(folio); + if (likely(!wreq)) { + folio_mark_dirty(folio); + } else { + if (folio_test_dirty(folio)) + /* Sigh. mmap. */ + folio_clear_dirty_for_io(folio); + /* We make multiple writes to the folio... 
*/ + if (!folio_start_writeback(folio)) { + if (wreq->iter.count == 0) + trace_netfs_folio(folio, netfs_folio_trace_wthru); + else + trace_netfs_folio(folio, netfs_folio_trace_wthru_plus); + } + netfs_advance_writethrough(wreq, copied, + offset + copied == flen); + } retry: folio_unlock(folio); folio_put(folio); @@ -324,17 +373,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, } while (iov_iter_count(iter)); out: - if (likely(written)) { - /* Flush and wait for a write that requires immediate synchronisation. */ - if (iocb->ki_flags & (IOCB_DSYNC | IOCB_SYNC)) { - _debug("dsync"); - ret = filemap_fdatawait_range(mapping, iocb->ki_pos, - iocb->ki_pos + written); - } - - iocb->ki_pos += written; + if (unlikely(wreq)) { + ret = netfs_end_writethrough(wreq, iocb); + wbc_detach_inode(&wbc); + if (ret == -EIOCBQUEUED) + return ret; } + iocb->ki_pos += written; _leave(" = %zd [%zd]", written, ret); return written ? written : ret; diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index a25adbe7ec72..fc400659a73f 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -114,6 +114,9 @@ static inline void netfs_see_request(struct netfs_io_request *rreq, */ int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait, enum netfs_write_trace what); +struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len); +int netfs_advance_writethrough(struct netfs_io_request *wreq, size_t copied, bool to_page_end); +int netfs_end_writethrough(struct netfs_io_request *wreq, struct kiocb *iocb); /* * stats.c diff --git a/fs/netfs/main.c b/fs/netfs/main.c index 577c8a9fc0f2..ed540c5dec8d 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = { [NETFS_READPAGE] = "RP", [NETFS_READ_FOR_WRITE] = "RW", [NETFS_WRITEBACK] = "WB", + [NETFS_WRITETHROUGH] = "WT", [NETFS_LAUNDER_WRITE] = "LW", [NETFS_RMW_READ] = "RM", [NETFS_UNBUFFERED_WRITE] = "UW", diff --git 
a/fs/netfs/objects.c b/fs/netfs/objects.c index 6bf3b3f51499..14cdf34e767e 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -41,6 +41,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, rreq->debug_id = atomic_inc_return(&debug_ids); xa_init(&rreq->bounce); INIT_LIST_HEAD(&rreq->subrequests); + INIT_WORK(&rreq->work, NULL); refcount_set(&rreq->ref, 1); __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); diff --git a/fs/netfs/output.c b/fs/netfs/output.c index 2d2530dc9507..255027b472d7 100644 --- a/fs/netfs/output.c +++ b/fs/netfs/output.c @@ -393,3 +393,93 @@ int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait, TASK_UNINTERRUPTIBLE); return wreq->error; } + +/* + * Begin a write operation for writing through the pagecache. + */ +struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len) +{ + struct netfs_io_request *wreq; + struct file *file = iocb->ki_filp; + + wreq = netfs_alloc_request(file->f_mapping, file, iocb->ki_pos, len, + NETFS_WRITETHROUGH); + if (IS_ERR(wreq)) + return wreq; + + trace_netfs_write(wreq, netfs_write_trace_writethrough); + + __set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); + iov_iter_xarray(&wreq->iter, ITER_SOURCE, &wreq->mapping->i_pages, wreq->start, 0); + wreq->io_iter = wreq->iter; + + /* ->outstanding > 0 carries a ref */ + netfs_get_request(wreq, netfs_rreq_trace_get_for_outstanding); + atomic_set(&wreq->nr_outstanding, 1); + return wreq; +} + +static void netfs_submit_writethrough(struct netfs_io_request *wreq, bool final) +{ + struct netfs_inode *ictx = netfs_inode(wreq->inode); + unsigned long long start; + size_t len; + + if (!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags)) + return; + + start = wreq->start + wreq->submitted; + len = wreq->iter.count - wreq->submitted; + if (!final) { + len /= wreq->wsize; /* Round to number of maximum packets */ + len *= wreq->wsize; + } + + ictx->ops->create_write_requests(wreq, start, len); + wreq->submitted 
+= len; +} + +/* + * Advance the state of the write operation used when writing through the + * pagecache. Data has been copied into the pagecache that we need to append + * to the request. If we've added more than wsize then we need to create a new + * subrequest. + */ +int netfs_advance_writethrough(struct netfs_io_request *wreq, size_t copied, bool to_page_end) +{ + _enter("ic=%zu sb=%zu ws=%u cp=%zu tp=%u", + wreq->iter.count, wreq->submitted, wreq->wsize, copied, to_page_end); + + wreq->iter.count += copied; + wreq->io_iter.count += copied; + if (to_page_end && wreq->io_iter.count - wreq->submitted >= wreq->wsize) + netfs_submit_writethrough(wreq, false); + + return wreq->error; +} + +/* + * End a write operation used when writing through the pagecache. + */ +int netfs_end_writethrough(struct netfs_io_request *wreq, struct kiocb *iocb) +{ + int ret = -EIOCBQUEUED; + + _enter("ic=%zu sb=%zu ws=%u", + wreq->iter.count, wreq->submitted, wreq->wsize); + + if (wreq->submitted < wreq->io_iter.count) + netfs_submit_writethrough(wreq, true); + + if (atomic_dec_and_test(&wreq->nr_outstanding)) + netfs_write_terminated(wreq, false); + + if (is_sync_kiocb(iocb)) { + wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, + TASK_UNINTERRUPTIBLE); + ret = wreq->error; + } + + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); + return ret; +} diff --git a/include/linux/netfs.h b/include/linux/netfs.h index d4a1073cc541..c416645649e1 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -143,6 +143,7 @@ struct netfs_inode { #define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */ #define NETFS_ICTX_UNBUFFERED 1 /* I/O should not use the pagecache */ #define NETFS_ICTX_ENCRYPTED 2 /* The file contents are encrypted */ +#define NETFS_ICTX_WRITETHROUGH 3 /* Write-through caching */ unsigned char min_bshift; /* log2 min block size for bounding box or 0 */ unsigned char crypto_bshift; /* log2 of crypto block size */ unsigned char crypto_trailer; /* Size of 
crypto trailer */ @@ -234,6 +235,7 @@ enum netfs_io_origin { NETFS_READPAGE, /* This read is a synchronous read */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ NETFS_WRITEBACK, /* This write was triggered by writepages */ + NETFS_WRITETHROUGH, /* This write was made by netfs_perform_write() */ NETFS_LAUNDER_WRITE, /* This is triggered by ->launder_folio() */ NETFS_RMW_READ, /* This is an unbuffered read for RMW */ NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 54b2d781d3a9..04cbe803c251 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -27,13 +27,15 @@ EM(netfs_write_trace_dio_write, "DIO-WRITE") \ EM(netfs_write_trace_launder, "LAUNDER ") \ EM(netfs_write_trace_unbuffered_write, "UNB-WRITE") \ - E_(netfs_write_trace_writeback, "WRITEBACK") + EM(netfs_write_trace_writeback, "WRITEBACK") \ + E_(netfs_write_trace_writethrough, "WRITETHRU") #define netfs_rreq_origins \ EM(NETFS_READAHEAD, "RA") \ EM(NETFS_READPAGE, "RP") \ EM(NETFS_READ_FOR_WRITE, "RW") \ EM(NETFS_WRITEBACK, "WB") \ + EM(NETFS_WRITETHROUGH, "WT") \ EM(NETFS_LAUNDER_WRITE, "LW") \ EM(NETFS_RMW_READ, "RM") \ EM(NETFS_UNBUFFERED_WRITE, "UW") \ @@ -138,7 +140,9 @@ EM(netfs_folio_trace_redirty, "redirty") \ EM(netfs_folio_trace_redirtied, "redirtied") \ EM(netfs_folio_trace_store, "store") \ - E_(netfs_folio_trace_store_plus, "store+") + EM(netfs_folio_trace_store_plus, "store+") \ + EM(netfs_folio_trace_wthru, "wthru") \ + E_(netfs_folio_trace_wthru_plus, "wthru+") #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY

From patchwork Fri Oct 13 16:04:10 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421183
From: David Howells
To: Jeff Layton , Steve French
Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula , linux-cachefs@redhat.com
Subject: [RFC PATCH 41/53] netfs: Rearrange netfs_io_subrequest to put request pointer first
Date: Fri, 13 Oct 2023 17:04:10 +0100
Message-ID: <20231013160423.2218093-42-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Rearrange the netfs_io_subrequest struct to put the netfs_io_request pointer (rreq) first. This then allows netfs_io_subrequest to be put in a union with a pointer to a wrapper around netfs_io_request for cifs.
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- include/linux/netfs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/netfs.h b/include/linux/netfs.h index c416645649e1..ff4f86ae64e4 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -209,8 +209,8 @@ struct netfs_cache_resources { * the pages it points to can be relied on to exist for the duration. */ struct netfs_io_subrequest { - struct work_struct work; struct netfs_io_request *rreq; /* Supervising I/O request */ + struct work_struct work; struct list_head rreq_link; /* Link in rreq->subrequests */ struct iov_iter io_iter; /* Iterator for this subrequest */ loff_t start; /* Where to start the I/O */

From patchwork Fri Oct 13 16:04:11 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421184
From: David Howells
To: Jeff Layton , Steve French
Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [RFC PATCH 42/53] afs: Use the netfs write helpers
Date: Fri, 13 Oct 2023 17:04:11 +0100
Message-ID: <20231013160423.2218093-43-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Make afs use the netfs write helpers.
Signed-off-by: David Howells cc: Marc Dionne cc: Jeff Layton cc: linux-afs@lists.infradead.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/afs/file.c | 65 +++- fs/afs/internal.h | 10 +- fs/afs/write.c | 704 ++----------------------------------- include/trace/events/afs.h | 23 -- 4 files changed, 81 insertions(+), 721 deletions(-) diff --git a/fs/afs/file.c b/fs/afs/file.c index 5bb78d874292..586a573b1a9b 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -34,7 +34,7 @@ const struct file_operations afs_file_operations = { .release = afs_release, .llseek = generic_file_llseek, .read_iter = afs_file_read_iter, - .write_iter = afs_file_write, + .write_iter = netfs_file_write_iter, .mmap = afs_file_mmap, .splice_read = afs_file_splice_read, .splice_write = iter_file_splice_write, @@ -50,16 +50,15 @@ const struct inode_operations afs_file_inode_operations = { }; const struct address_space_operations afs_file_aops = { + .direct_IO = noop_direct_IO, .read_folio = netfs_read_folio, .readahead = netfs_readahead, .dirty_folio = afs_dirty_folio, - .launder_folio = afs_launder_folio, + .launder_folio = netfs_launder_folio, .release_folio = netfs_release_folio, .invalidate_folio = netfs_invalidate_folio, - .write_begin = afs_write_begin, - .write_end = afs_write_end, - .writepages = afs_writepages, .migrate_folio = filemap_migrate_folio, + .writepages = afs_writepages, }; const struct address_space_operations afs_symlink_aops = { @@ -355,8 +354,10 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio) static int afs_init_request(struct netfs_io_request *rreq, struct file *file) { - rreq->netfs_priv = key_get(afs_file_key(file)); + if (file) + rreq->netfs_priv = key_get(afs_file_key(file)); rreq->rsize = 4 * 1024 * 1024; + rreq->wsize = 16 * 1024; return 0; } @@ -373,12 +374,37 @@ static void afs_free_request(struct netfs_io_request *rreq) key_put(rreq->netfs_priv); } +static void afs_update_i_size(struct inode 
*inode, loff_t new_i_size) +{ + struct afs_vnode *vnode = AFS_FS_I(inode); + loff_t i_size; + + write_seqlock(&vnode->cb_lock); + i_size = i_size_read(&vnode->netfs.inode); + if (new_i_size > i_size) { + i_size_write(&vnode->netfs.inode, new_i_size); + inode_set_bytes(&vnode->netfs.inode, new_i_size); + } + write_sequnlock(&vnode->cb_lock); + fscache_update_cookie(afs_vnode_cache(vnode), NULL, &new_i_size); +} + +static void afs_netfs_invalidate_cache(struct netfs_io_request *wreq) +{ + struct afs_vnode *vnode = AFS_FS_I(wreq->inode); + + afs_invalidate_cache(vnode, 0); +} + const struct netfs_request_ops afs_req_ops = { .init_request = afs_init_request, .free_request = afs_free_request, .begin_cache_operation = fscache_begin_cache_operation, .check_write_begin = afs_check_write_begin, .issue_read = afs_issue_read, + .update_i_size = afs_update_i_size, + .invalidate_cache = afs_netfs_invalidate_cache, + .create_write_requests = afs_create_write_requests, }; int afs_write_inode(struct inode *inode, struct writeback_control *wbc) @@ -453,28 +479,39 @@ static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pg static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter) { - struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp)); + struct inode *inode = file_inode(iocb->ki_filp); + struct afs_vnode *vnode = AFS_FS_I(inode); struct afs_file *af = iocb->ki_filp->private_data; int ret; - ret = afs_validate(vnode, af->key); + if (iocb->ki_flags & IOCB_DIRECT) + return netfs_unbuffered_read_iter(iocb, iter); + + ret = netfs_start_io_read(inode); if (ret < 0) return ret; - - return generic_file_read_iter(iocb, iter); + ret = afs_validate(vnode, af->key); + if (ret == 0) + ret = netfs_file_read_iter(iocb, iter); + netfs_end_io_read(inode); + return ret; } static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos, struct pipe_inode_info *pipe, size_t len, unsigned int flags) { - struct afs_vnode *vnode = 
AFS_FS_I(file_inode(in)); + struct inode *inode = file_inode(in); + struct afs_vnode *vnode = AFS_FS_I(inode); struct afs_file *af = in->private_data; int ret; - ret = afs_validate(vnode, af->key); + ret = netfs_start_io_read(inode); if (ret < 0) return ret; - - return filemap_splice_read(in, ppos, pipe, len, flags); + ret = afs_validate(vnode, af->key); + if (ret == 0) + ret = filemap_splice_read(in, ppos, pipe, len, flags); + netfs_end_io_read(inode); + return ret; } diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 03fed7ecfab9..da5de62b5f9c 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -1468,19 +1468,11 @@ bool afs_dirty_folio(struct address_space *, struct folio *); #else #define afs_dirty_folio filemap_dirty_folio #endif -extern int afs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, - struct page **pagep, void **fsdata); -extern int afs_write_end(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned copied, - struct page *page, void *fsdata); -extern int afs_writepage(struct page *, struct writeback_control *); extern int afs_writepages(struct address_space *, struct writeback_control *); -extern ssize_t afs_file_write(struct kiocb *, struct iov_iter *); extern int afs_fsync(struct file *, loff_t, loff_t, int); extern vm_fault_t afs_page_mkwrite(struct vm_fault *vmf); extern void afs_prune_wb_keys(struct afs_vnode *); -int afs_launder_folio(struct folio *); +void afs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len); /* * xattr.c diff --git a/fs/afs/write.c b/fs/afs/write.c index cdb1391ec46e..748d5954f0ee 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -12,17 +12,9 @@ #include #include #include +#include #include "internal.h" -static int afs_writepages_region(struct address_space *mapping, - struct writeback_control *wbc, - unsigned long long start, - unsigned long long end, loff_t *_next, - bool max_one_loop); - -static void 
afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len, - loff_t i_size, bool caching); - #ifdef CONFIG_AFS_FSCACHE /* * Mark a page as having been made dirty and thus needing writeback. We also @@ -33,216 +25,16 @@ bool afs_dirty_folio(struct address_space *mapping, struct folio *folio) return fscache_dirty_folio(mapping, folio, afs_vnode_cache(AFS_FS_I(mapping->host))); } -static void afs_folio_start_fscache(bool caching, struct folio *folio) -{ - if (caching) - folio_start_fscache(folio); -} -#else -static void afs_folio_start_fscache(bool caching, struct folio *folio) -{ -} #endif -/* - * prepare to perform part of a write to a page - */ -int afs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, - struct page **_page, void **fsdata) -{ - struct afs_vnode *vnode = AFS_FS_I(file_inode(file)); - struct folio *folio; - int ret; - - _enter("{%llx:%llu},%llx,%x", - vnode->fid.vid, vnode->fid.vnode, pos, len); - - /* Prefetch area to be written into the cache if we're caching this - * file. We need to do this before we get a lock on the page in case - * there's more than one writer competing for the same cache block. - */ - ret = netfs_write_begin(&vnode->netfs, file, mapping, pos, len, &folio, fsdata); - if (ret < 0) - return ret; - -try_again: - /* See if this page is already partially written in a way that we can - * merge the new write with. 
- */ - if (folio_test_writeback(folio)) { - trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio); - folio_unlock(folio); - goto wait_for_writeback; - } - - *_page = folio_file_page(folio, pos / PAGE_SIZE); - _leave(" = 0"); - return 0; - -wait_for_writeback: - ret = folio_wait_writeback_killable(folio); - if (ret < 0) - goto error; - - ret = folio_lock_killable(folio); - if (ret < 0) - goto error; - goto try_again; - -error: - folio_put(folio); - _leave(" = %d", ret); - return ret; -} - -/* - * finalise part of a write to a page - */ -int afs_write_end(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned copied, - struct page *subpage, void *fsdata) -{ - struct folio *folio = page_folio(subpage); - struct afs_vnode *vnode = AFS_FS_I(file_inode(file)); - loff_t i_size, write_end_pos; - - _enter("{%llx:%llu},{%lx}", - vnode->fid.vid, vnode->fid.vnode, folio_index(folio)); - - if (!folio_test_uptodate(folio)) { - if (copied < len) { - copied = 0; - goto out; - } - - folio_mark_uptodate(folio); - } - - if (copied == 0) - goto out; - - write_end_pos = pos + copied; - - i_size = i_size_read(&vnode->netfs.inode); - if (write_end_pos > i_size) { - write_seqlock(&vnode->cb_lock); - i_size = i_size_read(&vnode->netfs.inode); - if (write_end_pos > i_size) - afs_set_i_size(vnode, write_end_pos); - write_sequnlock(&vnode->cb_lock); - fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos); - } - - if (folio_mark_dirty(folio)) - _debug("dirtied %lx", folio_index(folio)); - -out: - folio_unlock(folio); - folio_put(folio); - return copied; -} - -/* - * kill all the pages in the given range - */ -static void afs_kill_pages(struct address_space *mapping, - loff_t start, loff_t len) -{ - struct afs_vnode *vnode = AFS_FS_I(mapping->host); - struct folio *folio; - pgoff_t index = start / PAGE_SIZE; - pgoff_t last = (start + len - 1) / PAGE_SIZE, next; - - _enter("{%llx:%llu},%llx @%llx", - vnode->fid.vid, vnode->fid.vnode, len, 
start); - - do { - _debug("kill %lx (to %lx)", index, last); - - folio = filemap_get_folio(mapping, index); - if (IS_ERR(folio)) { - next = index + 1; - continue; - } - - next = folio_next_index(folio); - - folio_clear_uptodate(folio); - folio_end_writeback(folio); - folio_lock(folio); - generic_error_remove_page(mapping, &folio->page); - folio_unlock(folio); - folio_put(folio); - - } while (index = next, index <= last); - - _leave(""); -} - -/* - * Redirty all the pages in a given range. - */ -static void afs_redirty_pages(struct writeback_control *wbc, - struct address_space *mapping, - loff_t start, loff_t len) -{ - struct afs_vnode *vnode = AFS_FS_I(mapping->host); - struct folio *folio; - pgoff_t index = start / PAGE_SIZE; - pgoff_t last = (start + len - 1) / PAGE_SIZE, next; - - _enter("{%llx:%llu},%llx @%llx", - vnode->fid.vid, vnode->fid.vnode, len, start); - - do { - _debug("redirty %llx @%llx", len, start); - - folio = filemap_get_folio(mapping, index); - if (IS_ERR(folio)) { - next = index + 1; - continue; - } - - next = index + folio_nr_pages(folio); - folio_redirty_for_writepage(wbc, folio); - folio_end_writeback(folio); - folio_put(folio); - } while (index = next, index <= last); - - _leave(""); -} - /* * completion of write to server */ static void afs_pages_written_back(struct afs_vnode *vnode, loff_t start, unsigned int len) { - struct address_space *mapping = vnode->netfs.inode.i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - _enter("{%llx:%llu},{%x @%llx}", vnode->fid.vid, vnode->fid.vnode, len, start); - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (!folio_test_writeback(folio)) { - kdebug("bad %x @%llx page %lx %lx", - len, start, folio_index(folio), end); - ASSERT(folio_test_writeback(folio)); - } - - trace_afs_folio_dirty(vnode, tracepoint_string("clear"), folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); - 
afs_prune_wb_keys(vnode); _leave(""); } @@ -379,339 +171,53 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t return afs_put_operation(op); } -/* - * Extend the region to be written back to include subsequent contiguously - * dirty pages if possible, but don't sleep while doing so. - * - * If this page holds new content, then we can include filler zeros in the - * writeback. - */ -static void afs_extend_writeback(struct address_space *mapping, - struct afs_vnode *vnode, - long *_count, - loff_t start, - loff_t max_len, - bool caching, - size_t *_len) +static void afs_upload_to_server(struct netfs_io_subrequest *subreq) { - struct folio_batch fbatch; - struct folio *folio; - pgoff_t index = (start + *_len) / PAGE_SIZE; - bool stop = true; - unsigned int i; - - XA_STATE(xas, &mapping->i_pages, index); - folio_batch_init(&fbatch); - - do { - /* Firstly, we gather up a batch of contiguous dirty pages - * under the RCU read lock - but we can't clear the dirty flags - * there if any of those pages are mapped. - */ - rcu_read_lock(); - - xas_for_each(&xas, folio, ULONG_MAX) { - stop = true; - if (xas_retry(&xas, folio)) - continue; - if (xa_is_value(folio)) - break; - if (folio_index(folio) != index) - break; - - if (!folio_try_get_rcu(folio)) { - xas_reset(&xas); - continue; - } - - /* Has the folio moved or been split? 
*/ - if (unlikely(folio != xas_reload(&xas))) { - folio_put(folio); - break; - } - - if (!folio_trylock(folio)) { - folio_put(folio); - break; - } - if (!folio_test_dirty(folio) || - folio_test_writeback(folio) || - folio_test_fscache(folio)) { - folio_unlock(folio); - folio_put(folio); - break; - } - - index += folio_nr_pages(folio); - *_count -= folio_nr_pages(folio); - *_len += folio_size(folio); - stop = false; - if (*_len >= max_len || *_count <= 0) - stop = true; - - if (!folio_batch_add(&fbatch, folio)) - break; - if (stop) - break; - } - - if (!stop) - xas_pause(&xas); - rcu_read_unlock(); - - /* Now, if we obtained any folios, we can shift them to being - * writable and mark them for caching. - */ - if (!folio_batch_count(&fbatch)) - break; - - for (i = 0; i < folio_batch_count(&fbatch); i++) { - folio = fbatch.folios[i]; - trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio); + struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode); + ssize_t ret; - if (!folio_clear_dirty_for_io(folio)) - BUG(); - if (folio_start_writeback(folio)) - BUG(); - afs_folio_start_fscache(caching, folio); - folio_unlock(folio); - } + _enter("%x[%x],%zx", + subreq->rreq->debug_id, subreq->debug_index, subreq->io_iter.count); - folio_batch_release(&fbatch); - cond_resched(); - } while (!stop); + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + ret = afs_store_data(vnode, &subreq->io_iter, subreq->start, + subreq->rreq->origin == NETFS_LAUNDER_WRITE); + netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len, + false); } -/* - * Synchronously write back the locked page and any subsequent non-locked dirty - * pages. 
- */ -static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping, - struct writeback_control *wbc, - struct folio *folio, - unsigned long long start, - unsigned long long end) +static void afs_upload_to_server_worker(struct work_struct *work) { - struct afs_vnode *vnode = AFS_FS_I(mapping->host); - struct iov_iter iter; - unsigned long long i_size = i_size_read(&vnode->netfs.inode); - size_t len, max_len; - bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode)); - long count = wbc->nr_to_write; - int ret; - - _enter(",%lx,%llx-%llx", folio_index(folio), start, end); - - if (folio_start_writeback(folio)) - BUG(); - afs_folio_start_fscache(caching, folio); - - count -= folio_nr_pages(folio); - - /* Find all consecutive lockable dirty pages that have contiguous - * written regions, stopping when we find a page that is not - * immediately lockable, is not dirty or is missing, or we reach the - * end of the range. - */ - trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio); - - len = folio_size(folio); - if (start < i_size) { - /* Trim the write to the EOF; the extra data is ignored. Also - * put an upper limit on the size of a single storedata op. - */ - max_len = 65536 * 4096; - max_len = min_t(unsigned long long, max_len, end - start + 1); - max_len = min_t(unsigned long long, max_len, i_size - start); - - if (len < max_len) - afs_extend_writeback(mapping, vnode, &count, - start, max_len, caching, &len); - len = min_t(unsigned long long, len, i_size - start); - } - - /* We now have a contiguous set of dirty pages, each with writeback - * set; the first page is still locked at this point, but all the rest - * have been unlocked. - */ - folio_unlock(folio); + struct netfs_io_subrequest *subreq = + container_of(work, struct netfs_io_subrequest, work); - if (start < i_size) { - _debug("write back %zx @%llx [%llx]", len, start, i_size); - - /* Speculatively write to the cache. We have to fix this up - * later if the store fails. 
- */ - afs_write_to_cache(vnode, start, len, i_size, caching); - - iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len); - ret = afs_store_data(vnode, &iter, start, false); - } else { - _debug("write discard %zx @%llx [%llx]", len, start, i_size); - - /* The dirty region was entirely beyond the EOF. */ - fscache_clear_page_bits(mapping, start, len, caching); - afs_pages_written_back(vnode, start, len); - ret = 0; - } - - switch (ret) { - case 0: - wbc->nr_to_write = count; - ret = len; - break; - - default: - pr_notice("kAFS: Unexpected error from FS.StoreData %d\n", ret); - fallthrough; - case -EACCES: - case -EPERM: - case -ENOKEY: - case -EKEYEXPIRED: - case -EKEYREJECTED: - case -EKEYREVOKED: - case -ENETRESET: - afs_redirty_pages(wbc, mapping, start, len); - mapping_set_error(mapping, ret); - break; - - case -EDQUOT: - case -ENOSPC: - afs_redirty_pages(wbc, mapping, start, len); - mapping_set_error(mapping, -ENOSPC); - break; - - case -EROFS: - case -EIO: - case -EREMOTEIO: - case -EFBIG: - case -ENOENT: - case -ENOMEDIUM: - case -ENXIO: - trace_afs_file_error(vnode, ret, afs_file_error_writeback_fail); - afs_kill_pages(mapping, start, len); - mapping_set_error(mapping, ret); - break; - } - - _leave(" = %d", ret); - return ret; + afs_upload_to_server(subreq); } /* - * write a region of pages back to the server + * Set up write requests for a writeback slice. We need to add a write request + * for each write we want to make. 
*/ -static int afs_writepages_region(struct address_space *mapping, - struct writeback_control *wbc, - unsigned long long start, - unsigned long long end, loff_t *_next, - bool max_one_loop) +void afs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len) { - struct folio *folio; - struct folio_batch fbatch; - ssize_t ret; - unsigned int i; - int n, skips = 0; - - _enter("%llx,%llx,", start, end); - folio_batch_init(&fbatch); - - do { - pgoff_t index = start / PAGE_SIZE; + struct netfs_io_subrequest *subreq; - n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE, - PAGECACHE_TAG_DIRTY, &fbatch); + _enter("%x,%llx-%llx", wreq->debug_id, start, start + len); - if (!n) - break; - for (i = 0; i < n; i++) { - folio = fbatch.folios[i]; - start = folio_pos(folio); /* May regress with THPs */ - - _debug("wback %lx", folio_index(folio)); - - /* At this point we hold neither the i_pages lock nor the - * page lock: the page may be truncated or invalidated - * (changing page->mapping to NULL), or even swizzled - * back from swapper_space to tmpfs file mapping - */ -try_again: - if (wbc->sync_mode != WB_SYNC_NONE) { - ret = folio_lock_killable(folio); - if (ret < 0) { - folio_batch_release(&fbatch); - return ret; - } - } else { - if (!folio_trylock(folio)) - continue; - } - - if (folio->mapping != mapping || - !folio_test_dirty(folio)) { - start += folio_size(folio); - folio_unlock(folio); - continue; - } - - if (folio_test_writeback(folio) || - folio_test_fscache(folio)) { - folio_unlock(folio); - if (wbc->sync_mode != WB_SYNC_NONE) { - folio_wait_writeback(folio); -#ifdef CONFIG_AFS_FSCACHE - folio_wait_fscache(folio); -#endif - goto try_again; - } - - start += folio_size(folio); - if (wbc->sync_mode == WB_SYNC_NONE) { - if (skips >= 5 || need_resched()) { - *_next = start; - folio_batch_release(&fbatch); - _leave(" = 0 [%llx]", *_next); - return 0; - } - skips++; - } - continue; - } - - if (!folio_clear_dirty_for_io(folio)) - BUG(); - ret = 
afs_write_back_from_locked_folio(mapping, wbc, - folio, start, end); - if (ret < 0) { - _leave(" = %zd", ret); - folio_batch_release(&fbatch); - return ret; - } - - start += ret; - } - - folio_batch_release(&fbatch); - cond_resched(); - } while (wbc->nr_to_write > 0); - - *_next = start; - _leave(" = 0 [%llx]", *_next); - return 0; + subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER, + start, len, afs_upload_to_server_worker); + if (subreq) + netfs_queue_write_request(subreq); } /* * write some of the pending data back to the server */ -int afs_writepages(struct address_space *mapping, - struct writeback_control *wbc) +int afs_writepages(struct address_space *mapping, struct writeback_control *wbc) { struct afs_vnode *vnode = AFS_FS_I(mapping->host); - loff_t start, next; int ret; - _enter(""); - /* We have to be careful as we can end up racing with setattr() * truncating the pagecache since the caller doesn't take a lock here * to prevent it. @@ -721,68 +227,11 @@ int afs_writepages(struct address_space *mapping, else if (!down_read_trylock(&vnode->validate_lock)) return 0; - if (wbc->range_cyclic) { - start = mapping->writeback_index * PAGE_SIZE; - ret = afs_writepages_region(mapping, wbc, start, LLONG_MAX, - &next, false); - if (ret == 0) { - mapping->writeback_index = next / PAGE_SIZE; - if (start > 0 && wbc->nr_to_write > 0) { - ret = afs_writepages_region(mapping, wbc, 0, - start, &next, false); - if (ret == 0) - mapping->writeback_index = - next / PAGE_SIZE; - } - } - } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) { - ret = afs_writepages_region(mapping, wbc, 0, LLONG_MAX, - &next, false); - if (wbc->nr_to_write > 0 && ret == 0) - mapping->writeback_index = next / PAGE_SIZE; - } else { - ret = afs_writepages_region(mapping, wbc, - wbc->range_start, wbc->range_end, - &next, false); - } - + ret = netfs_writepages(mapping, wbc); up_read(&vnode->validate_lock); - _leave(" = %d", ret); return ret; } -/* - * write to an AFS file - 
*/ -ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from) -{ - struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp)); - struct afs_file *af = iocb->ki_filp->private_data; - ssize_t result; - size_t count = iov_iter_count(from); - - _enter("{%llx:%llu},{%zu},", - vnode->fid.vid, vnode->fid.vnode, count); - - if (IS_SWAPFILE(&vnode->netfs.inode)) { - printk(KERN_INFO - "AFS: Attempt to write to active swap file!\n"); - return -EBUSY; - } - - if (!count) - return 0; - - result = afs_validate(vnode, af->key); - if (result < 0) - return result; - - result = generic_file_write_iter(iocb, from); - - _leave(" = %zd", result); - return result; -} - /* * flush any dirty pages for this process, and check for write errors. * - the return status from this call provides a reliable indication of @@ -811,49 +260,11 @@ int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync) */ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) { - struct folio *folio = page_folio(vmf->page); struct file *file = vmf->vma->vm_file; - struct inode *inode = file_inode(file); - struct afs_vnode *vnode = AFS_FS_I(inode); - struct afs_file *af = file->private_data; - vm_fault_t ret = VM_FAULT_RETRY; - - _enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, folio_index(folio)); - - afs_validate(vnode, af->key); - - sb_start_pagefault(inode->i_sb); - - /* Wait for the page to be written to the cache before we allow it to - * be modified. We then assume the entire page will need writing back. 
- */ -#ifdef CONFIG_AFS_FSCACHE - if (folio_test_fscache(folio) && - folio_wait_fscache_killable(folio) < 0) - goto out; -#endif - - if (folio_wait_writeback_killable(folio)) - goto out; - - if (folio_lock_killable(folio) < 0) - goto out; - - if (folio_wait_writeback_killable(folio) < 0) { - folio_unlock(folio); - goto out; - } - - if (folio_test_dirty(folio)) - trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite+"), folio); - else - trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio); - file_update_time(file); - ret = VM_FAULT_LOCKED; -out: - sb_end_pagefault(inode->i_sb); - return ret; + if (afs_validate(AFS_FS_I(file_inode(file)), afs_file_key(file)) < 0) + return VM_FAULT_SIGBUS; + return netfs_page_mkwrite(vmf, NULL); } /* @@ -883,60 +294,3 @@ void afs_prune_wb_keys(struct afs_vnode *vnode) afs_put_wb_key(wbk); } } - -/* - * Clean up a page during invalidation. - */ -int afs_launder_folio(struct folio *folio) -{ - struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio)); - struct iov_iter iter; - struct bio_vec bv; - unsigned long long fend, i_size = vnode->netfs.inode.i_size; - size_t len; - int ret = 0; - - _enter("{%lx}", folio->index); - - if (folio_clear_dirty_for_io(folio) && folio_pos(folio) < i_size) { - len = folio_size(folio); - fend = folio_pos(folio) + len; - if (vnode->netfs.inode.i_size < fend) - len = fend - i_size; - - bvec_set_folio(&bv, folio, len, 0); - iov_iter_bvec(&iter, WRITE, &bv, 1, len); - - trace_afs_folio_dirty(vnode, tracepoint_string("launder"), folio); - ret = afs_store_data(vnode, &iter, folio_pos(folio), true); - } - - trace_afs_folio_dirty(vnode, tracepoint_string("laundered"), folio); - folio_wait_fscache(folio); - return ret; -} - -/* - * Deal with the completion of writing the data to the cache. 
- */ -static void afs_write_to_cache_done(void *priv, ssize_t transferred_or_error, - bool was_async) -{ - struct afs_vnode *vnode = priv; - - if (IS_ERR_VALUE(transferred_or_error) && - transferred_or_error != -ENOBUFS) - afs_invalidate_cache(vnode, 0); -} - -/* - * Save the write to the cache also. - */ -static void afs_write_to_cache(struct afs_vnode *vnode, - loff_t start, size_t len, loff_t i_size, - bool caching) -{ - fscache_write_to_cache(afs_vnode_cache(vnode), - vnode->netfs.inode.i_mapping, start, len, i_size, - afs_write_to_cache_done, vnode, caching); -} diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 08506680350c..754358149372 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -837,29 +837,6 @@ TRACE_EVENT(afs_dir_check_failed, __entry->vnode, __entry->off, __entry->i_size) ); -TRACE_EVENT(afs_folio_dirty, - TP_PROTO(struct afs_vnode *vnode, const char *where, struct folio *folio), - - TP_ARGS(vnode, where, folio), - - TP_STRUCT__entry( - __field(struct afs_vnode *, vnode) - __field(const char *, where) - __field(pgoff_t, index) - __field(size_t, size) - ), - - TP_fast_assign( - __entry->vnode = vnode; - __entry->where = where; - __entry->index = folio_index(folio); - __entry->size = folio_size(folio); - ), - - TP_printk("vn=%p ix=%05lx s=%05lx %s", - __entry->vnode, __entry->index, __entry->size, __entry->where) - ); - TRACE_EVENT(afs_call_state, TP_PROTO(struct afs_call *call, enum afs_call_state from,
From patchwork Fri Oct 13 16:04:12 2023 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13421186
From: David Howells To: Jeff Layton , Steve French Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner ,
linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula , linux-cachefs@redhat.com Subject: [RFC PATCH 43/53] cifs: Replace cifs_readdata with a wrapper around netfs_io_subrequest Date: Fri, 13 Oct 2023 17:04:12 +0100 Message-ID: <20231013160423.2218093-44-dhowells@redhat.com> In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com> References: <20231013160423.2218093-1-dhowells@redhat.com> MIME-Version: 1.0
Netfslib has a facility whereby the allocation for netfs_io_subrequest can be increased so that filesystem-specific data can be tagged on the end. Prepare to use this by making a struct, cifs_io_subrequest, that wraps netfs_io_subrequest, and absorb struct cifs_readdata into it. Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 22 ++++++++++-------- fs/smb/client/cifsproto.h | 9 ++++++-- fs/smb/client/cifssmb.c | 11 ++++----- fs/smb/client/file.c | 48 ++++++++++++++++++--------------------- fs/smb/client/smb2ops.c | 2 +- fs/smb/client/smb2pdu.c | 13 ++++++----- fs/smb/client/smb2proto.h | 2 +- fs/smb/client/transport.c | 4 ++-- 8 files changed, 56 insertions(+), 55 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 02082621d8e0..22fa98428845 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -237,7 +237,7 @@ struct dfs_info3_param; struct cifs_fattr; struct smb3_fs_context; struct cifs_fid; -struct cifs_readdata; +struct cifs_io_subrequest; struct cifs_writedata; struct cifs_io_parms; struct cifs_search_info; @@ -411,7 +411,7 @@ struct
smb_version_operations { /* send a flush request to the server */ int (*flush)(const unsigned int, struct cifs_tcon *, struct cifs_fid *); /* async read from the server */ - int (*async_readv)(struct cifs_readdata *); + int (*async_readv)(struct cifs_io_subrequest *); /* async write to the server */ int (*async_writev)(struct cifs_writedata *, void (*release)(struct kref *)); @@ -1423,26 +1423,28 @@ struct cifs_aio_ctx { }; /* asynchronous read support */ -struct cifs_readdata { - struct kref refcount; - struct list_head list; - struct completion done; +struct cifs_io_subrequest { + struct netfs_io_subrequest subreq; struct cifsFileInfo *cfile; struct address_space *mapping; struct cifs_aio_ctx *ctx; - __u64 offset; ssize_t got_bytes; - unsigned int bytes; pid_t pid; int result; - struct work_struct work; - struct iov_iter iter; struct kvec iov[2]; struct TCP_Server_Info *server; #ifdef CONFIG_CIFS_SMB_DIRECT struct smbd_mr *mr; #endif struct cifs_credits credits; + + // TODO: Remove following elements + struct list_head list; + struct completion done; + struct work_struct work; + struct iov_iter iter; + __u64 offset; + unsigned int bytes; }; /* asynchronous write support */ diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 0c37eefa18a5..7748fe148fb4 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -580,8 +580,13 @@ void __cifs_put_smb_ses(struct cifs_ses *ses); extern struct cifs_ses * cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx); -void cifs_readdata_release(struct kref *refcount); -int cifs_async_readv(struct cifs_readdata *rdata); +void cifs_readdata_release(struct cifs_io_subrequest *rdata); +static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) +{ + if (refcount_dec_and_test(&rdata->subreq.ref)) + cifs_readdata_release(rdata); +} +int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry 
*mid); int cifs_async_writev(struct cifs_writedata *wdata, diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 25503f1a4fd2..76005b3d5ffe 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -24,6 +24,8 @@ #include #include #include +#include +#include #include "cifspdu.h" #include "cifsfs.h" #include "cifsglob.h" @@ -1260,12 +1262,11 @@ CIFS_open(const unsigned int xid, struct cifs_open_parms *oparms, int *oplock, static void cifs_readv_callback(struct mid_q_entry *mid) { - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); struct TCP_Server_Info *server = tcon->ses->server; struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2, - .rq_iter_size = iov_iter_count(&rdata->iter), .rq_iter = rdata->iter }; struct cifs_credits credits = { .value = 1, .instance = 0 }; @@ -1310,7 +1311,7 @@ cifs_readv_callback(struct mid_q_entry *mid) /* cifs_async_readv - send an async write, and set up mid to handle result */ int -cifs_async_readv(struct cifs_readdata *rdata) +cifs_async_readv(struct cifs_io_subrequest *rdata) { int rc; READ_REQ *smb = NULL; @@ -1362,15 +1363,11 @@ cifs_async_readv(struct cifs_readdata *rdata) rdata->iov[1].iov_base = (char *)smb + 4; rdata->iov[1].iov_len = get_rfc1002_length(smb); - kref_get(&rdata->refcount); rc = cifs_call_async(tcon->ses->server, &rqst, cifs_readv_receive, cifs_readv_callback, NULL, rdata, 0, NULL); if (rc == 0) cifs_stats_inc(&tcon->stats.cifs_stats.num_reads); - else - kref_put(&rdata->refcount, cifs_readdata_release); - cifs_small_buf_release(smb); return rc; } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 2108b3b40ce9..b4f16ef62115 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2950,7 +2950,7 @@ static int cifs_writepages_region(struct address_space *mapping, continue; } - folio_batch_release(&fbatch); + folio_batch_release(&fbatch); 
cond_resched(); } while (wbc->nr_to_write > 0); @@ -3784,13 +3784,13 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) return written; } -static struct cifs_readdata *cifs_readdata_alloc(work_func_t complete) +static struct cifs_io_subrequest *cifs_readdata_alloc(work_func_t complete) { - struct cifs_readdata *rdata; + struct cifs_io_subrequest *rdata; rdata = kzalloc(sizeof(*rdata), GFP_KERNEL); if (rdata) { - kref_init(&rdata->refcount); + refcount_set(&rdata->subreq.ref, 1); INIT_LIST_HEAD(&rdata->list); init_completion(&rdata->done); INIT_WORK(&rdata->work, complete); @@ -3800,11 +3800,8 @@ static struct cifs_readdata *cifs_readdata_alloc(work_func_t complete) } void -cifs_readdata_release(struct kref *refcount) +cifs_readdata_release(struct cifs_io_subrequest *rdata) { - struct cifs_readdata *rdata = container_of(refcount, - struct cifs_readdata, refcount); - if (rdata->ctx) kref_put(&rdata->ctx->refcount, cifs_aio_ctx_release); #ifdef CONFIG_CIFS_SMB_DIRECT @@ -3824,16 +3821,16 @@ static void collect_uncached_read_data(struct cifs_aio_ctx *ctx); static void cifs_uncached_readv_complete(struct work_struct *work) { - struct cifs_readdata *rdata = container_of(work, - struct cifs_readdata, work); + struct cifs_io_subrequest *rdata = + container_of(work, struct cifs_io_subrequest, work); complete(&rdata->done); collect_uncached_read_data(rdata->ctx); /* the below call can possibly free the last ref to aio ctx */ - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } -static int cifs_resend_rdata(struct cifs_readdata *rdata, +static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, struct list_head *rdata_list, struct cifs_aio_ctx *ctx) { @@ -3901,7 +3898,7 @@ static int cifs_resend_rdata(struct cifs_readdata *rdata, } while (rc == -EAGAIN); fail: - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); return rc; } @@ -3910,7 +3907,7 @@ cifs_send_async_read(loff_t fpos, size_t len, struct 
cifsFileInfo *open_file, struct cifs_sb_info *cifs_sb, struct list_head *rdata_list, struct cifs_aio_ctx *ctx) { - struct cifs_readdata *rdata; + struct cifs_io_subrequest *rdata; unsigned int rsize, nsegs, max_segs = INT_MAX; struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; @@ -3978,7 +3975,7 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, rdata->ctx = ctx; kref_get(&ctx->refcount); - rdata->iter = ctx->iter; + rdata->iter = ctx->iter; iov_iter_truncate(&rdata->iter, cur_len); rc = adjust_credits(server, &rdata->credits, rdata->bytes); @@ -3992,7 +3989,7 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, if (rc) { add_credits_and_wake_if(server, &rdata->credits, 0); - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); if (rc == -EAGAIN) continue; break; @@ -4010,7 +4007,7 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, static void collect_uncached_read_data(struct cifs_aio_ctx *ctx) { - struct cifs_readdata *rdata, *tmp; + struct cifs_io_subrequest *rdata, *tmp; struct cifs_sb_info *cifs_sb; int rc; @@ -4056,8 +4053,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) rdata->cfile, cifs_sb, &tmp_list, ctx); - kref_put(&rdata->refcount, - cifs_readdata_release); + cifs_put_readdata(rdata); } list_splice(&tmp_list, &ctx->list); @@ -4073,7 +4069,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) ctx->total_len += rdata->got_bytes; } list_del_init(&rdata->list); - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } /* mask nodata case */ @@ -4445,8 +4441,8 @@ static void cifs_unlock_folios(struct address_space *mapping, pgoff_t first, pgo static void cifs_readahead_complete(struct work_struct *work) { - struct cifs_readdata *rdata = container_of(work, - struct cifs_readdata, work); + struct cifs_io_subrequest *rdata = container_of(work, + struct cifs_io_subrequest, 
work); struct folio *folio; pgoff_t last; bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes); @@ -4472,7 +4468,7 @@ static void cifs_readahead_complete(struct work_struct *work) } rcu_read_unlock(); - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } static void cifs_readahead(struct readahead_control *ractl) @@ -4512,7 +4508,7 @@ static void cifs_readahead(struct readahead_control *ractl) */ while ((nr_pages = ra_pages)) { unsigned int i, rsize; - struct cifs_readdata *rdata; + struct cifs_io_subrequest *rdata; struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; struct folio *folio; @@ -4631,11 +4627,11 @@ static void cifs_readahead(struct readahead_control *ractl) rdata->offset / PAGE_SIZE, (rdata->offset + rdata->bytes - 1) / PAGE_SIZE); /* Fallback to the readpage in error/reconnect cases */ - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); break; } - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } free_xid(xid); diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index 9aeecee6b91b..dc18130db9b3 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -4584,7 +4584,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, unsigned int cur_off; unsigned int cur_page_idx; unsigned int pad_len; - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; struct smb2_hdr *shdr = (struct smb2_hdr *)buf; int length; bool use_rdma_mr = false; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index c75a80bb6d9e..cc3d80a66869 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -23,6 +23,8 @@ #include #include #include +#include +#include #include "cifsglob.h" #include "cifsacl.h" #include "cifsproto.h" @@ -4070,7 +4072,7 @@ static inline bool smb3_use_rdma_offload(struct cifs_io_parms 
*io_parms) */ static int smb2_new_read_req(void **buf, unsigned int *total_len, - struct cifs_io_parms *io_parms, struct cifs_readdata *rdata, + struct cifs_io_parms *io_parms, struct cifs_io_subrequest *rdata, unsigned int remaining_bytes, int request_type) { int rc = -EACCES; @@ -4162,13 +4164,14 @@ smb2_new_read_req(void **buf, unsigned int *total_len, static void smb2_readv_callback(struct mid_q_entry *mid) { - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); struct TCP_Server_Info *server = rdata->server; struct smb2_hdr *shdr = (struct smb2_hdr *)rdata->iov[0].iov_base; struct cifs_credits credits = { .value = 0, .instance = 0 }; - struct smb_rqst rqst = { .rq_iov = &rdata->iov[1], .rq_nvec = 1 }; + struct smb_rqst rqst = { .rq_iov = &rdata->iov[1], + .rq_nvec = 1 }; if (rdata->got_bytes) { rqst.rq_iter = rdata->iter; @@ -4249,7 +4252,7 @@ smb2_readv_callback(struct mid_q_entry *mid) /* smb2_async_readv - send an async read, and set up mid to handle result */ int -smb2_async_readv(struct cifs_readdata *rdata) +smb2_async_readv(struct cifs_io_subrequest *rdata) { int rc, flags = 0; char *buf; @@ -4307,13 +4310,11 @@ smb2_async_readv(struct cifs_readdata *rdata) flags |= CIFS_HAS_CREDITS; } - kref_get(&rdata->refcount); rc = cifs_call_async(server, &rqst, cifs_readv_receive, smb2_readv_callback, smb3_handle_read_data, rdata, flags, &rdata->credits); if (rc) { - kref_put(&rdata->refcount, cifs_readdata_release); cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE); trace_smb3_read_err(0 /* xid */, io_parms.persistent_fid, io_parms.tcon->tid, diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h index 46eff9ec302a..02ffe5ec9b21 100644 --- a/fs/smb/client/smb2proto.h +++ b/fs/smb/client/smb2proto.h @@ -186,7 +186,7 @@ extern int SMB2_query_acl(const unsigned int xid, struct cifs_tcon *tcon, extern int SMB2_get_srv_num(const unsigned int xid, 
struct cifs_tcon *tcon, u64 persistent_fid, u64 volatile_fid, __le64 *uniqueid); -extern int smb2_async_readv(struct cifs_readdata *rdata); +extern int smb2_async_readv(struct cifs_io_subrequest *rdata); extern int SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, unsigned int *nbytes, char **buf, int *buf_type); extern int smb2_async_writev(struct cifs_writedata *wdata, diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index 14710afdc2a3..16d87867ef50 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -1686,7 +1686,7 @@ __cifs_readv_discard(struct TCP_Server_Info *server, struct mid_q_entry *mid, static int cifs_readv_discard(struct TCP_Server_Info *server, struct mid_q_entry *mid) { - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; return __cifs_readv_discard(server, mid, rdata->result); } @@ -1696,7 +1696,7 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid) { int length, len; unsigned int data_offset, data_len; - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; char *buf = server->smallbuf; unsigned int buflen = server->pdu_size + HEADER_PREAMBLE_SIZE(server); bool use_rdma_mr = false;

From patchwork Fri Oct 13 16:04:13 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421185
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula, linux-cachefs@redhat.com
Subject: [RFC PATCH 44/53] cifs: Share server EOF pos with netfslib
Date: Fri, 13 Oct 2023 17:04:13 +0100
Message-ID: <20231013160423.2218093-45-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
MIME-Version: 1.0
Use cifsi->netfs_ctx.remote_i_size instead of cifsi->server_eof so that netfslib can refer to it too.

Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsfs.c   |  2 +-
 fs/smb/client/cifsglob.h |  1 -
 fs/smb/client/file.c     |  8 ++++----
 fs/smb/client/inode.c    |  6 +++---
 fs/smb/client/smb2ops.c  | 10 +++++-----
 5 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c index 22869cda1356..85799e9e0f4c 100644 --- a/fs/smb/client/cifsfs.c +++ b/fs/smb/client/cifsfs.c @@ -395,7 +395,7 @@ cifs_alloc_inode(struct super_block *sb) spin_lock_init(&cifs_inode->writers_lock); cifs_inode->writers = 0; cifs_inode->netfs.inode.i_blkbits = 14; /* 2**14 = CIFS_MAX_MSGSIZE */ - cifs_inode->server_eof = 0; + cifs_inode->netfs.remote_i_size = 0; cifs_inode->uniqueid = 0; cifs_inode->createtime = 0; cifs_inode->epoch = 0; diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 22fa98428845..1943d035b8d3 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1527,7 +1527,6 @@ struct cifsInodeInfo { spinlock_t writers_lock; unsigned int writers; /* Number of writers on this inode */ unsigned long time; /* jiffies of
last update of inode */ - u64 server_eof; /* current file size on server -- protected by i_lock */ u64 uniqueid; /* server inode number */ u64 createtime; /* creation time on server */ __u8 lease_key[SMB2_LEASE_KEY_SIZE]; /* lease key for this inode */ diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index b4f16ef62115..0383ce61ac35 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2117,8 +2117,8 @@ cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, { loff_t end_of_write = offset + bytes_written; - if (end_of_write > cifsi->server_eof) - cifsi->server_eof = end_of_write; + if (end_of_write > cifsi->netfs.remote_i_size) + netfs_resize_file(&cifsi->netfs, end_of_write); } static ssize_t @@ -3246,8 +3246,8 @@ cifs_uncached_writev_complete(struct work_struct *work) spin_lock(&inode->i_lock); cifs_update_eof(cifsi, wdata->offset, wdata->bytes); - if (cifsi->server_eof > inode->i_size) - i_size_write(inode, cifsi->server_eof); + if (cifsi->netfs.remote_i_size > inode->i_size) + i_size_write(inode, cifsi->netfs.remote_i_size); spin_unlock(&inode->i_lock); complete(&wdata->done); diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c index d7c302442c1e..6815b50ec56c 100644 --- a/fs/smb/client/inode.c +++ b/fs/smb/client/inode.c @@ -102,7 +102,7 @@ cifs_revalidate_cache(struct inode *inode, struct cifs_fattr *fattr) /* revalidate if mtime or size have changed */ fattr->cf_mtime = timestamp_truncate(fattr->cf_mtime, inode); if (timespec64_equal(&inode->i_mtime, &fattr->cf_mtime) && - cifs_i->server_eof == fattr->cf_eof) { + cifs_i->netfs.remote_i_size == fattr->cf_eof) { cifs_dbg(FYI, "%s: inode %llu is unchanged\n", __func__, cifs_i->uniqueid); return; @@ -191,7 +191,7 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) else clear_bit(CIFS_INO_DELETE_PENDING, &cifs_i->flags); - cifs_i->server_eof = fattr->cf_eof; + cifs_i->netfs.remote_i_size = fattr->cf_eof; /* * Can't safely change the file size here if the client is 
writing to * it due to potential races. @@ -2776,7 +2776,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, set_size_out: if (rc == 0) { - cifsInode->server_eof = attrs->ia_size; + netfs_resize_file(&cifsInode->netfs, attrs->ia_size); cifs_setsize(inode, attrs->ia_size); /* * i_blocks is not related to (i_size / i_blksize), but instead diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index dc18130db9b3..e7f765673246 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -3554,7 +3554,7 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon, rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid, cfile->fid.volatile_fid, cfile->pid, &eof); if (rc == 0) { - cifsi->server_eof = off + len; + netfs_resize_file(&cifsi->netfs, off + len); cifs_setsize(inode, off + len); cifs_truncate_page(inode->i_mapping, inode->i_size); truncate_setsize(inode, off + len); @@ -3646,8 +3646,8 @@ static long smb3_collapse_range(struct file *file, struct cifs_tcon *tcon, int rc; unsigned int xid; struct inode *inode = file_inode(file); - struct cifsFileInfo *cfile = file->private_data; struct cifsInodeInfo *cifsi = CIFS_I(inode); + struct cifsFileInfo *cfile = file->private_data; __le64 eof; loff_t old_eof; @@ -3682,9 +3682,9 @@ static long smb3_collapse_range(struct file *file, struct cifs_tcon *tcon, rc = 0; - cifsi->server_eof = i_size_read(inode) - len; - truncate_setsize(inode, cifsi->server_eof); - fscache_resize_cookie(cifs_inode_cookie(inode), cifsi->server_eof); + netfs_resize_file(&cifsi->netfs, eof); + truncate_setsize(inode, eof); + fscache_resize_cookie(cifs_inode_cookie(inode), eof); out_2: filemap_invalidate_unlock(inode->i_mapping); out:

From patchwork Fri Oct 13 16:04:14 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421187
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula, linux-cachefs@redhat.com
Subject: [RFC PATCH 45/53] cifs: Replace cifs_writedata with a wrapper around netfs_io_subrequest
Date: Fri, 13 Oct 2023 17:04:14 +0100
Message-ID: <20231013160423.2218093-46-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
MIME-Version: 1.0

Replace the cifs_writedata struct with the same wrapper around netfs_io_subrequest that was used to replace cifs_readdata.
Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsglob.h  | 30 +++------------
 fs/smb/client/cifsproto.h | 16 ++++++--
 fs/smb/client/cifssmb.c   |  9 ++---
 fs/smb/client/file.c      | 79 ++++++++++++++++-----------------
 fs/smb/client/smb2pdu.c   |  9 ++---
 fs/smb/client/smb2proto.h |  3 +-
 6 files changed, 58 insertions(+), 88 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 1943d035b8d3..0b1835751bda 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -238,7 +238,6 @@ struct cifs_fattr; struct smb3_fs_context; struct cifs_fid; struct cifs_io_subrequest; -struct cifs_writedata; struct cifs_io_parms; struct cifs_search_info; struct cifsInodeInfo; @@ -413,8 +412,7 @@ struct smb_version_operations { /* async read from the server */ int (*async_readv)(struct cifs_io_subrequest *); /* async write to the server */ - int (*async_writev)(struct cifs_writedata *, - void (*release)(struct kref *)); + int (*async_writev)(struct cifs_io_subrequest *); /* sync read from the server */ int (*sync_read)(const unsigned int, struct cifs_fid *, struct cifs_io_parms *, unsigned int *, char **, @@ -1438,35 +1436,17 @@ struct cifs_io_subrequest { #endif struct cifs_credits credits; - // TODO: Remove following elements - struct list_head list; - struct completion done; - struct work_struct work; - struct iov_iter iter; - __u64 offset; - unsigned int bytes; -}; + enum writeback_sync_modes sync_mode; + bool uncached; + struct bio_vec *bv; -/* asynchronous write support */ -struct cifs_writedata { - struct kref refcount; + // TODO: Remove following elements struct list_head list; struct completion done; - enum writeback_sync_modes sync_mode; struct work_struct work; - struct cifsFileInfo *cfile; - struct cifs_aio_ctx *ctx; struct iov_iter iter; - struct
bio_vec *bv; __u64 offset; - pid_t pid; unsigned int bytes; - int result; - struct TCP_Server_Info *server; -#ifdef CONFIG_CIFS_SMB_DIRECT - struct smbd_mr *mr; -#endif - struct cifs_credits credits; }; /* diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 7748fe148fb4..561dac1576a5 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -589,11 +589,19 @@ static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid); -int cifs_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)); +int cifs_async_writev(struct cifs_io_subrequest *wdata); void cifs_writev_complete(struct work_struct *work); -struct cifs_writedata *cifs_writedata_alloc(work_func_t complete); -void cifs_writedata_release(struct kref *refcount); +struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete); +void cifs_writedata_release(struct cifs_io_subrequest *rdata); +static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata) +{ + refcount_inc(&wdata->subreq.ref); +} +static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata) +{ + if (refcount_dec_and_test(&wdata->subreq.ref)) + cifs_writedata_release(wdata); +} int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_sb_info *cifs_sb, const unsigned char *path, char *pbuf, diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 76005b3d5ffe..14fca3fa3e08 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1610,7 +1610,7 @@ CIFSSMBWrite(const unsigned int xid, struct cifs_io_parms *io_parms, static void cifs_writev_callback(struct mid_q_entry *mid) { - struct cifs_writedata *wdata = mid->callback_data; + struct cifs_io_subrequest *wdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); unsigned int written; WRITE_RSP *smb 
= (WRITE_RSP *)mid->resp_buf; @@ -1655,8 +1655,7 @@ cifs_writev_callback(struct mid_q_entry *mid) /* cifs_async_writev - send an async write, and set up mid to handle result */ int -cifs_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)) +cifs_async_writev(struct cifs_io_subrequest *wdata) { int rc = -EACCES; WRITE_REQ *smb = NULL; @@ -1723,14 +1722,14 @@ cifs_async_writev(struct cifs_writedata *wdata, iov[1].iov_len += 4; /* pad bigger by four bytes */ } - kref_get(&wdata->refcount); + cifs_get_writedata(wdata); rc = cifs_call_async(tcon->ses->server, &rqst, NULL, cifs_writev_callback, NULL, wdata, 0, NULL); if (rc == 0) cifs_stats_inc(&tcon->stats.cifs_stats.num_writes); else - kref_put(&wdata->refcount, release); + cifs_put_writedata(wdata); async_writev_out: cifs_small_buf_release(smb); diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 0383ce61ac35..c192a38b1b7c 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2410,10 +2410,10 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name, } void -cifs_writedata_release(struct kref *refcount) +cifs_writedata_release(struct cifs_io_subrequest *wdata) { - struct cifs_writedata *wdata = container_of(refcount, - struct cifs_writedata, refcount); + if (wdata->uncached) + kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release); #ifdef CONFIG_CIFS_SMB_DIRECT if (wdata->mr) { smbd_deregister_mr(wdata->mr); @@ -2432,7 +2432,7 @@ cifs_writedata_release(struct kref *refcount) * possible that the page was redirtied so re-clean the page. 
*/ static void -cifs_writev_requeue(struct cifs_writedata *wdata) +cifs_writev_requeue(struct cifs_io_subrequest *wdata) { int rc = 0; struct inode *inode = d_inode(wdata->cfile->dentry); @@ -2442,7 +2442,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) server = tlink_tcon(wdata->cfile->tlink)->ses->server; do { - struct cifs_writedata *wdata2; + struct cifs_io_subrequest *wdata2; unsigned int wsize, cur_len; wsize = server->ops->wp_retry_size(inode); @@ -2465,7 +2465,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) wdata2->sync_mode = wdata->sync_mode; wdata2->offset = fpos; wdata2->bytes = cur_len; - wdata2->iter = wdata->iter; + wdata2->iter = wdata->iter; iov_iter_advance(&wdata2->iter, fpos - wdata->offset); iov_iter_truncate(&wdata2->iter, wdata2->bytes); @@ -2487,11 +2487,10 @@ cifs_writev_requeue(struct cifs_writedata *wdata) rc = -EBADF; } else { wdata2->pid = wdata2->cfile->pid; - rc = server->ops->async_writev(wdata2, - cifs_writedata_release); + rc = server->ops->async_writev(wdata2); } - kref_put(&wdata2->refcount, cifs_writedata_release); + cifs_put_writedata(wdata2); if (rc) { if (is_retryable_error(rc)) continue; @@ -2510,14 +2509,14 @@ cifs_writev_requeue(struct cifs_writedata *wdata) if (rc != 0 && !is_retryable_error(rc)) mapping_set_error(inode->i_mapping, rc); - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); } void cifs_writev_complete(struct work_struct *work) { - struct cifs_writedata *wdata = container_of(work, - struct cifs_writedata, work); + struct cifs_io_subrequest *wdata = container_of(work, + struct cifs_io_subrequest, work); struct inode *inode = d_inode(wdata->cfile->dentry); if (wdata->result == 0) { @@ -2538,16 +2537,16 @@ cifs_writev_complete(struct work_struct *work) if (wdata->result != -EAGAIN) mapping_set_error(inode->i_mapping, wdata->result); - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); } -struct cifs_writedata 
*cifs_writedata_alloc(work_func_t complete) +struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete) { - struct cifs_writedata *wdata; + struct cifs_io_subrequest *wdata; wdata = kzalloc(sizeof(*wdata), GFP_NOFS); if (wdata != NULL) { - kref_init(&wdata->refcount); + refcount_set(&wdata->subreq.ref, 1); INIT_LIST_HEAD(&wdata->list); init_completion(&wdata->done); INIT_WORK(&wdata->work, complete); @@ -2729,7 +2728,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, { struct inode *inode = mapping->host; struct TCP_Server_Info *server; - struct cifs_writedata *wdata; + struct cifs_io_subrequest *wdata; struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; @@ -2822,10 +2821,9 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, if (wdata->cfile->invalidHandle) rc = -EAGAIN; else - rc = wdata->server->ops->async_writev(wdata, - cifs_writedata_release); + rc = wdata->server->ops->async_writev(wdata); if (rc >= 0) { - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); goto err_close; } } else { @@ -2835,7 +2833,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, } err_wdata: - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); err_uncredit: add_credits_and_wake_if(server, credits, 0); err_close: @@ -3224,23 +3222,13 @@ int cifs_flush(struct file *file, fl_owner_t id) return rc; } -static void -cifs_uncached_writedata_release(struct kref *refcount) -{ - struct cifs_writedata *wdata = container_of(refcount, - struct cifs_writedata, refcount); - - kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release); - cifs_writedata_release(refcount); -} - static void collect_uncached_write_data(struct cifs_aio_ctx *ctx); static void cifs_uncached_writev_complete(struct work_struct *work) { - struct cifs_writedata *wdata = 
container_of(work, - struct cifs_writedata, work); + struct cifs_io_subrequest *wdata = container_of(work, + struct cifs_io_subrequest, work); struct inode *inode = d_inode(wdata->cfile->dentry); struct cifsInodeInfo *cifsi = CIFS_I(inode); @@ -3253,11 +3241,11 @@ cifs_uncached_writev_complete(struct work_struct *work) complete(&wdata->done); collect_uncached_write_data(wdata->ctx); /* the below call can possibly free the last ref to aio ctx */ - kref_put(&wdata->refcount, cifs_uncached_writedata_release); + cifs_put_writedata(wdata); } static int -cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list, +cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list, struct cifs_aio_ctx *ctx) { unsigned int wsize; @@ -3306,8 +3294,7 @@ cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list, wdata->mr = NULL; } #endif - rc = server->ops->async_writev(wdata, - cifs_uncached_writedata_release); + rc = server->ops->async_writev(wdata); } } @@ -3322,7 +3309,7 @@ cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list, } while (rc == -EAGAIN); fail: - kref_put(&wdata->refcount, cifs_uncached_writedata_release); + cifs_put_writedata(wdata); return rc; } @@ -3374,7 +3361,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, { int rc = 0; size_t cur_len, max_len; - struct cifs_writedata *wdata; + struct cifs_io_subrequest *wdata; pid_t pid; struct TCP_Server_Info *server; unsigned int xid, max_segs = INT_MAX; @@ -3438,6 +3425,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, break; } + wdata->uncached = true; wdata->sync_mode = WB_SYNC_ALL; wdata->offset = (__u64)fpos; wdata->cfile = cifsFileInfo_get(open_file); @@ -3457,14 +3445,12 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, if (wdata->cfile->invalidHandle) rc = -EAGAIN; else - rc = server->ops->async_writev(wdata, - cifs_uncached_writedata_release); + rc = 
server->ops->async_writev(wdata); } if (rc) { add_credits_and_wake_if(server, &wdata->credits, 0); - kref_put(&wdata->refcount, - cifs_uncached_writedata_release); + cifs_put_writedata(wdata); if (rc == -EAGAIN) continue; break; @@ -3482,7 +3468,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) { - struct cifs_writedata *wdata, *tmp; + struct cifs_io_subrequest *wdata, *tmp; struct cifs_tcon *tcon; struct cifs_sb_info *cifs_sb; struct dentry *dentry = ctx->cfile->dentry; @@ -3537,8 +3523,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) ctx->cfile, cifs_sb, &tmp_list, ctx); - kref_put(&wdata->refcount, - cifs_uncached_writedata_release); + cifs_put_writedata(wdata); } list_splice(&tmp_list, &ctx->list); @@ -3546,7 +3531,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) } } list_del_init(&wdata->list); - kref_put(&wdata->refcount, cifs_uncached_writedata_release); + cifs_put_writedata(wdata); } cifs_stats_bytes_written(tcon, ctx->total_len); diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index cc3d80a66869..4f98631f2cf4 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4415,7 +4415,7 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, static void smb2_writev_callback(struct mid_q_entry *mid) { - struct cifs_writedata *wdata = mid->callback_data; + struct cifs_io_subrequest *wdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); struct TCP_Server_Info *server = wdata->server; unsigned int written; @@ -4496,8 +4496,7 @@ smb2_writev_callback(struct mid_q_entry *mid) /* smb2_async_writev - send an async write, and set up mid to handle result */ int -smb2_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)) +smb2_async_writev(struct cifs_io_subrequest *wdata) { int rc = -EACCES, flags = 0; struct smb2_write_req *req = NULL; @@ 
-4629,7 +4628,7 @@ smb2_async_writev(struct cifs_writedata *wdata, flags |= CIFS_HAS_CREDITS; } - kref_get(&wdata->refcount); + cifs_get_writedata(wdata); rc = cifs_call_async(server, &rqst, NULL, smb2_writev_callback, NULL, wdata, flags, &wdata->credits); @@ -4641,7 +4640,7 @@ smb2_async_writev(struct cifs_writedata *wdata, io_parms->offset, io_parms->length, rc); - kref_put(&wdata->refcount, release); + cifs_put_writedata(wdata); cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); } diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h index 02ffe5ec9b21..4d3d51e42d3c 100644 --- a/fs/smb/client/smb2proto.h +++ b/fs/smb/client/smb2proto.h @@ -189,8 +189,7 @@ extern int SMB2_get_srv_num(const unsigned int xid, struct cifs_tcon *tcon, extern int smb2_async_readv(struct cifs_io_subrequest *rdata); extern int SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, unsigned int *nbytes, char **buf, int *buf_type); -extern int smb2_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)); +extern int smb2_async_writev(struct cifs_io_subrequest *wdata); extern int SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms, unsigned int *nbytes, struct kvec *iov, int n_vec); extern int SMB2_echo(struct TCP_Server_Info *server);

From patchwork Fri Oct 13 16:04:15 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421188
From: David Howells
To: Jeff Layton , Steve French
Cc: David Howells , Matthew Wilcox , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Dominique Martinet , Ilya Dryomov , Christian Brauner , linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula , linux-cachefs@redhat.com
Subject: [RFC PATCH 46/53] cifs: Use more fields from netfs_io_subrequest
Date: Fri, 13 Oct 2023 17:04:15 +0100
Message-ID: <20231013160423.2218093-47-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Use more fields from netfs_io_subrequest instead of those incorporated into cifs_io_subrequest from cifs_readdata and cifs_writedata.

Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsglob.h  |   3 -
 fs/smb/client/cifssmb.c   |  52 +++++++++---------
 fs/smb/client/file.c      | 112 +++++++++++++++++++-------------
 fs/smb/client/smb2ops.c   |   4 +-
 fs/smb/client/smb2pdu.c   |  52 +++++++++---------
 fs/smb/client/transport.c |   6 +-
 6 files changed, 113 insertions(+), 116 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 0b1835751bda..c7f04f9853c5 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1444,9 +1444,6 @@ struct cifs_io_subrequest { struct list_head list; struct completion done; struct work_struct work; - struct iov_iter iter; - __u64 offset; - unsigned int bytes; }; /* diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 14fca3fa3e08..112a5a2d95b8 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1267,12 +1267,12 @@ cifs_readv_callback(struct mid_q_entry *mid) struct TCP_Server_Info *server = tcon->ses->server; struct smb_rqst
rqst = { .rq_iov = rdata->iov, .rq_nvec = 2, - .rq_iter = rdata->iter }; + .rq_iter = rdata->subreq.io_iter }; struct cifs_credits credits = { .value = 1, .instance = 0 }; - cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%u\n", + cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n", __func__, mid->mid, mid->mid_state, rdata->result, - rdata->bytes); + rdata->subreq.len); switch (mid->mid_state) { case MID_RESPONSE_RECEIVED: @@ -1320,14 +1320,14 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2 }; - cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", - __func__, rdata->offset, rdata->bytes); + cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n", + __func__, rdata->subreq.start, rdata->subreq.len); if (tcon->ses->capabilities & CAP_LARGE_FILES) wct = 12; else { wct = 10; /* old style read */ - if ((rdata->offset >> 32) > 0) { + if ((rdata->subreq.start >> 32) > 0) { /* can not handle this big offset for old */ return -EIO; } @@ -1342,12 +1342,12 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) smb->AndXCommand = 0xFF; /* none */ smb->Fid = rdata->cfile->fid.netfid; - smb->OffsetLow = cpu_to_le32(rdata->offset & 0xFFFFFFFF); + smb->OffsetLow = cpu_to_le32(rdata->subreq.start & 0xFFFFFFFF); if (wct == 12) - smb->OffsetHigh = cpu_to_le32(rdata->offset >> 32); + smb->OffsetHigh = cpu_to_le32(rdata->subreq.start >> 32); smb->Remaining = 0; - smb->MaxCount = cpu_to_le16(rdata->bytes & 0xFFFF); - smb->MaxCountHigh = cpu_to_le32(rdata->bytes >> 16); + smb->MaxCount = cpu_to_le16(rdata->subreq.len & 0xFFFF); + smb->MaxCountHigh = cpu_to_le32(rdata->subreq.len >> 16); if (wct == 12) smb->ByteCount = 0; else { @@ -1631,13 +1631,13 @@ cifs_writev_callback(struct mid_q_entry *mid) * client. OS/2 servers are known to set incorrect * CountHigh values. 
*/ - if (written > wdata->bytes) + if (written > wdata->subreq.len) written &= 0xFFFF; - if (written < wdata->bytes) + if (written < wdata->subreq.len) wdata->result = -ENOSPC; else - wdata->bytes = written; + wdata->subreq.len = written; break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: @@ -1668,7 +1668,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) wct = 14; } else { wct = 12; - if (wdata->offset >> 32 > 0) { + if (wdata->subreq.start >> 32 > 0) { /* can not handle big offset for old srv */ return -EIO; } @@ -1683,9 +1683,9 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) smb->AndXCommand = 0xFF; /* none */ smb->Fid = wdata->cfile->fid.netfid; - smb->OffsetLow = cpu_to_le32(wdata->offset & 0xFFFFFFFF); + smb->OffsetLow = cpu_to_le32(wdata->subreq.start & 0xFFFFFFFF); if (wct == 14) - smb->OffsetHigh = cpu_to_le32(wdata->offset >> 32); + smb->OffsetHigh = cpu_to_le32(wdata->subreq.start >> 32); smb->Reserved = 0xFFFFFFFF; smb->WriteMode = 0; smb->Remaining = 0; @@ -1701,24 +1701,24 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) rqst.rq_iov = iov; rqst.rq_nvec = 2; - rqst.rq_iter = wdata->iter; - rqst.rq_iter_size = iov_iter_count(&wdata->iter); + rqst.rq_iter = wdata->subreq.io_iter; + rqst.rq_iter_size = iov_iter_count(&wdata->subreq.io_iter); - cifs_dbg(FYI, "async write at %llu %u bytes\n", - wdata->offset, wdata->bytes); + cifs_dbg(FYI, "async write at %llu %zu bytes\n", + wdata->subreq.start, wdata->subreq.len); - smb->DataLengthLow = cpu_to_le16(wdata->bytes & 0xFFFF); - smb->DataLengthHigh = cpu_to_le16(wdata->bytes >> 16); + smb->DataLengthLow = cpu_to_le16(wdata->subreq.len & 0xFFFF); + smb->DataLengthHigh = cpu_to_le16(wdata->subreq.len >> 16); if (wct == 14) { - inc_rfc1001_len(&smb->hdr, wdata->bytes + 1); - put_bcc(wdata->bytes + 1, &smb->hdr); + inc_rfc1001_len(&smb->hdr, wdata->subreq.len + 1); + put_bcc(wdata->subreq.len + 1, &smb->hdr); } else { /* wct == 12 */ struct smb_com_writex_req *smbw = (struct 
smb_com_writex_req *)smb; - inc_rfc1001_len(&smbw->hdr, wdata->bytes + 5); - put_bcc(wdata->bytes + 5, &smbw->hdr); + inc_rfc1001_len(&smbw->hdr, wdata->subreq.len + 5); + put_bcc(wdata->subreq.len + 5, &smbw->hdr); iov[1].iov_len += 4; /* pad bigger by four bytes */ } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index c192a38b1b7c..c70d106a413f 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2437,8 +2437,8 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata) int rc = 0; struct inode *inode = d_inode(wdata->cfile->dentry); struct TCP_Server_Info *server; - unsigned int rest_len = wdata->bytes; - loff_t fpos = wdata->offset; + unsigned int rest_len = wdata->subreq.len; + loff_t fpos = wdata->subreq.start; server = tlink_tcon(wdata->cfile->tlink)->ses->server; do { @@ -2463,14 +2463,14 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata) } wdata2->sync_mode = wdata->sync_mode; - wdata2->offset = fpos; - wdata2->bytes = cur_len; - wdata2->iter = wdata->iter; + wdata2->subreq.start = fpos; + wdata2->subreq.len = cur_len; + wdata2->subreq.io_iter = wdata->subreq.io_iter; - iov_iter_advance(&wdata2->iter, fpos - wdata->offset); - iov_iter_truncate(&wdata2->iter, wdata2->bytes); + iov_iter_advance(&wdata2->subreq.io_iter, fpos - wdata->subreq.start); + iov_iter_truncate(&wdata2->subreq.io_iter, wdata2->subreq.len); - if (iov_iter_is_xarray(&wdata2->iter)) + if (iov_iter_is_xarray(&wdata2->subreq.io_iter)) /* Check for pages having been redirtied and clean * them. We can do this by walking the xarray. 
If * it's not an xarray, then it's a DIO and we shouldn't @@ -2504,7 +2504,7 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata) } while (rest_len > 0); /* Clean up remaining pages from the original wdata */ - if (iov_iter_is_xarray(&wdata->iter)) + if (iov_iter_is_xarray(&wdata->subreq.io_iter)) cifs_pages_write_failed(inode, fpos, rest_len); if (rc != 0 && !is_retryable_error(rc)) @@ -2521,19 +2521,19 @@ cifs_writev_complete(struct work_struct *work) if (wdata->result == 0) { spin_lock(&inode->i_lock); - cifs_update_eof(CIFS_I(inode), wdata->offset, wdata->bytes); + cifs_update_eof(CIFS_I(inode), wdata->subreq.start, wdata->subreq.len); spin_unlock(&inode->i_lock); cifs_stats_bytes_written(tlink_tcon(wdata->cfile->tlink), - wdata->bytes); + wdata->subreq.len); } else if (wdata->sync_mode == WB_SYNC_ALL && wdata->result == -EAGAIN) return cifs_writev_requeue(wdata); if (wdata->result == -EAGAIN) - cifs_pages_write_redirty(inode, wdata->offset, wdata->bytes); + cifs_pages_write_redirty(inode, wdata->subreq.start, wdata->subreq.len); else if (wdata->result < 0) - cifs_pages_write_failed(inode, wdata->offset, wdata->bytes); + cifs_pages_write_failed(inode, wdata->subreq.start, wdata->subreq.len); else - cifs_pages_written_back(inode, wdata->offset, wdata->bytes); + cifs_pages_written_back(inode, wdata->subreq.start, wdata->subreq.len); if (wdata->result != -EAGAIN) mapping_set_error(inode->i_mapping, wdata->result); @@ -2767,7 +2767,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, } wdata->sync_mode = wbc->sync_mode; - wdata->offset = folio_pos(folio); + wdata->subreq.start = folio_pos(folio); wdata->pid = cfile->pid; wdata->credits = credits_on_stack; wdata->cfile = cfile; @@ -2802,7 +2802,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, len = min_t(loff_t, len, max_len); } - wdata->bytes = len; + wdata->subreq.len = len; /* We now have a contiguous set of dirty pages, each with writeback * 
set; the first page is still locked at this point, but all the rest @@ -2811,10 +2811,10 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, folio_unlock(folio); if (start < i_size) { - iov_iter_xarray(&wdata->iter, ITER_SOURCE, &mapping->i_pages, + iov_iter_xarray(&wdata->subreq.io_iter, ITER_SOURCE, &mapping->i_pages, start, len); - rc = adjust_credits(wdata->server, &wdata->credits, wdata->bytes); + rc = adjust_credits(wdata->server, &wdata->credits, wdata->subreq.len); if (rc) goto err_wdata; @@ -3233,7 +3233,7 @@ cifs_uncached_writev_complete(struct work_struct *work) struct cifsInodeInfo *cifsi = CIFS_I(inode); spin_lock(&inode->i_lock); - cifs_update_eof(cifsi, wdata->offset, wdata->bytes); + cifs_update_eof(cifsi, wdata->subreq.start, wdata->subreq.len); if (cifsi->netfs.remote_i_size > inode->i_size) i_size_write(inode, cifsi->netfs.remote_i_size); spin_unlock(&inode->i_lock); @@ -3269,19 +3269,19 @@ cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list * segments */ do { - rc = server->ops->wait_mtu_credits(server, wdata->bytes, + rc = server->ops->wait_mtu_credits(server, wdata->subreq.len, &wsize, &credits); if (rc) goto fail; - if (wsize < wdata->bytes) { + if (wsize < wdata->subreq.len) { add_credits_and_wake_if(server, &credits, 0); msleep(1000); } - } while (wsize < wdata->bytes); + } while (wsize < wdata->subreq.len); wdata->credits = credits; - rc = adjust_credits(server, &wdata->credits, wdata->bytes); + rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); if (!rc) { if (wdata->cfile->invalidHandle) @@ -3427,19 +3427,19 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, wdata->uncached = true; wdata->sync_mode = WB_SYNC_ALL; - wdata->offset = (__u64)fpos; + wdata->subreq.start = (__u64)fpos; wdata->cfile = cifsFileInfo_get(open_file); wdata->server = server; wdata->pid = pid; - wdata->bytes = cur_len; + wdata->subreq.len = cur_len; wdata->credits = 
credits_on_stack; - wdata->iter = *from; + wdata->subreq.io_iter = *from; wdata->ctx = ctx; kref_get(&ctx->refcount); - iov_iter_truncate(&wdata->iter, cur_len); + iov_iter_truncate(&wdata->subreq.io_iter, cur_len); - rc = adjust_credits(server, &wdata->credits, wdata->bytes); + rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); if (!rc) { if (wdata->cfile->invalidHandle) @@ -3501,7 +3501,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) if (wdata->result) rc = wdata->result; else - ctx->total_len += wdata->bytes; + ctx->total_len += wdata->subreq.len; /* resend call if it's a retryable error */ if (rc == -EAGAIN) { @@ -3516,10 +3516,10 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) wdata, &tmp_list, ctx); else { iov_iter_advance(&tmp_from, - wdata->offset - ctx->pos); + wdata->subreq.start - ctx->pos); - rc = cifs_write_from_iter(wdata->offset, - wdata->bytes, &tmp_from, + rc = cifs_write_from_iter(wdata->subreq.start, + wdata->subreq.len, &tmp_from, ctx->cfile, cifs_sb, &tmp_list, ctx); @@ -3842,20 +3842,20 @@ static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, * segments */ do { - rc = server->ops->wait_mtu_credits(server, rdata->bytes, + rc = server->ops->wait_mtu_credits(server, rdata->subreq.len, &rsize, &credits); if (rc) goto fail; - if (rsize < rdata->bytes) { + if (rsize < rdata->subreq.len) { add_credits_and_wake_if(server, &credits, 0); msleep(1000); } - } while (rsize < rdata->bytes); + } while (rsize < rdata->subreq.len); rdata->credits = credits; - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (!rc) { if (rdata->cfile->invalidHandle) rc = -EAGAIN; @@ -3953,17 +3953,17 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, rdata->server = server; rdata->cfile = cifsFileInfo_get(open_file); - rdata->offset = fpos; - rdata->bytes = cur_len; + rdata->subreq.start = fpos; + 
rdata->subreq.len = cur_len; rdata->pid = pid; rdata->credits = credits_on_stack; rdata->ctx = ctx; kref_get(&ctx->refcount); - rdata->iter = ctx->iter; - iov_iter_truncate(&rdata->iter, cur_len); + rdata->subreq.io_iter = ctx->iter; + iov_iter_truncate(&rdata->subreq.io_iter, cur_len); - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (!rc) { if (rdata->cfile->invalidHandle) @@ -4033,8 +4033,8 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) &tmp_list, ctx); } else { rc = cifs_send_async_read( - rdata->offset + got_bytes, - rdata->bytes - got_bytes, + rdata->subreq.start + got_bytes, + rdata->subreq.len - got_bytes, rdata->cfile, cifs_sb, &tmp_list, ctx); @@ -4048,7 +4048,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) rc = rdata->result; /* if there was a short read -- discard anything left */ - if (rdata->got_bytes && rdata->got_bytes < rdata->bytes) + if (rdata->got_bytes && rdata->got_bytes < rdata->subreq.len) rc = -ENODATA; ctx->total_len += rdata->got_bytes; @@ -4432,16 +4432,16 @@ static void cifs_readahead_complete(struct work_struct *work) pgoff_t last; bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes); - XA_STATE(xas, &rdata->mapping->i_pages, rdata->offset / PAGE_SIZE); + XA_STATE(xas, &rdata->mapping->i_pages, rdata->subreq.start / PAGE_SIZE); if (good) cifs_readahead_to_fscache(rdata->mapping->host, - rdata->offset, rdata->bytes); + rdata->subreq.start, rdata->subreq.len); - if (iov_iter_count(&rdata->iter) > 0) - iov_iter_zero(iov_iter_count(&rdata->iter), &rdata->iter); + if (iov_iter_count(&rdata->subreq.io_iter) > 0) + iov_iter_zero(iov_iter_count(&rdata->subreq.io_iter), &rdata->subreq.io_iter); - last = (rdata->offset + rdata->bytes - 1) / PAGE_SIZE; + last = (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE; rcu_read_lock(); xas_for_each(&xas, folio, last) { @@ -4580,8 +4580,8 @@ static void 
cifs_readahead(struct readahead_control *ractl) break; } - rdata->offset = ra_index * PAGE_SIZE; - rdata->bytes = nr_pages * PAGE_SIZE; + rdata->subreq.start = ra_index * PAGE_SIZE; + rdata->subreq.len = nr_pages * PAGE_SIZE; rdata->cfile = cifsFileInfo_get(open_file); rdata->server = server; rdata->mapping = ractl->mapping; @@ -4595,10 +4595,10 @@ static void cifs_readahead(struct readahead_control *ractl) ra_pages -= nr_pages; ra_index += nr_pages; - iov_iter_xarray(&rdata->iter, ITER_DEST, &rdata->mapping->i_pages, - rdata->offset, rdata->bytes); + iov_iter_xarray(&rdata->subreq.io_iter, ITER_DEST, &rdata->mapping->i_pages, + rdata->subreq.start, rdata->subreq.len); - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (!rc) { if (rdata->cfile->invalidHandle) rc = -EAGAIN; @@ -4609,8 +4609,8 @@ static void cifs_readahead(struct readahead_control *ractl) if (rc) { add_credits_and_wake_if(server, &rdata->credits, 0); cifs_unlock_folios(rdata->mapping, - rdata->offset / PAGE_SIZE, - (rdata->offset + rdata->bytes - 1) / PAGE_SIZE); + rdata->subreq.start / PAGE_SIZE, + (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE); /* Fallback to the readpage in error/reconnect cases */ cifs_put_readdata(rdata); break; diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index e7f765673246..bb1e8415bcf3 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -4686,7 +4686,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, /* Copy the data to the output I/O iterator. 
*/ rdata->result = cifs_copy_pages_to_iter(pages, pages_len, - cur_off, &rdata->iter); + cur_off, &rdata->subreq.io_iter); if (rdata->result != 0) { if (is_offloaded) mid->mid_state = MID_RESPONSE_MALFORMED; @@ -4700,7 +4700,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, /* read response payload is in buf */ WARN_ONCE(pages && !xa_empty(pages), "read data can be either in buf or in pages"); - length = copy_to_iter(buf + data_offset, data_len, &rdata->iter); + length = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter); if (length < 0) return length; rdata->got_bytes = data_len; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 4f98631f2cf4..4fde3d506c60 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4113,7 +4113,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len, struct smbd_buffer_descriptor_v1 *v1; bool need_invalidate = server->dialect == SMB30_PROT_ID; - rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->iter, + rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->subreq.io_iter, true, need_invalidate); if (!rdata->mr) return -EAGAIN; @@ -4174,17 +4174,17 @@ smb2_readv_callback(struct mid_q_entry *mid) .rq_nvec = 1 }; if (rdata->got_bytes) { - rqst.rq_iter = rdata->iter; - rqst.rq_iter_size = iov_iter_count(&rdata->iter); + rqst.rq_iter = rdata->subreq.io_iter; + rqst.rq_iter_size = iov_iter_count(&rdata->subreq.io_iter); } WARN_ONCE(rdata->server != mid->server, "rdata server %p != mid server %p", rdata->server, mid->server); - cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%u\n", + cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n", __func__, mid->mid, mid->mid_state, rdata->result, - rdata->bytes); + rdata->subreq.len); switch (mid->mid_state) { case MID_RESPONSE_RECEIVED: @@ -4237,13 +4237,13 @@ smb2_readv_callback(struct mid_q_entry *mid) cifs_stats_fail_inc(tcon, SMB2_READ_HE); trace_smb3_read_err(0 /* xid */, 
				     rdata->cfile->fid.persistent_fid,
-				     tcon->tid, tcon->ses->Suid, rdata->offset,
-				     rdata->bytes, rdata->result);
+				     tcon->tid, tcon->ses->Suid, rdata->subreq.start,
+				     rdata->subreq.len, rdata->result);
 	} else
 		trace_smb3_read_done(0 /* xid */,
 				     rdata->cfile->fid.persistent_fid,
 				     tcon->tid, tcon->ses->Suid,
-				     rdata->offset, rdata->got_bytes);
+				     rdata->subreq.start, rdata->got_bytes);
 
 	queue_work(cifsiod_wq, &rdata->work);
 	release_mid(mid);
@@ -4265,16 +4265,16 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
 	unsigned int total_len;
 	int credit_request;
 
-	cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n",
-		 __func__, rdata->offset, rdata->bytes);
+	cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n",
+		 __func__, rdata->subreq.start, rdata->subreq.len);
 
 	if (!rdata->server)
 		rdata->server = cifs_pick_channel(tcon->ses);
 
 	io_parms.tcon = tlink_tcon(rdata->cfile->tlink);
 	io_parms.server = server = rdata->server;
-	io_parms.offset = rdata->offset;
-	io_parms.length = rdata->bytes;
+	io_parms.offset = rdata->subreq.start;
+	io_parms.length = rdata->subreq.len;
 	io_parms.persistent_fid = rdata->cfile->fid.persistent_fid;
 	io_parms.volatile_fid = rdata->cfile->fid.volatile_fid;
 	io_parms.pid = rdata->pid;
@@ -4293,7 +4293,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
 	shdr = (struct smb2_hdr *)buf;
 
 	if (rdata->credits.value > 0) {
-		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes,
+		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->subreq.len,
 						SMB2_MAX_BUFFER_SIZE));
 		credit_request = le16_to_cpu(shdr->CreditCharge) + 8;
 		if (server->credits >= server->max_credits)
@@ -4303,7 +4303,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
 			min_t(int, server->max_credits - server->credits,
 			      credit_request));
 
-	rc = adjust_credits(server, &rdata->credits, rdata->bytes);
+	rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);
 	if (rc)
 		goto async_readv_out;
 
@@ -4441,13 +4441,13 @@ smb2_writev_callback(struct mid_q_entry *mid)
 		 * client.  OS/2 servers are known to set incorrect
 		 * CountHigh values.
 		 */
-		if (written > wdata->bytes)
+		if (written > wdata->subreq.len)
 			written &= 0xFFFF;
 
-		if (written < wdata->bytes)
+		if (written < wdata->subreq.len)
 			wdata->result = -ENOSPC;
 		else
-			wdata->bytes = written;
+			wdata->subreq.len = written;
 		break;
 	case MID_REQUEST_SUBMITTED:
 	case MID_RETRY_NEEDED:
@@ -4478,8 +4478,8 @@ smb2_writev_callback(struct mid_q_entry *mid)
 		cifs_stats_fail_inc(tcon, SMB2_WRITE_HE);
 		trace_smb3_write_err(0 /* no xid */,
 				     wdata->cfile->fid.persistent_fid,
-				     tcon->tid, tcon->ses->Suid, wdata->offset,
-				     wdata->bytes, wdata->result);
+				     tcon->tid, tcon->ses->Suid, wdata->subreq.start,
+				     wdata->subreq.len, wdata->result);
 		if (wdata->result == -ENOSPC)
 			pr_warn_once("Out of space writing to %s\n",
 				     tcon->tree_name);
@@ -4487,7 +4487,7 @@ smb2_writev_callback(struct mid_q_entry *mid)
 		trace_smb3_write_done(0 /* no xid */,
 				      wdata->cfile->fid.persistent_fid,
 				      tcon->tid, tcon->ses->Suid,
-				      wdata->offset, wdata->bytes);
+				      wdata->subreq.start, wdata->subreq.len);
 
 	queue_work(cifsiod_wq, &wdata->work);
 	release_mid(mid);
@@ -4520,8 +4520,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 	_io_parms = (struct cifs_io_parms) {
 		.tcon = tcon,
 		.server = server,
-		.offset = wdata->offset,
-		.length = wdata->bytes,
+		.offset = wdata->subreq.start,
+		.length = wdata->subreq.len,
 		.persistent_fid = wdata->cfile->fid.persistent_fid,
 		.volatile_fid = wdata->cfile->fid.volatile_fid,
 		.pid = wdata->pid,
@@ -4563,10 +4563,10 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 	 */
 	if (smb3_use_rdma_offload(io_parms)) {
 		struct smbd_buffer_descriptor_v1 *v1;
-		size_t data_size = iov_iter_count(&wdata->iter);
+		size_t data_size = iov_iter_count(&wdata->subreq.io_iter);
 		bool need_invalidate = server->dialect == SMB30_PROT_ID;
 
-		wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->iter,
+		wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->subreq.io_iter,
 					     false, need_invalidate);
 		if (!wdata->mr) {
 			rc = -EAGAIN;
@@ -4593,7 +4593,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 	rqst.rq_iov = iov;
 	rqst.rq_nvec = 1;
-	rqst.rq_iter = wdata->iter;
+	rqst.rq_iter = wdata->subreq.io_iter;
 	rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter);
 #ifdef CONFIG_CIFS_SMB_DIRECT
 	if (wdata->mr)
@@ -4611,7 +4611,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
 #endif
 
 	if (wdata->credits.value > 0) {
-		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes,
+		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->subreq.len,
 						SMB2_MAX_BUFFER_SIZE));
 		credit_request = le16_to_cpu(shdr->CreditCharge) + 8;
 		if (server->credits >= server->max_credits)
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index 16d87867ef50..c52b9bc10242 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -1701,8 +1701,8 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
 	unsigned int buflen = server->pdu_size + HEADER_PREAMBLE_SIZE(server);
 	bool use_rdma_mr = false;
 
-	cifs_dbg(FYI, "%s: mid=%llu offset=%llu bytes=%u\n",
-		 __func__, mid->mid, rdata->offset, rdata->bytes);
+	cifs_dbg(FYI, "%s: mid=%llu offset=%llu bytes=%zu\n",
+		 __func__, mid->mid, rdata->subreq.start, rdata->subreq.len);
 
 	/*
 	 * read the rest of READ_RSP header (sans Data array), or whatever we
@@ -1807,7 +1807,7 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
 		length = data_len; /* An RDMA read is already done.
 				      */
 	else
 #endif
-		length = cifs_read_iter_from_socket(server, &rdata->iter,
+		length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter,
 						    data_len);
 	if (length > 0)
 		rdata->got_bytes += length;

From patchwork Fri Oct 13 16:04:16 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421189
From: David Howells
To: Jeff Layton, Steve French
Cc: Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N,
    Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Rohith Surabattula,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 47/53] cifs: Make wait_mtu_credits take size_t args
Date: Fri, 13 Oct 2023 17:04:16 +0100
Message-ID: <20231013160423.2218093-48-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Make the wait_mtu_credits functions use size_t for the size and num
arguments rather than unsigned int, as netfslib uses size_t/ssize_t for
arguments and return values to allow for extra capacity.
Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsglob.h  |  4 ++--
 fs/smb/client/cifsproto.h |  2 +-
 fs/smb/client/file.c      | 18 ++++++++++--------
 fs/smb/client/smb2ops.c   |  4 ++--
 fs/smb/client/transport.c |  4 ++--
 5 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index c7f04f9853c5..73367fc3a77c 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -507,8 +507,8 @@ struct smb_version_operations {
 	/* writepages retry size */
 	unsigned int (*wp_retry_size)(struct inode *);
 	/* get mtu credits */
-	int (*wait_mtu_credits)(struct TCP_Server_Info *, unsigned int,
-				unsigned int *, struct cifs_credits *);
+	int (*wait_mtu_credits)(struct TCP_Server_Info *, size_t,
+				size_t *, struct cifs_credits *);
 	/* adjust previously taken mtu credits to request size */
 	int (*adjust_credits)(struct TCP_Server_Info *server,
 			      struct cifs_credits *credits,
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
index 561dac1576a5..735337e8326c 100644
--- a/fs/smb/client/cifsproto.h
+++ b/fs/smb/client/cifsproto.h
@@ -121,7 +121,7 @@ extern struct mid_q_entry *cifs_setup_async_request(struct TCP_Server_Info *,
 extern int cifs_check_receive(struct mid_q_entry *mid,
 			      struct TCP_Server_Info *server, bool log_error);
 extern int cifs_wait_mtu_credits(struct TCP_Server_Info *server,
-				 unsigned int size, unsigned int *num,
+				 size_t size, size_t *num,
 				 struct cifs_credits *credits);
 extern int SendReceive2(const unsigned int /* xid */ , struct cifs_ses *,
 			struct kvec *, int /* nvec to send */,
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index c70d106a413f..dd5e52d5e8d0 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -2733,9 +2733,9 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
 	struct cifs_credits credits_on_stack;
 	struct cifs_credits *credits = &credits_on_stack;
 	struct cifsFileInfo *cfile = NULL;
-	unsigned int xid, wsize, len;
+	unsigned int xid, len;
 	loff_t i_size = i_size_read(inode);
-	size_t max_len;
+	size_t max_len, wsize;
 	long count = wbc->nr_to_write;
 	int rc;
@@ -3248,7 +3248,7 @@ static int cifs_resend_wdata(struct cifs_io_subrequest *wdata,
 			     struct list_head *wdata_list,
 			     struct cifs_aio_ctx *ctx)
 {
-	unsigned int wsize;
+	size_t wsize;
 	struct cifs_credits credits;
 	int rc;
 	struct TCP_Server_Info *server = wdata->server;
@@ -3382,7 +3382,8 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from,
 	do {
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
-		unsigned int wsize, nsegs = 0;
+		unsigned int nsegs = 0;
+		size_t wsize;
 
 		if (signal_pending(current)) {
 			rc = -EINTR;
@@ -3819,7 +3820,7 @@ static int cifs_resend_rdata(struct cifs_io_subrequest *rdata,
 			     struct list_head *rdata_list,
 			     struct cifs_aio_ctx *ctx)
 {
-	unsigned int rsize;
+	size_t rsize;
 	struct cifs_credits credits;
 	int rc;
 	struct TCP_Server_Info *server;
@@ -3893,10 +3894,10 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file,
 		     struct cifs_aio_ctx *ctx)
 {
 	struct cifs_io_subrequest *rdata;
-	unsigned int rsize, nsegs, max_segs = INT_MAX;
+	unsigned int nsegs, max_segs = INT_MAX;
 	struct cifs_credits credits_on_stack;
 	struct cifs_credits *credits = &credits_on_stack;
-	size_t cur_len, max_len;
+	size_t cur_len, max_len, rsize;
 	int rc;
 	pid_t pid;
 	struct TCP_Server_Info *server;
@@ -4492,12 +4493,13 @@ static void cifs_readahead(struct readahead_control *ractl)
 	 * Chop the readahead request up into rsize-sized read requests.
 	 */
 	while ((nr_pages = ra_pages)) {
-		unsigned int i, rsize;
+		unsigned int i;
 		struct cifs_io_subrequest *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
 		struct folio *folio;
 		pgoff_t fsize;
+		size_t rsize;
 
 		/*
 		 * Find out if we have anything cached in the range of
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index bb1e8415bcf3..c26bc64b7fad 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -216,8 +216,8 @@ smb2_get_credits(struct mid_q_entry *mid)
 }
 
 static int
-smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size,
-		      unsigned int *num, struct cifs_credits *credits)
+smb2_wait_mtu_credits(struct TCP_Server_Info *server, size_t size,
+		      size_t *num, struct cifs_credits *credits)
 {
 	int rc = 0;
 	unsigned int scredits, in_flight;
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index c52b9bc10242..9d22b1cdfc9f 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -693,8 +693,8 @@ wait_for_compound_request(struct TCP_Server_Info *server, int num,
 }
 
 int
-cifs_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size,
-		      unsigned int *num, struct cifs_credits *credits)
+cifs_wait_mtu_credits(struct TCP_Server_Info *server, size_t size,
+		      size_t *num, struct cifs_credits *credits)
 {
 	*num = size;
 	credits->value = 0;

From patchwork Fri Oct 13 16:04:17 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421195
From: David Howells
To: Jeff Layton, Steve French
Cc: Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N,
    Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Rohith Surabattula,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 48/53] cifs: Implement netfslib hooks
Date: Fri, 13 Oct 2023 17:04:17 +0100
Message-ID: <20231013160423.2218093-49-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Provide an implementation of the netfslib hooks that will be used by
netfslib to ask cifs to set up and perform operations.  Of particular note
are:

 (*) cifs_clamp_length() - This is used to negotiate the size of the next
     subrequest in a read request, taking into account the credit available
     and the rsize.  The credits are attached to the subrequest.

 (*) cifs_req_issue_read() - This is used to issue a subrequest that has
     been set up and clamped.

 (*) cifs_create_write_requests() - This is used to break the given span of
     file positions into suboperations according to cifs's wsize and
     available credits.  As each subop is created, it can be dispatched or
     queued for dispatch.

At this point, cifs is not wired up to actually *use* netfslib; that will
be done in a subsequent patch.
Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_write.c    |   3 +
 fs/smb/client/Kconfig        |   1 +
 fs/smb/client/cifsglob.h     |  26 ++-
 fs/smb/client/file.c         | 373 +++++++++++++++++++++++++++++++++++
 include/linux/netfs.h        |   1 +
 include/trace/events/netfs.h |   1 +
 6 files changed, 397 insertions(+), 8 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 6657dbd07b9d..c2f7dc99ff92 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -373,6 +373,9 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 	} while (iov_iter_count(iter));
 
 out:
+	if (likely(written) && ctx->ops->post_modify)
+		ctx->ops->post_modify(inode);
+
 	if (unlikely(wreq)) {
 		ret = netfs_end_writethrough(wreq, iocb);
 		wbc_detach_inode(&wbc);
diff --git a/fs/smb/client/Kconfig b/fs/smb/client/Kconfig
index 2927bd174a88..2517dc242386 100644
--- a/fs/smb/client/Kconfig
+++ b/fs/smb/client/Kconfig
@@ -2,6 +2,7 @@
 config CIFS
 	tristate "SMB3 and CIFS support (advanced network filesystem)"
 	depends on INET
+	select NETFS_SUPPORT
 	select NLS
 	select NLS_UCS2_UTILS
 	select CRYPTO
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 73367fc3a77c..a215c092725a 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -1420,15 +1420,23 @@ struct cifs_aio_ctx {
 	bool direct_io;
 };
 
+struct cifs_io_request {
+	struct netfs_io_request rreq;
+	struct cifsFileInfo *cfile;
+};
+
 /* asynchronous read support */
 struct cifs_io_subrequest {
-	struct netfs_io_subrequest subreq;
-	struct cifsFileInfo *cfile;
-	struct address_space *mapping;
-	struct cifs_aio_ctx *ctx;
+	union {
+		struct netfs_io_subrequest subreq;
+		struct netfs_io_request *rreq;
+		struct cifs_io_request *req;
+	};
 	ssize_t got_bytes;
 	pid_t pid;
+	unsigned int xid;
 	int result;
+	bool have_credits;
 	struct kvec iov[2];
 	struct TCP_Server_Info *server;
 #ifdef CONFIG_CIFS_SMB_DIRECT
@@ -1436,14 +1444,16 @@ struct cifs_io_subrequest {
 #endif
 	struct cifs_credits credits;
 
-	enum writeback_sync_modes sync_mode;
-	bool uncached;
-	struct bio_vec *bv;
-
 	// TODO: Remove following elements
 	struct list_head list;
 	struct completion done;
 	struct work_struct work;
+	struct cifsFileInfo *cfile;
+	struct address_space *mapping;
+	struct cifs_aio_ctx *ctx;
+	enum writeback_sync_modes sync_mode;
+	bool uncached;
+	struct bio_vec *bv;
 };
 
 /*
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index dd5e52d5e8d0..6c7b91728dd4 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -36,6 +36,379 @@
 #include "fs_context.h"
 #include "cifs_ioctl.h"
 #include "cached_dir.h"
+#include
+
+static int cifs_reopen_file(struct cifsFileInfo *cfile, bool can_flush);
+
+static void cifs_upload_to_server(struct netfs_io_subrequest *subreq)
+{
+	struct cifs_io_subrequest *wdata =
+		container_of(subreq, struct cifs_io_subrequest, subreq);
+	ssize_t rc;
+
+	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+
+	if (wdata->req->cfile->invalidHandle)
+		rc = -EAGAIN;
+	else
+		rc = wdata->server->ops->async_writev(wdata);
+	if (rc < 0)
+		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+}
+
+static void cifs_upload_to_server_worker(struct work_struct *work)
+{
+	struct netfs_io_subrequest *subreq =
+		container_of(work, struct netfs_io_subrequest, work);
+
+	cifs_upload_to_server(subreq);
+}
+
+/*
+ * Set up write requests for a writeback slice.  We need to add a write request
+ * for each write we want to make.
+ */
+static void cifs_create_write_requests(struct netfs_io_request *wreq,
+				       loff_t start, size_t remain)
+{
+	struct netfs_io_subrequest *subreq;
+	struct cifs_io_subrequest *wdata;
+	struct cifs_io_request *req = container_of(wreq, struct cifs_io_request, rreq);
+	struct TCP_Server_Info *server;
+	struct cifsFileInfo *open_file = req->cfile;
+	struct cifs_sb_info *cifs_sb = CIFS_SB(wreq->inode->i_sb);
+	int rc = 0;
+	size_t offset = 0;
+	pid_t pid;
+	unsigned int xid, max_segs = INT_MAX;
+
+	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
+		pid = open_file->pid;
+	else
+		pid = current->tgid;
+
+	server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses);
+	xid = get_xid();
+
+#ifdef CONFIG_CIFS_SMB_DIRECT
+	if (server->smbd_conn)
+		max_segs = server->smbd_conn->max_frmr_depth;
+#endif
+
+	do {
+		unsigned int nsegs = 0;
+		size_t max_len, part, wsize;
+
+		subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
+						    start, remain,
+						    cifs_upload_to_server_worker);
+		if (!subreq) {
+			wreq->error = -ENOMEM;
+			break;
+		}
+
+		wdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+
+	retry:
+		if (signal_pending(current)) {
+			wreq->error = -EINTR;
+			break;
+		}
+
+		if (open_file->invalidHandle) {
+			rc = cifs_reopen_file(open_file, false);
+			if (rc < 0) {
+				if (rc == -EAGAIN)
+					goto retry;
+				break;
+			}
+		}
+
+		rc = server->ops->wait_mtu_credits(server, wreq->wsize, &wsize,
+						   &wdata->credits);
+		if (rc)
+			break;
+
+		max_len = min(remain, wsize);
+		if (!max_len) {
+			rc = -EAGAIN;
+			goto failed_return_credits;
+		}
+
+		part = netfs_limit_iter(&wreq->io_iter, offset, max_len, max_segs);
+		cifs_dbg(FYI, "create_write_request len=%zx/%zx nsegs=%u/%lu/%u\n",
+			 part, max_len, nsegs, wreq->io_iter.nr_segs, max_segs);
+		if (!part) {
+			rc = -EIO;
+			goto failed_return_credits;
+		}
+
+		if (part < wdata->subreq.len) {
+			wdata->subreq.len = part;
+			iov_iter_truncate(&wdata->subreq.io_iter, part);
+		}
+
+		wdata->server = server;
+		wdata->pid = pid;
+
+		rc = adjust_credits(server, &wdata->credits, wdata->subreq.len);
+		if (rc) {
+			add_credits_and_wake_if(server, &wdata->credits, 0);
+			if (rc == -EAGAIN)
+				goto retry;
+			goto failed;
+		}
+
+		cifs_upload_to_server(subreq);
+		//netfs_queue_write_request(subreq);
+		start += part;
+		offset += part;
+		remain -= part;
+	} while (remain > 0);
+
+	free_xid(xid);
+	return;
+
+failed_return_credits:
+	add_credits_and_wake_if(server, &wdata->credits, 0);
+failed:
+	netfs_write_subrequest_terminated(subreq, rc, false);
+	free_xid(xid);
+}
+
+/*
+ * Split the read up according to how many credits we can get for each piece.
+ * It's okay to sleep here if we need to wait for more credit to become
+ * available.
+ *
+ * We also choose the server and allocate an operation ID to be cleaned up
+ * later.
+ */
+static bool cifs_clamp_length(struct netfs_io_subrequest *subreq)
+{
+	struct netfs_io_request *rreq = subreq->rreq;
+	struct TCP_Server_Info *server;
+	struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+	struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+	struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+	size_t rsize = 0;
+	int rc;
+
+	rdata->xid = get_xid();
+
+	server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses);
+	rdata->server = server;
+
+	if (cifs_sb->ctx->rsize == 0)
+		cifs_sb->ctx->rsize =
+			server->ops->negotiate_rsize(tlink_tcon(req->cfile->tlink),
+						     cifs_sb->ctx);
+
+	rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, &rsize,
+					   &rdata->credits);
+	if (rc) {
+		subreq->error = rc;
+		return false;
+	}
+
+	rdata->have_credits = true;
+	subreq->len = min_t(size_t, subreq->len, rsize);
+#ifdef CONFIG_CIFS_SMB_DIRECT
+	if (server->smbd_conn)
+		subreq->max_nr_segs = server->smbd_conn->max_frmr_depth;
+#endif
+	return true;
+}
+
+/*
+ * Issue a read operation on behalf of the netfs helper functions.  We're asked
+ * to make a read of a certain size at a point in the file.  We are permitted
+ * to only read a portion of that, but as long as we read something, the netfs
+ * helper will call us again so that we can issue another read.
+ */
+static void cifs_req_issue_read(struct netfs_io_subrequest *subreq)
+{
+	struct netfs_io_request *rreq = subreq->rreq;
+	struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+	struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+	struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+	pid_t pid;
+	int rc = 0;
+
+	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
+		pid = req->cfile->pid;
+	else
+		pid = current->tgid; // Ummm...  This may be a workqueue
+
+	cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
+		 __func__, rreq->debug_id, subreq->debug_index, rreq->mapping,
+		 subreq->transferred, subreq->len);
+
+	if (req->cfile->invalidHandle) {
+		do {
+			rc = cifs_reopen_file(req->cfile, true);
+		} while (rc == -EAGAIN);
+		if (rc)
+			goto out;
+	}
+
+	__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+	rdata->pid = pid;
+
+	rc = adjust_credits(rdata->server, &rdata->credits, rdata->subreq.len);
+	if (!rc) {
+		if (rdata->req->cfile->invalidHandle)
+			rc = -EAGAIN;
+		else
+			rc = rdata->server->ops->async_readv(rdata);
+	}
+
+out:
+	if (rc)
+		netfs_subreq_terminated(subreq, rc, false);
+}
+
+/*
+ * Initialise a request.
+ */
+static int cifs_init_request(struct netfs_io_request *rreq, struct file *file)
+{
+	struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq);
+	struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+	struct cifsFileInfo *open_file = NULL;
+	int ret;
+
+	rreq->rsize = cifs_sb->ctx->rsize;
+	rreq->wsize = cifs_sb->ctx->wsize;
+
+	if (file) {
+		open_file = file->private_data;
+		rreq->netfs_priv = file->private_data;
+		req->cfile = cifsFileInfo_get(open_file);
+	} else if (rreq->origin == NETFS_WRITEBACK ||
+		   rreq->origin == NETFS_LAUNDER_WRITE) {
+		ret = cifs_get_writable_file(CIFS_I(rreq->inode), FIND_WR_ANY, &req->cfile);
+		if (ret) {
+			cifs_dbg(VFS, "No writable handle in writepages ret=%d\n", ret);
+			return ret;
+		}
+	} else {
+		WARN_ON_ONCE(1);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+/*
+ * Expand the size of a readahead to the size of the rsize, if at least as
+ * large as a page, allowing for the possibility that rsize is not pow-2
+ * aligned.
+ */
+static void cifs_expand_readahead(struct netfs_io_request *rreq)
+{
+	unsigned int rsize = rreq->rsize;
+	loff_t misalignment, i_size = i_size_read(rreq->inode);
+
+	if (rsize < PAGE_SIZE)
+		return;
+
+	if (rsize < INT_MAX)
+		rsize = roundup_pow_of_two(rsize);
+	else
+		rsize = ((unsigned int)INT_MAX + 1) / 2;
+
+	misalignment = rreq->start & (rsize - 1);
+	if (misalignment) {
+		rreq->start -= misalignment;
+		rreq->len += misalignment;
+	}
+
+	rreq->len = round_up(rreq->len, rsize);
+	if (rreq->start < i_size && rreq->len > i_size - rreq->start)
+		rreq->len = i_size - rreq->start;
+}
+
+/*
+ * Completion of a request operation.
+ */
+static void cifs_rreq_done(struct netfs_io_request *rreq)
+{
+	struct inode *inode = rreq->inode;
+
+	/* we do not want atime to be less than mtime, it broke some apps */
+	inode->i_atime = current_time(inode);
+	if (timespec64_compare(&inode->i_atime, &inode->i_mtime))
+		inode->i_atime = inode->i_mtime;
+	else
+		inode->i_atime = current_time(inode);
+}
+
+static void cifs_post_modify(struct inode *inode)
+{
+	/* Indication to update ctime and mtime as close is deferred */
+	set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
+}
+
+/*
+ * Begin a cache operation. This allows for the netfs to have caching
+ * disabled or to use some cache other than fscache.
+ */
+static int cifs_begin_cache_operation(struct netfs_io_request *rreq)
+{
+#ifdef CONFIG_CIFS_FSCACHE
+	struct fscache_cookie *cookie = cifs_inode_cookie(rreq->inode);
+
+	return fscache_begin_read_operation(&rreq->cache_resources, cookie);
+#else
+	return -ENOBUFS;
+#endif
+}
+
+static void cifs_free_request(struct netfs_io_request *rreq)
+{
+	struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq);
+
+	if (req->cfile)
+		cifsFileInfo_put(req->cfile);
+}
+
+static void cifs_free_subrequest(struct netfs_io_subrequest *subreq)
+{
+	struct cifs_io_subrequest *rdata =
+		container_of(subreq, struct cifs_io_subrequest, subreq);
+	int rc;
+
+	if (rdata->subreq.source == NETFS_DOWNLOAD_FROM_SERVER) {
+#ifdef CONFIG_CIFS_SMB_DIRECT
+		if (rdata->mr) {
+			smbd_deregister_mr(rdata->mr);
+			rdata->mr = NULL;
+		}
+#endif
+
+		if (rdata->have_credits)
+			add_credits_and_wake_if(rdata->server, &rdata->credits, 0);
+		rc = subreq->error;
+		free_xid(rdata->xid);
+	}
+}
+
+const struct netfs_request_ops cifs_req_ops = {
+	.io_request_size	= sizeof(struct cifs_io_request),
+	.io_subrequest_size	= sizeof(struct cifs_io_subrequest),
+	.init_request		= cifs_init_request,
+	.free_request		= cifs_free_request,
+	.free_subrequest	= cifs_free_subrequest,
+	.begin_cache_operation	= cifs_begin_cache_operation,
+	.expand_readahead	= cifs_expand_readahead,
+	.clamp_length		= cifs_clamp_length,
+	.issue_read		= cifs_req_issue_read,
+	.done			= cifs_rreq_done,
+	.post_modify		= cifs_post_modify,
+	.create_write_requests	= cifs_create_write_requests,
+};

 /*
  * Remove the dirty flags from a span of pages.
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index ff4f86ae64e4..8ba9f6d811e1 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -322,6 +322,7 @@ struct netfs_request_ops {

 	/* Modification handling */
 	void (*update_i_size)(struct inode *inode, loff_t i_size);
+	void (*post_modify)(struct inode *inode);

 	/* Write request handling */
 	void (*create_write_requests)(struct netfs_io_request *wreq,
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 04cbe803c251..5c01c27fd3e7 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -106,6 +106,7 @@
 #define netfs_sreq_ref_traces					\
 	EM(netfs_sreq_trace_get_copy_to_cache,	"GET COPY2C ")	\
 	EM(netfs_sreq_trace_get_resubmit,	"GET RESUBMIT")	\
+	EM(netfs_sreq_trace_get_submit,		"GET SUBMIT")	\
 	EM(netfs_sreq_trace_get_short_read,	"GET SHORTRD")	\
 	EM(netfs_sreq_trace_new,		"NEW         ")	\
 	EM(netfs_sreq_trace_put_clear,		"PUT CLEAR  ")	\

From patchwork Fri Oct 13 16:04:18 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421191
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Cc: linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH 49/53] cifs: Move cifs_loose_read_iter() and cifs_file_write_iter() to file.c
Date: Fri, 13 Oct 2023 17:04:18 +0100
Message-ID: <20231013160423.2218093-50-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Move cifs_loose_read_iter() and cifs_file_write_iter() to file.c so that
they are colocated with similar functions rather than being split between
file.c and cifsfs.c.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsfs.c | 55 ------------------------------------------
 fs/smb/client/cifsfs.h |  2 ++
 fs/smb/client/file.c   | 53 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index 85799e9e0f4c..0c19b65206f6 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -982,61 +982,6 @@ cifs_smb3_do_mount(struct file_system_type *fs_type,
 	return root;
 }

-
-static ssize_t
-cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter)
-{
-	ssize_t rc;
-	struct inode *inode = file_inode(iocb->ki_filp);
-
-	if (iocb->ki_flags & IOCB_DIRECT)
-		return cifs_user_readv(iocb, iter);
-
-	rc = cifs_revalidate_mapping(inode);
-	if (rc)
-		return rc;
-
-	return generic_file_read_iter(iocb, iter);
-}
-
-static ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
-{
-	struct inode *inode = file_inode(iocb->ki_filp);
-
-	struct cifsInodeInfo *cinode = CIFS_I(inode);
-	ssize_t written;
-	int rc;
-
-	if (iocb->ki_filp->f_flags & O_DIRECT) {
-		written = cifs_user_writev(iocb, from);
-		if (written > 0 && CIFS_CACHE_READ(cinode)) {
-			cifs_zap_mapping(inode);
-			cifs_dbg(FYI,
-				 "Set no oplock for inode=%p after a write operation\n",
-				 inode);
-			cinode->oplock = 0;
-		}
-		return written;
-	}
-
-	written = cifs_get_writer(cinode);
-	if (written)
-		return written;
-
-	written = generic_file_write_iter(iocb, from);
-
-	if (CIFS_CACHE_WRITE(CIFS_I(inode)))
-		goto out;
-
-	rc = filemap_fdatawrite(inode->i_mapping);
-	if (rc)
-		cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n",
-			 rc, inode);
-
-out:
-	cifs_put_writer(cinode);
-	return written;
-}
-
 static loff_t cifs_llseek(struct file *file, loff_t offset, int whence)
 {
 	struct cifsFileInfo *cfile = file->private_data;
diff --git a/fs/smb/client/cifsfs.h b/fs/smb/client/cifsfs.h
index 41daebd220ff..24d5bac07f87 100644
--- a/fs/smb/client/cifsfs.h
+++ b/fs/smb/client/cifsfs.h
@@ -100,6 +100,8 @@ extern ssize_t cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to);
 extern ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from);
 extern ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from);
 extern ssize_t cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from);
+ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);
+ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter);
 extern int cifs_flock(struct file *pfile, int cmd, struct file_lock *plock);
 extern int cifs_lock(struct file *, int, struct file_lock *);
 extern int cifs_fsync(struct file *, loff_t, loff_t, int);
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 6c7b91728dd4..3112233c4835 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -4584,6 +4584,59 @@ ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to)
 	return __cifs_readv(iocb, to, false);
 }

+ssize_t
+cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+{
+	ssize_t rc;
+	struct inode *inode = file_inode(iocb->ki_filp);
+
+	if (iocb->ki_flags & IOCB_DIRECT)
+		return cifs_user_readv(iocb, iter);
+
+	rc = cifs_revalidate_mapping(inode);
+	if (rc)
+		return rc;
+
+	return generic_file_read_iter(iocb, iter);
+}
+
+ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+	struct cifsInodeInfo *cinode = CIFS_I(inode);
+	ssize_t written;
+	int rc;
+
+	if (iocb->ki_filp->f_flags & O_DIRECT) {
+		written = cifs_user_writev(iocb, from);
+		if (written > 0 && CIFS_CACHE_READ(cinode)) {
+			cifs_zap_mapping(inode);
+			cifs_dbg(FYI,
+				 "Set no oplock for inode=%p after a write operation\n",
+				 inode);
+			cinode->oplock = 0;
+		}
+		return written;
+	}
+
+	written = cifs_get_writer(cinode);
+	if (written)
+		return written;
+
+	written = generic_file_write_iter(iocb, from);
+
+	if (CIFS_CACHE_WRITE(CIFS_I(inode)))
+		goto out;
+
+	rc = filemap_fdatawrite(inode->i_mapping);
+	if (rc)
+		cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n",
+			 rc, inode);
+
+out:
+	cifs_put_writer(cinode);
+	return written;
+}
+
 ssize_t cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to)
 {

From patchwork Fri Oct 13 16:04:19 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421190
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Cc: linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH 50/53] cifs: Cut over to using netfslib
Date: Fri, 13 Oct 2023 17:04:19 +0100
Message-ID: <20231013160423.2218093-51-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
Make the cifs filesystem use netfslib to handle reading and writing on
behalf of cifs. The changes include:

 (1) Various read_iter/write_iter type functions are turned into wrappers
     around netfslib API functions or are pointed directly at those
     functions:

     cifs_file_direct{,_nobrl}_ops switch to use netfs_unbuffered_read_iter
     and netfs_unbuffered_write_iter.

Large pieces of code that will be removed are #if'd out and will be removed
in subsequent patches.

[?] Why does cifs mark the page dirty in the destination buffer of a DIO
    read? Should that happen automatically? Does netfs need to do that?

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/io.c             |   7 +-
 fs/smb/client/cifsfs.c    |   8 +--
 fs/smb/client/cifsfs.h    |   8 +--
 fs/smb/client/cifsglob.h  |   3 +-
 fs/smb/client/cifsproto.h |   4 ++
 fs/smb/client/cifssmb.c   |  45 +++++++-----
 fs/smb/client/file.c      | 130 ++++++++++++++++++----------------
 fs/smb/client/fscache.c   |   2 +
 fs/smb/client/fscache.h   |   4 ++
 fs/smb/client/inode.c     |  19 ++++-
 fs/smb/client/smb2pdu.c   |  98 ++++++++++++++++----------
 fs/smb/client/trace.h     | 144 +++++++++++++++++++++++++++++++++-----
 fs/smb/client/transport.c |   3 +
 13 files changed, 326 insertions(+), 149 deletions(-)

diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index 14a9f3312d3b..112fa0548f22 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -351,8 +351,13 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
 	unsigned int i;
 	size_t transferred = 0;

-	for (i = 0; i < rreq->direct_bv_count; i++)
+	for (i = 0; i < rreq->direct_bv_count; i++) {
 		flush_dcache_page(rreq->direct_bv[i].bv_page);
+		// TODO: cifs marks pages in the destination buffer
+		// dirty
+		// under some circumstances after a read. Do we
+		// need to do that too?
+		set_page_dirty(rreq->direct_bv[i].bv_page);
+	}

 	list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
 		if (subreq->error || subreq->transferred == 0)
diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index 0c19b65206f6..26b6ea9eb53e 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -1352,8 +1352,8 @@ const struct file_operations cifs_file_strict_ops = {
 };

 const struct file_operations cifs_file_direct_ops = {
-	.read_iter = cifs_direct_readv,
-	.write_iter = cifs_direct_writev,
+	.read_iter = netfs_unbuffered_read_iter,
+	.write_iter = netfs_file_write_iter,
 	.open = cifs_open,
 	.release = cifs_close,
 	.lock = cifs_lock,
@@ -1408,8 +1408,8 @@ const struct file_operations cifs_file_strict_nobrl_ops = {
 };

 const struct file_operations cifs_file_direct_nobrl_ops = {
-	.read_iter = cifs_direct_readv,
-	.write_iter = cifs_direct_writev,
+	.read_iter = netfs_unbuffered_read_iter,
+	.write_iter = netfs_file_write_iter,
 	.open = cifs_open,
 	.release = cifs_close,
 	.fsync = cifs_fsync,
diff --git a/fs/smb/client/cifsfs.h b/fs/smb/client/cifsfs.h
index 24d5bac07f87..6bbb26a462db 100644
--- a/fs/smb/client/cifsfs.h
+++ b/fs/smb/client/cifsfs.h
@@ -85,6 +85,7 @@ extern const struct inode_operations cifs_namespace_inode_operations;

 /* Functions related to files and directories */
+extern const struct netfs_request_ops cifs_req_ops;
 extern const struct file_operations cifs_file_ops;
 extern const struct file_operations cifs_file_direct_ops; /* if directio mnt */
 extern const struct file_operations cifs_file_strict_ops; /* if strictio mnt */
@@ -94,11 +95,7 @@ extern const struct file_operations cifs_file_strict_nobrl_ops;
 extern int cifs_open(struct inode *inode, struct file *file);
 extern int cifs_close(struct inode *inode, struct file *file);
 extern int cifs_closedir(struct inode *inode, struct file *file);
-extern ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter
 *to);
-extern ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to);
 extern ssize_t cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to);
-extern ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from);
-extern ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from);
 extern ssize_t cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from);
 ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);
 ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter);
@@ -112,9 +109,6 @@ extern int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma);
 extern const struct file_operations cifs_dir_ops;
 extern int cifs_dir_open(struct inode *inode, struct file *file);
 extern int cifs_readdir(struct file *file, struct dir_context *ctx);
-extern void cifs_pages_written_back(struct inode *inode, loff_t start, unsigned int len);
-extern void cifs_pages_write_failed(struct inode *inode, loff_t start, unsigned int len);
-extern void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int len);

 /* Functions related to dir entries */
 extern const struct dentry_operations cifs_dentry_ops;
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index a215c092725a..a5e114eeeb8b 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -1444,7 +1444,7 @@ struct cifs_io_subrequest {
 #endif
 	struct cifs_credits		credits;

-	// TODO: Remove following elements
+#if 0 // TODO: Remove following elements
 	struct list_head		list;
 	struct completion		done;
 	struct work_struct		work;
@@ -1454,6 +1454,7 @@ struct cifs_io_subrequest {
 	enum writeback_sync_modes	sync_mode;
 	bool				uncached;
 	struct bio_vec			*bv;
+#endif
 };

 /*
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
index 735337e8326c..52ff5e889af2 100644
--- a/fs/smb/client/cifsproto.h
+++ b/fs/smb/client/cifsproto.h
@@ -580,17 +580,20 @@ void __cifs_put_smb_ses(struct cifs_ses *ses);
 extern struct cifs_ses *
 cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx);

+#if 0 // TODO Remove
 void cifs_readdata_release(struct cifs_io_subrequest *rdata);
 static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata)
 {
 	if (refcount_dec_and_test(&rdata->subreq.ref))
 		cifs_readdata_release(rdata);
 }
+#endif
 int cifs_async_readv(struct cifs_io_subrequest *rdata);
 int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid);

 int cifs_async_writev(struct cifs_io_subrequest *wdata);
 void cifs_writev_complete(struct work_struct *work);
+#if 0 // TODO Remove
 struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete);
 void cifs_writedata_release(struct cifs_io_subrequest *rdata);
 static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata)
@@ -602,6 +605,7 @@ static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata)
 	if (refcount_dec_and_test(&wdata->subreq.ref))
 		cifs_writedata_release(wdata);
 }
+#endif
 int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
 			  struct cifs_sb_info *cifs_sb, const unsigned char *path,
 			  char *pbuf,
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index 112a5a2d95b8..722f1dd884de 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1263,7 +1263,7 @@ static void
 cifs_readv_callback(struct mid_q_entry *mid)
 {
 	struct cifs_io_subrequest *rdata = mid->callback_data;
-	struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink);
+	struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink);
 	struct TCP_Server_Info *server = tcon->ses->server;
 	struct smb_rqst rqst = { .rq_iov = rdata->iov,
 				 .rq_nvec = 2,
@@ -1304,7 +1304,12 @@ cifs_readv_callback(struct mid_q_entry *mid)
 		rdata->result = -EIO;
 	}

-	queue_work(cifsiod_wq, &rdata->work);
+	if (rdata->result == 0 || rdata->result == -EAGAIN)
+		iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes);
+	netfs_subreq_terminated(&rdata->subreq,
+				(rdata->result == 0 || rdata->result == -EAGAIN) ?
+				rdata->got_bytes : rdata->result,
+				false);
 	release_mid(mid);
 	add_credits(server, &credits, 0);
 }
@@ -1316,7 +1321,7 @@ cifs_async_readv(struct cifs_io_subrequest *rdata)
 	int rc;
 	READ_REQ *smb = NULL;
 	int wct;
-	struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink);
+	struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink);
 	struct smb_rqst rqst = { .rq_iov = rdata->iov,
 				 .rq_nvec = 2 };
@@ -1341,7 +1346,7 @@ cifs_async_readv(struct cifs_io_subrequest *rdata)
 	smb->hdr.PidHigh = cpu_to_le16((__u16)(rdata->pid >> 16));

 	smb->AndXCommand = 0xFF;	/* none */
-	smb->Fid = rdata->cfile->fid.netfid;
+	smb->Fid = rdata->req->cfile->fid.netfid;
 	smb->OffsetLow = cpu_to_le32(rdata->subreq.start & 0xFFFFFFFF);
 	if (wct == 12)
 		smb->OffsetHigh = cpu_to_le32(rdata->subreq.start >> 32);
@@ -1611,15 +1616,16 @@ static void
 cifs_writev_callback(struct mid_q_entry *mid)
 {
 	struct cifs_io_subrequest *wdata = mid->callback_data;
-	struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink);
-	unsigned int written;
+	struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink);
 	WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf;
 	struct cifs_credits credits = { .value = 1, .instance = 0 };
+	ssize_t result;
+	size_t written;

 	switch (mid->mid_state) {
 	case MID_RESPONSE_RECEIVED:
-		wdata->result = cifs_check_receive(mid, tcon->ses->server, 0);
-		if (wdata->result != 0)
+		result = cifs_check_receive(mid, tcon->ses->server, 0);
+		if (result != 0)
 			break;

 		written = le16_to_cpu(smb->CountHigh);
@@ -1635,20 +1641,20 @@ cifs_writev_callback(struct mid_q_entry *mid)
 		written &= 0xFFFF;

 		if (written < wdata->subreq.len)
-			wdata->result = -ENOSPC;
+			result = -ENOSPC;
 		else
-			wdata->subreq.len = written;
+			result = written;
 		break;
 	case MID_REQUEST_SUBMITTED:
 	case MID_RETRY_NEEDED:
-		wdata->result = -EAGAIN;
+		result = -EAGAIN;
 		break;
 	default:
-		wdata->result = -EIO;
+		result = -EIO;
 		break;
 	}

-	queue_work(cifsiod_wq, &wdata->work);
+	netfs_write_subrequest_terminated(&wdata->subreq, result, true);
 	release_mid(mid);
 	add_credits(tcon->ses->server, &credits, 0);
 }
@@ -1660,7 +1666,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
 	int rc = -EACCES;
 	WRITE_REQ *smb = NULL;
 	int wct;
-	struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink);
+	struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink);
 	struct kvec iov[2];
 	struct smb_rqst rqst = { };
@@ -1670,7 +1676,8 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
 		wct = 12;
 		if (wdata->subreq.start >> 32 > 0) {
 			/* can not handle big offset for old srv */
-			return -EIO;
+			rc = -EIO;
+			goto out;
 		}
 	}
@@ -1682,7 +1689,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
 	smb->hdr.PidHigh = cpu_to_le16((__u16)(wdata->pid >> 16));

 	smb->AndXCommand = 0xFF;	/* none */
-	smb->Fid = wdata->cfile->fid.netfid;
+	smb->Fid = wdata->req->cfile->fid.netfid;
 	smb->OffsetLow = cpu_to_le32(wdata->subreq.start & 0xFFFFFFFF);
 	if (wct == 14)
 		smb->OffsetHigh = cpu_to_le32(wdata->subreq.start >> 32);
@@ -1722,17 +1729,17 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
 		iov[1].iov_len += 4; /* pad bigger by four bytes */
 	}

-	cifs_get_writedata(wdata);
 	rc = cifs_call_async(tcon->ses->server, &rqst, NULL,
 			     cifs_writev_callback, NULL, wdata, 0, NULL);
 	if (rc == 0)
 		cifs_stats_inc(&tcon->stats.cifs_stats.num_writes);
-	else
-		cifs_put_writedata(wdata);

 async_writev_out:
 	cifs_small_buf_release(smb);
+out:
+	if (rc)
+		netfs_write_subrequest_terminated(&wdata->subreq, rc, false);
 	return rc;
 }
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 3112233c4835..4c9125a98d18 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "cifsfs.h"
 #include "cifspdu.h"
@@ -410,6 +411,7 @@ const struct netfs_request_ops cifs_req_ops = {
 	.create_write_requests = cifs_create_write_requests,
 };

+#if 0 // TODO remove 397
 /*
  * Remove the dirty flags from a span of pages.
*/ @@ -534,6 +536,7 @@ void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int le rcu_read_unlock(); } +#endif // end netfslib remove 397 /* * Mark as invalid, all open files on tree connections since they @@ -2494,6 +2497,7 @@ cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, netfs_resize_file(&cifsi->netfs, end_of_write); } +#if 0 // TODO remove 2483 static ssize_t cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data, size_t write_size, loff_t *offset) @@ -2577,6 +2581,7 @@ cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data, free_xid(xid); return total_written; } +#endif // end netfslib remove 2483 struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode, bool fsuid_only) @@ -2782,6 +2787,7 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name, return -ENOENT; } +#if 0 // TODO remove 2773 void cifs_writedata_release(struct cifs_io_subrequest *wdata) { @@ -3474,7 +3480,11 @@ static int cifs_write_end(struct file *file, struct address_space *mapping, return rc; } +#endif // End netfs removal 2773 +/* + * Flush data on a strict file. + */ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end, int datasync) { @@ -3529,6 +3539,9 @@ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end, return rc; } +/* + * Flush data on a non-strict file.
+ */ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync) { unsigned int xid; @@ -3595,6 +3608,7 @@ int cifs_flush(struct file *file, fl_owner_t id) return rc; } +#if 0 // TODO remove 3594 static void collect_uncached_write_data(struct cifs_aio_ctx *ctx); static void @@ -4056,6 +4070,7 @@ ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from) { return __cifs_writev(iocb, from, false); } +#endif // TODO remove 3594 static ssize_t cifs_writev(struct kiocb *iocb, struct iov_iter *from) @@ -4067,7 +4082,10 @@ cifs_writev(struct kiocb *iocb, struct iov_iter *from) struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server; ssize_t rc; - inode_lock(inode); + rc = netfs_start_io_write(inode); + if (rc < 0) + return rc; + /* * We need to hold the sem to be sure nobody modifies lock list * with a brlock that prevents writing. @@ -4081,13 +4099,12 @@ cifs_writev(struct kiocb *iocb, struct iov_iter *from) if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from), server->vals->exclusive_lock_type, 0, NULL, CIFS_WRITE_OP)) - rc = __generic_file_write_iter(iocb, from); + rc = netfs_buffered_write_iter_locked(iocb, from, NULL); else rc = -EACCES; out: up_read(&cinode->lock_sem); - inode_unlock(inode); - + netfs_end_io_write(inode); if (rc > 0) rc = generic_write_sync(iocb, rc); return rc; @@ -4110,9 +4127,9 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) if (CIFS_CACHE_WRITE(cinode)) { if (cap_unix(tcon->ses) && - (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) - && ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) { - written = generic_file_write_iter(iocb, from); + (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) && + ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) { + written = netfs_file_write_iter(iocb, from); goto out; } written = cifs_writev(iocb, from); @@ -4124,7 +4141,7 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) * affected 
pages because it may cause a error with mandatory locks on * these pages but not on the region from pos to ppos+len-1. */ - written = cifs_user_writev(iocb, from); + written = netfs_file_write_iter(iocb, from); if (CIFS_CACHE_READ(cinode)) { /* * We have read level caching and we have just sent a write @@ -4143,6 +4160,7 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) return written; } +#if 0 // TODO remove 4143 static struct cifs_io_subrequest *cifs_readdata_alloc(work_func_t complete) { struct cifs_io_subrequest *rdata; @@ -4582,7 +4600,9 @@ ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to) ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) { return __cifs_readv(iocb, to, false); + } +#endif // end netfslib removal 4143 ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) { @@ -4590,13 +4610,13 @@ ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) struct inode *inode = file_inode(iocb->ki_filp); if (iocb->ki_flags & IOCB_DIRECT) - return cifs_user_readv(iocb, iter); + return netfs_unbuffered_read_iter(iocb, iter); rc = cifs_revalidate_mapping(inode); if (rc) return rc; - return generic_file_read_iter(iocb, iter); + return netfs_file_read_iter(iocb, iter); } ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) @@ -4607,7 +4627,7 @@ ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) int rc; if (iocb->ki_filp->f_flags & O_DIRECT) { - written = cifs_user_writev(iocb, from); + written = netfs_unbuffered_write_iter(iocb, from); if (written > 0 && CIFS_CACHE_READ(cinode)) { cifs_zap_mapping(inode); cifs_dbg(FYI, @@ -4622,17 +4642,15 @@ ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) if (written) return written; - written = generic_file_write_iter(iocb, from); - - if (CIFS_CACHE_WRITE(CIFS_I(inode))) - goto out; + written = netfs_file_write_iter(iocb, from); - rc = filemap_fdatawrite(inode->i_mapping); - if (rc) - 
cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n", - rc, inode); + if (!CIFS_CACHE_WRITE(CIFS_I(inode))) { + rc = filemap_fdatawrite(inode->i_mapping); + if (rc) + cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n", + rc, inode); + } -out: cifs_put_writer(cinode); return written; } @@ -4657,12 +4675,15 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) * pos+len-1. */ if (!CIFS_CACHE_READ(cinode)) - return cifs_user_readv(iocb, to); + return netfs_unbuffered_read_iter(iocb, to); if (cap_unix(tcon->ses) && (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) && - ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) + ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) { + if (iocb->ki_flags & IOCB_DIRECT) + return netfs_unbuffered_read_iter(iocb, to); return generic_file_read_iter(iocb, to); + } /* * We need to hold the sem to be sure nobody modifies lock list @@ -4671,12 +4692,17 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) down_read(&cinode->lock_sem); if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(to), tcon->ses->server->vals->shared_lock_type, - 0, NULL, CIFS_READ_OP)) - rc = generic_file_read_iter(iocb, to); + 0, NULL, CIFS_READ_OP)) { + if (iocb->ki_flags & IOCB_DIRECT) + rc = netfs_unbuffered_read_iter(iocb, to); + else + rc = generic_file_read_iter(iocb, to); + } up_read(&cinode->lock_sem); return rc; } +#if 0 // TODO remove 4633 static ssize_t cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) { @@ -4768,29 +4794,11 @@ cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) free_xid(xid); return total_read; } +#endif // end netfslib remove 4633 -/* - * If the page is mmap'ed into a process' page tables, then we need to make - * sure that it doesn't change while being written back. 
- */ static vm_fault_t cifs_page_mkwrite(struct vm_fault *vmf) { - struct folio *folio = page_folio(vmf->page); - - /* Wait for the folio to be written to the cache before we allow it to - * be modified. We then assume the entire folio will need writing back. - */ -#ifdef CONFIG_CIFS_FSCACHE - if (folio_test_fscache(folio) && - folio_wait_fscache_killable(folio) < 0) - return VM_FAULT_RETRY; -#endif - - folio_wait_writeback(folio); - - if (folio_lock_killable(folio) < 0) - return VM_FAULT_RETRY; - return VM_FAULT_LOCKED; + return netfs_page_mkwrite(vmf, NULL); } static const struct vm_operations_struct cifs_file_vm_ops = { @@ -4836,6 +4844,7 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma) return rc; } +#if 0 // TODO remove 4794 /* * Unlock a bunch of folios in the pagecache. */ @@ -5119,6 +5128,7 @@ static int cifs_read_folio(struct file *file, struct folio *folio) free_xid(xid); return rc; } +#endif // end netfslib remove 4794 static int is_inode_writable(struct cifsInodeInfo *cifs_inode) { @@ -5165,6 +5175,7 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file) return true; } +#if 0 // TODO remove 5152 static int cifs_write_begin(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, struct page **pagep, void **fsdata) @@ -5281,6 +5292,7 @@ static int cifs_launder_folio(struct folio *folio) folio_wait_fscache(folio); return rc; } +#endif // end netfslib remove 5152 void cifs_oplock_break(struct work_struct *work) { @@ -5371,6 +5383,7 @@ void cifs_oplock_break(struct work_struct *work) cifs_done_oplock_break(cinode); } +#if 0 // TODO remove 5333 /* * The presence of cifs_direct_io() in the address space ops vector * allowes open() O_DIRECT flags which would have failed otherwise. 
@@ -5389,6 +5402,7 @@ cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter) */ return -EINVAL; } +#endif // netfs end remove 5333 static int cifs_swap_activate(struct swap_info_struct *sis, struct file *swap_file, sector_t *span) @@ -5465,16 +5479,14 @@ static bool cifs_dirty_folio(struct address_space *mapping, struct folio *folio) #endif const struct address_space_operations cifs_addr_ops = { - .read_folio = cifs_read_folio, - .readahead = cifs_readahead, - .writepages = cifs_writepages, - .write_begin = cifs_write_begin, - .write_end = cifs_write_end, + .read_folio = netfs_read_folio, + .readahead = netfs_readahead, + .writepages = netfs_writepages, .dirty_folio = cifs_dirty_folio, - .release_folio = cifs_release_folio, - .direct_IO = cifs_direct_io, - .invalidate_folio = cifs_invalidate_folio, - .launder_folio = cifs_launder_folio, + .release_folio = netfs_release_folio, + .direct_IO = noop_direct_IO, + .invalidate_folio = netfs_invalidate_folio, + .launder_folio = netfs_launder_folio, .migrate_folio = filemap_migrate_folio, /* * TODO: investigate and if useful we could add an is_dirty_writeback @@ -5490,13 +5502,11 @@ const struct address_space_operations cifs_addr_ops = { * to leave cifs_readahead out of the address space operations. 
*/ const struct address_space_operations cifs_addr_ops_smallbuf = { - .read_folio = cifs_read_folio, - .writepages = cifs_writepages, - .write_begin = cifs_write_begin, - .write_end = cifs_write_end, + .read_folio = netfs_read_folio, + .writepages = netfs_writepages, .dirty_folio = cifs_dirty_folio, - .release_folio = cifs_release_folio, - .invalidate_folio = cifs_invalidate_folio, - .launder_folio = cifs_launder_folio, + .release_folio = netfs_release_folio, + .invalidate_folio = netfs_invalidate_folio, + .launder_folio = netfs_launder_folio, .migrate_folio = filemap_migrate_folio, }; diff --git a/fs/smb/client/fscache.c b/fs/smb/client/fscache.c index e5cad149f5a2..e4cb0938fb15 100644 --- a/fs/smb/client/fscache.c +++ b/fs/smb/client/fscache.c @@ -137,6 +137,7 @@ void cifs_fscache_release_inode_cookie(struct inode *inode) } } +#if 0 // TODO remove /* * Fallback page reading interface. */ @@ -245,3 +246,4 @@ int __cifs_fscache_query_occupancy(struct inode *inode, fscache_end_operation(&cres); return ret; } +#endif diff --git a/fs/smb/client/fscache.h b/fs/smb/client/fscache.h index 84f3b09367d2..7308efeb2d89 100644 --- a/fs/smb/client/fscache.h +++ b/fs/smb/client/fscache.h @@ -74,6 +74,7 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags i_size_read(inode), flags); } +#if 0 // TODO remove extern int __cifs_fscache_query_occupancy(struct inode *inode, pgoff_t first, unsigned int nr_pages, pgoff_t *_data_first, @@ -108,6 +109,7 @@ static inline void cifs_readahead_to_fscache(struct inode *inode, if (cifs_inode_cookie(inode)) __cifs_readahead_to_fscache(inode, pos, len); } +#endif #else /* CONFIG_CIFS_FSCACHE */ static inline @@ -125,6 +127,7 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; } static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {} +#if 0 // TODO remove static 
inline int cifs_fscache_query_occupancy(struct inode *inode, pgoff_t first, unsigned int nr_pages, pgoff_t *_data_first, @@ -143,6 +146,7 @@ cifs_readpage_from_fscache(struct inode *inode, struct page *page) static inline void cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) {} +#endif #endif /* CONFIG_CIFS_FSCACHE */ diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c index 6815b50ec56c..95c3503bcf80 100644 --- a/fs/smb/client/inode.c +++ b/fs/smb/client/inode.c @@ -27,14 +27,29 @@ #include "cifs_ioctl.h" #include "cached_dir.h" +/* + * Set parameters for the netfs library + */ +static void cifs_set_netfs_context(struct inode *inode) +{ + struct cifsInodeInfo *cifs_i = CIFS_I(inode); + struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); + + netfs_inode_init(&cifs_i->netfs, &cifs_req_ops); + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) + __set_bit(NETFS_ICTX_WRITETHROUGH, &cifs_i->netfs.flags); +} + static void cifs_set_ops(struct inode *inode) { struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); + struct netfs_inode *ictx = netfs_inode(inode); switch (inode->i_mode & S_IFMT) { case S_IFREG: inode->i_op = &cifs_file_inode_ops; if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DIRECT_IO) { + set_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags); if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL) inode->i_fop = &cifs_file_direct_nobrl_ops; else @@ -216,8 +231,10 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) if (fattr->cf_flags & CIFS_FATTR_JUNCTION) inode->i_flags |= S_AUTOMOUNT; - if (inode->i_state & I_NEW) + if (inode->i_state & I_NEW) { + cifs_set_netfs_context(inode); cifs_set_ops(inode); + } return 0; } diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 4fde3d506c60..b2e13a707d7f 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4100,10 +4100,12 @@ smb2_new_read_req(void **buf, unsigned int *total_len, req->Length = cpu_to_le32(io_parms->length); req->Offset = 
cpu_to_le64(io_parms->offset); - trace_smb3_read_enter(0 /* xid */, - io_parms->persistent_fid, - io_parms->tcon->tid, io_parms->tcon->ses->Suid, - io_parms->offset, io_parms->length); + trace_smb3_read_enter(rdata ? rdata->rreq->debug_id : 0, + rdata ? rdata->subreq.debug_index : 0, + rdata ? rdata->xid : 0, + io_parms->persistent_fid, + io_parms->tcon->tid, io_parms->tcon->ses->Suid, + io_parms->offset, io_parms->length); #ifdef CONFIG_CIFS_SMB_DIRECT /* * If we want to do a RDMA write, fill in and append @@ -4165,7 +4167,7 @@ static void smb2_readv_callback(struct mid_q_entry *mid) { struct cifs_io_subrequest *rdata = mid->callback_data; - struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); struct TCP_Server_Info *server = rdata->server; struct smb2_hdr *shdr = (struct smb2_hdr *)rdata->iov[0].iov_base; @@ -4235,17 +4237,33 @@ smb2_readv_callback(struct mid_q_entry *mid) #endif if (rdata->result && rdata->result != -ENODATA) { cifs_stats_fail_inc(tcon, SMB2_READ_HE); - trace_smb3_read_err(0 /* xid */, - rdata->cfile->fid.persistent_fid, + trace_smb3_read_err(rdata->rreq->debug_id, + rdata->subreq.debug_index, + rdata->xid, + rdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, rdata->subreq.start, rdata->subreq.len, rdata->result); } else - trace_smb3_read_done(0 /* xid */, - rdata->cfile->fid.persistent_fid, + trace_smb3_read_done(rdata->rreq->debug_id, + rdata->subreq.debug_index, + rdata->xid, + rdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, rdata->subreq.start, rdata->got_bytes); - queue_work(cifsiod_wq, &rdata->work); + if (rdata->result == -ENODATA) { + /* We may have got an EOF error because fallocate + * failed to enlarge the file. 
+ */ + if (rdata->subreq.start < rdata->subreq.rreq->i_size) + rdata->result = 0; + } + if (rdata->result == 0 || rdata->result == -EAGAIN) + iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); + rdata->have_credits = false; + netfs_subreq_terminated(&rdata->subreq, + (rdata->result == 0 || rdata->result == -EAGAIN) ? + rdata->got_bytes : rdata->result, true); release_mid(mid); add_credits(server, &credits, 0); } @@ -4261,7 +4279,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 1 }; struct TCP_Server_Info *server; - struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); unsigned int total_len; int credit_request; @@ -4271,12 +4289,12 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) if (!rdata->server) rdata->server = cifs_pick_channel(tcon->ses); - io_parms.tcon = tlink_tcon(rdata->cfile->tlink); + io_parms.tcon = tlink_tcon(rdata->req->cfile->tlink); io_parms.server = server = rdata->server; io_parms.offset = rdata->subreq.start; io_parms.length = rdata->subreq.len; - io_parms.persistent_fid = rdata->cfile->fid.persistent_fid; - io_parms.volatile_fid = rdata->cfile->fid.volatile_fid; + io_parms.persistent_fid = rdata->req->cfile->fid.persistent_fid; + io_parms.volatile_fid = rdata->req->cfile->fid.volatile_fid; io_parms.pid = rdata->pid; rc = smb2_new_read_req( @@ -4316,7 +4334,9 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) &rdata->credits); if (rc) { cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE); - trace_smb3_read_err(0 /* xid */, io_parms.persistent_fid, + trace_smb3_read_err(rdata->rreq->debug_id, + rdata->subreq.debug_index, + rdata->xid, io_parms.persistent_fid, io_parms.tcon->tid, io_parms.tcon->ses->Suid, io_parms.offset, io_parms.length, rc); @@ -4367,22 +4387,23 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, if (rc != -ENODATA) { cifs_stats_fail_inc(io_parms->tcon, 
SMB2_READ_HE); cifs_dbg(VFS, "Send error in read = %d\n", rc); - trace_smb3_read_err(xid, + trace_smb3_read_err(0, 0, xid, req->PersistentFileId, io_parms->tcon->tid, ses->Suid, io_parms->offset, io_parms->length, rc); } else - trace_smb3_read_done(xid, req->PersistentFileId, io_parms->tcon->tid, + trace_smb3_read_done(0, 0, xid, + req->PersistentFileId, io_parms->tcon->tid, ses->Suid, io_parms->offset, 0); free_rsp_buf(resp_buftype, rsp_iov.iov_base); cifs_small_buf_release(req); return rc == -ENODATA ? 0 : rc; } else - trace_smb3_read_done(xid, - req->PersistentFileId, - io_parms->tcon->tid, ses->Suid, - io_parms->offset, io_parms->length); + trace_smb3_read_done(0, 0, xid, + req->PersistentFileId, + io_parms->tcon->tid, ses->Suid, + io_parms->offset, io_parms->length); cifs_small_buf_release(req); @@ -4416,11 +4437,12 @@ static void smb2_writev_callback(struct mid_q_entry *mid) { struct cifs_io_subrequest *wdata = mid->callback_data; - struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); struct TCP_Server_Info *server = wdata->server; - unsigned int written; struct smb2_write_rsp *rsp = (struct smb2_write_rsp *)mid->resp_buf; struct cifs_credits credits = { .value = 0, .instance = 0 }; + ssize_t result = 0; + size_t written; WARN_ONCE(wdata->server != mid->server, "wdata server %p != mid server %p", @@ -4430,8 +4452,8 @@ smb2_writev_callback(struct mid_q_entry *mid) case MID_RESPONSE_RECEIVED: credits.value = le16_to_cpu(rsp->hdr.CreditRequest); credits.instance = server->reconnect_instance; - wdata->result = smb2_check_receive(mid, server, 0); - if (wdata->result != 0) + result = smb2_check_receive(mid, server, 0); + if (result != 0) break; written = le32_to_cpu(rsp->DataLength); @@ -4448,17 +4470,18 @@ smb2_writev_callback(struct mid_q_entry *mid) wdata->result = -ENOSPC; else wdata->subreq.len = written; + iov_iter_advance(&wdata->subreq.io_iter, written); break; case MID_REQUEST_SUBMITTED: 
case MID_RETRY_NEEDED: - wdata->result = -EAGAIN; + result = -EAGAIN; break; case MID_RESPONSE_MALFORMED: credits.value = le16_to_cpu(rsp->hdr.CreditRequest); credits.instance = server->reconnect_instance; fallthrough; default: - wdata->result = -EIO; + result = -EIO; break; } #ifdef CONFIG_CIFS_SMB_DIRECT @@ -4474,10 +4497,10 @@ smb2_writev_callback(struct mid_q_entry *mid) wdata->mr = NULL; } #endif - if (wdata->result) { + if (result) { cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); trace_smb3_write_err(0 /* no xid */, - wdata->cfile->fid.persistent_fid, + wdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, wdata->subreq.start, wdata->subreq.len, wdata->result); if (wdata->result == -ENOSPC) @@ -4485,11 +4508,11 @@ smb2_writev_callback(struct mid_q_entry *mid) tcon->tree_name); } else trace_smb3_write_done(0 /* no xid */, - wdata->cfile->fid.persistent_fid, + wdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, wdata->subreq.start, wdata->subreq.len); - queue_work(cifsiod_wq, &wdata->work); + netfs_write_subrequest_terminated(&wdata->subreq, result ?: written, true); release_mid(mid); add_credits(server, &credits, 0); } @@ -4501,7 +4524,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) int rc = -EACCES, flags = 0; struct smb2_write_req *req = NULL; struct smb2_hdr *shdr; - struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); struct TCP_Server_Info *server = wdata->server; struct kvec iov[1]; struct smb_rqst rqst = { }; @@ -4522,8 +4545,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) .server = server, .offset = wdata->subreq.start, .length = wdata->subreq.len, - .persistent_fid = wdata->cfile->fid.persistent_fid, - .volatile_fid = wdata->cfile->fid.volatile_fid, + .persistent_fid = wdata->req->cfile->fid.persistent_fid, + .volatile_fid = wdata->req->cfile->fid.volatile_fid, .pid = wdata->pid, }; io_parms = &_io_parms; @@ -4531,7 +4554,7 @@ 
smb2_async_writev(struct cifs_io_subrequest *wdata) rc = smb2_plain_req_init(SMB2_WRITE, tcon, server, (void **) &req, &total_len); if (rc) - return rc; + goto out; if (smb3_encryption_required(tcon)) flags |= CIFS_TRANSFORM_REQ; @@ -4628,7 +4651,6 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) flags |= CIFS_HAS_CREDITS; } - cifs_get_writedata(wdata); rc = cifs_call_async(server, &rqst, NULL, smb2_writev_callback, NULL, wdata, flags, &wdata->credits); @@ -4640,12 +4662,14 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) io_parms->offset, io_parms->length, rc); - cifs_put_writedata(wdata); cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); } async_writev_out: cifs_small_buf_release(req); +out: + if (rc) + netfs_write_subrequest_terminated(&wdata->subreq, rc, true); return rc; } diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h index de199ec9f726..b24264aeac13 100644 --- a/fs/smb/client/trace.h +++ b/fs/smb/client/trace.h @@ -21,6 +21,62 @@ /* For logging errors in read or write */ DECLARE_EVENT_CLASS(smb3_rw_err_class, + TP_PROTO(unsigned int rreq_debug_id, + unsigned int rreq_debug_index, + unsigned int xid, + __u64 fid, + __u32 tid, + __u64 sesid, + __u64 offset, + __u32 len, + int rc), + TP_ARGS(rreq_debug_id, rreq_debug_index, + xid, fid, tid, sesid, offset, len, rc), + TP_STRUCT__entry( + __field(unsigned int, rreq_debug_id) + __field(unsigned int, rreq_debug_index) + __field(unsigned int, xid) + __field(__u64, fid) + __field(__u32, tid) + __field(__u64, sesid) + __field(__u64, offset) + __field(__u32, len) + __field(int, rc) + ), + TP_fast_assign( + __entry->rreq_debug_id = rreq_debug_id; + __entry->rreq_debug_index = rreq_debug_index; + __entry->xid = xid; + __entry->fid = fid; + __entry->tid = tid; + __entry->sesid = sesid; + __entry->offset = offset; + __entry->len = len; + __entry->rc = rc; + ), + TP_printk("\tR=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", + __entry->rreq_debug_id, __entry->rreq_debug_index, + 
__entry->xid, __entry->sesid, __entry->tid, __entry->fid, + __entry->offset, __entry->len, __entry->rc) +) + +#define DEFINE_SMB3_RW_ERR_EVENT(name) \ +DEFINE_EVENT(smb3_rw_err_class, smb3_##name, \ + TP_PROTO(unsigned int rreq_debug_id, \ + unsigned int rreq_debug_index, \ + unsigned int xid, \ + __u64 fid, \ + __u32 tid, \ + __u64 sesid, \ + __u64 offset, \ + __u32 len, \ + int rc), \ + TP_ARGS(rreq_debug_id, rreq_debug_index, xid, fid, tid, sesid, offset, len, rc)) + +DEFINE_SMB3_RW_ERR_EVENT(read_err); + +/* For logging errors in other file I/O ops */ +DECLARE_EVENT_CLASS(smb3_other_err_class, TP_PROTO(unsigned int xid, __u64 fid, __u32 tid, @@ -52,8 +108,8 @@ DECLARE_EVENT_CLASS(smb3_rw_err_class, __entry->offset, __entry->len, __entry->rc) ) -#define DEFINE_SMB3_RW_ERR_EVENT(name) \ -DEFINE_EVENT(smb3_rw_err_class, smb3_##name, \ +#define DEFINE_SMB3_OTHER_ERR_EVENT(name) \ +DEFINE_EVENT(smb3_other_err_class, smb3_##name, \ TP_PROTO(unsigned int xid, \ __u64 fid, \ __u32 tid, \ @@ -63,15 +119,67 @@ DEFINE_EVENT(smb3_rw_err_class, smb3_##name, \ int rc), \ TP_ARGS(xid, fid, tid, sesid, offset, len, rc)) -DEFINE_SMB3_RW_ERR_EVENT(write_err); -DEFINE_SMB3_RW_ERR_EVENT(read_err); -DEFINE_SMB3_RW_ERR_EVENT(query_dir_err); -DEFINE_SMB3_RW_ERR_EVENT(zero_err); -DEFINE_SMB3_RW_ERR_EVENT(falloc_err); +DEFINE_SMB3_OTHER_ERR_EVENT(write_err); +DEFINE_SMB3_OTHER_ERR_EVENT(query_dir_err); +DEFINE_SMB3_OTHER_ERR_EVENT(zero_err); +DEFINE_SMB3_OTHER_ERR_EVENT(falloc_err); /* For logging successful read or write */ DECLARE_EVENT_CLASS(smb3_rw_done_class, + TP_PROTO(unsigned int rreq_debug_id, + unsigned int rreq_debug_index, + unsigned int xid, + __u64 fid, + __u32 tid, + __u64 sesid, + __u64 offset, + __u32 len), + TP_ARGS(rreq_debug_id, rreq_debug_index, + xid, fid, tid, sesid, offset, len), + TP_STRUCT__entry( + __field(unsigned int, rreq_debug_id) + __field(unsigned int, rreq_debug_index) + __field(unsigned int, xid) + __field(__u64, fid) + __field(__u32, tid) + 
__field(__u64, sesid) + __field(__u64, offset) + __field(__u32, len) + ), + TP_fast_assign( + __entry->rreq_debug_id = rreq_debug_id; + __entry->rreq_debug_index = rreq_debug_index; + __entry->xid = xid; + __entry->fid = fid; + __entry->tid = tid; + __entry->sesid = sesid; + __entry->offset = offset; + __entry->len = len; + ), + TP_printk("R=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x", + __entry->rreq_debug_id, __entry->rreq_debug_index, + __entry->xid, __entry->sesid, __entry->tid, __entry->fid, + __entry->offset, __entry->len) +) + +#define DEFINE_SMB3_RW_DONE_EVENT(name) \ +DEFINE_EVENT(smb3_rw_done_class, smb3_##name, \ + TP_PROTO(unsigned int rreq_debug_id, \ + unsigned int rreq_debug_index, \ + unsigned int xid, \ + __u64 fid, \ + __u32 tid, \ + __u64 sesid, \ + __u64 offset, \ + __u32 len), \ + TP_ARGS(rreq_debug_id, rreq_debug_index, xid, fid, tid, sesid, offset, len)) + +DEFINE_SMB3_RW_DONE_EVENT(read_enter); +DEFINE_SMB3_RW_DONE_EVENT(read_done); + +/* For logging successful other op */ +DECLARE_EVENT_CLASS(smb3_other_done_class, TP_PROTO(unsigned int xid, __u64 fid, __u32 tid, @@ -100,8 +208,8 @@ DECLARE_EVENT_CLASS(smb3_rw_done_class, __entry->offset, __entry->len) ) -#define DEFINE_SMB3_RW_DONE_EVENT(name) \ -DEFINE_EVENT(smb3_rw_done_class, smb3_##name, \ +#define DEFINE_SMB3_OTHER_DONE_EVENT(name) \ +DEFINE_EVENT(smb3_other_done_class, smb3_##name, \ TP_PROTO(unsigned int xid, \ __u64 fid, \ __u32 tid, \ @@ -110,16 +218,14 @@ DEFINE_EVENT(smb3_rw_done_class, smb3_##name, \ __u32 len), \ TP_ARGS(xid, fid, tid, sesid, offset, len)) -DEFINE_SMB3_RW_DONE_EVENT(write_enter); -DEFINE_SMB3_RW_DONE_EVENT(read_enter); -DEFINE_SMB3_RW_DONE_EVENT(query_dir_enter); -DEFINE_SMB3_RW_DONE_EVENT(zero_enter); -DEFINE_SMB3_RW_DONE_EVENT(falloc_enter); -DEFINE_SMB3_RW_DONE_EVENT(write_done); -DEFINE_SMB3_RW_DONE_EVENT(read_done); -DEFINE_SMB3_RW_DONE_EVENT(query_dir_done); -DEFINE_SMB3_RW_DONE_EVENT(zero_done); 
-DEFINE_SMB3_RW_DONE_EVENT(falloc_done); +DEFINE_SMB3_OTHER_DONE_EVENT(write_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(query_dir_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(zero_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(falloc_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(write_done); +DEFINE_SMB3_OTHER_DONE_EVENT(query_dir_done); +DEFINE_SMB3_OTHER_DONE_EVENT(zero_done); +DEFINE_SMB3_OTHER_DONE_EVENT(falloc_done); /* For logging successful set EOF (truncate) */ DECLARE_EVENT_CLASS(smb3_eof_class, diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index 9d22b1cdfc9f..b77110b11a19 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -1807,8 +1807,11 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid) length = data_len; /* An RDMA read is already done. */ else #endif + { length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter, data_len); + iov_iter_revert(&rdata->subreq.io_iter, data_len); + } if (length > 0) rdata->got_bytes += length; server->total_read += length;

From patchwork Fri Oct 13 16:04:20 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421192
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula, linux-cachefs@redhat.com
Subject: [RFC PATCH 51/53] cifs: Remove some code that's no longer used, part 1
Date: Fri, 13 Oct 2023 17:04:20 +0100
Message-ID:
<20231013160423.2218093-52-dhowells@redhat.com> In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com> References: <20231013160423.2218093-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 78D5B40024 X-Stat-Signature: n3qxr7yiq4n3cab4j94ujraam1dy6dja X-Rspam-User: X-HE-Tag: 1697213256-71609 X-HE-Meta: U2FsdGVkX198kGwe1Na9x7cul8SSroxZcpgUP5DFu/bTx8XMtBzJqJ5VmP1onvylvkAxWvT1dAISl+7CbQg0piD5AfOaF56hnwTRgIsDaxLtNu2o3WSHZfMBovc8Nmc5oOlz9ueSeQ2MC3gumko96uFMEGu29MqkVGixUZHPzYYQkoNWQMLs5IznHzUktL2mFW+PGx92GQlQrsjxLjbLCFabvxvZwNHKz9pnK4prOO4W4Og/vrbfhZC5tli3itsFHTaAjZNrLK49Rhe5UYYP6q4/lsIbxEhX8LXHUt28J6eVDxXOlJaejs0U8ysk+0wLT+0NA/Jm4NvJooXnthlt9Vy1o4x47viJvKkyPETfQ5OxHM9x0hgtJ3PvRAc7DBOsWufECUagdPj83q1+l4WdW+lAjQSuqzflQJHZEd3LJosqw9UiD3kdlMw3QY2fP1nhnNrrZFi23pirLE1drSJbzjWjtTKVid3ERzGfPIlSj7QiAqn2qQSaU0OZpt/EFuxkJyhDbad4qfM9V0TrJjTEscJ+wX6LnmWFT1WhhAcrM2KIebyfeVwWnm1oCnbTzsDLirCmENiHzQdq5ocYwXGeiPVS4HN8czBBxBzNmuHn73AyOuVuMvh51bpHQ243M3+kfoPijkzUpF5OMNTIZd//imH2HNHT3KnLhD6qo3O3d+j6edJupboqIpe+0pnpWkpTU13IoShfnYoVJk+AD4HYaLOsVFfdCECVMXA1uiO5tO2pfdLn7wg/HrdsOOm2vI6ifWsKKOFoEZtRq+3VQKXOwdIvRFmSOQN6Qyl4xNcL87foEkH5Yy6CZhz7nyBlJItwMZuCUqHkTtuJfRfV83WkpUd5Z8XJ8Y71VR7uXBUUP8+G1U8e/Do6WvBkTTuXSeceTMp9pnvgNsTZ7w5ou5A0akI/VmioF/eYpSlKCJyIcen69vOGQTEhcB6SAM/1QAhT1CSUXukAXQ6VOmzobxN wpYMepww I9SyED4wyEh99qA2kCTl9tRn29kE7yZ7ZhXwVz9cxacwYoed1rq7hg8FzraYjdyGYbLha0GUMR0rQ9QqYE5Pmwd62h+YlEhNuljx/B1GJG4xpZUfk+KKoFRRw8t+EiFgWpXyvGJu+cDjYQE0yBrIy/6GuGnt2HXlTYei6cr+ZIs45xbHhEZD2oac9ufxhw4e+1OQz/Wh78MLUnCflbjMVRgvU+GagBaM2N/FM6tEoLe1tb2hUrSoFiySIC0swCQaRvTFcpAD080KQ8xlSbm0fR0xiFCJDfCltF3aymDbHTaMzSJZhkjIHVvBts/vaOwqo7Z4+ARtg3VuKS662fJ7aRPlqtExxxKuL3m8XV/N3xSzdnHCL3O9XZ8cm4XBStFumWmRUG5/lW3h3DPJXGUFxJyBm8BkcPPJyPAYlEvkU/XkEvnwc/H8ShwvhEvnj7vwgXIpn X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Remove 
some code that was #if'd out with the netfslib conversion. This is split into parts for file.c as the diff generator otherwise produces a hard to read diff for part of it where a big chunk is cut out. Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 12 - fs/smb/client/cifsproto.h | 21 -- fs/smb/client/file.c | 639 -------------------------------------- fs/smb/client/fscache.c | 111 ------- fs/smb/client/fscache.h | 58 ---- 5 files changed, 841 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index a5e114eeeb8b..01ea1206ec7e 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1443,18 +1443,6 @@ struct cifs_io_subrequest { struct smbd_mr *mr; #endif struct cifs_credits credits; - -#if 0 // TODO: Remove following elements - struct list_head list; - struct completion done; - struct work_struct work; - struct cifsFileInfo *cfile; - struct address_space *mapping; - struct cifs_aio_ctx *ctx; - enum writeback_sync_modes sync_mode; - bool uncached; - struct bio_vec *bv; -#endif }; /* diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 52ff5e889af2..25985b56cd7f 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -580,32 +580,11 @@ void __cifs_put_smb_ses(struct cifs_ses *ses); extern struct cifs_ses * cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx); -#if 0 // TODO Remove -void cifs_readdata_release(struct cifs_io_subrequest *rdata); -static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) -{ - if (refcount_dec_and_test(&rdata->subreq.ref)) - cifs_readdata_release(rdata); -} -#endif int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid); int 
cifs_async_writev(struct cifs_io_subrequest *wdata); void cifs_writev_complete(struct work_struct *work); -#if 0 // TODO Remove -struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete); -void cifs_writedata_release(struct cifs_io_subrequest *rdata); -static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata) -{ - refcount_inc(&wdata->subreq.ref); -} -static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata) -{ - if (refcount_dec_and_test(&wdata->subreq.ref)) - cifs_writedata_release(wdata); -} -#endif int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_sb_info *cifs_sb, const unsigned char *path, char *pbuf, diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 4c9125a98d18..2c64dccdc81d 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -411,133 +411,6 @@ const struct netfs_request_ops cifs_req_ops = { .create_write_requests = cifs_create_write_requests, }; -#if 0 // TODO remove 397 -/* - * Remove the dirty flags from a span of pages. - */ -static void cifs_undirty_folios(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each_marked(&xas, folio, end, PAGECACHE_TAG_DIRTY) { - if (xas_retry(&xas, folio)) - continue; - xas_pause(&xas); - rcu_read_unlock(); - folio_lock(folio); - folio_clear_dirty_for_io(folio); - folio_unlock(folio); - rcu_read_lock(); - } - - rcu_read_unlock(); -} - -/* - * Completion of write to server. 
- */ -void cifs_pages_written_back(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - if (!len) - return; - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (xas_retry(&xas, folio)) - continue; - if (!folio_test_writeback(folio)) { - WARN_ONCE(1, "bad %x @%llx page %lx %lx\n", - len, start, folio_index(folio), end); - continue; - } - - folio_detach_private(folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); -} - -/* - * Failure of write to server. - */ -void cifs_pages_write_failed(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - if (!len) - return; - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (xas_retry(&xas, folio)) - continue; - if (!folio_test_writeback(folio)) { - WARN_ONCE(1, "bad %x @%llx page %lx %lx\n", - len, start, folio_index(folio), end); - continue; - } - - folio_set_error(folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); -} - -/* - * Redirty pages after a temporary failure. 
- */ -void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - if (!len) - return; - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (!folio_test_writeback(folio)) { - WARN_ONCE(1, "bad %x @%llx page %lx %lx\n", - len, start, folio_index(folio), end); - continue; - } - - filemap_dirty_folio(folio->mapping, folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); -} -#endif // end netfslib remove 397 - /* * Mark as invalid, all open files on tree connections since they * were closed when session to server was lost. @@ -2497,92 +2370,6 @@ cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, netfs_resize_file(&cifsi->netfs, end_of_write); } -#if 0 // TODO remove 2483 -static ssize_t -cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data, - size_t write_size, loff_t *offset) -{ - int rc = 0; - unsigned int bytes_written = 0; - unsigned int total_written; - struct cifs_tcon *tcon; - struct TCP_Server_Info *server; - unsigned int xid; - struct dentry *dentry = open_file->dentry; - struct cifsInodeInfo *cifsi = CIFS_I(d_inode(dentry)); - struct cifs_io_parms io_parms = {0}; - - cifs_dbg(FYI, "write %zd bytes to offset %lld of %pd\n", - write_size, *offset, dentry); - - tcon = tlink_tcon(open_file->tlink); - server = tcon->ses->server; - - if (!server->ops->sync_write) - return -ENOSYS; - - xid = get_xid(); - - for (total_written = 0; write_size > total_written; - total_written += bytes_written) { - rc = -EAGAIN; - while (rc == -EAGAIN) { - struct kvec iov[2]; - unsigned int len; - - if (open_file->invalidHandle) { - /* we could deadlock if we called - filemap_fdatawait from here so tell - reopen_file not to flush data to - server now */ - rc = cifs_reopen_file(open_file, false); - if (rc != 0) - break; - } 
- - len = min(server->ops->wp_retry_size(d_inode(dentry)), - (unsigned int)write_size - total_written); - /* iov[0] is reserved for smb header */ - iov[1].iov_base = (char *)write_data + total_written; - iov[1].iov_len = len; - io_parms.pid = pid; - io_parms.tcon = tcon; - io_parms.offset = *offset; - io_parms.length = len; - rc = server->ops->sync_write(xid, &open_file->fid, - &io_parms, &bytes_written, iov, 1); - } - if (rc || (bytes_written == 0)) { - if (total_written) - break; - else { - free_xid(xid); - return rc; - } - } else { - spin_lock(&d_inode(dentry)->i_lock); - cifs_update_eof(cifsi, *offset, bytes_written); - spin_unlock(&d_inode(dentry)->i_lock); - *offset += bytes_written; - } - } - - cifs_stats_bytes_written(tcon, total_written); - - if (total_written > 0) { - spin_lock(&d_inode(dentry)->i_lock); - if (*offset > d_inode(dentry)->i_size) { - i_size_write(d_inode(dentry), *offset); - d_inode(dentry)->i_blocks = (512 - 1 + *offset) >> 9; - } - spin_unlock(&d_inode(dentry)->i_lock); - } - mark_inode_dirty_sync(d_inode(dentry)); - free_xid(xid); - return total_written; -} -#endif // end netfslib remove 2483 - struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode, bool fsuid_only) { @@ -4844,292 +4631,6 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma) return rc; } -#if 0 // TODO remove 4794 -/* - * Unlock a bunch of folios in the pagecache. 
- */ -static void cifs_unlock_folios(struct address_space *mapping, pgoff_t first, pgoff_t last) -{ - struct folio *folio; - XA_STATE(xas, &mapping->i_pages, first); - - rcu_read_lock(); - xas_for_each(&xas, folio, last) { - folio_unlock(folio); - } - rcu_read_unlock(); -} - -static void cifs_readahead_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *rdata = container_of(work, - struct cifs_io_subrequest, work); - struct folio *folio; - pgoff_t last; - bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes); - - XA_STATE(xas, &rdata->mapping->i_pages, rdata->subreq.start / PAGE_SIZE); - - if (good) - cifs_readahead_to_fscache(rdata->mapping->host, - rdata->subreq.start, rdata->subreq.len); - - if (iov_iter_count(&rdata->subreq.io_iter) > 0) - iov_iter_zero(iov_iter_count(&rdata->subreq.io_iter), &rdata->subreq.io_iter); - - last = (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE; - - rcu_read_lock(); - xas_for_each(&xas, folio, last) { - if (good) { - flush_dcache_folio(folio); - folio_mark_uptodate(folio); - } - folio_unlock(folio); - } - rcu_read_unlock(); - - cifs_put_readdata(rdata); -} - -static void cifs_readahead(struct readahead_control *ractl) -{ - struct cifsFileInfo *open_file = ractl->file->private_data; - struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file); - struct TCP_Server_Info *server; - unsigned int xid, nr_pages, cache_nr_pages = 0; - unsigned int ra_pages; - pgoff_t next_cached = ULONG_MAX, ra_index; - bool caching = fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) && - cifs_inode_cookie(ractl->mapping->host)->cache_priv; - bool check_cache = caching; - pid_t pid; - int rc = 0; - - /* Note that readahead_count() lags behind our dequeuing of pages from - * the ractl, wo we have to keep track for ourselves. 
- */ - ra_pages = readahead_count(ractl); - ra_index = readahead_index(ractl); - - xid = get_xid(); - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); - - cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n", - __func__, ractl->file, ractl->mapping, ra_pages); - - /* - * Chop the readahead request up into rsize-sized read requests. - */ - while ((nr_pages = ra_pages)) { - unsigned int i; - struct cifs_io_subrequest *rdata; - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - struct folio *folio; - pgoff_t fsize; - size_t rsize; - - /* - * Find out if we have anything cached in the range of - * interest, and if so, where the next chunk of cached data is. - */ - if (caching) { - if (check_cache) { - rc = cifs_fscache_query_occupancy( - ractl->mapping->host, ra_index, nr_pages, - &next_cached, &cache_nr_pages); - if (rc < 0) - caching = false; - check_cache = false; - } - - if (ra_index == next_cached) { - /* - * TODO: Send a whole batch of pages to be read - * by the cache. - */ - folio = readahead_folio(ractl); - fsize = folio_nr_pages(folio); - ra_pages -= fsize; - ra_index += fsize; - if (cifs_readpage_from_fscache(ractl->mapping->host, - &folio->page) < 0) { - /* - * TODO: Deal with cache read failure - * here, but for the moment, delegate - * that to readpage. 
- */ - caching = false; - } - folio_unlock(folio); - next_cached += fsize; - cache_nr_pages -= fsize; - if (cache_nr_pages == 0) - check_cache = true; - continue; - } - } - - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, true); - if (rc) { - if (rc == -EAGAIN) - continue; - break; - } - } - - if (cifs_sb->ctx->rsize == 0) - cifs_sb->ctx->rsize = - server->ops->negotiate_rsize(tlink_tcon(open_file->tlink), - cifs_sb->ctx); - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, - &rsize, credits); - if (rc) - break; - nr_pages = min_t(size_t, rsize / PAGE_SIZE, ra_pages); - if (next_cached != ULONG_MAX) - nr_pages = min_t(size_t, nr_pages, next_cached - ra_index); - - /* - * Give up immediately if rsize is too small to read an entire - * page. The VFS will fall back to readpage. We should never - * reach this point however since we set ra_pages to 0 when the - * rsize is smaller than a cache page. - */ - if (unlikely(!nr_pages)) { - add_credits_and_wake_if(server, credits, 0); - break; - } - - rdata = cifs_readdata_alloc(cifs_readahead_complete); - if (!rdata) { - /* best to give up if we're out of mem */ - add_credits_and_wake_if(server, credits, 0); - break; - } - - rdata->subreq.start = ra_index * PAGE_SIZE; - rdata->subreq.len = nr_pages * PAGE_SIZE; - rdata->cfile = cifsFileInfo_get(open_file); - rdata->server = server; - rdata->mapping = ractl->mapping; - rdata->pid = pid; - rdata->credits = credits_on_stack; - - for (i = 0; i < nr_pages; i++) { - if (!readahead_folio(ractl)) - WARN_ON(1); - } - ra_pages -= nr_pages; - ra_index += nr_pages; - - iov_iter_xarray(&rdata->subreq.io_iter, ITER_DEST, &rdata->mapping->i_pages, - rdata->subreq.start, rdata->subreq.len); - - rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); - if (!rc) { - if (rdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = server->ops->async_readv(rdata); - } - - if (rc) { - add_credits_and_wake_if(server, &rdata->credits, 0); - 
cifs_unlock_folios(rdata->mapping, - rdata->subreq.start / PAGE_SIZE, - (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE); - /* Fallback to the readpage in error/reconnect cases */ - cifs_put_readdata(rdata); - break; - } - - cifs_put_readdata(rdata); - } - - free_xid(xid); -} - -/* - * cifs_readpage_worker must be called with the page pinned - */ -static int cifs_readpage_worker(struct file *file, struct page *page, - loff_t *poffset) -{ - char *read_data; - int rc; - - /* Is the page cached? */ - rc = cifs_readpage_from_fscache(file_inode(file), page); - if (rc == 0) - goto read_complete; - - read_data = kmap(page); - /* for reads over a certain size could initiate async read ahead */ - - rc = cifs_read(file, read_data, PAGE_SIZE, poffset); - - if (rc < 0) - goto io_error; - else - cifs_dbg(FYI, "Bytes read %d\n", rc); - - /* we do not want atime to be less than mtime, it broke some apps */ - file_inode(file)->i_atime = current_time(file_inode(file)); - if (timespec64_compare(&(file_inode(file)->i_atime), &(file_inode(file)->i_mtime))) - file_inode(file)->i_atime = file_inode(file)->i_mtime; - else - file_inode(file)->i_atime = current_time(file_inode(file)); - - if (PAGE_SIZE > rc) - memset(read_data + rc, 0, PAGE_SIZE - rc); - - flush_dcache_page(page); - SetPageUptodate(page); - rc = 0; - -io_error: - kunmap(page); - -read_complete: - unlock_page(page); - return rc; -} - -static int cifs_read_folio(struct file *file, struct folio *folio) -{ - struct page *page = &folio->page; - loff_t offset = page_file_offset(page); - int rc = -EACCES; - unsigned int xid; - - xid = get_xid(); - - if (file->private_data == NULL) { - rc = -EBADF; - free_xid(xid); - return rc; - } - - cifs_dbg(FYI, "read_folio %p at offset %d 0x%x\n", - page, (int)offset, (int)offset); - - rc = cifs_readpage_worker(file, page, &offset); - - free_xid(xid); - return rc; -} -#endif // end netfslib remove 4794 - static int is_inode_writable(struct cifsInodeInfo *cifs_inode) { struct 
cifsFileInfo *open_file; @@ -5175,125 +4676,6 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file) return true; } -#if 0 // TODO remove 5152 -static int cifs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, - struct page **pagep, void **fsdata) -{ - int oncethru = 0; - pgoff_t index = pos >> PAGE_SHIFT; - loff_t offset = pos & (PAGE_SIZE - 1); - loff_t page_start = pos & PAGE_MASK; - loff_t i_size; - struct page *page; - int rc = 0; - - cifs_dbg(FYI, "write_begin from %lld len %d\n", (long long)pos, len); - -start: - page = grab_cache_page_write_begin(mapping, index); - if (!page) { - rc = -ENOMEM; - goto out; - } - - if (PageUptodate(page)) - goto out; - - /* - * If we write a full page it will be up to date, no need to read from - * the server. If the write is short, we'll end up doing a sync write - * instead. - */ - if (len == PAGE_SIZE) - goto out; - - /* - * optimize away the read when we have an oplock, and we're not - * expecting to use any of the data we'd be reading in. That - * is, when the page lies beyond the EOF, or straddles the EOF - * and the write will cover all of the existing data. - */ - if (CIFS_CACHE_READ(CIFS_I(mapping->host))) { - i_size = i_size_read(mapping->host); - if (page_start >= i_size || - (offset == 0 && (pos + len) >= i_size)) { - zero_user_segments(page, 0, offset, - offset + len, - PAGE_SIZE); - /* - * PageChecked means that the parts of the page - * to which we're not writing are considered up - * to date. Once the data is copied to the - * page, it can be set uptodate. - */ - SetPageChecked(page); - goto out; - } - } - - if ((file->f_flags & O_ACCMODE) != O_WRONLY && !oncethru) { - /* - * might as well read a page, it is fast enough. If we get - * an error, we don't need to return it. cifs_write_end will - * do a sync write instead since PG_uptodate isn't set. 
- */ - cifs_readpage_worker(file, page, &page_start); - put_page(page); - oncethru = 1; - goto start; - } else { - /* we could try using another file handle if there is one - - but how would we lock it to prevent close of that handle - racing with this read? In any case - this will be written out by write_end so is fine */ - } -out: - *pagep = page; - return rc; -} - -static bool cifs_release_folio(struct folio *folio, gfp_t gfp) -{ - if (folio_test_private(folio)) - return 0; - if (folio_test_fscache(folio)) { - if (current_is_kswapd() || !(gfp & __GFP_FS)) - return false; - folio_wait_fscache(folio); - } - fscache_note_page_release(cifs_inode_cookie(folio->mapping->host)); - return true; -} - -static void cifs_invalidate_folio(struct folio *folio, size_t offset, - size_t length) -{ - folio_wait_fscache(folio); -} - -static int cifs_launder_folio(struct folio *folio) -{ - int rc = 0; - loff_t range_start = folio_pos(folio); - loff_t range_end = range_start + folio_size(folio); - struct writeback_control wbc = { - .sync_mode = WB_SYNC_ALL, - .nr_to_write = 0, - .range_start = range_start, - .range_end = range_end, - }; - - cifs_dbg(FYI, "Launder page: %lu\n", folio->index); - - if (folio_clear_dirty_for_io(folio)) - rc = cifs_writepage_locked(&folio->page, &wbc); - - folio_wait_fscache(folio); - return rc; -} -#endif // end netfslib remove 5152 - void cifs_oplock_break(struct work_struct *work) { struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo, @@ -5383,27 +4765,6 @@ void cifs_oplock_break(struct work_struct *work) cifs_done_oplock_break(cinode); } -#if 0 // TODO remove 5333 -/* - * The presence of cifs_direct_io() in the address space ops vector - * allowes open() O_DIRECT flags which would have failed otherwise. - * - * In the non-cached mode (mount with cache=none), we shunt off direct read and write requests - * so this method should never be called. - * - * Direct IO is not yet supported in the cached mode. 
- */ -static ssize_t -cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter) -{ - /* - * FIXME - * Eventually need to support direct IO for non forcedirectio mounts - */ - return -EINVAL; -} -#endif // netfs end remove 5333 - static int cifs_swap_activate(struct swap_info_struct *sis, struct file *swap_file, sector_t *span) { diff --git a/fs/smb/client/fscache.c b/fs/smb/client/fscache.c index e4cb0938fb15..bd9284923cc6 100644 --- a/fs/smb/client/fscache.c +++ b/fs/smb/client/fscache.c @@ -136,114 +136,3 @@ void cifs_fscache_release_inode_cookie(struct inode *inode) cifsi->netfs.cache = NULL; } } - -#if 0 // TODO remove -/* - * Fallback page reading interface. - */ -static int fscache_fallback_read_page(struct inode *inode, struct page *page) -{ - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = cifs_inode_cookie(inode); - struct iov_iter iter; - struct bio_vec bvec; - int ret; - - memset(&cres, 0, sizeof(cres)); - bvec_set_page(&bvec, page, PAGE_SIZE, 0); - iov_iter_bvec(&iter, ITER_DEST, &bvec, 1, PAGE_SIZE); - - ret = fscache_begin_read_operation(&cres, cookie); - if (ret < 0) - return ret; - - ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL, - NULL, NULL); - fscache_end_operation(&cres); - return ret; -} - -/* - * Fallback page writing interface. 
- */ -static int fscache_fallback_write_pages(struct inode *inode, loff_t start, size_t len, - bool no_space_allocated_yet) -{ - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = cifs_inode_cookie(inode); - struct iov_iter iter; - int ret; - - memset(&cres, 0, sizeof(cres)); - iov_iter_xarray(&iter, ITER_SOURCE, &inode->i_mapping->i_pages, start, len); - - ret = fscache_begin_write_operation(&cres, cookie); - if (ret < 0) - return ret; - - ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode), - no_space_allocated_yet); - if (ret == 0) - ret = fscache_write(&cres, start, &iter, NULL, NULL); - fscache_end_operation(&cres); - return ret; -} - -/* - * Retrieve a page from FS-Cache - */ -int __cifs_readpage_from_fscache(struct inode *inode, struct page *page) -{ - int ret; - - cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n", - __func__, cifs_inode_cookie(inode), page, inode); - - ret = fscache_fallback_read_page(inode, page); - if (ret < 0) - return ret; - - /* Read completed synchronously */ - SetPageUptodate(page); - return 0; -} - -void __cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) -{ - cifs_dbg(FYI, "%s: (fsc: %p, p: %llx, l: %zx, i: %p)\n", - __func__, cifs_inode_cookie(inode), pos, len, inode); - - fscache_fallback_write_pages(inode, pos, len, true); -} - -/* - * Query the cache occupancy. 
- */ -int __cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages) -{ - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = cifs_inode_cookie(inode); - loff_t start, data_start; - size_t len, data_len; - int ret; - - ret = fscache_begin_read_operation(&cres, cookie); - if (ret < 0) - return ret; - - start = first * PAGE_SIZE; - len = nr_pages * PAGE_SIZE; - ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE, - &data_start, &data_len); - if (ret == 0) { - *_data_first = data_start / PAGE_SIZE; - *_data_nr_pages = len / PAGE_SIZE; - } - - fscache_end_operation(&cres); - return ret; -} -#endif diff --git a/fs/smb/client/fscache.h b/fs/smb/client/fscache.h index 7308efeb2d89..b4faf6b8b9bd 100644 --- a/fs/smb/client/fscache.h +++ b/fs/smb/client/fscache.h @@ -74,43 +74,6 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags i_size_read(inode), flags); } -#if 0 // TODO remove -extern int __cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages); - -static inline int cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages) -{ - if (!cifs_inode_cookie(inode)) - return -ENOBUFS; - return __cifs_fscache_query_occupancy(inode, first, nr_pages, - _data_first, _data_nr_pages); -} - -extern int __cifs_readpage_from_fscache(struct inode *pinode, struct page *ppage); -extern void __cifs_readahead_to_fscache(struct inode *pinode, loff_t pos, size_t len); - - -static inline int cifs_readpage_from_fscache(struct inode *inode, - struct page *page) -{ - if (cifs_inode_cookie(inode)) - return __cifs_readpage_from_fscache(inode, page); - return -ENOBUFS; -} - -static inline void cifs_readahead_to_fscache(struct inode *inode, - loff_t pos, size_t len) -{ - if 
(cifs_inode_cookie(inode)) - __cifs_readahead_to_fscache(inode, pos, len); -} -#endif - #else /* CONFIG_CIFS_FSCACHE */ static inline void cifs_fscache_fill_coherency(struct inode *inode, @@ -127,27 +90,6 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; } static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {} -#if 0 // TODO remove -static inline int cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages) -{ - *_data_first = ULONG_MAX; - *_data_nr_pages = 0; - return -ENOBUFS; -} - -static inline int -cifs_readpage_from_fscache(struct inode *inode, struct page *page) -{ - return -ENOBUFS; -} - -static inline -void cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) {} -#endif - #endif /* CONFIG_CIFS_FSCACHE */ #endif /* _CIFS_FSCACHE_H */

From patchwork Fri Oct 13 16:04:21 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula, linux-cachefs@redhat.com
Subject: [RFC PATCH 52/53] cifs: Remove some code that's no longer used, part 2
Date: Fri, 13 Oct 2023 17:04:21 +0100
Message-ID: <20231013160423.2218093-53-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Remove some code that was #if'd out with the netfslib conversion.
This is split into parts for file.c as the diff generator otherwise produces a hard to read diff for part of it where a big chunk is cut out. Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/file.c | 696 +------------------------------------------ 1 file changed, 1 insertion(+), 695 deletions(-) diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 2c64dccdc81d..f6b148aa184c 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2574,701 +2574,6 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name, return -ENOENT; } -#if 0 // TODO remove 2773 -void -cifs_writedata_release(struct cifs_io_subrequest *wdata) -{ - if (wdata->uncached) - kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release); -#ifdef CONFIG_CIFS_SMB_DIRECT - if (wdata->mr) { - smbd_deregister_mr(wdata->mr); - wdata->mr = NULL; - } -#endif - - if (wdata->cfile) - cifsFileInfo_put(wdata->cfile); - - kfree(wdata); -} - -/* - * Write failed with a retryable error. Resend the write request. It's also - * possible that the page was redirtied so re-clean the page. 
- */ -static void -cifs_writev_requeue(struct cifs_io_subrequest *wdata) -{ - int rc = 0; - struct inode *inode = d_inode(wdata->cfile->dentry); - struct TCP_Server_Info *server; - unsigned int rest_len = wdata->subreq.len; - loff_t fpos = wdata->subreq.start; - - server = tlink_tcon(wdata->cfile->tlink)->ses->server; - do { - struct cifs_io_subrequest *wdata2; - unsigned int wsize, cur_len; - - wsize = server->ops->wp_retry_size(inode); - if (wsize < rest_len) { - if (wsize < PAGE_SIZE) { - rc = -EOPNOTSUPP; - break; - } - cur_len = min(round_down(wsize, PAGE_SIZE), rest_len); - } else { - cur_len = rest_len; - } - - wdata2 = cifs_writedata_alloc(cifs_writev_complete); - if (!wdata2) { - rc = -ENOMEM; - break; - } - - wdata2->sync_mode = wdata->sync_mode; - wdata2->subreq.start = fpos; - wdata2->subreq.len = cur_len; - wdata2->subreq.io_iter = wdata->subreq.io_iter; - - iov_iter_advance(&wdata2->subreq.io_iter, fpos - wdata->subreq.start); - iov_iter_truncate(&wdata2->subreq.io_iter, wdata2->subreq.len); - - if (iov_iter_is_xarray(&wdata2->subreq.io_iter)) - /* Check for pages having been redirtied and clean - * them. We can do this by walking the xarray. If - * it's not an xarray, then it's a DIO and we shouldn't - * be mucking around with the page bits. 
- */ - cifs_undirty_folios(inode, fpos, cur_len); - - rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, - &wdata2->cfile); - if (!wdata2->cfile) { - cifs_dbg(VFS, "No writable handle to retry writepages rc=%d\n", - rc); - if (!is_retryable_error(rc)) - rc = -EBADF; - } else { - wdata2->pid = wdata2->cfile->pid; - rc = server->ops->async_writev(wdata2); - } - - cifs_put_writedata(wdata2); - if (rc) { - if (is_retryable_error(rc)) - continue; - fpos += cur_len; - rest_len -= cur_len; - break; - } - - fpos += cur_len; - rest_len -= cur_len; - } while (rest_len > 0); - - /* Clean up remaining pages from the original wdata */ - if (iov_iter_is_xarray(&wdata->subreq.io_iter)) - cifs_pages_write_failed(inode, fpos, rest_len); - - if (rc != 0 && !is_retryable_error(rc)) - mapping_set_error(inode->i_mapping, rc); - cifs_put_writedata(wdata); -} - -void -cifs_writev_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *wdata = container_of(work, - struct cifs_io_subrequest, work); - struct inode *inode = d_inode(wdata->cfile->dentry); - - if (wdata->result == 0) { - spin_lock(&inode->i_lock); - cifs_update_eof(CIFS_I(inode), wdata->subreq.start, wdata->subreq.len); - spin_unlock(&inode->i_lock); - cifs_stats_bytes_written(tlink_tcon(wdata->cfile->tlink), - wdata->subreq.len); - } else if (wdata->sync_mode == WB_SYNC_ALL && wdata->result == -EAGAIN) - return cifs_writev_requeue(wdata); - - if (wdata->result == -EAGAIN) - cifs_pages_write_redirty(inode, wdata->subreq.start, wdata->subreq.len); - else if (wdata->result < 0) - cifs_pages_write_failed(inode, wdata->subreq.start, wdata->subreq.len); - else - cifs_pages_written_back(inode, wdata->subreq.start, wdata->subreq.len); - - if (wdata->result != -EAGAIN) - mapping_set_error(inode->i_mapping, wdata->result); - cifs_put_writedata(wdata); -} - -struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete) -{ - struct cifs_io_subrequest *wdata; - - wdata = kzalloc(sizeof(*wdata), GFP_NOFS); - if 
(wdata != NULL) { - refcount_set(&wdata->subreq.ref, 1); - INIT_LIST_HEAD(&wdata->list); - init_completion(&wdata->done); - INIT_WORK(&wdata->work, complete); - } - return wdata; -} - -static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) -{ - struct address_space *mapping = page->mapping; - loff_t offset = (loff_t)page->index << PAGE_SHIFT; - char *write_data; - int rc = -EFAULT; - int bytes_written = 0; - struct inode *inode; - struct cifsFileInfo *open_file; - - if (!mapping || !mapping->host) - return -EFAULT; - - inode = page->mapping->host; - - offset += (loff_t)from; - write_data = kmap(page); - write_data += from; - - if ((to > PAGE_SIZE) || (from > to)) { - kunmap(page); - return -EIO; - } - - /* racing with truncate? */ - if (offset > mapping->host->i_size) { - kunmap(page); - return 0; /* don't care */ - } - - /* check to make sure that we are not extending the file */ - if (mapping->host->i_size - offset < (loff_t)to) - to = (unsigned)(mapping->host->i_size - offset); - - rc = cifs_get_writable_file(CIFS_I(mapping->host), FIND_WR_ANY, - &open_file); - if (!rc) { - bytes_written = cifs_write(open_file, open_file->pid, - write_data, to - from, &offset); - cifsFileInfo_put(open_file); - /* Does mm or vfs already set times? */ - inode->i_atime = inode->i_mtime = inode_set_ctime_current(inode); - if ((bytes_written > 0) && (offset)) - rc = 0; - else if (bytes_written < 0) - rc = bytes_written; - else - rc = -EFAULT; - } else { - cifs_dbg(FYI, "No writable handle for write page rc=%d\n", rc); - if (!is_retryable_error(rc)) - rc = -EIO; - } - - kunmap(page); - return rc; -} - -/* - * Extend the region to be written back to include subsequent contiguously - * dirty pages if possible, but don't sleep while doing so. 
- */ -static void cifs_extend_writeback(struct address_space *mapping, - long *_count, - loff_t start, - int max_pages, - size_t max_len, - unsigned int *_len) -{ - struct folio_batch batch; - struct folio *folio; - unsigned int psize, nr_pages; - size_t len = *_len; - pgoff_t index = (start + len) / PAGE_SIZE; - bool stop = true; - unsigned int i; - XA_STATE(xas, &mapping->i_pages, index); - - folio_batch_init(&batch); - - do { - /* Firstly, we gather up a batch of contiguous dirty pages - * under the RCU read lock - but we can't clear the dirty flags - * there if any of those pages are mapped. - */ - rcu_read_lock(); - - xas_for_each(&xas, folio, ULONG_MAX) { - stop = true; - if (xas_retry(&xas, folio)) - continue; - if (xa_is_value(folio)) - break; - if (folio_index(folio) != index) - break; - if (!folio_try_get_rcu(folio)) { - xas_reset(&xas); - continue; - } - nr_pages = folio_nr_pages(folio); - if (nr_pages > max_pages) - break; - - /* Has the page moved or been split? */ - if (unlikely(folio != xas_reload(&xas))) { - folio_put(folio); - break; - } - - if (!folio_trylock(folio)) { - folio_put(folio); - break; - } - if (!folio_test_dirty(folio) || folio_test_writeback(folio)) { - folio_unlock(folio); - folio_put(folio); - break; - } - - max_pages -= nr_pages; - psize = folio_size(folio); - len += psize; - stop = false; - if (max_pages <= 0 || len >= max_len || *_count <= 0) - stop = true; - - index += nr_pages; - if (!folio_batch_add(&batch, folio)) - break; - if (stop) - break; - } - - if (!stop) - xas_pause(&xas); - rcu_read_unlock(); - - /* Now, if we obtained any pages, we can shift them to being - * writable and mark them for caching. - */ - if (!folio_batch_count(&batch)) - break; - - for (i = 0; i < folio_batch_count(&batch); i++) { - folio = batch.folios[i]; - /* The folio should be locked, dirty and not undergoing - * writeback from the loop above. 
- */ - if (!folio_clear_dirty_for_io(folio)) - WARN_ON(1); - if (folio_start_writeback(folio)) - WARN_ON(1); - - *_count -= folio_nr_pages(folio); - folio_unlock(folio); - } - - folio_batch_release(&batch); - cond_resched(); - } while (!stop); - - *_len = len; -} - -/* - * Write back the locked page and any subsequent non-locked dirty pages. - */ -static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, - struct writeback_control *wbc, - struct folio *folio, - loff_t start, loff_t end) -{ - struct inode *inode = mapping->host; - struct TCP_Server_Info *server; - struct cifs_io_subrequest *wdata; - struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - struct cifsFileInfo *cfile = NULL; - unsigned int xid, len; - loff_t i_size = i_size_read(inode); - size_t max_len, wsize; - long count = wbc->nr_to_write; - int rc; - - /* The folio should be locked, dirty and not undergoing writeback. 
*/ - if (folio_start_writeback(folio)) - WARN_ON(1); - - count -= folio_nr_pages(folio); - len = folio_size(folio); - - xid = get_xid(); - server = cifs_pick_channel(cifs_sb_master_tcon(cifs_sb)->ses); - - rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, &cfile); - if (rc) { - cifs_dbg(VFS, "No writable handle in writepages rc=%d\n", rc); - goto err_xid; - } - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->wsize, - &wsize, credits); - if (rc != 0) - goto err_close; - - wdata = cifs_writedata_alloc(cifs_writev_complete); - if (!wdata) { - rc = -ENOMEM; - goto err_uncredit; - } - - wdata->sync_mode = wbc->sync_mode; - wdata->subreq.start = folio_pos(folio); - wdata->pid = cfile->pid; - wdata->credits = credits_on_stack; - wdata->cfile = cfile; - wdata->server = server; - cfile = NULL; - - /* Find all consecutive lockable dirty pages, stopping when we find a - * page that is not immediately lockable, is not dirty or is missing, - * or we reach the end of the range. - */ - if (start < i_size) { - /* Trim the write to the EOF; the extra data is ignored. Also - * put an upper limit on the size of a single storedata op. - */ - max_len = wsize; - max_len = min_t(unsigned long long, max_len, end - start + 1); - max_len = min_t(unsigned long long, max_len, i_size - start); - - if (len < max_len) { - int max_pages = INT_MAX; - -#ifdef CONFIG_CIFS_SMB_DIRECT - if (server->smbd_conn) - max_pages = server->smbd_conn->max_frmr_depth; -#endif - max_pages -= folio_nr_pages(folio); - - if (max_pages > 0) - cifs_extend_writeback(mapping, &count, start, - max_pages, max_len, &len); - } - len = min_t(loff_t, len, max_len); - } - - wdata->subreq.len = len; - - /* We now have a contiguous set of dirty pages, each with writeback - * set; the first page is still locked at this point, but all the rest - * have been unlocked. 
- */ - folio_unlock(folio); - - if (start < i_size) { - iov_iter_xarray(&wdata->subreq.io_iter, ITER_SOURCE, &mapping->i_pages, - start, len); - - rc = adjust_credits(wdata->server, &wdata->credits, wdata->subreq.len); - if (rc) - goto err_wdata; - - if (wdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = wdata->server->ops->async_writev(wdata); - if (rc >= 0) { - cifs_put_writedata(wdata); - goto err_close; - } - } else { - /* The dirty region was entirely beyond the EOF. */ - cifs_pages_written_back(inode, start, len); - rc = 0; - } - -err_wdata: - cifs_put_writedata(wdata); -err_uncredit: - add_credits_and_wake_if(server, credits, 0); -err_close: - if (cfile) - cifsFileInfo_put(cfile); -err_xid: - free_xid(xid); - if (rc == 0) { - wbc->nr_to_write = count; - rc = len; - } else if (is_retryable_error(rc)) { - cifs_pages_write_redirty(inode, start, len); - } else { - cifs_pages_write_failed(inode, start, len); - mapping_set_error(mapping, rc); - } - /* Indication to update ctime and mtime as close is deferred */ - set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags); - return rc; -} - -/* - * write a region of pages back to the server - */ -static int cifs_writepages_region(struct address_space *mapping, - struct writeback_control *wbc, - loff_t start, loff_t end, loff_t *_next) -{ - struct folio_batch fbatch; - int skips = 0; - - folio_batch_init(&fbatch); - do { - int nr; - pgoff_t index = start / PAGE_SIZE; - - nr = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE, - PAGECACHE_TAG_DIRTY, &fbatch); - if (!nr) - break; - - for (int i = 0; i < nr; i++) { - ssize_t ret; - struct folio *folio = fbatch.folios[i]; - -redo_folio: - start = folio_pos(folio); /* May regress with THPs */ - - /* At this point we hold neither the i_pages lock nor the - * page lock: the page may be truncated or invalidated - * (changing page->mapping to NULL), or even swizzled - * back from swapper_space to tmpfs file mapping - */ - if (wbc->sync_mode != WB_SYNC_NONE) { - ret 
= folio_lock_killable(folio); - if (ret < 0) - goto write_error; - } else { - if (!folio_trylock(folio)) - goto skip_write; - } - - if (folio_mapping(folio) != mapping || - !folio_test_dirty(folio)) { - start += folio_size(folio); - folio_unlock(folio); - continue; - } - - if (folio_test_writeback(folio) || - folio_test_fscache(folio)) { - folio_unlock(folio); - if (wbc->sync_mode == WB_SYNC_NONE) - goto skip_write; - - folio_wait_writeback(folio); -#ifdef CONFIG_CIFS_FSCACHE - folio_wait_fscache(folio); -#endif - goto redo_folio; - } - - if (!folio_clear_dirty_for_io(folio)) - /* We hold the page lock - it should've been dirty. */ - WARN_ON(1); - - ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end); - if (ret < 0) - goto write_error; - - start += ret; - continue; - -write_error: - folio_batch_release(&fbatch); - *_next = start; - return ret; - -skip_write: - /* - * Too many skipped writes, or need to reschedule? - * Treat it as a write error without an error code. - */ - if (skips >= 5 || need_resched()) { - ret = 0; - goto write_error; - } - - /* Otherwise, just skip that folio and go on to the next */ - skips++; - start += folio_size(folio); - continue; - } - - folio_batch_release(&fbatch); - cond_resched(); - } while (wbc->nr_to_write > 0); - - *_next = start; - return 0; -} - -/* - * Write some of the pending data back to the server - */ -static int cifs_writepages(struct address_space *mapping, - struct writeback_control *wbc) -{ - loff_t start, next; - int ret; - - /* We have to be careful as we can end up racing with setattr() - * truncating the pagecache since the caller doesn't take a lock here - * to prevent it. 
- */ - - if (wbc->range_cyclic) { - start = mapping->writeback_index * PAGE_SIZE; - ret = cifs_writepages_region(mapping, wbc, start, LLONG_MAX, &next); - if (ret == 0) { - mapping->writeback_index = next / PAGE_SIZE; - if (start > 0 && wbc->nr_to_write > 0) { - ret = cifs_writepages_region(mapping, wbc, 0, - start, &next); - if (ret == 0) - mapping->writeback_index = - next / PAGE_SIZE; - } - } - } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) { - ret = cifs_writepages_region(mapping, wbc, 0, LLONG_MAX, &next); - if (wbc->nr_to_write > 0 && ret == 0) - mapping->writeback_index = next / PAGE_SIZE; - } else { - ret = cifs_writepages_region(mapping, wbc, - wbc->range_start, wbc->range_end, &next); - } - - return ret; -} - -static int -cifs_writepage_locked(struct page *page, struct writeback_control *wbc) -{ - int rc; - unsigned int xid; - - xid = get_xid(); -/* BB add check for wbc flags */ - get_page(page); - if (!PageUptodate(page)) - cifs_dbg(FYI, "ppw - page not up to date\n"); - - /* - * Set the "writeback" flag, and clear "dirty" in the radix tree. - * - * A writepage() implementation always needs to do either this, - * or re-dirty the page with "redirty_page_for_writepage()" in - * the case of a failure. - * - * Just unlocking the page will cause the radix tree tag-bits - * to fail to update with the state of the page correctly. 
- */ - set_page_writeback(page); -retry_write: - rc = cifs_partialpagewrite(page, 0, PAGE_SIZE); - if (is_retryable_error(rc)) { - if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN) - goto retry_write; - redirty_page_for_writepage(wbc, page); - } else if (rc != 0) { - SetPageError(page); - mapping_set_error(page->mapping, rc); - } else { - SetPageUptodate(page); - } - end_page_writeback(page); - put_page(page); - free_xid(xid); - return rc; -} - -static int cifs_write_end(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned copied, - struct page *page, void *fsdata) -{ - int rc; - struct inode *inode = mapping->host; - struct cifsFileInfo *cfile = file->private_data; - struct cifs_sb_info *cifs_sb = CIFS_SB(cfile->dentry->d_sb); - struct folio *folio = page_folio(page); - __u32 pid; - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = cfile->pid; - else - pid = current->tgid; - - cifs_dbg(FYI, "write_end for page %p from pos %lld with %d bytes\n", - page, pos, copied); - - if (folio_test_checked(folio)) { - if (copied == len) - folio_mark_uptodate(folio); - folio_clear_checked(folio); - } else if (!folio_test_uptodate(folio) && copied == PAGE_SIZE) - folio_mark_uptodate(folio); - - if (!folio_test_uptodate(folio)) { - char *page_data; - unsigned offset = pos & (PAGE_SIZE - 1); - unsigned int xid; - - xid = get_xid(); - /* this is probably better than directly calling - partialpage_write since in this function the file handle is - known which we might as well leverage */ - /* BB check if anything else missing out of ppw - such as updating last write time */ - page_data = kmap(page); - rc = cifs_write(cfile, pid, page_data + offset, copied, &pos); - /* if (rc < 0) should we set writebehind rc? 
*/ - kunmap(page); - - free_xid(xid); - } else { - rc = copied; - pos += copied; - set_page_dirty(page); - } - - if (rc > 0) { - spin_lock(&inode->i_lock); - if (pos > inode->i_size) { - i_size_write(inode, pos); - inode->i_blocks = (512 - 1 + pos) >> 9; - } - spin_unlock(&inode->i_lock); - } - - unlock_page(page); - put_page(page); - /* Indication to update ctime and mtime as close is deferred */ - set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags); - - return rc; -} -#endif // End netfs removal 2773 - /* * Flush data on a strict file. */ @@ -4583,6 +3888,7 @@ cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) } #endif // end netfslib remove 4633 + static vm_fault_t cifs_page_mkwrite(struct vm_fault *vmf) { return netfs_page_mkwrite(vmf, NULL);

From patchwork Fri Oct 13 16:04:22 2023
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov, Christian Brauner, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula, linux-cachefs@redhat.com
Subject: [RFC PATCH 53/53] cifs: Remove some code that's no longer used, part 3
Date: Fri, 13 Oct 2023 17:04:22 +0100
Message-ID: <20231013160423.2218093-54-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>
MIME-Version: 1.0

Remove some code that was #if'd out with the netfslib conversion.
This is split into parts for file.c as the diff generator otherwise
produces a hard to read diff for part of it where a big chunk is cut out.
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/file.c | 1003 ------------------------------------------ 1 file changed, 1003 deletions(-) diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index f6b148aa184c..be2c786e7c52 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2700,470 +2700,6 @@ int cifs_flush(struct file *file, fl_owner_t id) return rc; } -#if 0 // TODO remove 3594 -static void collect_uncached_write_data(struct cifs_aio_ctx *ctx); - -static void -cifs_uncached_writev_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *wdata = container_of(work, - struct cifs_io_subrequest, work); - struct inode *inode = d_inode(wdata->cfile->dentry); - struct cifsInodeInfo *cifsi = CIFS_I(inode); - - spin_lock(&inode->i_lock); - cifs_update_eof(cifsi, wdata->subreq.start, wdata->subreq.len); - if (cifsi->netfs.remote_i_size > inode->i_size) - i_size_write(inode, cifsi->netfs.remote_i_size); - spin_unlock(&inode->i_lock); - - complete(&wdata->done); - collect_uncached_write_data(wdata->ctx); - /* the below call can possibly free the last ref to aio ctx */ - cifs_put_writedata(wdata); -} - -static int -cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list, - struct cifs_aio_ctx *ctx) -{ - size_t wsize; - struct cifs_credits credits; - int rc; - struct TCP_Server_Info *server = wdata->server; - - do { - if (wdata->cfile->invalidHandle) { - rc = cifs_reopen_file(wdata->cfile, false); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - - /* - * Wait for credits to resend this wdata. 
- * Note: we are attempting to resend the whole wdata not in - * segments - */ - do { - rc = server->ops->wait_mtu_credits(server, wdata->subreq.len, - &wsize, &credits); - if (rc) - goto fail; - - if (wsize < wdata->subreq.len) { - add_credits_and_wake_if(server, &credits, 0); - msleep(1000); - } - } while (wsize < wdata->subreq.len); - wdata->credits = credits; - - rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); - - if (!rc) { - if (wdata->cfile->invalidHandle) - rc = -EAGAIN; - else { -#ifdef CONFIG_CIFS_SMB_DIRECT - if (wdata->mr) { - wdata->mr->need_invalidate = true; - smbd_deregister_mr(wdata->mr); - wdata->mr = NULL; - } -#endif - rc = server->ops->async_writev(wdata); - } - } - - /* If the write was successfully sent, we are done */ - if (!rc) { - list_add_tail(&wdata->list, wdata_list); - return 0; - } - - /* Roll back credits and retry if needed */ - add_credits_and_wake_if(server, &wdata->credits, 0); - } while (rc == -EAGAIN); - -fail: - cifs_put_writedata(wdata); - return rc; -} - -/* - * Select span of a bvec iterator we're going to use. Limit it by both maximum - * size and maximum number of segments. 
- */ -static size_t cifs_limit_bvec_subset(const struct iov_iter *iter, size_t max_size, - size_t max_segs, unsigned int *_nsegs) -{ - const struct bio_vec *bvecs = iter->bvec; - unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0; - size_t len, span = 0, n = iter->count; - size_t skip = iter->iov_offset; - - if (WARN_ON(!iov_iter_is_bvec(iter)) || n == 0) - return 0; - - while (n && ix < nbv && skip) { - len = bvecs[ix].bv_len; - if (skip < len) - break; - skip -= len; - n -= len; - ix++; - } - - while (n && ix < nbv) { - len = min3(n, bvecs[ix].bv_len - skip, max_size); - span += len; - max_size -= len; - nsegs++; - ix++; - if (max_size == 0 || nsegs >= max_segs) - break; - skip = 0; - n -= len; - } - - *_nsegs = nsegs; - return span; -} - -static int -cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, - struct cifsFileInfo *open_file, - struct cifs_sb_info *cifs_sb, struct list_head *wdata_list, - struct cifs_aio_ctx *ctx) -{ - int rc = 0; - size_t cur_len, max_len; - struct cifs_io_subrequest *wdata; - pid_t pid; - struct TCP_Server_Info *server; - unsigned int xid, max_segs = INT_MAX; - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); - xid = get_xid(); - -#ifdef CONFIG_CIFS_SMB_DIRECT - if (server->smbd_conn) - max_segs = server->smbd_conn->max_frmr_depth; -#endif - - do { - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - unsigned int nsegs = 0; - size_t wsize; - - if (signal_pending(current)) { - rc = -EINTR; - break; - } - - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, false); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->wsize, - &wsize, credits); - if (rc) - break; - - max_len = min_t(const size_t, len, wsize); - if (!max_len) { - rc = -EAGAIN; - 
add_credits_and_wake_if(server, credits, 0); - break; - } - - cur_len = cifs_limit_bvec_subset(from, max_len, max_segs, &nsegs); - cifs_dbg(FYI, "write_from_iter len=%zx/%zx nsegs=%u/%lu/%u\n", - cur_len, max_len, nsegs, from->nr_segs, max_segs); - if (cur_len == 0) { - rc = -EIO; - add_credits_and_wake_if(server, credits, 0); - break; - } - - wdata = cifs_writedata_alloc(cifs_uncached_writev_complete); - if (!wdata) { - rc = -ENOMEM; - add_credits_and_wake_if(server, credits, 0); - break; - } - - wdata->uncached = true; - wdata->sync_mode = WB_SYNC_ALL; - wdata->subreq.start = (__u64)fpos; - wdata->cfile = cifsFileInfo_get(open_file); - wdata->server = server; - wdata->pid = pid; - wdata->subreq.len = cur_len; - wdata->credits = credits_on_stack; - wdata->subreq.io_iter = *from; - wdata->ctx = ctx; - kref_get(&ctx->refcount); - - iov_iter_truncate(&wdata->subreq.io_iter, cur_len); - - rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); - - if (!rc) { - if (wdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = server->ops->async_writev(wdata); - } - - if (rc) { - add_credits_and_wake_if(server, &wdata->credits, 0); - cifs_put_writedata(wdata); - if (rc == -EAGAIN) - continue; - break; - } - - list_add_tail(&wdata->list, wdata_list); - iov_iter_advance(from, cur_len); - fpos += cur_len; - len -= cur_len; - } while (len > 0); - - free_xid(xid); - return rc; -} - -static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) -{ - struct cifs_io_subrequest *wdata, *tmp; - struct cifs_tcon *tcon; - struct cifs_sb_info *cifs_sb; - struct dentry *dentry = ctx->cfile->dentry; - ssize_t rc; - - tcon = tlink_tcon(ctx->cfile->tlink); - cifs_sb = CIFS_SB(dentry->d_sb); - - mutex_lock(&ctx->aio_mutex); - - if (list_empty(&ctx->list)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - rc = ctx->rc; - /* - * Wait for and collect replies for any successful sends in order of - * increasing offset. 
Once an error is hit, then return without waiting - * for any more replies. - */ -restart_loop: - list_for_each_entry_safe(wdata, tmp, &ctx->list, list) { - if (!rc) { - if (!try_wait_for_completion(&wdata->done)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - if (wdata->result) - rc = wdata->result; - else - ctx->total_len += wdata->subreq.len; - - /* resend call if it's a retryable error */ - if (rc == -EAGAIN) { - struct list_head tmp_list; - struct iov_iter tmp_from = ctx->iter; - - INIT_LIST_HEAD(&tmp_list); - list_del_init(&wdata->list); - - if (ctx->direct_io) - rc = cifs_resend_wdata( - wdata, &tmp_list, ctx); - else { - iov_iter_advance(&tmp_from, - wdata->subreq.start - ctx->pos); - - rc = cifs_write_from_iter(wdata->subreq.start, - wdata->subreq.len, &tmp_from, - ctx->cfile, cifs_sb, &tmp_list, - ctx); - - cifs_put_writedata(wdata); - } - - list_splice(&tmp_list, &ctx->list); - goto restart_loop; - } - } - list_del_init(&wdata->list); - cifs_put_writedata(wdata); - } - - cifs_stats_bytes_written(tcon, ctx->total_len); - set_bit(CIFS_INO_INVALID_MAPPING, &CIFS_I(dentry->d_inode)->flags); - - ctx->rc = (rc == 0) ? 
ctx->total_len : rc; - - mutex_unlock(&ctx->aio_mutex); - - if (ctx->iocb && ctx->iocb->ki_complete) - ctx->iocb->ki_complete(ctx->iocb, ctx->rc); - else - complete(&ctx->done); -} - -static ssize_t __cifs_writev( - struct kiocb *iocb, struct iov_iter *from, bool direct) -{ - struct file *file = iocb->ki_filp; - ssize_t total_written = 0; - struct cifsFileInfo *cfile; - struct cifs_tcon *tcon; - struct cifs_sb_info *cifs_sb; - struct cifs_aio_ctx *ctx; - int rc; - - rc = generic_write_checks(iocb, from); - if (rc <= 0) - return rc; - - cifs_sb = CIFS_FILE_SB(file); - cfile = file->private_data; - tcon = tlink_tcon(cfile->tlink); - - if (!tcon->ses->server->ops->async_writev) - return -ENOSYS; - - ctx = cifs_aio_ctx_alloc(); - if (!ctx) - return -ENOMEM; - - ctx->cfile = cifsFileInfo_get(cfile); - - if (!is_sync_kiocb(iocb)) - ctx->iocb = iocb; - - ctx->pos = iocb->ki_pos; - ctx->direct_io = direct; - ctx->nr_pinned_pages = 0; - - if (user_backed_iter(from)) { - /* - * Extract IOVEC/UBUF-type iterators to a BVEC-type iterator as - * they contain references to the calling process's virtual - * memory layout which won't be available in an async worker - * thread. This also takes a pin on every folio involved. - */ - rc = netfs_extract_user_iter(from, iov_iter_count(from), - &ctx->iter, 0); - if (rc < 0) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - ctx->nr_pinned_pages = rc; - ctx->bv = (void *)ctx->iter.bvec; - ctx->bv_need_unpin = iov_iter_extract_will_pin(from); - } else if ((iov_iter_is_bvec(from) || iov_iter_is_kvec(from)) && - !is_sync_kiocb(iocb)) { - /* - * If the op is asynchronous, we need to copy the list attached - * to a BVEC/KVEC-type iterator, but we assume that the storage - * will be pinned by the caller; in any case, we may or may not - * be able to pin the pages, so we don't try. 
- */ - ctx->bv = (void *)dup_iter(&ctx->iter, from, GFP_KERNEL); - if (!ctx->bv) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -ENOMEM; - } - } else { - /* - * Otherwise, we just pass the iterator down as-is and rely on - * the caller to make sure the pages referred to by the - * iterator don't evaporate. - */ - ctx->iter = *from; - } - - ctx->len = iov_iter_count(&ctx->iter); - - /* grab a lock here due to read response handlers can access ctx */ - mutex_lock(&ctx->aio_mutex); - - rc = cifs_write_from_iter(iocb->ki_pos, ctx->len, &ctx->iter, - cfile, cifs_sb, &ctx->list, ctx); - - /* - * If at least one write was successfully sent, then discard any rc - * value from the later writes. If the other write succeeds, then - * we'll end up returning whatever was written. If it fails, then - * we'll get a new rc value from that. - */ - if (!list_empty(&ctx->list)) - rc = 0; - - mutex_unlock(&ctx->aio_mutex); - - if (rc) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - if (!is_sync_kiocb(iocb)) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -EIOCBQUEUED; - } - - rc = wait_for_completion_killable(&ctx->done); - if (rc) { - mutex_lock(&ctx->aio_mutex); - ctx->rc = rc = -EINTR; - total_written = ctx->total_len; - mutex_unlock(&ctx->aio_mutex); - } else { - rc = ctx->rc; - total_written = ctx->total_len; - } - - kref_put(&ctx->refcount, cifs_aio_ctx_release); - - if (unlikely(!total_written)) - return rc; - - iocb->ki_pos += total_written; - return total_written; -} - -ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from) -{ - struct file *file = iocb->ki_filp; - - cifs_revalidate_mapping(file->f_inode); - return __cifs_writev(iocb, from, true); -} - -ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from) -{ - return __cifs_writev(iocb, from, false); -} -#endif // TODO remove 3594 - static ssize_t cifs_writev(struct kiocb *iocb, struct iov_iter *from) { @@ -3252,450 +2788,6 @@ 
cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) return written; } -#if 0 // TODO remove 4143 -static struct cifs_io_subrequest *cifs_readdata_alloc(work_func_t complete) -{ - struct cifs_io_subrequest *rdata; - - rdata = kzalloc(sizeof(*rdata), GFP_KERNEL); - if (rdata) { - refcount_set(&rdata->subreq.ref, 1); - INIT_LIST_HEAD(&rdata->list); - init_completion(&rdata->done); - INIT_WORK(&rdata->work, complete); - } - - return rdata; -} - -void -cifs_readdata_release(struct cifs_io_subrequest *rdata) -{ - if (rdata->ctx) - kref_put(&rdata->ctx->refcount, cifs_aio_ctx_release); -#ifdef CONFIG_CIFS_SMB_DIRECT - if (rdata->mr) { - smbd_deregister_mr(rdata->mr); - rdata->mr = NULL; - } -#endif - if (rdata->cfile) - cifsFileInfo_put(rdata->cfile); - - kfree(rdata); -} - -static void collect_uncached_read_data(struct cifs_aio_ctx *ctx); - -static void -cifs_uncached_readv_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *rdata = - container_of(work, struct cifs_io_subrequest, work); - - complete(&rdata->done); - collect_uncached_read_data(rdata->ctx); - /* the below call can possibly free the last ref to aio ctx */ - cifs_put_readdata(rdata); -} - -static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, - struct list_head *rdata_list, - struct cifs_aio_ctx *ctx) -{ - size_t rsize; - struct cifs_credits credits; - int rc; - struct TCP_Server_Info *server; - - /* XXX: should we pick a new channel here? */ - server = rdata->server; - - do { - if (rdata->cfile->invalidHandle) { - rc = cifs_reopen_file(rdata->cfile, true); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - /* - * Wait for credits to resend this rdata. 
- * Note: we are attempting to resend the whole rdata not in - * segments - */ - do { - rc = server->ops->wait_mtu_credits(server, rdata->subreq.len, - &rsize, &credits); - - if (rc) - goto fail; - - if (rsize < rdata->subreq.len) { - add_credits_and_wake_if(server, &credits, 0); - msleep(1000); - } - } while (rsize < rdata->subreq.len); - rdata->credits = credits; - - rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); - if (!rc) { - if (rdata->cfile->invalidHandle) - rc = -EAGAIN; - else { -#ifdef CONFIG_CIFS_SMB_DIRECT - if (rdata->mr) { - rdata->mr->need_invalidate = true; - smbd_deregister_mr(rdata->mr); - rdata->mr = NULL; - } -#endif - rc = server->ops->async_readv(rdata); - } - } - - /* If the read was successfully sent, we are done */ - if (!rc) { - /* Add to aio pending list */ - list_add_tail(&rdata->list, rdata_list); - return 0; - } - - /* Roll back credits and retry if needed */ - add_credits_and_wake_if(server, &rdata->credits, 0); - } while (rc == -EAGAIN); - -fail: - cifs_put_readdata(rdata); - return rc; -} - -static int -cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, - struct cifs_sb_info *cifs_sb, struct list_head *rdata_list, - struct cifs_aio_ctx *ctx) -{ - struct cifs_io_subrequest *rdata; - unsigned int nsegs, max_segs = INT_MAX; - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - size_t cur_len, max_len, rsize; - int rc; - pid_t pid; - struct TCP_Server_Info *server; - - server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); - -#ifdef CONFIG_CIFS_SMB_DIRECT - if (server->smbd_conn) - max_segs = server->smbd_conn->max_frmr_depth; -#endif - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - do { - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, true); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - if (cifs_sb->ctx->rsize == 0) - cifs_sb->ctx->rsize = 
- server->ops->negotiate_rsize(tlink_tcon(open_file->tlink), - cifs_sb->ctx); - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, - &rsize, credits); - if (rc) - break; - - max_len = min_t(size_t, len, rsize); - - cur_len = cifs_limit_bvec_subset(&ctx->iter, max_len, - max_segs, &nsegs); - cifs_dbg(FYI, "read-to-iter len=%zx/%zx nsegs=%u/%lu/%u\n", - cur_len, max_len, nsegs, ctx->iter.nr_segs, max_segs); - if (cur_len == 0) { - rc = -EIO; - add_credits_and_wake_if(server, credits, 0); - break; - } - - rdata = cifs_readdata_alloc(cifs_uncached_readv_complete); - if (!rdata) { - add_credits_and_wake_if(server, credits, 0); - rc = -ENOMEM; - break; - } - - rdata->server = server; - rdata->cfile = cifsFileInfo_get(open_file); - rdata->subreq.start = fpos; - rdata->subreq.len = cur_len; - rdata->pid = pid; - rdata->credits = credits_on_stack; - rdata->ctx = ctx; - kref_get(&ctx->refcount); - - rdata->subreq.io_iter = ctx->iter; - iov_iter_truncate(&rdata->subreq.io_iter, cur_len); - - rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); - - if (!rc) { - if (rdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = server->ops->async_readv(rdata); - } - - if (rc) { - add_credits_and_wake_if(server, &rdata->credits, 0); - cifs_put_readdata(rdata); - if (rc == -EAGAIN) - continue; - break; - } - - list_add_tail(&rdata->list, rdata_list); - iov_iter_advance(&ctx->iter, cur_len); - fpos += cur_len; - len -= cur_len; - } while (len > 0); - - return rc; -} - -static void -collect_uncached_read_data(struct cifs_aio_ctx *ctx) -{ - struct cifs_io_subrequest *rdata, *tmp; - struct cifs_sb_info *cifs_sb; - int rc; - - cifs_sb = CIFS_SB(ctx->cfile->dentry->d_sb); - - mutex_lock(&ctx->aio_mutex); - - if (list_empty(&ctx->list)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - rc = ctx->rc; - /* the loop below should proceed in the order of increasing offsets */ -again: - list_for_each_entry_safe(rdata, tmp, &ctx->list, list) { - if (!rc) { - if 
(!try_wait_for_completion(&rdata->done)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - if (rdata->result == -EAGAIN) { - /* resend call if it's a retryable error */ - struct list_head tmp_list; - unsigned int got_bytes = rdata->got_bytes; - - list_del_init(&rdata->list); - INIT_LIST_HEAD(&tmp_list); - - if (ctx->direct_io) { - /* - * Re-use rdata as this is a - * direct I/O - */ - rc = cifs_resend_rdata( - rdata, - &tmp_list, ctx); - } else { - rc = cifs_send_async_read( - rdata->subreq.start + got_bytes, - rdata->subreq.len - got_bytes, - rdata->cfile, cifs_sb, - &tmp_list, ctx); - - cifs_put_readdata(rdata); - } - - list_splice(&tmp_list, &ctx->list); - - goto again; - } else if (rdata->result) - rc = rdata->result; - - /* if there was a short read -- discard anything left */ - if (rdata->got_bytes && rdata->got_bytes < rdata->subreq.len) - rc = -ENODATA; - - ctx->total_len += rdata->got_bytes; - } - list_del_init(&rdata->list); - cifs_put_readdata(rdata); - } - - /* mask nodata case */ - if (rc == -ENODATA) - rc = 0; - - ctx->rc = (rc == 0) ? 
(ssize_t)ctx->total_len : rc; - - mutex_unlock(&ctx->aio_mutex); - - if (ctx->iocb && ctx->iocb->ki_complete) - ctx->iocb->ki_complete(ctx->iocb, ctx->rc); - else - complete(&ctx->done); -} - -static ssize_t __cifs_readv( - struct kiocb *iocb, struct iov_iter *to, bool direct) -{ - size_t len; - struct file *file = iocb->ki_filp; - struct cifs_sb_info *cifs_sb; - struct cifsFileInfo *cfile; - struct cifs_tcon *tcon; - ssize_t rc, total_read = 0; - loff_t offset = iocb->ki_pos; - struct cifs_aio_ctx *ctx; - - len = iov_iter_count(to); - if (!len) - return 0; - - cifs_sb = CIFS_FILE_SB(file); - cfile = file->private_data; - tcon = tlink_tcon(cfile->tlink); - - if (!tcon->ses->server->ops->async_readv) - return -ENOSYS; - - if ((file->f_flags & O_ACCMODE) == O_WRONLY) - cifs_dbg(FYI, "attempting read on write only file instance\n"); - - ctx = cifs_aio_ctx_alloc(); - if (!ctx) - return -ENOMEM; - - ctx->pos = offset; - ctx->direct_io = direct; - ctx->len = len; - ctx->cfile = cifsFileInfo_get(cfile); - ctx->nr_pinned_pages = 0; - - if (!is_sync_kiocb(iocb)) - ctx->iocb = iocb; - - if (user_backed_iter(to)) { - /* - * Extract IOVEC/UBUF-type iterators to a BVEC-type iterator as - * they contain references to the calling process's virtual - * memory layout which won't be available in an async worker - * thread. This also takes a pin on every folio involved. 
- */ - rc = netfs_extract_user_iter(to, iov_iter_count(to), - &ctx->iter, 0); - if (rc < 0) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - ctx->nr_pinned_pages = rc; - ctx->bv = (void *)ctx->iter.bvec; - ctx->bv_need_unpin = iov_iter_extract_will_pin(to); - ctx->should_dirty = true; - } else if ((iov_iter_is_bvec(to) || iov_iter_is_kvec(to)) && - !is_sync_kiocb(iocb)) { - /* - * If the op is asynchronous, we need to copy the list attached - * to a BVEC/KVEC-type iterator, but we assume that the storage - * will be retained by the caller; in any case, we may or may - * not be able to pin the pages, so we don't try. - */ - ctx->bv = (void *)dup_iter(&ctx->iter, to, GFP_KERNEL); - if (!ctx->bv) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -ENOMEM; - } - } else { - /* - * Otherwise, we just pass the iterator down as-is and rely on - * the caller to make sure the pages referred to by the - * iterator don't evaporate. - */ - ctx->iter = *to; - } - - if (direct) { - rc = filemap_write_and_wait_range(file->f_inode->i_mapping, - offset, offset + len - 1); - if (rc) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -EAGAIN; - } - } - - /* grab a lock here due to read response handlers can access ctx */ - mutex_lock(&ctx->aio_mutex); - - rc = cifs_send_async_read(offset, len, cfile, cifs_sb, &ctx->list, ctx); - - /* if at least one read request send succeeded, then reset rc */ - if (!list_empty(&ctx->list)) - rc = 0; - - mutex_unlock(&ctx->aio_mutex); - - if (rc) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - if (!is_sync_kiocb(iocb)) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -EIOCBQUEUED; - } - - rc = wait_for_completion_killable(&ctx->done); - if (rc) { - mutex_lock(&ctx->aio_mutex); - ctx->rc = rc = -EINTR; - total_read = ctx->total_len; - mutex_unlock(&ctx->aio_mutex); - } else { - rc = ctx->rc; - total_read = ctx->total_len; - } - - kref_put(&ctx->refcount, 
cifs_aio_ctx_release); - - if (total_read) { - iocb->ki_pos += total_read; - return total_read; - } - return rc; -} - -ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to) -{ - return __cifs_readv(iocb, to, true); -} - -ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) -{ - return __cifs_readv(iocb, to, false); - -} -#endif // end netfslib removal 4143 - ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) { ssize_t rc; @@ -3794,101 +2886,6 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) return rc; } -#if 0 // TODO remove 4633 -static ssize_t -cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) -{ - int rc = -EACCES; - unsigned int bytes_read = 0; - unsigned int total_read; - unsigned int current_read_size; - unsigned int rsize; - struct cifs_sb_info *cifs_sb; - struct cifs_tcon *tcon; - struct TCP_Server_Info *server; - unsigned int xid; - char *cur_offset; - struct cifsFileInfo *open_file; - struct cifs_io_parms io_parms = {0}; - int buf_type = CIFS_NO_BUFFER; - __u32 pid; - - xid = get_xid(); - cifs_sb = CIFS_FILE_SB(file); - - /* FIXME: set up handlers for larger reads and/or convert to async */ - rsize = min_t(unsigned int, cifs_sb->ctx->rsize, CIFSMaxBufSize); - - if (file->private_data == NULL) { - rc = -EBADF; - free_xid(xid); - return rc; - } - open_file = file->private_data; - tcon = tlink_tcon(open_file->tlink); - server = cifs_pick_channel(tcon->ses); - - if (!server->ops->sync_read) { - free_xid(xid); - return -ENOSYS; - } - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - if ((file->f_flags & O_ACCMODE) == O_WRONLY) - cifs_dbg(FYI, "attempting read on write only file instance\n"); - - for (total_read = 0, cur_offset = read_data; read_size > total_read; - total_read += bytes_read, cur_offset += bytes_read) { - do { - current_read_size = min_t(uint, read_size - total_read, - rsize); - /* - * For 
windows me and 9x we do not want to request more - * than it negotiated since it will refuse the read - * then. - */ - if (!(tcon->ses->capabilities & - tcon->ses->server->vals->cap_large_files)) { - current_read_size = min_t(uint, - current_read_size, CIFSMaxBufSize); - } - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, true); - if (rc != 0) - break; - } - io_parms.pid = pid; - io_parms.tcon = tcon; - io_parms.offset = *offset; - io_parms.length = current_read_size; - io_parms.server = server; - rc = server->ops->sync_read(xid, &open_file->fid, &io_parms, - &bytes_read, &cur_offset, - &buf_type); - } while (rc == -EAGAIN); - - if (rc || (bytes_read == 0)) { - if (total_read) { - break; - } else { - free_xid(xid); - return rc; - } - } else { - cifs_stats_bytes_read(tcon, total_read); - *offset += bytes_read; - } - } - free_xid(xid); - return total_read; -} -#endif // end netfslib remove 4633 - - static vm_fault_t cifs_page_mkwrite(struct vm_fault *vmf) { return netfs_page_mkwrite(vmf, NULL);