From patchwork Thu Oct 24 14:04:58 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13849094
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 00/27] netfs: Read performance improvements and "single-blob" support
Date: Thu, 24 Oct 2024 15:04:58 +0100
Message-ID: <20241024140539.3828093-1-dhowells@redhat.com>
MIME-Version: 1.0
Hi Christian, Steve, Willy,

This set of patches is primarily about two things: improving read
performance and supporting monolithic single-blob objects that have to be
read/written as such (e.g. AFS directory contents).  The implementation of
the two parts is interwoven as each makes the other possible.

READ PERFORMANCE
================

The read performance improvements are intended to recover some loss of
performance detected in cifs and, to a lesser extent, in afs.
The problem is that we queue too many work items during the collection of
read results: each individual subrequest is collected by its own work item,
and then they have to interact with each other when a series of subrequests
doesn't exactly align with the pattern of folios that are being read by the
overall request.

Whilst the processing of the pages covered by individual subrequests as
they complete potentially allows folios to be woken in parallel and with
minimum delay, it can shuffle wakeups for sequential reads out of order -
and that is the most common I/O pattern.

The final assessment and cleanup of an operation is then held up until the
last I/O completes - and for a synchronous sequential operation, this means
the bouncing around of work items just adds latency.

Two changes have been made to make this work:

 (1) All collection is now done in a single "work item" that works
     progressively through the subrequests as they complete (and also
     dispatches retries as necessary).

 (2) For readahead and AIO, this work item is run on a workqueue and can
     run in parallel with the ultimate consumer of the data; for
     synchronous direct or unbuffered reads, the collection is run in the
     application thread and not offloaded.

Functions such as smb2_readv_callback() then just tell netfslib that the
subrequest has terminated; netfslib does a minimal bit of processing on the
spot - stat counting and tracing mostly - and then queues/wakes up the
worker.  This simplifies the logic as the collector just walks sequentially
through the subrequests as they complete and walks through the folios, if
buffered, unlocking them as it goes.  It also keeps to a minimum the amount
of latency injected into the filesystem's low-level I/O handling.

SINGLE-BLOB OBJECT SUPPORT
==========================

Single-blob objects are files for which the content of the file must be
read from or written to the server in a single operation because reading
them in parts may yield inconsistent results.
AFS directories are an example of this as there exists the possibility that
the contents are generated on the fly and would differ between reads or
might change due to third-party interference.

Such objects will be written to and retrieved from the cache if one is
present, though we allow/may need to propose multiple subrequests to do so.
The important part is that the read from/write to the *server* is
monolithic.

Single-blob reading is, for the moment, fully synchronous and does result
collection in the application thread and, also for the moment, the API is
supplied the buffer in the form of a folio_queue chain rather than using
the pagecache.

AFS CHANGES
===========

This series makes a number of changes to the kafs filesystem, primarily in
the area of directory handling:

 (1) AFS's FetchData RPC reply processing is made partially asynchronous,
     which allows the netfs_io_request's outstanding operation counter to
     be removed as part of reducing the collection to a single work item.

 (2) Directory and symlink reading are plumbed through netfslib using the
     single-blob object API and are now cacheable with fscache.  This also
     allows the afs_read struct to be eliminated and netfs_io_subrequest to
     be used directly instead.

 (3) Directory and symlink content are now stored in a folio_queue buffer
     rather than in the pagecache.  This means we don't require the RCU
     read lock and xarray iteration to access it, and folios won't randomly
     disappear under us because the VM wants them back.

     There are some downsides to this, though: the storage folios are no
     longer known to the VM, drop_caches can't flush them, and the folios
     are not migratable.  The inode must also be marked dirty manually to
     get the data written to the cache in the background.

 (4) The vnode operation lock is changed from a mutex struct to a private
     lock implementation.  The problem is that the lock now needs to be
     dropped in a separate thread and mutexes don't permit that.
 (5) When a new directory is created, we now initialise it locally and mark
     it valid rather than downloading it (we know what it's likely to look
     like).

SUPPORTING CHANGES
==================

To support the above, some other changes are also made:

 (1) A "rolling buffer" implementation is created to abstract out the two
     separate folio_queue chaining implementations I had (one for read and
     one for write).

 (2) Functions are provided to create/extend a buffer in a folio_queue
     chain and tear it down again.  This is used to handle AFS directories,
     but could also be used to create bounce buffers for content crypto and
     transport crypto.

 (3) The was_async argument is dropped from netfs_read_subreq_terminated().
     Instead we wake the read collection work item by either queuing it or
     waking up the app thread.

 (4) We don't need to use BH-excluding locks when communicating between the
     issuing thread and the collection thread as neither of them now runs
     in BH context.

MISCELLANY
==========

Also included are some fixes from Matthew Wilcox that need to be applied
first; a number of new tracepoints; and a split of the netfslib write
collection code to put retrying into its own file (it gets more complicated
with content encryption).

There are also some minor AFS fixes included: fixing the AFS directory
format struct layout, reducing some directory over-invalidation and making
afs_rmdir() translate EEXIST to ENOTEMPTY (which is not available on all
systems the servers support).
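As an illustration of the single-collector scheme described under READ
PERFORMANCE above, here is a minimal userspace model (all names here are
hypothetical, not the kernel API): completions just flip a flag, and one
collector walks the subrequests strictly in order, stopping at the first
one that hasn't finished, so folio wakeups for sequential reads happen in
order:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified model of a netfs subrequest. */
struct subreq {
	bool done;       /* set by the I/O completion path */
	bool collected;  /* set by the single collector */
};

/* Completion path: minimal work on the spot; the real code would then
 * queue the collector work item or wake the application thread. */
static void subreq_terminated(struct subreq *s)
{
	s->done = true;
}

/* Collector: walk sequentially, stopping at the first unfinished
 * subrequest - it cannot pass a "hole".  Folios covered by the collected
 * prefix could be unlocked at this point.  Returns how many subrequests
 * have been collected so far. */
static size_t collect(struct subreq *subs, size_t n)
{
	size_t collected = 0;

	for (size_t i = 0; i < n; i++) {
		if (!subs[i].done)
			break;
		subs[i].collected = true;
		collected++;
	}
	return collected;
}
```

Note how an out-of-order completion (say the last subrequest finishing
first) leaves the collector parked until the earlier ones complete, which
is exactly what keeps sequential wakeups in order.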
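The vnode operation lock change in AFS CHANGES item (4) boils down to an
ownership rule: a kernel mutex must be released by the task that took it,
whereas the new private lock may be dropped from a different thread.  A
toy userspace sketch of just that rule (the names and the bare-flag
representation are invented for illustration; the real implementation
would pair the flag with a spinlock and a wait queue):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the afs vnode operation lock. */
struct op_lock {
	bool held;
};

/* Take the lock if it is free. */
static bool op_trylock(struct op_lock *l)
{
	if (l->held)
		return false;
	l->held = true;
	return true;
}

/* Deliberately no owner check: unlike mutex_unlock(), any context -
 * including a thread other than the locker - may drop the lock. */
static void op_unlock(struct op_lock *l)
{
	l->held = false;
}
```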
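The create/extend/tear-down buffer helpers from SUPPORTING CHANGES items
(1) and (2) can be pictured as operations on a chain of fixed-size
segments.  The sketch below is a userspace analogue, not the kernel
folio_queue API: `struct seg`/`struct chain` and the slot count are made
up, and `void *` slots stand in for folios:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define SEG_SLOTS 4  /* invented; a real folio_queue segment holds more */

/* One link in the chain, holding a fixed number of buffer slots. */
struct seg {
	struct seg *next;
	size_t nr;              /* slots used in this segment */
	void *slot[SEG_SLOTS];  /* would be folios in the real code */
};

struct chain {
	struct seg *head, *tail;
	size_t total;
};

/* Append one slot, extending the chain with a fresh segment as needed;
 * returns 0 on success, -1 on allocation failure. */
static int chain_append(struct chain *c, void *p)
{
	if (!c->tail || c->tail->nr == SEG_SLOTS) {
		struct seg *s = calloc(1, sizeof(*s));

		if (!s)
			return -1;
		if (c->tail)
			c->tail->next = s;
		else
			c->head = s;
		c->tail = s;
	}
	c->tail->slot[c->tail->nr++] = p;
	c->total++;
	return 0;
}

/* Tear the whole buffer down again, freeing every segment. */
static void chain_clear(struct chain *c)
{
	struct seg *s = c->head;

	while (s) {
		struct seg *next = s->next;

		free(s);
		s = next;
	}
	c->head = c->tail = NULL;
	c->total = 0;
}
```

Because the content lives in this chain rather than the pagecache, walking
it is a plain pointer chase - no xarray iteration or RCU read lock - which
is the property the AFS directory changes rely on.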
The patches can also be found here:

	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=netfs-writeback

Thanks,
David

David Howells (24):
  netfs: Use a folio_queue allocation and free functions
  netfs: Add a tracepoint to log the lifespan of folio_queue structs
  netfs: Abstract out a rolling folio buffer implementation
  netfs: Make netfs_advance_write() return size_t
  netfs: Split retry code out of fs/netfs/write_collect.c
  netfs: Drop the error arg from netfs_read_subreq_terminated()
  netfs: Drop the was_async arg from netfs_read_subreq_terminated()
  netfs: Don't use bh spinlock
  afs: Don't use mutex for I/O operation lock
  afs: Fix EEXIST error returned from afs_rmdir() to be ENOTEMPTY
  afs: Fix directory format encoding struct
  netfs: Remove some extraneous directory invalidations
  cachefiles: Add some subrequest tracepoints
  cachefiles: Add auxiliary data trace
  afs: Add more tracepoints to do with tracking validity
  netfs: Add functions to build/clean a buffer in a folio_queue
  netfs: Add support for caching single monolithic objects such as AFS dirs
  afs: Make afs_init_request() get a key if not given a file
  afs: Use netfslib for directories
  afs: Use netfslib for symlinks, allowing them to be cached
  afs: Eliminate afs_read
  afs: Make {Y,}FS.FetchData an asynchronous operation
  netfs: Change the read result collector to only use one work item
  afs: Make afs_mkdir() locally initialise a new directory's content

Matthew Wilcox (Oracle) (3):
  netfs: Remove call to folio_index()
  netfs: Fix a few minor bugs in netfs_page_mkwrite()
  netfs: Remove unnecessary references to pages

 fs/9p/vfs_addr.c                  |   8 +-
 fs/afs/callback.c                 |   4 +-
 fs/afs/dir.c                      | 743 ++++++++++++++++--------------
 fs/afs/dir_edit.c                 | 265 ++++++-----
 fs/afs/file.c                     | 244 +++++-----
 fs/afs/fs_operation.c             | 113 ++++-
 fs/afs/fsclient.c                 |  59 +--
 fs/afs/inode.c                    | 104 ++++-
 fs/afs/internal.h                 |  97 ++--
 fs/afs/main.c                     |   2 +-
 fs/afs/mntpt.c                    |  22 +-
 fs/afs/rotate.c                   |   4 +-
 fs/afs/rxrpc.c                    |   8 +-
 fs/afs/super.c                    |   4 +-
 fs/afs/validation.c               |  31 +-
 fs/afs/write.c                    |  16 +-
 fs/afs/xdr_fs.h                   |   2 +-
 fs/afs/yfsclient.c                |  48 +-
 fs/cachefiles/io.c                |   4 +
 fs/cachefiles/xattr.c             |   9 +-
 fs/ceph/addr.c                    |  13 +-
 fs/netfs/Makefile                 |   5 +-
 fs/netfs/buffered_read.c          | 274 ++-----
 fs/netfs/buffered_write.c         |  41 +-
 fs/netfs/direct_read.c            |  80 ++--
 fs/netfs/direct_write.c           |  10 +-
 fs/netfs/internal.h               |  36 +-
 fs/netfs/main.c                   |   6 +-
 fs/netfs/misc.c                   | 163 +++----
 fs/netfs/objects.c                |  21 +-
 fs/netfs/read_collect.c           | 703 ++++++++++++++++------------
 fs/netfs/read_pgpriv2.c           |  35 +-
 fs/netfs/read_retry.c             | 209 +++++----
 fs/netfs/read_single.c            | 195 ++++++++
 fs/netfs/rolling_buffer.c         | 225 +++++++++
 fs/netfs/stats.c                  |   4 +-
 fs/netfs/write_collect.c          | 244 +---------
 fs/netfs/write_issue.c            | 239 +++++++++-
 fs/netfs/write_retry.c            | 233 ++++++++++
 fs/nfs/fscache.c                  |   6 +-
 fs/nfs/fscache.h                  |   3 +-
 fs/smb/client/cifssmb.c           |  12 +-
 fs/smb/client/file.c              |   3 +-
 fs/smb/client/smb2ops.c           |   2 +-
 fs/smb/client/smb2pdu.c           |  14 +-
 include/linux/folio_queue.h       |  12 +-
 include/linux/netfs.h             |  55 ++-
 include/linux/rolling_buffer.h    |  61 +++
 include/trace/events/afs.h        | 178 ++++-
 include/trace/events/cachefiles.h |  13 +-
 include/trace/events/netfs.h      |  97 ++--
 lib/kunit_iov_iter.c              |   4 +-
 52 files changed, 3147 insertions(+), 1836 deletions(-)
 create mode 100644 fs/netfs/read_single.c
 create mode 100644 fs/netfs/rolling_buffer.c
 create mode 100644 fs/netfs/write_retry.c
 create mode 100644 include/linux/rolling_buffer.h