From patchwork Wed Aug 14 20:38:21 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763949
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Max Kellermann, Xiubo Li
Subject: [PATCH v2 01/25] netfs, ceph: Partially revert "netfs: Replace
 PG_fscache by setting folio->private and marking dirty"
Date: Wed, 14 Aug 2024 21:38:21 +0100
Message-ID: <20240814203850.2240469-2-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
This partially reverts commit 2ff1e97587f4d398686f52c07afde3faf3da4e5c.

In addition to reverting the removal of PG_private_2 wrangling from the
buffered read code[1][2], the removal of the waits for PG_private_2 from
netfs_release_folio() and netfs_invalidate_folio() needs reverting too.
It also adds a wait into ceph_evict_inode() to wait for netfs read and
copy-to-cache ops to complete.

Fixes: 2ff1e97587f4 ("netfs: Replace PG_fscache by setting folio->private and marking dirty")
Signed-off-by: David Howells
cc: Max Kellermann
cc: Ilya Dryomov
cc: Xiubo Li
cc: Jeff Layton
cc: Matthew Wilcox
cc: ceph-devel@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/3575457.1722355300@warthog.procyon.org.uk [1]
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8e5ced7804cb9184c4a23f8054551240562a8eda [2]
---
 fs/ceph/inode.c | 1 +
 fs/netfs/misc.c | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index 71cd70514efa..4a8eec46254b 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -695,6 +695,7 @@ void ceph_evict_inode(struct inode *inode)
 
         percpu_counter_dec(&mdsc->metric.total_inodes);
 
+        netfs_wait_for_outstanding_io(inode);
         truncate_inode_pages_final(&inode->i_data);
         if (inode->i_state & I_PINNING_NETFS_WB)
                 ceph_fscache_unuse_cookie(inode, true);

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 83e644bd518f..554a1a4615ad 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -101,6 +101,8 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 
         _enter("{%lx},%zx,%zx", folio->index, offset, length);
 
+        folio_wait_private_2(folio); /* [DEPRECATED] */
+
         if (!folio_test_private(folio))
                 return;
 
@@ -165,6 +167,11 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
         if (folio_test_private(folio))
                 return false;
 
+        if (unlikely(folio_test_private_2(folio))) { /* [DEPRECATED] */
+                if (current_is_kswapd() || !(gfp & __GFP_FS))
+                        return false;
+                folio_wait_private_2(folio);
+        }
         fscache_note_page_release(netfs_i_cookie(ctx));
         return true;
 }
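For context, the ordering introduced above can be sketched on its own as
follows.  This is an illustrative sketch only, not part of the patch; the
function name example_evict_inode() is made up, while
netfs_wait_for_outstanding_io() and truncate_inode_pages_final() are the
calls used in the hunk above.

/* Sketch: wait for in-flight netfs reads and copy-to-cache writes before
 * tearing down the pagecache, so that eviction cannot race with them.
 */
static void example_evict_inode(struct inode *inode)
{
        netfs_wait_for_outstanding_io(inode);
        truncate_inode_pages_final(&inode->i_data);
        clear_inode(inode);
}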
From patchwork Wed Aug 14 20:38:22 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763951
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christian Brauner, Gao Xiang
Subject: [PATCH v2 02/25] cachefiles: Fix non-taking of sb_writers around set/removexattr
Date: Wed, 14 Aug 2024 21:38:22 +0100
Message-ID: <20240814203850.2240469-3-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>

Unlike other vfs_xxxx() calls, vfs_setxattr() and vfs_removexattr() don't
take the sb_writers lock, so the caller should do it for them.

Fix cachefiles to do this.
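As an illustration of that rule (a sketch only, not part of the patch;
example_set_xattr(), "mnt", "dentry", "name", "value" and "size" are
stand-ins for the caller's own state), the caller-side bracket looks like:

static int example_set_xattr(struct vfsmount *mnt, struct dentry *dentry,
                             const char *name, const void *value, size_t size)
{
        int ret = mnt_want_write(mnt);  /* takes write access, incl. sb_writers */

        if (ret == 0) {
                ret = vfs_setxattr(&nop_mnt_idmap, dentry, name, value, size, 0);
                mnt_drop_write(mnt);    /* pairs with mnt_want_write() */
        }
        return ret;
}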
Fixes: 9ae326a69004 ("CacheFiles: A cache that backs onto a mounted filesystem")
Signed-off-by: David Howells
cc: Christian Brauner
cc: Gao Xiang
cc: netfs@lists.linux.dev
cc: linux-erofs@lists.ozlabs.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/cachefiles/xattr.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
index 4dd8a993c60a..7c6f260a3be5 100644
--- a/fs/cachefiles/xattr.c
+++ b/fs/cachefiles/xattr.c
@@ -64,9 +64,15 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
         memcpy(buf->data, fscache_get_aux(object->cookie), len);
 
         ret = cachefiles_inject_write_error();
-        if (ret == 0)
-                ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache,
-                                   buf, sizeof(struct cachefiles_xattr) + len, 0);
+        if (ret == 0) {
+                ret = mnt_want_write_file(file);
+                if (ret == 0) {
+                        ret = vfs_setxattr(&nop_mnt_idmap, dentry,
+                                           cachefiles_xattr_cache, buf,
+                                           sizeof(struct cachefiles_xattr) + len, 0);
+                        mnt_drop_write_file(file);
+                }
+        }
         if (ret < 0) {
                 trace_cachefiles_vfs_error(object, file_inode(file), ret,
                                            cachefiles_trace_setxattr_error);
@@ -151,8 +157,14 @@ int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
         int ret;
 
         ret = cachefiles_inject_remove_error();
-        if (ret == 0)
-                ret = vfs_removexattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache);
+        if (ret == 0) {
+                ret = mnt_want_write(cache->mnt);
+                if (ret == 0) {
+                        ret = vfs_removexattr(&nop_mnt_idmap, dentry,
+                                              cachefiles_xattr_cache);
+                        mnt_drop_write(cache->mnt);
+                }
+        }
         if (ret < 0) {
                 trace_cachefiles_vfs_error(object, d_inode(dentry), ret,
                                            cachefiles_trace_remxattr_error);
@@ -208,9 +220,15 @@ bool cachefiles_set_volume_xattr(struct cachefiles_volume *volume)
         memcpy(buf->data, p, volume->vcookie->coherency_len);
 
         ret = cachefiles_inject_write_error();
-        if (ret == 0)
-                ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache,
-                                   buf, len, 0);
+        if (ret == 0) {
+                ret = mnt_want_write(volume->cache->mnt);
+                if (ret == 0) {
+                        ret = vfs_setxattr(&nop_mnt_idmap, dentry,
+                                           cachefiles_xattr_cache,
+                                           buf, len, 0);
+                        mnt_drop_write(volume->cache->mnt);
+                }
+        }
         if (ret < 0) {
                 trace_cachefiles_vfs_error(NULL, d_inode(dentry), ret,
                                            cachefiles_trace_setxattr_error);
From patchwork Wed Aug 14 20:38:23 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763950
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/25] netfs: Adjust labels in /proc/fs/netfs/stats
Date: Wed, 14 Aug 2024 21:38:23 +0100
Message-ID: <20240814203850.2240469-4-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>

Adjust the labels in /proc/fs/netfs/stats that refer to netfs-specific
counters.  These currently all begin with "Netfs"; change them to begin
with more specific labels.
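For reference, the netfs-specific lines of /proc/fs/netfs/stats then take
the following shape (format only, as derived from the seq_printf() format
strings in the diff below; "<n>" stands for a counter value):

        Reads : DR=<n> RA=<n> RF=<n> WB=<n> WBZ=<n>
        Writes : BW=<n> WT=<n> DW=<n> WP=<n>
        ZeroOps: ZR=<n> sh=<n> sk=<n>
        DownOps: DL=<n> ds=<n> df=<n> di=<n>
        CaRdOps: RD=<n> rs=<n> rf=<n>
        UpldOps: UL=<n> us=<n> uf=<n>
        CaWrOps: WR=<n> ws=<n> wf=<n>
        Objs : rr=<n> sr=<n> wsc=<n>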
Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/stats.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index 0892768eea32..95ed2d2623a8 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -42,39 +42,39 @@ atomic_t netfs_n_wh_write_failed;
 
 int netfs_stats_show(struct seq_file *m, void *v)
 {
-        seq_printf(m, "Netfs : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n",
+        seq_printf(m, "Reads : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n",
                    atomic_read(&netfs_n_rh_dio_read),
                    atomic_read(&netfs_n_rh_readahead),
                    atomic_read(&netfs_n_rh_read_folio),
                    atomic_read(&netfs_n_rh_write_begin),
                    atomic_read(&netfs_n_rh_write_zskip));
-        seq_printf(m, "Netfs : BW=%u WT=%u DW=%u WP=%u\n",
+        seq_printf(m, "Writes : BW=%u WT=%u DW=%u WP=%u\n",
                    atomic_read(&netfs_n_wh_buffered_write),
                    atomic_read(&netfs_n_wh_writethrough),
                    atomic_read(&netfs_n_wh_dio_write),
                    atomic_read(&netfs_n_wh_writepages));
-        seq_printf(m, "Netfs : ZR=%u sh=%u sk=%u\n",
+        seq_printf(m, "ZeroOps: ZR=%u sh=%u sk=%u\n",
                    atomic_read(&netfs_n_rh_zero),
                    atomic_read(&netfs_n_rh_short_read),
                    atomic_read(&netfs_n_rh_write_zskip));
-        seq_printf(m, "Netfs : DL=%u ds=%u df=%u di=%u\n",
+        seq_printf(m, "DownOps: DL=%u ds=%u df=%u di=%u\n",
                    atomic_read(&netfs_n_rh_download),
                    atomic_read(&netfs_n_rh_download_done),
                    atomic_read(&netfs_n_rh_download_failed),
                    atomic_read(&netfs_n_rh_download_instead));
-        seq_printf(m, "Netfs : RD=%u rs=%u rf=%u\n",
+        seq_printf(m, "CaRdOps: RD=%u rs=%u rf=%u\n",
                    atomic_read(&netfs_n_rh_read),
                    atomic_read(&netfs_n_rh_read_done),
                    atomic_read(&netfs_n_rh_read_failed));
-        seq_printf(m, "Netfs : UL=%u us=%u uf=%u\n",
+        seq_printf(m, "UpldOps: UL=%u us=%u uf=%u\n",
                    atomic_read(&netfs_n_wh_upload),
                    atomic_read(&netfs_n_wh_upload_done),
                    atomic_read(&netfs_n_wh_upload_failed));
-        seq_printf(m, "Netfs : WR=%u ws=%u wf=%u\n",
+        seq_printf(m, "CaWrOps: WR=%u ws=%u wf=%u\n",
                    atomic_read(&netfs_n_wh_write),
                    atomic_read(&netfs_n_wh_write_done),
                    atomic_read(&netfs_n_wh_write_failed));
-        seq_printf(m, "Netfs : rr=%u sr=%u wsc=%u\n",
+        seq_printf(m, "Objs : rr=%u sr=%u wsc=%u\n",
                    atomic_read(&netfs_n_rh_rreq),
                    atomic_read(&netfs_n_rh_sreq),
                    atomic_read(&netfs_n_wh_wstream_conflict));
From patchwork Wed Aug 14 20:38:24 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763952
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Steve French
Subject: [PATCH v2 04/25] netfs: Record contention stats for writeback lock
Date: Wed, 14 Aug 2024 21:38:24 +0100
Message-ID: <20240814203850.2240469-5-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>

Record statistics for contention upon the writeback serialisation lock
that prevents racing writeback calls from causing each other to interleave
their writebacks.  These can be viewed in /proc/fs/netfs/stats on the
WbLock line, with skip=N indicating the number of non-SYNC writebacks
skipped and wait=N indicating the number of SYNC writebacks that waited.
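For reference, the line added to /proc/fs/netfs/stats has this shape
(format only, as derived from the seq_printf() string in the diff below;
"<n>" stands for a counter value):

        WbLock : skip=<n> wait=<n>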
Signed-off-by: David Howells
cc: Jeff Layton
cc: Steve French
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/internal.h    |  2 ++
 fs/netfs/stats.c       |  5 +++++
 fs/netfs/write_issue.c | 10 +++++++---
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 7773f3d855a9..9e6e0e59d7e4 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -117,6 +117,8 @@ extern atomic_t netfs_n_wh_upload_failed;
 extern atomic_t netfs_n_wh_write;
 extern atomic_t netfs_n_wh_write_done;
 extern atomic_t netfs_n_wh_write_failed;
+extern atomic_t netfs_n_wb_lock_skip;
+extern atomic_t netfs_n_wb_lock_wait;
 
 int netfs_stats_show(struct seq_file *m, void *v);

diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index 95ed2d2623a8..5fe1c396e24f 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -39,6 +39,8 @@ atomic_t netfs_n_wh_upload_failed;
 atomic_t netfs_n_wh_write;
 atomic_t netfs_n_wh_write_done;
 atomic_t netfs_n_wh_write_failed;
+atomic_t netfs_n_wb_lock_skip;
+atomic_t netfs_n_wb_lock_wait;
 
 int netfs_stats_show(struct seq_file *m, void *v)
 {
@@ -78,6 +80,9 @@ int netfs_stats_show(struct seq_file *m, void *v)
                    atomic_read(&netfs_n_rh_rreq),
                    atomic_read(&netfs_n_rh_sreq),
                    atomic_read(&netfs_n_wh_wstream_conflict));
+        seq_printf(m, "WbLock : skip=%u wait=%u\n",
+                   atomic_read(&netfs_n_wb_lock_skip),
+                   atomic_read(&netfs_n_wb_lock_wait));
         return fscache_stats_show(m);
 }
 EXPORT_SYMBOL(netfs_stats_show);

diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 3f7e37e50c7d..44f35a0faaca 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -505,10 +505,14 @@ int netfs_writepages(struct address_space *mapping,
         struct folio *folio;
         int error = 0;
 
-        if (wbc->sync_mode == WB_SYNC_ALL)
+        if (!mutex_trylock(&ictx->wb_lock)) {
+                if (wbc->sync_mode == WB_SYNC_NONE) {
+                        netfs_stat(&netfs_n_wb_lock_skip);
+                        return 0;
+                }
+                netfs_stat(&netfs_n_wb_lock_wait);
                 mutex_lock(&ictx->wb_lock);
-        else if (!mutex_trylock(&ictx->wb_lock))
-                return 0;
+        }
 
         /* Need the first folio to be able to set up the op. */
         folio = writeback_iter(mapping, wbc, NULL, &error);
From patchwork Wed Aug 14 20:38:25 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763953
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/25] netfs: Reduce number of conditional branches in netfs_perform_write()
Date: Wed, 14 Aug 2024 21:38:25 +0100
Message-ID: <20240814203850.2240469-6-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>

Reduce the number of conditional branches in netfs_perform_write() by
merging in netfs_how_to_modify() and then creating a separate if-statement
for each way we might modify a folio.  Note that this means replicating
the data copy in each path.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/buffered_write.c    | 299 ++++++++++++++++-------------------
 include/trace/events/netfs.h |   2 -
 2 files changed, 134 insertions(+), 167 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index ca53c5d1622e..327e5904b090 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -13,91 +13,22 @@
 #include
 #include "internal.h"
 
-/*
- * Determined write method.  Adjust netfs_folio_traces if this is changed.
- */
-enum netfs_how_to_modify {
-        NETFS_FOLIO_IS_UPTODATE,        /* Folio is uptodate already */
-        NETFS_JUST_PREFETCH,            /* We have to read the folio anyway */
-        NETFS_WHOLE_FOLIO_MODIFY,       /* We're going to overwrite the whole folio */
-        NETFS_MODIFY_AND_CLEAR,         /* We can assume there is no data to be downloaded. */
-        NETFS_STREAMING_WRITE,          /* Store incomplete data in non-uptodate page. */
-        NETFS_STREAMING_WRITE_CONT,     /* Continue streaming write. */
-        NETFS_FLUSH_CONTENT,            /* Flush incompatible content. */
-};
-
-static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group)
+static void __netfs_set_group(struct folio *folio, struct netfs_group *netfs_group)
 {
-        void *priv = folio_get_private(folio);
-
-        if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE))
+        if (netfs_group)
                 folio_attach_private(folio, netfs_get_group(netfs_group));
-        else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE)
-                folio_detach_private(folio);
 }
 
-/*
- * Decide how we should modify a folio.  We might be attempting to do
- * write-streaming, in which case we don't want to a local RMW cycle if we can
- * avoid it.  If we're doing local caching or content crypto, we award that
- * priority over avoiding RMW.  If the file is open readably, then we also
- * assume that we may want to read what we wrote.
- */
-static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx,
-                                                    struct file *file,
-                                                    struct folio *folio,
-                                                    void *netfs_group,
-                                                    size_t flen,
-                                                    size_t offset,
-                                                    size_t len,
-                                                    bool maybe_trouble)
+static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group)
 {
-        struct netfs_folio *finfo = netfs_folio_info(folio);
-        struct netfs_group *group = netfs_folio_group(folio);
-        loff_t pos = folio_pos(folio);
-
-        _enter("");
-
-        if (group != netfs_group && group != NETFS_FOLIO_COPY_TO_CACHE)
-                return NETFS_FLUSH_CONTENT;
-
-        if (folio_test_uptodate(folio))
-                return NETFS_FOLIO_IS_UPTODATE;
-
-        if (pos >= ctx->zero_point)
-                return NETFS_MODIFY_AND_CLEAR;
-
-        if (!maybe_trouble && offset == 0 && len >= flen)
-                return NETFS_WHOLE_FOLIO_MODIFY;
-
-        if (file->f_mode & FMODE_READ)
-                goto no_write_streaming;
-
-        if (netfs_is_cache_enabled(ctx)) {
-                /* We don't want to get a streaming write on a file that loses
-                 * caching service temporarily because the backing store got
-                 * culled.
-                 */
-                goto no_write_streaming;
-        }
+        void *priv = folio_get_private(folio);
 
-        if (!finfo)
-                return NETFS_STREAMING_WRITE;
-
-        /* We can continue a streaming write only if it continues on from the
-         * previous.  If it overlaps, we must flush lest we suffer a partial
-         * copy and disjoint dirty regions.
-         */
-        if (offset == finfo->dirty_offset + finfo->dirty_len)
-                return NETFS_STREAMING_WRITE_CONT;
-        return NETFS_FLUSH_CONTENT;
-
-no_write_streaming:
-        if (finfo) {
-                netfs_stat(&netfs_n_wh_wstream_conflict);
-                return NETFS_FLUSH_CONTENT;
+        if (unlikely(priv != netfs_group)) {
+                if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE))
+                        folio_attach_private(folio, netfs_get_group(netfs_group));
+                else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE)
+                        folio_detach_private(folio);
         }
-        return NETFS_JUST_PREFETCH;
 }
 
 /*
@@ -177,13 +108,10 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                 .range_end      = iocb->ki_pos + iter->count,
         };
         struct netfs_io_request *wreq = NULL;
-        struct netfs_folio *finfo;
-        struct folio *folio, *writethrough = NULL;
-        enum netfs_how_to_modify howto;
-        enum netfs_folio_trace trace;
+        struct folio *folio = NULL, *writethrough = NULL;
         unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0;
         ssize_t written = 0, ret, ret2;
-        loff_t i_size, pos = iocb->ki_pos, from, to;
+        loff_t i_size, pos = iocb->ki_pos;
         size_t max_chunk = mapping_max_folio_size(mapping);
         bool maybe_trouble = false;
 
@@ -213,15 +141,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
         }
 
         do {
+                struct netfs_folio *finfo;
+                struct netfs_group *group;
+                unsigned long long fpos;
                 size_t flen;
                 size_t offset;  /* Offset into pagecache folio */
                 size_t part;    /* Bytes to write to folio */
                 size_t copied;  /* Bytes copied from user */
 
-                ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags);
-                if (unlikely(ret < 0))
-                        break;
-
                 offset = pos & (max_chunk - 1);
                 part = min(max_chunk - offset, iov_iter_count(iter));
 
@@ -247,7 +174,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                 }
 
                 flen = folio_size(folio);
-                offset = pos & (flen - 1);
+                fpos = folio_pos(folio);
+                offset = pos - fpos;
                 part = min_t(size_t, flen - offset, part);
 
                 /* Wait for writeback to complete.  The writeback engine owns
@@ -265,71 +193,52 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                         goto error_folio_unlock;
                 }
 
-                /* See if we need to prefetch the area we're going to modify.
-                 * We need to do this before we get a lock on the folio in case
-                 * there's more than one writer competing for the same cache
-                 * block.
+                /* Decide how we should modify a folio.  We might be attempting
+                 * to do write-streaming, in which case we don't want to a
+                 * local RMW cycle if we can avoid it.  If we're doing local
+                 * caching or content crypto, we award that priority over
+                 * avoiding RMW.  If the file is open readably, then we also
+                 * assume that we may want to read what we wrote.
                  */
-                howto = netfs_how_to_modify(ctx, file, folio, netfs_group,
-                                            flen, offset, part, maybe_trouble);
-                _debug("howto %u", howto);
-                switch (howto) {
-                case NETFS_JUST_PREFETCH:
-                        ret = netfs_prefetch_for_write(file, folio, offset, part);
-                        if (ret < 0) {
-                                _debug("prefetch = %zd", ret);
-                                goto error_folio_unlock;
-                        }
-                        break;
-                case NETFS_FOLIO_IS_UPTODATE:
-                case NETFS_WHOLE_FOLIO_MODIFY:
-                case NETFS_STREAMING_WRITE_CONT:
-                        break;
-                case NETFS_MODIFY_AND_CLEAR:
-                        zero_user_segment(&folio->page, 0, offset);
-                        break;
-                case NETFS_STREAMING_WRITE:
-                        ret = -EIO;
-                        if (WARN_ON(folio_get_private(folio)))
-                                goto error_folio_unlock;
-                        break;
-                case NETFS_FLUSH_CONTENT:
-                        trace_netfs_folio(folio, netfs_flush_content);
-                        from = folio_pos(folio);
-                        to = from + folio_size(folio) - 1;
-                        folio_unlock(folio);
-                        folio_put(folio);
-                        ret = filemap_write_and_wait_range(mapping, from, to);
-                        if (ret < 0)
-                                goto error_folio_unlock;
-                        continue;
-                }
-
-                if (mapping_writably_mapped(mapping))
-                        flush_dcache_folio(folio);
-
-                copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
-
-                flush_dcache_folio(folio);
-
-                /* Deal with a (partially) failed copy */
-                if (copied == 0) {
-                        ret = -EFAULT;
-                        goto error_folio_unlock;
+                finfo = netfs_folio_info(folio);
+                group = netfs_folio_group(folio);
+
+                if (unlikely(group != netfs_group) &&
+                    group != NETFS_FOLIO_COPY_TO_CACHE)
+                        goto flush_content;
+
+                if (folio_test_uptodate(folio)) {
+                        if (mapping_writably_mapped(mapping))
+                                flush_dcache_folio(folio);
+                        copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
+                        if (unlikely(copied == 0))
+                                goto copy_failed;
+                        netfs_set_group(folio, netfs_group);
+                        trace_netfs_folio(folio, netfs_folio_is_uptodate);
+                        goto copied;
                 }
 
-                trace = (enum netfs_folio_trace)howto;
-                switch (howto) {
-                case NETFS_FOLIO_IS_UPTODATE:
-                case NETFS_JUST_PREFETCH:
-                        netfs_set_group(folio, netfs_group);
-                        break;
-                case NETFS_MODIFY_AND_CLEAR:
+                /* If the page is above the zero-point then we assume that the
+                 * server would just return a block of zeros or a short read if
+                 * we try to read it.
+                 */
+                if (fpos >= ctx->zero_point) {
+                        zero_user_segment(&folio->page, 0, offset);
+                        copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
+                        if (unlikely(copied == 0))
+                                goto copy_failed;
                         zero_user_segment(&folio->page, offset + copied, flen);
-                        netfs_set_group(folio, netfs_group);
+                        __netfs_set_group(folio, netfs_group);
                         folio_mark_uptodate(folio);
-                        break;
-                case NETFS_WHOLE_FOLIO_MODIFY:
+                        trace_netfs_folio(folio, netfs_modify_and_clear);
+                        goto copied;
+                }
+
+                /* See if we can write a whole folio in one go. */
+                if (!maybe_trouble && offset == 0 && part >= flen) {
+                        copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
+                        if (unlikely(copied == 0))
+                                goto copy_failed;
                         if (unlikely(copied < part)) {
                                 maybe_trouble = true;
                                 iov_iter_revert(iter, copied);
@@ -337,16 +246,53 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                                 folio_unlock(folio);
                                 goto retry;
                         }
-                        netfs_set_group(folio, netfs_group);
+                        __netfs_set_group(folio, netfs_group);
                         folio_mark_uptodate(folio);
-                        break;
-                case NETFS_STREAMING_WRITE:
+                        trace_netfs_folio(folio, netfs_whole_folio_modify);
+                        goto copied;
+                }
+
+                /* We don't want to do a streaming write on a file that loses
+                 * caching service temporarily because the backing store got
+                 * culled and we don't really want to get a streaming write on
+                 * a file that's open for reading as ->read_folio() then has to
+                 * be able to flush it.
+                 */
+                if ((file->f_mode & FMODE_READ) ||
+                    netfs_is_cache_enabled(ctx)) {
+                        if (finfo) {
+                                netfs_stat(&netfs_n_wh_wstream_conflict);
+                                goto flush_content;
+                        }
+                        ret = netfs_prefetch_for_write(file, folio, offset, part);
+                        if (ret < 0) {
+                                _debug("prefetch = %zd", ret);
+                                goto error_folio_unlock;
+                        }
+                        /* Note that copy-to-cache may have been set. */
+
+                        copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
+                        if (unlikely(copied == 0))
+                                goto copy_failed;
+                        netfs_set_group(folio, netfs_group);
+                        trace_netfs_folio(folio, netfs_just_prefetch);
+                        goto copied;
+                }
+
+                if (!finfo) {
+                        ret = -EIO;
+                        if (WARN_ON(folio_get_private(folio)))
+                                goto error_folio_unlock;
+                        copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
+                        if (unlikely(copied == 0))
+                                goto copy_failed;
                         if (offset == 0 && copied == flen) {
-                                netfs_set_group(folio, netfs_group);
+                                __netfs_set_group(folio, netfs_group);
                                 folio_mark_uptodate(folio);
-                                trace = netfs_streaming_filled_page;
-                                break;
+                                trace_netfs_folio(folio, netfs_streaming_filled_page);
+                                goto copied;
                         }
+
                         finfo = kzalloc(sizeof(*finfo), GFP_KERNEL);
                         if (!finfo) {
                                 iov_iter_revert(iter, copied);
@@ -358,9 +304,18 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                         finfo->dirty_len = copied;
                         folio_attach_private(folio, (void *)((unsigned long)finfo |
                                                              NETFS_FOLIO_INFO));
-                        break;
-                case NETFS_STREAMING_WRITE_CONT:
-                        finfo = netfs_folio_info(folio);
+                        trace_netfs_folio(folio, netfs_streaming_write);
+                        goto copied;
+                }
+
+                /* We can continue a streaming write only if it continues on
+                 * from the previous.  If it overlaps, we must flush lest we
+                 * suffer a partial copy and disjoint dirty regions.
+                 */
+                if (offset == finfo->dirty_offset + finfo->dirty_len) {
+                        copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
+                        if (unlikely(copied == 0))
+                                goto copy_failed;
                         finfo->dirty_len += copied;
                         if (finfo->dirty_offset == 0 && finfo->dirty_len == flen) {
                                 if (finfo->netfs_group)
@@ -369,17 +324,25 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                                 folio_detach_private(folio);
                                 folio_mark_uptodate(folio);
                                 kfree(finfo);
-                                trace = netfs_streaming_cont_filled_page;
+                                trace_netfs_folio(folio, netfs_streaming_cont_filled_page);
+                        } else {
+                                trace_netfs_folio(folio, netfs_streaming_write_cont);
                         }
-                        break;
-                default:
-                        WARN(true, "Unexpected modify type %u ix=%lx\n",
-                             howto, folio->index);
-                        ret = -EIO;
-                        goto error_folio_unlock;
+                        goto copied;
                 }
 
-                trace_netfs_folio(folio, trace);
+                /* Incompatible write; flush the folio and try again. */
+        flush_content:
+                trace_netfs_folio(folio, netfs_flush_content);
+                folio_unlock(folio);
+                folio_put(folio);
+                ret = filemap_write_and_wait_range(mapping, fpos, fpos + flen - 1);
+                if (ret < 0)
+                        goto error_folio_unlock;
+                continue;
+
+        copied:
+                flush_dcache_folio(folio);
 
                 /* Update the inode size if we moved the EOF marker */
                 pos += copied;
@@ -401,6 +364,10 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
                 folio_put(folio);
                 folio = NULL;
 
+                ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags);
+                if (unlikely(ret < 0))
+                        break;
+
                 cond_resched();
         } while (iov_iter_count(iter));
 
@@ -421,6 +388,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
         _leave(" = %zd [%zd]", written, ret);
         return written ? written : ret;
 
+copy_failed:
+        ret = -EFAULT;
 error_folio_unlock:
         folio_unlock(folio);
         folio_put(folio);

diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 606b4a0f92da..a4fd5dea52f4 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -129,7 +129,6 @@
         E_(netfs_sreq_trace_put_terminated,    "PUT TERM ")
 
 #define netfs_folio_traces                                      \
-        /* The first few correspond to enum netfs_how_to_modify */ \
         EM(netfs_folio_is_uptodate,             "mod-uptodate") \
         EM(netfs_just_prefetch,                 "mod-prefetch") \
         EM(netfs_whole_folio_modify,            "mod-whole-f")  \
@@ -139,7 +138,6 @@
         EM(netfs_flush_content,                 "flush")        \
         EM(netfs_streaming_filled_page,         "mod-streamw-f") \
         EM(netfs_streaming_cont_filled_page,    "mod-streamw-f+") \
-        /* The rest are for writeback */                        \
         EM(netfs_folio_trace_cancel_copy,       "cancel-copy")  \
         EM(netfs_folio_trace_clear,             "clear")        \
         EM(netfs_folio_trace_clear_cc,          "clear-cc")     \
smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723667985; a=rsa-sha256; cv=none; b=k9N6s8Uix1JXV2UCqFZDv26150GmYBZS18SVufTWJodIwEErysjev2nLuEkHsICYJLXW5a Db+yQbOgU9ur2h7pm7OfpI3v2InoNjHk27bs8tBELH6crVoqEK8h5ZsGdf3EsBkjNWNGUk 1i6CzojiYUGKgoiyIEcGwXkSUuUhcqQ= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723667997; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Xvg09nAt+yf4NVEcyGTKLEn8Q0Nrtw1zYvLWuxTLRzA=; b=a4Qf5SUmFv/CzqcNcEJbGt62BVyBYYVuCx6S+5ZUeWIP6pDWYHO3zEjsIKbNmqrEN5JCVp 9BDSfUTfVHozM7A2icGj4GweTGqjR3v6XOzzP4CSlJT9Oa3U9v6XdmCuUlhGNmXYz0aF76 rP8Fr4y0LEjZv7edvlqh9UsjvL1sd30= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-624-3rHT-gkeNMqsiQLJWGWckw-1; Wed, 14 Aug 2024 16:39:52 -0400 X-MC-Unique: 3rHT-gkeNMqsiQLJWGWckw-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E154819373D7; Wed, 14 Aug 2024 20:39:49 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 1245C1955F6B; Wed, 14 Aug 2024 20:39:43 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French Subject: [PATCH v2 06/25] netfs, cifs: Move CIFS_INO_MODIFIED_ATTR to netfs_inode Date: Wed, 14 Aug 2024 21:38:26 +0100 Message-ID: <20240814203850.2240469-7-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 X-Rspam-User: X-Stat-Signature: fx9ikqoixktzo6owkshk7mhtctae3nhi X-Rspamd-Queue-Id: C317D100008 X-Rspamd-Server: rspam11 X-HE-Tag: 1723667997-397586 X-HE-Meta: 
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Move CIFS_INO_MODIFIED_ATTR to netfs_inode as NETFS_ICTX_MODIFIED_ATTR and then make netfs_perform_write() set it. This means that cifs doesn't need to implement the ->post_modify() hook. Signed-off-by: David Howells cc: Jeff Layton cc: Steve French cc: Paulo Alcantara cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/buffered_write.c | 10 ++++++++-- fs/smb/client/cifsglob.h | 1 - fs/smb/client/file.c | 9 +-------- include/linux/netfs.h | 1 + 4 files changed, 10 insertions(+), 11 deletions(-) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 327e5904b090..d7eae597e54d 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -372,8 +372,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, } while (iov_iter_count(iter)); out: - if (likely(written) && ctx->ops->post_modify) - ctx->ops->post_modify(inode); + if (likely(written)) { + /* Set indication that ctime and mtime got updated in case + * close is deferred.
+ */ + set_bit(NETFS_ICTX_MODIFIED_ATTR, &ctx->flags); + if (unlikely(ctx->ops->post_modify)) + ctx->ops->post_modify(inode); + } if (unlikely(wreq)) { ret2 = netfs_end_writethrough(wreq, &wbc, writethrough); diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 5c9b3e6cd95f..1028881098e1 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1550,7 +1550,6 @@ struct cifsInodeInfo { #define CIFS_INO_DELETE_PENDING (3) /* delete pending on server */ #define CIFS_INO_INVALID_MAPPING (4) /* pagecache is invalid */ #define CIFS_INO_LOCK (5) /* lock bit for synchronization */ -#define CIFS_INO_MODIFIED_ATTR (6) /* Indicate change in mtime/ctime */ #define CIFS_INO_CLOSE_ON_LOCK (7) /* Not to defer the close when lock is set */ unsigned long flags; spinlock_t writers_lock; diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 3f3842e7b44a..419bfd0c0cbb 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -287,12 +287,6 @@ static void cifs_rreq_done(struct netfs_io_request *rreq) inode_set_atime_to_ts(inode, inode_get_mtime(inode)); } -static void cifs_post_modify(struct inode *inode) -{ - /* Indication to update ctime and mtime as close is deferred */ - set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags); -} - static void cifs_free_request(struct netfs_io_request *rreq) { struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq); @@ -339,7 +333,6 @@ const struct netfs_request_ops cifs_req_ops = { .clamp_length = cifs_clamp_length, .issue_read = cifs_req_issue_read, .done = cifs_rreq_done, - .post_modify = cifs_post_modify, .begin_writeback = cifs_begin_writeback, .prepare_write = cifs_prepare_write, .issue_write = cifs_issue_write, @@ -1363,7 +1356,7 @@ int cifs_close(struct inode *inode, struct file *file) dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL); if ((cfile->status_file_deleted == false) && (smb2_can_defer_close(inode, dclose))) { - if (test_and_clear_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) { + if (test_and_clear_bit(NETFS_ICTX_MODIFIED_ATTR, &cinode->netfs.flags)) { inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode)); } diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 983816608f15..574df0402c44 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -73,6 +73,7 @@ struct netfs_inode { #define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */ #define NETFS_ICTX_UNBUFFERED 1 /* I/O should not use the pagecache */ #define NETFS_ICTX_WRITETHROUGH 2 /* Write-through caching */ +#define NETFS_ICTX_MODIFIED_ATTR 3 /* Indicate change in mtime/ctime */ }; /* From patchwork Wed Aug 14 20:38:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763955 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CBADC52D7F for ; Wed, 14 Aug 2024 20:40:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 047836B009D; Wed, 14 Aug 2024 16:40:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F3B0A6B009E; Wed, 14 Aug 2024 16:40:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DDA9F6B009F; Wed, 14 Aug 2024 16:40:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com 
(smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id C02C66B009D for ; Wed, 14 Aug 2024 16:40:07 -0400 (EDT) Received: from smtpin28.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 6D54D12118E for ; Wed, 14 Aug 2024 20:40:07 +0000 (UTC) X-FDA: 82452018054.28.09EDED7 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf26.hostedemail.com (Postfix) with ESMTP id B35F9140014 for ; Wed, 14 Aug 2024 20:40:05 +0000 (UTC) Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=D3Hzmnaw; spf=pass (imf26.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723667933; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=rqLnuh2mATxWHSyGBr0F3/ixnLGv5aheiziVrxxyTWk=; b=2VacifIdXEJSlKzm2ou/ZYWwqdLsjB1O1dpRlE8e2rmE6E2Z/av12h5GtlvHWjvWbZZlu2 u1QvkO3Z5AD9SRY5aS4IbUuaJMQWaVt8uoYOmVoPtMex73BPA7oGRpb8B9H2AuU6Zcm5ZG YeZrVtHSIc3HSWUY4hRaawLsHPJUwgE= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723667933; a=rsa-sha256; cv=none; b=FJBw4oxf/0G81Oe/TztGnzWlW3PFltf3HLtQhLSvf1wEzeb86lEry6xD6sCDWZufz/FehL eIQl9PsfbnTgtOcQfZQtubJdcFfUn923JVnrfIc7eD+3DkBXquOPjZIo87hwaTrnBT09cB Hv2l8tHR+T+tNdE4xt/RBEhquMVKNFs= ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=D3Hzmnaw; spf=pass (imf26.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668005; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rqLnuh2mATxWHSyGBr0F3/ixnLGv5aheiziVrxxyTWk=; b=D3HzmnawxnjDuNuyV+znC/ZAWvHM6NgrQCdI8yLP35PwlX9W3sfLY8VV2W6+ACkC0AUO/i 7Y+s+Gv0cMtWqMaM4OFC94CL8+ciPvtq4HUv+PV8nr9o90G0L09E/socCj1ykv2oU2WCTN QaU8v2aOp5zsDmzEW28NqRI/tMY60xY= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-246-8DvuAjrjPPu5lpcT3ZETFw-1; Wed, 14 Aug 2024 16:40:00 -0400 X-MC-Unique: 8DvuAjrjPPu5lpcT3ZETFw-1 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 430C21955F43; Wed, 14 Aug 2024 20:39:57 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 67BBB1955E8C; Wed, 14 Aug 
2024 20:39:51 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 07/25] netfs: Move max_len/max_nr_segs from netfs_io_subrequest to netfs_io_stream Date: Wed, 14 Aug 2024 21:38:27 +0100 Message-ID: <20240814203850.2240469-8-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 X-Stat-Signature: tpyo7fzhanhx9h64jcq56s8nkmkq9j7f X-Rspamd-Queue-Id: B35F9140014 X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1723668005-370500 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Move max_len/max_nr_segs from struct netfs_io_subrequest to struct netfs_io_stream as we only issue one subreq at a time and then don't need these values again for that subreq unless and until we have to retry it - in which case we want to renegotiate them.
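To illustrate the shape of this change, here is a minimal standalone sketch (not the kernel code: the structures and helpers below are invented for illustration, and only the sreq_max_len/sreq_max_segs field names mirror the patch). The negotiated limits live on the stream, are filled in once by the prepare-write step, and every subrequest built for that stream is clamped against them rather than carrying its own copy:

#include <stddef.h>
#include <stdio.h>

/* Per-stream limits, in the spirit of netfs_io_stream after this patch. */
struct example_io_stream {
	size_t       sreq_max_len;   /* maximum size of a subrequest */
	unsigned int sreq_max_segs;  /* 0 or max number of iterator segments */
};

/* A subrequest no longer carries its own copy of the limits. */
struct example_io_subrequest {
	size_t       len;            /* size of the I/O built so far */
	unsigned int nr_segs;        /* number of segments used */
};

/* Negotiated once per stream; renegotiated only if a retry is needed. */
static void example_prepare_write(struct example_io_stream *stream)
{
	stream->sreq_max_len  = 256 * 1024;  /* invented transport limit */
	stream->sreq_max_segs = 16;          /* invented segment limit */
}

/* Add up to 'len' bytes to a subrequest, clamped by the stream limits. */
static size_t example_advance(const struct example_io_stream *stream,
			      struct example_io_subrequest *subreq, size_t len)
{
	size_t part = stream->sreq_max_len - subreq->len;

	if (part > len)
		part = len;
	subreq->len += part;
	subreq->nr_segs++;
	return part;
}

int main(void)
{
	struct example_io_stream stream;
	struct example_io_subrequest subreq = { 0, 0 };

	example_prepare_write(&stream);
	printf("advanced by %zu bytes\n",
	       example_advance(&stream, &subreq, 1024 * 1024));
	return 0;
}

In the patch itself the equivalent clamp is the umin(stream->sreq_max_len - subreq->len, len) calculation in netfs_advance_write().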
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/afs/write.c | 4 +++- fs/cachefiles/io.c | 5 +++-- fs/netfs/io.c | 4 ++-- fs/netfs/write_collect.c | 10 +++++----- fs/netfs/write_issue.c | 14 +++++++------- fs/smb/client/file.c | 15 ++++++++------- include/linux/netfs.h | 4 ++-- 7 files changed, 30 insertions(+), 26 deletions(-) diff --git a/fs/afs/write.c b/fs/afs/write.c index e959640694c2..34107b55f834 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -89,10 +89,12 @@ static const struct afs_operation_ops afs_store_data_operation = { */ void afs_prepare_write(struct netfs_io_subrequest *subreq) { + struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr]; + //if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) // subreq->max_len = 512 * 1024; //else - subreq->max_len = 256 * 1024 * 1024; + stream->sreq_max_len = 256 * 1024 * 1024; } /* diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c index a91acd03ee12..5b82ba7785cd 100644 --- a/fs/cachefiles/io.c +++ b/fs/cachefiles/io.c @@ -627,11 +627,12 @@ static void cachefiles_prepare_write_subreq(struct netfs_io_subrequest *subreq) { struct netfs_io_request *wreq = subreq->rreq; struct netfs_cache_resources *cres = &wreq->cache_resources; + struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr]; _enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start); - subreq->max_len = MAX_RW_COUNT; - subreq->max_nr_segs = BIO_MAX_VECS; + stream->sreq_max_len = MAX_RW_COUNT; + stream->sreq_max_segs = BIO_MAX_VECS; if (!cachefiles_cres_file(cres)) { if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) diff --git a/fs/netfs/io.c b/fs/netfs/io.c index 5367caf3fa28..ce3e821b4e4f 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -619,9 +619,9 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq, goto out; } - if (subreq->max_nr_segs) { + if (rreq->io_streams[0].sreq_max_segs) { lsize = netfs_limit_iter(io_iter, 0, subreq->len, - subreq->max_nr_segs); + rreq->io_streams[0].sreq_max_segs); if (subreq->len > lsize) { subreq->len = lsize; trace_netfs_sreq(subreq, netfs_sreq_trace_limited); diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 426cf87aaf2e..e105ac270090 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -231,7 +231,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); stream->prepare_write(subreq); - part = min(len, subreq->max_len); + part = min(len, stream->sreq_max_len); subreq->len = part; subreq->start = start; subreq->transferred = 0; @@ -271,8 +271,6 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, subreq = netfs_alloc_subrequest(wreq); subreq->source = to->source; subreq->start = start; - subreq->max_len = len; - subreq->max_nr_segs = INT_MAX; subreq->debug_index = atomic_inc_return(&wreq->subreq_counter); subreq->stream_nr = to->stream_nr; __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); @@ -286,10 +284,12 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, to = list_next_entry(to, rreq_link); trace_netfs_sreq(subreq, netfs_sreq_trace_retry); + stream->sreq_max_len = len; + stream->sreq_max_segs = INT_MAX; switch (stream->source) { case NETFS_UPLOAD_TO_SERVER: netfs_stat(&netfs_n_wh_upload); - subreq->max_len = min(len, wreq->wsize); + stream->sreq_max_len = umin(len, wreq->wsize); break; case NETFS_WRITE_TO_CACHE: netfs_stat(&netfs_n_wh_write); @@ -300,7 +300,7 @@ static 
void netfs_retry_write_stream(struct netfs_io_request *wreq, stream->prepare_write(subreq); - part = min(len, subreq->max_len); + part = umin(len, stream->sreq_max_len); subreq->len = subreq->transferred + part; len -= part; start += part; diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 44f35a0faaca..34e541afd79b 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -158,8 +158,6 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, subreq = netfs_alloc_subrequest(wreq); subreq->source = stream->source; subreq->start = start; - subreq->max_len = ULONG_MAX; - subreq->max_nr_segs = INT_MAX; subreq->stream_nr = stream->stream_nr; _enter("R=%x[%x]", wreq->debug_id, subreq->debug_index); @@ -170,10 +168,12 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); + stream->sreq_max_len = UINT_MAX; + stream->sreq_max_segs = INT_MAX; switch (stream->source) { case NETFS_UPLOAD_TO_SERVER: netfs_stat(&netfs_n_wh_upload); - subreq->max_len = wreq->wsize; + stream->sreq_max_len = wreq->wsize; break; case NETFS_WRITE_TO_CACHE: netfs_stat(&netfs_n_wh_write); @@ -290,13 +290,13 @@ int netfs_advance_write(struct netfs_io_request *wreq, netfs_prepare_write(wreq, stream, start); subreq = stream->construct; - part = min(subreq->max_len - subreq->len, len); - _debug("part %zx/%zx %zx/%zx", subreq->len, subreq->max_len, part, len); + part = umin(stream->sreq_max_len - subreq->len, len); + _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, len); subreq->len += part; subreq->nr_segs++; - if (subreq->len >= subreq->max_len || - subreq->nr_segs >= subreq->max_nr_segs || + if (subreq->len >= stream->sreq_max_len || + subreq->nr_segs >= stream->sreq_max_segs || to_eof) { netfs_issue_write(wreq, stream); subreq = NULL; diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 419bfd0c0cbb..0ff1a286e9ee 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -49,6 +49,7 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq) struct cifs_io_subrequest *wdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = wdata->req; + struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr]; struct TCP_Server_Info *server; struct cifsFileInfo *open_file = req->cfile; size_t wsize = req->rreq.wsize; @@ -73,7 +74,7 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq) } } - rc = server->ops->wait_mtu_credits(server, wsize, &wdata->subreq.max_len, + rc = server->ops->wait_mtu_credits(server, wsize, &stream->sreq_max_len, &wdata->credits); if (rc < 0) { subreq->error = rc; @@ -92,7 +93,7 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq) #ifdef CONFIG_CIFS_SMB_DIRECT if (server->smbd_conn) - subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; + stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; #endif } @@ -149,11 +150,11 @@ static void cifs_netfs_invalidate_cache(struct netfs_io_request *wreq) static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; + struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr]; struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); struct TCP_Server_Info *server = req->server; struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); - size_t rsize = 0; int 
rc; rdata->xid = get_xid(); @@ -166,8 +167,8 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) cifs_sb->ctx); - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, &rsize, - &rdata->credits); + rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, + &stream->sreq_max_len, &rdata->credits); if (rc) { subreq->error = rc; return false; @@ -183,11 +184,11 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) server->credits, server->in_flight, 0, cifs_trace_rw_credits_read_submit); - subreq->len = min_t(size_t, subreq->len, rsize); + subreq->len = umin(subreq->len, stream->sreq_max_len); #ifdef CONFIG_CIFS_SMB_DIRECT if (server->smbd_conn) - subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; + stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; #endif return true; } diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 574df0402c44..11fa86640d91 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -134,6 +134,8 @@ static inline struct netfs_group *netfs_folio_group(struct folio *folio) struct netfs_io_stream { /* Submission tracking */ struct netfs_io_subrequest *construct; /* Op being constructed */ + size_t sreq_max_len; /* Maximum size of a subrequest */ + unsigned int sreq_max_segs; /* 0 or max number of segments in an iterator */ unsigned int submit_off; /* Folio offset we're submitting from */ unsigned int submit_len; /* Amount of data left to submit */ unsigned int submit_max_len; /* Amount I/O can be rounded up to */ @@ -177,14 +179,12 @@ struct netfs_io_subrequest { struct list_head rreq_link; /* Link in rreq->subrequests */ struct iov_iter io_iter; /* Iterator for this subrequest */ unsigned long long start; /* Where to start the I/O */ - size_t max_len; /* Maximum size of the I/O */ size_t len; /* Size of the I/O */ size_t transferred; /* Amount of data transferred */ refcount_t ref; short error; /* 0 or error that occurred */ unsigned short debug_index; /* Index in list (for debugging output) */ unsigned int nr_segs; /* Number of segs in io_iter */ - unsigned int max_nr_segs; /* 0 or max number of segments in an iterator */ enum netfs_io_source source; /* Where to read from/write to */ unsigned char stream_nr; /* I/O stream this belongs to */ unsigned long flags; From patchwork Wed Aug 14 20:38:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763956 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2A63C52D7F for ; Wed, 14 Aug 2024 20:40:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 85E246B009F; Wed, 14 Aug 2024 16:40:15 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 80EBD6B00A0; Wed, 14 Aug 2024 16:40:15 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6AEDC6B00A1; Wed, 14 Aug 2024 16:40:15 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 4E9596B009F for ; Wed, 14 Aug 2024 16:40:15 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 0AE6BA0EF4 for ; Wed, 14 Aug 2024 20:40:15 +0000 (UTC) X-FDA: 
82452018390.24.C8E0360 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf15.hostedemail.com (Postfix) with ESMTP id 46DB7A0009 for ; Wed, 14 Aug 2024 20:40:13 +0000 (UTC) Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=gyAR4Sub; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf15.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723667960; a=rsa-sha256; cv=none; b=PAvaHh4soReY2942o6lxmaKnQtsLn+rxCD9htkrNIQXNBtKqTkNa82GkHisp1a/d9rIZ8c nNuDmr70DlfWXJu7pOgdpKNSZ+C2PdIEcGZBICre+iYdEondGXMZo6z3JHjeR9R8mrf4/3 84/9/23qs842jfVBja6urRnON1XSTEw= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=gyAR4Sub; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf15.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723667960; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=5ILxAt7NrlTMl997Ic8qQaZjEltu0gIkdcefwxoaQ6I=; b=i2s3UvBx6IvFCH8fbRKRx/djP7ArbJ1YCxMhgMAnaSb+O8qVN0ZrAvhmwgXEtb7cTP3Ji/ 1OHWmZNIgIZ/CLkxX1qZMjZn50p7iVrCfZxAniavk8qYkJfnfG20mzjLB9DHABNShKmEeu 9bnt/hjZ8t6dKKLyydAIa6u7TDEK9fg= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668012; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=5ILxAt7NrlTMl997Ic8qQaZjEltu0gIkdcefwxoaQ6I=; b=gyAR4SubZfBZnAjycN5SqOKk9EYYPwHn9QPXZt+cnEzViVY8gAFk1QP/VqR6jJWSN0Bmaf KGVhpnkDz013uQLZ8whUbKHsRHYLX72BVp8+RpqL+2W1d1mthOfhAWKN69KG37OTeac9FT ak0wCEAgYecaSLJQ1pRC5/uPR+MyyuE= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-533-TmxqWq9ANiW1IsbBOq97nA-1; Wed, 14 Aug 2024 16:40:07 -0400 X-MC-Unique: TmxqWq9ANiW1IsbBOq97nA-1 Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 75755195609E; Wed, 14 Aug 2024 20:40:04 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 917E219560A3; Wed, 14 Aug 2024 20:39:58 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, 
linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 08/25] netfs: Reserve netfs_sreq_source 0 as unset/unknown Date: Wed, 14 Aug 2024 21:38:28 +0100 Message-ID: <20240814203850.2240469-9-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Rspamd-Queue-Id: 46DB7A0009 X-Rspam-User: X-Rspamd-Server: rspam05 X-Stat-Signature: 3jgue8mauyhf5km5hx6q7gsqr4xsrzcm X-HE-Tag: 1723668013-248541 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Reserve the 0-valued netfs_sreq_source to mean unset or unknown so that it can be seen in the trace as such rather than appearing as download-from-server when it's going to get switched to something else.
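As an illustration of why reserving value 0 helps, here is a small standalone sketch (invented names; only the "----" display string and the zero-valued unknown entry follow the patch). A freshly zeroed subrequest then shows up as visibly unset in a trace table instead of aliasing the first real source:

#include <stdio.h>
#include <string.h>

enum example_source {
	EXAMPLE_SOURCE_UNKNOWN,        /* 0: not yet decided */
	EXAMPLE_FILL_WITH_ZEROES,
	EXAMPLE_DOWNLOAD_FROM_SERVER,
	EXAMPLE_READ_FROM_CACHE,
};

static const char *const example_source_names[] = {
	[EXAMPLE_SOURCE_UNKNOWN]       = "----",
	[EXAMPLE_FILL_WITH_ZEROES]     = "ZERO",
	[EXAMPLE_DOWNLOAD_FROM_SERVER] = "DOWN",
	[EXAMPLE_READ_FROM_CACHE]      = "READ",
};

int main(void)
{
	struct { enum example_source source; } subreq;

	/* A fresh allocation is typically zeroed... */
	memset(&subreq, 0, sizeof(subreq));

	/* ...so the trace shows "----" rather than "DOWN". */
	printf("%s\n", example_source_names[subreq.source]);
	return 0;
}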
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- include/linux/netfs.h | 1 + include/trace/events/netfs.h | 1 + 2 files changed, 2 insertions(+) diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 11fa86640d91..16834751e646 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -43,6 +43,7 @@ static inline void folio_start_private_2(struct folio *folio) #define NETFS_BUF_PAGECACHE_MARK XA_MARK_1 /* - Page needs wb/dirty flag wrangling */ enum netfs_io_source { + NETFS_SOURCE_UNKNOWN, NETFS_FILL_WITH_ZEROES, NETFS_DOWNLOAD_FROM_SERVER, NETFS_READ_FROM_CACHE, diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index a4fd5dea52f4..f4105b8e5894 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -60,6 +60,7 @@ E_(netfs_rreq_trace_write_done, "WR-DONE") #define netfs_sreq_sources \ + EM(NETFS_SOURCE_UNKNOWN, "----") \ EM(NETFS_FILL_WITH_ZEROES, "ZERO") \ EM(NETFS_DOWNLOAD_FROM_SERVER, "DOWN") \ EM(NETFS_READ_FROM_CACHE, "READ") \ From patchwork Wed Aug 14 20:38:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763957 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B3CEC52D7F for ; Wed, 14 Aug 2024 20:40:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A810D6B00A1; Wed, 14 Aug 2024 16:40:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A316C6B00A2; Wed, 14 Aug 2024 16:40:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8D2BF6B00A3; Wed, 14 Aug 2024 16:40:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 6CB696B00A1 for ; Wed, 14 Aug 2024 16:40:23 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 7376D1C2C09 for ; Wed, 14 Aug 2024 20:40:22 +0000 (UTC) X-FDA: 82452018684.10.C5FD4FE Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf01.hostedemail.com (Postfix) with ESMTP id BF0E04000A for ; Wed, 14 Aug 2024 20:40:20 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=eOaUj4PC; spf=pass (imf01.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723668008; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=WqmeAIFtHY/CwyA/vjZ+IfoBU5cuDLNukUcIV4khXxI=; b=1A4s8fcna++HI6rQnww8fIeAwbfUJPNSiz9lPn+cJOjVOLO0uRTggFpk8l1hJGo67hqw4J 4GE3GsQ1T4O56tJLxQyiDQS0PsYdu3F/xTeDq6Uy7jSjs+rx6Yfyxj+XS2qbeFm9OqLCyW IZnL0D22KopbzV/H272Ho1SXOArDA7Q= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=pass header.d=redhat.com 
header.s=mimecast20190719 header.b=eOaUj4PC; spf=pass (imf01.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723668008; a=rsa-sha256; cv=none; b=FL33XXWWGfMLihlYxvvnNnImEJG5B4HKw8IkjXjeFR/C1RCQKbY05Pz9An1aZ+qy2Hi8E8 bCe0VYNM5NtUZfgMUtTpC5sCVBI0jOflolNfIqD02l0fiFO/mPNWrFCCpLyxeiCby6k+YH cIfTyS7PRHfihjEs+/wuPGCxORh9Nuw= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668020; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=WqmeAIFtHY/CwyA/vjZ+IfoBU5cuDLNukUcIV4khXxI=; b=eOaUj4PCnWbgIIftO3+aSNY9iRdseNkfvp1Xs9tLC5YxAZ/KnPIWGsDr6Ig+9WHpwnPdBG TrfAVBf8B9xrso5ajN4umPCDS75loYwpqs49L0XLa7WvWuqNTi/a5iWcV8diH3nOULanvz +jiU6jP+UK/zUQus9lFrcGUEf1qRitU= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-214-VGEIqNK5Ox-3wfRG2S8vKg-1; Wed, 14 Aug 2024 16:40:14 -0400 X-MC-Unique: VGEIqNK5Ox-3wfRG2S8vKg-1 Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 78B6D1955F43; Wed, 14 Aug 2024 20:40:11 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id D09D119560A3; Wed, 14 Aug 2024 20:40:05 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 09/25] netfs: Remove NETFS_COPY_TO_CACHE Date: Wed, 14 Aug 2024 21:38:29 +0100 Message-ID: <20240814203850.2240469-10-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Rspam-User: X-Stat-Signature: 3jnjigr3aisu13ibos6bbekp6xekhnpy X-Rspamd-Queue-Id: BF0E04000A X-Rspamd-Server: rspam11 X-HE-Tag: 1723668020-743484 X-HE-Meta: 
U2FsdGVkX1/GS6IVcbQYpInMBWBdAHPJMf2nHz16dGWz5/yEyzAU1SOBZVM6LyIccCWs/8HANOVXtylmeObrO16e5mteyKzaLSRenXsobjtmSyiyPM21uMBDVU5ZjUSvX7rRpGfz6CtjvAlpENoSlKERMfAFBEs7Y18fKeMdWfztRfweIkoAiwe3DjJnOFFaub9IklOwHjgNslKNm0Nc4ekkpfFrAcxG2C5RbrIg+OuXv5cXT6jZSaPuaCTSv8gmezSHqEXWqWIw9HfMAijeeC1DD3/pQfsDs24+Atdg43rst80rNnmvvVOjFbYxhk4hhZrudsFgNTW5O5cxzBA9GnSCZD33fXSi2dIDtTYNC0NfjPyWxpkI+d8QhgxaZUYBZodrR/HEirjZhRb3cc/pL+Kg5PELWswuJWzvAhWrffm+S7DOEMex7qbxToFB1su2WJAlAYJdTillYmqSshq2OcDeqp+QOElQBfQBfKtoJJXy+LUBo3/hVj+u3rxbRa5J2F43IqSvSV/SBGkhEvT7Deaat7U84S1GzG4giMbLXkfWI/gA7BdCzpoZuBnPRIHgmXYWiDVcsZDkO/Pwrplg56nsEbIzPQPOFMSkkdjVM3j78y4sTQRgoDv3GTdCEUuO5Erg5AZQtdp37i5Dtt9O6yO3gDbnTYOJr82BClJGBrUN91jm0C1Oktzfuqq6QJwrqa2XJ3bQ5ETmIdTU/fr/vCEaOYJ6eC93d7V6oC+dDtTyAP24QC5M3fscegjfrwrOKKYhJtKyXyHGVFwFE9kQCiJy/WjZNoP4bW5BcFD49RDzwpli6WEzwjgMXFNNk3m8tpCpkFT45zG7P7mwAHUGucJ00ocO6gow+392XIgYU0S6ipQSD5MwEVT861onqXTMYocVApruWT3Ucvr/ljqGdAPyr1EkYsbYI8szqiz/zWKIKbZHXOLySlC9vM/IHCiyLBdSqhSEnfgrR2/HzyT llQJlAQS drrzFXKrj4iZrv2CqbMTWAaUqpcoy2q599PAkcRwUPGq8luUZAxDryTprghMXSvQ3ZPSD5Dbxnq9pDk2qvDZJ1qnH1xPxhTk5xGYKI3CALv36aCuX6yvoAjGYUD5CBZORJMjTZbXlvRGMXxxaFFx90Y0O0tNAYIwj7PoVDGLNgw4eGB4EwAhOm3F6I8hS7Sef7ik0WeQE/Db+3GA2U3hWZwLhzswE+BU9WnLcrwYQFyPNrsTWUP1YzISMr8tdC2HxxM3tJqNmd/jBfiALYkjkFtCXibB9ifps7D46F7+Y3eA/6YHKw09vbhj5i/2vaEzih9iUJJM0y1T4MVuNCJPYf8vyATpIfWizJDN+f8CRw3kIaisHoxhd4lwA8VrF2BKTMvF59wNSO6LlzSjYbiBpg2plaQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Remove NETFS_COPY_TO_CACHE as it isn't used anymore. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/main.c | 3 +-- include/linux/netfs.h | 3 +-- include/trace/events/netfs.h | 3 +-- 3 files changed, 3 insertions(+), 6 deletions(-) diff --git a/fs/netfs/main.c b/fs/netfs/main.c index 5f0f438e5d21..1ee712bb3610 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -37,11 +37,10 @@ static const char *netfs_origins[nr__netfs_io_origin] = { [NETFS_READAHEAD] = "RA", [NETFS_READPAGE] = "RP", [NETFS_READ_FOR_WRITE] = "RW", - [NETFS_COPY_TO_CACHE] = "CC", + [NETFS_DIO_READ] = "DR", [NETFS_WRITEBACK] = "WB", [NETFS_WRITETHROUGH] = "WT", [NETFS_UNBUFFERED_WRITE] = "UW", - [NETFS_DIO_READ] = "DR", [NETFS_DIO_WRITE] = "DW", }; diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 16834751e646..ae4abf121d97 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -206,11 +206,10 @@ enum netfs_io_origin { NETFS_READAHEAD, /* This read was triggered by readahead */ NETFS_READPAGE, /* This read is a synchronous read */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ - NETFS_COPY_TO_CACHE, /* This write is to copy a read to the cache */ + NETFS_DIO_READ, /* This is a direct I/O read */ NETFS_WRITEBACK, /* This write was triggered by writepages */ NETFS_WRITETHROUGH, /* This write was made by netfs_perform_write() */ NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ - NETFS_DIO_READ, /* This is a direct I/O read */ NETFS_DIO_WRITE, /* This is a direct I/O write */ nr__netfs_io_origin } __mode(byte); diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index f4105b8e5894..47cd11aaccac 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -34,11 +34,10 @@ EM(NETFS_READAHEAD, "RA") \ EM(NETFS_READPAGE, "RP") \ EM(NETFS_READ_FOR_WRITE, "RW") \ - EM(NETFS_COPY_TO_CACHE, "CC") \ + 
EM(NETFS_DIO_READ, "DR") \ EM(NETFS_WRITEBACK, "WB") \ EM(NETFS_WRITETHROUGH, "WT") \ EM(NETFS_UNBUFFERED_WRITE, "UW") \ - EM(NETFS_DIO_READ, "DR") \ E_(NETFS_DIO_WRITE, "DW") #define netfs_rreq_traces \ From patchwork Wed Aug 14 20:38:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763958 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC3FEC531DC for ; Wed, 14 Aug 2024 20:40:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3CA8A6B00A3; Wed, 14 Aug 2024 16:40:26 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 37BA26B00A4; Wed, 14 Aug 2024 16:40:26 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1F4F66B00A5; Wed, 14 Aug 2024 16:40:26 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id F1DC56B00A3 for ; Wed, 14 Aug 2024 16:40:25 -0400 (EDT) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id AB31C1A0D8C for ; Wed, 14 Aug 2024 20:40:25 +0000 (UTC) X-FDA: 82452018810.25.AE9DECB Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf16.hostedemail.com (Postfix) with ESMTP id E782E18001E for ; Wed, 14 Aug 2024 20:40:23 +0000 (UTC) Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=TJZZRgZL; spf=pass (imf16.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723667951; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Ysayb5RXeXC3/aarWcnK8uuWcOM3GXCwJSpymUE1G5c=; b=qtpiqfCAGFE5ao4GGL46FODXlS0iLZsfUo9ILfu74PXXYyAsefHgFwDwWNaoOrPNl61PD1 ZBTWyL8+w6oLSvS12gbGN+aMv9Kd2YIreGCQdq4rO3nRxIY11fn6psLLp7illFBblg1JxO SFmGFQg7q39JcA418CVMk5lzJ9pyvXw= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723667951; a=rsa-sha256; cv=none; b=nyzzP9DKdcWKMwfxF7HQtbnx6pXhL6RkFs0RTS0iZbYFmGSXpZWi3rr7zx195JKWESztej 4qvdcBOB6DGYAwcidM8FSboHXYh0NOJGu0kZNzbI14THk1005tyskqcbTBh1OAOEBNUFMW IsFSDOokfDqms8/180Robzwcztzq+WM= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=TJZZRgZL; spf=pass (imf16.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668023; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Ysayb5RXeXC3/aarWcnK8uuWcOM3GXCwJSpymUE1G5c=; 
b=TJZZRgZLgYx7iXT2F/Rc44BppaBRAQBinh+6ZVpo2ZWVZxXFhenb14hrgmHEPLuQbvYlv8 qt5ueyQKRie+oIin3BGKrh0BoyDJfQ1widsyAMzx9N7mo8+ILDanHoyDFO5na14Kp9NPJa PDpeKPf/NUvtzGJaPfmOX8fXLWoaTls= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-375-LKaV8qK4MgahAHzEnVal2w-1; Wed, 14 Aug 2024 16:40:21 -0400 X-MC-Unique: LKaV8qK4MgahAHzEnVal2w-1 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 4EC7918EB232; Wed, 14 Aug 2024 20:40:18 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id B01791955E8C; Wed, 14 Aug 2024 20:40:12 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 10/25] netfs: Set the request work function upon allocation Date: Wed, 14 Aug 2024 21:38:30 +0100 Message-ID: <20240814203850.2240469-11-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: E782E18001E X-Stat-Signature: 9f6p9r79xgrq9w91u9ybjfsb7yzh568f X-HE-Tag: 1723668023-598462 X-HE-Meta: U2FsdGVkX1/X6qB0fR/JhDFRfw57AiWNQ303jw9f8zQKXkEAvqpKWunrKZcHRoAsCME+/b7kVzerHhCC6UtqvxE/Ng4YB3NZxIQAmfUAFs03TSNSe7/89gytnKOOLo+TXexotWFLcy0avtOW4ocKxiMqGd3iG79b+R/ut71/bQLyLX5S4HuNAmyxrPnqDwAQhj8nB5wv2uOOn2Ct71G6BcM93XppEPZhInNuViRiUickHh8qgab8OAwmd5Ch0TCczUJGER6j5zMza1p0ZEl+BIqWbkDAvsWKM+GIsR2sIkIiJ6Evh09ejo3xiOggQSuj2Xd/UQkvcqzyu/GIiacr3HDnHPNi+LNWXkrBYrdef+IzYX2klamWoioT8PqeUddDG7P+LQKNliWzVObNKPSO64FJTb88jOoWh8wfAaiIdBBZC6TAMwb9JkUPge6cp5P89tyjnpQ1bZ4R+m1FC/dLDhS+2Di5xzhEJLMiDdwqZP6WfW7LUJITbbwYZxvOvl47eKOtJOtKieEhFiXfP0lYxeKUa0B81U7Sm//Mw9jvkVFZi16vMXQPBQda7Ar163tl2Dvmpl9tMqERNNxHPBMuxcOPFpd3A8Suji6ZHH2BTOFDpypc1eintUfcPCc0nUWHGAN3HFbjPAotqZ8J0osRL9lsFGuzWE8qK+4h1vHV1DhMHoCAXsolFLdc8u7G3UjWZbLPJpwsYXVyKZCBkmam7IhRY99EE3eTFyVYAI5B3qjF1O3B+5RpxzVLEScHmwxcOgTfTBJyw2w5cd7VMZiyvdWR8IDiEGh1X9vK8x284wFhNfRLTLbfLZ0tKgq5W49DHit/CSX4Bk6IAjwAcMLMuQubnJhDQEOEBuNwcecppWzGTCpWts6PUAWYbuu3pRFR48RS9xiR9KmwD6n6Eg+FY/X7adJYMfcXcmmLLen9Fcx3Zx85p5EkI4jAfQE4K6a/lLcBUi5kBV7tYfNITdC xA/o2mpz 
8IjVBHmBU5aaevL3CJnZtQAG0QxfyblCkJztfUW6u9IZZPDNm/cGv7ABKxI7TVTxAOFF+jANrKcMuPSOLRXEw+/AcZaMGemthllg0vL6ahTFARlHp4j96EUlRKW9bckbbL+wZ8BWEI8MayGamExj+rFpvUxGnUos47i5X3jrlADuQ6rhXuHE5kYavclSLypTH1x3PwZ+yMK50spHHHG/HfJ1rjoVKP3xmpbdvFvbe3damuMJkZM+Z/nmDaq009jmn0OdEOcIVYoyPVsDVjWwNVp9yRWmc3H0EEG5eb+1N83VIuB127Nw5klzSsVyn6EYNhNB7/qPPlXttCtpFSiIw1gMEsIUSnK/UaaMbEmFHR46PpWI= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Set the work function in the netfs_io_request work_struct when we allocate the request rather than doing this later. This reduces the number of places we need to set it in future code. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/internal.h | 1 + fs/netfs/io.c | 4 +--- fs/netfs/objects.c | 9 ++++++++- fs/netfs/write_issue.c | 1 - 4 files changed, 10 insertions(+), 5 deletions(-) diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 9e6e0e59d7e4..f2920b4ee726 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -29,6 +29,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio, /* * io.c */ +void netfs_rreq_work(struct work_struct *work); int netfs_begin_read(struct netfs_io_request *rreq, bool sync); /* diff --git a/fs/netfs/io.c b/fs/netfs/io.c index ce3e821b4e4f..8b9aaa99d787 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -422,7 +422,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async) netfs_rreq_completed(rreq, was_async); } -static void netfs_rreq_work(struct work_struct *work) +void netfs_rreq_work(struct work_struct *work) { struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work); @@ -734,8 +734,6 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync) // TODO: Use bounce buffer if requested rreq->io_iter = rreq->iter; - INIT_WORK(&rreq->work, netfs_rreq_work); - /* Chop the read into slices according to what the cache and the netfs * want and submit each one. 
*/ diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index 0294df70c3ff..d6e9785ce7a3 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -48,9 +48,16 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, INIT_LIST_HEAD(&rreq->io_streams[0].subrequests); INIT_LIST_HEAD(&rreq->io_streams[1].subrequests); INIT_LIST_HEAD(&rreq->subrequests); - INIT_WORK(&rreq->work, NULL); refcount_set(&rreq->ref, 1); + if (origin == NETFS_READAHEAD || + origin == NETFS_READPAGE || + origin == NETFS_READ_FOR_WRITE || + origin == NETFS_DIO_READ) + INIT_WORK(&rreq->work, netfs_rreq_work); + else + INIT_WORK(&rreq->work, netfs_write_collection_worker); + __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); if (file && file->f_flags & O_NONBLOCK) __set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags); diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 34e541afd79b..41db709ca1d3 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -109,7 +109,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, wreq->contiguity = wreq->start; wreq->cleaned_to = wreq->start; - INIT_WORK(&wreq->work, netfs_write_collection_worker); wreq->io_streams[0].stream_nr = 0; wreq->io_streams[0].source = NETFS_UPLOAD_TO_SERVER; From patchwork Wed Aug 14 20:38:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763959 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D41BBC531DC for ; Wed, 14 Aug 2024 20:40:44 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C2FA96B00A6; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BDEC66B00A7; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A7FA06B00A8; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 84CD86B00A6 for ; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 4DCBD1A1083 for ; Wed, 14 Aug 2024 20:40:43 +0000 (UTC) X-FDA: 82452019566.08.AFB0760 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf01.hostedemail.com (Postfix) with ESMTP id 8D7D84000A for ; Wed, 14 Aug 2024 20:40:41 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="iP/ZIsfs"; spf=pass (imf01.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723667945; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=SHrBZHpEC0U+YMsJ7gsza5daMkvE96bXZCw9DF6n8k0=; b=7uyFF25vo+XNRHzTmtErKsKe+E2bRyOiE0qavcVV2DT82L88GX6WqbhFdFlql5E367gYhs 
BLrDX/cYBsupdFlnxgM6lHt4l/d9xKas2OlgU0C88MRHqQWtOglFV+fAmfHG4Pz+haH5t3 /yIokEozwsZF5bb8y1i17lazMtaLr+M= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="iP/ZIsfs"; spf=pass (imf01.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723667945; a=rsa-sha256; cv=none; b=v0YSUQyR/JNUGQKVzA0Gk+8P5qJYpqWNXTM4psMLznyMwEoCF91IC6aALP4NsYqvkYea/9 H4XQEGb756KYMo0kYERnnhappZF8PTDpaD4+KbVax6j8DsvvSYl5TjrBlquKEaTNtlb4pp Hc2Xn9xiySl41AYXOaoCyi3J1LC0Spc= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668040; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SHrBZHpEC0U+YMsJ7gsza5daMkvE96bXZCw9DF6n8k0=; b=iP/ZIsfs1mHeIXY6DdfJqJcnsnkQfzswoFEWGQ+5rZxBJfZgukfuANbvro6E+X8Yk5g4JZ RLFxUCl6bUHyc9ME2hUPlFgaKK68zYSHBOLFuyE5coXfeqklgDHypMg/9rEeDkoePqFRco RK2CSDd6E/LFVn7OTwMejSA/RyezhTY= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-186-9Aleb69eNYKcJguKV1KcQA-1; Wed, 14 Aug 2024 16:40:31 -0400 X-MC-Unique: 9Aleb69eNYKcJguKV1KcQA-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 0AFF7195420C; Wed, 14 Aug 2024 20:40:25 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 8DE81300019C; Wed, 14 Aug 2024 20:40:19 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 11/25] netfs: Use bh-disabling spinlocks for rreq->lock Date: Wed, 14 Aug 2024 21:38:31 +0100 Message-ID: <20240814203850.2240469-12-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 8D7D84000A X-Stat-Signature: 4hp1e97tearchwnsjfxgosu5kdswm8rw X-Rspam-User: X-HE-Tag: 1723668041-525106 X-HE-Meta: 
U2FsdGVkX1+Z+YQXsy43UeDUaFcf6fJof15rtmx6PWqx8fxsArxdtAWne0ZzplpIrIUrLuz8djtZ8DUCvxx00+qj/IJd+mCQsTYXnfLJQuI0ccjl09u0g/KZSP8giSjGqqkWI5ReSbFW6cuL1XAPuXAgRVCl/6kyfQ+KvP0tlSR3nxGtZxZEOoJerexRNFjirkOhL55sBtpsi5nA364z2BpVC4B2rljQzEvh/kbWffxyWthGt/0W4NxU1JmvN21KVtiaSCmHXheFI7lG/Wa+IBY9bNp0rClunQozN0mIf1365H3joaHIQNLg6a4hRmFu1ilMqZoDT5jbKTfzKJugwiUnvI+fe3DLuYu5/L+ZMlUeHC7nooyFucb0Crp3uOaoa0mSRjgmJSwocy5gnnOzUzHHYoves7JSzDPaMfdBq5H+AayHflSS3jdjE9fWIBEv3/kmA4Q22C/+zD/yy2rzC0QFZf3jzt9sAOY2Q3dGZZOV/9+wAPpIFjpHvfHFtes4KtXY4lpzL3jyZkuuvgBKerJdYzdt6sRreNc4RoIWvljs2mA0gTrrxvsgNSukTIgsYbU7IT1qgMpEA209wMddy6tO7KPzfYS6F6ux5U4piQtUNLt+gW/OyBkVd6TH20F5XRN7ad4TuSs2MT/xV4WaYePF2IGR2EKZVQyiPvF8+esGyWAXeorNjTq+6HSaits7jfVbMh4eUZ4kiYyD4UvfmFBvgqD3WEFygtTF03B2SjSMeLLObBLYvxu5dj8JVF9A9cEnAFtplnsFuYXblClQ8Vx8QLoeyEmq3TUVK11jHJ1SL5NqSwDF0QhHTf71quBqjoBcXy6xfAEgUeVhO8UxGX4kCPHg3S2FtEoRYVL5Fi07msdBaU9QYD1aiWdqA16/8OAIeKM6Z8SvRysDmabVlBHrf+56MGSqdPr8lg/TXsk79sIrhfMBerpE9o1esLMd+WOBaV0gcoZjLrEWrQf Xn/VqPVh Bl9NufAgG2eaRGtCFpfMyvLVFXIsX+H9aQZHd1xCtSlAFYotDOJF6LN0OEIRDWmGigSTq+WM12ycuoUpfLdl7fqt+JRTvQt13rixdFgz/Q50UYKFuHm6MD0IDsGM4IKyHw3WswwVI5bT6HqsS2+a87rTLCOf63pnxMqV1sRR5AVM7Qo1oqYkBm0W/S7EJpi/bXm+vksdfLCfm/Wjny3HVAmN1iU5P4iC94k3utA0uM+JRJUWHh1j1syJ7g3wosFw1odrT7RyVmFD1e5oy1W6lLzl4+RM3/7RR36hKqpe/aGotEdPXCD22p+5vC4wfTOV7Xbi/YOQ0Ye0SElil5is5BdoL1zAtPzQoyzX6dc2sW6FzRys= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Use bh-disabling spinlocks when accessing rreq->lock because, in the future, it may be twiddled from softirq context when cleanup is driven from cache backend DIO completion. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/write_collect.c | 4 ++-- fs/netfs/write_issue.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index e105ac270090..5f504b03a1e7 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -466,7 +466,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) cancel: /* Remove if completely consumed. */ - spin_lock(&wreq->lock); + spin_lock_bh(&wreq->lock); remove = front; list_del_init(&front->rreq_link); @@ -482,7 +482,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) } } - spin_unlock(&wreq->lock); + spin_unlock_bh(&wreq->lock); netfs_put_subrequest(remove, false, notes & SAW_FAILURE ? netfs_sreq_trace_put_cancel : diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 41db709ca1d3..7880a586343f 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -191,7 +191,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, * the list. The collector only goes nextwards and uses the lock to * remove entries off of the front. 
*/ - spin_lock(&wreq->lock); + spin_lock_bh(&wreq->lock); list_add_tail(&subreq->rreq_link, &stream->subrequests); if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { stream->front = subreq; @@ -202,7 +202,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, } } - spin_unlock(&wreq->lock); + spin_unlock_bh(&wreq->lock); stream->construct = subreq; } From patchwork Wed Aug 14 20:38:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763960 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9EF8C3DA4A for ; Wed, 14 Aug 2024 20:40:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 503166B00A5; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4B4546B00A6; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2DE8C6B00A7; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 0B8206B00A5 for ; Wed, 14 Aug 2024 16:40:43 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id B186FA0EF4 for ; Wed, 14 Aug 2024 20:40:42 +0000 (UTC) X-FDA: 82452019524.03.098C655 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf20.hostedemail.com (Postfix) with ESMTP id EB4551C001D for ; Wed, 14 Aug 2024 20:40:40 +0000 (UTC) Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=WAxzmV1k; spf=pass (imf20.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723668004; a=rsa-sha256; cv=none; b=YE3nn10wMccFd7a0oW7WJhelnmlFmgAavGDDEsCPp97yLOF0yMgK+ffS/RZ74w1FtUrGlj kTG86iTWh9AN8WDlGnxr2VMa2nN3YzL/XuuILrOt0p/AeDhigIxcVEZ3qX1T9WJQBQ/2W0 1ImfLDxPM9f9xc6K5+YE0G3vv/NC4M8= ARC-Authentication-Results: i=1; imf20.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=WAxzmV1k; spf=pass (imf20.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723668004; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=LFTwpPK3MEADF2hEWWwaZigAapSKwXHY4NVBx1vkMjA=; b=i/XSf6lNHl6jPSG78tKINvo6BieIxO+VvvJ58yTv93MRVFtAyIos/WM1560Oe8tk244foQ w97tBRslaPGhQGJLBGpuqMby/xoswaKbuRejzlszsTbqEb8Fyx6mYNc7FqYPa8BdpNYemS 3ANK4Ttc7rJBqk76xhDFnvJfEl4j/7E= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668040; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LFTwpPK3MEADF2hEWWwaZigAapSKwXHY4NVBx1vkMjA=; b=WAxzmV1kFyylPB3sk/Iibn6I1gkXnfNpfTyEgYv/xGjb9CdQcRQVSMbcc6Zwzfmj3qF5KU 4wg6LkRzmQApMT2SqY9FxF6mPjsp+TymCNYnU2dhJGLsGnf12wGXAj4hIVT0MsqjQwoAoF 4pKVmnXYFUXeHMsJpAGJzryRmZLFAuI= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-587-GBy5YvA1Mo-QtBFyNpvqBA-1; Wed, 14 Aug 2024 16:40:36 -0400 X-MC-Unique: GBy5YvA1Mo-QtBFyNpvqBA-1 Received: from mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.17]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 683E11954B0F; Wed, 14 Aug 2024 20:40:33 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 6FF5E19560A3; Wed, 14 Aug 2024 20:40:26 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Gao Xiang , Mike Marshall , devel@lists.orangefs.org Subject: [PATCH v2 12/25] mm: Define struct folio_queue and ITER_FOLIOQ to handle a sequence of folios Date: Wed, 14 Aug 2024 21:38:32 +0100 Message-ID: <20240814203850.2240469-13-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.17 X-Stat-Signature: kqaz9me3ciby3xi6ypa9dzzgim1p7ow1 X-Rspamd-Queue-Id: EB4551C001D X-Rspam-User: X-Rspamd-Server: rspam10 X-HE-Tag: 1723668040-231950 X-HE-Meta: U2FsdGVkX1+Fx3KVbItckNyvWww0cLuyE7idFnQqt5dyxl7zVrbq+d09pSZPsRVxPhDOgUFHQ9v8TZ3XZUIl235Lwa6FpBaxsQQam9kjRv8t3u++ku8eGosjTKjjB0X8BJxPZmNIFlaatGryVpmdMSzF41dA+V9l7CKzcfVySMoYzTc+rxGbxweqtDbd4yQANf39vPD5IzQ5VDlCz33G2jPoGKl0T5HrAEVaiuCW/2P/aDpT1r3dgo5+GVPefrHkQxReDLTd4rb1YSCqLzzETr7IYpXFmyRclm2Vu+sh5dmSXrZmse8uVMfH5Z3Rmy8gqyRNx3gmFPgEI8yVkgEIgu8Nng61UMWY++kabaAqWSSvzqlUCTa0yYu5J1xd4x9JJK9iWp6AV/PDh/EG6PVm+Ke/f/pXZuDkTdGULpLw+bevCw6a1XN2JMkm82wd5tFzhOUpAcwNwMqGcqwX8cJ0DBWrL8tHH4ogwWq5GrHljDb7cSh8fSS92X+KjpGdCQLa+WtfXKAFpKEcqml+vA51IaAVbNZTd4K/Rhim7EMH04Fs6ozyuvO+BtL4Xuo/BXhRhfpLzJ0JMu+fcfEtZ6TyQ2uouxha7cjpx/x2/+34y27nZ1uzWOXeUF3v3W7Q4iTRBvjvIE0KTf/j7Ws6HWcwCrBmq25cIAx+9jCZ3qDeNeE+KOxHTFjPTonUE1XhrzzA7+eFpzufq59ZiZ5Fnudawd8lbHeIeymHTxrApM8Z4J3cT7jSx9OBHnX+DLdHzQhaXeSNrkJr5mVp1/POFOieN0N416UpG6ExJ0RSd/Cl4paxeUHx36GFwIe5R+KClCMnjtz/bBppTlIygWF1DUF0M4Zr5UgthUDmPzrBykYU5dcdssCq2FtDLvBGCHE1mzZgrFBw4cZKrGt2rcOLoIZT0jVdMsAsPd+o1OfNE9i1oXXQDUEnwNwjR2rKkaaDICooXAyjzGKsZDvHfZ/PqtS RuIZsIsy 
Define a data structure, struct folio_queue, to represent a sequence of folios and a kernel-internal I/O iterator type, ITER_FOLIOQ, to allow a list of folio_queue structures to be used to provide a buffer to iov_iter-taking functions, such as sendmsg and recvmsg.

The folio_queue structure looks like:

	struct folio_queue {
		struct folio_batch	vec;
		u8			orders[PAGEVEC_SIZE];
		struct folio_queue	*next;
		struct folio_queue	*prev;
		unsigned long		marks;
		unsigned long		marks2;
	};

It does not use a list_head so that next and/or prev can be set to NULL at the ends of the list, allowing iov_iter-handling routines to determine that they *are* the ends without needing to store a head pointer in the iov_iter struct.

A folio_batch struct is used to hold the folio pointers, which allows the batch to be passed to batch-handling functions. Two mark bits are available per slot. The intention is to use at least one of them to mark folios that need putting, but that might not be ultimately necessary. Accessor functions are used to access the slots to do the masking and an additional accessor function is used to indicate the size of the array.

The order of each folio is also stored in the structure to avoid the need for iov_iter_advance() and iov_iter_revert() to have to query each folio to find its size.

With careful barriering, this can be used as an extending buffer with new folios inserted and new folio_queue structs added without the need for a lock. Further, provided we always keep at least one struct in the buffer, we can also remove consumed folios and consumed structs from the head end as we go, without the need for locks.

[Questions/thoughts]

(1) To manage this, I need a head pointer, a tail pointer and a tail slot number (assuming insertion happens at the tail end and the next pointers point from head to tail). Should I put these into a struct of their own, say "folio_queue_head" or "rolling_buffer"? I will end up with two of these in netfs_io_request eventually, one keeping track of the pagecache I'm dealing with for buffered I/O and the other to hold a bounce buffer when we need one.

(2) Should I make the slots {folio,off,len} or bio_vec?

(3) This is intended to replace ITER_XARRAY eventually. Using an xarray in I/O iteration requires taking the RCU read lock, doing copying under the RCU read lock, walking the xarray (which may change under us), handling retries and dealing with special values.
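To illustrate the accessor API described above, here is a minimal sketch, modelled on the iov_kunit_load_folioq() helper added to the KUnit tests below; the function name and the caller-held folio array are assumptions made for the example, and chaining of further segments is omitted:

	#include <linux/folio_queue.h>
	#include <linux/slab.h>
	#include <linux/uio.h>

	/* Build a one-segment folio queue over caller-held folios and point an
	 * ITER_FOLIOQ iterator at it (sketch only; no segment chaining and
	 * minimal error handling).
	 */
	static int folioq_iter_sketch(struct iov_iter *iter,
				      struct folio **folios, unsigned int nr)
	{
		struct folio_queue *folioq;
		size_t size = 0;
		unsigned int i;

		folioq = kzalloc(sizeof(*folioq), GFP_KERNEL);
		if (!folioq)
			return -ENOMEM;
		folioq_init(folioq);

		for (i = 0; i < nr && !folioq_full(folioq); i++) {
			/* folioq_append() records the folio's order, so the
			 * iterator can size each slot without touching the
			 * folio again.
			 */
			unsigned int slot = folioq_append(folioq, folios[i]);

			size += folioq_folio_size(folioq, slot);
		}

		/* READ here means data will be written into the queue (it is
		 * a destination buffer), matching the KUnit tests; the
		 * iterator can now be handed to copy_to_iter(), recvmsg, etc.
		 */
		iov_iter_folio_queue(iter, READ, folioq, 0, 0, size);
		return 0;
	}

When a segment fills up, the KUnit helper simply allocates another folio_queue, links it via the prev/next pointers and carries on appending; the same pattern would extend this sketch to a multi-segment buffer.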
The advantage of ITER_XARRAY is that when we're dealing with the pagecache directly, we don't need any allocation - but if we're doing encrypted comms, there's a good chance we'd be using a bounce buffer anyway. This will require afs, erofs, cifs, orangefs and fscache to be converted to not use this. afs still uses it for dirs and symlinks; some of erofs usages should be easy to change, but there's one which won't be so easy; ceph's use via fscache can be fixed by porting ceph to netfslib; cifs is using xarray as a bounce buffer - that can be moved to use sheaves instead; and orangefs has a similar problem to erofs - maybe orangefs could use netfslib? Signed-off-by: David Howells cc: Matthew Wilcox cc: Jeff Layton cc: Steve French cc: Ilya Dryomov cc: Gao Xiang cc: Mike Marshall cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: linux-afs@lists.infradead.org cc: linux-cifs@vger.kernel.org cc: ceph-devel@vger.kernel.org cc: linux-erofs@lists.ozlabs.org cc: devel@lists.orangefs.org --- include/linux/folio_queue.h | 138 +++++++++++++++++++ include/linux/iov_iter.h | 57 ++++++++ include/linux/uio.h | 12 ++ lib/iov_iter.c | 240 ++++++++++++++++++++++++++++++++- lib/kunit_iov_iter.c | 259 ++++++++++++++++++++++++++++++++++++ lib/scatterlist.c | 69 +++++++++- 6 files changed, 771 insertions(+), 4 deletions(-) create mode 100644 include/linux/folio_queue.h diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h new file mode 100644 index 000000000000..52773613bf23 --- /dev/null +++ b/include/linux/folio_queue.h @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* Queue of folios definitions + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#ifndef _LINUX_FOLIO_QUEUE_H +#define _LINUX_FOLIO_QUEUE_H + +#include + +/* + * Segment in a queue of running buffers. Each segment can hold a number of + * folios and a portion of the queue can be referenced with the ITER_FOLIOQ + * iterator. The possibility exists of inserting non-folio elements into the + * queue (such as gaps). + * + * Explicit prev and next pointers are used instead of a list_head to make it + * easier to add segments to tail and remove them from the head without the + * need for a lock. 
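+ *
+ * The marks and marks2 fields provide one mark bit per slot each, which is
+ * why PAGEVEC_SIZE must not exceed BITS_PER_LONG (checked by the #error in
+ * the structure definition below).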
+ */ +struct folio_queue { + struct folio_batch vec; /* Folios in the queue segment */ + u8 orders[PAGEVEC_SIZE]; /* Order of each folio */ + struct folio_queue *next; /* Next queue segment or NULL */ + struct folio_queue *prev; /* Previous queue segment of NULL */ + unsigned long marks; /* 1-bit mark per folio */ + unsigned long marks2; /* Second 1-bit mark per folio */ +#if PAGEVEC_SIZE > BITS_PER_LONG +#error marks is not big enough +#endif +}; + +static inline void folioq_init(struct folio_queue *folioq) +{ + folio_batch_init(&folioq->vec); + folioq->next = NULL; + folioq->prev = NULL; + folioq->marks = 0; + folioq->marks2 = 0; +} + +static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq) +{ + return PAGEVEC_SIZE; +} + +static inline unsigned int folioq_count(struct folio_queue *folioq) +{ + return folio_batch_count(&folioq->vec); +} + +static inline bool folioq_full(struct folio_queue *folioq) +{ + //return !folio_batch_space(&folioq->vec); + return folioq_count(folioq) >= folioq_nr_slots(folioq); +} + +static inline bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot) +{ + return test_bit(slot, &folioq->marks); +} + +static inline void folioq_mark(struct folio_queue *folioq, unsigned int slot) +{ + set_bit(slot, &folioq->marks); +} + +static inline void folioq_unmark(struct folio_queue *folioq, unsigned int slot) +{ + clear_bit(slot, &folioq->marks); +} + +static inline bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot) +{ + return test_bit(slot, &folioq->marks2); +} + +static inline void folioq_mark2(struct folio_queue *folioq, unsigned int slot) +{ + set_bit(slot, &folioq->marks2); +} + +static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot) +{ + clear_bit(slot, &folioq->marks2); +} + +static inline unsigned int __folio_order(struct folio *folio) +{ + if (!folio_test_large(folio)) + return 0; + return folio->_flags_1 & 0xff; +} + +static inline unsigned int folioq_append(struct folio_queue *folioq, struct folio *folio) +{ + unsigned int slot = folioq->vec.nr++; + + folioq->vec.folios[slot] = folio; + folioq->orders[slot] = __folio_order(folio); + return slot; +} + +static inline unsigned int folioq_append_mark(struct folio_queue *folioq, struct folio *folio) +{ + unsigned int slot = folioq->vec.nr++; + + folioq->vec.folios[slot] = folio; + folioq->orders[slot] = __folio_order(folio); + folioq_mark(folioq, slot); + return slot; +} + +static inline struct folio *folioq_folio(const struct folio_queue *folioq, unsigned int slot) +{ + return folioq->vec.folios[slot]; +} + +static inline unsigned int folioq_folio_order(const struct folio_queue *folioq, unsigned int slot) +{ + return folioq->orders[slot]; +} + +static inline size_t folioq_folio_size(const struct folio_queue *folioq, unsigned int slot) +{ + return PAGE_SIZE << folioq_folio_order(folioq, slot); +} + +static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot) +{ + folioq->vec.folios[slot] = NULL; + folioq_unmark(folioq, slot); + folioq_unmark2(folioq, slot); +} + +#endif /* _LINUX_FOLIO_QUEUE_H */ diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h index 270454a6703d..a223370a59a7 100644 --- a/include/linux/iov_iter.h +++ b/include/linux/iov_iter.h @@ -10,6 +10,7 @@ #include #include +#include typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len, void *priv, void *priv2); @@ -140,6 +141,60 @@ size_t iterate_bvec(struct iov_iter *iter, size_t len, void *priv, void *priv2, return 
progress; } +/* + * Handle ITER_FOLIOQ. + */ +static __always_inline +size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2, + iov_step_f step) +{ + const struct folio_queue *folioq = iter->folioq; + unsigned int slot = iter->folioq_slot; + size_t progress = 0, skip = iter->iov_offset; + + if (slot == folioq_nr_slots(folioq)) { + /* The iterator may have been extended. */ + folioq = folioq->next; + slot = 0; + } + + do { + struct folio *folio = folioq_folio(folioq, slot); + size_t part, remain, consumed; + size_t fsize; + void *base; + + if (!folio) + break; + + fsize = folioq_folio_size(folioq, slot); + base = kmap_local_folio(folio, skip); + part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); + remain = step(base, progress, part, priv, priv2); + kunmap_local(base); + consumed = part - remain; + len -= consumed; + progress += consumed; + skip += consumed; + if (skip >= fsize) { + skip = 0; + slot++; + if (slot == folioq_nr_slots(folioq) && folioq->next) { + folioq = folioq->next; + slot = 0; + } + } + if (remain) + break; + } while (len); + + iter->folioq_slot = slot; + iter->folioq = folioq; + iter->iov_offset = skip; + iter->count -= progress; + return progress; +} + /* * Handle ITER_XARRAY. */ @@ -249,6 +304,8 @@ size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv, return iterate_bvec(iter, len, priv, priv2, step); if (iov_iter_is_kvec(iter)) return iterate_kvec(iter, len, priv, priv2, step); + if (iov_iter_is_folioq(iter)) + return iterate_folioq(iter, len, priv, priv2, step); if (iov_iter_is_xarray(iter)) return iterate_xarray(iter, len, priv, priv2, step); return iterate_discard(iter, len, priv, priv2, step); diff --git a/include/linux/uio.h b/include/linux/uio.h index 7020adedfa08..845d110acadc 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -11,6 +11,7 @@ #include struct page; +struct folio_queue; typedef unsigned int __bitwise iov_iter_extraction_t; @@ -25,6 +26,7 @@ enum iter_type { ITER_IOVEC, ITER_BVEC, ITER_KVEC, + ITER_FOLIOQ, ITER_XARRAY, ITER_DISCARD, }; @@ -66,6 +68,7 @@ struct iov_iter { const struct iovec *__iov; const struct kvec *kvec; const struct bio_vec *bvec; + const struct folio_queue *folioq; struct xarray *xarray; void __user *ubuf; }; @@ -74,6 +77,7 @@ struct iov_iter { }; union { unsigned long nr_segs; + u8 folioq_slot; loff_t xarray_start; }; }; @@ -126,6 +130,11 @@ static inline bool iov_iter_is_discard(const struct iov_iter *i) return iov_iter_type(i) == ITER_DISCARD; } +static inline bool iov_iter_is_folioq(const struct iov_iter *i) +{ + return iov_iter_type(i) == ITER_FOLIOQ; +} + static inline bool iov_iter_is_xarray(const struct iov_iter *i) { return iov_iter_type(i) == ITER_XARRAY; @@ -273,6 +282,9 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec, unsigned long nr_segs, size_t count); void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); +void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction, + const struct folio_queue *folioq, + unsigned int first_slot, unsigned int offset, size_t count); void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray, loff_t start, size_t count); ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages, diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 4a6a9f419bd7..84a517a0189d 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -527,6 +527,39 @@ static void 
iov_iter_iovec_advance(struct iov_iter *i, size_t size) i->__iov = iov; } +static void iov_iter_folioq_advance(struct iov_iter *i, size_t size) +{ + const struct folio_queue *folioq = i->folioq; + unsigned int slot = i->folioq_slot; + + if (!i->count) + return; + i->count -= size; + + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + slot = 0; + } + + size += i->iov_offset; /* From beginning of current segment. */ + do { + size_t fsize = folioq_folio_size(folioq, slot); + + if (likely(size < fsize)) + break; + size -= fsize; + slot++; + if (slot >= folioq_nr_slots(folioq) && folioq->next) { + folioq = folioq->next; + slot = 0; + } + } while (size); + + i->iov_offset = size; + i->folioq_slot = slot; + i->folioq = folioq; +} + void iov_iter_advance(struct iov_iter *i, size_t size) { if (unlikely(i->count < size)) @@ -539,12 +572,40 @@ void iov_iter_advance(struct iov_iter *i, size_t size) iov_iter_iovec_advance(i, size); } else if (iov_iter_is_bvec(i)) { iov_iter_bvec_advance(i, size); + } else if (iov_iter_is_folioq(i)) { + iov_iter_folioq_advance(i, size); } else if (iov_iter_is_discard(i)) { i->count -= size; } } EXPORT_SYMBOL(iov_iter_advance); +static void iov_iter_folioq_revert(struct iov_iter *i, size_t unroll) +{ + const struct folio_queue *folioq = i->folioq; + unsigned int slot = i->folioq_slot; + + for (;;) { + size_t fsize; + + if (slot == 0) { + folioq = folioq->prev; + slot = folioq_nr_slots(folioq); + } + slot--; + + fsize = folioq_folio_size(folioq, slot); + if (unroll <= fsize) { + i->iov_offset = fsize - unroll; + break; + } + unroll -= fsize; + } + + i->folioq_slot = slot; + i->folioq = folioq; +} + void iov_iter_revert(struct iov_iter *i, size_t unroll) { if (!unroll) @@ -576,6 +637,9 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll) } unroll -= n; } + } else if (iov_iter_is_folioq(i)) { + i->iov_offset = 0; + iov_iter_folioq_revert(i, unroll); } else { /* same logics for iovec and kvec */ const struct iovec *iov = iter_iov(i); while (1) { @@ -603,6 +667,9 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i) if (iov_iter_is_bvec(i)) return min(i->count, i->bvec->bv_len - i->iov_offset); } + if (unlikely(iov_iter_is_folioq(i))) + return !i->count ? 0 : + umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count); return i->count; } EXPORT_SYMBOL(iov_iter_single_seg_count); @@ -639,6 +706,36 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction, } EXPORT_SYMBOL(iov_iter_bvec); +/** + * iov_iter_folio_queue - Initialise an I/O iterator to use the folios in a folio queue + * @i: The iterator to initialise. + * @direction: The direction of the transfer. + * @folioq: The starting point in the folio queue. + * @first_slot: The first slot in the folio queue to use + * @offset: The offset into the folio in the first slot to start at + * @count: The size of the I/O buffer in bytes. + * + * Set up an I/O iterator to either draw data out of the pages attached to an + * inode or to inject data into those pages. The pages *must* be prevented + * from evaporation, either by taking a ref on them or locking them by the + * caller. 
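+ *
+ * For an ITER_FOLIOQ iterator, the data is drawn from (or injected into) the
+ * folios held in the folio_queue segments rather than pages looked up in an
+ * inode's pagecache.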
+ */ +void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction, + const struct folio_queue *folioq, unsigned int first_slot, + unsigned int offset, size_t count) +{ + BUG_ON(direction & ~1); + *i = (struct iov_iter) { + .iter_type = ITER_FOLIOQ, + .data_source = direction, + .folioq = folioq, + .folioq_slot = first_slot, + .count = count, + .iov_offset = offset, + }; +} +EXPORT_SYMBOL(iov_iter_folio_queue); + /** * iov_iter_xarray - Initialise an I/O iterator to use the pages in an xarray * @i: The iterator to initialise. @@ -765,12 +862,19 @@ bool iov_iter_is_aligned(const struct iov_iter *i, unsigned addr_mask, if (iov_iter_is_bvec(i)) return iov_iter_aligned_bvec(i, addr_mask, len_mask); + /* With both xarray and folioq types, we're dealing with whole folios. */ if (iov_iter_is_xarray(i)) { if (i->count & len_mask) return false; if ((i->xarray_start + i->iov_offset) & addr_mask) return false; } + if (iov_iter_is_folioq(i)) { + if (i->count & len_mask) + return false; + if (i->iov_offset & addr_mask) + return false; + } return true; } @@ -835,6 +939,9 @@ unsigned long iov_iter_alignment(const struct iov_iter *i) if (iov_iter_is_bvec(i)) return iov_iter_alignment_bvec(i); + /* With both xarray and folioq types, we're dealing with whole folios. */ + if (iov_iter_is_folioq(i)) + return i->iov_offset | i->count; if (iov_iter_is_xarray(i)) return (i->xarray_start + i->iov_offset) | i->count; @@ -887,6 +994,62 @@ static int want_pages_array(struct page ***res, size_t size, return count; } +static ssize_t iter_folioq_get_pages(struct iov_iter *iter, + struct page ***ppages, size_t maxsize, + unsigned maxpages, size_t *_start_offset) +{ + const struct folio_queue *folioq = iter->folioq; + struct page **pages; + unsigned int slot = iter->folioq_slot; + size_t extracted = 0, count = iter->count, iov_offset = iter->iov_offset; + + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + slot = 0; + if (WARN_ON(iov_offset != 0)) + return -EIO; + } + + maxpages = want_pages_array(ppages, maxsize, iov_offset & ~PAGE_MASK, maxpages); + if (!maxpages) + return -ENOMEM; + *_start_offset = iov_offset & ~PAGE_MASK; + pages = *ppages; + + for (;;) { + struct folio *folio = folioq_folio(folioq, slot); + size_t offset = iov_offset, fsize = folioq_folio_size(folioq, slot); + size_t part = PAGE_SIZE - offset % PAGE_SIZE; + + part = umin(part, umin(maxsize - extracted, fsize - offset)); + count -= part; + iov_offset += part; + extracted += part; + + *pages = folio_page(folio, offset % PAGE_SIZE); + get_page(*pages); + pages++; + maxpages--; + if (maxpages == 0 || extracted >= maxsize) + break; + + if (offset >= fsize) { + iov_offset = 0; + slot++; + if (slot == folioq_nr_slots(folioq) && folioq->next) { + folioq = folioq->next; + slot = 0; + } + } + } + + iter->count = count; + iter->iov_offset = iov_offset; + iter->folioq = folioq; + iter->folioq_slot = slot; + return extracted; +} + static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa, pgoff_t index, unsigned int nr_pages) { @@ -1034,6 +1197,8 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i, } return maxsize; } + if (iov_iter_is_folioq(i)) + return iter_folioq_get_pages(i, pages, maxsize, maxpages, start); if (iov_iter_is_xarray(i)) return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); return -EFAULT; @@ -1118,6 +1283,11 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages) return iov_npages(i, maxpages); if (iov_iter_is_bvec(i)) return bvec_npages(i, maxpages); + if 
(iov_iter_is_folioq(i)) { + unsigned offset = i->iov_offset % PAGE_SIZE; + int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE); + return min(npages, maxpages); + } if (iov_iter_is_xarray(i)) { unsigned offset = (i->xarray_start + i->iov_offset) % PAGE_SIZE; int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE); @@ -1398,6 +1568,68 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state) i->nr_segs = state->nr_segs; } +/* + * Extract a list of contiguous pages from an ITER_FOLIOQ iterator. This does + * not get references on the pages, nor does it get a pin on them. + */ +static ssize_t iov_iter_extract_folioq_pages(struct iov_iter *i, + struct page ***pages, size_t maxsize, + unsigned int maxpages, + iov_iter_extraction_t extraction_flags, + size_t *offset0) +{ + const struct folio_queue *folioq = i->folioq; + struct page **p; + unsigned int nr = 0; + size_t extracted = 0, offset, slot = i->folioq_slot; + + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + slot = 0; + if (WARN_ON(i->iov_offset != 0)) + return -EIO; + } + + offset = i->iov_offset & ~PAGE_MASK; + *offset0 = offset; + + maxpages = want_pages_array(pages, maxsize, offset, maxpages); + if (!maxpages) + return -ENOMEM; + p = *pages; + + for (;;) { + struct folio *folio = folioq_folio(folioq, slot); + size_t offset = i->iov_offset, fsize = folioq_folio_size(folioq, slot); + size_t part = PAGE_SIZE - offset % PAGE_SIZE; + + if (offset < fsize) { + part = umin(part, umin(maxsize - extracted, fsize - offset)); + i->count -= part; + i->iov_offset += part; + extracted += part; + + p[nr++] = folio_page(folio, offset / PAGE_SIZE); + } + + if (nr >= maxpages || extracted >= maxsize) + break; + + if (i->iov_offset >= fsize) { + i->iov_offset = 0; + slot++; + if (slot == folioq_nr_slots(folioq) && folioq->next) { + folioq = folioq->next; + slot = 0; + } + } + } + + i->folioq = folioq; + i->folioq_slot = slot; + return extracted; +} + /* * Extract a list of contiguous pages from an ITER_XARRAY iterator. This does not * get references on the pages, nor does it get a pin on them. @@ -1618,8 +1850,8 @@ static ssize_t iov_iter_extract_user_pages(struct iov_iter *i, * added to the pages, but refs will not be taken. * iov_iter_extract_will_pin() will return true. * - * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are - * merely listed; no extra refs or pins are obtained. + * (*) If the iterator is ITER_KVEC, ITER_BVEC, ITER_FOLIOQ or ITER_XARRAY, the + * pages are merely listed; no extra refs or pins are obtained. * iov_iter_extract_will_pin() will return 0. 
* * Note also: @@ -1654,6 +1886,10 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i, return iov_iter_extract_bvec_pages(i, pages, maxsize, maxpages, extraction_flags, offset0); + if (iov_iter_is_folioq(i)) + return iov_iter_extract_folioq_pages(i, pages, maxsize, + maxpages, extraction_flags, + offset0); if (iov_iter_is_xarray(i)) return iov_iter_extract_xarray_pages(i, pages, maxsize, maxpages, extraction_flags, diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 27e0c8ee71d8..13e15687675a 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -12,6 +12,7 @@ #include #include #include +#include #include MODULE_DESCRIPTION("iov_iter testing"); @@ -62,6 +63,9 @@ static void *__init iov_kunit_create_buffer(struct kunit *test, KUNIT_ASSERT_EQ(test, got, npages); } + for (int i = 0; i < npages; i++) + pages[i]->index = i; + buffer = vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer); @@ -362,6 +366,179 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test) KUNIT_SUCCEED(test); } +static void iov_kunit_destroy_folioq(void *data) +{ + struct folio_queue *folioq, *next; + + for (folioq = data; folioq; folioq = next) { + next = folioq->next; + for (int i = 0; i < folioq_nr_slots(folioq); i++) + if (folioq_folio(folioq, i)) + folio_put(folioq_folio(folioq, i)); + kfree(folioq); + } +} + +static void __init iov_kunit_load_folioq(struct kunit *test, + struct iov_iter *iter, int dir, + struct folio_queue *folioq, + struct page **pages, size_t npages) +{ + struct folio_queue *p = folioq; + size_t size = 0; + int i; + + for (i = 0; i < npages; i++) { + if (folioq_full(p)) { + p->next = kzalloc(sizeof(struct folio_queue), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p->next); + folioq_init(p->next); + p->next->prev = p; + p = p->next; + } + folioq_append(p, page_folio(pages[i])); + size += PAGE_SIZE; + } + iov_iter_folio_queue(iter, dir, folioq, 0, 0, size); +} + +static struct folio_queue *iov_kunit_create_folioq(struct kunit *test) +{ + struct folio_queue *folioq; + + folioq = kzalloc(sizeof(struct folio_queue), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folioq); + kunit_add_action_or_reset(test, iov_kunit_destroy_folioq, folioq); + folioq_init(folioq); + return folioq; +} + +/* + * Test copying to a ITER_FOLIOQ-type iterator. 
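+ *
+ * The destination is a zeroed buffer exposed as a chain of folios; patterned
+ * data is copied into a series of sub-ranges with copy_to_iter() and the
+ * buffer is then compared byte by byte against the expected image.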
+ */ +static void __init iov_kunit_copy_to_folioq(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct folio_queue *folioq; + struct page **spages, **bpages; + u8 *scratch, *buffer; + size_t bufsize, npages, size, copied; + int i, patt; + + bufsize = 0x100000; + npages = bufsize / PAGE_SIZE; + + folioq = iov_kunit_create_folioq(test); + + scratch = iov_kunit_create_buffer(test, &spages, npages); + for (i = 0; i < bufsize; i++) + scratch[i] = pattern(i); + + buffer = iov_kunit_create_buffer(test, &bpages, npages); + memset(buffer, 0, bufsize); + + iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages); + + i = 0; + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + size = pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_folio_queue(&iter, READ, folioq, 0, 0, pr->to); + iov_iter_advance(&iter, pr->from); + copied = copy_to_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE); + i += size; + if (test->status == KUNIT_FAILURE) + goto stop; + } + + /* Build the expected image in the scratch buffer. */ + patt = 0; + memset(scratch, 0, bufsize); + for (pr = kvec_test_ranges; pr->from >= 0; pr++) + for (i = pr->from; i < pr->to; i++) + scratch[i] = pattern(patt++); + + /* Compare the images */ + for (i = 0; i < bufsize; i++) { + KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i); + if (buffer[i] != scratch[i]) + return; + } + +stop: + KUNIT_SUCCEED(test); +} + +/* + * Test copying from a ITER_FOLIOQ-type iterator. + */ +static void __init iov_kunit_copy_from_folioq(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct folio_queue *folioq; + struct page **spages, **bpages; + u8 *scratch, *buffer; + size_t bufsize, npages, size, copied; + int i, j; + + bufsize = 0x100000; + npages = bufsize / PAGE_SIZE; + + folioq = iov_kunit_create_folioq(test); + + buffer = iov_kunit_create_buffer(test, &bpages, npages); + for (i = 0; i < bufsize; i++) + buffer[i] = pattern(i); + + scratch = iov_kunit_create_buffer(test, &spages, npages); + memset(scratch, 0, bufsize); + + iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages); + + i = 0; + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + size = pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to); + iov_iter_advance(&iter, pr->from); + copied = copy_from_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE); + i += size; + } + + /* Build the expected image in the main buffer. */ + i = 0; + memset(buffer, 0, bufsize); + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + for (j = pr->from; j < pr->to; j++) { + buffer[i++] = pattern(j); + if (i >= bufsize) + goto stop; + } + } +stop: + + /* Compare the images */ + for (i = 0; i < bufsize; i++) { + KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i); + if (scratch[i] != buffer[i]) + return; + } + + KUNIT_SUCCEED(test); +} + static void iov_kunit_destroy_xarray(void *data) { struct xarray *xarray = data; @@ -677,6 +854,85 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test) KUNIT_SUCCEED(test); } +/* + * Test the extraction of ITER_FOLIOQ-type iterators. 
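+ *
+ * The folio-backed buffer is wrapped in an ITER_FOLIOQ iterator and
+ * iov_iter_extract_pages() is called repeatedly over a set of sub-ranges;
+ * the returned page pointers and offsets are checked against the pages
+ * backing the queue.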
+ */ +static void __init iov_kunit_extract_pages_folioq(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct folio_queue *folioq; + struct iov_iter iter; + struct page **bpages, *pagelist[8], **pages = pagelist; + ssize_t len; + size_t bufsize, size = 0, npages; + int i, from; + + bufsize = 0x100000; + npages = bufsize / PAGE_SIZE; + + folioq = iov_kunit_create_folioq(test); + + iov_kunit_create_buffer(test, &bpages, npages); + iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages); + + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + from = pr->from; + size = pr->to - from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to); + iov_iter_advance(&iter, from); + + do { + size_t offset0 = LONG_MAX; + + for (i = 0; i < ARRAY_SIZE(pagelist); i++) + pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL; + + len = iov_iter_extract_pages(&iter, &pages, 100 * 1024, + ARRAY_SIZE(pagelist), 0, &offset0); + KUNIT_EXPECT_GE(test, len, 0); + if (len < 0) + break; + KUNIT_EXPECT_LE(test, len, size); + KUNIT_EXPECT_EQ(test, iter.count, size - len); + if (len == 0) + break; + size -= len; + KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0); + KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE); + + for (i = 0; i < ARRAY_SIZE(pagelist); i++) { + struct page *p; + ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0); + int ix; + + KUNIT_ASSERT_GE(test, part, 0); + ix = from / PAGE_SIZE; + KUNIT_ASSERT_LT(test, ix, npages); + p = bpages[ix]; + KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p); + KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE); + from += part; + len -= part; + KUNIT_ASSERT_GE(test, len, 0); + if (len == 0) + break; + offset0 = 0; + } + + if (test->status == KUNIT_FAILURE) + goto stop; + } while (iov_iter_count(&iter) > 0); + + KUNIT_EXPECT_EQ(test, size, 0); + KUNIT_EXPECT_EQ(test, iter.count, 0); + } + +stop: + KUNIT_SUCCEED(test); +} + /* * Test the extraction of ITER_XARRAY-type iterators. */ @@ -761,10 +1017,13 @@ static struct kunit_case __refdata iov_kunit_cases[] = { KUNIT_CASE(iov_kunit_copy_from_kvec), KUNIT_CASE(iov_kunit_copy_to_bvec), KUNIT_CASE(iov_kunit_copy_from_bvec), + KUNIT_CASE(iov_kunit_copy_to_folioq), + KUNIT_CASE(iov_kunit_copy_from_folioq), KUNIT_CASE(iov_kunit_copy_to_xarray), KUNIT_CASE(iov_kunit_copy_from_xarray), KUNIT_CASE(iov_kunit_extract_pages_kvec), KUNIT_CASE(iov_kunit_extract_pages_bvec), + KUNIT_CASE(iov_kunit_extract_pages_folioq), KUNIT_CASE(iov_kunit_extract_pages_xarray), {} }; diff --git a/lib/scatterlist.c b/lib/scatterlist.c index 7bc2220fea80..473b2646f71c 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -11,6 +11,7 @@ #include #include #include +#include /** * sg_next - return the next scatterlist entry in a list @@ -1261,6 +1262,67 @@ static ssize_t extract_kvec_to_sg(struct iov_iter *iter, return ret; } +/* + * Extract up to sg_max folios from an FOLIOQ-type iterator and add them to + * the scatterlist. The pages are not pinned. 
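+ *
+ * On return, the iterator has been advanced over the bytes added:
+ * iter->folioq, iter->folioq_slot and iter->iov_offset are updated and
+ * iter->count is reduced by the amount added to the scatterlist.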
+ */ +static ssize_t extract_folioq_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) +{ + const struct folio_queue *folioq = iter->folioq; + struct scatterlist *sg = sgtable->sgl + sgtable->nents; + unsigned int slot = iter->folioq_slot; + ssize_t ret = 0; + size_t offset = iter->iov_offset; + + BUG_ON(!folioq); + + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + if (WARN_ON_ONCE(!folioq)) + return 0; + slot = 0; + } + + do { + struct folio *folio = folioq_folio(folioq, slot); + size_t fsize = folioq_folio_size(folioq, slot); + + if (offset < fsize) { + size_t part = umin(maxsize - ret, fsize - offset); + + sg_set_page(sg, folio_page(folio, 0), part, offset); + sgtable->nents++; + sg++; + sg_max--; + offset += part; + ret += part; + } + + if (offset >= fsize) { + offset = 0; + slot++; + if (slot >= folioq_nr_slots(folioq)) { + if (!folioq->next) { + WARN_ON_ONCE(ret < iter->count); + break; + } + folioq = folioq->next; + slot = 0; + } + } + } while (sg_max > 0 && ret < maxsize); + + iter->folioq = folioq; + iter->folioq_slot = slot; + iter->iov_offset = offset; + iter->count -= ret; + return ret; +} + /* * Extract up to sg_max folios from an XARRAY-type iterator and add them to * the scatterlist. The pages are not pinned. @@ -1323,8 +1385,8 @@ static ssize_t extract_xarray_to_sg(struct iov_iter *iter, * addition of @sg_max elements. * * The pages referred to by UBUF- and IOVEC-type iterators are extracted and - * pinned; BVEC-, KVEC- and XARRAY-type are extracted but aren't pinned; PIPE- - * and DISCARD-type are not supported. + * pinned; BVEC-, KVEC-, FOLIOQ- and XARRAY-type are extracted but aren't + * pinned; DISCARD-type is not supported. * * No end mark is placed on the scatterlist; that's left to the caller. 
* @@ -1356,6 +1418,9 @@ ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize, case ITER_KVEC: return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max, extraction_flags); + case ITER_FOLIOQ: + return extract_folioq_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); case ITER_XARRAY: return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max, extraction_flags); From patchwork Wed Aug 14 20:38:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763961 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4669CC531DC for ; Wed, 14 Aug 2024 20:40:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CCB706B0085; Wed, 14 Aug 2024 16:40:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C7B966B00A9; Wed, 14 Aug 2024 16:40:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B1BC06B00AA; Wed, 14 Aug 2024 16:40:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 938486B0085 for ; Wed, 14 Aug 2024 16:40:50 -0400 (EDT) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 5275241242 for ; Wed, 14 Aug 2024 20:40:50 +0000 (UTC) X-FDA: 82452019860.23.CF022AB Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf15.hostedemail.com (Postfix) with ESMTP id A33E4A0016 for ; Wed, 14 Aug 2024 20:40:48 +0000 (UTC) Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=d1XEm0J4; spf=pass (imf15.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723668012; a=rsa-sha256; cv=none; b=btjVNFD07A26m5ntZDe0CHyL0HO+pXMmZFsT314LVQSHaVuaMCJTJOGx8UH2aGOB+fqQPp BVdZ+O31qPb0GRGMKZXhHMPhIAHwPbp4BwEHqX3GHL10VXDbrPRT3bTmGy8CCVZsshJgbE I3t2DaT8mLJ6kiU4myoG+kgLTxZIaks= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=d1XEm0J4; spf=pass (imf15.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723668012; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=w97nPU0tAGeP5M9LqJOBxzdkU1NC+KgCeHw19Arp5sY=; b=PTj89j7EfGUON1+plYIKkTBujKr/2iMTBiUd7FYlOO/Wkcs9deB7PFKoTHTYKQSs5pgASJ Qj+2NqyK21blj766I78n5PH+VIjUrDvpR0Dak3OOx89mUvfqFJ/7/1z0kdH2mB2vkkHlYL ie6y0eRJlV2F6qD728j1W3IYjq8keLg= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668048; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=w97nPU0tAGeP5M9LqJOBxzdkU1NC+KgCeHw19Arp5sY=; b=d1XEm0J4ahPVLt6VtTqceq0aln6EgREjcRbALYmaYBzKzsl+Dl67sYfGQVZUy3/hBGv7Mr 2VIVpyIbYBc1gi17MUYIAa7wZeH+kbZZrQsbTWPU1nTDbMQqnj3fh//5RuZTTYsMP+3g00 PQIw4vlvvrL/vpKcGvzQzsDH+0yjtPE= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-41-BVHWuZvpPrWVhSRWafLQKA-1; Wed, 14 Aug 2024 16:40:44 -0400 X-MC-Unique: BVHWuZvpPrWVhSRWafLQKA-1 Received: from mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.17]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id EBA0F1954206; Wed, 14 Aug 2024 20:40:40 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id E563719560A3; Wed, 14 Aug 2024 20:40:34 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Alexander Viro Subject: [PATCH v2 13/25] iov_iter: Provide copy_folio_from_iter() Date: Wed, 14 Aug 2024 21:38:33 +0100 Message-ID: <20240814203850.2240469-14-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.17 X-Stat-Signature: xpkf1sefsqcsdj63p6xp4gidqgqzchj8 X-Rspamd-Queue-Id: A33E4A0016 X-Rspam-User: X-Rspamd-Server: rspam10 X-HE-Tag: 1723668048-504558 X-HE-Meta: U2FsdGVkX1+b/VmU1iSLfP7EFCBjAVHBd1XfFe4SPn4HM2i8Q7QNAYpDSPhTxkDT60GNwFjRbWoITyIMH6lMR+/9t15XZQJaEXAH0NksIRtSaMs3/4bjYzVDeZfgr+Ef8OZPZzLVhaA3qqQc0a6Vag52BPdPRgc6ZPuicmirFM387hKS8kqueyo+JrXAwH/yDlPlGgH9xM4sfgs3vLZmkA/B3Dp+qYbRu9efSDhoj5F/mSGDmR47DnxagGQMctAZAPn/8w3pZB19RfGLnQWi5yCmOPC9aelBfx6e5FHN2ILaHo9o+MgGZItAzGl9cTzbM9JPJZOBZAkeWViqtsvvDUKDHNDKEX8yTbbSrgsAc/ifQuRmP+YLGa9NdqrS2ZUzkfTf5uFiNf0e3DinlvXhUDlI1NZiHJOcPHgm91aGaMzO5c3wdTg5Ys0g7VHhRZLdFbTc6BQP18aaFalRyK75GMaml78G2JD8oJ6IDJ1KZHQu8p6mdbmT2izeDJOjuFN9aVhdFT0XZzPeN6g017/YO6IfIjn/WNAhY4Ojs1XsGYdCzs40MtwtwPGYMxalyB8ptlGvKnyLFUlyALrxZFN1KXvlNhW0pw8QecwzMXX3OujoXU+68tqGlh5Oi3UVqfFQb2Y8SF9ynw+hsSEj+f4aDRclxCOzbedYVn3Qu6FqNiC/GZx+qJHnOC8prFHg9NboEqcXdlreUCbfPTIUvk7wSJoGgV4Wz/4aXjknlYSuHgmZIdqbdbDCaMhwxIPOzQKiMN85TgBm/YThwob0dMrPror/uSR5aZM80ilEigTG+IuXZxMyYKcUPXVGJTEAicfFGBeTUT6Zj+lPEODtF3oVp8v0kro702prky2cr6rlNOILoN3VIVVMCUInbX7ruV0aDdJMBGPiYlc2/qRgWW5UMdI+uZa9mJzIokfvg/zbLh+oalhd1s8vnKaI0tS8CNWJOoTPgU/hPPXGjnJvDkK v5TQhpjs 
nT6gSUaXdceozDcVu6F1+4NfrkUb/ZEaeuk8u0K0gHoQJuasFhpWwOobpDmyPnwIz4T962URC+cNfdAUx03uAM8GdIHhuw49xmJhW8f8QBxxz85XaoldcOl48RSCyOVDpE3vmYmXFI1qMF4tBua3yopkz6p6ugNw54Te0QfuBABwTZHgG3G9ewpAj1eO37ToGvLjcJPpRO+qydDG+aeB0nEEcig8z1OHaTHxC4yXftk9zM9FjgbNC+TgcdVqz8AOAymaa31qqrqh/GRYAwnIkjSFI5OA0SyEVIoujxD8N7Z+LoazRxgHy2HfqX9oez54sNqe/XbR3WgTvV0twPyvYv+eltgXLrzp6+bSIv0Ap2SELQA1GDxEjCCXSmTLPdKWc9OBZ0du1MXakvx5kfduXJfXKKwGBj8L3v1VITXeLpPDBCR44tf+Uu7/W3l4dGS8gjl1X0SPA6bCc+0pxos056q63D0NGurYWUHmqqUZWxA4Z8QXTec425xC40e4kWtZyFk2zYNPgIPK+HBp/qKfITCAdXbepz6OX+tmVKYxZEJ4BbsVftglc5tltxOnALtQAjpCJ5tNVPj2s1Dk= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Provide a copy_folio_from_iter() wrapper. Signed-off-by: David Howells cc: Alexander Viro cc: Christian Brauner cc: Matthew Wilcox cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- include/linux/uio.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/include/linux/uio.h b/include/linux/uio.h index 845d110acadc..853f9de5aa05 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -189,6 +189,12 @@ static inline size_t copy_folio_to_iter(struct folio *folio, size_t offset, return copy_page_to_iter(&folio->page, offset, bytes, i); } +static inline size_t copy_folio_from_iter(struct folio *folio, size_t offset, + size_t bytes, struct iov_iter *i) +{ + return copy_page_from_iter(&folio->page, offset, bytes, i); +} + static inline size_t copy_folio_from_iter_atomic(struct folio *folio, size_t offset, size_t bytes, struct iov_iter *i) { From patchwork Wed Aug 14 20:38:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763962 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 262B7C3DA4A for ; Wed, 14 Aug 2024 20:40:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B6F126B00AB; Wed, 14 Aug 2024 16:40:57 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B1BC26B00AC; Wed, 14 Aug 2024 16:40:57 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 996E36B00AD; Wed, 14 Aug 2024 16:40:57 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 797F96B00AB for ; Wed, 14 Aug 2024 16:40:57 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 3807F12118E for ; Wed, 14 Aug 2024 20:40:57 +0000 (UTC) X-FDA: 82452020154.24.329C898 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf05.hostedemail.com (Postfix) with ESMTP id 540E0100023 for ; Wed, 14 Aug 2024 20:40:55 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=UIWYoXu2; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf05.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723667996; a=rsa-sha256; 
cv=none; b=iZdClUtc6YKitSxXBm0bTmT68zbvi0X8AZ5QK5E/kMU3LuVVUSMCMi/x8ufw7LRZvDG1ej s80c8G7RhnBIVN3/Glxv812aC6Jl1I5uDEWusHJc4vTPoqxZeYqXaxCVZ9V2DW03yJZBlo uFu4e6AjwZi1yKjOhGZiLD2dOwGNxRQ= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=UIWYoXu2; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf05.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723667996; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=1RZCIFt1nFHOn2K81N2VwRb0JQpc16vDcy/7plWa8pc=; b=f6y1vXJjr8MNzLqG6UKf+KkduFgBWNZAGC+AV7r5uRRCDt2KdYZzitZ09c3dsCTAIoymS9 ELCUUMCeeJFDoVuSPnaB90jpq85pogYgZeuJVJ8El58tjmUtXMgK2SZdf/Aj1JL5sKJcFa 2iufgFpVAllkWNBS/m/xdUY3cAP4Inw= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668054; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1RZCIFt1nFHOn2K81N2VwRb0JQpc16vDcy/7plWa8pc=; b=UIWYoXu2t7KFn2U7imDP2GkvlrP5IS2UBbm0K8kLUAEe3L8fAyJDmVWbQpaIRmeTNBzPwO B2EgaRalUcO0j+QL2evdMDjhkA38O+5vYdr3KafRsuJTFUqBGkkbq490sCwNeBoSJxbVUu VUMN0Uv+clrYPrPUA5n4rAN2TA0FMbE= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-641-02X665vNNgu4MYu1Px-cDA-1; Wed, 14 Aug 2024 16:40:51 -0400 X-MC-Unique: 02X665vNNgu4MYu1Px-cDA-1 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 8BCED1955F65; Wed, 14 Aug 2024 20:40:48 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 5F4AD196BE80; Wed, 14 Aug 2024 20:40:42 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Enzo Matsumiya Subject: [PATCH v2 14/25] cifs: Provide the capability to extract from ITER_FOLIOQ to RDMA SGEs Date: Wed, 14 Aug 2024 21:38:34 +0100 Message-ID: <20240814203850.2240469-15-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 
Make smb_extract_iter_to_rdma() extract page fragments from an ITER_FOLIOQ iterator into RDMA SGEs.

Signed-off-by: David Howells
cc: Steve French
cc: Paulo Alcantara
cc: Tom Talpey
cc: Enzo Matsumiya
cc: linux-cifs@vger.kernel.org
---
 fs/smb/client/smbdirect.c | 71 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 68 insertions(+), 3 deletions(-)

diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c index 7bcc379014ca..c946b38ca825 100644 --- a/fs/smb/client/smbdirect.c +++ b/fs/smb/client/smbdirect.c @@ -6,6 +6,7 @@ */ #include #include +#include #include "smbdirect.h" #include "cifs_debug.h" #include "cifsproto.h" @@ -2463,6 +2464,8 @@ static ssize_t smb_extract_bvec_to_rdma(struct iov_iter *iter, start = 0; } + if (ret > 0) + iov_iter_advance(iter, ret); return ret; } @@ -2519,6 +2522,65 @@ static ssize_t smb_extract_kvec_to_rdma(struct iov_iter *iter, start = 0; } + if (ret > 0) + iov_iter_advance(iter, ret); + return ret; +} + +/* + * Extract folio fragments from a FOLIOQ-class iterator and add them to an RDMA + * list. The folios are not pinned.
+ */ +static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter, + struct smb_extract_to_rdma *rdma, + ssize_t maxsize) +{ + const struct folio_queue *folioq = iter->folioq; + unsigned int slot = iter->folioq_slot; + ssize_t ret = 0; + size_t offset = iter->iov_offset; + + BUG_ON(!folioq); + + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + if (WARN_ON_ONCE(!folioq)) + return -EIO; + slot = 0; + } + + do { + struct folio *folio = folioq_folio(folioq, slot); + size_t fsize = folioq_folio_size(folioq, slot); + + if (offset < fsize) { + size_t part = umin(maxsize - ret, fsize - offset); + + if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part)) + return -EIO; + + offset += part; + ret += part; + } + + if (offset >= fsize) { + offset = 0; + slot++; + if (slot >= folioq_nr_slots(folioq)) { + if (!folioq->next) { + WARN_ON_ONCE(ret < iter->count); + break; + } + folioq = folioq->next; + slot = 0; + } + } + } while (rdma->nr_sge < rdma->max_sge || maxsize > 0); + + iter->folioq = folioq; + iter->folioq_slot = slot; + iter->iov_offset = offset; + iter->count -= ret; return ret; } @@ -2563,6 +2625,8 @@ static ssize_t smb_extract_xarray_to_rdma(struct iov_iter *iter, } rcu_read_unlock(); + if (ret > 0) + iov_iter_advance(iter, ret); return ret; } @@ -2590,6 +2654,9 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len, case ITER_KVEC: ret = smb_extract_kvec_to_rdma(iter, rdma, len); break; + case ITER_FOLIOQ: + ret = smb_extract_folioq_to_rdma(iter, rdma, len); + break; case ITER_XARRAY: ret = smb_extract_xarray_to_rdma(iter, rdma, len); break; @@ -2598,9 +2665,7 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len, return -EIO; } - if (ret > 0) { - iov_iter_advance(iter, ret); - } else if (ret < 0) { + if (ret < 0) { while (rdma->nr_sge > before) { struct ib_sge *sge = &rdma->sge[rdma->nr_sge--];
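For readers unfamiliar with the FOLIOQ walk above, the following stand-alone model (editorial illustration only; none of these names are kernel API) shows the same slot-by-slot packing idea: byte ranges are peeled off a queue of (buffer, length) slots into a bounded scatter-gather table until the table fills or the size budget is spent. It deliberately simplifies the termination condition and omits page pinning and DMA concerns:

#include <stddef.h>
#include <stdio.h>

struct slot { const char *buf; size_t size; };
struct sge  { const char *addr; size_t len; };

#define MAX_SGE 4

/* Pack up to maxsize bytes, starting at 'offset' into the first slot. */
static size_t extract_to_sges(const struct slot *q, size_t nr_slots,
			      size_t offset, size_t maxsize,
			      struct sge *sge, size_t *nr_sge)
{
	size_t ret = 0, slot = 0;

	while (slot < nr_slots && *nr_sge < MAX_SGE && ret < maxsize) {
		size_t fsize = q[slot].size;

		if (offset < fsize) {
			size_t part = maxsize - ret < fsize - offset ?
				      maxsize - ret : fsize - offset;

			sge[*nr_sge].addr = q[slot].buf + offset;
			sge[*nr_sge].len  = part;
			(*nr_sge)++;
			offset += part;
			ret += part;
		}
		if (offset >= fsize) {	/* slot exhausted: move to the next one */
			offset = 0;
			slot++;
		}
	}
	return ret;
}

int main(void)
{
	struct slot q[] = { { "aaaa", 4 }, { "bbbbbbbb", 8 }, { "cc", 2 } };
	struct sge sge[MAX_SGE];
	size_t nr_sge = 0;
	size_t n = extract_to_sges(q, 3, 2, 10, sge, &nr_sge);

	printf("extracted %zu bytes into %zu sges\n", n, nr_sge);
	return 0;
}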
From patchwork Wed Aug 14 20:38:35 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763963
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/25] netfs: Use new folio_queue data type and iterator instead of xarray iter
Date: Wed, 14 Aug 2024 21:38:35 +0100
Message-ID: <20240814203850.2240469-16-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
Make the netfs write-side routines use the new folio_queue struct to hold a rolling buffer of folios, with the issuer adding folios at the tail and the collector removing them from the head as they're processed, instead of using an xarray. This will allow a subsequent patch to simplify the write collector.

The primary mark (as tested by folioq_is_marked()) is used to note if the corresponding folio needs putting.
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/internal.h | 9 +++- fs/netfs/misc.c | 76 ++++++++++++++++++++++++++++++++ fs/netfs/objects.c | 1 + fs/netfs/stats.c | 4 +- fs/netfs/write_collect.c | 84 +++++++++++++++++++----------------- fs/netfs/write_issue.c | 28 ++++++------ include/linux/netfs.h | 8 ++-- include/trace/events/netfs.h | 1 + 8 files changed, 150 insertions(+), 61 deletions(-) diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index f2920b4ee726..e1149e05a5c8 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -7,6 +7,7 @@ #include #include +#include #include #include #include @@ -64,6 +65,10 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {} /* * misc.c */ +int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio, + bool needs_put); +struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq); +void netfs_clear_buffer(struct netfs_io_request *rreq); /* * objects.c @@ -120,6 +125,7 @@ extern atomic_t netfs_n_wh_write_done; extern atomic_t netfs_n_wh_write_failed; extern atomic_t netfs_n_wb_lock_skip; extern atomic_t netfs_n_wb_lock_wait; +extern atomic_t netfs_n_folioq; int netfs_stats_show(struct seq_file *m, void *v); @@ -153,7 +159,8 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, loff_t start, enum netfs_io_origin origin); void netfs_reissue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq); + struct netfs_io_subrequest *subreq, + struct iov_iter *source); int netfs_advance_write(struct netfs_io_request *wreq, struct netfs_io_stream *stream, loff_t start, size_t len, bool to_eof); diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c index 554a1a4615ad..e642e5cacb8d 100644 --- a/fs/netfs/misc.c +++ b/fs/netfs/misc.c @@ -8,6 +8,82 @@ #include #include "internal.h" +/* + * Append a folio to the rolling queue. + */ +int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio, + bool needs_put) +{ + struct folio_queue *tail = rreq->buffer_tail; + unsigned int slot, order = folio_order(folio); + + if (WARN_ON_ONCE(!rreq->buffer && tail) || + WARN_ON_ONCE(rreq->buffer && !tail)) + return -EIO; + + if (!tail || folioq_full(tail)) { + tail = kmalloc(sizeof(*tail), GFP_NOFS); + if (!tail) + return -ENOMEM; + netfs_stat(&netfs_n_folioq); + folioq_init(tail); + tail->prev = rreq->buffer_tail; + if (tail->prev) + tail->prev->next = tail; + rreq->buffer_tail = tail; + if (!rreq->buffer) { + rreq->buffer = tail; + iov_iter_folio_queue(&rreq->io_iter, ITER_SOURCE, tail, 0, 0, 0); + } + rreq->buffer_tail_slot = 0; + } + + rreq->io_iter.count += PAGE_SIZE << order; + + slot = folioq_append(tail, folio); + /* Store the counter after setting the slot. */ + smp_store_release(&rreq->buffer_tail_slot, slot); + return 0; +} + +/* + * Delete the head of a rolling queue. + */ +struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq) +{ + struct folio_queue *head = wreq->buffer, *next = head->next; + + if (next) + next->prev = NULL; + netfs_stat_d(&netfs_n_folioq); + kfree(head); + wreq->buffer = next; + return next; +} + +/* + * Clear out a rolling queue. 
+ */ +void netfs_clear_buffer(struct netfs_io_request *rreq) +{ + struct folio_queue *p; + + while ((p = rreq->buffer)) { + rreq->buffer = p->next; + for (int slot = 0; slot < folioq_nr_slots(p); slot++) { + struct folio *folio = folioq_folio(p, slot); + if (!folio) + continue; + if (folioq_is_marked(p, slot)) { + trace_netfs_folio(folio, netfs_folio_trace_put); + folio_put(folio); + } + } + netfs_stat_d(&netfs_n_folioq); + kfree(p); + } +} + /** * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback * @mapping: The mapping the folio belongs to. diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index d6e9785ce7a3..4291cd405fc1 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -141,6 +141,7 @@ static void netfs_free_request(struct work_struct *work) } kvfree(rreq->direct_bv); } + netfs_clear_buffer(rreq); if (atomic_dec_and_test(&ictx->io_count)) wake_up_var(&ictx->io_count); diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c index 5fe1c396e24f..5065289f5555 100644 --- a/fs/netfs/stats.c +++ b/fs/netfs/stats.c @@ -41,6 +41,7 @@ atomic_t netfs_n_wh_write_done; atomic_t netfs_n_wh_write_failed; atomic_t netfs_n_wb_lock_skip; atomic_t netfs_n_wb_lock_wait; +atomic_t netfs_n_folioq; int netfs_stats_show(struct seq_file *m, void *v) { @@ -76,9 +77,10 @@ int netfs_stats_show(struct seq_file *m, void *v) atomic_read(&netfs_n_wh_write), atomic_read(&netfs_n_wh_write_done), atomic_read(&netfs_n_wh_write_failed)); - seq_printf(m, "Objs : rr=%u sr=%u wsc=%u\n", + seq_printf(m, "Objs : rr=%u sr=%u foq=%u wsc=%u\n", atomic_read(&netfs_n_rh_rreq), atomic_read(&netfs_n_rh_sreq), + atomic_read(&netfs_n_folioq), atomic_read(&netfs_n_wh_wstream_conflict)); seq_printf(m, "WbLock : skip=%u wait=%u\n", atomic_read(&netfs_n_wb_lock_skip), diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 5f504b03a1e7..1521a23077c3 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -74,42 +74,6 @@ int netfs_folio_written_back(struct folio *folio) return gcount; } -/* - * Get hold of a folio we have under writeback. We don't want to get the - * refcount on it. - */ -static struct folio *netfs_writeback_lookup_folio(struct netfs_io_request *wreq, loff_t pos) -{ - XA_STATE(xas, &wreq->mapping->i_pages, pos / PAGE_SIZE); - struct folio *folio; - - rcu_read_lock(); - - for (;;) { - xas_reset(&xas); - folio = xas_load(&xas); - if (xas_retry(&xas, folio)) - continue; - - if (!folio || xa_is_value(folio)) - kdebug("R=%08x: folio %lx (%llx) not present", - wreq->debug_id, xas.xa_index, pos / PAGE_SIZE); - BUG_ON(!folio || xa_is_value(folio)); - - if (folio == xas_reload(&xas)) - break; - } - - rcu_read_unlock(); - - if (WARN_ONCE(!folio_test_writeback(folio), - "R=%08x: folio %lx is not under writeback\n", - wreq->debug_id, folio->index)) { - trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); - } - return folio; -} - /* * Unlock any folios we've finished with. 
*/ @@ -117,13 +81,25 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, unsigned long long collected_to, unsigned int *notes) { + struct folio_queue *folioq = wreq->buffer; + unsigned int slot = wreq->buffer_head_slot; + + if (slot >= folioq_nr_slots(folioq)) { + folioq = netfs_delete_buffer_head(wreq); + slot = 0; + } + for (;;) { struct folio *folio; struct netfs_folio *finfo; unsigned long long fpos, fend; size_t fsize, flen; - folio = netfs_writeback_lookup_folio(wreq, wreq->cleaned_to); + folio = folioq_folio(folioq, slot); + if (WARN_ONCE(!folio_test_writeback(folio), + "R=%08x: folio %lx is not under writeback\n", + wreq->debug_id, folio->index)) + trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); fpos = folio_pos(folio); fsize = folio_size(folio); @@ -148,9 +124,25 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, wreq->cleaned_to = fpos + fsize; *notes |= MADE_PROGRESS; + /* Clean up the head folioq. If we clear an entire folioq, then + * we can get rid of it provided it's not also the tail folioq + * being filled by the issuer. + */ + folioq_clear(folioq, slot); + slot++; + if (slot >= folioq_nr_slots(folioq)) { + if (READ_ONCE(wreq->buffer_tail) == folioq) + break; + folioq = netfs_delete_buffer_head(wreq); + slot = 0; + } + if (fpos + fsize >= collected_to) break; } + + wreq->buffer = folioq; + wreq->buffer_head_slot = slot; } /* @@ -181,9 +173,12 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) break; if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { + struct iov_iter source = subreq->io_iter; + + iov_iter_revert(&source, subreq->len - source.count); __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq); + netfs_reissue_write(stream, subreq, &source); } } return; @@ -193,6 +188,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, do { struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp; + struct iov_iter source; unsigned long long start, len; size_t part; bool boundary = false; @@ -220,6 +216,14 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, len += to->len; } + /* Determine the set of buffers we're going to use. Each + * subreq gets a subset of a single overall contiguous buffer. + */ + source = from->io_iter; + iov_iter_revert(&source, subreq->len - source.count); + iov_iter_advance(&source, from->transferred); + source.count = len; + /* Work through the sublist. */ subreq = from; list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) { @@ -242,7 +246,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, boundary = true; netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq); + netfs_reissue_write(stream, subreq, &source); if (subreq == to) break; } @@ -309,7 +313,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, boundary = false; } - netfs_reissue_write(stream, subreq); + netfs_reissue_write(stream, subreq, &source); if (!len) break; diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 7880a586343f..a75b62b202c5 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -213,9 +213,11 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, * netfs_write_subrequest_terminated() when complete. 
*/ static void netfs_do_issue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq) + struct netfs_io_subrequest *subreq, + struct iov_iter *source) { struct netfs_io_request *wreq = subreq->rreq; + size_t size = subreq->len - subreq->transferred; _enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len); @@ -223,27 +225,20 @@ static void netfs_do_issue_write(struct netfs_io_stream *stream, return netfs_write_subrequest_terminated(subreq, subreq->error, false); // TODO: Use encrypted buffer - if (test_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags)) { - subreq->io_iter = wreq->io_iter; - iov_iter_advance(&subreq->io_iter, - subreq->start + subreq->transferred - wreq->start); - iov_iter_truncate(&subreq->io_iter, - subreq->len - subreq->transferred); - } else { - iov_iter_xarray(&subreq->io_iter, ITER_SOURCE, &wreq->mapping->i_pages, - subreq->start + subreq->transferred, - subreq->len - subreq->transferred); - } + subreq->io_iter = *source; + iov_iter_advance(source, size); + iov_iter_truncate(&subreq->io_iter, size); trace_netfs_sreq(subreq, netfs_sreq_trace_submit); stream->issue_write(subreq); } void netfs_reissue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq) + struct netfs_io_subrequest *subreq, + struct iov_iter *source) { __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); - netfs_do_issue_write(stream, subreq); + netfs_do_issue_write(stream, subreq, source); } static void netfs_issue_write(struct netfs_io_request *wreq, @@ -257,7 +252,7 @@ static void netfs_issue_write(struct netfs_io_request *wreq, if (subreq->start + subreq->len > wreq->start + wreq->submitted) WRITE_ONCE(wreq->submitted, subreq->start + subreq->len - wreq->start); - netfs_do_issue_write(stream, subreq); + netfs_do_issue_write(stream, subreq, &wreq->io_iter); } /* @@ -422,6 +417,9 @@ static int netfs_write_folio(struct netfs_io_request *wreq, trace_netfs_folio(folio, netfs_folio_trace_store_plus); } + /* Attach the folio to the rolling buffer. */ + netfs_buffer_append_folio(wreq, folio, false); + /* Move the submission point forward to allow for write-streaming data * not starting at the front of the page. We don't do write-streaming * with the cache as the cache requires DIO alignment. 
diff --git a/include/linux/netfs.h b/include/linux/netfs.h index ae4abf121d97..6428be9d99ba 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -38,10 +38,6 @@ static inline void folio_start_private_2(struct folio *folio) folio_set_private_2(folio); } -/* Marks used on xarray-based buffers */ -#define NETFS_BUF_PUT_MARK XA_MARK_0 /* - Page needs putting */ -#define NETFS_BUF_PAGECACHE_MARK XA_MARK_1 /* - Page needs wb/dirty flag wrangling */ - enum netfs_io_source { NETFS_SOURCE_UNKNOWN, NETFS_FILL_WITH_ZEROES, @@ -232,6 +228,8 @@ struct netfs_io_request { struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operations */ #define NR_IO_STREAMS 2 //wreq->nr_io_streams struct netfs_group *group; /* Writeback group being written back */ + struct folio_queue *buffer; /* Head of I/O buffer */ + struct folio_queue *buffer_tail; /* Tail of I/O buffer */ struct iov_iter iter; /* Unencrypted-side iterator */ struct iov_iter io_iter; /* I/O (Encrypted-side) iterator */ void *netfs_priv; /* Private data for the netfs */ @@ -253,6 +251,8 @@ struct netfs_io_request { short error; /* 0 or error that occurred */ enum netfs_io_origin origin; /* Origin of the request */ bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */ + u8 buffer_head_slot; /* First slot in ->buffer */ + u8 buffer_tail_slot; /* Next slot in ->buffer_tail */ unsigned long long i_size; /* Size of the file */ unsigned long long start; /* Start position */ atomic64_t issued_to; /* Write issuer folio cursor */ diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 47cd11aaccac..4e13774a06e6 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -153,6 +153,7 @@ EM(netfs_folio_trace_mkwrite, "mkwrite") \ EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ EM(netfs_folio_trace_not_under_wback, "!wback") \ + EM(netfs_folio_trace_put, "put") \ EM(netfs_folio_trace_read_gaps, "read-gaps") \ EM(netfs_folio_trace_redirtied, "redirtied") \ EM(netfs_folio_trace_store, "store") \
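To make the rolling-buffer flow above easier to follow, here is a hedged sketch (editorial, not from the patch) of a collector-side walk: it starts at buffer_head_slot, puts folios whose slots carry the primary mark, and frees each folio_queue segment once it has been consumed. process_folio() is a hypothetical stand-in for the real collection work, and the sketch ignores the locking and the live tail segment that the real collector must leave for the issuer:

/*
 * Illustrative sketch only: drain the rolling buffer from the head slot,
 * dropping folios whose slots carry the primary mark and freeing each
 * exhausted folio_queue segment with netfs_delete_buffer_head().
 */
static void example_collect(struct netfs_io_request *wreq)
{
	struct folio_queue *folioq = wreq->buffer;
	unsigned int slot = wreq->buffer_head_slot;

	while (folioq) {
		if (slot >= folioq_nr_slots(folioq)) {
			/* Segment consumed: unlink and free it, move on. */
			folioq = netfs_delete_buffer_head(wreq);
			slot = 0;
			continue;
		}

		process_folio(folioq_folio(folioq, slot));	/* hypothetical */

		if (folioq_is_marked(folioq, slot))
			folio_put(folioq_folio(folioq, slot));	/* ref taken at append time */
		folioq_clear(folioq, slot);
		slot++;
	}

	wreq->buffer = folioq;
	wreq->buffer_head_slot = slot;
}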
From patchwork Wed Aug 14 20:38:36 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763964
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 16/25] netfs: Provide an iterator-reset function
Date: Wed, 14 Aug 2024 21:38:36 +0100
Message-ID: <20240814203850.2240469-17-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
Provide a function to reset the iterator on a subrequest.
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/internal.h | 4 +--- fs/netfs/misc.c | 18 ++++++++++++++++++ fs/netfs/write_collect.c | 3 +-- fs/netfs/write_issue.c | 6 +++--- 4 files changed, 23 insertions(+), 8 deletions(-) diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index e1149e05a5c8..21a3c7d13585 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -69,6 +69,7 @@ int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio bool needs_put); struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq); void netfs_clear_buffer(struct netfs_io_request *rreq); +void netfs_reset_iter(struct netfs_io_subrequest *subreq); /* * objects.c @@ -161,9 +162,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, void netfs_reissue_write(struct netfs_io_stream *stream, struct netfs_io_subrequest *subreq, struct iov_iter *source); -int netfs_advance_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream, - loff_t start, size_t len, bool to_eof); struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len); int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc, struct folio *folio, size_t copied, bool to_page_end, diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c index e642e5cacb8d..08987765306f 100644 --- a/fs/netfs/misc.c +++ b/fs/netfs/misc.c @@ -84,6 +84,24 @@ void netfs_clear_buffer(struct netfs_io_request *rreq) } } +/* + * Reset the subrequest iterator to refer just to the region remaining to be + * read. The iterator may or may not have been advanced by socket ops or + * extraction ops to an extent that may or may not match the amount actually + * read. + */ +void netfs_reset_iter(struct netfs_io_subrequest *subreq) +{ + struct iov_iter *io_iter = &subreq->io_iter; + size_t remain = subreq->len - subreq->transferred; + + if (io_iter->count > remain) + iov_iter_advance(io_iter, io_iter->count - remain); + else if (io_iter->count < remain) + iov_iter_revert(io_iter, remain - io_iter->count); + iov_iter_truncate(&subreq->io_iter, remain); +} + /** * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback * @mapping: The mapping the folio belongs to. diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 1521a23077c3..801a130a0ce1 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -219,9 +219,8 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, /* Determine the set of buffers we're going to use. Each * subreq gets a subset of a single overall contiguous buffer. */ + netfs_reset_iter(from); source = from->io_iter; - iov_iter_revert(&source, subreq->len - source.count); - iov_iter_advance(&source, from->transferred); source.count = len; /* Work through the sublist. */ diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index a75b62b202c5..9ead075962f0 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -261,9 +261,9 @@ static void netfs_issue_write(struct netfs_io_request *wreq, * we can avoid overrunning the credits obtained (cifs) and try to parallelise * content-crypto preparation with network writes. 
*/ -int netfs_advance_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream, - loff_t start, size_t len, bool to_eof) +static int netfs_advance_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + loff_t start, size_t len, bool to_eof) { struct netfs_io_subrequest *subreq = stream->construct; size_t part;
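The reset rule is simple arithmetic: advance the iterator if it still holds more than len - transferred bytes, revert it if the socket or extraction ops consumed too much, then truncate to exactly the unprocessed region. A small user-space model of that decision (editorial illustration; reset_adjustment() is not a kernel function, and the real code operates on a struct iov_iter rather than a bare count):

#include <assert.h>
#include <stddef.h>

/*
 * Return how far the iterator should be advanced (positive) or reverted
 * (negative) so that exactly len - transferred bytes remain.
 */
static long reset_adjustment(size_t count, size_t len, size_t transferred)
{
	size_t remain = len - transferred;

	if (count > remain)
		return (long)(count - remain);	/* advance past consumed data */
	if (count < remain)
		return -(long)(remain - count);	/* revert over-consumed data */
	return 0;
}

int main(void)
{
	assert(reset_adjustment(4096, 4096, 1024) == 1024);  /* advance */
	assert(reset_adjustment(1024, 4096, 1024) == -2048); /* revert */
	assert(reset_adjustment(3072, 4096, 1024) == 0);     /* already right */
	return 0;
}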
From patchwork Wed Aug 14 20:38:37 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13763965
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/25] netfs: Simplify the writeback code
Date: Wed, 14 Aug 2024 21:38:37 +0100
Message-ID: <20240814203850.2240469-18-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
Use the new folio_queue structures to simplify the writeback code.

The problem with referring to the i_pages xarray directly is that we may have gaps in the sequence of folios we're writing from that we need to skip when we're removing the writeback mark from the folios we're writing back from. At the moment the code tries to deal with this by carefully tracking the gaps in each writeback stream (eg. write to server and write to cache) and divining when there's a gap that spans folios (something that's not helped by folios not being a consistent size).

Instead, the folio_queue buffer contains pointers only to the folios we're dealing with, has them in ascending order and indicates a gap by placing non-consecutive folios next to each other. This makes it possible to track where we need to clean up to by just keeping track of where we've processed to on each stream and taking the minimum.

Note that the I/O iterator is always rounded up to the end of the folio, even if that is beyond the EOF position, so that the cache can do DIO from the page. The excess space is cleared, though mmapped writes clobber it.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/write_collect.c | 146 ++++++-----------------------------
 fs/netfs/write_issue.c | 36 +++++----
 include/linux/netfs.h | 1 -
 include/trace/events/netfs.h | 33 +-------
 4 files changed, 45 insertions(+), 171 deletions(-)

diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 801a130a0ce1..0116b336fa07 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -15,15 +15,11 @@ /* Notes made in the collector */ #define HIT_PENDING 0x01 /* A front op was still pending */ -#define SOME_EMPTY 0x02 /* One of more streams are empty */ -#define ALL_EMPTY 0x04 /* All streams are empty */ -#define MAYBE_DISCONTIG 0x08 /* A front op may be discontiguous (rounded to PAGE_SIZE) */ -#define NEED_REASSESS 0x10 /* Need to loop round and reassess */ -#define REASSESS_DISCONTIG 0x20 /* Reassess discontiguity if contiguity advances */ -#define MADE_PROGRESS 0x40 /* Made progress cleaning up a stream or the folio set */ -#define BUFFERED 0x80 /* The pagecache needs cleaning up */ -#define NEED_RETRY 0x100 /* A front op requests retrying */ -#define SAW_FAILURE 0x200 /* One stream or hit a permanent failure */ +#define NEED_REASSESS 0x02 /* Need to loop round and reassess */ +#define MADE_PROGRESS 0x04 /* Made progress cleaning up a stream or the folio set */ +#define BUFFERED 0x08 /* The pagecache needs cleaning up */ +#define NEED_RETRY 0x10 /* A front op requests retrying */ +#define SAW_FAILURE 0x20 /* One stream or hit a permanent failure */ /* * Successful completion of write of a folio to the server and/or cache. Note @@ -78,10 +74,10 @@ int netfs_folio_written_back(struct folio *folio) * Unlock any folios we've finished with.
*/ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, - unsigned long long collected_to, unsigned int *notes) { struct folio_queue *folioq = wreq->buffer; + unsigned long long collected_to = wreq->collected_to; unsigned int slot = wreq->buffer_head_slot; if (slot >= folioq_nr_slots(folioq)) { @@ -110,12 +106,6 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, trace_netfs_collect_folio(wreq, folio, fend, collected_to); - if (fpos + fsize > wreq->contiguity) { - trace_netfs_collect_contig(wreq, fpos + fsize, - netfs_contig_trace_unlock); - wreq->contiguity = fpos + fsize; - } - /* Unlock any folio we've transferred all of. */ if (collected_to < fend) break; @@ -373,7 +363,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) { struct netfs_io_subrequest *front, *remove; struct netfs_io_stream *stream; - unsigned long long collected_to; + unsigned long long collected_to, issued_to; unsigned int notes; int s; @@ -382,28 +372,21 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) trace_netfs_rreq(wreq, netfs_rreq_trace_collect); reassess_streams: + issued_to = atomic64_read(&wreq->issued_to); smp_rmb(); collected_to = ULLONG_MAX; - if (wreq->origin == NETFS_WRITEBACK) - notes = ALL_EMPTY | BUFFERED | MAYBE_DISCONTIG; - else if (wreq->origin == NETFS_WRITETHROUGH) - notes = ALL_EMPTY | BUFFERED; + if (wreq->origin == NETFS_WRITEBACK || + wreq->origin == NETFS_WRITETHROUGH) + notes = BUFFERED; else - notes = ALL_EMPTY; + notes = 0; /* Remove completed subrequests from the front of the streams and * advance the completion point on each stream. We stop when we hit * something that's in progress. The issuer thread may be adding stuff * to the tail whilst we're doing this. - * - * We must not, however, merge in discontiguities that span whole - * folios that aren't under writeback. This is made more complicated - * by the folios in the gap being of unpredictable sizes - if they even - * exist - but we don't want to look them up. */ for (s = 0; s < NR_IO_STREAMS; s++) { - loff_t rstart, rend; - stream = &wreq->io_streams[s]; /* Read active flag before list pointers */ if (!smp_load_acquire(&stream->active)) @@ -415,26 +398,10 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) //_debug("sreq [%x] %llx %zx/%zx", // front->debug_index, front->start, front->transferred, front->len); - /* Stall if there may be a discontinuity. */ - rstart = round_down(front->start, PAGE_SIZE); - if (rstart > wreq->contiguity) { - if (wreq->contiguity > stream->collected_to) { - trace_netfs_collect_gap(wreq, stream, - wreq->contiguity, 'D'); - stream->collected_to = wreq->contiguity; - } - notes |= REASSESS_DISCONTIG; - break; + if (stream->collected_to < front->start) { + trace_netfs_collect_gap(wreq, stream, issued_to, 'F'); + stream->collected_to = front->start; } - rend = round_up(front->start + front->len, PAGE_SIZE); - if (rend > wreq->contiguity) { - trace_netfs_collect_contig(wreq, rend, - netfs_contig_trace_collect); - wreq->contiguity = rend; - if (notes & REASSESS_DISCONTIG) - notes |= NEED_REASSESS; - } - notes &= ~MAYBE_DISCONTIG; /* Stall if the front is still undergoing I/O. 
*/ if (test_bit(NETFS_SREQ_IN_PROGRESS, &front->flags)) { @@ -476,15 +443,6 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) front = list_first_entry_or_null(&stream->subrequests, struct netfs_io_subrequest, rreq_link); stream->front = front; - if (!front) { - unsigned long long jump_to = atomic64_read(&wreq->issued_to); - - if (stream->collected_to < jump_to) { - trace_netfs_collect_gap(wreq, stream, jump_to, 'A'); - stream->collected_to = jump_to; - } - } - spin_unlock_bh(&wreq->lock); netfs_put_subrequest(remove, false, notes & SAW_FAILURE ? @@ -492,10 +450,13 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) netfs_sreq_trace_put_done); } - if (front) - notes &= ~ALL_EMPTY; - else - notes |= SOME_EMPTY; + /* If we have an empty stream, we need to jump it forward + * otherwise the collection point will never advance. + */ + if (!front && issued_to > stream->collected_to) { + trace_netfs_collect_gap(wreq, stream, issued_to, 'E'); + stream->collected_to = issued_to; + } if (stream->collected_to < collected_to) collected_to = stream->collected_to; @@ -504,36 +465,6 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) if (collected_to != ULLONG_MAX && collected_to > wreq->collected_to) wreq->collected_to = collected_to; - /* If we have an empty stream, we need to jump it forward over any gap - * otherwise the collection point will never advance. - * - * Note that the issuer always adds to the stream with the lowest - * so-far submitted start, so if we see two consecutive subreqs in one - * stream with nothing between then in another stream, then the second - * stream has a gap that can be jumped. - */ - if (notes & SOME_EMPTY) { - unsigned long long jump_to = wreq->start + READ_ONCE(wreq->submitted); - - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->active && - stream->front && - stream->front->start < jump_to) - jump_to = stream->front->start; - } - - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->active && - !stream->front && - stream->collected_to < jump_to) { - trace_netfs_collect_gap(wreq, stream, jump_to, 'B'); - stream->collected_to = jump_to; - } - } - } - for (s = 0; s < NR_IO_STREAMS; s++) { stream = &wreq->io_streams[s]; if (stream->active) @@ -544,43 +475,14 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) /* Unlock any folios that we have now finished with. */ if (notes & BUFFERED) { - unsigned long long clean_to = min(wreq->collected_to, wreq->contiguity); - - if (wreq->cleaned_to < clean_to) - netfs_writeback_unlock_folios(wreq, clean_to, ¬es); + if (wreq->cleaned_to < wreq->collected_to) + netfs_writeback_unlock_folios(wreq, ¬es); } else { wreq->cleaned_to = wreq->collected_to; } // TODO: Discard encryption buffers - /* If all streams are discontiguous with the last folio we cleared, we - * may need to skip a set of folios. 
- */ - if ((notes & (MAYBE_DISCONTIG | ALL_EMPTY)) == MAYBE_DISCONTIG) { - unsigned long long jump_to = ULLONG_MAX; - - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->active && stream->front && - stream->front->start < jump_to) - jump_to = stream->front->start; - } - - trace_netfs_collect_contig(wreq, jump_to, netfs_contig_trace_jump); - wreq->contiguity = jump_to; - wreq->cleaned_to = jump_to; - wreq->collected_to = jump_to; - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->collected_to < jump_to) - stream->collected_to = jump_to; - } - //cond_resched(); - notes |= MADE_PROGRESS; - goto reassess_streams; - } - if (notes & NEED_RETRY) goto need_retry; if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 9ead075962f0..25fb7e166cc0 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -107,7 +107,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, if (is_buffered && netfs_is_cache_enabled(ictx)) fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx)); - wreq->contiguity = wreq->start; wreq->cleaned_to = wreq->start; wreq->io_streams[0].stream_nr = 0; @@ -158,6 +157,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, subreq->source = stream->source; subreq->start = start; subreq->stream_nr = stream->stream_nr; + subreq->io_iter = wreq->io_iter; _enter("R=%x[%x]", wreq->debug_id, subreq->debug_index); @@ -213,22 +213,15 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, * netfs_write_subrequest_terminated() when complete. */ static void netfs_do_issue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq, - struct iov_iter *source) + struct netfs_io_subrequest *subreq) { struct netfs_io_request *wreq = subreq->rreq; - size_t size = subreq->len - subreq->transferred; _enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len); if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) return netfs_write_subrequest_terminated(subreq, subreq->error, false); - // TODO: Use encrypted buffer - subreq->io_iter = *source; - iov_iter_advance(source, size); - iov_iter_truncate(&subreq->io_iter, size); - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); stream->issue_write(subreq); } @@ -237,8 +230,15 @@ void netfs_reissue_write(struct netfs_io_stream *stream, struct netfs_io_subrequest *subreq, struct iov_iter *source) { + size_t size = subreq->len - subreq->transferred; + + // TODO: Use encrypted buffer + subreq->io_iter = *source; + iov_iter_advance(source, size); + iov_iter_truncate(&subreq->io_iter, size); + __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); - netfs_do_issue_write(stream, subreq, source); + netfs_do_issue_write(stream, subreq); } static void netfs_issue_write(struct netfs_io_request *wreq, @@ -249,10 +249,8 @@ static void netfs_issue_write(struct netfs_io_request *wreq, if (!subreq) return; stream->construct = NULL; - - if (subreq->start + subreq->len > wreq->start + wreq->submitted) - WRITE_ONCE(wreq->submitted, subreq->start + subreq->len - wreq->start); - netfs_do_issue_write(stream, subreq, &wreq->io_iter); + subreq->io_iter.count = subreq->len; + netfs_do_issue_write(stream, subreq); } /* @@ -464,10 +462,11 @@ static int netfs_write_folio(struct netfs_io_request *wreq, if (choose_s < 0) break; stream = &wreq->io_streams[choose_s]; + wreq->io_iter.iov_offset = stream->submit_off; + 
atomic64_set(&wreq->issued_to, fpos + stream->submit_off); part = netfs_advance_write(wreq, stream, fpos + stream->submit_off, stream->submit_len, to_eof); - atomic64_set(&wreq->issued_to, fpos + stream->submit_off); stream->submit_off += part; stream->submit_max_len -= part; if (part > stream->submit_len) @@ -478,6 +477,8 @@ static int netfs_write_folio(struct netfs_io_request *wreq, debug = true; } + wreq->io_iter.iov_offset = 0; + iov_iter_advance(&wreq->io_iter, fsize); atomic64_set(&wreq->issued_to, fpos + fsize); if (!debug) @@ -526,10 +527,10 @@ int netfs_writepages(struct address_space *mapping, netfs_stat(&netfs_n_wh_writepages); do { - _debug("wbiter %lx %llx", folio->index, wreq->start + wreq->submitted); + _debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to)); /* It appears we don't have to handle cyclic writeback wrapping. */ - WARN_ON_ONCE(wreq && folio_pos(folio) < wreq->start + wreq->submitted); + WARN_ON_ONCE(wreq && folio_pos(folio) < atomic64_read(&wreq->issued_to)); if (netfs_folio_group(folio) != NETFS_FOLIO_COPY_TO_CACHE && unlikely(!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))) { @@ -673,6 +674,7 @@ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t part = netfs_advance_write(wreq, upload, start, len, false); start += part; len -= part; + iov_iter_advance(&wreq->io_iter, part); if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { trace_netfs_rreq(wreq, netfs_rreq_trace_wait_pause); wait_on_bit(&wreq->flags, NETFS_RREQ_PAUSE, TASK_UNINTERRUPTIBLE); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 6428be9d99ba..348f8f5ab5e6 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -256,7 +256,6 @@ struct netfs_io_request { unsigned long long i_size; /* Size of the file */ unsigned long long start; /* Start position */ atomic64_t issued_to; /* Write issuer folio cursor */ - unsigned long long contiguity; /* Tracking for gaps in the writeback sequence */ unsigned long long collected_to; /* Point we've collected to */ unsigned long long cleaned_to; /* Position we've cleaned folios to */ pgoff_t no_unlock_folio; /* Don't unlock this folio after read */ diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 4e13774a06e6..58bf23002fc1 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -512,33 +512,6 @@ TRACE_EVENT(netfs_collect, __entry->start + __entry->len) ); -TRACE_EVENT(netfs_collect_contig, - TP_PROTO(const struct netfs_io_request *wreq, unsigned long long to, - enum netfs_collect_contig_trace type), - - TP_ARGS(wreq, to, type), - - TP_STRUCT__entry( - __field(unsigned int, wreq) - __field(enum netfs_collect_contig_trace, type) - __field(unsigned long long, contiguity) - __field(unsigned long long, to) - ), - - TP_fast_assign( - __entry->wreq = wreq->debug_id; - __entry->type = type; - __entry->contiguity = wreq->contiguity; - __entry->to = to; - ), - - TP_printk("R=%08x %llx -> %llx %s", - __entry->wreq, - __entry->contiguity, - __entry->to, - __print_symbolic(__entry->type, netfs_collect_contig_traces)) - ); - TRACE_EVENT(netfs_collect_sreq, TP_PROTO(const struct netfs_io_request *wreq, const struct netfs_io_subrequest *subreq), @@ -610,7 +583,6 @@ TRACE_EVENT(netfs_collect_state, __field(unsigned int, notes ) __field(unsigned long long, collected_to ) __field(unsigned long long, cleaned_to ) - __field(unsigned long long, contiguity ) ), TP_fast_assign( @@ -618,12 +590,11 @@ TRACE_EVENT(netfs_collect_state, __entry->notes = notes; 
__entry->collected_to = collected_to; __entry->cleaned_to = wreq->cleaned_to; - __entry->contiguity = wreq->contiguity; ), - TP_printk("R=%08x cto=%llx fto=%llx ctg=%llx n=%x", + TP_printk("R=%08x col=%llx cln=%llx n=%x", __entry->wreq, __entry->collected_to, - __entry->cleaned_to, __entry->contiguity, + __entry->cleaned_to, __entry->notes) ); From patchwork Wed Aug 14 20:38:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13763966 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39A27C3DA4A for ; Wed, 14 Aug 2024 20:41:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A82E06B0092; Wed, 14 Aug 2024 16:41:25 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A32676B00B3; Wed, 14 Aug 2024 16:41:25 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8D2E56B00B4; Wed, 14 Aug 2024 16:41:25 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 6ADF56B0092 for ; Wed, 14 Aug 2024 16:41:25 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 23DB81211A9 for ; Wed, 14 Aug 2024 20:41:25 +0000 (UTC) X-FDA: 82452021330.16.B8AE32E Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf08.hostedemail.com (Postfix) with ESMTP id 6BDB4160028 for ; Wed, 14 Aug 2024 20:41:23 +0000 (UTC) Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=QiUC0y+w; spf=pass (imf08.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723668070; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=bfu/TnsX2YiQImhcvgy6+BH/oHXTJfoVCbcIECxbkG0=; b=a+6eX8TxUJNpeQlHGpAhbtW/dp+f2xEXrbLEHvfQRVvgsOTMXchQcVApQ3k1VHdbMyAEyZ 8mP3oXf18BbzMnZricJyM7lGpunZc1eNpXfW49R1ywMvrgxrZRJfOZ0LtJO0zQYO/RL/Zu UfqZopz0laewYko8dtoPMlCWgWXOEfI= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=QiUC0y+w; spf=pass (imf08.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723668070; a=rsa-sha256; cv=none; b=QDtI8bylqx/Cm88k1swSIDU6XVz55be6rPqkEFHw3KIybvXEQb7TXsqGFwOTKOZQj7AgfH aq97htKyI0IPDij98oURmDoHUvajSdfb2TFB9q00PggVecq+qPWjQeiTLFtPvlmoojzSky vhvdMkVAeH39xgJxymeoj135/MWE3X4= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668082; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=bfu/TnsX2YiQImhcvgy6+BH/oHXTJfoVCbcIECxbkG0=; b=QiUC0y+wIyLB2LHjDsFU0YJwzuV7MZXCG87Epv54FYynRI4I+AvDjl0GGcG3bRShE3bkHT jfWO+xxwVMVSxm+d8SjpxDhPEKn/AYBSEno/iiHBbj2ZSiPMlVf0kmK5U/I1i+kYiBUeji cMNWscD60G6JRnI3Ii9entM63Z6YxAk= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-183-7bTnzRmsN_6lVyVNWWDYbg-1; Wed, 14 Aug 2024 16:41:19 -0400 X-MC-Unique: 7bTnzRmsN_6lVyVNWWDYbg-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 7D4011954B15; Wed, 14 Aug 2024 20:41:16 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id BEECA300019A; Wed, 14 Aug 2024 20:41:10 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 18/25] afs: Make read subreqs async Date: Wed, 14 Aug 2024 21:38:38 +0100 Message-ID: <20240814203850.2240469-19-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 X-Rspam-User: X-Stat-Signature: 65767pm8473b5bb6wxq6ybp1i4ied4ro X-Rspamd-Queue-Id: 6BDB4160028 X-Rspamd-Server: rspam11 X-HE-Tag: 1723668083-95839 X-HE-Meta: U2FsdGVkX1+MKpHF30cPZhrnVAk3ijZP5bYxn0DNyOYjJ2Kny87IcXRU37jhVg2rPN8qG4FPM38k5uJHfonZIVGdyFqdUOXM01ckGaQ8I8NCdJfvTyntk+ciMrDy89pYWMF4zJivAwQEJ3eYVngzKkrsAQ59/aVgLvtEnukTArQMvN3BHyTOgB+NlOnRNfzLSOXqw6SgXCsMQB+6+g/HdWnFv1ycG00E5/bLm6Jxbjija5hx1KIi0bub38c9wsKwzfWZ//fcoXt7IX9UwMzISa1+8BoyDKWg+m8aQLAYyPANhRtJuTJB0HJ07VX7H/w6tNT/t691a4tLG731JLk2YocUjyrWuIemr31PhMfpJ5Va8+sqzG5pjkEuyrqFPVQAbTmfXxjv6CLE2nxKy0TwbnmjGJGfjytMzSab3kO6uRwtcCvDP7C2H0Oju0FVVDWBABvr5KqcPRNU27IqLReADKE88eAOuqaQPcPxNXca3N8BSGrHo/n44c25JdOa9PCH9UHSxC6qFMoKIJsUBdKz2RuSpXTjLUy784w7ORywhT79IDyveooLPoItGNqeEe5ARyUNGLPwQ5Yti86cXTqNfe29FPCR1aVa38aZOoBdOzyx97rArWYnfP/PAjZJM/nnPlWEErO3FJ6KnJ8DvhucI2kLtxT//uclIzufjnuSsNwhrTbrJb3i9g7UbdgkOpIR6xup6cV7IbfVWFt9PXrT4CRKj+2zrTjyUZ5OiPj7iboRjIQCnsBb09VuF+bB+JwqCPGLZSZ+OYL6c/WD8DAUBSlaZ6MlDtydq4E1IKrvIHV6iqk8PVmHy1os5itsWxe23WDKhYP19brp36x6kNKRt3yVNI27+Sf1IrDmhHRFJirDc4dqn9U9hVt/aqIuakEU2PNRufu//RD+pq4s6mx3n7MYCyzKteyZQyiSUHSPYsWzELByklflPEo91LhHONwdB80vbFXbKBWQWe2aCoe oMe/kJu9 
9Y9U6cP59ufj3ZIMlI3zvwCK2HfDUHhQU9LxB+BnPlwNAXSybkcTeT+UfyoZn8vPc+DeHMOBSSfJGw3eAB2YVc4f0quFD+iDmxKGkQHloXreDj88+GXod8rouDt2lsVZ5G58y+D2UCmbg+rtSG+iB2HA5Me8S2OqEEUaqqOBeshP48GACpqWlI3H1r0LmhinF/Kvd6OhKu8rPnkcRKn0q2lzhPxorV+lEAXfDbdvsg3qTR6bKhQgrDGAQoW5EYh4Svbfs6kUmzI4yfLZF/XktZGbQkDh6uL20AO8zbPBwFAVMQY76i8P6R4NdzXYi7Q5H2i3lUE6fPD9wnvNE7ZYStsWybKCpDKwjIOh0Smmu5xBuuX9gZhLcHtNHKfQsRAMXbjzn5xv+6SPpLC1AAd5Vm06g1eoTRj44gPGNYqcKSZ3+VCaE/MWg2oMIo1syhTvBEfQfSgHHh95uKvr0t67YORAaiSncY1LQd536 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Perform AFS read subrequests in a work item rather than in the calling thread. For normal buffered reads, this will allow the calling thread to copy data from the pagecache to the application at the same time as the demarshalling thread is shovelling data from skbuffs into the pagecache. This will also allow the RA mark to trigger a new read before we've finished shovelling the data from the current one. Note: This would be a bit safer if the FS.FetchData RPC ops returned the metadata (including the data version number) before returning the data. This would allow me to flush the pagecache before installing the new data. In future, it may be possible to asynchronously flush the pagecache either side of the region being read. Signed-off-by: David Howells cc: Marc Dionne cc: Jeff Layton cc: linux-afs@lists.infradead.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/afs/file.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/fs/afs/file.c b/fs/afs/file.c index ec1be0091fdb..5a9d16848ad5 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -305,8 +305,9 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req) return afs_do_sync_operation(op); } -static void afs_issue_read(struct netfs_io_subrequest *subreq) +static void afs_read_worker(struct work_struct *work) { + struct netfs_io_subrequest *subreq = container_of(work, struct netfs_io_subrequest, work); struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode); struct afs_read *fsreq; @@ -325,6 +326,12 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq) afs_put_read(fsreq); } +static void afs_issue_read(struct netfs_io_subrequest *subreq) +{ + INIT_WORK(&subreq->work, afs_read_worker); + queue_work(system_long_wq, &subreq->work); +} + static int afs_symlink_read_folio(struct file *file, struct folio *folio) { struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host); From patchwork Wed Aug 14 20:38:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13764101 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7935C531DC for ; Wed, 14 Aug 2024 20:41:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 68AEF6B00B5; Wed, 14 Aug 2024 16:41:36 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 63A0B6B00B6; Wed, 14 Aug 2024 16:41:36 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 37CBA6B00B7; Wed, 14 Aug 2024 16:41:36 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by 
kanga.kvack.org (Postfix) with ESMTP id 09DB26B00B5 for ; Wed, 14 Aug 2024 16:41:36 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id A2E651C1F81 for ; Wed, 14 Aug 2024 20:41:35 +0000 (UTC) X-FDA: 82452021750.15.96DEC16 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf29.hostedemail.com (Postfix) with ESMTP id 8CA7612000E for ; Wed, 14 Aug 2024 20:41:33 +0000 (UTC) Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=Abnai2TJ; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf29.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1723668081; a=rsa-sha256; cv=none; b=Nus64GTLVw26osCZBSihUqtKHeOCg7siAPyIR41GksX1vjLHgspo8dmIf1kJlEoAOEsC1k YqU/lZ7Z1lRmgBYvIJTmtSQEXjLgCluw9V3QS5WxXeTEiaq1nW66glhnSL9msOakOmuYWx kZoli8zFuYrI2+0W7gKM0qXyS6ge5AY= ARC-Authentication-Results: i=1; imf29.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=Abnai2TJ; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf29.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1723668081; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=r2CkGYFFBDeGUQX5+vS3EcFzWb373y7gmne0MD+WEtE=; b=vmiNv3l2QvGgy8WS1Ns9yyAL+/+Ti8I+ylHUA9aSfiCekryLJLzDUs0MtPG9vgGNSASzOU VrWO7wym1JVcTxdNxYV4nkpSLoqt698CZ0VGq5CRWFOuwMcfUqHpz+SLRGGU7HDOvJEN90 3vz2MJ7f+QWQLkWdcOTgPTHVvqBxL2g= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1723668092; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=r2CkGYFFBDeGUQX5+vS3EcFzWb373y7gmne0MD+WEtE=; b=Abnai2TJ/62pzyQHAS7/5KgDfszdNdskJQaF8RH9qnpxzMTuYkXPQF0nSrHVFHiAUJvomz ae4cUSa/i3O1mKnCHribk5BjF7yqwer5taGVM9oPdahb4N5rHXqx95ZFYYm0+TtSjvRLUg Tlrm2rFCJhg2gCKjEldMHZ53eqUVjf8= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-642--o7TRWbiM4emMZ6E8iU5dQ-1; Wed, 14 Aug 2024 16:41:27 -0400 X-MC-Unique: -o7TRWbiM4emMZ6E8iU5dQ-1 Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 098BC1925387; Wed, 14 Aug 2024 20:41:24 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.30]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id C1C5719560A3; Wed, 14 Aug 2024 20:41:17 +0000 (UTC) From: David Howells To: 
Christian Brauner , Steve French , Matthew Wilcox
Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet ,
    Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey ,
    Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 19/25] netfs: Speed up buffered reading
Date: Wed, 14 Aug 2024 21:38:39 +0100
Message-ID: <20240814203850.2240469-20-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40
X-Rspam-User: X-Rspamd-Queue-Id: 8CA7612000E X-Rspamd-Server: rspam01
X-Stat-Signature: 4ts8p6ec9o87urzcu8ntocsa63x1ndsn X-HE-Tag: 1723668093-271979
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4
Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org
List-ID: List-Subscribe: List-Unsubscribe: 

Improve the efficiency of buffered reads in a number of ways:

 (1) Overhaul the algorithm in general so that it's a lot more compact and
     split the read submission code between buffered and unbuffered
     versions.  The unbuffered version can be vastly simplified.

 (2) Read-result collection is handed off to a work queue rather than being
     done in the I/O thread.  Multiple subrequests can be processed
     simultaneously.

 (3) When a subrequest is collected, any folios it fully spans are
     collected and "spare" data on either side is donated to either the
     previous or the next subrequest in the sequence.

Notes:

 (*) Readahead expansion massively slows down fio, presumably because it
     causes a load of extra allocations, both folio and xarray, up front
     before RPC requests can be transmitted.

 (*) RDMA with cifs does appear to work, both with SIW and RXE.

 (*) PG_private_2-based reading and copy-to-cache is split out into its own
     file and altered to use folio_queue.
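Editorial aside (not part of the patch): the rolling buffer referred to here is a chain of folio_queue segments. As a minimal, hypothetical sketch of the helpers this series leans on (folioq_count(), folioq_folio(), folioq_folio_order() and the ->next link, all of which appear in the diff below), the function here totals the bytes covered by such a buffer; the function name and the include lines are assumptions made purely for illustration.

#include <linux/folio_queue.h>	/* assumed location of the folio_queue helpers */
#include <linux/pagemap.h>

/* Walk every segment in a rolling buffer and add up the bytes it spans. */
static size_t example_folioq_bytes(struct folio_queue *folioq)
{
	size_t bytes = 0;

	/* Segments are chained through ->next as the buffer grows. */
	for (; folioq; folioq = folioq->next) {
		unsigned int slot;

		/* folioq_count() is the number of occupied slots in this segment. */
		for (slot = 0; slot < folioq_count(folioq); slot++) {
			struct folio *folio = folioq_folio(folioq, slot);

			if (!folio)
				continue;	/* slot may have been cleared */

			/* The folio order is recorded per slot in the queue. */
			bytes += PAGE_SIZE << folioq_folio_order(folioq, slot);
		}
	}
	return bytes;
}

Recording the order per slot (as netfs_load_buffer_from_ra() does when it fills folioq->orders[]) lets a walker like this size the buffer without dereferencing each folio.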
Note that the copy to the cache now creates a new write transaction against the cache and adds the folios to be copied into it. This allows it to use part of the writeback I/O code. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org Signed-off-by: David Howells Reported-by: Manu Bretelle Reported-by: Eduard Zingerman Reported-by: Leon Romanovsky Signed-off-by: David Howells --- fs/9p/vfs_addr.c | 5 +- fs/afs/file.c | 21 +- fs/afs/fsclient.c | 9 +- fs/afs/yfsclient.c | 9 +- fs/ceph/addr.c | 76 ++-- fs/netfs/Makefile | 4 +- fs/netfs/buffered_read.c | 766 +++++++++++++++++++++-------------- fs/netfs/direct_read.c | 147 ++++++- fs/netfs/internal.h | 35 +- fs/netfs/iterator.c | 50 +++ fs/netfs/main.c | 4 +- fs/netfs/objects.c | 8 +- fs/netfs/read_collect.c | 544 +++++++++++++++++++++++++ fs/netfs/read_pgpriv2.c | 264 ++++++++++++ fs/netfs/read_retry.c | 256 ++++++++++++ fs/netfs/stats.c | 6 +- fs/netfs/write_collect.c | 9 +- fs/netfs/write_issue.c | 17 +- fs/nfs/fscache.c | 19 +- fs/nfs/fscache.h | 7 +- fs/smb/client/cifssmb.c | 6 +- fs/smb/client/file.c | 57 +-- fs/smb/client/smb2pdu.c | 10 +- include/linux/folio_queue.h | 18 + include/linux/netfs.h | 25 +- include/trace/events/netfs.h | 103 ++++- 26 files changed, 2045 insertions(+), 430 deletions(-) create mode 100644 fs/netfs/read_collect.c create mode 100644 fs/netfs/read_pgpriv2.c create mode 100644 fs/netfs/read_retry.c diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index 24fdc74caeba..469ea158a73d 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -78,7 +78,10 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq) if (subreq->rreq->origin != NETFS_DIO_READ) __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - netfs_subreq_terminated(subreq, err ?: total, false); + if (!err) + subreq->transferred += total; + + netfs_read_subreq_terminated(subreq, err, false); } /** diff --git a/fs/afs/file.c b/fs/afs/file.c index 5a9d16848ad5..492d857a3fa0 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -16,6 +16,7 @@ #include #include #include +#include #include "internal.h" static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); @@ -242,9 +243,10 @@ static void afs_fetch_data_notify(struct afs_operation *op) req->error = error; if (subreq) { - if (subreq->rreq->origin != NETFS_DIO_READ) - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - netfs_subreq_terminated(subreq, error ?: req->actual_len, false); + subreq->rreq->i_size = req->file_size; + if (req->pos + req->actual_len >= req->file_size) + __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); + netfs_read_subreq_terminated(subreq, error, false); req->subreq = NULL; } else if (req->done) { req->done(req); @@ -262,6 +264,12 @@ static void afs_fetch_data_success(struct afs_operation *op) afs_fetch_data_notify(op); } +static void afs_fetch_data_aborted(struct afs_operation *op) +{ + afs_check_for_remote_deletion(op); + afs_fetch_data_notify(op); +} + static void afs_fetch_data_put(struct afs_operation *op) { op->fetch.req->error = afs_op_error(op); @@ -272,7 +280,7 @@ static const struct afs_operation_ops afs_fetch_data_operation = { .issue_afs_rpc = afs_fs_fetch_data, .issue_yfs_rpc = yfs_fs_fetch_data, .success = afs_fetch_data_success, - .aborted = afs_check_for_remote_deletion, + .aborted = afs_fetch_data_aborted, .failed = afs_fetch_data_notify, .put = afs_fetch_data_put, }; @@ -294,7 +302,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req) op = afs_alloc_operation(req->key, vnode->volume); 
if (IS_ERR(op)) { if (req->subreq) - netfs_subreq_terminated(req->subreq, PTR_ERR(op), false); + netfs_read_subreq_terminated(req->subreq, PTR_ERR(op), false); return PTR_ERR(op); } @@ -313,7 +321,7 @@ static void afs_read_worker(struct work_struct *work) fsreq = afs_alloc_read(GFP_NOFS); if (!fsreq) - return netfs_subreq_terminated(subreq, -ENOMEM, false); + return netfs_read_subreq_terminated(subreq, -ENOMEM, false); fsreq->subreq = subreq; fsreq->pos = subreq->start + subreq->transferred; @@ -322,6 +330,7 @@ static void afs_read_worker(struct work_struct *work) fsreq->vnode = vnode; fsreq->iter = &subreq->io_iter; + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); afs_fetch_data(fsreq->vnode, fsreq); afs_put_read(fsreq); } diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c index 79cd30775b7a..098fa034a1cc 100644 --- a/fs/afs/fsclient.c +++ b/fs/afs/fsclient.c @@ -304,6 +304,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call) struct afs_vnode_param *vp = &op->file[0]; struct afs_read *req = op->fetch.req; const __be32 *bp; + size_t count_before; int ret; _enter("{%u,%zu,%zu/%llu}", @@ -345,10 +346,14 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call) /* extract the returned data */ case 2: - _debug("extract data %zu/%llu", - iov_iter_count(call->iter), req->actual_len); + count_before = call->iov_len; + _debug("extract data %zu/%llu", count_before, req->actual_len); ret = afs_extract_data(call, true); + if (req->subreq) { + req->subreq->transferred += count_before - call->iov_len; + netfs_read_subreq_progress(req->subreq, false); + } if (ret < 0) return ret; diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c index f521e66d3bf6..024227aba4cd 100644 --- a/fs/afs/yfsclient.c +++ b/fs/afs/yfsclient.c @@ -355,6 +355,7 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call) struct afs_vnode_param *vp = &op->file[0]; struct afs_read *req = op->fetch.req; const __be32 *bp; + size_t count_before; int ret; _enter("{%u,%zu, %zu/%llu}", @@ -391,10 +392,14 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call) /* extract the returned data */ case 2: - _debug("extract data %zu/%llu", - iov_iter_count(call->iter), req->actual_len); + count_before = call->iov_len; + _debug("extract data %zu/%llu", count_before, req->actual_len); ret = afs_extract_data(call, true); + if (req->subreq) { + req->subreq->transferred += count_before - call->iov_len; + netfs_read_subreq_progress(req->subreq, false); + } if (ret < 0) return ret; diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index c4744a02db75..c500c1fd6b9f 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "super.h" #include "mds_client.h" @@ -205,21 +206,6 @@ static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq) } } -static bool ceph_netfs_clamp_length(struct netfs_io_subrequest *subreq) -{ - struct inode *inode = subreq->rreq->inode; - struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); - struct ceph_inode_info *ci = ceph_inode(inode); - u64 objno, objoff; - u32 xlen; - - /* Truncate the extent at the end of the current block */ - ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len, - &objno, &objoff, &xlen); - subreq->len = min(xlen, fsc->mount_options->rsize); - return true; -} - static void finish_netfs_read(struct ceph_osd_request *req) { struct inode *inode = req->r_inode; @@ -264,7 +250,12 @@ static void finish_netfs_read(struct ceph_osd_request *req) calc_pages_for(osd_data->alignment, 
osd_data->length), false); } - netfs_subreq_terminated(subreq, err, false); + if (err > 0) { + subreq->transferred = err; + err = 0; + } + trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress); + netfs_read_subreq_terminated(subreq, err, false); iput(req->r_inode); ceph_dec_osd_stopping_blocker(fsc->mdsc); } @@ -278,7 +269,6 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq) struct ceph_mds_request *req; struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb); struct ceph_inode_info *ci = ceph_inode(inode); - struct iov_iter iter; ssize_t err = 0; size_t len; int mode; @@ -301,6 +291,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq) req->r_args.getattr.mask = cpu_to_le32(CEPH_STAT_CAP_INLINE_DATA); req->r_num_caps = 2; + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); err = ceph_mdsc_do_request(mdsc, NULL, req); if (err < 0) goto out; @@ -314,17 +305,36 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq) } len = min_t(size_t, iinfo->inline_len - subreq->start, subreq->len); - iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, subreq->start, len); - err = copy_to_iter(iinfo->inline_data + subreq->start, len, &iter); - if (err == 0) + err = copy_to_iter(iinfo->inline_data + subreq->start, len, &subreq->io_iter); + if (err == 0) { err = -EFAULT; + } else { + subreq->transferred += err; + err = 0; + } ceph_mdsc_put_request(req); out: - netfs_subreq_terminated(subreq, err, false); + netfs_read_subreq_terminated(subreq, err, false); return true; } +static int ceph_netfs_prepare_read(struct netfs_io_subrequest *subreq) +{ + struct netfs_io_request *rreq = subreq->rreq; + struct inode *inode = rreq->inode; + struct ceph_inode_info *ci = ceph_inode(inode); + struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); + u64 objno, objoff; + u32 xlen; + + /* Truncate the extent at the end of the current block */ + ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len, + &objno, &objoff, &xlen); + rreq->io_streams[0].sreq_max_len = umin(xlen, fsc->mount_options->rsize); + return 0; +} + static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; @@ -334,9 +344,8 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) struct ceph_client *cl = fsc->client; struct ceph_osd_request *req = NULL; struct ceph_vino vino = ceph_vino(inode); - struct iov_iter iter; - int err = 0; - u64 len = subreq->len; + int err; + u64 len; bool sparse = IS_ENCRYPTED(inode) || ceph_test_mount_opt(fsc, SPARSEREAD); u64 off = subreq->start; int extent_cnt; @@ -349,6 +358,12 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) return; + // TODO: This rounding here is slightly dodgy. It *should* work, for + // now, as the cache only deals in blocks that are a multiple of + // PAGE_SIZE and fscrypt blocks are at most PAGE_SIZE. What needs to + // happen is for the fscrypt driving to be moved into netfslib and the + // data in the cache also to be stored encrypted. 
+ len = subreq->len; ceph_fscrypt_adjust_off_and_len(inode, &off, &len); req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino, @@ -371,8 +386,6 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) doutc(cl, "%llx.%llx pos=%llu orig_len=%zu len=%llu\n", ceph_vinop(inode), subreq->start, subreq->len, len); - iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, subreq->start, len); - /* * FIXME: For now, use CEPH_OSD_DATA_TYPE_PAGES instead of _ITER for * encrypted inodes. We'd need infrastructure that handles an iov_iter @@ -384,7 +397,7 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) struct page **pages; size_t page_off; - err = iov_iter_get_pages_alloc2(&iter, &pages, len, &page_off); + err = iov_iter_get_pages_alloc2(&subreq->io_iter, &pages, len, &page_off); if (err < 0) { doutc(cl, "%llx.%llx failed to allocate pages, %d\n", ceph_vinop(inode), err); @@ -399,7 +412,7 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) osd_req_op_extent_osd_data_pages(req, 0, pages, len, 0, false, false); } else { - osd_req_op_extent_osd_iter(req, 0, &iter); + osd_req_op_extent_osd_iter(req, 0, &subreq->io_iter); } if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) { err = -EIO; @@ -410,17 +423,19 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) req->r_inode = inode; ihold(inode); + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); ceph_osdc_start_request(req->r_osdc, req); out: ceph_osdc_put_request(req); if (err) - netfs_subreq_terminated(subreq, err, false); + netfs_read_subreq_terminated(subreq, err, false); doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err); } static int ceph_init_request(struct netfs_io_request *rreq, struct file *file) { struct inode *inode = rreq->inode; + struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); struct ceph_client *cl = ceph_inode_to_client(inode); int got = 0, want = CEPH_CAP_FILE_CACHE; struct ceph_netfs_request_data *priv; @@ -472,6 +487,7 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file) priv->caps = got; rreq->netfs_priv = priv; + rreq->io_streams[0].sreq_max_len = fsc->mount_options->rsize; out: if (ret < 0) @@ -496,9 +512,9 @@ static void ceph_netfs_free_request(struct netfs_io_request *rreq) const struct netfs_request_ops ceph_netfs_ops = { .init_request = ceph_init_request, .free_request = ceph_netfs_free_request, + .prepare_read = ceph_netfs_prepare_read, .issue_read = ceph_netfs_issue_read, .expand_readahead = ceph_netfs_expand_readahead, - .clamp_length = ceph_netfs_clamp_length, .check_write_begin = ceph_netfs_check_write_begin, }; diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index 8e6781e0b10b..d08b0bfb6756 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -5,12 +5,14 @@ netfs-y := \ buffered_write.o \ direct_read.o \ direct_write.o \ - io.o \ iterator.o \ locking.o \ main.o \ misc.o \ objects.o \ + read_collect.o \ + read_pgpriv2.o \ + read_retry.o \ write_collect.o \ write_issue.o diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 27c750d39476..c40e226053cc 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -9,266 +9,388 @@ #include #include "internal.h" -/* - * [DEPRECATED] Unlock the folios in a read operation for when the filesystem - * is using PG_private_2 and direct writing to the cache from here rather than - * marking the page for writeback. - * - * Note that we don't touch folio->private in this code. 
- */ -static void netfs_rreq_unlock_folios_pgpriv2(struct netfs_io_request *rreq, - size_t *account) +static void netfs_cache_expand_readahead(struct netfs_io_request *rreq, + unsigned long long *_start, + unsigned long long *_len, + unsigned long long i_size) { - struct netfs_io_subrequest *subreq; - struct folio *folio; - pgoff_t start_page = rreq->start / PAGE_SIZE; - pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; - bool subreq_failed = false; + struct netfs_cache_resources *cres = &rreq->cache_resources; - XA_STATE(xas, &rreq->mapping->i_pages, start_page); + if (cres->ops && cres->ops->expand_readahead) + cres->ops->expand_readahead(cres, _start, _len, i_size); +} - /* Walk through the pagecache and the I/O request lists simultaneously. - * We may have a mixture of cached and uncached sections and we only - * really want to write out the uncached sections. This is slightly - * complicated by the possibility that we might have huge pages with a - * mixture inside. +static void netfs_rreq_expand(struct netfs_io_request *rreq, + struct readahead_control *ractl) +{ + /* Give the cache a chance to change the request parameters. The + * resultant request must contain the original region. */ - subreq = list_first_entry(&rreq->subrequests, - struct netfs_io_subrequest, rreq_link); - subreq_failed = (subreq->error < 0); + netfs_cache_expand_readahead(rreq, &rreq->start, &rreq->len, rreq->i_size); - trace_netfs_rreq(rreq, netfs_rreq_trace_unlock_pgpriv2); + /* Give the netfs a chance to change the request parameters. The + * resultant request must contain the original region. + */ + if (rreq->netfs_ops->expand_readahead) + rreq->netfs_ops->expand_readahead(rreq); - rcu_read_lock(); - xas_for_each(&xas, folio, last_page) { - loff_t pg_end; - bool pg_failed = false; - bool folio_started = false; + /* Expand the request if the cache wants it to start earlier. Note + * that the expansion may get further extended if the VM wishes to + * insert THPs and the preferred start and/or end wind up in the middle + * of THPs. + * + * If this is the case, however, the THP size should be an integer + * multiple of the cache granule size, so we get a whole number of + * granules to deal with. + */ + if (rreq->start != readahead_pos(ractl) || + rreq->len != readahead_length(ractl)) { + readahead_expand(ractl, rreq->start, rreq->len); + rreq->start = readahead_pos(ractl); + rreq->len = readahead_length(ractl); - if (xas_retry(&xas, folio)) - continue; + trace_netfs_read(rreq, readahead_pos(ractl), readahead_length(ractl), + netfs_read_trace_expanded); + } +} - pg_end = folio_pos(folio) + folio_size(folio) - 1; +/* + * Begin an operation, and fetch the stored zero point value from the cookie if + * available. + */ +static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_inode *ctx) +{ + return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx)); +} - for (;;) { - loff_t sreq_end; +/* + * Decant the list of folios to read into a rolling buffer. 
+ */ +static size_t netfs_load_buffer_from_ra(struct netfs_io_request *rreq, + struct folio_queue *folioq) +{ + unsigned int order, nr; + size_t size = 0; + + nr = __readahead_batch(rreq->ractl, (struct page **)folioq->vec.folios, + ARRAY_SIZE(folioq->vec.folios)); + folioq->vec.nr = nr; + for (int i = 0; i < nr; i++) { + struct folio *folio = folioq_folio(folioq, i); + + trace_netfs_folio(folio, netfs_folio_trace_read); + order = folio_order(folio); + folioq->orders[i] = order; + size += PAGE_SIZE << order; + } - if (!subreq) { - pg_failed = true; - break; - } + for (int i = nr; i < folioq_nr_slots(folioq); i++) + folioq_clear(folioq, i); - if (!folio_started && - test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags) && - fscache_operation_valid(&rreq->cache_resources)) { - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); - folio_start_private_2(folio); - folio_started = true; - } + return size; +} - pg_failed |= subreq_failed; - sreq_end = subreq->start + subreq->len - 1; - if (pg_end < sreq_end) - break; +/* + * netfs_prepare_read_iterator - Prepare the subreq iterator for I/O + * @subreq: The subrequest to be set up + * + * Prepare the I/O iterator representing the read buffer on a subrequest for + * the filesystem to use for I/O (it can be passed directly to a socket). This + * is intended to be called from the ->issue_read() method once the filesystem + * has trimmed the request to the size it wants. + * + * Returns the limited size if successful and -ENOMEM if insufficient memory + * available. + * + * [!] NOTE: This must be run in the same thread as ->issue_read() was called + * in as we access the readahead_control struct. + */ +static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq) +{ + struct netfs_io_request *rreq = subreq->rreq; + size_t rsize = subreq->len; + + if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER) + rsize = umin(rsize, rreq->io_streams[0].sreq_max_len); + + if (rreq->ractl) { + /* If we don't have sufficient folios in the rolling buffer, + * extract a folioq's worth from the readahead region at a time + * into the buffer. Note that this acquires a ref on each page + * that we will need to release later - but we don't want to do + * that until after we've started the I/O. 
+ */ + while (rreq->submitted < subreq->start + rsize) { + struct folio_queue *tail = rreq->buffer_tail, *new; + size_t added; + + new = kmalloc(sizeof(*new), GFP_NOFS); + if (!new) + return -ENOMEM; + netfs_stat(&netfs_n_folioq); + folioq_init(new); + new->prev = tail; + tail->next = new; + rreq->buffer_tail = new; + added = netfs_load_buffer_from_ra(rreq, new); + rreq->iter.count += added; + rreq->submitted += added; + } + } - *account += subreq->transferred; - if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { - subreq = list_next_entry(subreq, rreq_link); - subreq_failed = (subreq->error < 0); - } else { - subreq = NULL; - subreq_failed = false; - } + subreq->len = rsize; + if (unlikely(rreq->io_streams[0].sreq_max_segs)) { + size_t limit = netfs_limit_iter(&rreq->iter, 0, rsize, + rreq->io_streams[0].sreq_max_segs); - if (pg_end == sreq_end) - break; + if (limit < rsize) { + subreq->len = limit; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); } + } - if (!pg_failed) { - flush_dcache_folio(folio); - folio_mark_uptodate(folio); - } + subreq->io_iter = rreq->iter; - if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { - if (folio->index == rreq->no_unlock_folio && - test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) - _debug("no unlock"); - else - folio_unlock(folio); + if (iov_iter_is_folioq(&subreq->io_iter)) { + if (subreq->io_iter.folioq_slot >= folioq_nr_slots(subreq->io_iter.folioq)) { + subreq->io_iter.folioq = subreq->io_iter.folioq->next; + subreq->io_iter.folioq_slot = 0; } + subreq->curr_folioq = (struct folio_queue *)subreq->io_iter.folioq; + subreq->curr_folioq_slot = subreq->io_iter.folioq_slot; + subreq->curr_folio_order = subreq->curr_folioq->orders[subreq->curr_folioq_slot]; } - rcu_read_unlock(); + + iov_iter_truncate(&subreq->io_iter, subreq->len); + iov_iter_advance(&rreq->iter, subreq->len); + return subreq->len; } -/* - * Unlock the folios in a read operation. We need to set PG_writeback on any - * folios we're going to write back before we unlock them. - * - * Note that if the deprecated NETFS_RREQ_USE_PGPRIV2 is set then we use - * PG_private_2 and do a direct write to the cache from here instead. - */ -void netfs_rreq_unlock_folios(struct netfs_io_request *rreq) +static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq, + loff_t i_size) { - struct netfs_io_subrequest *subreq; - struct netfs_folio *finfo; - struct folio *folio; - pgoff_t start_page = rreq->start / PAGE_SIZE; - pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; - size_t account = 0; - bool subreq_failed = false; + struct netfs_cache_resources *cres = &rreq->cache_resources; - XA_STATE(xas, &rreq->mapping->i_pages, start_page); + if (!cres->ops) + return NETFS_DOWNLOAD_FROM_SERVER; + return cres->ops->prepare_read(subreq, i_size); +} - if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) { - __clear_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags); - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); - } - } +static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, + bool was_async) +{ + struct netfs_io_subrequest *subreq = priv; - /* Handle deprecated PG_private_2 case. 
*/ - if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { - netfs_rreq_unlock_folios_pgpriv2(rreq, &account); - goto out; + if (transferred_or_error < 0) { + netfs_read_subreq_terminated(subreq, transferred_or_error, was_async); + return; } - /* Walk through the pagecache and the I/O request lists simultaneously. - * We may have a mixture of cached and uncached sections and we only - * really want to write out the uncached sections. This is slightly - * complicated by the possibility that we might have huge pages with a - * mixture inside. - */ - subreq = list_first_entry(&rreq->subrequests, - struct netfs_io_subrequest, rreq_link); - subreq_failed = (subreq->error < 0); - - trace_netfs_rreq(rreq, netfs_rreq_trace_unlock); + if (transferred_or_error > 0) + subreq->transferred += transferred_or_error; + netfs_read_subreq_terminated(subreq, 0, was_async); +} - rcu_read_lock(); - xas_for_each(&xas, folio, last_page) { - loff_t pg_end; - bool pg_failed = false; - bool wback_to_cache = false; +/* + * Issue a read against the cache. + * - Eats the caller's ref on subreq. + */ +static void netfs_read_cache_to_pagecache(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq) +{ + struct netfs_cache_resources *cres = &rreq->cache_resources; - if (xas_retry(&xas, folio)) - continue; + netfs_stat(&netfs_n_rh_read); + cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_IGNORE, + netfs_cache_read_terminated, subreq); +} - pg_end = folio_pos(folio) + folio_size(folio) - 1; +/* + * Perform a read to the pagecache from a series of sources of different types, + * slicing up the region to be read according to available cache blocks and + * network rsize. + */ +static void netfs_read_to_pagecache(struct netfs_io_request *rreq) +{ + struct netfs_inode *ictx = netfs_inode(rreq->inode); + unsigned long long start = rreq->start; + ssize_t size = rreq->len; + int ret = 0; + + atomic_inc(&rreq->nr_outstanding); + + do { + struct netfs_io_subrequest *subreq; + enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER; + ssize_t slice; + + subreq = netfs_alloc_subrequest(rreq); + if (!subreq) { + ret = -ENOMEM; + break; + } - for (;;) { - loff_t sreq_end; + subreq->start = start; + subreq->len = size; + + atomic_inc(&rreq->nr_outstanding); + spin_lock_bh(&rreq->lock); + list_add_tail(&subreq->rreq_link, &rreq->subrequests); + subreq->prev_donated = rreq->prev_donated; + rreq->prev_donated = 0; + trace_netfs_sreq(subreq, netfs_sreq_trace_added); + spin_unlock_bh(&rreq->lock); + + source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size); + subreq->source = source; + if (source == NETFS_DOWNLOAD_FROM_SERVER) { + unsigned long long zp = umin(ictx->zero_point, rreq->i_size); + size_t len = subreq->len; + + if (subreq->start >= zp) { + subreq->source = source = NETFS_FILL_WITH_ZEROES; + goto fill_with_zeroes; + } - if (!subreq) { - pg_failed = true; + if (len > zp - subreq->start) + len = zp - subreq->start; + if (len == 0) { + pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx", + rreq->debug_id, subreq->debug_index, + subreq->len, size, + subreq->start, ictx->zero_point, rreq->i_size); break; } + subreq->len = len; + + netfs_stat(&netfs_n_rh_download); + if (rreq->netfs_ops->prepare_read) { + ret = rreq->netfs_ops->prepare_read(subreq); + if (ret < 0) { + atomic_dec(&rreq->nr_outstanding); + netfs_put_subrequest(subreq, false, + netfs_sreq_trace_put_cancel); + break; + } + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); + } - wback_to_cache |= 
test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); - pg_failed |= subreq_failed; - sreq_end = subreq->start + subreq->len - 1; - if (pg_end < sreq_end) + slice = netfs_prepare_read_iterator(subreq); + if (slice < 0) { + atomic_dec(&rreq->nr_outstanding); + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); + ret = slice; break; - - account += subreq->transferred; - if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { - subreq = list_next_entry(subreq, rreq_link); - subreq_failed = (subreq->error < 0); - } else { - subreq = NULL; - subreq_failed = false; } - if (pg_end == sreq_end) - break; + rreq->netfs_ops->issue_read(subreq); + goto done; } - if (!pg_failed) { - flush_dcache_folio(folio); - finfo = netfs_folio_info(folio); - if (finfo) { - trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); - if (finfo->netfs_group) - folio_change_private(folio, finfo->netfs_group); - else - folio_detach_private(folio); - kfree(finfo); - } - folio_mark_uptodate(folio); - if (wback_to_cache && !WARN_ON_ONCE(folio_get_private(folio) != NULL)) { - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); - folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); - filemap_dirty_folio(folio->mapping, folio); - } + fill_with_zeroes: + if (source == NETFS_FILL_WITH_ZEROES) { + subreq->source = NETFS_FILL_WITH_ZEROES; + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + netfs_stat(&netfs_n_rh_zero); + slice = netfs_prepare_read_iterator(subreq); + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + netfs_read_subreq_terminated(subreq, 0, false); + goto done; } - if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { - if (folio->index == rreq->no_unlock_folio && - test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) - _debug("no unlock"); - else - folio_unlock(folio); + if (source == NETFS_READ_FROM_CACHE) { + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + slice = netfs_prepare_read_iterator(subreq); + netfs_read_cache_to_pagecache(rreq, subreq); + goto done; } - } - rcu_read_unlock(); -out: - task_io_account_read(account); - if (rreq->netfs_ops->done) - rreq->netfs_ops->done(rreq); -} + pr_err("Unexpected read source %u\n", source); + WARN_ON_ONCE(1); + break; -static void netfs_cache_expand_readahead(struct netfs_io_request *rreq, - unsigned long long *_start, - unsigned long long *_len, - unsigned long long i_size) -{ - struct netfs_cache_resources *cres = &rreq->cache_resources; + done: + size -= slice; + start += slice; + cond_resched(); + } while (size > 0); - if (cres->ops && cres->ops->expand_readahead) - cres->ops->expand_readahead(cres, _start, _len, i_size); + if (atomic_dec_and_test(&rreq->nr_outstanding)) + netfs_rreq_terminated(rreq, false); + + /* Defer error return as we may need to wait for outstanding I/O. */ + cmpxchg(&rreq->error, 0, ret); } -static void netfs_rreq_expand(struct netfs_io_request *rreq, - struct readahead_control *ractl) +/* + * Wait for the read operation to complete, successfully or otherwise. + */ +static int netfs_wait_for_read(struct netfs_io_request *rreq) { - /* Give the cache a chance to change the request parameters. The - * resultant request must contain the original region. - */ - netfs_cache_expand_readahead(rreq, &rreq->start, &rreq->len, rreq->i_size); + int ret; - /* Give the netfs a chance to change the request parameters. The - * resultant request must contain the original region. 
- */ - if (rreq->netfs_ops->expand_readahead) - rreq->netfs_ops->expand_readahead(rreq); + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); + wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE); + ret = rreq->error; + if (ret == 0 && rreq->submitted < rreq->len) { + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); + ret = -EIO; + } - /* Expand the request if the cache wants it to start earlier. Note - * that the expansion may get further extended if the VM wishes to - * insert THPs and the preferred start and/or end wind up in the middle - * of THPs. - * - * If this is the case, however, the THP size should be an integer - * multiple of the cache granule size, so we get a whole number of - * granules to deal with. - */ - if (rreq->start != readahead_pos(ractl) || - rreq->len != readahead_length(ractl)) { - readahead_expand(ractl, rreq->start, rreq->len); - rreq->start = readahead_pos(ractl); - rreq->len = readahead_length(ractl); + return ret; +} - trace_netfs_read(rreq, readahead_pos(ractl), readahead_length(ractl), - netfs_read_trace_expanded); - } +/* + * Set up the initial folioq of buffer folios in the rolling buffer and set the + * iterator to refer to it. + */ +static int netfs_prime_buffer(struct netfs_io_request *rreq) +{ + struct folio_queue *folioq; + size_t added; + + folioq = kmalloc(sizeof(*folioq), GFP_KERNEL); + if (!folioq) + return -ENOMEM; + netfs_stat(&netfs_n_folioq); + folioq_init(folioq); + rreq->buffer = folioq; + rreq->buffer_tail = folioq; + rreq->submitted = rreq->start; + iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, 0); + + added = netfs_load_buffer_from_ra(rreq, folioq); + rreq->iter.count += added; + rreq->submitted += added; + return 0; } /* - * Begin an operation, and fetch the stored zero point value from the cookie if - * available. + * Drop the ref on each folio that we inherited from the VM readahead code. We + * still have the folio locks to pin the page until we complete the I/O. + * + * Note that we can't just release the batch in each queue struct as we use the + * occupancy count in other places. 
*/ -static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_inode *ctx) +static void netfs_put_ra_refs(struct folio_queue *folioq) { - return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx)); + struct folio_batch fbatch; + + folio_batch_init(&fbatch); + while (folioq) { + for (unsigned int slot = 0; slot < folioq_count(folioq); slot++) { + struct folio *folio = folioq_folio(folioq, slot); + if (!folio) + continue; + trace_netfs_folio(folio, netfs_folio_trace_read_put); + if (!folio_batch_add(&fbatch, folio)) + folio_batch_release(&fbatch); + } + folioq = folioq->next; + } + + folio_batch_release(&fbatch); } /** @@ -289,22 +411,17 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in void netfs_readahead(struct readahead_control *ractl) { struct netfs_io_request *rreq; - struct netfs_inode *ctx = netfs_inode(ractl->mapping->host); + struct netfs_inode *ictx = netfs_inode(ractl->mapping->host); + unsigned long long start = readahead_pos(ractl); + size_t size = readahead_length(ractl); int ret; - _enter("%lx,%x", readahead_index(ractl), readahead_count(ractl)); - - if (readahead_count(ractl) == 0) - return; - - rreq = netfs_alloc_request(ractl->mapping, ractl->file, - readahead_pos(ractl), - readahead_length(ractl), + rreq = netfs_alloc_request(ractl->mapping, ractl->file, start, size, NETFS_READAHEAD); if (IS_ERR(rreq)) return; - ret = netfs_begin_cache_read(rreq, ctx); + ret = netfs_begin_cache_read(rreq, ictx); if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) goto cleanup_free; @@ -314,18 +431,15 @@ void netfs_readahead(struct readahead_control *ractl) netfs_rreq_expand(rreq, ractl); - /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &ractl->mapping->i_pages, - rreq->start, rreq->len); + rreq->ractl = ractl; + if (netfs_prime_buffer(rreq) < 0) + goto cleanup_free; + netfs_read_to_pagecache(rreq); - /* Drop the refs on the folios here rather than in the cache or - * filesystem. The locks will be dropped in netfs_rreq_unlock(). - */ - while (readahead_folio(ractl)) - ; + /* Release the folio refs whilst we're waiting for the I/O. */ + netfs_put_ra_refs(rreq->buffer); - netfs_begin_read(rreq, false); - netfs_put_request(rreq, false, netfs_rreq_trace_put_return); + netfs_put_request(rreq, true, netfs_rreq_trace_put_return); return; cleanup_free: @@ -334,6 +448,117 @@ void netfs_readahead(struct readahead_control *ractl) } EXPORT_SYMBOL(netfs_readahead); +/* + * Create a rolling buffer with a single occupying folio. + */ +static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio) +{ + struct folio_queue *folioq; + + folioq = kmalloc(sizeof(*folioq), GFP_KERNEL); + if (!folioq) + return -ENOMEM; + + netfs_stat(&netfs_n_folioq); + folioq_init(folioq); + folioq_append(folioq, folio); + BUG_ON(folioq_folio(folioq, 0) != folio); + BUG_ON(folioq_folio_order(folioq, 0) != folio_order(folio)); + rreq->buffer = folioq; + rreq->buffer_tail = folioq; + rreq->submitted = rreq->start + rreq->len; + iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, rreq->len); + rreq->ractl = (struct readahead_control *)1UL; + return 0; +} + +/* + * Read into gaps in a folio partially filled by a streaming write. 
+ */ +static int netfs_read_gaps(struct file *file, struct folio *folio) +{ + struct netfs_io_request *rreq; + struct address_space *mapping = folio->mapping; + struct netfs_folio *finfo = netfs_folio_info(folio); + struct netfs_inode *ctx = netfs_inode(mapping->host); + struct folio *sink = NULL; + struct bio_vec *bvec; + unsigned int from = finfo->dirty_offset; + unsigned int to = from + finfo->dirty_len; + unsigned int off = 0, i = 0; + size_t flen = folio_size(folio); + size_t nr_bvec = flen / PAGE_SIZE + 2; + size_t part; + int ret; + + _enter("%lx", folio->index); + + rreq = netfs_alloc_request(mapping, file, folio_pos(folio), flen, NETFS_READ_GAPS); + if (IS_ERR(rreq)) { + ret = PTR_ERR(rreq); + goto alloc_error; + } + + ret = netfs_begin_cache_read(rreq, ctx); + if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) + goto discard; + + netfs_stat(&netfs_n_rh_read_folio); + trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_read_gaps); + + /* Fiddle the buffer so that a gap at the beginning and/or a gap at the + * end get copied to, but the middle is discarded. + */ + ret = -ENOMEM; + bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_KERNEL); + if (!bvec) + goto discard; + + sink = folio_alloc(GFP_KERNEL, 0); + if (!sink) { + kfree(bvec); + goto discard; + } + + trace_netfs_folio(folio, netfs_folio_trace_read_gaps); + + rreq->direct_bv = bvec; + rreq->direct_bv_count = nr_bvec; + if (from > 0) { + bvec_set_folio(&bvec[i++], folio, from, 0); + off = from; + } + while (off < to) { + part = min_t(size_t, to - off, PAGE_SIZE); + bvec_set_folio(&bvec[i++], sink, part, 0); + off += part; + } + if (to < flen) + bvec_set_folio(&bvec[i++], folio, flen - to, to); + iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); + rreq->submitted = rreq->start + flen; + + netfs_read_to_pagecache(rreq); + + if (sink) + folio_put(sink); + + ret = netfs_wait_for_read(rreq); + if (ret == 0) { + flush_dcache_folio(folio); + folio_mark_uptodate(folio); + } + folio_unlock(folio); + netfs_put_request(rreq, false, netfs_rreq_trace_put_return); + return ret < 0 ? ret : 0; + +discard: + netfs_put_request(rreq, false, netfs_rreq_trace_put_discard); +alloc_error: + folio_unlock(folio); + return ret; +} + /** * netfs_read_folio - Helper to manage a read_folio request * @file: The file to read from @@ -353,9 +578,13 @@ int netfs_read_folio(struct file *file, struct folio *folio) struct address_space *mapping = folio->mapping; struct netfs_io_request *rreq; struct netfs_inode *ctx = netfs_inode(mapping->host); - struct folio *sink = NULL; int ret; + if (folio_test_dirty(folio)) { + trace_netfs_folio(folio, netfs_folio_trace_read_gaps); + return netfs_read_gaps(file, folio); + } + _enter("%lx", folio->index); rreq = netfs_alloc_request(mapping, file, @@ -374,54 +603,12 @@ int netfs_read_folio(struct file *file, struct folio *folio) trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage); /* Set up the output buffer */ - if (folio_test_dirty(folio)) { - /* Handle someone trying to read from an unflushed streaming - * write. We fiddle the buffer so that a gap at the beginning - * and/or a gap at the end get copied to, but the middle is - * discarded. 
- */ - struct netfs_folio *finfo = netfs_folio_info(folio); - struct bio_vec *bvec; - unsigned int from = finfo->dirty_offset; - unsigned int to = from + finfo->dirty_len; - unsigned int off = 0, i = 0; - size_t flen = folio_size(folio); - size_t nr_bvec = flen / PAGE_SIZE + 2; - size_t part; - - ret = -ENOMEM; - bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_KERNEL); - if (!bvec) - goto discard; - - sink = folio_alloc(GFP_KERNEL, 0); - if (!sink) - goto discard; - - trace_netfs_folio(folio, netfs_folio_trace_read_gaps); - - rreq->direct_bv = bvec; - rreq->direct_bv_count = nr_bvec; - if (from > 0) { - bvec_set_folio(&bvec[i++], folio, from, 0); - off = from; - } - while (off < to) { - part = min_t(size_t, to - off, PAGE_SIZE); - bvec_set_folio(&bvec[i++], sink, part, 0); - off += part; - } - if (to < flen) - bvec_set_folio(&bvec[i++], folio, flen - to, to); - iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); - } else { - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); - } + ret = netfs_create_singular_buffer(rreq, folio); + if (ret < 0) + goto discard; - ret = netfs_begin_read(rreq, true); - if (sink) - folio_put(sink); + netfs_read_to_pagecache(rreq); + ret = netfs_wait_for_read(rreq); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); return ret < 0 ? ret : 0; @@ -494,13 +681,10 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len, * * Pre-read data for a write-begin request by drawing data from the cache if * possible, or the netfs if not. Space beyond the EOF is zero-filled. - * Multiple I/O requests from different sources will get munged together. If - * necessary, the readahead window can be expanded in either direction to a - * more convenient alighment for RPC efficiency or to make storage in the cache - * feasible. + * Multiple I/O requests from different sources will get munged together. * * The calling netfs must provide a table of operations, only one of which, - * issue_op, is mandatory. + * issue_read, is mandatory. * * The check_write_begin() operation can be provided to check for and flush * conflicting writes once the folio is grabbed and locked. It is passed a @@ -528,8 +712,6 @@ int netfs_write_begin(struct netfs_inode *ctx, pgoff_t index = pos >> PAGE_SHIFT; int ret; - DEFINE_READAHEAD(ractl, file, NULL, mapping, index); - retry: folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN, mapping_gfp_mask(mapping)); @@ -577,22 +759,13 @@ int netfs_write_begin(struct netfs_inode *ctx, netfs_stat(&netfs_n_rh_write_begin); trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin); - /* Expand the request to meet caching requirements and download - * preferences. 
- */ - ractl._nr_pages = folio_nr_pages(folio); - netfs_rreq_expand(rreq, &ractl); - /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); - - /* We hold the folio locks, so we can drop the references */ - folio_get(folio); - while (readahead_folio(&ractl)) - ; + ret = netfs_create_singular_buffer(rreq, folio); + if (ret < 0) + goto error_put; - ret = netfs_begin_read(rreq, true); + netfs_read_to_pagecache(rreq); + ret = netfs_wait_for_read(rreq); if (ret < 0) goto error; netfs_put_request(rreq, false, netfs_rreq_trace_put_return); @@ -652,10 +825,13 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio, trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write); /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); + ret = netfs_create_singular_buffer(rreq, folio); + if (ret < 0) + goto error_put; - ret = netfs_begin_read(rreq, true); + folioq_mark2(rreq->buffer, 0); + netfs_read_to_pagecache(rreq); + ret = netfs_wait_for_read(rreq); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); return ret; diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c index 10a1e4da6bda..b1a66a6e6bc2 100644 --- a/fs/netfs/direct_read.c +++ b/fs/netfs/direct_read.c @@ -16,6 +16,143 @@ #include #include "internal.h" +static void netfs_prepare_dio_read_iterator(struct netfs_io_subrequest *subreq) +{ + struct netfs_io_request *rreq = subreq->rreq; + size_t rsize; + + rsize = umin(subreq->len, rreq->io_streams[0].sreq_max_len); + subreq->len = rsize; + + if (unlikely(rreq->io_streams[0].sreq_max_segs)) { + size_t limit = netfs_limit_iter(&rreq->iter, 0, rsize, + rreq->io_streams[0].sreq_max_segs); + + if (limit < rsize) { + subreq->len = limit; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); + } + } + + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); + + subreq->io_iter = rreq->iter; + iov_iter_truncate(&subreq->io_iter, subreq->len); + iov_iter_advance(&rreq->iter, subreq->len); +} + +/* + * Perform a read to a buffer from the server, slicing up the region to be read + * according to the network rsize. 
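The dispatch loop that follows slices the request purely by the negotiated limit (sreq_max_len, i.e. the network rsize), issuing one subrequest per slice until the range is exhausted. A trivial userspace model of just that slicing, with made-up numbers, looks like this:

#include <stdio.h>

int main(void)
{
        unsigned long long start = 0, size = 1 << 20;   /* 1 MiB read */
        unsigned long long rsize = 256 * 1024;          /* server-negotiated limit */
        unsigned int index = 0;

        while (size > 0) {
                unsigned long long slice = size < rsize ? size : rsize;

                printf("subreq[%u]: start=%llu len=%llu\n", index++, start, slice);
                start += slice;
                size -= slice;
        }
        return 0;
}
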
+ */ +static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq) +{ + unsigned long long start = rreq->start; + ssize_t size = rreq->len; + int ret = 0; + + atomic_set(&rreq->nr_outstanding, 1); + + do { + struct netfs_io_subrequest *subreq; + ssize_t slice; + + subreq = netfs_alloc_subrequest(rreq); + if (!subreq) { + ret = -ENOMEM; + break; + } + + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; + subreq->start = start; + subreq->len = size; + + atomic_inc(&rreq->nr_outstanding); + spin_lock_bh(&rreq->lock); + list_add_tail(&subreq->rreq_link, &rreq->subrequests); + subreq->prev_donated = rreq->prev_donated; + rreq->prev_donated = 0; + trace_netfs_sreq(subreq, netfs_sreq_trace_added); + spin_unlock_bh(&rreq->lock); + + netfs_stat(&netfs_n_rh_download); + if (rreq->netfs_ops->prepare_read) { + ret = rreq->netfs_ops->prepare_read(subreq); + if (ret < 0) { + atomic_dec(&rreq->nr_outstanding); + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); + break; + } + } + + netfs_prepare_dio_read_iterator(subreq); + slice = subreq->len; + rreq->netfs_ops->issue_read(subreq); + + size -= slice; + start += slice; + rreq->submitted += slice; + + if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && + test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) + break; + cond_resched(); + } while (size > 0); + + if (atomic_dec_and_test(&rreq->nr_outstanding)) + netfs_rreq_terminated(rreq, false); + return ret; +} + +/* + * Perform a read to an application buffer, bypassing the pagecache and the + * local disk cache. + */ +static int netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync) +{ + int ret; + + _enter("R=%x %llx-%llx", + rreq->debug_id, rreq->start, rreq->start + rreq->len - 1); + + if (rreq->len == 0) { + pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); + return -EIO; + } + + // TODO: Use bounce buffer if requested + + inode_dio_begin(rreq->inode); + + ret = netfs_dispatch_unbuffered_reads(rreq); + + if (!rreq->submitted) { + netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); + inode_dio_end(rreq->inode); + ret = 0; + goto out; + } + + if (sync) { + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); + wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, + TASK_UNINTERRUPTIBLE); + + ret = rreq->error; + if (ret == 0 && rreq->submitted < rreq->len && + rreq->origin != NETFS_DIO_READ) { + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); + ret = -EIO; + } + } else { + ret = -EIOCBQUEUED; + } + +out: + _leave(" = %d", ret); + return ret; +} + /** * netfs_unbuffered_read_iter_locked - Perform an unbuffered or direct I/O read * @iocb: The I/O control descriptor describing the read @@ -31,7 +168,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i struct netfs_io_request *rreq; ssize_t ret; size_t orig_count = iov_iter_count(iter); - bool async = !is_sync_kiocb(iocb); + bool sync = is_sync_kiocb(iocb); _enter(""); @@ -78,13 +215,13 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i // TODO: Set up bounce buffer if needed - if (async) + if (!sync) rreq->iocb = iocb; - ret = netfs_begin_read(rreq, is_sync_kiocb(iocb)); + ret = netfs_unbuffered_read(rreq, sync); if (ret < 0) goto out; /* May be -EIOCBQUEUED */ - if (!async) { + if (sync) { // TODO: Copy from bounce buffer iocb->ki_pos += rreq->transferred; ret = rreq->transferred; @@ -94,8 +231,6 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i netfs_put_request(rreq, false, netfs_rreq_trace_put_return); if (ret > 
0) orig_count -= ret; - if (ret != -EIOCBQUEUED) - iov_iter_revert(iter, orig_count - iov_iter_count(iter)); return ret; } EXPORT_SYMBOL(netfs_unbuffered_read_iter_locked); diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 21a3c7d13585..c9f0ed24cb7b 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -23,16 +23,9 @@ /* * buffered_read.c */ -void netfs_rreq_unlock_folios(struct netfs_io_request *rreq); int netfs_prefetch_for_write(struct file *file, struct folio *folio, size_t offset, size_t len); -/* - * io.c - */ -void netfs_rreq_work(struct work_struct *work); -int netfs_begin_read(struct netfs_io_request *rreq, bool sync); - /* * main.c */ @@ -90,6 +83,28 @@ static inline void netfs_see_request(struct netfs_io_request *rreq, trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what); } +/* + * read_collect.c + */ +void netfs_read_termination_worker(struct work_struct *work); +void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async); + +/* + * read_pgpriv2.c + */ +void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_subrequest *subreq, + struct netfs_io_request *rreq, + struct folio_queue *folioq, + int slot); +void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq); +bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq); + +/* + * read_retry.c + */ +void netfs_retry_reads(struct netfs_io_request *rreq); +void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq); + /* * stats.c */ @@ -117,6 +132,7 @@ extern atomic_t netfs_n_wh_buffered_write; extern atomic_t netfs_n_wh_writethrough; extern atomic_t netfs_n_wh_dio_write; extern atomic_t netfs_n_wh_writepages; +extern atomic_t netfs_n_wh_copy_to_cache; extern atomic_t netfs_n_wh_wstream_conflict; extern atomic_t netfs_n_wh_upload; extern atomic_t netfs_n_wh_upload_done; @@ -162,6 +178,11 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, void netfs_reissue_write(struct netfs_io_stream *stream, struct netfs_io_subrequest *subreq, struct iov_iter *source); +void netfs_issue_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream); +int netfs_advance_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + loff_t start, size_t len, bool to_eof); struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len); int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc, struct folio *folio, size_t copied, bool to_page_end, diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index b781bbbf1d8d..72a435e5fc6d 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -188,9 +188,59 @@ static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offse return min(span, max_size); } +/* + * Select the span of a folio queue iterator we're going to use. Limit it by + * both maximum size and maximum number of segments. Returns the size of the + * span in bytes. 
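A compact way to see what the folio-queue limiter below computes: walk the queued folio sizes forward from the starting offset and stop as soon as either the byte budget or the segment budget is exhausted. The sketch is userspace-only, skips the iov_offset and empty-slot handling, and uses a four-slot toy queue with invented sizes:

#include <stdio.h>
#include <stddef.h>

struct fq_seg { size_t sizes[4]; struct fq_seg *next; }; /* tiny folio_queue stand-in */

/* Accumulate the usable span until max_size bytes or max_segs folios have
 * been covered, the same limiting rule as netfs_limit_folioq().
 */
static size_t limit_span(const struct fq_seg *q, size_t start_offset,
                         size_t max_size, size_t max_segs)
{
        size_t span = 0, nsegs = 0;

        for (; q; q = q->next) {
                for (int slot = 0; slot < 4; slot++) {
                        size_t flen = q->sizes[slot];

                        if (!flen)
                                continue;
                        if (start_offset < flen) {
                                span += flen - start_offset;
                                nsegs++;
                                start_offset = 0;
                        } else {
                                start_offset -= flen;
                        }
                        if (span >= max_size || nsegs >= max_segs)
                                return span < max_size ? span : max_size;
                }
        }
        return span < max_size ? span : max_size;
}

int main(void)
{
        struct fq_seg b = { { 4096, 4096, 0, 0 }, NULL };
        struct fq_seg a = { { 16384, 4096, 4096, 4096 }, &b };

        printf("span = %zu\n", limit_span(&a, 512, 65536, 3));
        return 0;
}
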
+ */ +static size_t netfs_limit_folioq(const struct iov_iter *iter, size_t start_offset, + size_t max_size, size_t max_segs) +{ + const struct folio_queue *folioq = iter->folioq; + unsigned int nsegs = 0; + unsigned int slot = iter->folioq_slot; + size_t span = 0, n = iter->count; + + if (WARN_ON(!iov_iter_is_folioq(iter)) || + WARN_ON(start_offset > n) || + n == 0) + return 0; + max_size = umin(max_size, n - start_offset); + + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + slot = 0; + } + + start_offset += iter->iov_offset; + do { + size_t flen = folioq_folio_size(folioq, slot); + + if (start_offset < flen) { + span += flen - start_offset; + nsegs++; + start_offset = 0; + } else { + start_offset -= flen; + } + if (span >= max_size || nsegs >= max_segs) + break; + + slot++; + if (slot >= folioq_nr_slots(folioq)) { + folioq = folioq->next; + slot = 0; + } + } while (folioq); + + return umin(span, max_size); +} + size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset, size_t max_size, size_t max_segs) { + if (iov_iter_is_folioq(iter)) + return netfs_limit_folioq(iter, start_offset, max_size, max_segs); if (iov_iter_is_bvec(iter)) return netfs_limit_bvec(iter, start_offset, max_size, max_segs); if (iov_iter_is_xarray(iter)) diff --git a/fs/netfs/main.c b/fs/netfs/main.c index 1ee712bb3610..4f7212ca3470 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -36,12 +36,14 @@ DEFINE_SPINLOCK(netfs_proc_lock); static const char *netfs_origins[nr__netfs_io_origin] = { [NETFS_READAHEAD] = "RA", [NETFS_READPAGE] = "RP", + [NETFS_READ_GAPS] = "RG", [NETFS_READ_FOR_WRITE] = "RW", [NETFS_DIO_READ] = "DR", [NETFS_WRITEBACK] = "WB", [NETFS_WRITETHROUGH] = "WT", [NETFS_UNBUFFERED_WRITE] = "UW", [NETFS_DIO_WRITE] = "DW", + [NETFS_PGPRIV2_COPY_TO_CACHE] = "2C", }; /* @@ -61,7 +63,7 @@ static int netfs_requests_seq_show(struct seq_file *m, void *v) rreq = list_entry(v, struct netfs_io_request, proc_link); seq_printf(m, - "%08x %s %3d %2lx %4d %3d @%04llx %llx/%llx", + "%08x %s %3d %2lx %4ld %3d @%04llx %llx/%llx", rreq->debug_id, netfs_origins[rreq->origin], refcount_read(&rreq->ref), diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index 4291cd405fc1..31e388ec6e48 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -36,7 +36,6 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, memset(rreq, 0, kmem_cache_size(cache)); rreq->start = start; rreq->len = len; - rreq->upper_len = len; rreq->origin = origin; rreq->netfs_ops = ctx->ops; rreq->mapping = mapping; @@ -44,6 +43,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, rreq->i_size = i_size_read(inode); rreq->debug_id = atomic_inc_return(&debug_ids); rreq->wsize = INT_MAX; + rreq->io_streams[0].sreq_max_len = ULONG_MAX; + rreq->io_streams[0].sreq_max_segs = 0; spin_lock_init(&rreq->lock); INIT_LIST_HEAD(&rreq->io_streams[0].subrequests); INIT_LIST_HEAD(&rreq->io_streams[1].subrequests); @@ -52,9 +53,10 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, if (origin == NETFS_READAHEAD || origin == NETFS_READPAGE || + origin == NETFS_READ_GAPS || origin == NETFS_READ_FOR_WRITE || origin == NETFS_DIO_READ) - INIT_WORK(&rreq->work, netfs_rreq_work); + INIT_WORK(&rreq->work, netfs_read_termination_worker); else INIT_WORK(&rreq->work, netfs_write_collection_worker); @@ -163,7 +165,7 @@ void netfs_put_request(struct netfs_io_request *rreq, bool was_async, if (was_async) { rreq->work.func = netfs_free_request; if 
(!queue_work(system_unbound_wq, &rreq->work)) - BUG(); + WARN_ON(1); } else { netfs_free_request(&rreq->work); } diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c new file mode 100644 index 000000000000..b18c65ba5580 --- /dev/null +++ b/fs/netfs/read_collect.c @@ -0,0 +1,544 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Network filesystem read subrequest result collection, assessment and + * retrying. + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include +#include +#include "internal.h" + +/* + * Clear the unread part of an I/O request. + */ +static void netfs_clear_unread(struct netfs_io_subrequest *subreq) +{ + netfs_reset_iter(subreq); + WARN_ON_ONCE(subreq->len - subreq->transferred != iov_iter_count(&subreq->io_iter)); + iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); + if (subreq->start + subreq->transferred >= subreq->rreq->i_size) + __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); +} + +/* + * Flush, mark and unlock a folio that's now completely read. If we want to + * cache the folio, we set the group to NETFS_FOLIO_COPY_TO_CACHE, mark it + * dirty and let writeback handle it. + */ +static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq, + struct netfs_io_request *rreq, + struct folio_queue *folioq, + int slot) +{ + struct netfs_folio *finfo; + struct folio *folio = folioq_folio(folioq, slot); + + flush_dcache_folio(folio); + folio_mark_uptodate(folio); + + if (!test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { + finfo = netfs_folio_info(folio); + if (finfo) { + trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); + if (finfo->netfs_group) + folio_change_private(folio, finfo->netfs_group); + else + folio_detach_private(folio); + kfree(finfo); + } + + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { + if (!WARN_ON_ONCE(folio_get_private(folio) != NULL)) { + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); + folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); + folio_mark_dirty(folio); + } + } else { + trace_netfs_folio(folio, netfs_folio_trace_read_done); + } + } else { + // TODO: Use of PG_private_2 is deprecated. + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) + netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot); + } + + if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { + if (folio->index == rreq->no_unlock_folio && + test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) { + _debug("no unlock"); + } else { + trace_netfs_folio(folio, netfs_folio_trace_read_unlock); + folio_unlock(folio); + } + } +} + +/* + * Unlock any folios that are now completely read. Returns true if the + * subrequest is removed from the list. 
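netfs_clear_unread() above only ever pads out the untouched tail of the buffer and records whether EOF explains the shortfall. A self-contained model of that arithmetic, with arbitrary example sizes, is:

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

int main(void)
{
        unsigned long long i_size = 20000;              /* file size */
        unsigned long long start = 16384;               /* subreq start */
        size_t len = 8192, transferred = 3616;          /* short read up to EOF */
        char buf[8192];

        memset(buf, 'X', sizeof(buf));                  /* 'X' = data from the server */

        /* Zero the part of the buffer the read never reached and note
         * whether the short read is explained by hitting EOF, as the
         * iov_iter_zero() call and the HIT_EOF test do.
         */
        memset(buf + transferred, 0, len - transferred);
        bool hit_eof = start + transferred >= i_size;

        printf("zeroed %zu bytes, hit_eof=%d\n", len - transferred, hit_eof);
        return 0;
}
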
+ */ +static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was_async) +{ + struct netfs_io_subrequest *prev, *next; + struct netfs_io_request *rreq = subreq->rreq; + struct folio_queue *folioq = subreq->curr_folioq; + size_t avail, prev_donated, next_donated, fsize, part, excess; + loff_t fpos, start; + loff_t fend; + int slot = subreq->curr_folioq_slot; + + if (WARN(subreq->transferred > subreq->len, + "Subreq overread: R%x[%x] %zu > %zu", + rreq->debug_id, subreq->debug_index, + subreq->transferred, subreq->len)) + subreq->transferred = subreq->len; + +next_folio: + fsize = PAGE_SIZE << subreq->curr_folio_order; + fpos = round_down(subreq->start + subreq->consumed, fsize); + fend = fpos + fsize; + + if (WARN_ON_ONCE(!folioq) || + WARN_ON_ONCE(!folioq_folio(folioq, slot)) || + WARN_ON_ONCE(folioq_folio(folioq, slot)->index != fpos / PAGE_SIZE)) { + pr_err("R=%08x[%x] s=%llx-%llx ctl=%zx/%zx/%zx sl=%u\n", + rreq->debug_id, subreq->debug_index, + subreq->start, subreq->start + subreq->transferred - 1, + subreq->consumed, subreq->transferred, subreq->len, + slot); + if (folioq) { + struct folio *folio = folioq_folio(folioq, slot); + + pr_err("folioq: orders=%02x%02x%02x%02x\n", + folioq->orders[0], folioq->orders[1], + folioq->orders[2], folioq->orders[3]); + if (folio) + pr_err("folio: %llx-%llx ix=%llx o=%u qo=%u\n", + fpos, fend - 1, folio_pos(folio), folio_order(folio), + folioq_folio_order(folioq, slot)); + } + } + +donation_changed: + /* Try to consume the current folio if we've hit or passed the end of + * it. There's a possibility that this subreq doesn't start at the + * beginning of the folio, in which case we need to donate to/from the + * preceding subreq. + * + * We also need to include any potential donation back from the + * following subreq. + */ + prev_donated = READ_ONCE(subreq->prev_donated); + next_donated = READ_ONCE(subreq->next_donated); + if (prev_donated || next_donated) { + spin_lock_bh(&rreq->lock); + prev_donated = subreq->prev_donated; + next_donated = subreq->next_donated; + subreq->start -= prev_donated; + subreq->len += prev_donated; + subreq->transferred += prev_donated; + prev_donated = subreq->prev_donated = 0; + if (subreq->transferred == subreq->len) { + subreq->len += next_donated; + subreq->transferred += next_donated; + next_donated = subreq->next_donated = 0; + } + trace_netfs_sreq(subreq, netfs_sreq_trace_add_donations); + spin_unlock_bh(&rreq->lock); + } + + avail = subreq->transferred; + if (avail == subreq->len) + avail += next_donated; + start = subreq->start; + if (subreq->consumed == 0) { + start -= prev_donated; + avail += prev_donated; + } else { + start += subreq->consumed; + avail -= subreq->consumed; + } + part = umin(avail, fsize); + + trace_netfs_progress(subreq, start, avail, part); + + if (start + avail >= fend) { + if (fpos == start) { + /* Flush, unlock and mark for caching any folio we've just read. */ + subreq->consumed = fend - subreq->start; + netfs_unlock_read_folio(subreq, rreq, folioq, slot); + folioq_mark2(folioq, slot); + if (subreq->consumed >= subreq->len) + goto remove_subreq; + } else if (fpos < start) { + excess = fend - subreq->start; + + spin_lock_bh(&rreq->lock); + /* If we complete first on a folio split with the + * preceding subreq, donate to that subreq - otherwise + * we get the responsibility. 
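The donation bookkeeping handled above is easier to see with the locking, tracing and list handling stripped away. A minimal userspace model of how prev_donated/next_donated get folded into a single subrequest (invented numbers, one 16KiB folio size assumed):

#include <stdio.h>
#include <stddef.h>

#define FOLIO_SIZE 16384ull

struct sub {
        unsigned long long start;
        size_t len;
        size_t transferred;
        size_t prev_donated;    /* spare bytes handed on from the previous subreq */
        size_t next_donated;    /* spare bytes handed back from the next subreq */
};

/* Fold pending donations into the subrequest: a donation from the previous
 * subreq extends our region backwards, one from the next subreq extends it
 * forwards once we have transferred everything of our own.
 */
static void absorb_donations(struct sub *s)
{
        s->start -= s->prev_donated;
        s->len += s->prev_donated;
        s->transferred += s->prev_donated;
        s->prev_donated = 0;
        if (s->transferred == s->len) {
                s->len += s->next_donated;
                s->transferred += s->next_donated;
                s->next_donated = 0;
        }
}

int main(void)
{
        /* Subreq covering the back half of a folio plus a bit of the next
         * one; the neighbours donated the pieces it was missing either side.
         */
        struct sub s = { .start = 8192, .len = 12288, .transferred = 12288,
                         .prev_donated = 8192, .next_donated = 4096 };

        absorb_donations(&s);
        printf("region %llu-%llu, whole folios covered: %llu\n",
               s.start, s.start + s.len,
               (s.start + s.len) / FOLIO_SIZE - s.start / FOLIO_SIZE);
        return 0;
}
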
+ */ + if (subreq->prev_donated != prev_donated) { + spin_unlock_bh(&rreq->lock); + goto donation_changed; + } + + if (list_is_first(&subreq->rreq_link, &rreq->subrequests)) { + spin_unlock_bh(&rreq->lock); + pr_err("Can't donate prior to front\n"); + goto bad; + } + + prev = list_prev_entry(subreq, rreq_link); + WRITE_ONCE(prev->next_donated, prev->next_donated + excess); + subreq->start += excess; + subreq->len -= excess; + subreq->transferred -= excess; + trace_netfs_donate(rreq, subreq, prev, excess, + netfs_trace_donate_tail_to_prev); + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); + + if (subreq->consumed >= subreq->len) + goto remove_subreq_locked; + spin_unlock_bh(&rreq->lock); + } else { + pr_err("fpos > start\n"); + goto bad; + } + + /* Advance the rolling buffer to the next folio. */ + slot++; + if (slot >= folioq_nr_slots(folioq)) { + slot = 0; + folioq = folioq->next; + subreq->curr_folioq = folioq; + } + subreq->curr_folioq_slot = slot; + if (folioq && folioq_folio(folioq, slot)) + subreq->curr_folio_order = folioq->orders[slot]; + if (!was_async) + cond_resched(); + goto next_folio; + } + + /* Deal with partial progress. */ + if (subreq->transferred < subreq->len) + return false; + + /* Donate the remaining downloaded data to one of the neighbouring + * subrequests. Note that we may race with them doing the same thing. + */ + spin_lock_bh(&rreq->lock); + + if (subreq->prev_donated != prev_donated || + subreq->next_donated != next_donated) { + spin_unlock_bh(&rreq->lock); + cond_resched(); + goto donation_changed; + } + + /* Deal with the trickiest case: that this subreq is in the middle of a + * folio, not touching either edge, but finishes first. In such a + * case, we donate to the previous subreq, if there is one, so that the + * donation is only handled when that completes - and remove this + * subreq from the list. + * + * If the previous subreq finished first, we will have acquired their + * donation and should be able to unlock folios and/or donate nextwards. + */ + if (!subreq->consumed && + !prev_donated && + !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { + prev = list_prev_entry(subreq, rreq_link); + WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); + subreq->start += subreq->len; + subreq->len = 0; + subreq->transferred = 0; + trace_netfs_donate(rreq, subreq, prev, subreq->len, + netfs_trace_donate_to_prev); + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); + goto remove_subreq_locked; + } + + /* If we can't donate down the chain, donate up the chain instead. */ + excess = subreq->len - subreq->consumed + next_donated; + + if (!subreq->consumed) + excess += prev_donated; + + if (list_is_last(&subreq->rreq_link, &rreq->subrequests)) { + rreq->prev_donated = excess; + trace_netfs_donate(rreq, subreq, NULL, excess, + netfs_trace_donate_to_deferred_next); + } else { + next = list_next_entry(subreq, rreq_link); + WRITE_ONCE(next->prev_donated, excess); + trace_netfs_donate(rreq, subreq, next, excess, + netfs_trace_donate_to_next); + } + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_next); + subreq->len = subreq->consumed; + subreq->transferred = subreq->consumed; + goto remove_subreq_locked; + +remove_subreq: + spin_lock_bh(&rreq->lock); +remove_subreq_locked: + subreq->consumed = subreq->len; + list_del(&subreq->rreq_link); + spin_unlock_bh(&rreq->lock); + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_consumed); + return true; + +bad: + /* Errr... 
prev and next both donated to us, but insufficient to finish + * the folio. + */ + printk("R=%08x[%x] s=%llx-%llx %zx/%zx/%zx\n", + rreq->debug_id, subreq->debug_index, + subreq->start, subreq->start + subreq->transferred - 1, + subreq->consumed, subreq->transferred, subreq->len); + printk("folio: %llx-%llx\n", fpos, fend - 1); + printk("donated: prev=%zx next=%zx\n", prev_donated, next_donated); + printk("s=%llx av=%zx part=%zx\n", start, avail, part); + BUG(); +} + +/* + * Do page flushing and suchlike after DIO. + */ +static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) +{ + struct netfs_io_subrequest *subreq; + unsigned int i; + + /* Collect unbuffered reads and direct reads, adding up the transfer + * sizes until we find the first short or failed subrequest. + */ + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { + rreq->transferred += subreq->transferred; + + if (subreq->transferred < subreq->len || + test_bit(NETFS_SREQ_FAILED, &subreq->flags)) { + rreq->error = subreq->error; + break; + } + } + + if (rreq->origin == NETFS_DIO_READ) { + for (i = 0; i < rreq->direct_bv_count; i++) { + flush_dcache_page(rreq->direct_bv[i].bv_page); + // TODO: cifs marks pages in the destination buffer + // dirty under some circumstances after a read. Do we + // need to do that too? + set_page_dirty(rreq->direct_bv[i].bv_page); + } + } + + if (rreq->iocb) { + rreq->iocb->ki_pos += rreq->transferred; + if (rreq->iocb->ki_complete) + rreq->iocb->ki_complete( + rreq->iocb, rreq->error ? rreq->error : rreq->transferred); + } + if (rreq->netfs_ops->done) + rreq->netfs_ops->done(rreq); + if (rreq->origin == NETFS_DIO_READ) + inode_dio_end(rreq->inode); +} + +/* + * Assess the state of a read request and decide what to do next. + * + * Note that we're in normal kernel thread context at this point, possibly + * running on a workqueue. + */ +static void netfs_rreq_assess(struct netfs_io_request *rreq) +{ + trace_netfs_rreq(rreq, netfs_rreq_trace_assess); + + //netfs_rreq_is_still_valid(rreq); + + if (test_and_clear_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags)) { + netfs_retry_reads(rreq); + return; + } + + if (rreq->origin == NETFS_DIO_READ || + rreq->origin == NETFS_READ_GAPS) + netfs_rreq_assess_dio(rreq); + task_io_account_read(rreq->transferred); + + trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); + clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); + wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); + + trace_netfs_rreq(rreq, netfs_rreq_trace_done); + netfs_clear_subrequests(rreq, false); + netfs_unlock_abandoned_read_pages(rreq); + if (unlikely(test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags))) + netfs_pgpriv2_write_to_the_cache(rreq); +} + +void netfs_read_termination_worker(struct work_struct *work) +{ + struct netfs_io_request *rreq = + container_of(work, struct netfs_io_request, work); + netfs_see_request(rreq, netfs_rreq_trace_see_work); + netfs_rreq_assess(rreq); + netfs_put_request(rreq, false, netfs_rreq_trace_put_work_complete); +} + +/* + * Handle the completion of all outstanding I/O operations on a read request. + * We inherit a ref from the caller. + */ +void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async) +{ + if (!was_async) + return netfs_rreq_assess(rreq); + if (!work_pending(&rreq->work)) { + netfs_get_request(rreq, netfs_rreq_trace_get_work); + if (!queue_work(system_unbound_wq, &rreq->work)) + netfs_put_request(rreq, was_async, netfs_rreq_trace_put_work_nq); + } +} + +/** + * netfs_read_subreq_progress - Note progress of a read operation. 
+ * @subreq: The read request that has terminated. + * @was_async: True if we're in an asynchronous context. + * + * This tells the read side of netfs lib that a contributory I/O operation has + * made some progress and that it may be possible to unlock some folios. + * + * Before calling, the filesystem should update subreq->transferred to track + * the amount of data copied into the output buffer. + * + * If @was_async is true, the caller might be running in softirq or interrupt + * context and we can't sleep. + */ +void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, + bool was_async) +{ + struct netfs_io_request *rreq = subreq->rreq; + + trace_netfs_sreq(subreq, netfs_sreq_trace_progress); + + if (subreq->transferred > subreq->consumed && + (rreq->origin == NETFS_READAHEAD || + rreq->origin == NETFS_READPAGE || + rreq->origin == NETFS_READ_FOR_WRITE)) { + netfs_consume_read_data(subreq, was_async); + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + } +} +EXPORT_SYMBOL(netfs_read_subreq_progress); + +/** + * netfs_read_subreq_terminated - Note the termination of an I/O operation. + * @subreq: The I/O request that has terminated. + * @error: Error code indicating type of completion. + * @was_async: The termination was asynchronous + * + * This tells the read helper that a contributory I/O operation has terminated, + * one way or another, and that it should integrate the results. + * + * The caller indicates the outcome of the operation through @error, supplying + * 0 to indicate a successful or retryable transfer (if NETFS_SREQ_NEED_RETRY + * is set) or a negative error code. The helper will look after reissuing I/O + * operations as appropriate and writing downloaded data to the cache. + * + * Before calling, the filesystem should update subreq->transferred to track + * the amount of data copied into the output buffer. + * + * If @was_async is true, the caller might be running in softirq or interrupt + * context and we can't sleep. + */ +void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, + int error, bool was_async) +{ + struct netfs_io_request *rreq = subreq->rreq; + + switch (subreq->source) { + case NETFS_READ_FROM_CACHE: + netfs_stat(&netfs_n_rh_read_done); + break; + case NETFS_DOWNLOAD_FROM_SERVER: + netfs_stat(&netfs_n_rh_download_done); + break; + default: + break; + } + + if (rreq->origin != NETFS_DIO_READ) { + /* Collect buffered reads. + * + * If the read completed validly short, then we can clear the + * tail before going on to unlock the folios. + */ + if (error == 0 && subreq->transferred < subreq->len && + (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags) || + test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags))) { + netfs_clear_unread(subreq); + subreq->transferred = subreq->len; + trace_netfs_sreq(subreq, netfs_sreq_trace_clear); + } + if (subreq->transferred > subreq->consumed && + (rreq->origin == NETFS_READAHEAD || + rreq->origin == NETFS_READPAGE || + rreq->origin == NETFS_READ_FOR_WRITE)) { + netfs_consume_read_data(subreq, was_async); + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + } + rreq->transferred += subreq->transferred; + } + + /* Deal with retry requests, short reads and errors. If we retry + * but don't make progress, we abandon the attempt. 
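The retry policy spelled out in that comment boils down to a three-way decision per short subrequest, made in the code that follows. A standalone model of just that decision (flag names borrowed from the patch, everything else invented):

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

enum verdict { SUBREQ_DONE, SUBREQ_RETRY, SUBREQ_FAILED };

struct sub_state {
        size_t len, transferred, consumed;
        bool hit_eof;
        bool no_progress;       /* set once a retry produced no new data */
};

/* A short read retries as long as it keeps making progress; the second
 * attempt in a row with no progress turns into a hard failure.
 */
static enum verdict classify_short_read(struct sub_state *s)
{
        if (s->transferred >= s->len || s->hit_eof)
                return SUBREQ_DONE;
        if (s->transferred > s->consumed) {
                s->no_progress = false;
                return SUBREQ_RETRY;
        }
        if (!s->no_progress) {
                s->no_progress = true;
                return SUBREQ_RETRY;
        }
        return SUBREQ_FAILED;           /* would become -ENODATA */
}

int main(void)
{
        struct sub_state s = { .len = 65536, .transferred = 16384, .consumed = 0 };

        printf("%d\n", classify_short_read(&s));        /* 1: retry, made progress */
        s.consumed = s.transferred;
        printf("%d\n", classify_short_read(&s));        /* 1: retry, first stall */
        printf("%d\n", classify_short_read(&s));        /* 2: failed */
        return 0;
}
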
+ */ + if (!error && subreq->transferred < subreq->len) { + if (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags)) { + trace_netfs_sreq(subreq, netfs_sreq_trace_hit_eof); + } else { + trace_netfs_sreq(subreq, netfs_sreq_trace_short); + if (subreq->transferred > subreq->consumed) { + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); + } else if (!__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); + set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); + } else { + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); + error = -ENODATA; + } + } + } + + subreq->error = error; + trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); + + if (unlikely(error < 0)) { + trace_netfs_failure(rreq, subreq, error, netfs_fail_read); + if (subreq->source == NETFS_READ_FROM_CACHE) { + netfs_stat(&netfs_n_rh_read_failed); + } else { + netfs_stat(&netfs_n_rh_download_failed); + set_bit(NETFS_RREQ_FAILED, &rreq->flags); + rreq->error = subreq->error; + } + } + + if (atomic_dec_and_test(&rreq->nr_outstanding)) + netfs_rreq_terminated(rreq, was_async); + + netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); +} +EXPORT_SYMBOL(netfs_read_subreq_terminated); diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c new file mode 100644 index 000000000000..9439461d535f --- /dev/null +++ b/fs/netfs/read_pgpriv2.c @@ -0,0 +1,264 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Read with PG_private_2 [DEPRECATED]. + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include +#include +#include "internal.h" + +/* + * [DEPRECATED] Mark page as requiring copy-to-cache using PG_private_2. The + * third mark in the folio queue is used to indicate that this folio needs + * writing. + */ +void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_subrequest *subreq, + struct netfs_io_request *rreq, + struct folio_queue *folioq, + int slot) +{ + struct folio *folio = folioq_folio(folioq, slot); + + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); + folio_start_private_2(folio); + folioq_mark3(folioq, slot); +} + +/* + * [DEPRECATED] Cancel PG_private_2 on all marked folios in the event of an + * unrecoverable error. + */ +static void netfs_pgpriv2_cancel(struct folio_queue *folioq) +{ + struct folio *folio; + int slot; + + while (folioq) { + if (!folioq->marks3) { + folioq = folioq->next; + continue; + } + + slot = __ffs(folioq->marks3); + folio = folioq_folio(folioq, slot); + + trace_netfs_folio(folio, netfs_folio_trace_cancel_copy); + folio_end_private_2(folio); + folioq_unmark3(folioq, slot); + } +} + +/* + * [DEPRECATED] Copy a folio to the cache with PG_private_2 set. + */ +static int netfs_pgpriv2_copy_folio(struct netfs_io_request *wreq, struct folio *folio) +{ + struct netfs_io_stream *cache = &wreq->io_streams[1]; + size_t fsize = folio_size(folio), flen = fsize; + loff_t fpos = folio_pos(folio), i_size; + bool to_eof = false; + + _enter(""); + + /* netfs_perform_write() may shift i_size around the page or from out + * of the page to beyond it, but cannot move i_size into or through the + * page since we have it locked. + */ + i_size = i_size_read(wreq->inode); + + if (fpos >= i_size) { + /* mmap beyond eof. 
*/ + _debug("beyond eof"); + folio_end_private_2(folio); + return 0; + } + + if (fpos + fsize > wreq->i_size) + wreq->i_size = i_size; + + if (flen > i_size - fpos) { + flen = i_size - fpos; + to_eof = true; + } else if (flen == i_size - fpos) { + to_eof = true; + } + + _debug("folio %zx %zx", flen, fsize); + + trace_netfs_folio(folio, netfs_folio_trace_store_copy); + + /* Attach the folio to the rolling buffer. */ + if (netfs_buffer_append_folio(wreq, folio, false) < 0) + return -ENOMEM; + + cache->submit_max_len = fsize; + cache->submit_off = 0; + cache->submit_len = flen; + + /* Attach the folio to one or more subrequests. For a big folio, we + * could end up with thousands of subrequests if the wsize is small - + * but we might need to wait during the creation of subrequests for + * network resources (eg. SMB credits). + */ + do { + ssize_t part; + + wreq->io_iter.iov_offset = cache->submit_off; + + atomic64_set(&wreq->issued_to, fpos + cache->submit_off); + part = netfs_advance_write(wreq, cache, fpos + cache->submit_off, + cache->submit_len, to_eof); + cache->submit_off += part; + cache->submit_max_len -= part; + if (part > cache->submit_len) + cache->submit_len = 0; + else + cache->submit_len -= part; + } while (cache->submit_len > 0); + + wreq->io_iter.iov_offset = 0; + iov_iter_advance(&wreq->io_iter, fsize); + atomic64_set(&wreq->issued_to, fpos + fsize); + + if (flen < fsize) + netfs_issue_write(wreq, cache); + + _leave(" = 0"); + return 0; +} + +/* + * [DEPRECATED] Go through the buffer and write any folios that are marked with + * the third mark to the cache. + */ +void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq) +{ + struct netfs_io_request *wreq; + struct folio_queue *folioq; + struct folio *folio; + int error = 0; + int slot = 0; + + _enter(""); + + if (!fscache_resources_valid(&rreq->cache_resources)) + goto couldnt_start; + + /* Need the first folio to be able to set up the op. */ + for (folioq = rreq->buffer; folioq; folioq = folioq->next) { + if (folioq->marks3) { + slot = __ffs(folioq->marks3); + break; + } + } + if (!folioq) + return; + folio = folioq_folio(folioq, slot); + + wreq = netfs_create_write_req(rreq->mapping, NULL, folio_pos(folio), + NETFS_PGPRIV2_COPY_TO_CACHE); + if (IS_ERR(wreq)) { + kleave(" [create %ld]", PTR_ERR(wreq)); + goto couldnt_start; + } + + trace_netfs_write(wreq, netfs_write_trace_copy_to_cache); + netfs_stat(&netfs_n_wh_copy_to_cache); + + for (;;) { + error = netfs_pgpriv2_copy_folio(wreq, folio); + if (error < 0) + break; + + folioq_unmark3(folioq, slot); + if (!folioq->marks3) { + folioq = folioq->next; + if (!folioq) + break; + } + + slot = __ffs(folioq->marks3); + folio = folioq_folio(folioq, slot); + } + + netfs_issue_write(wreq, &wreq->io_streams[1]); + smp_wmb(); /* Write lists before ALL_QUEUED. */ + set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags); + + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); + _leave(" = %d", error); +couldnt_start: + netfs_pgpriv2_cancel(rreq->buffer); +} + +/* + * [DEPRECATED] Remove the PG_private_2 mark from any folios we've finished + * copying. 
+ */ +bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq) +{ + struct folio_queue *folioq = wreq->buffer; + unsigned long long collected_to = wreq->collected_to; + unsigned int slot = wreq->buffer_head_slot; + bool made_progress = false; + + if (slot >= folioq_nr_slots(folioq)) { + folioq = netfs_delete_buffer_head(wreq); + slot = 0; + } + + for (;;) { + struct folio *folio; + unsigned long long fpos, fend; + size_t fsize, flen; + + folio = folioq_folio(folioq, slot); + if (WARN_ONCE(!folio_test_private_2(folio), + "R=%08x: folio %lx is not marked private_2\n", + wreq->debug_id, folio->index)) + trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); + + fpos = folio_pos(folio); + fsize = folio_size(folio); + flen = fsize; + + fend = min_t(unsigned long long, fpos + flen, wreq->i_size); + + trace_netfs_collect_folio(wreq, folio, fend, collected_to); + + /* Unlock any folio we've transferred all of. */ + if (collected_to < fend) + break; + + trace_netfs_folio(folio, netfs_folio_trace_end_copy); + folio_end_private_2(folio); + wreq->cleaned_to = fpos + fsize; + made_progress = true; + + /* Clean up the head folioq. If we clear an entire folioq, then + * we can get rid of it provided it's not also the tail folioq + * being filled by the issuer. + */ + folioq_clear(folioq, slot); + slot++; + if (slot >= folioq_nr_slots(folioq)) { + if (READ_ONCE(wreq->buffer_tail) == folioq) + break; + folioq = netfs_delete_buffer_head(wreq); + slot = 0; + } + + if (fpos + fsize >= collected_to) + break; + } + + wreq->buffer = folioq; + wreq->buffer_head_slot = slot; + return made_progress; +} diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c new file mode 100644 index 000000000000..0350592ea804 --- /dev/null +++ b/fs/netfs/read_retry.c @@ -0,0 +1,256 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Network filesystem read subrequest retrying. + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include "internal.h" + +static void netfs_reissue_read(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq) +{ + struct iov_iter *io_iter = &subreq->io_iter; + + if (iov_iter_is_folioq(io_iter)) { + subreq->curr_folioq = (struct folio_queue *)io_iter->folioq; + subreq->curr_folioq_slot = io_iter->folioq_slot; + subreq->curr_folio_order = subreq->curr_folioq->orders[subreq->curr_folioq_slot]; + } + + atomic_inc(&rreq->nr_outstanding); + __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); + netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); + subreq->rreq->netfs_ops->issue_read(subreq); +} + +/* + * Go through the list of failed/short reads, retrying all retryable ones. We + * need to switch failed cache reads to network downloads. + */ +static void netfs_retry_read_subrequests(struct netfs_io_request *rreq) +{ + struct netfs_io_subrequest *subreq; + struct netfs_io_stream *stream0 = &rreq->io_streams[0]; + LIST_HEAD(sublist); + LIST_HEAD(queue); + + _enter("R=%x", rreq->debug_id); + + if (list_empty(&rreq->subrequests)) + return; + + if (rreq->netfs_ops->retry_request) + rreq->netfs_ops->retry_request(rreq, NULL); + + /* If there's no renegotiation to do, just resend each retryable subreq + * up to the first permanently failed one. 
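For the simple path just described, where no read renegotiation is needed, the retry pass that follows is nothing more than a forward scan that reissues retryable subrequests and stops at the first hard failure. Roughly, as a userspace sketch over an invented list:

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct subreq { int index; bool failed; bool need_retry; };

int main(void)
{
        struct subreq list[] = {
                { 0, false, false },    /* completed fine */
                { 1, false, true },     /* short read, retryable */
                { 2, true,  false },    /* permanent failure: stop here */
                { 3, false, true },     /* never reached */
        };

        for (size_t i = 0; i < sizeof(list) / sizeof(list[0]); i++) {
                if (list[i].failed)
                        break;
                if (list[i].need_retry)
                        printf("reissue subreq %d\n", list[i].index);
        }
        return 0;
}
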
+ */ + if (!rreq->netfs_ops->prepare_read && + !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) { + struct netfs_io_subrequest *subreq; + + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { + if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) + break; + if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { + netfs_reset_iter(subreq); + netfs_reissue_read(rreq, subreq); + } + } + return; + } + + /* Okay, we need to renegotiate all the download requests and flip any + * failed cache reads over to being download requests and negotiate + * those also. All fully successful subreqs have been removed from the + * list and any spare data from those has been donated. + * + * What we do is decant the list and rebuild it one subreq at a time so + * that we don't end up with donations jumping over a gap we're busy + * populating with smaller subrequests. In the event that the subreq + * we just launched finishes before we insert the next subreq, it'll + * fill in rreq->prev_donated instead. + + * Note: Alternatively, we could split the tail subrequest right before + * we reissue it and fix up the donations under lock. + */ + list_splice_init(&rreq->subrequests, &queue); + + do { + struct netfs_io_subrequest *from; + struct iov_iter source; + unsigned long long start, len; + size_t part, deferred_next_donated = 0; + bool boundary = false; + + /* Go through the subreqs and find the next span of contiguous + * buffer that we then rejig (cifs, for example, needs the + * rsize renegotiating) and reissue. + */ + from = list_first_entry(&queue, struct netfs_io_subrequest, rreq_link); + list_move_tail(&from->rreq_link, &sublist); + start = from->start + from->transferred; + len = from->len - from->transferred; + + _debug("from R=%08x[%x] s=%llx ctl=%zx/%zx/%zx", + rreq->debug_id, from->debug_index, + from->start, from->consumed, from->transferred, from->len); + + if (test_bit(NETFS_SREQ_FAILED, &from->flags) || + !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) + goto abandon; + + deferred_next_donated = from->next_donated; + while ((subreq = list_first_entry_or_null( + &queue, struct netfs_io_subrequest, rreq_link))) { + if (subreq->start != start + len || + subreq->transferred > 0 || + !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) + break; + list_move_tail(&subreq->rreq_link, &sublist); + len += subreq->len; + deferred_next_donated = subreq->next_donated; + if (test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags)) + break; + } + + _debug(" - range: %llx-%llx %llx", start, start + len - 1, len); + + /* Determine the set of buffers we're going to use. Each + * subreq gets a subset of a single overall contiguous buffer. + */ + netfs_reset_iter(from); + source = from->io_iter; + source.count = len; + + /* Work through the sublist. 
*/ + while ((subreq = list_first_entry_or_null( + &sublist, struct netfs_io_subrequest, rreq_link))) { + list_del(&subreq->rreq_link); + + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; + subreq->start = start - subreq->transferred; + subreq->len = len + subreq->transferred; + stream0->sreq_max_len = subreq->len; + + __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); + __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); + + spin_lock_bh(&rreq->lock); + list_add_tail(&subreq->rreq_link, &rreq->subrequests); + subreq->prev_donated += rreq->prev_donated; + rreq->prev_donated = 0; + trace_netfs_sreq(subreq, netfs_sreq_trace_retry); + spin_unlock_bh(&rreq->lock); + + BUG_ON(!len); + + /* Renegotiate max_len (rsize) */ + if (rreq->netfs_ops->prepare_read(subreq) < 0) { + trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed); + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); + } + + part = umin(len, stream0->sreq_max_len); + if (unlikely(rreq->io_streams[0].sreq_max_segs)) + part = netfs_limit_iter(&source, 0, part, stream0->sreq_max_segs); + subreq->len = subreq->transferred + part; + subreq->io_iter = source; + iov_iter_truncate(&subreq->io_iter, part); + iov_iter_advance(&source, part); + len -= part; + start += part; + if (!len) { + if (boundary) + __set_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); + subreq->next_donated = deferred_next_donated; + } else { + __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); + subreq->next_donated = 0; + } + + netfs_reissue_read(rreq, subreq); + if (!len) + break; + + /* If we ran out of subrequests, allocate another. */ + if (list_empty(&sublist)) { + subreq = netfs_alloc_subrequest(rreq); + if (!subreq) + goto abandon; + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; + subreq->start = start; + + /* We get two refs, but need just one. */ + netfs_put_subrequest(subreq, false, netfs_sreq_trace_new); + trace_netfs_sreq(subreq, netfs_sreq_trace_split); + list_add_tail(&subreq->rreq_link, &sublist); + } + } + + /* If we managed to use fewer subreqs, we can discard the + * excess. + */ + while ((subreq = list_first_entry_or_null( + &sublist, struct netfs_io_subrequest, rreq_link))) { + trace_netfs_sreq(subreq, netfs_sreq_trace_discard); + list_del(&subreq->rreq_link); + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done); + } + + } while (!list_empty(&queue)); + + return; + + /* If we hit ENOMEM, fail all remaining subrequests */ +abandon: + list_splice_init(&sublist, &queue); + list_for_each_entry(subreq, &queue, rreq_link) { + if (!subreq->error) + subreq->error = -ENOMEM; + __clear_bit(NETFS_SREQ_FAILED, &subreq->flags); + __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); + __clear_bit(NETFS_SREQ_RETRYING, &subreq->flags); + } + spin_lock_bh(&rreq->lock); + list_splice_tail_init(&queue, &rreq->subrequests); + spin_unlock_bh(&rreq->lock); +} + +/* + * Retry reads. + */ +void netfs_retry_reads(struct netfs_io_request *rreq) +{ + trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit); + + atomic_inc(&rreq->nr_outstanding); + + netfs_retry_read_subrequests(rreq); + + if (atomic_dec_and_test(&rreq->nr_outstanding)) + netfs_rreq_terminated(rreq, false); +} + +/* + * Unlock any the pages that haven't been unlocked yet due to abandoned + * subrequests. 
+ */ +void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq) +{ + struct folio_queue *p; + + for (p = rreq->buffer; p; p = p->next) { + for (int slot = 0; slot < folioq_count(p); slot++) { + struct folio *folio = folioq_folio(p, slot); + + if (folio && !folioq_is_marked2(p, slot)) { + trace_netfs_folio(folio, netfs_folio_trace_abandon); + folio_unlock(folio); + } + } + } +} diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c index 5065289f5555..8e63516b40f6 100644 --- a/fs/netfs/stats.c +++ b/fs/netfs/stats.c @@ -32,6 +32,7 @@ atomic_t netfs_n_wh_buffered_write; atomic_t netfs_n_wh_writethrough; atomic_t netfs_n_wh_dio_write; atomic_t netfs_n_wh_writepages; +atomic_t netfs_n_wh_copy_to_cache; atomic_t netfs_n_wh_wstream_conflict; atomic_t netfs_n_wh_upload; atomic_t netfs_n_wh_upload_done; @@ -51,11 +52,12 @@ int netfs_stats_show(struct seq_file *m, void *v) atomic_read(&netfs_n_rh_read_folio), atomic_read(&netfs_n_rh_write_begin), atomic_read(&netfs_n_rh_write_zskip)); - seq_printf(m, "Writes : BW=%u WT=%u DW=%u WP=%u\n", + seq_printf(m, "Writes : BW=%u WT=%u DW=%u WP=%u 2C=%u\n", atomic_read(&netfs_n_wh_buffered_write), atomic_read(&netfs_n_wh_writethrough), atomic_read(&netfs_n_wh_dio_write), - atomic_read(&netfs_n_wh_writepages)); + atomic_read(&netfs_n_wh_writepages), + atomic_read(&netfs_n_wh_copy_to_cache)); seq_printf(m, "ZeroOps: ZR=%u sh=%u sk=%u\n", atomic_read(&netfs_n_rh_zero), atomic_read(&netfs_n_rh_short_read), diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 0116b336fa07..e4ac7f68450a 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -80,6 +80,12 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, unsigned long long collected_to = wreq->collected_to; unsigned int slot = wreq->buffer_head_slot; + if (wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) { + if (netfs_pgpriv2_unlock_copied_folios(wreq)) + *notes |= MADE_PROGRESS; + return; + } + if (slot >= folioq_nr_slots(folioq)) { folioq = netfs_delete_buffer_head(wreq); slot = 0; @@ -376,7 +382,8 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) smp_rmb(); collected_to = ULLONG_MAX; if (wreq->origin == NETFS_WRITEBACK || - wreq->origin == NETFS_WRITETHROUGH) + wreq->origin == NETFS_WRITETHROUGH || + wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) notes = BUFFERED; else notes = 0; diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 25fb7e166cc0..975436d3dc3f 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -95,7 +95,8 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, struct netfs_io_request *wreq; struct netfs_inode *ictx; bool is_buffered = (origin == NETFS_WRITEBACK || - origin == NETFS_WRITETHROUGH); + origin == NETFS_WRITETHROUGH || + origin == NETFS_PGPRIV2_COPY_TO_CACHE); wreq = netfs_alloc_request(mapping, file, start, 0, origin); if (IS_ERR(wreq)) @@ -161,10 +162,6 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, _enter("R=%x[%x]", wreq->debug_id, subreq->debug_index); - trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, - refcount_read(&subreq->ref), - netfs_sreq_trace_new); - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); stream->sreq_max_len = UINT_MAX; @@ -241,8 +238,8 @@ void netfs_reissue_write(struct netfs_io_stream *stream, netfs_do_issue_write(stream, subreq); } -static void netfs_issue_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream) +void netfs_issue_write(struct netfs_io_request *wreq, + struct 
netfs_io_stream *stream) { struct netfs_io_subrequest *subreq = stream->construct; @@ -259,9 +256,9 @@ static void netfs_issue_write(struct netfs_io_request *wreq, * we can avoid overrunning the credits obtained (cifs) and try to parallelise * content-crypto preparation with network writes. */ -static int netfs_advance_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream, - loff_t start, size_t len, bool to_eof) +int netfs_advance_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + loff_t start, size_t len, bool to_eof) { struct netfs_io_subrequest *subreq = stream->construct; size_t part; diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index 7a558dea75c4..810269ee0a50 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -267,6 +267,7 @@ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *fi rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id); /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ __set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags); + rreq->io_streams[0].sreq_max_len = NFS_SB(rreq->inode->i_sb)->rsize; return 0; } @@ -288,14 +289,6 @@ static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sre return netfs; } -static bool nfs_netfs_clamp_length(struct netfs_io_subrequest *sreq) -{ - size_t rsize = NFS_SB(sreq->rreq->inode->i_sb)->rsize; - - sreq->len = min(sreq->len, rsize); - return true; -} - static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) { struct nfs_netfs_io_data *netfs; @@ -304,17 +297,18 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) struct nfs_open_context *ctx = sreq->rreq->netfs_priv; struct page *page; unsigned long idx; + pgoff_t start, last; int err; - pgoff_t start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; - pgoff_t last = ((sreq->start + sreq->len - - sreq->transferred - 1) >> PAGE_SHIFT); + + start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; + last = ((sreq->start + sreq->len - sreq->transferred - 1) >> PAGE_SHIFT); nfs_pageio_init_read(&pgio, inode, false, &nfs_async_read_completion_ops); netfs = nfs_netfs_alloc(sreq); if (!netfs) - return netfs_subreq_terminated(sreq, -ENOMEM, false); + return netfs_read_subreq_terminated(sreq, -ENOMEM, false); pgio.pg_netfs = netfs; /* used in completion */ @@ -380,5 +374,4 @@ const struct netfs_request_ops nfs_netfs_ops = { .init_request = nfs_netfs_init_request, .free_request = nfs_netfs_free_request, .issue_read = nfs_netfs_issue_read, - .clamp_length = nfs_netfs_clamp_length }; diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index e8adae1bc260..772d485e96d3 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -60,8 +60,6 @@ static inline void nfs_netfs_get(struct nfs_netfs_io_data *netfs) static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs) { - ssize_t final_len; - /* Only the last RPC completion should call netfs_subreq_terminated() */ if (!refcount_dec_and_test(&netfs->refcount)) return; @@ -74,8 +72,9 @@ static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs) * Correct the final length here to be no larger than the netfs subrequest * length, and thus avoid netfs's "Subreq overread" warning message. 
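The correction referred to in that comment is just a clamp of the accumulated RPC byte counts to the subrequest length, applied once before termination so the collector never sees transferred > len. In isolation (the sizes are made up and the reason for the overshoot is immaterial here):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
        size_t sreq_len = 65536;
        size_t rpc_bytes[] = { 32768, 32768, 4096 };    /* per-RPC completions */
        unsigned long long total = 0;

        /* Sum what the individual RPCs reported... */
        for (size_t i = 0; i < sizeof(rpc_bytes) / sizeof(rpc_bytes[0]); i++)
                total += rpc_bytes[i];

        /* ...then clamp to the subrequest length before handing it to the
         * read collector, avoiding the "Subreq overread" warning.
         */
        size_t transferred = total > sreq_len ? sreq_len : (size_t)total;

        printf("raw=%llu reported=%zu\n", total, transferred);
        return 0;
}
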
*/ - final_len = min_t(s64, netfs->sreq->len, atomic64_read(&netfs->transferred)); - netfs_subreq_terminated(netfs->sreq, netfs->error ?: final_len, false); + netfs->sreq->transferred = min_t(s64, netfs->sreq->len, + atomic64_read(&netfs->transferred)); + netfs_read_subreq_terminated(netfs->sreq, netfs->error, false); kfree(netfs); } static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi) diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 595c4b673707..d5e1bbefd5e8 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1309,10 +1309,8 @@ cifs_readv_callback(struct mid_q_entry *mid) if (rdata->result == 0 || rdata->result == -EAGAIN) iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); rdata->credits.value = 0; - netfs_subreq_terminated(&rdata->subreq, - (rdata->result == 0 || rdata->result == -EAGAIN) ? - rdata->got_bytes : rdata->result, - false); + rdata->subreq.transferred += rdata->got_bytes; + netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false); release_mid(mid); add_credits(server, &credits, 0); } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 0ff1a286e9ee..59ac02bbdd19 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -140,25 +140,22 @@ static void cifs_netfs_invalidate_cache(struct netfs_io_request *wreq) } /* - * Split the read up according to how many credits we can get for each piece. - * It's okay to sleep here if we need to wait for more credit to become - * available. - * - * We also choose the server and allocate an operation ID to be cleaned up - * later. + * Negotiate the size of a read operation on behalf of the netfs library. */ -static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) +static int cifs_prepare_read(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; - struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr]; struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); struct TCP_Server_Info *server = req->server; struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); - int rc; + size_t size; + int rc = 0; - rdata->xid = get_xid(); - rdata->have_xid = true; + if (!rdata->have_xid) { + rdata->xid = get_xid(); + rdata->have_xid = true; + } rdata->server = server; if (cifs_sb->ctx->rsize == 0) @@ -166,13 +163,12 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) server->ops->negotiate_rsize(tlink_tcon(req->cfile->tlink), cifs_sb->ctx); - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, - &stream->sreq_max_len, &rdata->credits); - if (rc) { - subreq->error = rc; - return false; - } + &size, &rdata->credits); + if (rc) + return rc; + + rreq->io_streams[0].sreq_max_len = size; rdata->credits.in_flight_check = 1; rdata->credits.rreq_debug_id = rreq->debug_id; @@ -184,13 +180,11 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) server->credits, server->in_flight, 0, cifs_trace_rw_credits_read_submit); - subreq->len = umin(subreq->len, stream->sreq_max_len); - #ifdef CONFIG_CIFS_SMB_DIRECT if (server->smbd_conn) - stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; + rreq->io_streams[0].sreq_max_segs = server->smbd_conn->max_frmr_depth; #endif - return true; + return 0; } /* @@ -199,32 +193,41 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) * to only read a portion of that, but as long as we read something, the netfs * 
helper will call us again so that we can issue another read. */ -static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) +static void cifs_issue_read(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); + struct TCP_Server_Info *server = req->server; int rc = 0; cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n", __func__, rreq->debug_id, subreq->debug_index, rreq->mapping, subreq->transferred, subreq->len); + rc = adjust_credits(server, rdata, cifs_trace_rw_credits_issue_read_adjust); + if (rc) + goto failed; + if (req->cfile->invalidHandle) { do { rc = cifs_reopen_file(req->cfile, true); } while (rc == -EAGAIN); if (rc) - goto out; + goto failed; } if (subreq->rreq->origin != NETFS_DIO_READ) __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); rc = rdata->server->ops->async_readv(rdata); -out: if (rc) - netfs_subreq_terminated(subreq, rc, false); + goto failed; + return; + +failed: + netfs_read_subreq_terminated(subreq, rc, false); } /* @@ -331,8 +334,8 @@ const struct netfs_request_ops cifs_req_ops = { .init_request = cifs_init_request, .free_request = cifs_free_request, .free_subrequest = cifs_free_subrequest, - .clamp_length = cifs_clamp_length, - .issue_read = cifs_req_issue_read, + .prepare_read = cifs_prepare_read, + .issue_read = cifs_issue_read, .done = cifs_rreq_done, .begin_writeback = cifs_begin_writeback, .prepare_write = cifs_prepare_write, diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 83facb54276a..ff0c0017417b 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4498,9 +4498,7 @@ static void smb2_readv_worker(struct work_struct *work) struct cifs_io_subrequest *rdata = container_of(work, struct cifs_io_subrequest, subreq.work); - netfs_subreq_terminated(&rdata->subreq, - (rdata->result == 0 || rdata->result == -EAGAIN) ? 
- rdata->got_bytes : rdata->result, true); + netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false); } static void @@ -4554,6 +4552,7 @@ smb2_readv_callback(struct mid_q_entry *mid) break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: + __set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags); rdata->result = -EAGAIN; if (server->sign && rdata->got_bytes) /* reset bytes number since we can not check a sign */ @@ -4607,6 +4606,10 @@ smb2_readv_callback(struct mid_q_entry *mid) server->credits, server->in_flight, 0, cifs_trace_rw_credits_read_response_clear); rdata->credits.value = 0; + rdata->subreq.transferred += rdata->got_bytes; + if (rdata->subreq.start + rdata->subreq.transferred >= rdata->subreq.rreq->i_size) + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); + trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_progress); INIT_WORK(&rdata->subreq.work, smb2_readv_worker); queue_work(cifsiod_wq, &rdata->subreq.work); release_mid(mid); @@ -4870,6 +4873,7 @@ smb2_writev_callback(struct mid_q_entry *mid) server->credits, server->in_flight, 0, cifs_trace_rw_credits_write_response_clear); wdata->credits.value = 0; + trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress); cifs_write_subrequest_terminated(wdata, result ?: written, true); release_mid(mid); trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0, diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h index 52773613bf23..955680c3bb5f 100644 --- a/include/linux/folio_queue.h +++ b/include/linux/folio_queue.h @@ -27,6 +27,7 @@ struct folio_queue { struct folio_queue *prev; /* Previous queue segment of NULL */ unsigned long marks; /* 1-bit mark per folio */ unsigned long marks2; /* Second 1-bit mark per folio */ + unsigned long marks3; /* Third 1-bit mark per folio */ #if PAGEVEC_SIZE > BITS_PER_LONG #error marks is not big enough #endif @@ -39,6 +40,7 @@ static inline void folioq_init(struct folio_queue *folioq) folioq->prev = NULL; folioq->marks = 0; folioq->marks2 = 0; + folioq->marks3 = 0; } static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq) @@ -87,6 +89,21 @@ static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot) clear_bit(slot, &folioq->marks2); } +static inline bool folioq_is_marked3(const struct folio_queue *folioq, unsigned int slot) +{ + return test_bit(slot, &folioq->marks3); +} + +static inline void folioq_mark3(struct folio_queue *folioq, unsigned int slot) +{ + set_bit(slot, &folioq->marks3); +} + +static inline void folioq_unmark3(struct folio_queue *folioq, unsigned int slot) +{ + clear_bit(slot, &folioq->marks3); +} + static inline unsigned int __folio_order(struct folio *folio) { if (!folio_test_large(folio)) @@ -133,6 +150,7 @@ static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot) folioq->vec.folios[slot] = NULL; folioq_unmark(folioq, slot); folioq_unmark2(folioq, slot); + folioq_unmark3(folioq, slot); } #endif /* _LINUX_FOLIO_QUEUE_H */ diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 348f8f5ab5e6..c0f0c9c87d86 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -178,20 +178,26 @@ struct netfs_io_subrequest { unsigned long long start; /* Where to start the I/O */ size_t len; /* Size of the I/O */ size_t transferred; /* Amount of data transferred */ + size_t consumed; /* Amount of read data consumed */ + size_t prev_donated; /* Amount of data donated from previous subreq */ + size_t next_donated; /* Amount of data donated from next subreq */ refcount_t ref; 
short error; /* 0 or error that occurred */ unsigned short debug_index; /* Index in list (for debugging output) */ unsigned int nr_segs; /* Number of segs in io_iter */ enum netfs_io_source source; /* Where to read from/write to */ unsigned char stream_nr; /* I/O stream this belongs to */ + unsigned char curr_folioq_slot; /* Folio currently being read */ + unsigned char curr_folio_order; /* Order of folio */ + struct folio_queue *curr_folioq; /* Queue segment in which current folio resides */ unsigned long flags; #define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */ #define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */ -#define NETFS_SREQ_SHORT_IO 2 /* Set if the I/O was short */ #define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */ #define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */ #define NETFS_SREQ_ONDEMAND 5 /* Set if it's from on-demand read mode */ #define NETFS_SREQ_BOUNDARY 6 /* Set if ends on hard boundary (eg. ceph object) */ +#define NETFS_SREQ_HIT_EOF 7 /* Set if short due to EOF */ #define NETFS_SREQ_IN_PROGRESS 8 /* Unlocked when the subrequest completes */ #define NETFS_SREQ_NEED_RETRY 9 /* Set if the filesystem requests a retry */ #define NETFS_SREQ_RETRYING 10 /* Set if we're retrying */ @@ -201,12 +207,14 @@ struct netfs_io_subrequest { enum netfs_io_origin { NETFS_READAHEAD, /* This read was triggered by readahead */ NETFS_READPAGE, /* This read is a synchronous read */ + NETFS_READ_GAPS, /* This read is a synchronous read to fill gaps */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ NETFS_DIO_READ, /* This is a direct I/O read */ NETFS_WRITEBACK, /* This write was triggered by writepages */ NETFS_WRITETHROUGH, /* This write was made by netfs_perform_write() */ NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ NETFS_DIO_WRITE, /* This is a direct I/O write */ + NETFS_PGPRIV2_COPY_TO_CACHE, /* [DEPRECATED] This is writing read data to the cache */ nr__netfs_io_origin } __mode(byte); @@ -223,6 +231,7 @@ struct netfs_io_request { struct address_space *mapping; /* The mapping being accessed */ struct kiocb *iocb; /* AIO completion vector */ struct netfs_cache_resources cache_resources; + struct readahead_control *ractl; /* Readahead descriptor */ struct list_head proc_link; /* Link in netfs_iorequests */ struct list_head subrequests; /* Contributory I/O operations */ struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operations */ @@ -243,12 +252,10 @@ struct netfs_io_request { unsigned int nr_group_rel; /* Number of refs to release on ->group */ spinlock_t lock; /* Lock for queuing subreqs */ atomic_t nr_outstanding; /* Number of ops in progress */ - atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */ - size_t upper_len; /* Length can be extended to here */ unsigned long long submitted; /* Amount submitted for I/O so far */ unsigned long long len; /* Length of the request */ size_t transferred; /* Amount to be indicated as transferred */ - short error; /* 0 or error that occurred */ + long error; /* 0 or error that occurred */ enum netfs_io_origin origin; /* Origin of the request */ bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */ u8 buffer_head_slot; /* First slot in ->buffer */ @@ -259,9 +266,9 @@ struct netfs_io_request { unsigned long long collected_to; /* Point we've collected to */ unsigned long long cleaned_to; /* Position we've cleaned folios to */ pgoff_t no_unlock_folio; /* Don't unlock 
this folio after read */ + size_t prev_donated; /* Fallback for subreq->prev_donated */ refcount_t ref; unsigned long flags; -#define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */ #define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */ #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */ #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */ @@ -273,6 +280,7 @@ struct netfs_io_request { #define NETFS_RREQ_PAUSE 11 /* Pause subrequest generation */ #define NETFS_RREQ_USE_IO_ITER 12 /* Use ->io_iter rather than ->i_pages */ #define NETFS_RREQ_ALL_QUEUED 13 /* All subreqs are now queued */ +#define NETFS_RREQ_NEED_RETRY 14 /* Need to try retrying */ #define NETFS_RREQ_USE_PGPRIV2 31 /* [DEPRECATED] Use PG_private_2 to mark * write to cache on read */ const struct netfs_request_ops *netfs_ops; @@ -291,7 +299,7 @@ struct netfs_request_ops { /* Read request handling */ void (*expand_readahead)(struct netfs_io_request *rreq); - bool (*clamp_length)(struct netfs_io_subrequest *subreq); + int (*prepare_read)(struct netfs_io_subrequest *subreq); void (*issue_read)(struct netfs_io_subrequest *subreq); bool (*is_still_valid)(struct netfs_io_request *rreq); int (*check_write_begin)(struct file *file, loff_t pos, unsigned len, @@ -421,7 +429,10 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp); vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group); /* (Sub)request management API. */ -void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); +void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, + bool was_async); +void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, + int error, bool was_async); void netfs_get_subrequest(struct netfs_io_subrequest *subreq, enum netfs_sreq_ref_trace what); void netfs_put_subrequest(struct netfs_io_subrequest *subreq, diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 58bf23002fc1..7b26463cb98f 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -20,6 +20,7 @@ EM(netfs_read_trace_expanded, "EXPANDED ") \ EM(netfs_read_trace_readahead, "READAHEAD") \ EM(netfs_read_trace_readpage, "READPAGE ") \ + EM(netfs_read_trace_read_gaps, "READ-GAPS") \ EM(netfs_read_trace_prefetch_for_write, "PREFETCHW") \ E_(netfs_read_trace_write_begin, "WRITEBEGN") @@ -33,12 +34,14 @@ #define netfs_rreq_origins \ EM(NETFS_READAHEAD, "RA") \ EM(NETFS_READPAGE, "RP") \ + EM(NETFS_READ_GAPS, "RG") \ EM(NETFS_READ_FOR_WRITE, "RW") \ EM(NETFS_DIO_READ, "DR") \ EM(NETFS_WRITEBACK, "WB") \ EM(NETFS_WRITETHROUGH, "WT") \ EM(NETFS_UNBUFFERED_WRITE, "UW") \ - E_(NETFS_DIO_WRITE, "DW") + EM(NETFS_DIO_WRITE, "DW") \ + E_(NETFS_PGPRIV2_COPY_TO_CACHE, "2C") #define netfs_rreq_traces \ EM(netfs_rreq_trace_assess, "ASSESS ") \ @@ -69,15 +72,25 @@ E_(NETFS_INVALID_WRITE, "INVL") #define netfs_sreq_traces \ + EM(netfs_sreq_trace_add_donations, "+DON ") \ + EM(netfs_sreq_trace_added, "ADD ") \ + EM(netfs_sreq_trace_clear, "CLEAR") \ EM(netfs_sreq_trace_discard, "DSCRD") \ + EM(netfs_sreq_trace_donate_to_prev, "DON-P") \ + EM(netfs_sreq_trace_donate_to_next, "DON-N") \ EM(netfs_sreq_trace_download_instead, "RDOWN") \ EM(netfs_sreq_trace_fail, "FAIL ") \ EM(netfs_sreq_trace_free, "FREE ") \ + EM(netfs_sreq_trace_hit_eof, "EOF ") \ + EM(netfs_sreq_trace_io_progress, "IO ") \ EM(netfs_sreq_trace_limited, "LIMIT") \ EM(netfs_sreq_trace_prepare, "PREP ") \ 
EM(netfs_sreq_trace_prep_failed, "PRPFL") \ - EM(netfs_sreq_trace_resubmit_short, "SHORT") \ + EM(netfs_sreq_trace_progress, "PRGRS") \ + EM(netfs_sreq_trace_reprep_failed, "REPFL") \ EM(netfs_sreq_trace_retry, "RETRY") \ + EM(netfs_sreq_trace_short, "SHORT") \ + EM(netfs_sreq_trace_split, "SPLIT") \ EM(netfs_sreq_trace_submit, "SUBMT") \ EM(netfs_sreq_trace_terminated, "TERM ") \ EM(netfs_sreq_trace_write, "WRITE") \ @@ -118,7 +131,7 @@ EM(netfs_sreq_trace_new, "NEW ") \ EM(netfs_sreq_trace_put_cancel, "PUT CANCEL ") \ EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \ - EM(netfs_sreq_trace_put_discard, "PUT DISCARD") \ + EM(netfs_sreq_trace_put_consumed, "PUT CONSUME") \ EM(netfs_sreq_trace_put_done, "PUT DONE ") \ EM(netfs_sreq_trace_put_failed, "PUT FAILED ") \ EM(netfs_sreq_trace_put_merged, "PUT MERGED ") \ @@ -138,6 +151,7 @@ EM(netfs_flush_content, "flush") \ EM(netfs_streaming_filled_page, "mod-streamw-f") \ EM(netfs_streaming_cont_filled_page, "mod-streamw-f+") \ + EM(netfs_folio_trace_abandon, "abandon") \ EM(netfs_folio_trace_cancel_copy, "cancel-copy") \ EM(netfs_folio_trace_clear, "clear") \ EM(netfs_folio_trace_clear_cc, "clear-cc") \ @@ -154,7 +168,11 @@ EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ EM(netfs_folio_trace_not_under_wback, "!wback") \ EM(netfs_folio_trace_put, "put") \ + EM(netfs_folio_trace_read, "read") \ + EM(netfs_folio_trace_read_done, "read-done") \ EM(netfs_folio_trace_read_gaps, "read-gaps") \ + EM(netfs_folio_trace_read_put, "read-put") \ + EM(netfs_folio_trace_read_unlock, "read-unlock") \ EM(netfs_folio_trace_redirtied, "redirtied") \ EM(netfs_folio_trace_store, "store") \ EM(netfs_folio_trace_store_copy, "store-copy") \ @@ -167,6 +185,12 @@ EM(netfs_contig_trace_jump, "-->JUMP-->") \ E_(netfs_contig_trace_unlock, "Unlock") +#define netfs_donate_traces \ + EM(netfs_trace_donate_tail_to_prev, "tail-to-prev") \ + EM(netfs_trace_donate_to_prev, "to-prev") \ + EM(netfs_trace_donate_to_next, "to-next") \ + E_(netfs_trace_donate_to_deferred_next, "defer-next") + #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY @@ -184,6 +208,7 @@ enum netfs_rreq_ref_trace { netfs_rreq_ref_traces } __mode(byte); enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte); enum netfs_folio_trace { netfs_folio_traces } __mode(byte); enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byte); +enum netfs_donate_trace { netfs_donate_traces } __mode(byte); #endif @@ -206,6 +231,7 @@ netfs_rreq_ref_traces; netfs_sreq_ref_traces; netfs_folio_traces; netfs_collect_contig_traces; +netfs_donate_traces; /* * Now redefine the EM() and E_() macros to map the enums to the strings that @@ -226,6 +252,7 @@ TRACE_EVENT(netfs_read, TP_STRUCT__entry( __field(unsigned int, rreq ) __field(unsigned int, cookie ) + __field(loff_t, i_size ) __field(loff_t, start ) __field(size_t, len ) __field(enum netfs_read_trace, what ) @@ -235,18 +262,19 @@ TRACE_EVENT(netfs_read, TP_fast_assign( __entry->rreq = rreq->debug_id; __entry->cookie = rreq->cache_resources.debug_id; + __entry->i_size = rreq->i_size; __entry->start = start; __entry->len = len; __entry->what = what; __entry->netfs_inode = rreq->inode->i_ino; ), - TP_printk("R=%08x %s c=%08x ni=%x s=%llx %zx", + TP_printk("R=%08x %s c=%08x ni=%x s=%llx l=%zx sz=%llx", __entry->rreq, __print_symbolic(__entry->what, netfs_read_traces), __entry->cookie, __entry->netfs_inode, - __entry->start, __entry->len) + __entry->start, __entry->len, __entry->i_size) ); TRACE_EVENT(netfs_rreq, @@ -651,6 
+679,71 @@ TRACE_EVENT(netfs_collect_stream, __entry->collected_to, __entry->front) ); +TRACE_EVENT(netfs_progress, + TP_PROTO(const struct netfs_io_subrequest *subreq, + unsigned long long start, size_t avail, size_t part), + + TP_ARGS(subreq, start, avail, part), + + TP_STRUCT__entry( + __field(unsigned int, rreq) + __field(unsigned int, subreq) + __field(unsigned int, consumed) + __field(unsigned int, transferred) + __field(unsigned long long, f_start) + __field(unsigned int, f_avail) + __field(unsigned int, f_part) + __field(unsigned char, slot) + ), + + TP_fast_assign( + __entry->rreq = subreq->rreq->debug_id; + __entry->subreq = subreq->debug_index; + __entry->consumed = subreq->consumed; + __entry->transferred = subreq->transferred; + __entry->f_start = start; + __entry->f_avail = avail; + __entry->f_part = part; + __entry->slot = subreq->curr_folioq_slot; + ), + + TP_printk("R=%08x[%02x] s=%llx ct=%x/%x pa=%x/%x sl=%x", + __entry->rreq, __entry->subreq, __entry->f_start, + __entry->consumed, __entry->transferred, + __entry->f_part, __entry->f_avail, __entry->slot) + ); + +TRACE_EVENT(netfs_donate, + TP_PROTO(const struct netfs_io_request *rreq, + const struct netfs_io_subrequest *from, + const struct netfs_io_subrequest *to, + size_t amount, + enum netfs_donate_trace trace), + + TP_ARGS(rreq, from, to, amount, trace), + + TP_STRUCT__entry( + __field(unsigned int, rreq) + __field(unsigned int, from) + __field(unsigned int, to) + __field(unsigned int, amount) + __field(enum netfs_donate_trace, trace) + ), + + TP_fast_assign( + __entry->rreq = rreq->debug_id; + __entry->from = from->debug_index; + __entry->to = to ? to->debug_index : -1; + __entry->amount = amount; + __entry->trace = trace; + ), + + TP_printk("R=%08x[%02x] -> [%02x] %s am=%x", + __entry->rreq, __entry->from, __entry->to, + __print_symbolic(__entry->trace, netfs_donate_traces), + __entry->amount) + ); + #undef EM #undef E_ #endif /* _TRACE_NETFS_H */ From patchwork Wed Aug 14 20:38:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13764102 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E451C52D7F for ; Wed, 14 Aug 2024 20:41:44 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8B2136B009E; Wed, 14 Aug 2024 16:41:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8610E6B00B8; Wed, 14 Aug 2024 16:41:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6B15F6B00B9; Wed, 14 Aug 2024 16:41:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 4260C6B009E for ; Wed, 14 Aug 2024 16:41:43 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id ECDE81C4C64 for ; Wed, 14 Aug 2024 20:41:42 +0000 (UTC) X-FDA: 82452022044.24.A6E4536 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf06.hostedemail.com (Postfix) with ESMTP id 2E38D180015 for ; Wed, 14 Aug 2024 20:41:41 +0000 (UTC) Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 
From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 20/25] netfs: Remove fs/netfs/io.c Date: Wed, 14 Aug 2024 21:38:40 +0100 Message-ID:
<20240814203850.2240469-21-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 2E38D180015 X-Stat-Signature: 1m673ygxcu4zxeuxp74dor41i5rc94h9 X-HE-Tag: 1723668101-552652 X-HE-Meta: U2FsdGVkX1+FvS+WAGckDtSOy8gWcSwDuBwodY+8tHF1phn8eLAEz7fCLAJrzj/XWvRaw+t33Kj9xQD08pmX5e4+2+SCAd8Qn8gVK1FgWDIvDSxRv2JQ6682PSvL8FQn6YhojgO5mu4MYMmRDh1Suh4sdOJgJx89/2l/yBrYtqeXhMgjTG5Vv33xhIpSUiLzM7dNn9MxWRk3Xd05DdqZ27/vpT9m9A/jXPLxzEgycni8cViF3xvvvn88FB15ETLdZA8mCQkVjTfXhiQzMwnzzX1Qrjte1A3UpiZiMU+BaOjyJQqgITba4xE6tQJOhJW/Aa6qpTpflTAgpAG/SaxeIbEhK/fM3U5qAolMZnFF+mUU6Po3CAG12tjO9i1MMNnd9ls0eGusax+/CZzB0S8AQzym5td1MBpryw3WPBnHDmFzVC90PQo9eIdxWPrLenPbZrQUi6dZn0BDYFJD2/No69qd82lwAFjBrBBCWjedcEXqxuLxci7i6/Q5y9KXyEIqb0voGa1z/wN+LxbXZZWMm3IQMwKogza6eT1AsTz9rvpJoBdSXUVtoM+MBgnzZJnXyq1Ytg3uf8OYzK1ZiSZJeBfiGvSkh5HUyYoTeJC78rUvEXjfkyZIj52ENTDZ3EWuikqjt6rs7rsh2G5B9OcHvA90UQZIKOJE+50cO69piBarL6WULpfK99bAEHQddjB63TQKDexUD8Zf+VIu/mtd7gdH71sdmOlFBBI1rUT/9yzNaQvpWRXzNdbq5Q418IhCw3fe+6b8MNy0eqVQd7z7/VLUZoNdKK/gqgOiX3Rb6e9cXZiwD/qE24tCnc8YYpGzXQT+Cc9DIhxbWD/CZJ6yF/D7sScAofJ6FDWt5Eyft03YhUf4IkFYvatzTUFOImkoz4Yk0Ygszl1v6gJxPytilI+HJjB7Q9TCN3zhvv2HVku9bD3N9itf8JH1SI0Ay4VhgyLzRhDBeL6r3iHgO/u JNn50fhX vriuoj+csTgOY15q8e+KI8ln2e/A6o1o3KlOUHD28YYEKaqZtRQjluGCSLKpTODhOJAhjKPO5l12S6UEnIGfe7ro99JJZ0/bU0Ht7Ffs1jYJdA/9cECAQxKILqGSoCMZZD2hqj+XDZxu/lxqt89ru/Po8uPH8CWxa9r8AdyV/Utmru/3ZDJL7sOeSZuZWQ+nVDvBaFPBnUuJyQ3FhOdycsW9y9F9DFDrKNHEScXOHnx6tMXcI6z2i0NI5jm4McvDyPHBY21FwjO4vmrKmOKISVvaGNrjEWwbwa1uaEtD0ihfONZ/3iylXqXpHQtlTL7CG2X5REjOUADgQFr8GruSAx4oAFy5lVqnmK2BEu7LUBsh31uoewLWh7rrgSb5TBdvQKNhQ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Remove fs/netfs/io.c as it is no longer used. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/io.c | 794 -------------------------------------------------- 1 file changed, 794 deletions(-) delete mode 100644 fs/netfs/io.c diff --git a/fs/netfs/io.c b/fs/netfs/io.c deleted file mode 100644 index 8b9aaa99d787..000000000000 --- a/fs/netfs/io.c +++ /dev/null @@ -1,794 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* Network filesystem high-level read support. - * - * Copyright (C) 2021 Red Hat, Inc. All Rights Reserved. - * Written by David Howells (dhowells@redhat.com) - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include "internal.h" - -/* - * Clear the unread part of an I/O request. - */ -static void netfs_clear_unread(struct netfs_io_subrequest *subreq) -{ - iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); -} - -static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, - bool was_async) -{ - struct netfs_io_subrequest *subreq = priv; - - netfs_subreq_terminated(subreq, transferred_or_error, was_async); -} - -/* - * Issue a read against the cache. - * - Eats the caller's ref on subreq. 
- */ -static void netfs_read_from_cache(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq, - enum netfs_read_from_hole read_hole) -{ - struct netfs_cache_resources *cres = &rreq->cache_resources; - - netfs_stat(&netfs_n_rh_read); - cres->ops->read(cres, subreq->start, &subreq->io_iter, read_hole, - netfs_cache_read_terminated, subreq); -} - -/* - * Fill a subrequest region with zeroes. - */ -static void netfs_fill_with_zeroes(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq) -{ - netfs_stat(&netfs_n_rh_zero); - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - netfs_subreq_terminated(subreq, 0, false); -} - -/* - * Ask the netfs to issue a read request to the server for us. - * - * The netfs is expected to read from subreq->pos + subreq->transferred to - * subreq->pos + subreq->len - 1. It may not backtrack and write data into the - * buffer prior to the transferred point as it might clobber dirty data - * obtained from the cache. - * - * Alternatively, the netfs is allowed to indicate one of two things: - * - * - NETFS_SREQ_SHORT_READ: A short read - it will get called again to try and - * make progress. - * - * - NETFS_SREQ_CLEAR_TAIL: A short read - the rest of the buffer will be - * cleared. - */ -static void netfs_read_from_server(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq) -{ - netfs_stat(&netfs_n_rh_download); - - if (rreq->origin != NETFS_DIO_READ && - iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred) - pr_warn("R=%08x[%u] ITER PRE-MISMATCH %zx != %zx-%zx %lx\n", - rreq->debug_id, subreq->debug_index, - iov_iter_count(&subreq->io_iter), subreq->len, - subreq->transferred, subreq->flags); - rreq->netfs_ops->issue_read(subreq); -} - -/* - * Release those waiting. - */ -static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async) -{ - trace_netfs_rreq(rreq, netfs_rreq_trace_done); - netfs_clear_subrequests(rreq, was_async); - netfs_put_request(rreq, was_async, netfs_rreq_trace_put_complete); -} - -/* - * [DEPRECATED] Deal with the completion of writing the data to the cache. We - * have to clear the PG_fscache bits on the folios involved and release the - * caller's ref. - * - * May be called in softirq mode and we inherit a ref from the caller. - */ -static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq, - bool was_async) -{ - struct netfs_io_subrequest *subreq; - struct folio *folio; - pgoff_t unlocked = 0; - bool have_unlocked = false; - - rcu_read_lock(); - - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - XA_STATE(xas, &rreq->mapping->i_pages, subreq->start / PAGE_SIZE); - - xas_for_each(&xas, folio, (subreq->start + subreq->len - 1) / PAGE_SIZE) { - if (xas_retry(&xas, folio)) - continue; - - /* We might have multiple writes from the same huge - * folio, but we mustn't unlock a folio more than once. 
- */ - if (have_unlocked && folio->index <= unlocked) - continue; - unlocked = folio_next_index(folio) - 1; - trace_netfs_folio(folio, netfs_folio_trace_end_copy); - folio_end_private_2(folio); - have_unlocked = true; - } - } - - rcu_read_unlock(); - netfs_rreq_completed(rreq, was_async); -} - -static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error, - bool was_async) /* [DEPRECATED] */ -{ - struct netfs_io_subrequest *subreq = priv; - struct netfs_io_request *rreq = subreq->rreq; - - if (IS_ERR_VALUE(transferred_or_error)) { - netfs_stat(&netfs_n_rh_write_failed); - trace_netfs_failure(rreq, subreq, transferred_or_error, - netfs_fail_copy_to_cache); - } else { - netfs_stat(&netfs_n_rh_write_done); - } - - trace_netfs_sreq(subreq, netfs_sreq_trace_write_term); - - /* If we decrement nr_copy_ops to 0, the ref belongs to us. */ - if (atomic_dec_and_test(&rreq->nr_copy_ops)) - netfs_rreq_unmark_after_write(rreq, was_async); - - netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); -} - -/* - * [DEPRECATED] Perform any outstanding writes to the cache. We inherit a ref - * from the caller. - */ -static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq) -{ - struct netfs_cache_resources *cres = &rreq->cache_resources; - struct netfs_io_subrequest *subreq, *next, *p; - struct iov_iter iter; - int ret; - - trace_netfs_rreq(rreq, netfs_rreq_trace_copy); - - /* We don't want terminating writes trying to wake us up whilst we're - * still going through the list. - */ - atomic_inc(&rreq->nr_copy_ops); - - list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) { - if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { - list_del_init(&subreq->rreq_link); - netfs_put_subrequest(subreq, false, - netfs_sreq_trace_put_no_copy); - } - } - - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - /* Amalgamate adjacent writes */ - while (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { - next = list_next_entry(subreq, rreq_link); - if (next->start != subreq->start + subreq->len) - break; - subreq->len += next->len; - list_del_init(&next->rreq_link); - netfs_put_subrequest(next, false, - netfs_sreq_trace_put_merged); - } - - ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len, - subreq->len, rreq->i_size, true); - if (ret < 0) { - trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write); - trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip); - continue; - } - - iov_iter_xarray(&iter, ITER_SOURCE, &rreq->mapping->i_pages, - subreq->start, subreq->len); - - atomic_inc(&rreq->nr_copy_ops); - netfs_stat(&netfs_n_rh_write); - netfs_get_subrequest(subreq, netfs_sreq_trace_get_copy_to_cache); - trace_netfs_sreq(subreq, netfs_sreq_trace_write); - cres->ops->write(cres, subreq->start, &iter, - netfs_rreq_copy_terminated, subreq); - } - - /* If we decrement nr_copy_ops to 0, the usage ref belongs to us. */ - if (atomic_dec_and_test(&rreq->nr_copy_ops)) - netfs_rreq_unmark_after_write(rreq, false); -} - -static void netfs_rreq_write_to_cache_work(struct work_struct *work) /* [DEPRECATED] */ -{ - struct netfs_io_request *rreq = - container_of(work, struct netfs_io_request, work); - - netfs_rreq_do_write_to_cache(rreq); -} - -static void netfs_rreq_write_to_cache(struct netfs_io_request *rreq) /* [DEPRECATED] */ -{ - rreq->work.func = netfs_rreq_write_to_cache_work; - if (!queue_work(system_unbound_wq, &rreq->work)) - BUG(); -} - -/* - * Handle a short read. 
- */ -static void netfs_rreq_short_read(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq) -{ - __clear_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); - __set_bit(NETFS_SREQ_SEEK_DATA_READ, &subreq->flags); - - netfs_stat(&netfs_n_rh_short_read); - trace_netfs_sreq(subreq, netfs_sreq_trace_resubmit_short); - - netfs_get_subrequest(subreq, netfs_sreq_trace_get_short_read); - atomic_inc(&rreq->nr_outstanding); - if (subreq->source == NETFS_READ_FROM_CACHE) - netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_CLEAR); - else - netfs_read_from_server(rreq, subreq); -} - -/* - * Reset the subrequest iterator prior to resubmission. - */ -static void netfs_reset_subreq_iter(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq) -{ - size_t remaining = subreq->len - subreq->transferred; - size_t count = iov_iter_count(&subreq->io_iter); - - if (count == remaining) - return; - - _debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x\n", - rreq->debug_id, subreq->debug_index, - iov_iter_count(&subreq->io_iter), subreq->transferred, - subreq->len, rreq->i_size, - subreq->io_iter.iter_type); - - if (count < remaining) - iov_iter_revert(&subreq->io_iter, remaining - count); - else - iov_iter_advance(&subreq->io_iter, count - remaining); -} - -/* - * Resubmit any short or failed operations. Returns true if we got the rreq - * ref back. - */ -static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq) -{ - struct netfs_io_subrequest *subreq; - - WARN_ON(in_interrupt()); - - trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit); - - /* We don't want terminating submissions trying to wake us up whilst - * we're still going through the list. - */ - atomic_inc(&rreq->nr_outstanding); - - __clear_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - if (subreq->error) { - if (subreq->source != NETFS_READ_FROM_CACHE) - break; - subreq->source = NETFS_DOWNLOAD_FROM_SERVER; - subreq->error = 0; - netfs_stat(&netfs_n_rh_download_instead); - trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead); - netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - atomic_inc(&rreq->nr_outstanding); - netfs_reset_subreq_iter(rreq, subreq); - netfs_read_from_server(rreq, subreq); - } else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) { - netfs_rreq_short_read(rreq, subreq); - } - } - - /* If we decrement nr_outstanding to 0, the usage ref belongs to us. */ - if (atomic_dec_and_test(&rreq->nr_outstanding)) - return true; - - wake_up_var(&rreq->nr_outstanding); - return false; -} - -/* - * Check to see if the data read is still valid. - */ -static void netfs_rreq_is_still_valid(struct netfs_io_request *rreq) -{ - struct netfs_io_subrequest *subreq; - - if (!rreq->netfs_ops->is_still_valid || - rreq->netfs_ops->is_still_valid(rreq)) - return; - - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - if (subreq->source == NETFS_READ_FROM_CACHE) { - subreq->error = -ESTALE; - __set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); - } - } -} - -/* - * Determine how much we can admit to having read from a DIO read. - */ -static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) -{ - struct netfs_io_subrequest *subreq; - unsigned int i; - size_t transferred = 0; - - for (i = 0; i < rreq->direct_bv_count; i++) { - flush_dcache_page(rreq->direct_bv[i].bv_page); - // TODO: cifs marks pages in the destination buffer - // dirty under some circumstances after a read. Do we - // need to do that too? 
- set_page_dirty(rreq->direct_bv[i].bv_page); - } - - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - if (subreq->error || subreq->transferred == 0) - break; - transferred += subreq->transferred; - if (subreq->transferred < subreq->len) - break; - } - - for (i = 0; i < rreq->direct_bv_count; i++) - flush_dcache_page(rreq->direct_bv[i].bv_page); - - rreq->transferred = transferred; - task_io_account_read(transferred); - - if (rreq->iocb) { - rreq->iocb->ki_pos += transferred; - if (rreq->iocb->ki_complete) - rreq->iocb->ki_complete( - rreq->iocb, rreq->error ? rreq->error : transferred); - } - if (rreq->netfs_ops->done) - rreq->netfs_ops->done(rreq); - inode_dio_end(rreq->inode); -} - -/* - * Assess the state of a read request and decide what to do next. - * - * Note that we could be in an ordinary kernel thread, on a workqueue or in - * softirq context at this point. We inherit a ref from the caller. - */ -static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async) -{ - trace_netfs_rreq(rreq, netfs_rreq_trace_assess); - -again: - netfs_rreq_is_still_valid(rreq); - - if (!test_bit(NETFS_RREQ_FAILED, &rreq->flags) && - test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags)) { - if (netfs_rreq_perform_resubmissions(rreq)) - goto again; - return; - } - - if (rreq->origin != NETFS_DIO_READ) - netfs_rreq_unlock_folios(rreq); - else - netfs_rreq_assess_dio(rreq); - - trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); - wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); - - if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags) && - test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) - return netfs_rreq_write_to_cache(rreq); - - netfs_rreq_completed(rreq, was_async); -} - -void netfs_rreq_work(struct work_struct *work) -{ - struct netfs_io_request *rreq = - container_of(work, struct netfs_io_request, work); - netfs_rreq_assess(rreq, false); -} - -/* - * Handle the completion of all outstanding I/O operations on a read request. - * We inherit a ref from the caller. - */ -static void netfs_rreq_terminated(struct netfs_io_request *rreq, - bool was_async) -{ - if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) && - was_async) { - if (!queue_work(system_unbound_wq, &rreq->work)) - BUG(); - } else { - netfs_rreq_assess(rreq, was_async); - } -} - -/** - * netfs_subreq_terminated - Note the termination of an I/O operation. - * @subreq: The I/O request that has terminated. - * @transferred_or_error: The amount of data transferred or an error code. - * @was_async: The termination was asynchronous - * - * This tells the read helper that a contributory I/O operation has terminated, - * one way or another, and that it should integrate the results. - * - * The caller indicates in @transferred_or_error the outcome of the operation, - * supplying a positive value to indicate the number of bytes transferred, 0 to - * indicate a failure to transfer anything that should be retried or a negative - * error code. The helper will look after reissuing I/O operations as - * appropriate and writing downloaded data to the cache. - * - * If @was_async is true, the caller might be running in softirq or interrupt - * context and we can't sleep. 
- */ -void netfs_subreq_terminated(struct netfs_io_subrequest *subreq, - ssize_t transferred_or_error, - bool was_async) -{ - struct netfs_io_request *rreq = subreq->rreq; - int u; - - _enter("R=%x[%x]{%llx,%lx},%zd", - rreq->debug_id, subreq->debug_index, - subreq->start, subreq->flags, transferred_or_error); - - switch (subreq->source) { - case NETFS_READ_FROM_CACHE: - netfs_stat(&netfs_n_rh_read_done); - break; - case NETFS_DOWNLOAD_FROM_SERVER: - netfs_stat(&netfs_n_rh_download_done); - break; - default: - break; - } - - if (IS_ERR_VALUE(transferred_or_error)) { - subreq->error = transferred_or_error; - trace_netfs_failure(rreq, subreq, transferred_or_error, - netfs_fail_read); - goto failed; - } - - if (WARN(transferred_or_error > subreq->len - subreq->transferred, - "Subreq overread: R%x[%x] %zd > %zu - %zu", - rreq->debug_id, subreq->debug_index, - transferred_or_error, subreq->len, subreq->transferred)) - transferred_or_error = subreq->len - subreq->transferred; - - subreq->error = 0; - subreq->transferred += transferred_or_error; - if (subreq->transferred < subreq->len) - goto incomplete; - -complete: - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); - if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) - set_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags); - -out: - trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); - - /* If we decrement nr_outstanding to 0, the ref belongs to us. */ - u = atomic_dec_return(&rreq->nr_outstanding); - if (u == 0) - netfs_rreq_terminated(rreq, was_async); - else if (u == 1) - wake_up_var(&rreq->nr_outstanding); - - netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); - return; - -incomplete: - if (test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags)) { - netfs_clear_unread(subreq); - subreq->transferred = subreq->len; - goto complete; - } - - if (transferred_or_error == 0) { - if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { - if (rreq->origin != NETFS_DIO_READ) - subreq->error = -ENODATA; - goto failed; - } - } else { - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); - } - - __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); - set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); - goto out; - -failed: - if (subreq->source == NETFS_READ_FROM_CACHE) { - netfs_stat(&netfs_n_rh_read_failed); - set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); - } else { - netfs_stat(&netfs_n_rh_download_failed); - set_bit(NETFS_RREQ_FAILED, &rreq->flags); - rreq->error = subreq->error; - } - goto out; -} -EXPORT_SYMBOL(netfs_subreq_terminated); - -static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_subrequest *subreq, - loff_t i_size) -{ - struct netfs_io_request *rreq = subreq->rreq; - struct netfs_cache_resources *cres = &rreq->cache_resources; - - if (cres->ops) - return cres->ops->prepare_read(subreq, i_size); - if (subreq->start >= rreq->i_size) - return NETFS_FILL_WITH_ZEROES; - return NETFS_DOWNLOAD_FROM_SERVER; -} - -/* - * Work out what sort of subrequest the next one will be. 
- */ -static enum netfs_io_source -netfs_rreq_prepare_read(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq, - struct iov_iter *io_iter) -{ - enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER; - struct netfs_inode *ictx = netfs_inode(rreq->inode); - size_t lsize; - - _enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size); - - if (rreq->origin != NETFS_DIO_READ) { - source = netfs_cache_prepare_read(subreq, rreq->i_size); - if (source == NETFS_INVALID_READ) - goto out; - } - - if (source == NETFS_DOWNLOAD_FROM_SERVER) { - /* Call out to the netfs to let it shrink the request to fit - * its own I/O sizes and boundaries. If it shinks it here, it - * will be called again to make simultaneous calls; if it wants - * to make serial calls, it can indicate a short read and then - * we will call it again. - */ - if (rreq->origin != NETFS_DIO_READ) { - if (subreq->start >= ictx->zero_point) { - source = NETFS_FILL_WITH_ZEROES; - goto set; - } - if (subreq->len > ictx->zero_point - subreq->start) - subreq->len = ictx->zero_point - subreq->start; - - /* We limit buffered reads to the EOF, but let the - * server deal with larger-than-EOF DIO/unbuffered - * reads. - */ - if (subreq->len > rreq->i_size - subreq->start) - subreq->len = rreq->i_size - subreq->start; - } - if (rreq->rsize && subreq->len > rreq->rsize) - subreq->len = rreq->rsize; - - if (rreq->netfs_ops->clamp_length && - !rreq->netfs_ops->clamp_length(subreq)) { - source = NETFS_INVALID_READ; - goto out; - } - - if (rreq->io_streams[0].sreq_max_segs) { - lsize = netfs_limit_iter(io_iter, 0, subreq->len, - rreq->io_streams[0].sreq_max_segs); - if (subreq->len > lsize) { - subreq->len = lsize; - trace_netfs_sreq(subreq, netfs_sreq_trace_limited); - } - } - } - -set: - if (subreq->len > rreq->len) - pr_warn("R=%08x[%u] SREQ>RREQ %zx > %llx\n", - rreq->debug_id, subreq->debug_index, - subreq->len, rreq->len); - - if (WARN_ON(subreq->len == 0)) { - source = NETFS_INVALID_READ; - goto out; - } - - subreq->source = source; - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); - - subreq->io_iter = *io_iter; - iov_iter_truncate(&subreq->io_iter, subreq->len); - iov_iter_advance(io_iter, subreq->len); -out: - subreq->source = source; - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); - return source; -} - -/* - * Slice off a piece of a read request and submit an I/O request for it. - */ -static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq, - struct iov_iter *io_iter) -{ - struct netfs_io_subrequest *subreq; - enum netfs_io_source source; - - subreq = netfs_alloc_subrequest(rreq); - if (!subreq) - return false; - - subreq->start = rreq->start + rreq->submitted; - subreq->len = io_iter->count; - - _debug("slice %llx,%zx,%llx", subreq->start, subreq->len, rreq->submitted); - list_add_tail(&subreq->rreq_link, &rreq->subrequests); - - /* Call out to the cache to find out what it can do with the remaining - * subset. It tells us in subreq->flags what it decided should be done - * and adjusts subreq->len down if the subset crosses a cache boundary. - * - * Then when we hand the subset, it can choose to take a subset of that - * (the starts must coincide), in which case, we go around the loop - * again and ask it to download the next piece. 
- */ - source = netfs_rreq_prepare_read(rreq, subreq, io_iter); - if (source == NETFS_INVALID_READ) - goto subreq_failed; - - atomic_inc(&rreq->nr_outstanding); - - rreq->submitted += subreq->len; - - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - switch (source) { - case NETFS_FILL_WITH_ZEROES: - netfs_fill_with_zeroes(rreq, subreq); - break; - case NETFS_DOWNLOAD_FROM_SERVER: - netfs_read_from_server(rreq, subreq); - break; - case NETFS_READ_FROM_CACHE: - netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_IGNORE); - break; - default: - BUG(); - } - - return true; - -subreq_failed: - rreq->error = subreq->error; - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_failed); - return false; -} - -/* - * Begin the process of reading in a chunk of data, where that data may be - * stitched together from multiple sources, including multiple servers and the - * local cache. - */ -int netfs_begin_read(struct netfs_io_request *rreq, bool sync) -{ - struct iov_iter io_iter; - int ret; - - _enter("R=%x %llx-%llx", - rreq->debug_id, rreq->start, rreq->start + rreq->len - 1); - - if (rreq->len == 0) { - pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); - return -EIO; - } - - if (rreq->origin == NETFS_DIO_READ) - inode_dio_begin(rreq->inode); - - // TODO: Use bounce buffer if requested - rreq->io_iter = rreq->iter; - - /* Chop the read into slices according to what the cache and the netfs - * want and submit each one. - */ - netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding); - atomic_set(&rreq->nr_outstanding, 1); - io_iter = rreq->io_iter; - do { - _debug("submit %llx + %llx >= %llx", - rreq->start, rreq->submitted, rreq->i_size); - if (!netfs_rreq_submit_slice(rreq, &io_iter)) - break; - if (test_bit(NETFS_SREQ_NO_PROGRESS, &rreq->flags)) - break; - if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && - test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) - break; - - } while (rreq->submitted < rreq->len); - - if (!rreq->submitted) { - netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); - if (rreq->origin == NETFS_DIO_READ) - inode_dio_end(rreq->inode); - ret = 0; - goto out; - } - - if (sync) { - /* Keep nr_outstanding incremented so that the ref always - * belongs to us, and the service code isn't punted off to a - * random thread pool to process. Note that this might start - * further work, such as writing to the cache. - */ - wait_var_event(&rreq->nr_outstanding, - atomic_read(&rreq->nr_outstanding) == 1); - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_assess(rreq, false); - - trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); - wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, - TASK_UNINTERRUPTIBLE); - - ret = rreq->error; - if (ret == 0 && rreq->submitted < rreq->len && - rreq->origin != NETFS_DIO_READ) { - trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); - ret = -EIO; - } - } else { - /* If we decrement nr_outstanding to 0, the ref belongs to us. 
*/ - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_assess(rreq, false); - ret = -EIOCBQUEUED; - } - -out: - return ret; -} From patchwork Wed Aug 14 20:38:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13764103
From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 21/25] cachefiles, netfs: Fix write to partial block at EOF Date: Wed, 14 Aug 2024 21:38:41 +0100 Message-ID: <20240814203850.2240469-22-dhowells@redhat.com> In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com> References: <20240814203850.2240469-1-dhowells@redhat.com> MIME-Version: 1.0
spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Because it uses DIO writes, cachefiles is unable to make a write to the backing file if that write is not aligned to and sized according to the backing file's DIO block alignment. This makes it tricky to handle a write to the cache where the EOF on the network file is not correctly aligned. To get around this, netfslib attempts to tell the driver it is calling how much more data there is available beyond the EOF that it can use to pad the write (netfslib preclears the part of the folio above the EOF). However, it tries to tell the cache what the maximum length is, but doesn't calculate this correctly; and, in any case, cachefiles actually ignores the value and just skips the block. Fix this by: (1) Change the value passed to indicate the amount of extra data that can be added to the operation (now ->submit_extendable_to). This is much simpler to calculate as it's just the end of the folio minus the top of the data within the folio - rather than having to account for data spread over multiple folios. (2) Make cachefiles add some of this data if the subrequest it is given ends at the network file's i_size if the extra data is sufficient to pad out to a whole block. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/cachefiles/io.c | 14 ++++++++++++++ fs/netfs/read_pgpriv2.c | 4 ++-- fs/netfs/write_issue.c | 5 ++--- include/linux/netfs.h | 2 +- 4 files changed, 19 insertions(+), 6 deletions(-) diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c index 5b82ba7785cd..6a821a959b59 100644 --- a/fs/cachefiles/io.c +++ b/fs/cachefiles/io.c @@ -648,6 +648,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq) struct netfs_cache_resources *cres = &wreq->cache_resources; struct cachefiles_object *object = cachefiles_cres_object(cres); struct cachefiles_cache *cache = object->volume->cache; + struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr]; const struct cred *saved_cred; size_t off, pre, post, len = subreq->len; loff_t start = subreq->start; @@ -661,6 +662,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq) if (off) { pre = CACHEFILES_DIO_BLOCK_SIZE - off; if (pre >= len) { + fscache_count_dio_misfit(); netfs_write_subrequest_terminated(subreq, len, false); return; } @@ -671,10 +673,22 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq) } /* We also need to end on the cache granularity boundary */ + if (start + len == wreq->i_size) { + size_t part = len % CACHEFILES_DIO_BLOCK_SIZE; + size_t need = CACHEFILES_DIO_BLOCK_SIZE - part; + + if (part && stream->submit_extendable_to >= need) { + len += need; + subreq->len += need; + subreq->io_iter.count += need; + } + } + post = len & (CACHEFILES_DIO_BLOCK_SIZE - 1); if (post) { len -= post; if (len == 0) { + fscache_count_dio_misfit(); netfs_write_subrequest_terminated(subreq, post, false); return; } diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c index 9439461d535f..ba5af89d37fa 100644 --- a/fs/netfs/read_pgpriv2.c +++ b/fs/netfs/read_pgpriv2.c @@ -97,7 +97,7 @@ static int netfs_pgpriv2_copy_folio(struct netfs_io_request *wreq, struct folio if (netfs_buffer_append_folio(wreq, folio, false) < 0) return -ENOMEM; - cache->submit_max_len = fsize; + cache->submit_extendable_to = fsize; cache->submit_off = 0; cache->submit_len = flen; @@ 
@@ -112,10 +112,10 @@ static int netfs_pgpriv2_copy_folio(struct netfs_io_request *wreq, struct folio wreq->io_iter.iov_offset = cache->submit_off; atomic64_set(&wreq->issued_to, fpos + cache->submit_off); + cache->submit_extendable_to = fsize - cache->submit_off; part = netfs_advance_write(wreq, cache, fpos + cache->submit_off, cache->submit_len, to_eof); cache->submit_off += part; - cache->submit_max_len -= part; if (part > cache->submit_len) cache->submit_len = 0; else diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 975436d3dc3f..f7d59f0bb8c2 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -283,6 +283,7 @@ int netfs_advance_write(struct netfs_io_request *wreq, _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, len); subreq->len += part; subreq->nr_segs++; + stream->submit_extendable_to -= part; if (subreq->len >= stream->sreq_max_len || subreq->nr_segs >= stream->sreq_max_segs || @@ -424,7 +425,6 @@ static int netfs_write_folio(struct netfs_io_request *wreq, */ for (int s = 0; s < NR_IO_STREAMS; s++) { stream = &wreq->io_streams[s]; - stream->submit_max_len = fsize; stream->submit_off = foff; stream->submit_len = flen; if ((stream->source == NETFS_WRITE_TO_CACHE && streamw) || @@ -432,7 +432,6 @@ static int netfs_write_folio(struct netfs_io_request *wreq, fgroup == NETFS_FOLIO_COPY_TO_CACHE)) { stream->submit_off = UINT_MAX; stream->submit_len = 0; - stream->submit_max_len = 0; } } @@ -462,10 +461,10 @@ static int netfs_write_folio(struct netfs_io_request *wreq, wreq->io_iter.iov_offset = stream->submit_off; atomic64_set(&wreq->issued_to, fpos + stream->submit_off); + stream->submit_extendable_to = fsize - stream->submit_off; part = netfs_advance_write(wreq, stream, fpos + stream->submit_off, stream->submit_len, to_eof); stream->submit_off += part; - stream->submit_max_len -= part; if (part > stream->submit_len) stream->submit_len = 0; else diff --git a/include/linux/netfs.h b/include/linux/netfs.h index c0f0c9c87d86..5eaceef41e6c 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -135,7 +135,7 @@ struct netfs_io_stream { unsigned int sreq_max_segs; /* 0 or max number of segments in an iterator */ unsigned int submit_off; /* Folio offset we're submitting from */ unsigned int submit_len; /* Amount of data left to submit */ - unsigned int submit_max_len; /* Amount I/O can be rounded up to */ + unsigned int submit_extendable_to; /* Amount I/O can be rounded up to */ void (*prepare_write)(struct netfs_io_subrequest *subreq); void (*issue_write)(struct netfs_io_subrequest *subreq); /* Collection tracking */
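To see the EOF padding rule from the cachefiles_issue_write() change above in isolation, the following standalone C sketch shows when a tail write gets rounded up to a whole DIO block.  This is not the kernel code: BLOCK stands in for CACHEFILES_DIO_BLOCK_SIZE and pad_for_dio() is a hypothetical helper; "extendable" plays the role of stream->submit_extendable_to (precleared space in the folio beyond the EOF).

#include <stddef.h>
#include <stdio.h>

#define BLOCK 4096	/* stand-in for CACHEFILES_DIO_BLOCK_SIZE */

/*
 * Return how many bytes of padding to add to a write of [start, start+len)
 * so that it ends on a DIO block boundary.  Padding is only allowed when the
 * write ends exactly at the file size and there is at least that much
 * precleared space beyond the EOF.
 */
static size_t pad_for_dio(size_t start, size_t len, size_t i_size,
			  size_t extendable)
{
	size_t part, need;

	if (start + len != i_size)
		return 0;		/* only the block containing the EOF is padded */

	part = len % BLOCK;		/* bytes in the final, partial block */
	if (!part)
		return 0;		/* already block-aligned */

	need = BLOCK - part;		/* bytes required to round it up */
	return extendable >= need ? need : 0;
}

int main(void)
{
	/* A 10000-byte write ending at i_size with 8192 spare bytes: pad by 2288. */
	printf("%zu\n", pad_for_dio(0, 10000, 10000, 8192));
	return 0;
}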
From patchwork Wed Aug 14 20:38:42 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13764104
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 22/25] netfs: Cancel dirty folios that have no storage destination
Date: Wed, 14 Aug 2024 21:38:42 +0100
Message-ID: <20240814203850.2240469-23-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
MIME-Version: 1.0

Kafs wants to be able to cache the contents of directories (and symlinks), but whilst these are downloaded from the server with the FS.FetchData RPC op and similar, the same as for regular files, they can't be updated by FS.StoreData, but rather have special operations (FS.MakeDir, etc.).

Now, rather than redownloading a directory's content after each change made to that directory, kafs modifies the local blob.  This blob can be saved out to the cache, and since it's using netfslib, kafs just marks the folios dirty and lets ->writepages() on the directory take care of it, as for a regular file.

This is fine as long as there is a cache: although the upload stream is disabled, there's a cache stream to drive the procedure.
But if the cache goes away in the meantime, suddenly there's no way to do any writes and the code gets confused, complains "R=%x: No submit" to dmesg and leaves the dirty folio hanging.

Fix this by just cancelling the store of the folio if neither stream is active.  (If there's no cache at the time of dirtying, we should just not mark the folio dirty).

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/write_issue.c       | 6 +++++-
 include/trace/events/netfs.h | 1 +
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index f7d59f0bb8c2..04e66d587f77 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -400,13 +400,17 @@ static int netfs_write_folio(struct netfs_io_request *wreq, folio_unlock(folio); if (fgroup == NETFS_FOLIO_COPY_TO_CACHE) { - if (!fscache_resources_valid(&wreq->cache_resources)) { + if (!cache->avail) { trace_netfs_folio(folio, netfs_folio_trace_cancel_copy); netfs_issue_write(wreq, upload); netfs_folio_written_back(folio); return 0; } trace_netfs_folio(folio, netfs_folio_trace_store_copy); + } else if (!upload->avail && !cache->avail) { + trace_netfs_folio(folio, netfs_folio_trace_cancel_store); + netfs_folio_written_back(folio); + return 0; } else if (!upload->construct) { trace_netfs_folio(folio, netfs_folio_trace_store); } else { diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 7b26463cb98f..76bd42a96815 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -153,6 +153,7 @@ EM(netfs_streaming_cont_filled_page, "mod-streamw-f+") \ EM(netfs_folio_trace_abandon, "abandon") \ EM(netfs_folio_trace_cancel_copy, "cancel-copy") \ + EM(netfs_folio_trace_cancel_store, "cancel-store") \ EM(netfs_folio_trace_clear, "clear") \ EM(netfs_folio_trace_clear_cc, "clear-cc") \ EM(netfs_folio_trace_clear_g, "clear-g") \
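The routing decision that netfs_write_folio() ends up making can be boiled down roughly as follows.  This is a standalone, simplified sketch, not the kernel code: route_folio() and the enum are invented for the illustration, with the booleans standing in for the copy-to-cache group and the two streams' ->avail flags.

#include <stdbool.h>
#include <stdio.h>

enum folio_action {
	ACT_STORE,		/* issue the write to upload and/or cache */
	ACT_CANCEL_COPY,	/* copy-to-cache requested but no cache */
	ACT_CANCEL_STORE,	/* no upload stream and no cache stream */
};

static enum folio_action route_folio(bool copy_to_cache_only,
				     bool upload_avail, bool cache_avail)
{
	if (copy_to_cache_only)
		return cache_avail ? ACT_STORE : ACT_CANCEL_COPY;
	if (!upload_avail && !cache_avail)
		return ACT_CANCEL_STORE;	/* the case this patch adds */
	return ACT_STORE;
}

int main(void)
{
	/* A dirty directory folio with the cache gone: cancel the store. */
	printf("%d\n", route_folio(false, false, false) == ACT_CANCEL_STORE);
	return 0;
}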
From patchwork Wed Aug 14 20:38:43 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13764105
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Enzo Matsumiya
Subject: [PATCH v2 23/25] cifs: Use iterate_and_advance*() routines directly for hashing
Date: Wed, 14 Aug 2024 21:38:43 +0100
Message-ID: <20240814203850.2240469-24-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
MIME-Version: 1.0
Replace the bespoke cifs loops that hash data from ITER_BVEC and ITER_KVEC iterators with iterate_and_advance_kernel() - a variant on iterate_and_advance() that only supports kernel-internal ITER_* types and not UBUF/IOVEC types.

The bespoke ITER_XARRAY handling is left in place because we don't really want to be calling crypto_shash_update() under the RCU read lock for large amounts of data; besides, ITER_XARRAY is going to be phased out.

Signed-off-by: David Howells
cc: Steve French
cc: Paulo Alcantara
cc: Tom Talpey
cc: Enzo Matsumiya
cc: linux-cifs@vger.kernel.org
---
 fs/smb/client/cifsencrypt.c | 109 ++++++++----------------------------
 include/linux/iov_iter.h    |  47 ++++++++++++++++
 2 files changed, 70 insertions(+), 86 deletions(-)

diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c index 6322f0f68a17..991a1ab047e7 100644 --- a/fs/smb/client/cifsencrypt.c +++ b/fs/smb/client/cifsencrypt.c @@ -21,82 +21,10 @@ #include #include #include +#include #include "../common/arc4.h" #include -/* - * Hash data from a BVEC-type iterator.
- */ -static int cifs_shash_bvec(const struct iov_iter *iter, ssize_t maxsize, - struct shash_desc *shash) -{ - const struct bio_vec *bv = iter->bvec; - unsigned long start = iter->iov_offset; - unsigned int i; - void *p; - int ret; - - for (i = 0; i < iter->nr_segs; i++) { - size_t off, len; - - len = bv[i].bv_len; - if (start >= len) { - start -= len; - continue; - } - - len = min_t(size_t, maxsize, len - start); - off = bv[i].bv_offset + start; - - p = kmap_local_page(bv[i].bv_page); - ret = crypto_shash_update(shash, p + off, len); - kunmap_local(p); - if (ret < 0) - return ret; - - maxsize -= len; - if (maxsize <= 0) - break; - start = 0; - } - - return 0; -} - -/* - * Hash data from a KVEC-type iterator. - */ -static int cifs_shash_kvec(const struct iov_iter *iter, ssize_t maxsize, - struct shash_desc *shash) -{ - const struct kvec *kv = iter->kvec; - unsigned long start = iter->iov_offset; - unsigned int i; - int ret; - - for (i = 0; i < iter->nr_segs; i++) { - size_t len; - - len = kv[i].iov_len; - if (start >= len) { - start -= len; - continue; - } - - len = min_t(size_t, maxsize, len - start); - ret = crypto_shash_update(shash, kv[i].iov_base + start, len); - if (ret < 0) - return ret; - maxsize -= len; - - if (maxsize <= 0) - break; - start = 0; - } - - return 0; -} - /* * Hash data from an XARRAY-type iterator. */ @@ -145,27 +73,36 @@ static ssize_t cifs_shash_xarray(const struct iov_iter *iter, ssize_t maxsize, return 0; } +static size_t cifs_shash_step(void *iter_base, size_t progress, size_t len, + void *priv, void *priv2) +{ + struct shash_desc *shash = priv; + int ret, *pret = priv2; + + ret = crypto_shash_update(shash, iter_base, len); + if (ret < 0) { + *pret = ret; + return len; + } + return 0; +} + /* * Pass the data from an iterator into a hash. */ static int cifs_shash_iter(const struct iov_iter *iter, size_t maxsize, struct shash_desc *shash) { - if (maxsize == 0) - return 0; + struct iov_iter tmp_iter = *iter; + int err = -EIO; - switch (iov_iter_type(iter)) { - case ITER_BVEC: - return cifs_shash_bvec(iter, maxsize, shash); - case ITER_KVEC: - return cifs_shash_kvec(iter, maxsize, shash); - case ITER_XARRAY: + if (iov_iter_type(iter) == ITER_XARRAY) return cifs_shash_xarray(iter, maxsize, shash); - default: - pr_err("cifs_shash_iter(%u) unsupported\n", iov_iter_type(iter)); - WARN_ON_ONCE(1); - return -EIO; - } + + if (iterate_and_advance_kernel(&tmp_iter, maxsize, shash, &err, + cifs_shash_step) != maxsize) + return err; + return 0; } int __cifs_calc_signature(struct smb_rqst *rqst, diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h index a223370a59a7..c4aa58032faf 100644 --- a/include/linux/iov_iter.h +++ b/include/linux/iov_iter.h @@ -328,4 +328,51 @@ size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv, return iterate_and_advance2(iter, len, priv, NULL, ustep, step); } +/** + * iterate_and_advance_kernel - Iterate over a kernel-internal iterator + * @iter: The iterator to iterate over. + * @len: The amount to iterate over. + * @priv: Data for the step functions. + * @priv2: More data for the step functions. + * @step: Function for other iterators; given kernel addresses. + * + * Iterate over the next part of an iterator, up to the specified length. The + * buffer is presented in segments, which for kernel iteration are broken up by + * physical pages and mapped, with the mapped address being presented. + * + * [!] 
Note: This will only handle BVEC, KVEC, FOLIOQ, XARRAY and DISCARD-type + * iterators; it will not handle UBUF or IOVEC-type iterators. + * + * A step function, @step, must be provided; it is given the mapped kernel + * address and length of each segment.  No user addresses are presented, so + * no faulting or pinning is required. + * + * The step function is passed the address and length of the segment, @priv, + * @priv2 and the amount of data so far iterated over (which can, for example, + * be added to @priv to point to the right part of a second buffer).  The step + * function should return the amount of the segment it didn't process (ie. 0 + * indicates complete processing). + * + * This function returns the amount of data processed (ie. 0 means nothing was + * processed and the value of @len means processed to completion). + */ +static __always_inline +size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv, + void *priv2, iov_step_f step) +{ + if (unlikely(iter->count < len)) + len = iter->count; + if (unlikely(!len)) + return 0; + if (iov_iter_is_bvec(iter)) + return iterate_bvec(iter, len, priv, priv2, step); + if (iov_iter_is_kvec(iter)) + return iterate_kvec(iter, len, priv, priv2, step); + if (iov_iter_is_folioq(iter)) + return iterate_folioq(iter, len, priv, priv2, step); + if (iov_iter_is_xarray(iter)) + return iterate_xarray(iter, len, priv, priv2, step); + return iterate_discard(iter, len, priv, priv2, step); +} + #endif /* _LINUX_IOV_ITER_H */
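The step-function convention documented for iterate_and_advance_kernel() above - hand each contiguous segment to a callback, which returns how much of that segment it did not consume, a non-zero return ending the walk - can be mimicked in plain userspace C as below.  This is only an illustrative sketch of the calling convention, not the kernel iterator API; walk_segments(), xor_step() and the types are made up for the example.

#include <stddef.h>
#include <stdio.h>

typedef size_t (*step_fn)(void *base, size_t progress, size_t len,
			  void *priv, void *priv2);

/* Feed each segment to @step; stop early if the step leaves anything behind. */
static size_t walk_segments(void **segs, size_t *lens, size_t nsegs,
			    void *priv, void *priv2, step_fn step)
{
	size_t done = 0;

	for (size_t i = 0; i < nsegs; i++) {
		size_t left = step(segs[i], done, lens[i], priv, priv2);

		done += lens[i] - left;
		if (left)
			break;		/* step reported a problem via priv2 */
	}
	return done;
}

/* A toy "hash": XOR every byte into *(unsigned char *)priv. */
static size_t xor_step(void *base, size_t progress, size_t len,
		       void *priv, void *priv2)
{
	unsigned char *acc = priv;

	(void)progress;
	(void)priv2;
	for (size_t i = 0; i < len; i++)
		*acc ^= ((unsigned char *)base)[i];
	return 0;			/* fully consumed */
}

int main(void)
{
	char a[] = "abc", b[] = "de";
	void *segs[] = { a, b };
	size_t lens[] = { 3, 2 };
	unsigned char acc = 0;

	size_t n = walk_segments(segs, lens, 2, &acc, NULL, xor_step);
	printf("processed %zu bytes, hash %#x\n", n, acc);
	return 0;
}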
From patchwork Wed Aug 14 20:38:44 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13764106
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Enzo Matsumiya
Subject: [PATCH v2 24/25] cifs: Switch crypto buffer to use a folio_queue rather than an xarray
Date: Wed, 14 Aug 2024 21:38:44 +0100
Message-ID: <20240814203850.2240469-25-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
MIME-Version: 1.0
Switch cifs from using an xarray to hold the transport crypto buffer to using a folio_queue and use ITER_FOLIOQ rather than ITER_XARRAY.

This is part of the process of phasing out ITER_XARRAY.

Signed-off-by: David Howells
cc: Steve French
cc: Paulo Alcantara
cc: Tom Talpey
cc: Enzo Matsumiya
cc: linux-cifs@vger.kernel.org
---
 fs/smb/client/cifsglob.h |   2 +-
 fs/smb/client/smb2ops.c  | 218 +++++++++++++++++++++------------------
 2 files changed, 121 insertions(+), 99 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 1028881098e1..cba3572915ae 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -256,7 +256,7 @@ struct smb_rqst { unsigned int rq_nvec; /* number of kvecs in array */ size_t rq_iter_size; /* Amount of data in ->rq_iter */ struct iov_iter rq_iter; /* Data iterator */ - struct xarray rq_buffer; /* Page buffer for encryption */ + struct folio_queue *rq_buffer; /* Buffer for encryption */ }; struct mid_q_entry; diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index 322cabc69c6f..cb9a18e31b03 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include "cifsfs.h" #include "cifsglob.h" @@ -4356,30 +4357,86 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst, } /* - * Clear a read buffer, discarding the folios which have XA_MARK_0 set. + * Clear a read buffer, discarding the folios which have the 1st mark set.
*/ -static void cifs_clear_xarray_buffer(struct xarray *buffer) +static void cifs_clear_folioq_buffer(struct folio_queue *buffer) { + struct folio_queue *folioq; + + while ((folioq = buffer)) { + for (int s = 0; s < folioq_count(folioq); s++) + if (folioq_is_marked(folioq, s)) + folio_put(folioq_folio(folioq, s)); + buffer = folioq->next; + kfree(folioq); + } +} + +/* + * Allocate buffer space into a folio queue. + */ +static struct folio_queue *cifs_alloc_folioq_buffer(ssize_t size) +{ + struct folio_queue *buffer = NULL, *tail = NULL, *p; struct folio *folio; + unsigned int slot; + + do { + if (!tail || folioq_full(tail)) { + p = kmalloc(sizeof(*p), GFP_NOFS); + if (!p) + goto nomem; + folioq_init(p); + if (tail) { + tail->next = p; + p->prev = tail; + } else { + buffer = p; + } + tail = p; + } + + folio = folio_alloc(GFP_KERNEL|__GFP_HIGHMEM, 0); + if (!folio) + goto nomem; + + slot = folioq_append_mark(tail, folio); + size -= folioq_folio_size(tail, slot); + } while (size > 0); + + return buffer; + +nomem: + cifs_clear_folioq_buffer(buffer); + return NULL; +} + +/* + * Copy data from an iterator to the folios in a folio queue buffer. + */ +static bool cifs_copy_iter_to_folioq(struct iov_iter *iter, size_t size, + struct folio_queue *buffer) +{ + for (; buffer; buffer = buffer->next) { + for (int s = 0; s < folioq_count(buffer); s++) { + struct folio *folio = folioq_folio(buffer, s); + size_t part = folioq_folio_size(buffer, s); - XA_STATE(xas, buffer, 0); + part = umin(part, size); - rcu_read_lock(); - xas_for_each_marked(&xas, folio, ULONG_MAX, XA_MARK_0) { - folio_put(folio); + if (copy_folio_from_iter(folio, 0, part, iter) != part) + return false; + size -= part; + } } - rcu_read_unlock(); - xa_destroy(buffer); + return true; } void smb3_free_compound_rqst(int num_rqst, struct smb_rqst *rqst) { - int i; - - for (i = 0; i < num_rqst; i++) - if (!xa_empty(&rqst[i].rq_buffer)) - cifs_clear_xarray_buffer(&rqst[i].rq_buffer); + for (int i = 0; i < num_rqst; i++) + cifs_clear_folioq_buffer(rqst[i].rq_buffer); } /* @@ -4400,53 +4457,33 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst, struct smb_rqst *new_rq, struct smb_rqst *old_rq) { struct smb2_transform_hdr *tr_hdr = new_rq[0].rq_iov[0].iov_base; - struct page *page; unsigned int orig_len = 0; - int i, j; int rc = -ENOMEM; - for (i = 1; i < num_rqst; i++) { + for (int i = 1; i < num_rqst; i++) { struct smb_rqst *old = &old_rq[i - 1]; struct smb_rqst *new = &new_rq[i]; - struct xarray *buffer = &new->rq_buffer; - size_t size = iov_iter_count(&old->rq_iter), seg, copied = 0; + struct folio_queue *buffer; + size_t size = iov_iter_count(&old->rq_iter); orig_len += smb_rqst_len(server, old); new->rq_iov = old->rq_iov; new->rq_nvec = old->rq_nvec; - xa_init(buffer); - if (size > 0) { - unsigned int npages = DIV_ROUND_UP(size, PAGE_SIZE); - - for (j = 0; j < npages; j++) { - void *o; - - rc = -ENOMEM; - page = alloc_page(GFP_KERNEL|__GFP_HIGHMEM); - if (!page) - goto err_free; - page->index = j; - o = xa_store(buffer, j, page, GFP_KERNEL); - if (xa_is_err(o)) { - rc = xa_err(o); - put_page(page); - goto err_free; - } + buffer = cifs_alloc_folioq_buffer(size); + if (!buffer) + goto err_free; - xa_set_mark(buffer, j, XA_MARK_0); + new->rq_buffer = buffer; + iov_iter_folio_queue(&new->rq_iter, ITER_SOURCE, + buffer, 0, 0, size); + new->rq_iter_size = size; - seg = min_t(size_t, size - copied, PAGE_SIZE); - if (copy_page_from_iter(page, 0, seg, &old->rq_iter) != seg) { - rc = -EFAULT; - goto err_free; - } - copied += seg; + 
if (!cifs_copy_iter_to_folioq(&old->rq_iter, size, buffer)) { + rc = -EIO; + goto err_free; } - iov_iter_xarray(&new->rq_iter, ITER_SOURCE, - buffer, 0, size); - new->rq_iter_size = size; } } @@ -4511,22 +4548,23 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf, } static int -cifs_copy_pages_to_iter(struct xarray *pages, unsigned int data_size, - unsigned int skip, struct iov_iter *iter) +cifs_copy_folioq_to_iter(struct folio_queue *folioq, size_t data_size, + size_t skip, struct iov_iter *iter) { - struct page *page; - unsigned long index; - - xa_for_each(pages, index, page) { - size_t n, len = min_t(unsigned int, PAGE_SIZE - skip, data_size); - - n = copy_page_to_iter(page, skip, len, iter); - if (n != len) { - cifs_dbg(VFS, "%s: something went wrong\n", __func__); - return -EIO; + for (; folioq; folioq = folioq->next) { + for (int s = 0; s < folioq_count(folioq); s++) { + struct folio *folio = folioq_folio(folioq, s); + size_t fsize = folio_size(folio); + size_t n, len = umin(fsize - skip, data_size); + + n = copy_folio_to_iter(folio, skip, len, iter); + if (n != len) { + cifs_dbg(VFS, "%s: something went wrong\n", __func__); + return -EIO; + } + data_size -= n; + skip = 0; } - data_size -= n; - skip = 0; } return 0; @@ -4534,8 +4572,8 @@ cifs_copy_pages_to_iter(struct xarray *pages, unsigned int data_size, static int handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, - char *buf, unsigned int buf_len, struct xarray *pages, - unsigned int pages_len, bool is_offloaded) + char *buf, unsigned int buf_len, struct folio_queue *buffer, + unsigned int buffer_len, bool is_offloaded) { unsigned int data_offset; unsigned int data_len; @@ -4632,7 +4670,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, return 0; } - if (data_len > pages_len - pad_len) { + if (data_len > buffer_len - pad_len) { /* data_len is corrupt -- discard frame */ rdata->result = -EIO; if (is_offloaded) @@ -4643,8 +4681,8 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, } /* Copy the data to the output I/O iterator. 
*/ - rdata->result = cifs_copy_pages_to_iter(pages, pages_len, - cur_off, &rdata->subreq.io_iter); + rdata->result = cifs_copy_folioq_to_iter(buffer, buffer_len, + cur_off, &rdata->subreq.io_iter); if (rdata->result != 0) { if (is_offloaded) mid->mid_state = MID_RESPONSE_MALFORMED; @@ -4652,12 +4690,11 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, dequeue_mid(mid, rdata->result); return 0; } - rdata->got_bytes = pages_len; + rdata->got_bytes = buffer_len; } else if (buf_len >= data_offset + data_len) { /* read response payload is in buf */ - WARN_ONCE(pages && !xa_empty(pages), - "read data can be either in buf or in pages"); + WARN_ONCE(buffer, "read data can be either in buf or in buffer"); length = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter); if (length < 0) return length; @@ -4683,7 +4720,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, struct smb2_decrypt_work { struct work_struct decrypt; struct TCP_Server_Info *server; - struct xarray buffer; + struct folio_queue *buffer; char *buf; unsigned int len; }; @@ -4697,7 +4734,7 @@ static void smb2_decrypt_offload(struct work_struct *work) struct mid_q_entry *mid; struct iov_iter iter; - iov_iter_xarray(&iter, ITER_DEST, &dw->buffer, 0, dw->len); + iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, dw->len); rc = decrypt_raw_data(dw->server, dw->buf, dw->server->vals->read_rsp_size, &iter, true); if (rc) { @@ -4713,7 +4750,7 @@ static void smb2_decrypt_offload(struct work_struct *work) mid->decrypted = true; rc = handle_read_data(dw->server, mid, dw->buf, dw->server->vals->read_rsp_size, - &dw->buffer, dw->len, + dw->buffer, dw->len, true); if (rc >= 0) { #ifdef CONFIG_CIFS_STATS2 @@ -4746,7 +4783,7 @@ static void smb2_decrypt_offload(struct work_struct *work) } free_pages: - cifs_clear_xarray_buffer(&dw->buffer); + cifs_clear_folioq_buffer(dw->buffer); cifs_small_buf_release(dw->buf); kfree(dw); } @@ -4756,20 +4793,17 @@ static int receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid, int *num_mids) { - struct page *page; char *buf = server->smallbuf; struct smb2_transform_hdr *tr_hdr = (struct smb2_transform_hdr *)buf; struct iov_iter iter; - unsigned int len, npages; + unsigned int len; unsigned int buflen = server->pdu_size; int rc; - int i = 0; struct smb2_decrypt_work *dw; dw = kzalloc(sizeof(struct smb2_decrypt_work), GFP_KERNEL); if (!dw) return -ENOMEM; - xa_init(&dw->buffer); INIT_WORK(&dw->decrypt, smb2_decrypt_offload); dw->server = server; @@ -4785,26 +4819,14 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid, len = le32_to_cpu(tr_hdr->OriginalMessageSize) - server->vals->read_rsp_size; dw->len = len; - npages = DIV_ROUND_UP(len, PAGE_SIZE); + len = round_up(dw->len, PAGE_SIZE); rc = -ENOMEM; - for (; i < npages; i++) { - void *old; - - page = alloc_page(GFP_KERNEL|__GFP_HIGHMEM); - if (!page) - goto discard_data; - page->index = i; - old = xa_store(&dw->buffer, i, page, GFP_KERNEL); - if (xa_is_err(old)) { - rc = xa_err(old); - put_page(page); - goto discard_data; - } - xa_set_mark(&dw->buffer, i, XA_MARK_0); - } + dw->buffer = cifs_alloc_folioq_buffer(len); + if (!dw->buffer) + goto discard_data; - iov_iter_xarray(&iter, ITER_DEST, &dw->buffer, 0, npages * PAGE_SIZE); + iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, len); /* Read the data into the buffer and clear excess bufferage. 
*/ rc = cifs_read_iter_from_socket(server, &iter, dw->len); @@ -4812,9 +4834,9 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid, goto discard_data; server->total_read += rc; - if (rc < npages * PAGE_SIZE) - iov_iter_zero(npages * PAGE_SIZE - rc, &iter); - iov_iter_revert(&iter, npages * PAGE_SIZE); + if (rc < len) + iov_iter_zero(len - rc, &iter); + iov_iter_revert(&iter, len); iov_iter_truncate(&iter, dw->len); rc = cifs_discard_remaining_data(server); @@ -4849,7 +4871,7 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid, (*mid)->decrypted = true; rc = handle_read_data(server, *mid, buf, server->vals->read_rsp_size, - &dw->buffer, dw->len, false); + dw->buffer, dw->len, false); if (rc >= 0) { if (server->ops->is_network_name_deleted) { server->ops->is_network_name_deleted(buf, @@ -4859,7 +4881,7 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid, } free_pages: - cifs_clear_xarray_buffer(&dw->buffer); + cifs_clear_folioq_buffer(dw->buffer); free_dw: kfree(dw); return rc;
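The shape of the buffer management introduced above - allocate a chain of fixed-size chunks until a byte count is covered, then tear the chain down as a whole - can be sketched in ordinary C as below.  It is a rough analogue only: struct chunk_queue, alloc_buffer() and clear_buffer() are invented for the illustration and do not mirror the real folio_queue API, which batches several folios per list node and carries per-slot marks.

#include <stddef.h>
#include <stdlib.h>

#define CHUNK 4096	/* stand-in for one folio's worth of buffer */

struct chunk_queue {
	struct chunk_queue *next;
	char data[CHUNK];
};

/* Free the whole chain, mirroring the "clear buffer" step. */
static void clear_buffer(struct chunk_queue *q)
{
	while (q) {
		struct chunk_queue *next = q->next;

		free(q);
		q = next;
	}
}

/* Chain together enough chunks to cover @size bytes, or return NULL. */
static struct chunk_queue *alloc_buffer(size_t size)
{
	struct chunk_queue *head = NULL, *tail = NULL;

	while (size > 0) {
		struct chunk_queue *p = calloc(1, sizeof(*p));

		if (!p) {
			clear_buffer(head);	/* unwind on failure */
			return NULL;
		}
		if (tail)
			tail->next = p;
		else
			head = p;
		tail = p;
		size -= size > CHUNK ? CHUNK : size;
	}
	return head;
}

int main(void)
{
	struct chunk_queue *buf = alloc_buffer(10000);	/* three chunks */

	clear_buffer(buf);
	return 0;
}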
From patchwork Wed Aug 14 20:38:45 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13764107
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Enzo Matsumiya
Subject: [PATCH v2 25/25] cifs: Don't support ITER_XARRAY
Date: Wed, 14 Aug 2024 21:38:45 +0100
Message-ID: <20240814203850.2240469-26-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
References: <20240814203850.2240469-1-dhowells@redhat.com>
MIME-Version: 1.0
There's now no need to support ITER_XARRAY in cifs as netfslib hands down ITER_FOLIOQ instead - and that's simpler to use with iterate_and_advance() as it doesn't hold the RCU read lock over the step function.

This is part of the process of phasing out ITER_XARRAY.

Signed-off-by: David Howells
cc: Steve French
cc: Paulo Alcantara
cc: Tom Talpey
cc: Enzo Matsumiya
cc: linux-cifs@vger.kernel.org
---
 fs/smb/client/cifsencrypt.c | 51 -------------------------------------
 fs/smb/client/smbdirect.c   | 49 -----------------------------------
 2 files changed, 100 deletions(-)

diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c index 991a1ab047e7..7481b21a0489 100644 --- a/fs/smb/client/cifsencrypt.c +++ b/fs/smb/client/cifsencrypt.c @@ -25,54 +25,6 @@ #include "../common/arc4.h" #include -/* - * Hash data from an XARRAY-type iterator.
- */ -static ssize_t cifs_shash_xarray(const struct iov_iter *iter, ssize_t maxsize, - struct shash_desc *shash) -{ - struct folio *folios[16], *folio; - unsigned int nr, i, j, npages; - loff_t start = iter->xarray_start + iter->iov_offset; - pgoff_t last, index = start / PAGE_SIZE; - ssize_t ret = 0; - size_t len, offset, foffset; - void *p; - - if (maxsize == 0) - return 0; - - last = (start + maxsize - 1) / PAGE_SIZE; - do { - nr = xa_extract(iter->xarray, (void **)folios, index, last, - ARRAY_SIZE(folios), XA_PRESENT); - if (nr == 0) - return -EIO; - - for (i = 0; i < nr; i++) { - folio = folios[i]; - npages = folio_nr_pages(folio); - foffset = start - folio_pos(folio); - offset = foffset % PAGE_SIZE; - for (j = foffset / PAGE_SIZE; j < npages; j++) { - len = min_t(size_t, maxsize, PAGE_SIZE - offset); - p = kmap_local_page(folio_page(folio, j)); - ret = crypto_shash_update(shash, p, len); - kunmap_local(p); - if (ret < 0) - return ret; - maxsize -= len; - if (maxsize <= 0) - return 0; - start += len; - offset = 0; - index++; - } - } - } while (nr == ARRAY_SIZE(folios)); - return 0; -} - static size_t cifs_shash_step(void *iter_base, size_t progress, size_t len, void *priv, void *priv2) { @@ -96,9 +48,6 @@ static int cifs_shash_iter(const struct iov_iter *iter, size_t maxsize, struct iov_iter tmp_iter = *iter; int err = -EIO; - if (iov_iter_type(iter) == ITER_XARRAY) - return cifs_shash_xarray(iter, maxsize, shash); - if (iterate_and_advance_kernel(&tmp_iter, maxsize, shash, &err, cifs_shash_step) != maxsize) return err; diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c index c946b38ca825..80262a36030f 100644 --- a/fs/smb/client/smbdirect.c +++ b/fs/smb/client/smbdirect.c @@ -2584,52 +2584,6 @@ static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter, return ret; } -/* - * Extract folio fragments from an XARRAY-class iterator and add them to an - * RDMA list. The folios are not pinned. - */ -static ssize_t smb_extract_xarray_to_rdma(struct iov_iter *iter, - struct smb_extract_to_rdma *rdma, - ssize_t maxsize) -{ - struct xarray *xa = iter->xarray; - struct folio *folio; - loff_t start = iter->xarray_start + iter->iov_offset; - pgoff_t index = start / PAGE_SIZE; - ssize_t ret = 0; - size_t off, len; - XA_STATE(xas, xa, index); - - rcu_read_lock(); - - xas_for_each(&xas, folio, ULONG_MAX) { - if (xas_retry(&xas, folio)) - continue; - if (WARN_ON(xa_is_value(folio))) - break; - if (WARN_ON(folio_test_hugetlb(folio))) - break; - - off = offset_in_folio(folio, start); - len = min_t(size_t, maxsize, folio_size(folio) - off); - - if (!smb_set_sge(rdma, folio_page(folio, 0), off, len)) { - rcu_read_unlock(); - return -EIO; - } - - maxsize -= len; - ret += len; - if (rdma->nr_sge >= rdma->max_sge || maxsize <= 0) - break; - } - - rcu_read_unlock(); - if (ret > 0) - iov_iter_advance(iter, ret); - return ret; -} - /* * Extract page fragments from up to the given amount of the source iterator * and build up an RDMA list that refers to all of those bits. The RDMA list @@ -2657,9 +2611,6 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len, case ITER_FOLIOQ: ret = smb_extract_folioq_to_rdma(iter, rdma, len); break; - case ITER_XARRAY: - ret = smb_extract_xarray_to_rdma(iter, rdma, len); - break; default: WARN_ON_ONCE(1); return -EIO;