From patchwork Thu Jun 20 17:31:19 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705983
From: David Howells
Subject: [PATCH 01/17] netfs: Fix io_uring based write-through
Date: Thu, 20 Jun 2024 18:31:19 +0100
Message-ID: <20240620173137.610345-2-dhowells@redhat.com>

[This was included in v2 of 9b038d004ce95551cb35381c49fe896c5bc11ffe, but v1
got pushed instead]

Fix netfs_unbuffered_write_iter_locked() to set the total request length in
the netfs_io_request struct rather than leaving it as zero.

Fixes: 288ace2f57c9 ("netfs: New writeback implementation")
Signed-off-by: David Howells
cc: Jeff Layton
cc: Steve French
cc: Enzo Matsumiya
cc: Christian Brauner
cc: netfs@lists.linux.dev
cc: v9fs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: linux-cifs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/direct_write.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index e14cd53ac9fd..88f2adfab75e 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -92,8 +92,9 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
 	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
 	if (async)
 		wreq->iocb = iocb;
+	wreq->len = iov_iter_count(&wreq->io_iter);
 	wreq->cleanup = netfs_cleanup_dio_write;
-	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), iov_iter_count(&wreq->io_iter));
+	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len);
 	if (ret < 0) {
 		_debug("begin = %zd", ret);
 		goto out;
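For illustration only (not part of the patch): a minimal userspace sketch of
the kind of asynchronous write that exercises the kiocb-based path touched
above.  It assumes liburing; the file path and the use of O_DIRECT are
assumptions, and whether a given mount routes such a write through
netfs_unbuffered_write_iter_locked() depends on its configuration.

/* Hedged sketch: async 4 KiB write via io_uring; path and values made up. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd;

	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 'x', 4096);

	/* Hypothetical file on a netfs-backed mount (e.g. cifs, 9p, afs). */
	fd = open("/mnt/netfs/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0)
		return 1;

	io_uring_queue_init(8, &ring, 0);
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, 4096, 0);
	io_uring_submit(&ring);			/* completes asynchronously */
	io_uring_wait_cqe(&ring, &cqe);
	printf("write completed: res=%d\n", cqe->res);	/* expect 4096 */
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	close(fd);
	free(buf);
	return 0;
}
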

From patchwork Thu Jun 20 17:31:20 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705984
From: David Howells
Subject: [PATCH 02/17] netfs, cifs: Move CIFS_INO_MODIFIED_ATTR to netfs_inode
Date: Thu, 20 Jun 2024 18:31:20 +0100
Message-ID: <20240620173137.610345-3-dhowells@redhat.com>

Move CIFS_INO_MODIFIED_ATTR to netfs_inode as NETFS_ICTX_MODIFIED_ATTR and
then make netfs_perform_write() set it.  This means that cifs doesn't need to
implement the ->post_modify() hook.

Signed-off-by: David Howells
cc: Jeff Layton
cc: Steve French
cc: Paulo Alcantara
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/buffered_write.c | 10 ++++++++--
 fs/smb/client/cifsglob.h  |  1 -
 fs/smb/client/file.c      |  9 +--------
 include/linux/netfs.h     |  1 +
 4 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 07bc1fd43530..41d556fe382a 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -405,8 +405,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 	} while (iov_iter_count(iter));
 
 out:
-	if (likely(written) && ctx->ops->post_modify)
-		ctx->ops->post_modify(inode);
+	if (likely(written)) {
+		/* Set indication that ctime and mtime got updated in case
+		 * close is deferred.
+		 */
+		set_bit(NETFS_ICTX_MODIFIED_ATTR, &ctx->flags);
+		if (unlikely(ctx->ops->post_modify))
+			ctx->ops->post_modify(inode);
+	}
 
 	if (unlikely(wreq)) {
 		ret2 = netfs_end_writethrough(wreq, &wbc, writethrough);
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 73482734a8d8..4b00512fb9f9 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -1569,7 +1569,6 @@ struct cifsInodeInfo {
 #define CIFS_INO_DELETE_PENDING		(3) /* delete pending on server */
 #define CIFS_INO_INVALID_MAPPING	(4) /* pagecache is invalid */
 #define CIFS_INO_LOCK			(5) /* lock bit for synchronization */
-#define CIFS_INO_MODIFIED_ATTR		(6) /* Indicate change in mtime/ctime */
 #define CIFS_INO_CLOSE_ON_LOCK		(7) /* Not to defer the close when lock is set */
 	unsigned long flags;
 	spinlock_t writers_lock;
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 9d5c2440abfc..67dd8fcd0e6d 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -302,12 +302,6 @@ static void cifs_rreq_done(struct netfs_io_request *rreq)
 		inode_set_atime_to_ts(inode, inode_get_mtime(inode));
 }
 
-static void cifs_post_modify(struct inode *inode)
-{
-	/* Indication to update ctime and mtime as close is deferred */
-	set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
-}
-
 static void cifs_free_request(struct netfs_io_request *rreq)
 {
 	struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq);
@@ -346,7 +340,6 @@ const struct netfs_request_ops cifs_req_ops = {
 	.clamp_length		= cifs_clamp_length,
 	.issue_read		= cifs_req_issue_read,
 	.done			= cifs_rreq_done,
-	.post_modify		= cifs_post_modify,
 	.begin_writeback	= cifs_begin_writeback,
 	.prepare_write		= cifs_prepare_write,
 	.issue_write		= cifs_issue_write,
@@ -1369,7 +1362,7 @@ int cifs_close(struct inode *inode, struct file *file)
 		dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL);
 		if ((cfile->status_file_deleted == false) &&
 		    (smb2_can_defer_close(inode, dclose))) {
-			if (test_and_clear_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) {
+			if (test_and_clear_bit(NETFS_ICTX_MODIFIED_ATTR, &cinode->netfs.flags)) {
 				inode_set_mtime_to_ts(inode,
 						      inode_set_ctime_current(inode));
 			}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 5d0288938cc2..2d438aaae685 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -73,6 +73,7 @@ struct netfs_inode {
 #define NETFS_ICTX_ODIRECT		0 /* The file has DIO in progress */
 #define NETFS_ICTX_UNBUFFERED		1 /* I/O should not use the pagecache */
 #define NETFS_ICTX_WRITETHROUGH		2 /* Write-through caching */
+#define NETFS_ICTX_MODIFIED_ATTR	3 /* Indicate change in mtime/ctime */
 #define NETFS_ICTX_USE_PGPRIV2		31 /* [DEPRECATED] Use PG_private_2 to mark
 					    * write to cache on read */
 };
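As an aside (not part of the patch), a hedged sketch of how another
netfs-based filesystem might consume the new flag on a deferred close.  The
function name is hypothetical; the helpers are the ones the cifs hunk above
already uses.

/* Hedged sketch: flush deferred mtime/ctime updates when closing a file. */
static void example_flush_times_on_close(struct inode *inode,
					 struct netfs_inode *nictx)
{
	/* netfs_perform_write() now sets this bit after a successful write. */
	if (test_and_clear_bit(NETFS_ICTX_MODIFIED_ATTR, &nictx->flags))
		inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
}
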

From patchwork Thu Jun 20 17:31:21 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705985
From: David Howells
Subject: [PATCH 03/17] netfs: Fix early issue of write op on partial write to folio tail
Date: Thu, 20 Jun 2024 18:31:21 +0100
Message-ID: <20240620173137.610345-4-dhowells@redhat.com>

During the writeback procedure, at the end of netfs_write_folio(), pending
write operations are flushed if the amount of write-streaming data stored in
a page is less than the size of the folio because if we haven't modified a
folio to the end, it cannot be contiguous with the following folio...
except if the dirty region of the folio is right at the end of the folio
space.

Fix the test to take the offset into the folio into account as well, such
that if the dirty region runs right up to the end of the folio, we leave the
flushing for later.

Fixes: 288ace2f57c9 ("netfs: New writeback implementation")
Signed-off-by: David Howells
cc: Jeff Layton
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: Christian Schoenebeck
cc: Marc Dionne
cc: Steve French
cc: Paulo Alcantara (DFS, global name space)
cc: v9fs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/write_issue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 3aa86e268f40..ec6cf8707fb0 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -483,7 +483,7 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 	if (!debug)
 		kdebug("R=%x: No submit", wreq->debug_id);
 
-	if (flen < fsize)
+	if (foff + flen < fsize)
 		for (int s = 0; s < NR_IO_STREAMS; s++)
 			netfs_issue_write(wreq, &wreq->io_streams[s]);
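To illustrate the corrected test with made-up numbers: take a 4096-byte folio
in which a write dirtied bytes 2048-4095, so foff = 2048 and flen = 2048.  The
old test (flen < fsize, i.e. 2048 < 4096) issued the pending write operations
even though the dirty data runs right up to the end of the folio and could be
contiguous with data in the next folio.  The new test (foff + flen < fsize,
i.e. 4096 < 4096, which is false) leaves the flush for later, as intended.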

From patchwork Thu Jun 20 17:31:22 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705986
From: David Howells
Subject: [PATCH 04/17] netfs: Adjust labels in /proc/fs/netfs/stats
Date: Thu, 20 Jun 2024 18:31:22 +0100
Message-ID: <20240620173137.610345-5-dhowells@redhat.com>

Adjust the labels in /proc/fs/netfs/stats that refer to netfs-specific
counters.  These currently all begin with "Netfs", but change them to begin
with more specific labels.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/stats.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index 0892768eea32..95ed2d2623a8 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -42,39 +42,39 @@ atomic_t netfs_n_wh_write_failed;
 
 int netfs_stats_show(struct seq_file *m, void *v)
 {
-	seq_printf(m, "Netfs : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n",
+	seq_printf(m, "Reads : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n",
 		   atomic_read(&netfs_n_rh_dio_read),
 		   atomic_read(&netfs_n_rh_readahead),
 		   atomic_read(&netfs_n_rh_read_folio),
 		   atomic_read(&netfs_n_rh_write_begin),
 		   atomic_read(&netfs_n_rh_write_zskip));
-	seq_printf(m, "Netfs : BW=%u WT=%u DW=%u WP=%u\n",
+	seq_printf(m, "Writes : BW=%u WT=%u DW=%u WP=%u\n",
 		   atomic_read(&netfs_n_wh_buffered_write),
 		   atomic_read(&netfs_n_wh_writethrough),
 		   atomic_read(&netfs_n_wh_dio_write),
 		   atomic_read(&netfs_n_wh_writepages));
-	seq_printf(m, "Netfs : ZR=%u sh=%u sk=%u\n",
+	seq_printf(m, "ZeroOps: ZR=%u sh=%u sk=%u\n",
 		   atomic_read(&netfs_n_rh_zero),
 		   atomic_read(&netfs_n_rh_short_read),
 		   atomic_read(&netfs_n_rh_write_zskip));
-	seq_printf(m, "Netfs : DL=%u ds=%u df=%u di=%u\n",
+	seq_printf(m, "DownOps: DL=%u ds=%u df=%u di=%u\n",
 		   atomic_read(&netfs_n_rh_download),
 		   atomic_read(&netfs_n_rh_download_done),
 		   atomic_read(&netfs_n_rh_download_failed),
 		   atomic_read(&netfs_n_rh_download_instead));
-	seq_printf(m, "Netfs : RD=%u rs=%u rf=%u\n",
+	seq_printf(m, "CaRdOps: RD=%u rs=%u rf=%u\n",
 		   atomic_read(&netfs_n_rh_read),
 		   atomic_read(&netfs_n_rh_read_done),
 		   atomic_read(&netfs_n_rh_read_failed));
-	seq_printf(m, "Netfs : UL=%u us=%u uf=%u\n",
+	seq_printf(m, "UpldOps: UL=%u us=%u uf=%u\n",
 		   atomic_read(&netfs_n_wh_upload),
 		   atomic_read(&netfs_n_wh_upload_done),
 		   atomic_read(&netfs_n_wh_upload_failed));
-	seq_printf(m, "Netfs : WR=%u ws=%u wf=%u\n",
+	seq_printf(m, "CaWrOps: WR=%u ws=%u wf=%u\n",
 		   atomic_read(&netfs_n_wh_write),
 		   atomic_read(&netfs_n_wh_write_done),
 		   atomic_read(&netfs_n_wh_write_failed));
-	seq_printf(m, "Netfs : rr=%u sr=%u wsc=%u\n",
+	seq_printf(m, "Objs : rr=%u sr=%u wsc=%u\n",
 		   atomic_read(&netfs_n_rh_rreq),
 		   atomic_read(&netfs_n_rh_sreq),
 		   atomic_read(&netfs_n_wh_wstream_conflict));
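For reference, an illustrative excerpt of /proc/fs/netfs/stats with the new
labels from the hunks above; the counter values below are made up:

	Reads : DR=2 RA=361 RF=14 WB=120 WBZ=7
	Writes : BW=512 WT=0 DW=2 WP=98
	DownOps: DL=375 ds=375 df=0 di=0
	Objs : rr=1 sr=3 wsc=0
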

From patchwork Thu Jun 20 17:31:23 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705987
From: David Howells
Subject: [PATCH 05/17] netfs: Record contention stats for writeback lock
Date: Thu, 20 Jun 2024 18:31:23 +0100
Message-ID: <20240620173137.610345-6-dhowells@redhat.com>

Record statistics for contention upon the writeback serialisation lock that
prevents racing writeback calls from causing each other to interleave their
writebacks.

These can be viewed in /proc/fs/netfs/stats on the WbLock line, with skip=N
indicating the number of non-SYNC writebacks skipped and wait=N indicating
the number of SYNC writebacks that waited.

Signed-off-by: David Howells
cc: Jeff Layton
cc: Steve French
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/internal.h    |  2 ++
 fs/netfs/stats.c       |  5 +++++
 fs/netfs/write_issue.c | 10 +++++++---
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 95e281a8af78..42443d99967d 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -126,6 +126,8 @@ extern atomic_t netfs_n_wh_upload_failed;
 extern atomic_t netfs_n_wh_write;
 extern atomic_t netfs_n_wh_write_done;
 extern atomic_t netfs_n_wh_write_failed;
+extern atomic_t netfs_n_wb_lock_skip;
+extern atomic_t netfs_n_wb_lock_wait;
 
 int netfs_stats_show(struct seq_file *m, void *v);
 
diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index 95ed2d2623a8..5fe1c396e24f 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -39,6 +39,8 @@ atomic_t netfs_n_wh_upload_failed;
 atomic_t netfs_n_wh_write;
 atomic_t netfs_n_wh_write_done;
 atomic_t netfs_n_wh_write_failed;
+atomic_t netfs_n_wb_lock_skip;
+atomic_t netfs_n_wb_lock_wait;
 
 int netfs_stats_show(struct seq_file *m, void *v)
 {
@@ -78,6 +80,9 @@ int netfs_stats_show(struct seq_file *m, void *v)
 		   atomic_read(&netfs_n_rh_rreq),
 		   atomic_read(&netfs_n_rh_sreq),
 		   atomic_read(&netfs_n_wh_wstream_conflict));
+	seq_printf(m, "WbLock : skip=%u wait=%u\n",
+		   atomic_read(&netfs_n_wb_lock_skip),
+		   atomic_read(&netfs_n_wb_lock_wait));
 	return fscache_stats_show(m);
 }
 EXPORT_SYMBOL(netfs_stats_show);
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index ec6cf8707fb0..cd3ddf07ab49 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -502,10 +502,14 @@ int netfs_writepages(struct address_space *mapping,
 	struct folio *folio;
 	int error = 0;
 
-	if (wbc->sync_mode == WB_SYNC_ALL)
+	if (!mutex_trylock(&ictx->wb_lock)) {
+		if (wbc->sync_mode == WB_SYNC_NONE) {
+			netfs_stat(&netfs_n_wb_lock_skip);
+			return 0;
+		}
+		netfs_stat(&netfs_n_wb_lock_wait);
 		mutex_lock(&ictx->wb_lock);
-	else if (!mutex_trylock(&ictx->wb_lock))
-		return 0;
+	}
 	/* Need the first folio to be able to set up the op.
 	 */
 	folio = writeback_iter(mapping, wbc, NULL, &error);
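Restated as a standalone sketch (illustrative only; the helper name is
hypothetical, while the wb_lock mutex, writeback_control and the counters are
those used in the hunks above): a non-waiting writeback pass gives up if
another writeback already holds the per-inode lock, whereas a WB_SYNC_ALL
pass must wait so that data is on its way to the server when it returns.

/* Hedged sketch of the take-or-skip policy; "skip" and "wait" map to the
 * new netfs_n_wb_lock_skip / netfs_n_wb_lock_wait counters.
 */
static bool example_grab_wb_lock(struct netfs_inode *ictx,
				 struct writeback_control *wbc)
{
	if (mutex_trylock(&ictx->wb_lock))
		return true;
	if (wbc->sync_mode == WB_SYNC_NONE)
		return false;			/* counted as a skip */
	mutex_lock(&ictx->wb_lock);		/* counted as a wait */
	return true;
}
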

From patchwork Thu Jun 20 17:31:24 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705988
From: David Howells
Subject: [PATCH 06/17] 9p: Enable multipage folios
Date: Thu, 20 Jun 2024 18:31:24 +0100
Message-ID: <20240620173137.610345-7-dhowells@redhat.com>

Enable support for multipage folios on the 9P filesystem.  This is all
handled through netfslib and is already enabled on AFS and CIFS also.

Signed-off-by: David Howells
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: Christian Schoenebeck
cc: Jeff Layton
cc: Matthew Wilcox
cc: v9fs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/9p/vfs_inode.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
index fd72fc38c8f5..effb3aa1f3ed 100644
--- a/fs/9p/vfs_inode.c
+++ b/fs/9p/vfs_inode.c
@@ -295,6 +295,7 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
 			inode->i_op = &v9fs_file_inode_operations;
 			inode->i_fop = &v9fs_file_operations;
 		}
+		mapping_set_large_folios(inode->i_mapping);
 		break;
 	case S_IFLNK:
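As a side note (not part of the patch): once large folios are enabled on a
mapping, per-folio code should derive sizes from the folio itself rather than
assuming PAGE_SIZE.  A hedged sketch with a hypothetical helper, using the
generic folio_size()/offset_in_folio() accessors:

/* Clamp an I/O at position 'pos' of length 'len' to the end of this
 * (possibly multi-page) folio.
 */
static size_t example_bytes_in_folio(struct folio *folio, loff_t pos, size_t len)
{
	size_t offset = offset_in_folio(folio, pos);

	return min(len, folio_size(folio) - offset);
}
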
b=HUuzJM2jGz4d/51ZQRps1IHnNcJJeJ3M32KxnisahkF5g/VS39Usr/B1xzNExsea5eNS1H CTuc8Ni4rQN5xGtK0ylIVqzp6KVyXO8Trth85z9pMRo+1SOnrAMfqQGNMdIZv+FoyYcuR1 S5RlwKoT4/dnjkikSrJFch3AWsHgCPU= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=eVB2Suvx; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf30.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1718904768; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=TrccuAG9R/dWVF86Cp/YdCeiPX6xfR34GbFBV74jwMY=; b=mjIw4eFZkBVosS3DwYSpzxj3gqR81LrY3juU0op5XF0TLKMFTSxEOVwFPVIGP1aRoMmoBt g8mAJ4wJRRqpig76TPWPEb/rSA0TE1zWCMAF5HM/qpRd4iOFFFzlbi07FkVCn18gt3SYBI 9R/4Eqact5goraiHMXw8Jxz0ZAaOAow= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1718904774; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=TrccuAG9R/dWVF86Cp/YdCeiPX6xfR34GbFBV74jwMY=; b=eVB2SuvxdNmzqOAx+huMS7W3TrF5NShl10HjmsqoejVc3powlm5oF41krIHGvr2ulRyCpB FkBVNjh7rvBjBgA4uLKBfLSMdZGf1A1zHTtuYn5zizwL3lY8Tp/mUv4JzcviuAwO48nx8d ThE9Ine5HXAMEqNrXsBUJQxzehrh6hE= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-576-PMFnxKI_MCOKHFzYjbu9wA-1; Thu, 20 Jun 2024 13:32:51 -0400 X-MC-Unique: PMFnxKI_MCOKHFzYjbu9wA-1 Received: from mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.17]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 5E40119560B9; Thu, 20 Jun 2024 17:32:48 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.39.195.156]) by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 4EF111956055; Thu, 20 Jun 2024 17:32:42 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 07/17] netfs: Reduce number of conditional branches in netfs_perform_write() Date: Thu, 20 Jun 2024 18:31:25 +0100 Message-ID: <20240620173137.610345-8-dhowells@redhat.com> In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com> References: <20240620173137.610345-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.17 X-Rspamd-Queue-Id: 72E7580026 X-Rspam-User: 
X-Rspamd-Server: rspam05 X-Stat-Signature: obrp1xgsujmd3nhhzoo6w3649ybgtohx X-HE-Tag: 1718904775-905503 X-HE-Meta: U2FsdGVkX18+/gHtn56h58d8tCpGbiF5ti37keoL5kGlUjjMwYoR4x8fRkyjVB1D2ABLJDloisOTnvjJV+xYrTj4ozLSMyl0t9AjHHtXuub7QDOLLJJRaKIXYL8HFm5yaqEVG8t0GSPe7P35//5syMeafoEJw1/coUu+ylNdHyJ2bXY3tXCJivnLHUbQfD00TfHDkhK71JnfOx65vEsahIar5DmJSN6rAcGDb/3UiYt7YFJtpw3FTRcepwtJz6V0bR4DfYHeZYGPm8qK7VwY582WJvycKzOWkTtjUb96vklfJhN/OLvn4XFhc8DzVLfU6U6DPAHXdW92rXdAN+Gi5AjfB8Jw0g3MnmDKP9KPLgxLGFlEMFYtfo8Rpc0tXQcAiTAZ/QgZtylRnsCUWmUxsbCUHG8mCD1Ku64FQmXUwKk3QZsLn8+5oXJe7zY6sv9HyA6sMIe1eQA2IpC4oCf1gDThIi0GHm9KTj67WqlJgjptH+MFF5Dlrgljrqw8KkBGkS9+2vCJy5IPLo3EuO1sxXTRf1GY8ZES4MBEOBicuSF9vDz9//DgRcPqUGrDlQJxrsqKYHJtgd0YGyYAF+XU5HCw1oYagnZiiV7TZP8hCjEpQMWPVi0WeyMx7CzObl8QFvEVKpmLWPznESyfNEVMnw5AOgPxGY7PrH0EdmOUFtixLVNYqhs5jGxFgNy8FYA6sYvEc/xv87xR8eiNerGrrgPQQUU57sR8ctCsdVpDPcMxhsxAC6dR9xBmuhZp8qknvJ75OgvXQCfGvHaoD/5olrWsnU30e5ZHipzQ/9wOrhGPOawtlScYFu2+jhqwWD+npI+c8KDB/8Pn1MxuSkeYTy7rn2ExGOIYDSCxZv/IKkcCR+JMf2IlDYdivNQnI2i1syV/cvZbX9Ud+dWwNghMwAOqlTkjCb2T+W7DOPFLyt97EWihZXXPstFUnOFQItQUxZH0pk42rR6ZLUgYCoh 4h2wnooP sMFY9G/LcPKj5WUGh3J6QHYJWdE5Ug/Xn/OTfVwoRP2VRl1NVP+35exaL2kvzILX4X1Ji/YQ2lHSEPx5VLliYshBvc0Zgh323pMlRed586EdvIzfAUSKJl4SYTsi0HlYJt/DgrBBd18urNXS5CbhZ640lZPNbHHrZvrEVTjawmcXW/gkvugVZScXhxZQoUQGDdKwtmgzR+HdltHujePhOfoBQJcF+8JH8uTtBNpbgKOr4TFMaXRC2VAJ9iHBboCneKBamhjlh8wuyzUZHowkIMIhdUI8wNXsGTqMp3sC+SIQtpS7dsaIF9M6J6XQAdbzMm9GSQZ1IrhSYI2VqbcSLv3Rxv+lgvMp8wDnj3umiDmzjpXIB4+1oyruwXHTcEGrOiQQIRbybICZ0XijnrhB8TdPLfg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Reduce the number of conditional branches in netfs_perform_write() by merging in netfs_how_to_modify() and then creating a separate if-statement for each way we might modify a folio. Note that this means replicating the data copy in each path. Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/buffered_write.c | 298 ++++++++++++++++------------------- include/trace/events/netfs.h | 2 - 2 files changed, 133 insertions(+), 167 deletions(-) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index 41d556fe382a..c36643c97cb5 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -13,91 +13,22 @@ #include #include "internal.h" -/* - * Determined write method. Adjust netfs_folio_traces if this is changed. - */ -enum netfs_how_to_modify { - NETFS_FOLIO_IS_UPTODATE, /* Folio is uptodate already */ - NETFS_JUST_PREFETCH, /* We have to read the folio anyway */ - NETFS_WHOLE_FOLIO_MODIFY, /* We're going to overwrite the whole folio */ - NETFS_MODIFY_AND_CLEAR, /* We can assume there is no data to be downloaded. */ - NETFS_STREAMING_WRITE, /* Store incomplete data in non-uptodate page. */ - NETFS_STREAMING_WRITE_CONT, /* Continue streaming write. */ - NETFS_FLUSH_CONTENT, /* Flush incompatible content. */ -}; - -static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) +static void __netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) { - void *priv = folio_get_private(folio); - - if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE)) + if (netfs_group) folio_attach_private(folio, netfs_get_group(netfs_group)); - else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE) - folio_detach_private(folio); } -/* - * Decide how we should modify a folio. 
We might be attempting to do - * write-streaming, in which case we don't want to a local RMW cycle if we can - * avoid it. If we're doing local caching or content crypto, we award that - * priority over avoiding RMW. If the file is open readably, then we also - * assume that we may want to read what we wrote. - */ -static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx, - struct file *file, - struct folio *folio, - void *netfs_group, - size_t flen, - size_t offset, - size_t len, - bool maybe_trouble) +static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) { - struct netfs_folio *finfo = netfs_folio_info(folio); - struct netfs_group *group = netfs_folio_group(folio); - loff_t pos = folio_file_pos(folio); - - _enter(""); - - if (group != netfs_group && group != NETFS_FOLIO_COPY_TO_CACHE) - return NETFS_FLUSH_CONTENT; - - if (folio_test_uptodate(folio)) - return NETFS_FOLIO_IS_UPTODATE; - - if (pos >= ctx->zero_point) - return NETFS_MODIFY_AND_CLEAR; - - if (!maybe_trouble && offset == 0 && len >= flen) - return NETFS_WHOLE_FOLIO_MODIFY; - - if (file->f_mode & FMODE_READ) - goto no_write_streaming; - - if (netfs_is_cache_enabled(ctx)) { - /* We don't want to get a streaming write on a file that loses - * caching service temporarily because the backing store got - * culled. - */ - goto no_write_streaming; - } + void *priv = folio_get_private(folio); - if (!finfo) - return NETFS_STREAMING_WRITE; - - /* We can continue a streaming write only if it continues on from the - * previous. If it overlaps, we must flush lest we suffer a partial - * copy and disjoint dirty regions. - */ - if (offset == finfo->dirty_offset + finfo->dirty_len) - return NETFS_STREAMING_WRITE_CONT; - return NETFS_FLUSH_CONTENT; - -no_write_streaming: - if (finfo) { - netfs_stat(&netfs_n_wh_wstream_conflict); - return NETFS_FLUSH_CONTENT; + if (unlikely(priv != netfs_group)) { + if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE)) + folio_attach_private(folio, netfs_get_group(netfs_group)); + else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE) + folio_detach_private(folio); } - return NETFS_JUST_PREFETCH; } /* @@ -177,13 +108,10 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, .range_end = iocb->ki_pos + iter->count, }; struct netfs_io_request *wreq = NULL; - struct netfs_folio *finfo; - struct folio *folio, *writethrough = NULL; - enum netfs_how_to_modify howto; - enum netfs_folio_trace trace; + struct folio *folio = NULL, *writethrough = NULL; unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? 
BDP_ASYNC : 0; ssize_t written = 0, ret, ret2; - loff_t i_size, pos = iocb->ki_pos, from, to; + loff_t i_size, pos = iocb->ki_pos; size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER; bool maybe_trouble = false; @@ -213,15 +141,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, } do { + struct netfs_folio *finfo; + struct netfs_group *group; + unsigned long long fpos; size_t flen; size_t offset; /* Offset into pagecache folio */ size_t part; /* Bytes to write to folio */ size_t copied; /* Bytes copied from user */ - ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags); - if (unlikely(ret < 0)) - break; - offset = pos & (max_chunk - 1); part = min(max_chunk - offset, iov_iter_count(iter)); @@ -247,7 +174,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, } flen = folio_size(folio); - offset = pos & (flen - 1); + fpos = folio_pos(folio); + offset = pos - fpos; part = min_t(size_t, flen - offset, part); /* Wait for writeback to complete. The writeback engine owns @@ -265,71 +193,52 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, goto error_folio_unlock; } - /* See if we need to prefetch the area we're going to modify. - * We need to do this before we get a lock on the folio in case - * there's more than one writer competing for the same cache - * block. + /* Decide how we should modify a folio. We might be attempting + * to do write-streaming, in which case we don't want to a + * local RMW cycle if we can avoid it. If we're doing local + * caching or content crypto, we award that priority over + * avoiding RMW. If the file is open readably, then we also + * assume that we may want to read what we wrote. */ - howto = netfs_how_to_modify(ctx, file, folio, netfs_group, - flen, offset, part, maybe_trouble); - _debug("howto %u", howto); - switch (howto) { - case NETFS_JUST_PREFETCH: - ret = netfs_prefetch_for_write(file, folio, offset, part); - if (ret < 0) { - _debug("prefetch = %zd", ret); - goto error_folio_unlock; - } - break; - case NETFS_FOLIO_IS_UPTODATE: - case NETFS_WHOLE_FOLIO_MODIFY: - case NETFS_STREAMING_WRITE_CONT: - break; - case NETFS_MODIFY_AND_CLEAR: - zero_user_segment(&folio->page, 0, offset); - break; - case NETFS_STREAMING_WRITE: - ret = -EIO; - if (WARN_ON(folio_get_private(folio))) - goto error_folio_unlock; - break; - case NETFS_FLUSH_CONTENT: - trace_netfs_folio(folio, netfs_flush_content); - from = folio_pos(folio); - to = from + folio_size(folio) - 1; - folio_unlock(folio); - folio_put(folio); - ret = filemap_write_and_wait_range(mapping, from, to); - if (ret < 0) - goto error_folio_unlock; - continue; - } - - if (mapping_writably_mapped(mapping)) - flush_dcache_folio(folio); - - copied = copy_folio_from_iter_atomic(folio, offset, part, iter); - - flush_dcache_folio(folio); - - /* Deal with a (partially) failed copy */ - if (copied == 0) { - ret = -EFAULT; - goto error_folio_unlock; + finfo = netfs_folio_info(folio); + group = netfs_folio_group(folio); + + if (unlikely(group != netfs_group) && + group != NETFS_FOLIO_COPY_TO_CACHE) + goto flush_content; + + if (folio_test_uptodate(folio)) { + if (mapping_writably_mapped(mapping)) + flush_dcache_folio(folio); + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + if (unlikely(copied == 0)) + goto copy_failed; + netfs_set_group(folio, netfs_group); + trace_netfs_folio(folio, netfs_folio_is_uptodate); + goto copied; } - trace = (enum netfs_folio_trace)howto; - switch (howto) { - case NETFS_FOLIO_IS_UPTODATE: - case 
NETFS_JUST_PREFETCH: - netfs_set_group(folio, netfs_group); - break; - case NETFS_MODIFY_AND_CLEAR: + /* If the page is above the zero-point then we assume that the + * server would just return a block of zeros or a short read if + * we try to read it. + */ + if (fpos >= ctx->zero_point) { + zero_user_segment(&folio->page, 0, offset); + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + if (unlikely(copied == 0)) + goto copy_failed; zero_user_segment(&folio->page, offset + copied, flen); - netfs_set_group(folio, netfs_group); + __netfs_set_group(folio, netfs_group); folio_mark_uptodate(folio); - break; - case NETFS_WHOLE_FOLIO_MODIFY: + trace_netfs_folio(folio, netfs_modify_and_clear); + goto copied; + } + + /* See if we can write a whole folio in one go. */ + if (!maybe_trouble && offset == 0 && part >= flen) { + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + if (unlikely(copied == 0)) + goto copy_failed; if (unlikely(copied < part)) { maybe_trouble = true; iov_iter_revert(iter, copied); @@ -337,16 +246,52 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, folio_unlock(folio); goto retry; } - netfs_set_group(folio, netfs_group); + __netfs_set_group(folio, netfs_group); folio_mark_uptodate(folio); - break; - case NETFS_STREAMING_WRITE: + trace_netfs_folio(folio, netfs_whole_folio_modify); + goto copied; + } + + /* We don't want to do a streaming write on a file that loses + * caching service temporarily because the backing store got + * culled and we don't really want to get a streaming write on + * a file that's open for reading as ->read_folio() then has to + * be able to flush it. + */ + if ((file->f_mode & FMODE_READ) || + netfs_is_cache_enabled(ctx)) { + if (finfo) { + netfs_stat(&netfs_n_wh_wstream_conflict); + goto flush_content; + } + ret = netfs_prefetch_for_write(file, folio, offset, part); + if (ret < 0) { + _debug("prefetch = %zd", ret); + goto error_folio_unlock; + } + + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + if (unlikely(copied == 0)) + goto copy_failed; + __netfs_set_group(folio, netfs_group); + trace_netfs_folio(folio, netfs_just_prefetch); + goto copied; + } + + if (!finfo) { + ret = -EIO; + if (WARN_ON(folio_get_private(folio))) + goto error_folio_unlock; + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + if (unlikely(copied == 0)) + goto copy_failed; if (offset == 0 && copied == flen) { - netfs_set_group(folio, netfs_group); + __netfs_set_group(folio, netfs_group); folio_mark_uptodate(folio); - trace = netfs_streaming_filled_page; - break; + trace_netfs_folio(folio, netfs_streaming_filled_page); + goto copied; } + finfo = kzalloc(sizeof(*finfo), GFP_KERNEL); if (!finfo) { iov_iter_revert(iter, copied); @@ -358,9 +303,18 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, finfo->dirty_len = copied; folio_attach_private(folio, (void *)((unsigned long)finfo | NETFS_FOLIO_INFO)); - break; - case NETFS_STREAMING_WRITE_CONT: - finfo = netfs_folio_info(folio); + trace_netfs_folio(folio, netfs_streaming_write); + goto copied; + } + + /* We can continue a streaming write only if it continues on + * from the previous. If it overlaps, we must flush lest we + * suffer a partial copy and disjoint dirty regions. 
+ */ + if (offset == finfo->dirty_offset + finfo->dirty_len) { + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); + if (unlikely(copied == 0)) + goto copy_failed; finfo->dirty_len += copied; if (finfo->dirty_offset == 0 && finfo->dirty_len == flen) { if (finfo->netfs_group) @@ -369,17 +323,25 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, folio_detach_private(folio); folio_mark_uptodate(folio); kfree(finfo); - trace = netfs_streaming_cont_filled_page; + trace_netfs_folio(folio, netfs_streaming_cont_filled_page); + } else { + trace_netfs_folio(folio, netfs_streaming_write_cont); } - break; - default: - WARN(true, "Unexpected modify type %u ix=%lx\n", - howto, folio->index); - ret = -EIO; - goto error_folio_unlock; + goto copied; } - trace_netfs_folio(folio, trace); + /* Incompatible write; flush the folio and try again. */ + flush_content: + trace_netfs_folio(folio, netfs_flush_content); + folio_unlock(folio); + folio_put(folio); + ret = filemap_write_and_wait_range(mapping, fpos, fpos + flen - 1); + if (ret < 0) + goto error_folio_unlock; + continue; + + copied: + flush_dcache_folio(folio); /* Update the inode size if we moved the EOF marker */ pos += copied; @@ -401,6 +363,10 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, folio_put(folio); folio = NULL; + ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags); + if (unlikely(ret < 0)) + break; + cond_resched(); } while (iov_iter_count(iter)); @@ -427,6 +393,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, _leave(" = %zd [%zd]", written, ret); return written ? written : ret; +copy_failed: + ret = -EFAULT; error_folio_unlock: folio_unlock(folio); folio_put(folio); diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index da23484268df..fc5dbd19f120 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -128,7 +128,6 @@ E_(netfs_sreq_trace_put_terminated, "PUT TERM ") #define netfs_folio_traces \ - /* The first few correspond to enum netfs_how_to_modify */ \ EM(netfs_folio_is_uptodate, "mod-uptodate") \ EM(netfs_just_prefetch, "mod-prefetch") \ EM(netfs_whole_folio_modify, "mod-whole-f") \ @@ -138,7 +137,6 @@ EM(netfs_flush_content, "flush") \ EM(netfs_streaming_filled_page, "mod-streamw-f") \ EM(netfs_streaming_cont_filled_page, "mod-streamw-f+") \ - /* The rest are for writeback */ \ EM(netfs_folio_trace_cancel_copy, "cancel-copy") \ EM(netfs_folio_trace_clear, "clear") \ EM(netfs_folio_trace_clear_cc, "clear-cc") \ From patchwork Thu Jun 20 17:31:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705990 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B50D3C2BA1A for ; Thu, 20 Jun 2024 17:33:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4B3618D00C8; Thu, 20 Jun 2024 13:33:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 467498D00AF; Thu, 20 Jun 2024 13:33:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2B8608D00C8; Thu, 20 Jun 2024 13:33:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with 
ESMTP id 01BF68D00AF for ; Thu, 20 Jun 2024 13:33:07 -0400 (EDT)
From: David Howells
To: Christian Brauner , Steve French , Matthew Wilcox
Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/17] netfs: Delete some xarray-wangling functions that aren't used
Date: Thu, 20 Jun 2024 18:31:26 +0100
Message-ID: <20240620173137.610345-9-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
MIME-Version: 1.0

Delete some xarray-based buffer wangling functions that are intended for use with bounce buffering, but aren't used because bounce-buffering got deferred to a later patch series. Now, however, the intention is to use something other than an xarray to do this.
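The helpers being removed here were built on the stock xarray store-and-mark pattern: each folio is stored in the array with a search mark set, and a later pass walks only the marked entries to put them. A rough, self-contained sketch of that pattern follows (illustration only: it uses the generic XA_MARK_0 and invented function names rather than the netfs-specific NETFS_BUF_PUT_MARK/NETFS_BUF_PAGECACHE_MARK, and is not part of this patch):

#include <linux/mm.h>
#include <linux/xarray.h>

/* Store a folio at @index and mark it as needing a folio_put() later. */
static int buffer_add_folio(struct xarray *xa, unsigned long index,
			    struct folio *folio, gfp_t gfp)
{
	XA_STATE(xas, xa, index);

	do {
		xas_lock(&xas);
		xas_store(&xas, folio);
		if (!xas_error(&xas))
			xas_set_mark(&xas, XA_MARK_0);
		xas_unlock(&xas);
	} while (xas_nomem(&xas, gfp));

	return xas_error(&xas);
}

/* Put every folio that was marked above, then tear down the buffer. */
static void buffer_clear(struct xarray *xa)
{
	struct folio *folio;
	XA_STATE(xas, xa, 0);

	rcu_read_lock();
	xas_for_each_marked(&xas, folio, ULONG_MAX, XA_MARK_0)
		folio_put(folio);
	rcu_read_unlock();
	xa_destroy(xa);
}

The deleted netfs versions did the same thing, but with two private marks so that folios the buffer had allocated itself could be told apart from ones that also live in the pagecache.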
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/internal.h | 9 ----- fs/netfs/misc.c | 81 --------------------------------------------- 2 files changed, 90 deletions(-) diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 42443d99967d..a44d480a0fa2 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -63,15 +63,6 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {} /* * misc.c */ -#define NETFS_FLAG_PUT_MARK BIT(0) -#define NETFS_FLAG_PAGECACHE_MARK BIT(1) -int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index, - struct folio *folio, unsigned int flags, - gfp_t gfp_mask); -int netfs_add_folios_to_buffer(struct xarray *buffer, - struct address_space *mapping, - pgoff_t index, pgoff_t to, gfp_t gfp_mask); -void netfs_clear_buffer(struct xarray *buffer); /* * objects.c diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c index bc1fc54fb724..83e644bd518f 100644 --- a/fs/netfs/misc.c +++ b/fs/netfs/misc.c @@ -8,87 +8,6 @@ #include #include "internal.h" -/* - * Attach a folio to the buffer and maybe set marks on it to say that we need - * to put the folio later and twiddle the pagecache flags. - */ -int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index, - struct folio *folio, unsigned int flags, - gfp_t gfp_mask) -{ - XA_STATE_ORDER(xas, xa, index, folio_order(folio)); - -retry: - xas_lock(&xas); - for (;;) { - xas_store(&xas, folio); - if (!xas_error(&xas)) - break; - xas_unlock(&xas); - if (!xas_nomem(&xas, gfp_mask)) - return xas_error(&xas); - goto retry; - } - - if (flags & NETFS_FLAG_PUT_MARK) - xas_set_mark(&xas, NETFS_BUF_PUT_MARK); - if (flags & NETFS_FLAG_PAGECACHE_MARK) - xas_set_mark(&xas, NETFS_BUF_PAGECACHE_MARK); - xas_unlock(&xas); - return xas_error(&xas); -} - -/* - * Create the specified range of folios in the buffer attached to the read - * request. The folios are marked with NETFS_BUF_PUT_MARK so that we know that - * these need freeing later. - */ -int netfs_add_folios_to_buffer(struct xarray *buffer, - struct address_space *mapping, - pgoff_t index, pgoff_t to, gfp_t gfp_mask) -{ - struct folio *folio; - int ret; - - if (to + 1 == index) /* Page range is inclusive */ - return 0; - - do { - /* TODO: Figure out what order folio can be allocated here */ - folio = filemap_alloc_folio(readahead_gfp_mask(mapping), 0); - if (!folio) - return -ENOMEM; - folio->index = index; - ret = netfs_xa_store_and_mark(buffer, index, folio, - NETFS_FLAG_PUT_MARK, gfp_mask); - if (ret < 0) { - folio_put(folio); - return ret; - } - - index += folio_nr_pages(folio); - } while (index <= to && index != 0); - - return 0; -} - -/* - * Clear an xarray buffer, putting a ref on the folios that have - * NETFS_BUF_PUT_MARK set. - */ -void netfs_clear_buffer(struct xarray *buffer) -{ - struct folio *folio; - XA_STATE(xas, buffer, 0); - - rcu_read_lock(); - xas_for_each_marked(&xas, folio, ULONG_MAX, NETFS_BUF_PUT_MARK) { - folio_put(folio); - } - rcu_read_unlock(); - xa_destroy(buffer); -} - /** * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback * @mapping: The mapping the folio belongs to. 
From patchwork Thu Jun 20 17:31:27 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13705992
From: David Howells
To: Christian Brauner , Steve French , Matthew Wilcox
Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/17] cifs: Defer read completion
Date: Thu, 20 Jun 2024 18:31:27 +0100
Message-ID: <20240620173137.610345-10-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
MIME-Version: 1.0

--- fs/smb/client/smb2pdu.c | 15 ++++++++++++--- include/trace/events/netfs.h | 7
+++++-- 2 files changed, 17 insertions(+), 5 deletions(-) diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 38a06e8a0f90..e213cecd5094 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4484,6 +4484,16 @@ smb2_new_read_req(void **buf, unsigned int *total_len, return rc; } +static void smb2_readv_worker(struct work_struct *work) +{ + struct cifs_io_subrequest *rdata = + container_of(work, struct cifs_io_subrequest, subreq.work); + + netfs_subreq_terminated(&rdata->subreq, + (rdata->result == 0 || rdata->result == -EAGAIN) ? + rdata->got_bytes : rdata->result, true); +} + static void smb2_readv_callback(struct mid_q_entry *mid) { @@ -4578,9 +4588,8 @@ smb2_readv_callback(struct mid_q_entry *mid) rdata->result = 0; } rdata->credits.value = 0; - netfs_subreq_terminated(&rdata->subreq, - (rdata->result == 0 || rdata->result == -EAGAIN) ? - rdata->got_bytes : rdata->result, true); + INIT_WORK(&rdata->subreq.work, smb2_readv_worker); + queue_work(cifsiod_wq, &rdata->subreq.work); release_mid(mid); add_credits(server, &credits, 0); } diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index fc5dbd19f120..db603a4e22cd 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -412,6 +412,7 @@ TRACE_EVENT(netfs_folio, __field(ino_t, ino) __field(pgoff_t, index) __field(unsigned int, nr) + __field(bool, ra_trigger) __field(enum netfs_folio_trace, why) ), @@ -420,11 +421,13 @@ TRACE_EVENT(netfs_folio, __entry->why = why; __entry->index = folio_index(folio); __entry->nr = folio_nr_pages(folio); + __entry->ra_trigger = folio_test_readahead(folio); ), - TP_printk("i=%05lx ix=%05lx-%05lx %s", + TP_printk("i=%05lx ix=%05lx-%05lx %s%s", __entry->ino, __entry->index, __entry->index + __entry->nr - 1, - __print_symbolic(__entry->why, netfs_folio_traces)) + __print_symbolic(__entry->why, netfs_folio_traces), + __entry->ra_trigger ? 
" *RA*" : "") ); TRACE_EVENT(netfs_write_iter, From patchwork Thu Jun 20 17:31:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705991 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F9E9C27C79 for ; Thu, 20 Jun 2024 17:33:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id EC59B6B039D; Thu, 20 Jun 2024 13:33:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C2CFE8D00AF; Thu, 20 Jun 2024 13:33:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9BB1E6B0395; Thu, 20 Jun 2024 13:33:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 6A83E6B038E for ; Thu, 20 Jun 2024 13:33:18 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 2CBE5A1EA7 for ; Thu, 20 Jun 2024 17:33:18 +0000 (UTC) X-FDA: 82251963276.20.C3D0979 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf30.hostedemail.com (Postfix) with ESMTP id 7F8FD80027 for ; Thu, 20 Jun 2024 17:33:16 +0000 (UTC) Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=TTKJjq4r; spf=pass (imf30.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1718904789; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=tcCzhA8XUd6wWPEL49ATAv4+dcrLDrKKLcUOQlZf5aE=; b=NoIKXfdZO+ho+0a+M+pzSk83qORAiz8f10319htmk0kO7jkxJ67Kse4ZJ0QjdI08tfRz10 jPKqNlMJlh25r+oZWccKJK1IAvhd/gLSiBg7+QQWByjPqxdM+k5zac9qBui3iOc0daXAhi F2xu8wY0tQLiufQdNz7rQcwx78dGAXM= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=TTKJjq4r; spf=pass (imf30.hostedemail.com: domain of dhowells@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1718904789; a=rsa-sha256; cv=none; b=Hy24V/x2AHJfcy1/JjfN7WBA5KXTOz+tVMPKlsCsRXf9+gRLZKj/q+FcwAhZT5S00e2CZH 89QmaQfb8OMOLVKvKV6nUfI42UoWqmWyXd7f3tNKliF1/X0SUHi4VnnuUJgCcoEMHw8M0h HopwR3WreQCqKsn1fVv8WerO5APiq6w= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1718904795; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tcCzhA8XUd6wWPEL49ATAv4+dcrLDrKKLcUOQlZf5aE=; b=TTKJjq4rZPOpM61bkRtrcAdbo+57zLEVtls+I6TLXh+GUYyXNdyz2KRj1t7lbhvW693sB2 X5zHm3q71aUH6irtG8eH9qIac+FfO1K8SQ1fz7rundkqZC94SmWzUglJUY2caOeFPx3wOK uMqf5QYqqY6UauScXBLTz1vF1CE/HjI= Received: 
from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com
From: David Howells
To: Christian Brauner , Steve French , Matthew Wilcox
Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French
Subject: [PATCH 10/17] cifs: Only pick a channel once per read request
Date: Thu, 20 Jun 2024 18:31:28 +0100
Message-ID: <20240620173137.610345-11-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
MIME-Version: 1.0
X-Bogosity: Ham, tests=bogofilter,
spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: In cifs, only pick a channel when setting up a read request rather than doing so individually for every subrequest and instead use that channel for all. This mirrors what the code in v6.9 does. Signed-off-by: David Howells cc: Steve French cc: Paulo Alcantara cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/smb/client/cifsglob.h | 1 + fs/smb/client/file.c | 14 +++----------- 2 files changed, 4 insertions(+), 11 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 4b00512fb9f9..b48d3f5e8889 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1494,6 +1494,7 @@ struct cifs_aio_ctx { struct cifs_io_request { struct netfs_io_request rreq; struct cifsFileInfo *cfile; + struct TCP_Server_Info *server; }; /* asynchronous read support */ diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 67dd8fcd0e6d..16fa1ac1ed2d 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -134,17 +134,15 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq) static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; - struct TCP_Server_Info *server; struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); + struct TCP_Server_Info *server = req->server; struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); size_t rsize = 0; int rc; rdata->xid = get_xid(); rdata->have_xid = true; - - server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses); rdata->server = server; if (cifs_sb->ctx->rsize == 0) @@ -203,14 +201,7 @@ static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); rdata->pid = pid; - rc = adjust_credits(rdata->server, &rdata->credits, rdata->subreq.len); - if (!rc) { - if (rdata->req->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = rdata->server->ops->async_readv(rdata); - } - + rc = rdata->server->ops->async_readv(rdata); out: if (rc) netfs_subreq_terminated(subreq, rc, false); @@ -250,6 +241,7 @@ static int cifs_init_request(struct netfs_io_request *rreq, struct file *file) open_file = file->private_data; rreq->netfs_priv = file->private_data; req->cfile = cifsFileInfo_get(open_file); + req->server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses); } else if (rreq->origin != NETFS_WRITEBACK) { WARN_ON_ONCE(1); return -EIO; From patchwork Thu Jun 20 17:31:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705993 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 529FEC27C79 for ; Thu, 20 Jun 2024 17:33:29 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CA2558D00CA; Thu, 20 Jun 2024 13:33:28 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C505B8D00AF; Thu, 20 Jun 2024 13:33:28 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A574E8D00CA; Thu, 20 Jun 2024 13:33:28 -0400 (EDT) 
From: David Howells
To: Christian Brauner , Steve French , Matthew Wilcox
Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French
Subject: [PATCH 11/17] cifs: Move the 'pid' from the subreq to the req
Date: Thu, 20 Jun 2024 18:31:29 +0100
Message-ID: <20240620173137.610345-12-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
MIME-Version: 1.0

Move the reference pid from the cifs_io_subrequest struct to the cifs_io_request struct as it's the same for all subreqs of a particular request.
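The shape of the change: state that is identical for every subrequest of a request (here the pid; the previous patch did the same for the server channel) lives on the request, and each subrequest reaches it through its back-pointer. A tiny, self-contained model of that relationship, using hypothetical structure and function names rather than the real cifs definitions:

#include <sys/types.h>

/* Per-request state, chosen once when the request is initialised. */
struct io_request {
	pid_t pid;
};

/* Per-RPC subrequest; it only keeps a pointer to the owning request. */
struct io_subrequest {
	struct io_request *req;
};

/* Issuers read shared fields through the back-pointer; the real code
 * now does io_parms.pid = rdata->req->pid instead of caching a copy
 * per subrequest. */
static pid_t subreq_pid(const struct io_subrequest *subreq)
{
	return subreq->req->pid;
}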
Signed-off-by: David Howells cc: Steve French cc: Paulo Alcantara cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/smb/client/cifsglob.h | 2 +- fs/smb/client/cifssmb.c | 8 ++++---- fs/smb/client/file.c | 10 +++------- fs/smb/client/smb2pdu.c | 4 ++-- 4 files changed, 10 insertions(+), 14 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index b48d3f5e8889..bbcc552c07be 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1495,6 +1495,7 @@ struct cifs_io_request { struct netfs_io_request rreq; struct cifsFileInfo *cfile; struct TCP_Server_Info *server; + pid_t pid; }; /* asynchronous read support */ @@ -1505,7 +1506,6 @@ struct cifs_io_subrequest { struct cifs_io_request *req; }; ssize_t got_bytes; - pid_t pid; unsigned int xid; int result; bool have_xid; diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 25e9ab947c17..595c4b673707 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1345,8 +1345,8 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) if (rc) return rc; - smb->hdr.Pid = cpu_to_le16((__u16)rdata->pid); - smb->hdr.PidHigh = cpu_to_le16((__u16)(rdata->pid >> 16)); + smb->hdr.Pid = cpu_to_le16((__u16)rdata->req->pid); + smb->hdr.PidHigh = cpu_to_le16((__u16)(rdata->req->pid >> 16)); smb->AndXCommand = 0xFF; /* none */ smb->Fid = rdata->req->cfile->fid.netfid; @@ -1689,8 +1689,8 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) if (rc) goto async_writev_out; - smb->hdr.Pid = cpu_to_le16((__u16)wdata->pid); - smb->hdr.PidHigh = cpu_to_le16((__u16)(wdata->pid >> 16)); + smb->hdr.Pid = cpu_to_le16((__u16)wdata->req->pid); + smb->hdr.PidHigh = cpu_to_le16((__u16)(wdata->req->pid >> 16)); smb->AndXCommand = 0xFF; /* none */ smb->Fid = wdata->req->cfile->fid.netfid; diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 16fa1ac1ed2d..45c860f0e7fd 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -178,14 +178,8 @@ static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); - pid_t pid; int rc = 0; - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = req->cfile->pid; - else - pid = current->tgid; // Ummm... This may be a workqueue - cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n", __func__, rreq->debug_id, subreq->debug_index, rreq->mapping, subreq->transferred, subreq->len); @@ -199,7 +193,6 @@ static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) } __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - rdata->pid = pid; rc = rdata->server->ops->async_readv(rdata); out: @@ -236,12 +229,15 @@ static int cifs_init_request(struct netfs_io_request *rreq, struct file *file) rreq->rsize = cifs_sb->ctx->rsize; rreq->wsize = cifs_sb->ctx->wsize; + req->pid = current->tgid; // Ummm... 
This may be a workqueue if (file) { open_file = file->private_data; rreq->netfs_priv = file->private_data; req->cfile = cifsFileInfo_get(open_file); req->server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses); + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) + req->pid = req->cfile->pid; } else if (rreq->origin != NETFS_WRITEBACK) { WARN_ON_ONCE(1); return -EIO; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index e213cecd5094..2ae2dbb6202b 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4621,7 +4621,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) io_parms.length = rdata->subreq.len; io_parms.persistent_fid = rdata->req->cfile->fid.persistent_fid; io_parms.volatile_fid = rdata->req->cfile->fid.volatile_fid; - io_parms.pid = rdata->pid; + io_parms.pid = rdata->req->pid; rc = smb2_new_read_req( (void **) &buf, &total_len, &io_parms, rdata, 0, 0); @@ -4873,7 +4873,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) .length = wdata->subreq.len, .persistent_fid = wdata->req->cfile->fid.persistent_fid, .volatile_fid = wdata->req->cfile->fid.volatile_fid, - .pid = wdata->pid, + .pid = wdata->req->pid, }; io_parms = &_io_parms; From patchwork Thu Jun 20 17:31:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705994 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3B2DC2BA1A for ; Thu, 20 Jun 2024 17:33:39 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 136A08D00CB; Thu, 20 Jun 2024 13:33:39 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0B91A8D00AF; Thu, 20 Jun 2024 13:33:39 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E74378D00CB; Thu, 20 Jun 2024 13:33:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id CA2088D00AF for ; Thu, 20 Jun 2024 13:33:38 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 860378044B for ; Thu, 20 Jun 2024 17:33:38 +0000 (UTC) X-FDA: 82251964116.13.0716063 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf14.hostedemail.com (Postfix) with ESMTP id D06FA100006 for ; Thu, 20 Jun 2024 17:33:36 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=UZBU8D2k; spf=pass (imf14.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1718904807; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=jVPcNLptz8EyYlOezgzJ7ddJroieefDkzwJH2DWe9Tc=; b=NQXo34SQNi4gwR4IAfLkUuBc75pw9dW/yiH2tGHSmXnZZZH/rJYECCaHiFm0Rg4QcyFvC9 JeeilkacJqB7BcRBO6FRFnBFX61BVqnGWLMVppdyXGhRur/BfOLPEUXj8PUerTpIiml2Or 
OJKfRvaNWe5NEDnIFfSHaylaM4E1WGs= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1718904807; a=rsa-sha256; cv=none; b=vYq7sV7tVqJoGol/hasCWjGDEEIotfq/dnjNPjySckpB+MBu7AYqnqUVNJO84YNxBcc6/P cwDBJ1JVZeBsoXFPsNFO+vuV7zxNUSkJ7131gOS1bUrlG1uIfuSV4qtfyLG30xbkbWhob8 PWzqKyKT0A39rSZlEREu9JBo/1TGC5g= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=UZBU8D2k; spf=pass (imf14.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1718904816; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jVPcNLptz8EyYlOezgzJ7ddJroieefDkzwJH2DWe9Tc=; b=UZBU8D2kRLelkMBC2xCxGL4KfFZFhRWilsgvPKAUq7pDwUMZUYjt2HhwCiNb+0Q1oqeRyB 3+RrwDC4lV4DRQx++TugtzH9b3YShydYstfVDJKV4Fhhm/kNjY6tW9dQoeM3KfmeUWVAgX tmJr0mW8YtnRGkBPTdAqz4ivBldUrpE= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-615-qYsG-IlmNlavq9EIcpvwFQ-1; Thu, 20 Jun 2024 13:33:29 -0400 X-MC-Unique: qYsG-IlmNlavq9EIcpvwFQ-1 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id DF30519560BA; Thu, 20 Jun 2024 17:33:26 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.39.195.156]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 7B3F71956087; Thu, 20 Jun 2024 17:33:20 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/17] netfs: Move max_len/max_nr_segs from netfs_io_subrequest to netfs_io_stream Date: Thu, 20 Jun 2024 18:31:30 +0100 Message-ID: <20240620173137.610345-13-dhowells@redhat.com> In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com> References: <20240620173137.610345-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: D06FA100006 X-Stat-Signature: cd9q5wdecuxega3qsmikz3otsjjta6q3 X-HE-Tag: 1718904816-184812 X-HE-Meta: 
U2FsdGVkX1+yTqSKMn4/96EiAPBNW9ttnmSXDNCds5bmRfj7ltYiD3oKzuQYSGAOWmcaRI9kgwRRHtGTAaYZByA0CUj8Kv1WihpYvjYjQKwbusvEchxenG/W00re8STmk1pze93a9F3Ryf9YwtHsSFpi7/YZZiOHagDbuGopeFOYkNihaxUCSWopfL6E/hsGsbtJkD0zOHyU6xDlwe4qSbLgH+Eb2qCR+lrlfM94D7D8oryX4OYLUSxfmHBQ76vQ12SECYu1WDQ0DAyEGCqr8MOtIWabAG19A92JE1+3iCOh3XJoP5vo9aRBv4/gjDLUimFH1GZ4JfRC7zDCL9AohurdaSA0raztBX3LNbf49RQkCxriRWKz2EeYvSykQf/xEn4Tk0nwjBvyjNitZiflktW9W7edrYklYVuGuYI0e3an/yYStYswKuPQ3IzN6F6R8CTkaOgRvCE529zqf8l0oDZQ3eMVN4NNLgMWQxXOR7cp7OTHAGycx4Zcyf53SxQ3tTGlZ5IISQX/gnSIIgKruNr2zliJvMlv5J9I7Z5J1Ozl8QqTnyytS5viaS83DJTKYA62VWEv2Nr1oWlInbT/fF5znt21PiLwCuiqZqLJ4TfxJfCi7AW25F0g2qSQ5am04PGNqve4Oe7xOABf8wyUqGdPtj3q3qR4GgrSndtdv8/0mUTwMxZ1iE5+H6bnJP4ltBe4Up+NcdXM29MLOswEl1/t+l30HJtxRpL+43dBR2Flxw43C4DrNfpt8yO7sLJmBrOzxIDbt5VOdZn30TwiJvPe9jGBpL3pm3TWDDzJqPiRrjCi5eee27xASTo+3kNdBhXr7d7Up4liUKLdnaCFtGmWrhCSlEVGhPcjUjaXQnOuSJNTJ8hmozNOC/zT/CqAoClxXPNr/6z8OTwXYdVQ1kmAb9xapbKPDMq+NdliJmLnVRMVQjdM2H0dZOIL4tM9le6q/3qAGoGhGL15iW2 cMtKG7ga OiE4xFhF9Lq2iesPjqHDSjc09lYOnLP9qTZWsECns8zm8mx7tEErM0I1IHtlO3bQlZa1VvOodG+wMD0IjrkUWk7T/ojhL0tsGSNNIZGm8fpd0P6xejDq0LlgeDfQrh8OBGYJqa7xv5jN74IfyWs9lCGOgFi3ZKgwrpXFYOH2GKAndc7BavJjZdesIuTianctjZmAvkqL+BVQmnewtm+bC3Dh/P6tMRkkl3w95Lfw5oiNus+iFY8drFocPQS/vuUlWyQNIJXbVrWNZ+1huSOMjOdtXig1Yh5a6l4NYMSPfEoWIi3NWrZ+vdMIO9lkZDYTu0HzyJo97R4qvFQohGeXRyxByVWNbi/Ok+4BZsVKtsp3fUMFqW9BA21jW96GGy8kfnCYsQogcCpUYgcfCri8oa7nglQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Move max_len/max_nr_segs from struct netfs_io_subrequest to struct netfs_io_stream as we only issue one subreq at a time and then don't need these values again for that subreq unless and until we have to retry it - in which case we want to renegotiate them. 
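(Illustrative sketch, not part of the patch: with the limits held on the stream, a filesystem's ->prepare_write() publishes them once per stream and the issuer clamps each subrequest against them. example_prepare_write() below is a made-up stand-in for a filesystem hook; the field names are the ones introduced by this patch.)

	/* Sketch: ->prepare_write() sets per-stream limits instead of per-subreq ones. */
	static void example_prepare_write(struct netfs_io_subrequest *subreq)
	{
		struct netfs_io_stream *stream =
			&subreq->rreq->io_streams[subreq->stream_nr];

		stream->sreq_max_len  = 256 * 1024;	/* e.g. a negotiated wsize */
		stream->sreq_max_segs = 16;		/* e.g. a transport segment limit */
	}

	/* The issuer then sizes each subrequest from the stream limits: */
	part = umin(stream->sreq_max_len - subreq->len, len);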
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/afs/write.c | 4 +++- fs/cachefiles/io.c | 5 +++-- fs/netfs/io.c | 4 ++-- fs/netfs/write_collect.c | 10 +++++----- fs/netfs/write_issue.c | 14 +++++++------- fs/smb/client/file.c | 15 ++++++++------- include/linux/netfs.h | 4 ++-- 7 files changed, 30 insertions(+), 26 deletions(-) diff --git a/fs/afs/write.c b/fs/afs/write.c index e959640694c2..34107b55f834 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -89,10 +89,12 @@ static const struct afs_operation_ops afs_store_data_operation = { */ void afs_prepare_write(struct netfs_io_subrequest *subreq) { + struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr]; + //if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) // subreq->max_len = 512 * 1024; //else - subreq->max_len = 256 * 1024 * 1024; + stream->sreq_max_len = 256 * 1024 * 1024; } /* diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c index e667dbcd20e8..a32f442b559b 100644 --- a/fs/cachefiles/io.c +++ b/fs/cachefiles/io.c @@ -627,11 +627,12 @@ static void cachefiles_prepare_write_subreq(struct netfs_io_subrequest *subreq) { struct netfs_io_request *wreq = subreq->rreq; struct netfs_cache_resources *cres = &wreq->cache_resources; + struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr]; _enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start); - subreq->max_len = ULONG_MAX; - subreq->max_nr_segs = BIO_MAX_VECS; + stream->sreq_max_len = UINT_MAX; + stream->sreq_max_segs = BIO_MAX_VECS; if (!cachefiles_cres_file(cres)) { if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) diff --git a/fs/netfs/io.c b/fs/netfs/io.c index c93851b98368..27dbea0f3867 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -469,9 +469,9 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq, goto out; } - if (subreq->max_nr_segs) { + if (rreq->io_streams[0].sreq_max_segs) { lsize = netfs_limit_iter(io_iter, 0, subreq->len, - subreq->max_nr_segs); + rreq->io_streams[0].sreq_max_segs); if (subreq->len > lsize) { subreq->len = lsize; trace_netfs_sreq(subreq, netfs_sreq_trace_limited); diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 426cf87aaf2e..e105ac270090 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -231,7 +231,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); stream->prepare_write(subreq); - part = min(len, subreq->max_len); + part = min(len, stream->sreq_max_len); subreq->len = part; subreq->start = start; subreq->transferred = 0; @@ -271,8 +271,6 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, subreq = netfs_alloc_subrequest(wreq); subreq->source = to->source; subreq->start = start; - subreq->max_len = len; - subreq->max_nr_segs = INT_MAX; subreq->debug_index = atomic_inc_return(&wreq->subreq_counter); subreq->stream_nr = to->stream_nr; __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); @@ -286,10 +284,12 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, to = list_next_entry(to, rreq_link); trace_netfs_sreq(subreq, netfs_sreq_trace_retry); + stream->sreq_max_len = len; + stream->sreq_max_segs = INT_MAX; switch (stream->source) { case NETFS_UPLOAD_TO_SERVER: netfs_stat(&netfs_n_wh_upload); - subreq->max_len = min(len, wreq->wsize); + stream->sreq_max_len = umin(len, wreq->wsize); break; case NETFS_WRITE_TO_CACHE: netfs_stat(&netfs_n_wh_write); @@ -300,7 +300,7 @@ static void 
netfs_retry_write_stream(struct netfs_io_request *wreq, stream->prepare_write(subreq); - part = min(len, subreq->max_len); + part = umin(len, stream->sreq_max_len); subreq->len = subreq->transferred + part; len -= part; start += part; diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index cd3ddf07ab49..c87e72a3b16d 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -155,8 +155,6 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, subreq = netfs_alloc_subrequest(wreq); subreq->source = stream->source; subreq->start = start; - subreq->max_len = ULONG_MAX; - subreq->max_nr_segs = INT_MAX; subreq->stream_nr = stream->stream_nr; _enter("R=%x[%x]", wreq->debug_id, subreq->debug_index); @@ -167,10 +165,12 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); + stream->sreq_max_len = UINT_MAX; + stream->sreq_max_segs = INT_MAX; switch (stream->source) { case NETFS_UPLOAD_TO_SERVER: netfs_stat(&netfs_n_wh_upload); - subreq->max_len = wreq->wsize; + stream->sreq_max_len = wreq->wsize; break; case NETFS_WRITE_TO_CACHE: netfs_stat(&netfs_n_wh_write); @@ -287,13 +287,13 @@ int netfs_advance_write(struct netfs_io_request *wreq, netfs_prepare_write(wreq, stream, start); subreq = stream->construct; - part = min(subreq->max_len - subreq->len, len); - _debug("part %zx/%zx %zx/%zx", subreq->len, subreq->max_len, part, len); + part = umin(stream->sreq_max_len - subreq->len, len); + _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, len); subreq->len += part; subreq->nr_segs++; - if (subreq->len >= subreq->max_len || - subreq->nr_segs >= subreq->max_nr_segs || + if (subreq->len >= stream->sreq_max_len || + subreq->nr_segs >= stream->sreq_max_segs || to_eof) { netfs_issue_write(wreq, stream); subreq = NULL; diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 45c860f0e7fd..4732c63f7531 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -49,6 +49,7 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq) struct cifs_io_subrequest *wdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = wdata->req; + struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr]; struct TCP_Server_Info *server; struct cifsFileInfo *open_file = req->cfile; size_t wsize = req->rreq.wsize; @@ -73,7 +74,7 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq) } } - rc = server->ops->wait_mtu_credits(server, wsize, &wdata->subreq.max_len, + rc = server->ops->wait_mtu_credits(server, wsize, &stream->sreq_max_len, &wdata->credits); if (rc < 0) { subreq->error = rc; @@ -82,7 +83,7 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq) #ifdef CONFIG_CIFS_SMB_DIRECT if (server->smbd_conn) - subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; + stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; #endif } @@ -134,11 +135,11 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq) static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; + struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr]; struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); struct TCP_Server_Info *server = req->server; struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); - size_t rsize = 0; int rc; 
rdata->xid = get_xid(); @@ -151,17 +152,17 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) cifs_sb->ctx); - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, &rsize, - &rdata->credits); + rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, + &stream->sreq_max_len, &rdata->credits); if (rc) { subreq->error = rc; return false; } - subreq->len = min_t(size_t, subreq->len, rsize); + subreq->len = umin(subreq->len, stream->sreq_max_len); #ifdef CONFIG_CIFS_SMB_DIRECT if (server->smbd_conn) - subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; + stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; #endif return true; } diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 2d438aaae685..aada40d2182b 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -136,6 +136,8 @@ static inline struct netfs_group *netfs_folio_group(struct folio *folio) struct netfs_io_stream { /* Submission tracking */ struct netfs_io_subrequest *construct; /* Op being constructed */ + size_t sreq_max_len; /* Maximum size of a subrequest */ + unsigned int sreq_max_segs; /* 0 or max number of segments in an iterator */ unsigned int submit_off; /* Folio offset we're submitting from */ unsigned int submit_len; /* Amount of data left to submit */ unsigned int submit_max_len; /* Amount I/O can be rounded up to */ @@ -179,14 +181,12 @@ struct netfs_io_subrequest { struct list_head rreq_link; /* Link in rreq->subrequests */ struct iov_iter io_iter; /* Iterator for this subrequest */ unsigned long long start; /* Where to start the I/O */ - size_t max_len; /* Maximum size of the I/O */ size_t len; /* Size of the I/O */ size_t transferred; /* Amount of data transferred */ refcount_t ref; short error; /* 0 or error that occurred */ unsigned short debug_index; /* Index in list (for debugging output) */ unsigned int nr_segs; /* Number of segs in io_iter */ - unsigned int max_nr_segs; /* 0 or max number of segments in an iterator */ enum netfs_io_source source; /* Where to read from/write to */ unsigned char stream_nr; /* I/O stream this belongs to */ unsigned long flags; From patchwork Thu Jun 20 17:31:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705995 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9D54C2BB85 for ; Thu, 20 Jun 2024 17:33:47 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 74A648D00CC; Thu, 20 Jun 2024 13:33:47 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6F9828D00AF; Thu, 20 Jun 2024 13:33:47 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 525B18D00CC; Thu, 20 Jun 2024 13:33:47 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 2BB648D00AF for ; Thu, 20 Jun 2024 13:33:47 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id BB0DD1403FD for ; Thu, 20 Jun 2024 17:33:46 +0000 (UTC) X-FDA: 82251964452.12.B6BACC5 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf30.hostedemail.com (Postfix) with 
ESMTP id 7DCD780014 for ; Thu, 20 Jun 2024 17:33:43 +0000 (UTC) Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=b+AO2kXY; spf=pass (imf30.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1718904814; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=gG5hZ5+44vznxIKh5hd9WtB53dbFSaPHyJMjHSD03HM=; b=zCVxn+kH2tyeTz8KFesXYwRw31Jeme1MEjNOSCHXjFNB22/CfrFDYQnK14fgUF9G3C6X04 6Nb3//Of4r3cxOO0agJCJp7xlem9I0MmmIStBqQwakodB8171I0fkPPsQFoMXTWrA1Ijfz DzlgpNEK51+mhXpR0H5DHOpNmUrbA24= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1718904814; a=rsa-sha256; cv=none; b=BrD7Tpm5wMKPKPT8GG2QAArlmv9Epwl/BE3NbAweEsslThUWXluF8Dr4ZFPryqSW2svvs1 ng4cgQ3mv+Vwcef+5Uwyu6nbZ2sqOjy5vtHoBjTRMt60OB+WpoTStQ9MaDAG3LgN0InwLW ZGykmepfOkyqCtFxB+pRDff0LBJE4Uw= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=b+AO2kXY; spf=pass (imf30.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1718904822; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gG5hZ5+44vznxIKh5hd9WtB53dbFSaPHyJMjHSD03HM=; b=b+AO2kXYFSnaeEy2yv6YQ3CE3hfP5cslCXLawo3vrpQdCX7MbCfglEHY2h8wKrb0efUTY2 RQOfoX1tG3j+DSX2Noelz+yy8NRAyfnBEhTWua+VFiPK5xQbSxKjLexvAnCnlGHdNFz3h2 Z4Mndl4NVQdsoYYuFunWDDQAVW7DAEc= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-447-UzIqrurqMCa_Gx5nsIh9ZQ-1; Thu, 20 Jun 2024 13:33:39 -0400 X-MC-Unique: UzIqrurqMCa_Gx5nsIh9ZQ-1 Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 7614B19560B2; Thu, 20 Jun 2024 17:33:35 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.39.195.156]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 5847419560AF; Thu, 20 Jun 2024 17:33:27 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, 
netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Gao Xiang , Mike Marshall , devel@lists.orangefs.org Subject: [PATCH 13/17] mm: Define struct sheaf and ITER_SHEAF to handle a sequence of folios Date: Thu, 20 Jun 2024 18:31:31 +0100 Message-ID: <20240620173137.610345-14-dhowells@redhat.com> In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com> References: <20240620173137.610345-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Rspamd-Queue-Id: 7DCD780014 X-Stat-Signature: 41y8ywnm5eqhixdjq8pt81nt7y1bxcr7 X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1718904823-597391 X-HE-Meta: U2FsdGVkX1+X1svBASuP5T4PFWmBQLMaxrQaAA5HLeftYkMZ1Jboa/NEP986JQSO7J1sQCBSaKQUipDZWhYzpeirgD7SWsRRUiVDaq8+R4vM996mF/EnbD+/CSfOHad2bdW3tb+AQIVkZ9/p/tUT2XCKQo/v2D2SzDuzQzSI+jsLbXCjEsYNG/1/dfHP0NKt8PuluBm7wfQhblxfXOL/djIy5qLMEVR2Xtpnhcz/3cvFLlpVwlaaG+RgP/GKSXZnGR56ZVPS8SQcy8N/xymGxzcTZFMPBMgX63cQeT4+r3L7CgOF4rVD4SNiWcZ5IBsP8+qN+a4jZQN8DIMkuuenRTZA6CpSMhv16IUlE1KE9F6EbTS5dOEk0j0uPmlNiTzrabGnEzw1UE0ATaSKZaL/VbMNOsQIROcU2ev2HM581N/8xJxcKkwtKEgmZB816iAIr/OVOwTHD95pQshR0gyBIJsSpeF29CIGUP//4h6E23qXB9UC31dY9ZXpBfLDE5F+O87CrW9lZ9s7NbNPkbbU4xkQ8SSwG5M8tafQJ6NMauHygNQbdoy6KvP6tqrbvxSh+xekHyGuxv7QvBlXvPf7MQRsidZX+Nv9zvERCqt5H/BmXYz/sgFE7OdkpfqGre68VZqqpYdjhL74dRgPfihervvmHkSaBKKVJ/x4jeu9fhvp/Bvk3ZPY62SG5b36NB44yLLiFv960AoPPm0ANCH6JheqTOkzS8/I4sqI+qPoHWqnX6N07J5Ok/XRRL3NhTe5dS1Vn36OdbYowwGcwbioeAzXCJW6OEEsM3eae1d3zOkYQsRfvI+Mi1KNbb2SbPwiFlrP3eN2BespMImlC6VdJPujTqw3VU71NpdSOBs5sFHG28xiShHGJEubBbh85zmR0nPlkYxUi8fXk2NDm2BLsGzXMRWnLUNEFp7aoivWbThdJeLM21EgaF8Iz2EIaskTubQlFu8IYOjBiFaGTlQ NHO+uscb v1YC5g7tAsed1GCvGJOUVFVTwBfbZ+E8ecAP+7+DApQY+b5xCmywBF2guwzoAb8ly5jJ1yD2ZmiDCiYV/7v/9G1sdYHERoDwEdVe4nMW+Pko8eRe6aPMcqFthwHCEDKXEGvUGqxzmSdfz92rQFr3P1Nn75m046xU92/4NpwCZgS4YJRMoh29H8MQ+JVALO5DpIuevyWKsVIKPqelWwC+eufElgksYZAxvhQ9WX5CzOun1XpSkIlEXl/7uUcT6cS6pxDcKPGPIhfnO9NJ7F0JVXoCTmnfMvEJdQOV9DPzI2/5z0LVhjyKklfZZPSZZ6jTsyUkrTkK4A2XW8x4KZ+uo2rJKnxnuAqauxuhzeDq5TS8QN72RqLPzLw/1Pgs4k5b+IBD6lw0aMbjI+839WTQ74mH+1gGXfRzeLxiES1yP6YkkSsxpyVw3hZVj5oQr9OCDWqK1R/199dkf+Q8pnMSs9g96d2MoZwZbq5QO8X3plNeTvSRHlNmMqvJ81oFRqU85MLmGFGcbt819ozQMQU/kdz5b2O9mJyl+K2vmhQlcK03cjxWJrwIq35UIOy5BfkESI8lMdcQNxNNEl2FkXw2oghkX/hDTnM2Vqt33exGPquJLz7pbepLq0ozeD/VXGP2XojtNg8wTwFV6zJBpnslQKncEmYwCxW/PZcMXvrvl7mP1KOQ/z7t9/DqocXJglTsg/OEcZAfrb9Bpu40VI/aquxS30FW64Y8WG+TSfTpYpkf9QVOi7jePeNU0GfUXVG5zaMPFNUvKQwiwd7elxGadjATxVTgVu7enotcuU9fBWB+mG9v+v79Ua+W7rA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Define a data structure, struct sheaf, to represent a sequence of folios and a kernel-internal I/O iterator type, ITER_SHEAF, to allow a list of sheaf structures to be used to provide a buffer to iov_iter-taking functions, such as sendmsg and recvmsg. The sheaf structure looks like: struct sheaf { struct sheaf *next; struct sheaf *prev; sheaf_slot_t slots[29]; u8 orders[29]; }; It does not use a list_head so that next and/or prev can be set to NULL at the ends of the list, allowing iov_iter-handling routines to determine that they *are* the ends without needing to store the head pointer in the iov_iter struct. The slots are a folio pointer with the bottom two bits usable for storing marks. The intention is to use at least one of them to mark folios that need putting, but that might not be ultimately necessary. 
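(Illustrative aside, not part of the message: the mark described above is simply folded into the low bits of the folio pointer held in the slot; the real helpers are in the sheaf.h hunk further down. A condensed sketch of the encoding:)

	/* Sketch: bit 0 of a slot carries a generic mark, the upper bits the folio pointer. */
	slot  = (sheaf_slot_t)((unsigned long)folio | (mark ? 0x01UL : 0));
	folio = (struct folio *)((unsigned long)slot & ~0x03UL);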
Accessor functions are used to access the slots to do the masking and an additional accessor function is used to indicate the size of the array.

The order of each folio is also stored in the structure to avoid the need for iov_iter_advance() and iov_iter_revert() to have to query each folio to find its size.

With careful barriering, this can be used as an extending buffer with new folios inserted and new sheaf structs added without the need for a lock. Further, provided we always keep at least one struct in the buffer, we can also remove consumed folios and consumed structs from the head end as we go, without the need for locks.

[Questions/thoughts]

 (1) I'm told that 'sheaf' is now being used for something else mm-related out-of-tree, so I should probably change the name to something else - but what? Continuing on the papery theme, "scroll" and "fanfold" have occurred to me, but they're probably not the best. Maybe "folio_queue" or "folio_seq"?

 (2) To manage this, I need a head pointer, a tail pointer, a tail slot number (assuming insertion happens at the tail end and the next pointers point from head to tail). Should I put these into a struct of their own, say "sheaf_list" or "rolling_buffer" or "folio_queue"? I will end up with two of these in netfs_io_request eventually, one keeping track of the pagecache I'm dealing with for buffered I/O and the other to hold a bounce buffer when we need one.

 (3) I have set the number of elements to 29 so that it fits into a 32-word allocation (either 256 bytes or 512 bytes). However, should I replace slots[] with a folio_batch struct? The disadvantage of that is that it adds a bunch of fields I don't need for the purpose of the buffer and pushes the size of the struct to a non-pow-2 size. The advantage would be that I can pass the batch directly to a variety of mm routines.

 (4) Should I make the slots {folio,off,len} or bio_vec?

 (5) This is intended to replace ITER_XARRAY eventually. Using an xarray in I/O iteration requires the taking of the RCU read lock, doing copying under the RCU read lock, walking the xarray (which may change under us), handling retries and dealing with special values. The advantage of ITER_XARRAY is that when we're dealing with the pagecache directly, we don't need any allocation - but if we're doing encrypted comms, there's a good chance we'd be using a bounce buffer anyway.

     This will require afs, erofs, cifs and fscache to be converted to not use this. afs still uses it for dirs and symlinks; some of the erofs usages should be easy to change, but there's one which won't be so easy; ceph's use via fscache can be fixed by porting ceph to netfslib; cifs is using xarray as a bounce buffer - that can be moved to use sheaves instead; and orangefs has a similar problem to erofs - maybe orangefs could use netfslib?
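(Illustrative sketch, not part of the patch: a rough example of how a caller might load order-0 folios into a single sheaf and describe it with an ITER_SHEAF iterator, using only helpers added by this patch. example_sheaf_source() is a made-up name; the folios are assumed to be already held by the caller, who also retains ownership of the sheaf and must free it after the I/O completes.)

	/* Sketch: fill one sheaf segment and point an iterator at it. */
	static int example_sheaf_source(struct iov_iter *iter,
					struct folio **folios, unsigned int nr,
					size_t bytes)
	{
		struct sheaf *sheaf;
		unsigned int i;

		sheaf = kzalloc(sizeof(*sheaf), GFP_KERNEL);
		if (!sheaf)
			return -ENOMEM;

		/* orders[] stays zeroed, i.e. order-0 (PAGE_SIZE) folios are assumed. */
		for (i = 0; i < nr && i < sheaf_nr_slots(sheaf); i++)
			sheaf_slot_set_folio(sheaf, i, folios[i]);

		/* WRITE: the sheaf acts as a data source (e.g. for sendmsg). */
		iov_iter_sheaf(iter, WRITE, sheaf, 0, 0, bytes);
		return 0;
	}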
Signed-off-by: David Howells cc: Matthew Wilcox cc: Jeff Layton cc: Steve French cc: Ilya Dryomov cc: Gao Xiang cc: Mike Marshall cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: linux-afs@lists.infradead.org cc: linux-cifs@vger.kernel.org cc: ceph-devel@vger.kernel.org cc: linux-erofs@lists.ozlabs.org cc: devel@lists.orangefs.org --- include/linux/iov_iter.h | 57 +++++++++ include/linux/sheaf.h | 83 +++++++++++++ include/linux/uio.h | 11 ++ lib/iov_iter.c | 222 ++++++++++++++++++++++++++++++++- lib/kunit_iov_iter.c | 259 +++++++++++++++++++++++++++++++++++++++ lib/scatterlist.c | 69 ++++++++++- 6 files changed, 697 insertions(+), 4 deletions(-) create mode 100644 include/linux/sheaf.h diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h index 270454a6703d..39e8df54dffe 100644 --- a/include/linux/iov_iter.h +++ b/include/linux/iov_iter.h @@ -10,6 +10,7 @@ #include #include +#include typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len, void *priv, void *priv2); @@ -140,6 +141,60 @@ size_t iterate_bvec(struct iov_iter *iter, size_t len, void *priv, void *priv2, return progress; } +/* + * Handle ITER_SHEAF. + */ +static __always_inline +size_t iterate_sheaf(struct iov_iter *iter, size_t len, void *priv, void *priv2, + iov_step_f step) +{ + const struct sheaf *sheaf = iter->sheaf; + unsigned int slot = iter->sheaf_slot; + size_t progress = 0, skip = iter->iov_offset; + + if (slot == sheaf_nr_slots(sheaf)) { + /* The iterator may have been extended. */ + sheaf = sheaf->next; + slot = 0; + } + + do { + struct folio *folio = sheaf_slot_folio(sheaf, slot); + size_t part, remain, consumed; + size_t fsize; + void *base; + + if (!folio) + break; + + fsize = sheaf_folio_size(sheaf, slot); + base = kmap_local_folio(folio, skip); + part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); + remain = step(base, progress, part, priv, priv2); + kunmap_local(base); + consumed = part - remain; + len -= consumed; + progress += consumed; + skip += consumed; + if (skip >= fsize) { + skip = 0; + slot++; + if (slot == sheaf_nr_slots(sheaf) && sheaf->next) { + sheaf = sheaf->next; + slot = 0; + } + } + if (remain) + break; + } while (len); + + iter->sheaf_slot = slot; + iter->sheaf = sheaf; + iter->iov_offset = skip; + iter->count -= progress; + return progress; +} + /* * Handle ITER_XARRAY. */ @@ -249,6 +304,8 @@ size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv, return iterate_bvec(iter, len, priv, priv2, step); if (iov_iter_is_kvec(iter)) return iterate_kvec(iter, len, priv, priv2, step); + if (iov_iter_is_sheaf(iter)) + return iterate_sheaf(iter, len, priv, priv2, step); if (iov_iter_is_xarray(iter)) return iterate_xarray(iter, len, priv, priv2, step); return iterate_discard(iter, len, priv, priv2, step); diff --git a/include/linux/sheaf.h b/include/linux/sheaf.h new file mode 100644 index 000000000000..57f07214063c --- /dev/null +++ b/include/linux/sheaf.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* Sheaf of folios definitions + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#ifndef _LINUX_SHEAF_H +#define _LINUX_SHEAF_H + +#include + +typedef struct sheaf_slot *sheaf_slot_t; + +/* + * Segment in a queue of running buffers. Each segment can hold a number of + * folios and a portion of the queue can be referenced with the ITER_SHEAF + * iterator. 
The possibility exists of inserting non-folio elements into the + * queue (such as gaps). + * + * Explicit prev and next pointers are used instead of a list_head to make it + * easier to add segments to tail and remove them from the head without the + * need for a lock. + */ +struct sheaf { + struct sheaf *next; + struct sheaf *prev; + sheaf_slot_t slots[29]; +#define SHEAF_SLOT_FOLIO_MARK 0x01UL /* Bit 0 in a folio ptr is a generic mark */ + /* Bit 1 is reserved. */ +#define SHEAF_SLOT_FOLIO_PTR ~0x03UL /* Bit 2.. are the folio ptr */ + u8 orders[29]; /* Order of each folio */ +}; + +static inline unsigned int sheaf_nr_slots(const struct sheaf *sheaf) +{ + return ARRAY_SIZE(sheaf->slots); +} + +static inline struct folio *sheaf_slot_folio(const struct sheaf *sheaf, unsigned int slot) +{ + return (struct folio *)((unsigned long)sheaf->slots[slot] & SHEAF_SLOT_FOLIO_PTR); +} + +static inline bool sheaf_slot_is_folio(const struct sheaf *sheaf, unsigned int slot) +{ + return true; +} + +static inline size_t sheaf_folio_size(const struct sheaf *sheaf, unsigned int slot) +{ + return PAGE_SIZE << sheaf->orders[slot]; +} + +static inline bool sheaf_slot_is_marked(const struct sheaf *sheaf, unsigned int slot) +{ + /* Must check is_folio first */ + return (unsigned long)sheaf->slots[slot] & SHEAF_SLOT_FOLIO_MARK; +} + +static inline sheaf_slot_t sheaf_make_folio(struct folio *folio, bool mark) +{ + return (sheaf_slot_t)((unsigned long)folio | (mark ? SHEAF_SLOT_FOLIO_MARK : 0)); +} + +static inline void sheaf_slot_set(struct sheaf *sheaf, unsigned int slot, sheaf_slot_t val) +{ + sheaf->slots[slot] = val; +} + +static inline void sheaf_slot_set_folio(struct sheaf *sheaf, unsigned int slot, + struct folio *folio) +{ + sheaf_slot_set(sheaf, slot, sheaf_make_folio(folio, false)); +} + +static inline void sheaf_slot_set_folio_marked(struct sheaf *sheaf, unsigned int slot, + struct folio *folio) +{ + sheaf_slot_set(sheaf, slot, sheaf_make_folio(folio, true)); +} + +#endif /* _LINUX_SHEAF_H */ diff --git a/include/linux/uio.h b/include/linux/uio.h index 7020adedfa08..55ca3d0c7d48 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -11,6 +11,7 @@ #include struct page; +struct sheaf; typedef unsigned int __bitwise iov_iter_extraction_t; @@ -25,6 +26,7 @@ enum iter_type { ITER_IOVEC, ITER_BVEC, ITER_KVEC, + ITER_SHEAF, ITER_XARRAY, ITER_DISCARD, }; @@ -66,6 +68,7 @@ struct iov_iter { const struct iovec *__iov; const struct kvec *kvec; const struct bio_vec *bvec; + const struct sheaf *sheaf; struct xarray *xarray; void __user *ubuf; }; @@ -74,6 +77,7 @@ struct iov_iter { }; union { unsigned long nr_segs; + u8 sheaf_slot; loff_t xarray_start; }; }; @@ -126,6 +130,11 @@ static inline bool iov_iter_is_discard(const struct iov_iter *i) return iov_iter_type(i) == ITER_DISCARD; } +static inline bool iov_iter_is_sheaf(const struct iov_iter *i) +{ + return iov_iter_type(i) == ITER_SHEAF; +} + static inline bool iov_iter_is_xarray(const struct iov_iter *i) { return iov_iter_type(i) == ITER_XARRAY; @@ -273,6 +282,8 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec, unsigned long nr_segs, size_t count); void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); +void iov_iter_sheaf(struct iov_iter *i, unsigned int direction, const struct sheaf *sheaf, + unsigned int first_slot, unsigned int offset, size_t count); void iov_iter_xarray(struct iov_iter *i, unsigned int 
direction, struct xarray *xarray, loff_t start, size_t count); ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages, diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 4a6a9f419bd7..d7b36ce32f90 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -527,6 +527,39 @@ static void iov_iter_iovec_advance(struct iov_iter *i, size_t size) i->__iov = iov; } +static void iov_iter_sheaf_advance(struct iov_iter *i, size_t size) +{ + const struct sheaf *sheaf = i->sheaf; + unsigned int slot = i->sheaf_slot; + + if (!i->count) + return; + i->count -= size; + + if (slot >= sheaf_nr_slots(sheaf)) { + sheaf = sheaf->next; + slot = 0; + } + + size += i->iov_offset; /* From beginning of current segment. */ + do { + size_t fsize = sheaf_folio_size(sheaf, slot); + + if (likely(size < fsize)) + break; + size -= fsize; + slot++; + if (slot >= sheaf_nr_slots(sheaf) && sheaf->next) { + sheaf = sheaf->next; + slot = 0; + } + } while (size); + + i->iov_offset = size; + i->sheaf_slot = slot; + i->sheaf = sheaf; +} + void iov_iter_advance(struct iov_iter *i, size_t size) { if (unlikely(i->count < size)) @@ -539,12 +572,40 @@ void iov_iter_advance(struct iov_iter *i, size_t size) iov_iter_iovec_advance(i, size); } else if (iov_iter_is_bvec(i)) { iov_iter_bvec_advance(i, size); + } else if (iov_iter_is_sheaf(i)) { + iov_iter_sheaf_advance(i, size); } else if (iov_iter_is_discard(i)) { i->count -= size; } } EXPORT_SYMBOL(iov_iter_advance); +static void iov_iter_sheaf_revert(struct iov_iter *i, size_t unroll) +{ + const struct sheaf *sheaf = i->sheaf; + unsigned int slot = i->sheaf_slot; + + for (;;) { + size_t fsize; + + if (slot == 0) { + sheaf = sheaf->prev; + slot = sheaf_nr_slots(sheaf); + } + slot--; + + fsize = sheaf_folio_size(sheaf, slot); + if (unroll <= fsize) { + i->iov_offset = fsize - unroll; + break; + } + unroll -= fsize; + } + + i->sheaf_slot = slot; + i->sheaf = sheaf; +} + void iov_iter_revert(struct iov_iter *i, size_t unroll) { if (!unroll) @@ -576,6 +637,9 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll) } unroll -= n; } + } else if (iov_iter_is_sheaf(i)) { + i->iov_offset = 0; + iov_iter_sheaf_revert(i, unroll); } else { /* same logics for iovec and kvec */ const struct iovec *iov = iter_iov(i); while (1) { @@ -603,6 +667,9 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i) if (iov_iter_is_bvec(i)) return min(i->count, i->bvec->bv_len - i->iov_offset); } + if (unlikely(iov_iter_is_sheaf(i))) + return !i->count ? 0 : + umin(sheaf_folio_size(i->sheaf, i->sheaf_slot), i->count); return i->count; } EXPORT_SYMBOL(iov_iter_single_seg_count); @@ -639,6 +706,36 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction, } EXPORT_SYMBOL(iov_iter_bvec); +/** + * iov_iter_sheaf - Initialise an I/O iterator to use the folios in a sheaf + * @i: The iterator to initialise. + * @direction: The direction of the transfer. + * @sheaf: The starting point in the sheaf. + * @first_slot: The first slot in the sheaf to use + * @offset: The offset into the folio in the first slot to start at + * @count: The size of the I/O buffer in bytes. + * + * Set up an I/O iterator to either draw data out of the pages attached to an + * inode or to inject data into those pages. The pages *must* be prevented + * from evaporation, either by taking a ref on them or locking them by the + * caller. 
+ */ +void iov_iter_sheaf(struct iov_iter *i, unsigned int direction, + const struct sheaf *sheaf, unsigned int first_slot, + unsigned int offset, size_t count) +{ + BUG_ON(direction & ~1); + *i = (struct iov_iter) { + .iter_type = ITER_SHEAF, + .data_source = direction, + .sheaf = sheaf, + .sheaf_slot = first_slot, + .count = count, + .iov_offset = offset, + }; +} +EXPORT_SYMBOL(iov_iter_sheaf); + /** * iov_iter_xarray - Initialise an I/O iterator to use the pages in an xarray * @i: The iterator to initialise. @@ -765,12 +862,19 @@ bool iov_iter_is_aligned(const struct iov_iter *i, unsigned addr_mask, if (iov_iter_is_bvec(i)) return iov_iter_aligned_bvec(i, addr_mask, len_mask); + /* With both xarray and sheaf types, we're dealing with whole folios. */ if (iov_iter_is_xarray(i)) { if (i->count & len_mask) return false; if ((i->xarray_start + i->iov_offset) & addr_mask) return false; } + if (iov_iter_is_sheaf(i)) { + if (i->count & len_mask) + return false; + if (i->iov_offset & addr_mask) + return false; + } return true; } @@ -835,6 +939,9 @@ unsigned long iov_iter_alignment(const struct iov_iter *i) if (iov_iter_is_bvec(i)) return iov_iter_alignment_bvec(i); + /* With both xarray and sheaf types, we're dealing with whole folios. */ + if (iov_iter_is_sheaf(i)) + return i->iov_offset | i->count; if (iov_iter_is_xarray(i)) return (i->xarray_start + i->iov_offset) | i->count; @@ -887,6 +994,51 @@ static int want_pages_array(struct page ***res, size_t size, return count; } +static ssize_t iter_sheaf_get_pages(struct iov_iter *iter, + struct page ***ppages, size_t maxsize, + unsigned maxpages, size_t *_start_offset) +{ + const struct sheaf *sheaf = iter->sheaf; + struct page **pages; + unsigned int slot = iter->sheaf_slot; + size_t extracted = 0; + + maxpages = want_pages_array(ppages, maxsize, iter->iov_offset & ~PAGE_MASK, maxpages); + if (!maxpages) + return -ENOMEM; + *_start_offset = iter->iov_offset & ~PAGE_MASK; + pages = *ppages; + + for (;;) { + struct folio *folio = sheaf_slot_folio(sheaf, slot); + size_t offset = iter->iov_offset, fsize = sheaf_folio_size(sheaf, slot); + size_t part = PAGE_SIZE - offset % PAGE_SIZE; + + part = umin(part, umin(maxsize - extracted, fsize - offset)); + iter->count -= part; + iter->iov_offset += part; + extracted += part; + + *pages++ = folio_page(folio, offset % PAGE_SIZE); + maxpages--; + if (maxpages == 0 || extracted >= maxsize) + break; + + if (offset >= fsize) { + iter->iov_offset = 0; + slot++; + if (slot == sheaf_nr_slots(sheaf) && sheaf->next) { + sheaf = sheaf->next; + slot = 0; + } + } + } + + iter->sheaf = sheaf; + iter->sheaf_slot = slot; + return extracted; +} + static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa, pgoff_t index, unsigned int nr_pages) { @@ -1034,6 +1186,8 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i, } return maxsize; } + if (iov_iter_is_sheaf(i)) + return iter_sheaf_get_pages(i, pages, maxsize, maxpages, start); if (iov_iter_is_xarray(i)) return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); return -EFAULT; @@ -1118,6 +1272,11 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages) return iov_npages(i, maxpages); if (iov_iter_is_bvec(i)) return bvec_npages(i, maxpages); + if (iov_iter_is_sheaf(i)) { + unsigned offset = i->iov_offset % PAGE_SIZE; + int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE); + return min(npages, maxpages); + } if (iov_iter_is_xarray(i)) { unsigned offset = (i->xarray_start + i->iov_offset) % PAGE_SIZE; int npages = 
DIV_ROUND_UP(offset + i->count, PAGE_SIZE); @@ -1398,6 +1557,61 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state) i->nr_segs = state->nr_segs; } +/* + * Extract a list of contiguous pages from an ITER_SHEAF iterator. This does not + * get references on the pages, nor does it get a pin on them. + */ +static ssize_t iov_iter_extract_sheaf_pages(struct iov_iter *i, + struct page ***pages, size_t maxsize, + unsigned int maxpages, + iov_iter_extraction_t extraction_flags, + size_t *offset0) +{ + const struct sheaf *sheaf = i->sheaf; + struct page **p; + unsigned int nr = 0; + size_t extracted = 0, offset, slot = i->sheaf_slot; + + offset = i->iov_offset & ~PAGE_MASK; + *offset0 = offset; + + maxpages = want_pages_array(pages, maxsize, offset, maxpages); + if (!maxpages) + return -ENOMEM; + p = *pages; + + for (;;) { + struct folio *folio = sheaf_slot_folio(sheaf, slot); + size_t offset = i->iov_offset, fsize = sheaf_folio_size(sheaf, slot); + size_t part = PAGE_SIZE - offset % PAGE_SIZE; + + if (offset < fsize) { + part = umin(part, umin(maxsize - extracted, fsize - offset)); + i->count -= part; + i->iov_offset += part; + extracted += part; + + p[nr++] = folio_page(folio, offset / PAGE_SIZE); + } + + if (nr >= maxpages || extracted >= maxsize) + break; + + if (i->iov_offset >= fsize) { + i->iov_offset = 0; + slot++; + if (slot == sheaf_nr_slots(sheaf) && sheaf->next) { + sheaf = sheaf->next; + slot = 0; + } + } + } + + i->sheaf = sheaf; + i->sheaf_slot = slot; + return extracted; +} + /* * Extract a list of contiguous pages from an ITER_XARRAY iterator. This does not * get references on the pages, nor does it get a pin on them. @@ -1618,8 +1832,8 @@ static ssize_t iov_iter_extract_user_pages(struct iov_iter *i, * added to the pages, but refs will not be taken. * iov_iter_extract_will_pin() will return true. * - * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are - * merely listed; no extra refs or pins are obtained. + * (*) If the iterator is ITER_KVEC, ITER_BVEC, ITER_SHEAF or ITER_XARRAY, the + * pages are merely listed; no extra refs or pins are obtained. * iov_iter_extract_will_pin() will return 0. 
* * Note also: @@ -1654,6 +1868,10 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i, return iov_iter_extract_bvec_pages(i, pages, maxsize, maxpages, extraction_flags, offset0); + if (iov_iter_is_sheaf(i)) + return iov_iter_extract_sheaf_pages(i, pages, maxsize, + maxpages, extraction_flags, + offset0); if (iov_iter_is_xarray(i)) return iov_iter_extract_xarray_pages(i, pages, maxsize, maxpages, extraction_flags, diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c index 27e0c8ee71d8..423bf280393f 100644 --- a/lib/kunit_iov_iter.c +++ b/lib/kunit_iov_iter.c @@ -12,6 +12,7 @@ #include #include #include +#include #include MODULE_DESCRIPTION("iov_iter testing"); @@ -62,6 +63,9 @@ static void *__init iov_kunit_create_buffer(struct kunit *test, KUNIT_ASSERT_EQ(test, got, npages); } + for (int i = 0; i < npages; i++) + pages[i]->index = i; + buffer = vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer); @@ -362,6 +366,179 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test) KUNIT_SUCCEED(test); } +static void iov_kunit_destroy_sheaf(void *data) +{ + struct sheaf *sheaf, *next; + + for (sheaf = data; sheaf; sheaf = next) { + next = sheaf->next; + for (int i = 0; i < sheaf_nr_slots(sheaf); i++) + if (sheaf->slots[i]) + folio_put(sheaf_slot_folio(sheaf, i)); + kfree(sheaf); + } +} + +static void __init iov_kunit_load_sheaf(struct kunit *test, + struct iov_iter *iter, int dir, + struct sheaf *sheaf, + struct page **pages, size_t npages) +{ + struct sheaf *p = sheaf; + unsigned int slot = 0; + size_t size = 0; + int i; + + for (i = 0; i < npages; i++) { + if (slot >= sheaf_nr_slots(p)) { + p->next = kzalloc(sizeof(struct sheaf), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p->next); + p->next->prev = p; + p = p->next; + slot = 0; + } + sheaf_slot_set_folio(p, slot++, page_folio(pages[i])); + size += PAGE_SIZE; + } + iov_iter_sheaf(iter, dir, sheaf, 0, 0, size); +} + +static struct sheaf *iov_kunit_create_sheaf(struct kunit *test) +{ + struct sheaf *sheaf; + + sheaf = kzalloc(sizeof(struct sheaf), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sheaf); + kunit_add_action_or_reset(test, iov_kunit_destroy_sheaf, sheaf); + return sheaf; +} + +/* + * Test copying to a ITER_SHEAF-type iterator. + */ +static void __init iov_kunit_copy_to_sheaf(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct sheaf *sheaf; + struct page **spages, **bpages; + u8 *scratch, *buffer; + size_t bufsize, npages, size, copied; + int i, patt; + + bufsize = 0x100000; + npages = bufsize / PAGE_SIZE; + + sheaf = iov_kunit_create_sheaf(test); + + scratch = iov_kunit_create_buffer(test, &spages, npages); + for (i = 0; i < bufsize; i++) + scratch[i] = pattern(i); + + buffer = iov_kunit_create_buffer(test, &bpages, npages); + memset(buffer, 0, bufsize); + + iov_kunit_load_sheaf(test, &iter, READ, sheaf, bpages, npages); + + i = 0; + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + size = pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_sheaf(&iter, READ, sheaf, 0, 0, pr->to); + iov_iter_advance(&iter, pr->from); + copied = copy_to_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE); + i += size; + if (test->status == KUNIT_FAILURE) + goto stop; + } + + /* Build the expected image in the scratch buffer. 
*/ + patt = 0; + memset(scratch, 0, bufsize); + for (pr = kvec_test_ranges; pr->from >= 0; pr++) + for (i = pr->from; i < pr->to; i++) + scratch[i] = pattern(patt++); + + /* Compare the images */ + for (i = 0; i < bufsize; i++) { + KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i); + if (buffer[i] != scratch[i]) + return; + } + +stop: + KUNIT_SUCCEED(test); +} + +/* + * Test copying from a ITER_SHEAF-type iterator. + */ +static void __init iov_kunit_copy_from_sheaf(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct sheaf *sheaf; + struct page **spages, **bpages; + u8 *scratch, *buffer; + size_t bufsize, npages, size, copied; + int i, j; + + bufsize = 0x100000; + npages = bufsize / PAGE_SIZE; + + sheaf = iov_kunit_create_sheaf(test); + + buffer = iov_kunit_create_buffer(test, &bpages, npages); + for (i = 0; i < bufsize; i++) + buffer[i] = pattern(i); + + scratch = iov_kunit_create_buffer(test, &spages, npages); + memset(scratch, 0, bufsize); + + iov_kunit_load_sheaf(test, &iter, READ, sheaf, bpages, npages); + + i = 0; + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + size = pr->to - pr->from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_sheaf(&iter, WRITE, sheaf, 0, 0, pr->to); + iov_iter_advance(&iter, pr->from); + copied = copy_from_iter(scratch + i, size, &iter); + + KUNIT_EXPECT_EQ(test, copied, size); + KUNIT_EXPECT_EQ(test, iter.count, 0); + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE); + i += size; + } + + /* Build the expected image in the main buffer. */ + i = 0; + memset(buffer, 0, bufsize); + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + for (j = pr->from; j < pr->to; j++) { + buffer[i++] = pattern(j); + if (i >= bufsize) + goto stop; + } + } +stop: + + /* Compare the images */ + for (i = 0; i < bufsize; i++) { + KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i); + if (scratch[i] != buffer[i]) + return; + } + + KUNIT_SUCCEED(test); +} + static void iov_kunit_destroy_xarray(void *data) { struct xarray *xarray = data; @@ -677,6 +854,85 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test) KUNIT_SUCCEED(test); } +/* + * Test the extraction of ITER_SHEAF-type iterators. 
+ */ +static void __init iov_kunit_extract_pages_sheaf(struct kunit *test) +{ + const struct kvec_test_range *pr; + struct iov_iter iter; + struct sheaf *sheaf; + struct page **bpages, *pagelist[8], **pages = pagelist; + ssize_t len; + size_t bufsize, size = 0, npages; + int i, from; + + bufsize = 0x100000; + npages = bufsize / PAGE_SIZE; + + sheaf = iov_kunit_create_sheaf(test); + + iov_kunit_create_buffer(test, &bpages, npages); + iov_kunit_load_sheaf(test, &iter, READ, sheaf, bpages, npages); + + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { + from = pr->from; + size = pr->to - from; + KUNIT_ASSERT_LE(test, pr->to, bufsize); + + iov_iter_sheaf(&iter, WRITE, sheaf, 0, 0, pr->to); + iov_iter_advance(&iter, from); + + do { + size_t offset0 = LONG_MAX; + + for (i = 0; i < ARRAY_SIZE(pagelist); i++) + pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL; + + len = iov_iter_extract_pages(&iter, &pages, 100 * 1024, + ARRAY_SIZE(pagelist), 0, &offset0); + KUNIT_EXPECT_GE(test, len, 0); + if (len < 0) + break; + KUNIT_EXPECT_LE(test, len, size); + KUNIT_EXPECT_EQ(test, iter.count, size - len); + if (len == 0) + break; + size -= len; + KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0); + KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE); + + for (i = 0; i < ARRAY_SIZE(pagelist); i++) { + struct page *p; + ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0); + int ix; + + KUNIT_ASSERT_GE(test, part, 0); + ix = from / PAGE_SIZE; + KUNIT_ASSERT_LT(test, ix, npages); + p = bpages[ix]; + KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p); + KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE); + from += part; + len -= part; + KUNIT_ASSERT_GE(test, len, 0); + if (len == 0) + break; + offset0 = 0; + } + + if (test->status == KUNIT_FAILURE) + goto stop; + } while (iov_iter_count(&iter) > 0); + + KUNIT_EXPECT_EQ(test, size, 0); + KUNIT_EXPECT_EQ(test, iter.count, 0); + } + +stop: + KUNIT_SUCCEED(test); +} + /* * Test the extraction of ITER_XARRAY-type iterators. */ @@ -761,10 +1017,13 @@ static struct kunit_case __refdata iov_kunit_cases[] = { KUNIT_CASE(iov_kunit_copy_from_kvec), KUNIT_CASE(iov_kunit_copy_to_bvec), KUNIT_CASE(iov_kunit_copy_from_bvec), + KUNIT_CASE(iov_kunit_copy_to_sheaf), + KUNIT_CASE(iov_kunit_copy_from_sheaf), KUNIT_CASE(iov_kunit_copy_to_xarray), KUNIT_CASE(iov_kunit_copy_from_xarray), KUNIT_CASE(iov_kunit_extract_pages_kvec), KUNIT_CASE(iov_kunit_extract_pages_bvec), + KUNIT_CASE(iov_kunit_extract_pages_sheaf), KUNIT_CASE(iov_kunit_extract_pages_xarray), {} }; diff --git a/lib/scatterlist.c b/lib/scatterlist.c index 7bc2220fea80..87075f9d759b 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -11,6 +11,7 @@ #include #include #include +#include /** * sg_next - return the next scatterlist entry in a list @@ -1261,6 +1262,67 @@ static ssize_t extract_kvec_to_sg(struct iov_iter *iter, return ret; } +/* + * Extract up to sg_max folios from an SHEAF-type iterator and add them to + * the scatterlist. The pages are not pinned. 
+ */ +static ssize_t extract_sheaf_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) +{ + const struct sheaf *sheaf = iter->sheaf; + struct scatterlist *sg = sgtable->sgl + sgtable->nents; + unsigned int slot = iter->sheaf_slot; + ssize_t ret = 0; + size_t offset = iter->iov_offset; + + BUG_ON(!sheaf); + + if (slot >= sheaf_nr_slots(sheaf)) { + sheaf = sheaf->next; + if (WARN_ON_ONCE(!sheaf)) + return 0; + slot = 0; + } + + do { + struct folio *folio = sheaf_slot_folio(sheaf, slot); + size_t fsize = sheaf_folio_size(sheaf, slot); + + if (offset < fsize) { + size_t part = umin(maxsize - ret, fsize - offset); + + sg_set_page(sg, folio_page(folio, 0), part, offset); + sgtable->nents++; + sg++; + sg_max--; + offset += part; + ret += part; + } + + if (offset >= fsize) { + offset = 0; + slot++; + if (slot >= sheaf_nr_slots(sheaf)) { + if (!sheaf->next) { + WARN_ON_ONCE(ret < iter->count); + break; + } + sheaf = sheaf->next; + slot = 0; + } + } + } while (sg_max > 0 && ret < maxsize); + + iter->sheaf = sheaf; + iter->sheaf_slot = slot; + iter->iov_offset = offset; + iter->count -= ret; + return ret; +} + /* * Extract up to sg_max folios from an XARRAY-type iterator and add them to * the scatterlist. The pages are not pinned. @@ -1323,8 +1385,8 @@ static ssize_t extract_xarray_to_sg(struct iov_iter *iter, * addition of @sg_max elements. * * The pages referred to by UBUF- and IOVEC-type iterators are extracted and - * pinned; BVEC-, KVEC- and XARRAY-type are extracted but aren't pinned; PIPE- - * and DISCARD-type are not supported. + * pinned; BVEC-, KVEC-, SHEAF- and XARRAY-type are extracted but aren't + * pinned; DISCARD-type is not supported. * * No end mark is placed on the scatterlist; that's left to the caller. 
* @@ -1356,6 +1418,9 @@ ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize, case ITER_KVEC: return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max, extraction_flags); + case ITER_SHEAF: + return extract_sheaf_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); case ITER_XARRAY: return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max, extraction_flags); From patchwork Thu Jun 20 17:31:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705996 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16037C2BA1A for ; Thu, 20 Jun 2024 17:33:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9BB778D00CD; Thu, 20 Jun 2024 13:33:56 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 96C0E8D00AF; Thu, 20 Jun 2024 13:33:56 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7BE158D00CD; Thu, 20 Jun 2024 13:33:56 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 59CE28D00AF for ; Thu, 20 Jun 2024 13:33:56 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 159D4A11AA for ; Thu, 20 Jun 2024 17:33:56 +0000 (UTC) X-FDA: 82251964872.06.58ABC89 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf16.hostedemail.com (Postfix) with ESMTP id 38B9A18000C for ; Thu, 20 Jun 2024 17:33:54 +0000 (UTC) Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=YBzH+nyG; spf=pass (imf16.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1718904828; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=mgHCG13EqsOvPOuMm0ZR+XHws0Ty5xoLcrdq43UuEXc=; b=y1H6mnPHkA6ZNdpMlida6b3ZKHUO7Tzf2eHgskgxNPoeBZoIvcrdkA44bvg40G3fsSR0IC qn3M0WjFhUPly1eoYI/UrmZ4nX5t/mh4VArW1pBu+mpKuc9fDAyBsvrsuvJSJM+n9iVcfe q+8hYazv5RYcuPhzUp2gDiGJ9GMFMe8= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=YBzH+nyG; spf=pass (imf16.hostedemail.com: domain of dhowells@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=dhowells@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1718904828; a=rsa-sha256; cv=none; b=3AL6wJ0V/OE2iwhkpDHrniqjT1dOAiBPmlJQNkr5AMPdHW1FlZNgvVd5hbDiLeUVJw9HkC Efd7iBdT/ut5+L/C4K5ySQqEsSJC6cphkLm1wuo4roEQVnGFTGr2q+NRLV+8sDyTjcxbKk oaSd2VNr6+5u+alfhAhQZ2OykdBKoSU= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1718904833; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
From patchwork Thu Jun 20 17:31:32 2024
From: David Howells
Subject: [PATCH 14/17] netfs: Use new sheaf data type and iterator instead of xarray iter
Date: Thu, 20 Jun 2024 18:31:32 +0100
Message-ID: <20240620173137.610345-15-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
Make the netfs write-side routines use the new sheaf struct to hold a
rolling buffer of folios, with the issuer adding folios at the tail and the
collector removing them from the head as they're processed, instead of using
an xarray.

This will allow a subsequent patch to simplify the write collector.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/internal.h      |  9 ++++-
 fs/netfs/misc.c          | 74 +++++++++++++++++++++++++++++++++++
 fs/netfs/objects.c       |  1 +
 fs/netfs/stats.c         |  4 +-
 fs/netfs/write_collect.c | 84 +++++++++++++++++++++-------------------
 fs/netfs/write_issue.c   | 28 +++++++-------
 include/linux/netfs.h    |  4 ++
 7 files changed, 147 insertions(+), 57 deletions(-)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index a44d480a0fa2..fe0974a95152 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -7,6 +7,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -63,6 +64,10 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
 /*
  * misc.c
  */
+int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio,
+			      bool needs_put);
+struct sheaf *netfs_delete_buffer_head(struct netfs_io_request *wreq);
+void netfs_clear_buffer(struct netfs_io_request *rreq);

 /*
  * objects.c
@@ -119,6 +124,7 @@ extern atomic_t netfs_n_wh_write_done;
 extern atomic_t netfs_n_wh_write_failed;
 extern atomic_t netfs_n_wb_lock_skip;
 extern atomic_t netfs_n_wb_lock_wait;
+extern atomic_t netfs_n_sheaf;

 int netfs_stats_show(struct seq_file *m, void *v);

@@ -152,7 +158,8 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
 						loff_t start,
 						enum netfs_io_origin origin);
 void netfs_reissue_write(struct netfs_io_stream *stream,
-			 struct netfs_io_subrequest *subreq);
+			 struct netfs_io_subrequest *subreq,
+			 struct iov_iter *source);
 int netfs_advance_write(struct netfs_io_request *wreq,
 			struct netfs_io_stream *stream,
 			loff_t start, size_t len, bool to_eof);
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 83e644bd518f..b4a0943f3c35 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,80 @@
 #include
 #include "internal.h"

+
+/*
+ * Append a folio to the rolling queue.
+ */ +int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio, + bool needs_put) +{ + struct sheaf *tail = rreq->buffer_tail; + unsigned int order = folio_order(folio); + + if (WARN_ON_ONCE(!rreq->buffer && tail) || + WARN_ON_ONCE(rreq->buffer && !tail)) + return -EIO; + + if (!tail || rreq->buffer_tail_slot >= sheaf_nr_slots(tail)) { + tail = kzalloc(sizeof(*tail), GFP_NOFS); + if (!tail) + return -ENOMEM; + netfs_stat(&netfs_n_sheaf); + tail->prev = rreq->buffer_tail; + if (tail->prev) + tail->prev->next = tail; + rreq->buffer_tail = tail; + if (!rreq->buffer) { + rreq->buffer = tail; + iov_iter_sheaf(&rreq->io_iter, ITER_SOURCE, tail, 0, 0, 0); + } + rreq->buffer_tail_slot = 0; + } + + rreq->io_iter.count += PAGE_SIZE << order; + + sheaf_slot_set(tail, rreq->buffer_tail_slot, sheaf_make_folio(folio, needs_put)); + tail->orders[rreq->buffer_tail_slot] = order; + /* Store the counter after setting the slot. */ + smp_store_release(&rreq->buffer_tail_slot, rreq->buffer_tail_slot + 1); + return 0; +} + +/* + * Delete the head of a rolling queue. + */ +struct sheaf *netfs_delete_buffer_head(struct netfs_io_request *wreq) +{ + struct sheaf *head = wreq->buffer, *next = head->next; + + if (next) + next->prev = NULL; + netfs_stat_d(&netfs_n_sheaf); + kfree(head); + wreq->buffer = next; + return next; +} + +/* + * Clear out a rolling queue. + */ +void netfs_clear_buffer(struct netfs_io_request *rreq) +{ + struct sheaf *p; + + while ((p = rreq->buffer)) { + rreq->buffer = p->next; + for (int slot = 0; slot < sheaf_nr_slots(p); slot++) { + if (!p->slots[slot] || !sheaf_slot_is_folio(p, slot)) + continue; + if (sheaf_slot_is_marked(p, slot)) + folio_put(sheaf_slot_folio(p, slot)); + } + netfs_stat_d(&netfs_n_sheaf); + kfree(p); + } +} + /** * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback * @mapping: The mapping the folio belongs to. diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index f4a642727479..d148a955fa55 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -144,6 +144,7 @@ static void netfs_free_request(struct work_struct *work) } kvfree(rreq->direct_bv); } + netfs_clear_buffer(rreq); if (atomic_dec_and_test(&ictx->io_count)) wake_up_var(&ictx->io_count); diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c index 5fe1c396e24f..84606287c43f 100644 --- a/fs/netfs/stats.c +++ b/fs/netfs/stats.c @@ -41,6 +41,7 @@ atomic_t netfs_n_wh_write_done; atomic_t netfs_n_wh_write_failed; atomic_t netfs_n_wb_lock_skip; atomic_t netfs_n_wb_lock_wait; +atomic_t netfs_n_sheaf; int netfs_stats_show(struct seq_file *m, void *v) { @@ -76,9 +77,10 @@ int netfs_stats_show(struct seq_file *m, void *v) atomic_read(&netfs_n_wh_write), atomic_read(&netfs_n_wh_write_done), atomic_read(&netfs_n_wh_write_failed)); - seq_printf(m, "Objs : rr=%u sr=%u wsc=%u\n", + seq_printf(m, "Objs : rr=%u sr=%u shf=%u wsc=%u\n", atomic_read(&netfs_n_rh_rreq), atomic_read(&netfs_n_rh_sreq), + atomic_read(&netfs_n_sheaf), atomic_read(&netfs_n_wh_wstream_conflict)); seq_printf(m, "WbLock : skip=%u wait=%u\n", atomic_read(&netfs_n_wb_lock_skip), diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index e105ac270090..394761041b4a 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -74,42 +74,6 @@ int netfs_folio_written_back(struct folio *folio) return gcount; } -/* - * Get hold of a folio we have under writeback. We don't want to get the - * refcount on it. 
- */ -static struct folio *netfs_writeback_lookup_folio(struct netfs_io_request *wreq, loff_t pos) -{ - XA_STATE(xas, &wreq->mapping->i_pages, pos / PAGE_SIZE); - struct folio *folio; - - rcu_read_lock(); - - for (;;) { - xas_reset(&xas); - folio = xas_load(&xas); - if (xas_retry(&xas, folio)) - continue; - - if (!folio || xa_is_value(folio)) - kdebug("R=%08x: folio %lx (%llx) not present", - wreq->debug_id, xas.xa_index, pos / PAGE_SIZE); - BUG_ON(!folio || xa_is_value(folio)); - - if (folio == xas_reload(&xas)) - break; - } - - rcu_read_unlock(); - - if (WARN_ONCE(!folio_test_writeback(folio), - "R=%08x: folio %lx is not under writeback\n", - wreq->debug_id, folio->index)) { - trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); - } - return folio; -} - /* * Unlock any folios we've finished with. */ @@ -117,13 +81,25 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, unsigned long long collected_to, unsigned int *notes) { + struct sheaf *sheaf = wreq->buffer; + unsigned int slot = wreq->buffer_head_slot; + + if (slot >= sheaf_nr_slots(sheaf)) { + sheaf = netfs_delete_buffer_head(wreq); + slot = 0; + } + for (;;) { struct folio *folio; struct netfs_folio *finfo; unsigned long long fpos, fend; size_t fsize, flen; - folio = netfs_writeback_lookup_folio(wreq, wreq->cleaned_to); + folio = sheaf_slot_folio(sheaf, slot); + if (WARN_ONCE(!folio_test_writeback(folio), + "R=%08x: folio %lx is not under writeback\n", + wreq->debug_id, folio->index)) + trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); fpos = folio_pos(folio); fsize = folio_size(folio); @@ -148,9 +124,25 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, wreq->cleaned_to = fpos + fsize; *notes |= MADE_PROGRESS; + /* Clean up the head sheaf. If we clear an entire sheaf, then + * we can get rid of it provided it's not also the tail sheaf + * being filled by the issuer. + */ + sheaf_slot_set(sheaf, slot, NULL); + slot++; + if (slot >= sheaf_nr_slots(sheaf)) { + if (READ_ONCE(wreq->buffer_tail) == sheaf) + break; + sheaf = netfs_delete_buffer_head(wreq); + slot = 0; + } + if (fpos + fsize >= collected_to) break; } + + wreq->buffer = sheaf; + wreq->buffer_head_slot = slot; } /* @@ -181,9 +173,12 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) break; if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { + struct iov_iter source = subreq->io_iter; + + iov_iter_revert(&source, subreq->len - source.count); __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq); + netfs_reissue_write(stream, subreq, &source); } } return; @@ -193,6 +188,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, do { struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp; + struct iov_iter source; unsigned long long start, len; size_t part; bool boundary = false; @@ -220,6 +216,14 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, len += to->len; } + /* Determine the set of buffers we're going to use. Each + * subreq gets a subset of a single overall contiguous buffer. + */ + source = from->io_iter; + iov_iter_revert(&source, subreq->len - source.count); + iov_iter_advance(&source, from->transferred); + source.count = len; + /* Work through the sublist. 
*/ subreq = from; list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) { @@ -242,7 +246,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, boundary = true; netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq); + netfs_reissue_write(stream, subreq, &source); if (subreq == to) break; } @@ -309,7 +313,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, boundary = false; } - netfs_reissue_write(stream, subreq); + netfs_reissue_write(stream, subreq, &source); if (!len) break; diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index c87e72a3b16d..8c65ac7c5d62 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -211,9 +211,11 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, * netfs_write_subrequest_terminated() when complete. */ static void netfs_do_issue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq) + struct netfs_io_subrequest *subreq, + struct iov_iter *source) { struct netfs_io_request *wreq = subreq->rreq; + size_t size = subreq->len - subreq->transferred; _enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len); @@ -221,27 +223,20 @@ static void netfs_do_issue_write(struct netfs_io_stream *stream, return netfs_write_subrequest_terminated(subreq, subreq->error, false); // TODO: Use encrypted buffer - if (test_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags)) { - subreq->io_iter = wreq->io_iter; - iov_iter_advance(&subreq->io_iter, - subreq->start + subreq->transferred - wreq->start); - iov_iter_truncate(&subreq->io_iter, - subreq->len - subreq->transferred); - } else { - iov_iter_xarray(&subreq->io_iter, ITER_SOURCE, &wreq->mapping->i_pages, - subreq->start + subreq->transferred, - subreq->len - subreq->transferred); - } + subreq->io_iter = *source; + iov_iter_advance(source, size); + iov_iter_truncate(&subreq->io_iter, size); trace_netfs_sreq(subreq, netfs_sreq_trace_submit); stream->issue_write(subreq); } void netfs_reissue_write(struct netfs_io_stream *stream, - struct netfs_io_subrequest *subreq) + struct netfs_io_subrequest *subreq, + struct iov_iter *source) { __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); - netfs_do_issue_write(stream, subreq); + netfs_do_issue_write(stream, subreq, source); } static void netfs_issue_write(struct netfs_io_request *wreq, @@ -255,7 +250,7 @@ static void netfs_issue_write(struct netfs_io_request *wreq, if (subreq->start + subreq->len > wreq->start + wreq->submitted) WRITE_ONCE(wreq->submitted, subreq->start + subreq->len - wreq->start); - netfs_do_issue_write(stream, subreq); + netfs_do_issue_write(stream, subreq, &wreq->io_iter); } /* @@ -420,6 +415,9 @@ static int netfs_write_folio(struct netfs_io_request *wreq, trace_netfs_folio(folio, netfs_folio_trace_store_plus); } + /* Attach the folio to the rolling buffer. */ + netfs_buffer_append_folio(wreq, folio, false); + /* Move the submission point forward to allow for write-streaming data * not starting at the front of the page. We don't do write-streaming * with the cache as the cache requires DIO alignment. 
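The netfs_do_issue_write() change above leans on a small iov_iter idiom:
copy the shared source iterator into the subrequest, truncate the copy to
the slice size, then advance the shared source past it so that consecutive
subrequests consume consecutive byte ranges of one buffer.  The same pattern
in isolation (function name invented for illustration):

#include <linux/uio.h>

/* Hand out the next @size bytes of @source as an independent iterator in
 * @slice.  The caller serialises calls; each successive call yields the
 * next, non-overlapping range of the same backing buffer.
 */
static void example_take_slice(struct iov_iter *source, struct iov_iter *slice,
			       size_t size)
{
	*slice = *source;		/* same backing store, same position */
	iov_iter_truncate(slice, size);	/* the slice sees only @size bytes */
	iov_iter_advance(source, size);	/* the next slice starts after it */
}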
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index aada40d2182b..dc980f107c37 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -234,6 +234,8 @@ struct netfs_io_request {
 	struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operations */
 #define NR_IO_STREAMS 2 //wreq->nr_io_streams
 	struct netfs_group	*group;		/* Writeback group being written back */
+	struct sheaf		*buffer;	/* Head of I/O buffer */
+	struct sheaf		*buffer_tail;	/* Tail of I/O buffer */
 	struct iov_iter		iter;		/* Unencrypted-side iterator */
 	struct iov_iter		io_iter;	/* I/O (Encrypted-side) iterator */
 	void			*netfs_priv;	/* Private data for the netfs */
@@ -255,6 +257,8 @@ struct netfs_io_request {
 	short			error;		/* 0 or error that occurred */
 	enum netfs_io_origin	origin;		/* Origin of the request */
 	bool			direct_bv_unpin; /* T if direct_bv[] must be unpinned */
+	u8			buffer_head_slot; /* First slot in ->buffer */
+	u8			buffer_tail_slot; /* Next slot in ->buffer_tail */
 	unsigned long long	i_size;		/* Size of the file */
 	unsigned long long	start;		/* Start position */
 	atomic64_t		issued_to;	/* Write issuer folio cursor */
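Taken together, the buffer added by this patch is a single-producer,
single-consumer queue: the issuer only appends at ->buffer_tail via
netfs_buffer_append_folio() and the collector only consumes from ->buffer.
A condensed sketch of the consumer side, with the locking, writeback-flag
handling and the don't-free-the-tail check of the real
netfs_writeback_unlock_folios() omitted (illustrative only):

/* Pop folios off the head of the rolling buffer that lie wholly below
 * @upto, clearing each slot and freeing each sheaf once it has been
 * emptied.
 */
static void example_pop_cleaned_folios(struct netfs_io_request *wreq,
				       unsigned long long upto)
{
	struct sheaf *sheaf = wreq->buffer;
	unsigned int slot = wreq->buffer_head_slot;

	while (sheaf) {
		struct folio *folio = sheaf_slot_folio(sheaf, slot);

		if (folio_pos(folio) + folio_size(folio) > upto)
			break;		/* this folio isn't fully written yet */

		sheaf_slot_set(sheaf, slot, NULL);
		if (++slot >= sheaf_nr_slots(sheaf)) {
			sheaf = netfs_delete_buffer_head(wreq);
			slot = 0;
		}
	}

	wreq->buffer = sheaf;
	wreq->buffer_head_slot = slot;
}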
From patchwork Thu Jun 20 17:31:33 2024
From: David Howells
Subject: [PATCH 15/17] netfs: Simplify the writeback code
Date: Thu, 20 Jun 2024 18:31:33 +0100
Message-ID: <20240620173137.610345-16-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
Use the new sheaf structures to simplify the writeback code.

The problem with referring to the i_pages xarray directly is that we may
have gaps in the sequence of folios we're writing from that we need to skip
when we're removing the writeback mark from the folios we're writing back
from.

At the moment the code tries to deal with this by carefully tracking the
gaps in each writeback stream (e.g. write to server and write to cache) and
divining when there's a gap that spans folios (something that's not helped
by folios not being a consistent size).

Instead, the sheaf buffer contains pointers only to the folios we're dealing
with, has them in ascending order and indicates a gap by placing
non-consecutive folios next to each other.  This makes it possible to track
where we need to clean up to by just keeping track of where we've processed
to on each stream and taking the minimum.
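Roughly, the collection point then falls out of a simple reduction over the
per-stream cursors; a paraphrase of the loop in netfs_collect_write_results()
below (function name invented, the empty-stream bump to the issue cursor
elided):

/* Request-level collection point = minimum of the active streams'
 * collected_to cursors; empty streams are first bumped to the issue
 * cursor so they cannot hold the minimum back.
 */
static unsigned long long example_collection_point(struct netfs_io_request *wreq)
{
	unsigned long long collected_to = ULLONG_MAX;
	int s;

	for (s = 0; s < NR_IO_STREAMS; s++) {
		struct netfs_io_stream *stream = &wreq->io_streams[s];

		if (!stream->active)
			continue;
		collected_to = umin(collected_to, stream->collected_to);
	}
	return collected_to;
}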
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/write_collect.c | 148 ++++++----------------------------- fs/netfs/write_issue.c | 10 +-- include/linux/netfs.h | 1 - include/trace/events/netfs.h | 33 +------- 4 files changed, 31 insertions(+), 161 deletions(-) diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 394761041b4a..5de059ea9f75 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -15,15 +15,11 @@ /* Notes made in the collector */ #define HIT_PENDING 0x01 /* A front op was still pending */ -#define SOME_EMPTY 0x02 /* One of more streams are empty */ -#define ALL_EMPTY 0x04 /* All streams are empty */ -#define MAYBE_DISCONTIG 0x08 /* A front op may be discontiguous (rounded to PAGE_SIZE) */ -#define NEED_REASSESS 0x10 /* Need to loop round and reassess */ -#define REASSESS_DISCONTIG 0x20 /* Reassess discontiguity if contiguity advances */ -#define MADE_PROGRESS 0x40 /* Made progress cleaning up a stream or the folio set */ -#define BUFFERED 0x80 /* The pagecache needs cleaning up */ -#define NEED_RETRY 0x100 /* A front op requests retrying */ -#define SAW_FAILURE 0x200 /* One stream or hit a permanent failure */ +#define NEED_REASSESS 0x02 /* Need to loop round and reassess */ +#define MADE_PROGRESS 0x04 /* Made progress cleaning up a stream or the folio set */ +#define BUFFERED 0x08 /* The pagecache needs cleaning up */ +#define NEED_RETRY 0x10 /* A front op requests retrying */ +#define SAW_FAILURE 0x20 /* One stream or hit a permanent failure */ /* * Successful completion of write of a folio to the server and/or cache. Note @@ -78,10 +74,10 @@ int netfs_folio_written_back(struct folio *folio) * Unlock any folios we've finished with. */ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, - unsigned long long collected_to, unsigned int *notes) { struct sheaf *sheaf = wreq->buffer; + unsigned long long collected_to = wreq->collected_to; unsigned int slot = wreq->buffer_head_slot; if (slot >= sheaf_nr_slots(sheaf)) { @@ -110,12 +106,6 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, trace_netfs_collect_folio(wreq, folio, fend, collected_to); - if (fpos + fsize > wreq->contiguity) { - trace_netfs_collect_contig(wreq, fpos + fsize, - netfs_contig_trace_unlock); - wreq->contiguity = fpos + fsize; - } - /* Unlock any folio we've transferred all of. */ if (collected_to < fend) break; @@ -374,7 +364,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) { struct netfs_io_subrequest *front, *remove; struct netfs_io_stream *stream; - unsigned long long collected_to; + unsigned long long collected_to, issued_to; unsigned int notes; int s; @@ -383,28 +373,21 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) trace_netfs_rreq(wreq, netfs_rreq_trace_collect); reassess_streams: + issued_to = atomic64_read(&wreq->issued_to); smp_rmb(); collected_to = ULLONG_MAX; - if (wreq->origin == NETFS_WRITEBACK) - notes = ALL_EMPTY | BUFFERED | MAYBE_DISCONTIG; - else if (wreq->origin == NETFS_WRITETHROUGH) - notes = ALL_EMPTY | BUFFERED; + if (wreq->origin == NETFS_WRITEBACK || + wreq->origin == NETFS_WRITETHROUGH) + notes = BUFFERED; else - notes = ALL_EMPTY; + notes = 0; /* Remove completed subrequests from the front of the streams and * advance the completion point on each stream. We stop when we hit * something that's in progress. The issuer thread may be adding stuff * to the tail whilst we're doing this. 
- * - * We must not, however, merge in discontiguities that span whole - * folios that aren't under writeback. This is made more complicated - * by the folios in the gap being of unpredictable sizes - if they even - * exist - but we don't want to look them up. */ for (s = 0; s < NR_IO_STREAMS; s++) { - loff_t rstart, rend; - stream = &wreq->io_streams[s]; /* Read active flag before list pointers */ if (!smp_load_acquire(&stream->active)) @@ -416,26 +399,10 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) //_debug("sreq [%x] %llx %zx/%zx", // front->debug_index, front->start, front->transferred, front->len); - /* Stall if there may be a discontinuity. */ - rstart = round_down(front->start, PAGE_SIZE); - if (rstart > wreq->contiguity) { - if (wreq->contiguity > stream->collected_to) { - trace_netfs_collect_gap(wreq, stream, - wreq->contiguity, 'D'); - stream->collected_to = wreq->contiguity; - } - notes |= REASSESS_DISCONTIG; - break; + if (stream->collected_to < front->start) { + trace_netfs_collect_gap(wreq, stream, issued_to, 'F'); + stream->collected_to = front->start; } - rend = round_up(front->start + front->len, PAGE_SIZE); - if (rend > wreq->contiguity) { - trace_netfs_collect_contig(wreq, rend, - netfs_contig_trace_collect); - wreq->contiguity = rend; - if (notes & REASSESS_DISCONTIG) - notes |= NEED_REASSESS; - } - notes &= ~MAYBE_DISCONTIG; /* Stall if the front is still undergoing I/O. */ if (test_bit(NETFS_SREQ_IN_PROGRESS, &front->flags)) { @@ -477,15 +444,6 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) front = list_first_entry_or_null(&stream->subrequests, struct netfs_io_subrequest, rreq_link); stream->front = front; - if (!front) { - unsigned long long jump_to = atomic64_read(&wreq->issued_to); - - if (stream->collected_to < jump_to) { - trace_netfs_collect_gap(wreq, stream, jump_to, 'A'); - stream->collected_to = jump_to; - } - } - spin_unlock(&wreq->lock); netfs_put_subrequest(remove, false, notes & SAW_FAILURE ? @@ -493,11 +451,16 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) netfs_sreq_trace_put_done); } - if (front) - notes &= ~ALL_EMPTY; - else - notes |= SOME_EMPTY; + /* If we have an empty stream, we need to jump it forward + * otherwise the collection point will never advance. + */ + if (!front && issued_to > stream->collected_to) { + trace_netfs_collect_gap(wreq, stream, issued_to, 'E'); + stream->collected_to = issued_to; + } + if (list_empty(&stream->subrequests)) + stream->collected_to = issued_to; if (stream->collected_to < collected_to) collected_to = stream->collected_to; } @@ -505,36 +468,6 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) if (collected_to != ULLONG_MAX && collected_to > wreq->collected_to) wreq->collected_to = collected_to; - /* If we have an empty stream, we need to jump it forward over any gap - * otherwise the collection point will never advance. - * - * Note that the issuer always adds to the stream with the lowest - * so-far submitted start, so if we see two consecutive subreqs in one - * stream with nothing between then in another stream, then the second - * stream has a gap that can be jumped. 
- */ - if (notes & SOME_EMPTY) { - unsigned long long jump_to = wreq->start + READ_ONCE(wreq->submitted); - - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->active && - stream->front && - stream->front->start < jump_to) - jump_to = stream->front->start; - } - - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->active && - !stream->front && - stream->collected_to < jump_to) { - trace_netfs_collect_gap(wreq, stream, jump_to, 'B'); - stream->collected_to = jump_to; - } - } - } - for (s = 0; s < NR_IO_STREAMS; s++) { stream = &wreq->io_streams[s]; if (stream->active) @@ -545,43 +478,14 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) /* Unlock any folios that we have now finished with. */ if (notes & BUFFERED) { - unsigned long long clean_to = min(wreq->collected_to, wreq->contiguity); - - if (wreq->cleaned_to < clean_to) - netfs_writeback_unlock_folios(wreq, clean_to, ¬es); + if (wreq->cleaned_to < wreq->collected_to) + netfs_writeback_unlock_folios(wreq, ¬es); } else { wreq->cleaned_to = wreq->collected_to; } // TODO: Discard encryption buffers - /* If all streams are discontiguous with the last folio we cleared, we - * may need to skip a set of folios. - */ - if ((notes & (MAYBE_DISCONTIG | ALL_EMPTY)) == MAYBE_DISCONTIG) { - unsigned long long jump_to = ULLONG_MAX; - - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->active && stream->front && - stream->front->start < jump_to) - jump_to = stream->front->start; - } - - trace_netfs_collect_contig(wreq, jump_to, netfs_contig_trace_jump); - wreq->contiguity = jump_to; - wreq->cleaned_to = jump_to; - wreq->collected_to = jump_to; - for (s = 0; s < NR_IO_STREAMS; s++) { - stream = &wreq->io_streams[s]; - if (stream->collected_to < jump_to) - stream->collected_to = jump_to; - } - //cond_resched(); - notes |= MADE_PROGRESS; - goto reassess_streams; - } - if (notes & NEED_RETRY) goto need_retry; if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 8c65ac7c5d62..fb92dd8160f3 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -105,7 +105,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags)) fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx)); - wreq->contiguity = wreq->start; wreq->cleaned_to = wreq->start; INIT_WORK(&wreq->work, netfs_write_collection_worker); @@ -247,9 +246,6 @@ static void netfs_issue_write(struct netfs_io_request *wreq, if (!subreq) return; stream->construct = NULL; - - if (subreq->start + subreq->len > wreq->start + wreq->submitted) - WRITE_ONCE(wreq->submitted, subreq->start + subreq->len - wreq->start); netfs_do_issue_write(stream, subreq, &wreq->io_iter); } @@ -463,9 +459,9 @@ static int netfs_write_folio(struct netfs_io_request *wreq, break; stream = &wreq->io_streams[choose_s]; + atomic64_set(&wreq->issued_to, fpos + stream->submit_off); part = netfs_advance_write(wreq, stream, fpos + stream->submit_off, stream->submit_len, to_eof); - atomic64_set(&wreq->issued_to, fpos + stream->submit_off); stream->submit_off += part; stream->submit_max_len -= part; if (part > stream->submit_len) @@ -524,10 +520,10 @@ int netfs_writepages(struct address_space *mapping, netfs_stat(&netfs_n_wh_writepages); do { - _debug("wbiter %lx %llx", folio->index, wreq->start + wreq->submitted); + 
_debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to)); /* It appears we don't have to handle cyclic writeback wrapping. */ - WARN_ON_ONCE(wreq && folio_pos(folio) < wreq->start + wreq->submitted); + WARN_ON_ONCE(wreq && folio_pos(folio) < atomic64_read(&wreq->issued_to)); if (netfs_folio_group(folio) != NETFS_FOLIO_COPY_TO_CACHE && unlikely(!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))) { diff --git a/include/linux/netfs.h b/include/linux/netfs.h index dc980f107c37..b880687bb932 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -262,7 +262,6 @@ struct netfs_io_request { unsigned long long i_size; /* Size of the file */ unsigned long long start; /* Start position */ atomic64_t issued_to; /* Write issuer folio cursor */ - unsigned long long contiguity; /* Tracking for gaps in the writeback sequence */ unsigned long long collected_to; /* Point we've collected to */ unsigned long long cleaned_to; /* Position we've cleaned folios to */ pgoff_t no_unlock_folio; /* Don't unlock this folio after read */ diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index db603a4e22cd..64238a64ae5f 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -512,33 +512,6 @@ TRACE_EVENT(netfs_collect, __entry->start + __entry->len) ); -TRACE_EVENT(netfs_collect_contig, - TP_PROTO(const struct netfs_io_request *wreq, unsigned long long to, - enum netfs_collect_contig_trace type), - - TP_ARGS(wreq, to, type), - - TP_STRUCT__entry( - __field(unsigned int, wreq) - __field(enum netfs_collect_contig_trace, type) - __field(unsigned long long, contiguity) - __field(unsigned long long, to) - ), - - TP_fast_assign( - __entry->wreq = wreq->debug_id; - __entry->type = type; - __entry->contiguity = wreq->contiguity; - __entry->to = to; - ), - - TP_printk("R=%08x %llx -> %llx %s", - __entry->wreq, - __entry->contiguity, - __entry->to, - __print_symbolic(__entry->type, netfs_collect_contig_traces)) - ); - TRACE_EVENT(netfs_collect_sreq, TP_PROTO(const struct netfs_io_request *wreq, const struct netfs_io_subrequest *subreq), @@ -610,7 +583,6 @@ TRACE_EVENT(netfs_collect_state, __field(unsigned int, notes ) __field(unsigned long long, collected_to ) __field(unsigned long long, cleaned_to ) - __field(unsigned long long, contiguity ) ), TP_fast_assign( @@ -618,12 +590,11 @@ TRACE_EVENT(netfs_collect_state, __entry->notes = notes; __entry->collected_to = collected_to; __entry->cleaned_to = wreq->cleaned_to; - __entry->contiguity = wreq->contiguity; ), - TP_printk("R=%08x cto=%llx fto=%llx ctg=%llx n=%x", + TP_printk("R=%08x col=%llx cln=%llx n=%x", __entry->wreq, __entry->collected_to, - __entry->cleaned_to, __entry->contiguity, + __entry->cleaned_to, __entry->notes) ); From patchwork Thu Jun 20 17:31:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13705998 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5F7CC27C79 for ; Thu, 20 Jun 2024 17:34:07 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6D7E88D00CF; Thu, 20 Jun 2024 13:34:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 685AF8D00AF; Thu, 20 Jun 2024 13:34:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 
From patchwork Thu Jun 20 17:31:34 2024
From: David Howells
Subject: [PATCH 16/17] afs: Make read subreqs async
Date: Thu, 20 Jun 2024 18:31:34 +0100
Message-ID: <20240620173137.610345-17-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>

Perform AFS read subrequests in a work item rather than in the calling
thread.

For normal buffered reads, this will allow the calling thread to copy data
from the pagecache to the application at the same time as the demarshalling
thread is shovelling data from skbuffs into the pagecache.

This will also allow the RA mark to trigger a new read before we've finished
shovelling the data from the current one.

Note: This would be a bit safer if the FS.FetchData RPC ops returned the
metadata (including the data version number) before returning the data.
This would allow me to flush the pagecache before installing the new data.
In future, it may be possible to asynchronously flush the pagecache either
side of the region being read.

Signed-off-by: David Howells
cc: Marc Dionne
cc: Jeff Layton
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/afs/file.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index c3f0c45ae9a9..addb106dba4c 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -304,8 +304,9 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
 	return afs_do_sync_operation(op);
 }

-static void afs_issue_read(struct netfs_io_subrequest *subreq)
+static void afs_read_worker(struct work_struct *work)
 {
+	struct netfs_io_subrequest *subreq = container_of(work, struct netfs_io_subrequest, work);
 	struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
 	struct afs_read *fsreq;

@@ -324,6 +325,12 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
 	afs_put_read(fsreq);
 }

+static void afs_issue_read(struct netfs_io_subrequest *subreq)
+{
+	INIT_WORK(&subreq->work, afs_read_worker);
+	queue_work(system_long_wq, &subreq->work);
+}
+
 static int afs_symlink_read_folio(struct file *file, struct folio *folio)
 {
 	struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host);
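The conversion is the usual deferral pattern: embed a work_struct in the
per-operation object, INIT_WORK() it at issue time and recover the object
with container_of() in the worker, which then runs in process context and
may block.  The same pattern in isolation (all names invented for
illustration):

#include <linux/workqueue.h>
#include <linux/printk.h>

struct example_op {
	struct work_struct work;
	int id;
};

static void example_worker(struct work_struct *work)
{
	struct example_op *op = container_of(work, struct example_op, work);

	/* Runs on a kworker thread in process context; it may sleep and may
	 * block on network I/O, which is why the long-running workqueue is
	 * used for the AFS fetch above.
	 */
	pr_info("deferred op %d\n", op->id);
}

static void example_issue(struct example_op *op)
{
	INIT_WORK(&op->work, example_worker);
	queue_work(system_long_wq, &op->work);
}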
From patchwork Thu Jun 20 17:31:35 2024
From: David Howells
Subject: [PATCH 17/17] netfs: Speed up buffered reading
Date: Thu, 20 Jun 2024 18:31:35 +0100
Message-ID: <20240620173137.610345-18-dhowells@redhat.com>
In-Reply-To: <20240620173137.610345-1-dhowells@redhat.com>
References: <20240620173137.610345-1-dhowells@redhat.com>
[!!!] NOTE: THIS PATCH IS INCOMPLETE!  Buffered reading mostly works, but
unbuffered/direct reads don't.  Further, I have not yet re-implemented
read-retry or tested local caching.

Improve the efficiency of buffered reads in a number of ways:

 (1) Overhaul the algorithm in general so that it's a lot more compact and
     split the read submission code between buffered and unbuffered
     versions.  The unbuffered version can be vastly simplified.

 (2) Get rid of ->clamp_length(), instead calling into ->issue_read() and
     having it return the size of the slice issued.  This gets rid of a
     function pointer.

 (3) After determining the size of the slice it wants, ->issue_read() must
     call netfs_prepare_read_iterator() to load more folios into the buffer
     (if necessary) and to set the iterators.  This allows some of the work
     to be done whilst I/O is in progress.

 (4) netfs_subreq_terminated(), which was used to report termination, is
     replaced with a function, netfs_read_subreq_progress(), that can be
     used to report incomplete progress as well as termination.  afs can
     then use this to start unlocking pages as it fills them in, as its
     transport has a packetised approach to RPCs rather than the complete
     smaller messages favoured by, say, cifs.

 (5) Read-result collection is handed off to a work queue rather than being
     done in the I/O thread.  Multiple subrequests can be processed
     simultaneously.

 (6) When a subrequest is collected, any folios it fully spans are collected
     and "spare" data on either side is donated to either the previous or
     the next subrequest in the sequence.

Notes:

 (*) Readahead expansion is currently not working and needs investigation.

 (*) Unbuffered/direct-I/O reads don't work and need debugging.

 (*) Caching is untested and may not work.

 (*) Failed or partial reads need retrying, but aren't yet.

 (*) RDMA with cifs does appear to work, both with SIW and RXE.
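Under the revised contract sketched in (2)-(4), a filesystem's ->issue_read()
sizes its own slice, asks netfs to prepare the iterator, performs the I/O
against subreq->io_iter and reports progress itself.  A bare-bones skeleton
modelled on the 9p and afs conversions below (EXAMPLEFS_RSIZE and
examplefs_read_rpc() are invented placeholders, not part of the patch):

static ssize_t examplefs_issue_read(struct netfs_io_subrequest *subreq)
{
	ssize_t len, got;
	int err = 0;

	/* Decide the slice size and let netfs load that much of the buffer
	 * and set up subreq->io_iter.
	 */
	len = netfs_prepare_read_iterator(subreq, EXAMPLEFS_RSIZE, 0);
	if (len < 0)
		return len;

	/* Perform the transfer into subreq->io_iter; the stand-in RPC helper
	 * is assumed to return bytes read or a negative error.
	 */
	got = examplefs_read_rpc(subreq->rreq->netfs_priv,
				 subreq->start + subreq->transferred,
				 &subreq->io_iter);
	if (got < 0)
		err = got;
	else
		subreq->transferred += got;

	/* Report progress/termination; a transport that fills the buffer
	 * piecemeal can call this earlier with -EINPROGRESS, as afs does.
	 */
	netfs_read_subreq_progress(subreq, err, false);
	return len;
}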
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/9p/vfs_addr.c | 13 +- fs/afs/file.c | 18 +- fs/afs/fsclient.c | 9 +- fs/afs/yfsclient.c | 9 +- fs/ceph/addr.c | 72 ++--- fs/netfs/Makefile | 2 +- fs/netfs/buffered_read.c | 510 +++++++++++++++++++++------------ fs/netfs/direct_read.c | 99 ++++++- fs/netfs/internal.h | 6 - fs/netfs/io.c | 528 +---------------------------------- fs/netfs/iterator.c | 50 ++++ fs/netfs/main.c | 7 +- fs/netfs/objects.c | 1 - fs/netfs/read_collect.c | 450 +++++++++++++++++++++++++++++ fs/netfs/write_issue.c | 4 - fs/nfs/fscache.c | 31 +- fs/nfs/fscache.h | 7 +- fs/smb/client/cifssmb.c | 6 +- fs/smb/client/file.c | 69 +++-- fs/smb/client/smb2pdu.c | 8 +- include/linux/netfs.h | 19 +- include/trace/events/netfs.h | 82 +++++- 22 files changed, 1171 insertions(+), 829 deletions(-) create mode 100644 fs/netfs/read_collect.c diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index a97ceb105cd8..829a8215b870 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -64,12 +64,17 @@ static void v9fs_issue_write(struct netfs_io_subrequest *subreq) * v9fs_issue_read - Issue a read from 9P * @subreq: The read to make */ -static void v9fs_issue_read(struct netfs_io_subrequest *subreq) +static ssize_t v9fs_issue_read(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; struct p9_fid *fid = rreq->netfs_priv; + ssize_t len; int total, err; + len = netfs_prepare_read_iterator(subreq, ULONG_MAX, 0); + if (len < 0) + return len; + total = p9_client_read(fid, subreq->start + subreq->transferred, &subreq->io_iter, &err); @@ -77,7 +82,11 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq) * cache won't be on server and is zeroes */ __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - netfs_subreq_terminated(subreq, err ?: total, false); + if (!err) + subreq->transferred += total; + + netfs_read_subreq_progress(subreq, err, false); + return len; } /** diff --git a/fs/afs/file.c b/fs/afs/file.c index addb106dba4c..54805a4c6d5c 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -243,7 +243,7 @@ static void afs_fetch_data_notify(struct afs_operation *op) req->error = error; if (subreq) { __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - netfs_subreq_terminated(subreq, error ?: req->actual_len, false); + netfs_read_subreq_progress(subreq, error, false); req->subreq = NULL; } else if (req->done) { req->done(req); @@ -293,7 +293,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req) op = afs_alloc_operation(req->key, vnode->volume); if (IS_ERR(op)) { if (req->subreq) - netfs_subreq_terminated(req->subreq, PTR_ERR(op), false); + netfs_read_subreq_progress(req->subreq, PTR_ERR(op), false); return PTR_ERR(op); } @@ -312,7 +312,7 @@ static void afs_read_worker(struct work_struct *work) fsreq = afs_alloc_read(GFP_NOFS); if (!fsreq) - return netfs_subreq_terminated(subreq, -ENOMEM, false); + return netfs_read_subreq_progress(subreq, -ENOMEM, false); fsreq->subreq = subreq; fsreq->pos = subreq->start + subreq->transferred; @@ -325,10 +325,16 @@ static void afs_read_worker(struct work_struct *work) afs_put_read(fsreq); } -static void afs_issue_read(struct netfs_io_subrequest *subreq) +static ssize_t afs_issue_read(struct netfs_io_subrequest *subreq) { - INIT_WORK(&subreq->work, afs_read_worker); - queue_work(system_long_wq, &subreq->work); + ssize_t len; + + len = netfs_prepare_read_iterator(subreq, ULONG_MAX, 0); + if (len > 0) { + INIT_WORK(&subreq->work, afs_read_worker); + 
queue_work(system_long_wq, &subreq->work); + } + return len; } static int afs_symlink_read_folio(struct file *file, struct folio *folio) diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c index 79cd30775b7a..55165dc59039 100644 --- a/fs/afs/fsclient.c +++ b/fs/afs/fsclient.c @@ -304,6 +304,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call) struct afs_vnode_param *vp = &op->file[0]; struct afs_read *req = op->fetch.req; const __be32 *bp; + size_t count_before; int ret; _enter("{%u,%zu,%zu/%llu}", @@ -345,10 +346,14 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call) /* extract the returned data */ case 2: - _debug("extract data %zu/%llu", - iov_iter_count(call->iter), req->actual_len); + count_before = call->iov_len; + _debug("extract data %zu/%llu", count_before, req->actual_len); ret = afs_extract_data(call, true); + if (req->subreq) { + req->subreq->transferred += count_before - call->iov_len; + netfs_read_subreq_progress(req->subreq, -EINPROGRESS, false); + } if (ret < 0) return ret; diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c index f521e66d3bf6..33b1717e56a6 100644 --- a/fs/afs/yfsclient.c +++ b/fs/afs/yfsclient.c @@ -355,6 +355,7 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call) struct afs_vnode_param *vp = &op->file[0]; struct afs_read *req = op->fetch.req; const __be32 *bp; + size_t count_before; int ret; _enter("{%u,%zu, %zu/%llu}", @@ -391,10 +392,14 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call) /* extract the returned data */ case 2: - _debug("extract data %zu/%llu", - iov_iter_count(call->iter), req->actual_len); + count_before = call->iov_len; + _debug("extract data %zu/%llu", count_before, req->actual_len); ret = afs_extract_data(call, true); + if (req->subreq) { + req->subreq->transferred += count_before - call->iov_len; + netfs_read_subreq_progress(req->subreq, -EINPROGRESS, false); + } if (ret < 0) return ret; diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index 8c16bc5250ef..7024dc58b363 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -205,21 +205,6 @@ static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq) } } -static bool ceph_netfs_clamp_length(struct netfs_io_subrequest *subreq) -{ - struct inode *inode = subreq->rreq->inode; - struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); - struct ceph_inode_info *ci = ceph_inode(inode); - u64 objno, objoff; - u32 xlen; - - /* Truncate the extent at the end of the current block */ - ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len, - &objno, &objoff, &xlen); - subreq->len = min(xlen, fsc->mount_options->rsize); - return true; -} - static void finish_netfs_read(struct ceph_osd_request *req) { struct inode *inode = req->r_inode; @@ -263,7 +248,11 @@ static void finish_netfs_read(struct ceph_osd_request *req) calc_pages_for(osd_data->alignment, osd_data->length), false); } - netfs_subreq_terminated(subreq, err, false); + if (err > 0) { + subreq->transferred = err; + err = 0; + } + netfs_read_subreq_progress(subreq, err, false); iput(req->r_inode); ceph_dec_osd_stopping_blocker(fsc->mdsc); } @@ -277,7 +266,6 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq) struct ceph_mds_request *req; struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb); struct ceph_inode_info *ci = ceph_inode(inode); - struct iov_iter iter; ssize_t err = 0; size_t len; int mode; @@ -312,18 +300,21 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq) } len = min_t(size_t, iinfo->inline_len 
- subreq->start, subreq->len); - iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, subreq->start, len); - err = copy_to_iter(iinfo->inline_data + subreq->start, len, &iter); - if (err == 0) + err = copy_to_iter(iinfo->inline_data + subreq->start, len, &subreq->io_iter); + if (err == 0) { err = -EFAULT; + } else { + subreq->transferred += err; + err = 0; + } ceph_mdsc_put_request(req); out: - netfs_subreq_terminated(subreq, err, false); + netfs_read_subreq_progress(subreq, err, false); return true; } -static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) +static ssize_t ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; struct inode *inode = rreq->inode; @@ -332,9 +323,11 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) struct ceph_client *cl = fsc->client; struct ceph_osd_request *req = NULL; struct ceph_vino vino = ceph_vino(inode); - struct iov_iter iter; - int err = 0; - u64 len = subreq->len; + ssize_t slice = subreq->len; + int err; + u64 objno, objoff; + u32 xlen; + u64 len; bool sparse = IS_ENCRYPTED(inode) || ceph_test_mount_opt(fsc, SPARSEREAD); u64 off = subreq->start; int extent_cnt; @@ -344,9 +337,24 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) goto out; } - if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) - return; + /* Truncate the extent at the end of the current block */ + ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len, + &objno, &objoff, &xlen); + xlen = umin(xlen, fsc->mount_options->rsize); + slice = netfs_prepare_read_iterator(subreq, xlen, 0); + if (slice < 0) + return slice; + + if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) + return slice; + + // TODO: This rounding here is slightly dodgy. It *should* work, for + // now, as the cache only deals in blocks that are a multiple of + // PAGE_SIZE and fscrypt blocks are at most PAGE_SIZE. What needs to + // happen is for the fscrypt driving to be moved into netfslib and the + // data in the cache also to be stored encrypted. + len = slice; ceph_fscrypt_adjust_off_and_len(inode, &off, &len); req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino, @@ -369,8 +377,6 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) doutc(cl, "%llx.%llx pos=%llu orig_len=%zu len=%llu\n", ceph_vinop(inode), subreq->start, subreq->len, len); - iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, subreq->start, len); - /* * FIXME: For now, use CEPH_OSD_DATA_TYPE_PAGES instead of _ITER for * encrypted inodes. 
We'd need infrastructure that handles an iov_iter @@ -382,7 +388,7 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) struct page **pages; size_t page_off; - err = iov_iter_get_pages_alloc2(&iter, &pages, len, &page_off); + err = iov_iter_get_pages_alloc2(&subreq->io_iter, &pages, len, &page_off); if (err < 0) { doutc(cl, "%llx.%llx failed to allocate pages, %d\n", ceph_vinop(inode), err); @@ -397,7 +403,7 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) osd_req_op_extent_osd_data_pages(req, 0, pages, len, 0, false, false); } else { - osd_req_op_extent_osd_iter(req, 0, &iter); + osd_req_op_extent_osd_iter(req, 0, &subreq->io_iter); } if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) { err = -EIO; @@ -412,8 +418,9 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) out: ceph_osdc_put_request(req); if (err) - netfs_subreq_terminated(subreq, err, false); + netfs_read_subreq_progress(subreq, err, false); doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err); + return err < 0 ? err : slice; } static int ceph_init_request(struct netfs_io_request *rreq, struct file *file) @@ -493,7 +500,6 @@ const struct netfs_request_ops ceph_netfs_ops = { .free_request = ceph_netfs_free_request, .issue_read = ceph_netfs_issue_read, .expand_readahead = ceph_netfs_expand_readahead, - .clamp_length = ceph_netfs_clamp_length, .check_write_begin = ceph_netfs_check_write_begin, }; diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index 8e6781e0b10b..0bd2996a2a77 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -5,12 +5,12 @@ netfs-y := \ buffered_write.o \ direct_read.o \ direct_write.o \ - io.o \ iterator.o \ locking.o \ main.o \ misc.o \ objects.o \ + read_collect.o \ write_collect.o \ write_issue.o diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index a6bb03bea920..aabe79df765a 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -9,126 +9,6 @@ #include #include "internal.h" -/* - * Unlock the folios in a read operation. We need to set PG_writeback on any - * folios we're going to write back before we unlock them. - * - * Note that if the deprecated NETFS_RREQ_USE_PGPRIV2 is set then we use - * PG_private_2 and do a direct write to the cache from here instead. - */ -void netfs_rreq_unlock_folios(struct netfs_io_request *rreq) -{ - struct netfs_io_subrequest *subreq; - struct netfs_folio *finfo; - struct folio *folio; - pgoff_t start_page = rreq->start / PAGE_SIZE; - pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; - size_t account = 0; - bool subreq_failed = false; - - XA_STATE(xas, &rreq->mapping->i_pages, start_page); - - if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) { - __clear_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags); - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); - } - } - - /* Walk through the pagecache and the I/O request lists simultaneously. - * We may have a mixture of cached and uncached sections and we only - * really want to write out the uncached sections. This is slightly - * complicated by the possibility that we might have huge pages with a - * mixture inside. 
- */ - subreq = list_first_entry(&rreq->subrequests, - struct netfs_io_subrequest, rreq_link); - subreq_failed = (subreq->error < 0); - - trace_netfs_rreq(rreq, netfs_rreq_trace_unlock); - - rcu_read_lock(); - xas_for_each(&xas, folio, last_page) { - loff_t pg_end; - bool pg_failed = false; - bool wback_to_cache = false; - bool folio_started = false; - - if (xas_retry(&xas, folio)) - continue; - - pg_end = folio_pos(folio) + folio_size(folio) - 1; - - for (;;) { - loff_t sreq_end; - - if (!subreq) { - pg_failed = true; - break; - } - if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { - if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, - &subreq->flags)) { - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); - folio_start_private_2(folio); - folio_started = true; - } - } else { - wback_to_cache |= - test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); - } - pg_failed |= subreq_failed; - sreq_end = subreq->start + subreq->len - 1; - if (pg_end < sreq_end) - break; - - account += subreq->transferred; - if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { - subreq = list_next_entry(subreq, rreq_link); - subreq_failed = (subreq->error < 0); - } else { - subreq = NULL; - subreq_failed = false; - } - - if (pg_end == sreq_end) - break; - } - - if (!pg_failed) { - flush_dcache_folio(folio); - finfo = netfs_folio_info(folio); - if (finfo) { - trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); - if (finfo->netfs_group) - folio_change_private(folio, finfo->netfs_group); - else - folio_detach_private(folio); - kfree(finfo); - } - folio_mark_uptodate(folio); - if (wback_to_cache && !WARN_ON_ONCE(folio_get_private(folio) != NULL)) { - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); - folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); - filemap_dirty_folio(folio->mapping, folio); - } - } - - if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { - if (folio->index == rreq->no_unlock_folio && - test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) - _debug("no unlock"); - else - folio_unlock(folio); - } - } - rcu_read_unlock(); - - task_io_account_read(account); - if (rreq->netfs_ops->done) - rreq->netfs_ops->done(rreq); -} - static void netfs_cache_expand_readahead(struct netfs_io_request *rreq, unsigned long long *_start, unsigned long long *_len, @@ -183,6 +63,278 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx)); } +/* + * Decant the list of folios to read into a rolling buffer. + */ +static size_t netfs_load_buffer_from_ra(struct netfs_io_request *rreq, + struct sheaf *sheaf) +{ + unsigned int order, nr; + size_t size = 0; + + nr = __readahead_batch(rreq->ractl, (struct page **)sheaf->slots, + ARRAY_SIZE(sheaf->slots)); + for (int i = 0; i < nr; i++) { + struct folio *folio = sheaf_slot_folio(sheaf, i); + + trace_netfs_folio(folio, netfs_folio_trace_read); + order = folio_order(folio); + sheaf->orders[i] = order; + size += PAGE_SIZE << order; + } + + for (int i = nr; i < ARRAY_SIZE(sheaf->slots); i++) + sheaf_slot_set(sheaf, i, NULL); + + return size; +} + +/** + * netfs_prepare_read_iterator - Prepare the subreq iterator for I/O + * @subreq: The subrequest to be set up + * @rsize: Preferred (and maximum) size + * @max_segs: Maximum number of DMA segments (or 0) + * + * Prepare the I/O iterator representing the read buffer on a subrequest for + * the filesystem to use for I/O (it can be passed directly to a socket). 
This + * is intended to be called from the ->issue_read() method once the filesystem + * has trimmed the request to the size it wants. + * + * Returns the limited size if successful and -ENOMEM if insufficient memory + * available. + * + * [!] NOTE: This must be run in the same thread as ->issue_read() was called + * in as we access the readahead_control struct. + */ +ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq, size_t rsize, + unsigned int max_segs) +{ + struct netfs_io_request *rreq = subreq->rreq; + + rsize = umin(subreq->len, rsize); + + if (rreq->ractl) { + /* If we don't have sufficient folios in the rolling buffer, + * extract a sheaf's worth from the readahead region at a time + * into the buffer. Note that this acquires a ref on each page + * that we will need to release later - but we don't want to do + * that until after we've started the I/O. + */ + while (rreq->submitted < subreq->start + rsize) { + struct sheaf *tail = rreq->buffer_tail, *new; + size_t added; + + new = kmalloc(sizeof(*new), GFP_NOFS); + if (!new) + return -ENOMEM; + netfs_stat(&netfs_n_sheaf); + new->next = NULL; + new->prev = tail; + tail->next = new; + rreq->buffer_tail = new; + added = netfs_load_buffer_from_ra(rreq, new); + rreq->iter.count += added; + rreq->submitted += added; + } + } + + subreq->len = rsize; + if (unlikely(max_segs)) { + size_t limit = netfs_limit_iter(&rreq->iter, 0, rsize, max_segs); + + if (limit < rsize) { + subreq->len = limit; + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); + } + } + + subreq->io_iter = rreq->iter; + + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); + + if (iov_iter_is_sheaf(&subreq->io_iter)) { + subreq->curr_sheaf = (struct sheaf *)subreq->io_iter.sheaf; + subreq->curr_sheaf_slot = subreq->io_iter.sheaf_slot; + subreq->curr_folio_order = subreq->curr_sheaf->orders[subreq->curr_sheaf_slot]; + } + + iov_iter_truncate(&subreq->io_iter, subreq->len); + iov_iter_advance(&rreq->iter, subreq->len); + return subreq->len; +} +EXPORT_SYMBOL(netfs_prepare_read_iterator); + +static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq, + loff_t i_size) +{ + struct netfs_cache_resources *cres = &rreq->cache_resources; + + if (!cres->ops) + return NETFS_DOWNLOAD_FROM_SERVER; + return cres->ops->prepare_read(subreq, i_size); +} + +static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, + bool was_async) +{ + struct netfs_io_subrequest *subreq = priv; + + if (transferred_or_error < 0) + netfs_read_subreq_progress(subreq, transferred_or_error, was_async); + + if (transferred_or_error > 0) + subreq->transferred += transferred_or_error; + netfs_read_subreq_progress(subreq, 0, was_async); +} + +/* + * Issue a read against the cache. + * - Eats the caller's ref on subreq. + */ +static ssize_t netfs_read_cache_to_pagecache(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq) +{ + struct netfs_cache_resources *cres = &rreq->cache_resources; + ssize_t slice = subreq->len; + + netfs_stat(&netfs_n_rh_read); + cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_IGNORE, + netfs_cache_read_terminated, subreq); + return slice; +} + +/* + * Perform a read to the pagecache from a series of sources of different types, + * slicing up the region to be read according to available cache blocks and + * network rsize. 
+ */ +static int netfs_read_to_pagecache(struct netfs_io_request *rreq) +{ + struct netfs_inode *ictx = netfs_inode(rreq->inode); + unsigned long long start = rreq->start; + ssize_t size = rreq->len; + + /* Chop the readahead request up into subrequests. */ + do { + struct netfs_io_subrequest *subreq; + enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER; + ssize_t slice; + + subreq = netfs_alloc_subrequest(rreq); + if (!subreq) + return -ENOMEM; + + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; + subreq->start = start; + subreq->len = size; + + spin_lock(&rreq->lock); + list_add_tail(&subreq->rreq_link, &rreq->subrequests); + subreq->prev_donated = rreq->prev_donated; + rreq->prev_donated = 0; + trace_netfs_sreq(subreq, netfs_sreq_trace_added); + spin_unlock(&rreq->lock); + + source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size); + if (source == NETFS_DOWNLOAD_FROM_SERVER) { + if (subreq->start >= ictx->zero_point) { + subreq->source = source = NETFS_FILL_WITH_ZEROES; + goto fill_with_zeroes; + } + + if (subreq->len > ictx->zero_point - subreq->start) + subreq->len = ictx->zero_point - subreq->start; + if (subreq->len > rreq->i_size - subreq->start) + subreq->len = rreq->i_size - subreq->start; + + netfs_stat(&netfs_n_rh_download); + slice = rreq->netfs_ops->issue_read(subreq); + if (slice <= 0) + return slice; + goto done; + } + + fill_with_zeroes: + if (source == NETFS_FILL_WITH_ZEROES) { + subreq->source = NETFS_FILL_WITH_ZEROES; + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + netfs_stat(&netfs_n_rh_zero); + slice = subreq->len; + subreq->transferred = slice; + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + netfs_read_subreq_progress(subreq, 0, false); + goto done; + } + + if (source == NETFS_READ_FROM_CACHE) { + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + slice = netfs_read_cache_to_pagecache(rreq, subreq); + goto done; + } + + if (source == NETFS_INVALID_READ) + break; + + done: + size -= slice; + start += slice; + cond_resched(); + } while (size > 0); + + return 0; +} + +/* + * Set up the initial sheaf of buffer folios in the rolling buffer and set the + * iterator to refer to it. + */ +static int netfs_prime_buffer(struct netfs_io_request *rreq) +{ + struct sheaf *sheaf; + size_t added; + + sheaf = kmalloc(sizeof(*sheaf), GFP_KERNEL); + if (!sheaf) + return -ENOMEM; + netfs_stat(&netfs_n_sheaf); + sheaf->next = NULL; + sheaf->prev = NULL; + rreq->buffer = sheaf; + rreq->buffer_tail = sheaf; + rreq->submitted = rreq->start; + iov_iter_sheaf(&rreq->iter, ITER_DEST, sheaf, 0, 0, 0); + + added = netfs_load_buffer_from_ra(rreq, sheaf); + rreq->iter.count += added; + rreq->submitted += added; + return 0; +} + +/* + * Drop the ref on each folio that we inherited from the VM readahead code. We + * still have the folio locks to pin the page until we complete the I/O. 
+ */ +static void netfs_put_ra_refs(struct sheaf *sheaf) +{ + struct folio_batch fbatch; + + folio_batch_init(&fbatch); + while (sheaf) { + for (unsigned int slot = 0; slot < sheaf_nr_slots(sheaf); slot++) { + if (!sheaf->slots[slot]) + continue; + trace_netfs_folio(sheaf_slot_folio(sheaf, slot), + netfs_folio_trace_read_put); + if (!folio_batch_add(&fbatch, sheaf_slot_folio(sheaf, slot))) + folio_batch_release(&fbatch); + } + sheaf = sheaf->next; + } + + folio_batch_release(&fbatch); +} + /** * netfs_readahead - Helper to manage a read request * @ractl: The description of the readahead request @@ -201,22 +353,17 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in void netfs_readahead(struct readahead_control *ractl) { struct netfs_io_request *rreq; - struct netfs_inode *ctx = netfs_inode(ractl->mapping->host); + struct netfs_inode *ictx = netfs_inode(ractl->mapping->host); + unsigned long long start = readahead_pos(ractl); + size_t size = readahead_length(ractl); int ret; - _enter("%lx,%x", readahead_index(ractl), readahead_count(ractl)); - - if (readahead_count(ractl) == 0) - return; - - rreq = netfs_alloc_request(ractl->mapping, ractl->file, - readahead_pos(ractl), - readahead_length(ractl), + rreq = netfs_alloc_request(ractl->mapping, ractl->file, start, size, NETFS_READAHEAD); if (IS_ERR(rreq)) return; - ret = netfs_begin_cache_read(rreq, ctx); + ret = netfs_begin_cache_read(rreq, ictx); if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) goto cleanup_free; @@ -224,20 +371,17 @@ void netfs_readahead(struct readahead_control *ractl) trace_netfs_read(rreq, readahead_pos(ractl), readahead_length(ractl), netfs_read_trace_readahead); - netfs_rreq_expand(rreq, ractl); + //netfs_rreq_expand(rreq, ractl); - /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &ractl->mapping->i_pages, - rreq->start, rreq->len); + rreq->ractl = ractl; + if (netfs_prime_buffer(rreq) < 0) + goto cleanup_free; + netfs_read_to_pagecache(rreq); - /* Drop the refs on the folios here rather than in the cache or - * filesystem. The locks will be dropped in netfs_rreq_unlock(). - */ - while (readahead_folio(ractl)) - ; + /* Release the folio refs whilst we're waiting for the I/O. */ + netfs_put_ra_refs(rreq->buffer); - netfs_begin_read(rreq, false); - netfs_put_request(rreq, false, netfs_rreq_trace_put_return); + netfs_put_request(rreq, true, netfs_rreq_trace_put_return); return; cleanup_free: @@ -246,6 +390,30 @@ void netfs_readahead(struct readahead_control *ractl) } EXPORT_SYMBOL(netfs_readahead); +/* + * Create a rolling buffer with a single occupying folio. 
+ */ +static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio) +{ + struct sheaf *sheaf; + + sheaf = kzalloc(sizeof(*sheaf), GFP_KERNEL); + if (!sheaf) + return -ENOMEM; + + netfs_stat(&netfs_n_sheaf); + sheaf->next = NULL; + sheaf->prev = NULL; + sheaf_slot_set_folio(sheaf, 0, folio); + sheaf->orders[0] = folio_order(folio); + rreq->buffer = sheaf; + rreq->buffer_tail = sheaf; + rreq->submitted = rreq->start + rreq->len; + iov_iter_sheaf(&rreq->iter, ITER_DEST, sheaf, 0, 0, rreq->len); + rreq->ractl = (struct readahead_control *)1UL; + return 0; +} + /** * netfs_read_folio - Helper to manage a read_folio request * @file: The file to read from @@ -326,14 +494,28 @@ int netfs_read_folio(struct file *file, struct folio *folio) if (to < flen) bvec_set_folio(&bvec[i++], folio, flen - to, to); iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); + + ret = netfs_read_to_pagecache(rreq); + + if (sink) + folio_put(sink); } else { - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); + ret = netfs_create_singular_buffer(rreq, folio); + if (ret < 0) + goto discard; + + ret = netfs_read_to_pagecache(rreq); + } + + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); + wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE); + if (ret == 0) + ret = rreq->error; + if (ret == 0 && rreq->submitted < rreq->len) { + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); + ret = -EIO; } - ret = netfs_begin_read(rreq, true); - if (sink) - folio_put(sink); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); return ret < 0 ? ret : 0; @@ -395,7 +577,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len, } /** - * netfs_write_begin - Helper to prepare for writing + * netfs_write_begin - Helper to prepare for writing [DEPRECATED] * @ctx: The netfs context * @file: The file to read from * @mapping: The mapping to read from @@ -406,13 +588,10 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len, * * Pre-read data for a write-begin request by drawing data from the cache if * possible, or the netfs if not. Space beyond the EOF is zero-filled. - * Multiple I/O requests from different sources will get munged together. If - * necessary, the readahead window can be expanded in either direction to a - * more convenient alighment for RPC efficiency or to make storage in the cache - * feasible. + * Multiple I/O requests from different sources will get munged together. * * The calling netfs must provide a table of operations, only one of which, - * issue_op, is mandatory. + * issue_read, is mandatory. * * The check_write_begin() operation can be provided to check for and flush * conflicting writes once the folio is grabbed and locked. It is passed a @@ -437,8 +616,6 @@ int netfs_write_begin(struct netfs_inode *ctx, pgoff_t index = pos >> PAGE_SHIFT; int ret; - DEFINE_READAHEAD(ractl, file, NULL, mapping, index); - retry: folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN, mapping_gfp_mask(mapping)); @@ -486,22 +663,12 @@ int netfs_write_begin(struct netfs_inode *ctx, netfs_stat(&netfs_n_rh_write_begin); trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin); - /* Expand the request to meet caching requirements and download - * preferences. 
- */ - ractl._nr_pages = folio_nr_pages(folio); - netfs_rreq_expand(rreq, &ractl); - /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); - - /* We hold the folio locks, so we can drop the references */ - folio_get(folio); - while (readahead_folio(&ractl)) - ; + ret = netfs_create_singular_buffer(rreq, folio); + if (ret < 0) + goto error_put; - ret = netfs_begin_read(rreq, true); + ret = netfs_read_to_pagecache(rreq); if (ret < 0) goto error; netfs_put_request(rreq, false, netfs_rreq_trace_put_return); @@ -557,10 +724,11 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio, trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write); /* Set up the output buffer */ - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, - rreq->start, rreq->len); + ret = netfs_create_singular_buffer(rreq, folio); + if (ret < 0) + goto error_put; - ret = netfs_begin_read(rreq, true); + ret = netfs_read_to_pagecache(rreq); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); return ret; diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c index 10a1e4da6bda..24a74eb25466 100644 --- a/fs/netfs/direct_read.c +++ b/fs/netfs/direct_read.c @@ -16,6 +16,103 @@ #include #include "internal.h" +/* + * Perform a read to a buffer from the server, slicing up the region to be read + * according to the network rsize. + */ +static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq) +{ + unsigned long long start = rreq->start; + ssize_t size = rreq->len; + + do { + struct netfs_io_subrequest *subreq; + ssize_t slice; + + subreq = netfs_alloc_subrequest(rreq); + if (!subreq) + return -ENOMEM; + + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; + subreq->start = start; + subreq->len = size; + + spin_lock(&rreq->lock); + list_add_tail(&subreq->rreq_link, &rreq->subrequests); + subreq->prev_donated = rreq->prev_donated; + rreq->prev_donated = 0; + trace_netfs_sreq(subreq, netfs_sreq_trace_added); + spin_unlock(&rreq->lock); + + if (subreq->len > rreq->i_size - subreq->start) + subreq->len = rreq->i_size - subreq->start; + + netfs_stat(&netfs_n_rh_download); + slice = rreq->netfs_ops->issue_read(subreq); + if (slice <= 0) + return slice; + + size -= slice; + start += slice; + + if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && + test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) + break; + cond_resched(); + } while (size > 0); + + return 0; +} + +/* + * Perform a read to an application buffer, bypassing the pagecache and the + * local disk cache. 
+ */ +static int netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync) +{ + int ret; + + kenter("R=%x %llx-%llx", + rreq->debug_id, rreq->start, rreq->start + rreq->len - 1); + + if (rreq->len == 0) { + pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); + return -EIO; + } + + // TODO: Use bounce buffer if requested + + inode_dio_begin(rreq->inode); + + ret = netfs_dispatch_unbuffered_reads(rreq); + + if (!rreq->submitted) { + netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); + inode_dio_end(rreq->inode); + ret = 0; + goto out; + } + + if (sync) { + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); + wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, + TASK_UNINTERRUPTIBLE); + + ret = rreq->error; + if (ret == 0 && rreq->submitted < rreq->len && + rreq->origin != NETFS_DIO_READ) { + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); + ret = -EIO; + } + } else { + ret = -EIOCBQUEUED; + } + +out: + kleave(" = %d", ret); + return ret; +} + /** * netfs_unbuffered_read_iter_locked - Perform an unbuffered or direct I/O read * @iocb: The I/O control descriptor describing the read @@ -81,7 +178,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i if (async) rreq->iocb = iocb; - ret = netfs_begin_read(rreq, is_sync_kiocb(iocb)); + ret = netfs_unbuffered_read(rreq, is_sync_kiocb(iocb)); if (ret < 0) goto out; /* May be -EIOCBQUEUED */ if (!async) { diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index fe0974a95152..c470857bfcf8 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -23,15 +23,9 @@ /* * buffered_read.c */ -void netfs_rreq_unlock_folios(struct netfs_io_request *rreq); int netfs_prefetch_for_write(struct file *file, struct folio *folio, size_t offset, size_t len); -/* - * io.c - */ -int netfs_begin_read(struct netfs_io_request *rreq, bool sync); - /* * main.c */ diff --git a/fs/netfs/io.c b/fs/netfs/io.c index 27dbea0f3867..84392eed87ee 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -16,88 +16,7 @@ #include #include "internal.h" -/* - * Clear the unread part of an I/O request. - */ -static void netfs_clear_unread(struct netfs_io_subrequest *subreq) -{ - iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); -} - -static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, - bool was_async) -{ - struct netfs_io_subrequest *subreq = priv; - - netfs_subreq_terminated(subreq, transferred_or_error, was_async); -} - -/* - * Issue a read against the cache. - * - Eats the caller's ref on subreq. - */ -static void netfs_read_from_cache(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq, - enum netfs_read_from_hole read_hole) -{ - struct netfs_cache_resources *cres = &rreq->cache_resources; - - netfs_stat(&netfs_n_rh_read); - cres->ops->read(cres, subreq->start, &subreq->io_iter, read_hole, - netfs_cache_read_terminated, subreq); -} - -/* - * Fill a subrequest region with zeroes. - */ -static void netfs_fill_with_zeroes(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq) -{ - netfs_stat(&netfs_n_rh_zero); - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - netfs_subreq_terminated(subreq, 0, false); -} - -/* - * Ask the netfs to issue a read request to the server for us. - * - * The netfs is expected to read from subreq->pos + subreq->transferred to - * subreq->pos + subreq->len - 1. It may not backtrack and write data into the - * buffer prior to the transferred point as it might clobber dirty data - * obtained from the cache. 
- * - * Alternatively, the netfs is allowed to indicate one of two things: - * - * - NETFS_SREQ_SHORT_READ: A short read - it will get called again to try and - * make progress. - * - * - NETFS_SREQ_CLEAR_TAIL: A short read - the rest of the buffer will be - * cleared. - */ -static void netfs_read_from_server(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq) -{ - netfs_stat(&netfs_n_rh_download); - - if (rreq->origin != NETFS_DIO_READ && - iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred) - pr_warn("R=%08x[%u] ITER PRE-MISMATCH %zx != %zx-%zx %lx\n", - rreq->debug_id, subreq->debug_index, - iov_iter_count(&subreq->io_iter), subreq->len, - subreq->transferred, subreq->flags); - rreq->netfs_ops->issue_read(subreq); -} - -/* - * Release those waiting. - */ -static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async) -{ - trace_netfs_rreq(rreq, netfs_rreq_trace_done); - netfs_clear_subrequests(rreq, was_async); - netfs_put_request(rreq, was_async, netfs_rreq_trace_put_complete); -} - +#if 0 /* * Handle a short read. */ @@ -162,8 +81,6 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq) __clear_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { if (subreq->error) { - if (subreq->source != NETFS_READ_FROM_CACHE) - break; subreq->source = NETFS_DOWNLOAD_FROM_SERVER; subreq->error = 0; netfs_stat(&netfs_n_rh_download_instead); @@ -203,445 +120,4 @@ static void netfs_rreq_is_still_valid(struct netfs_io_request *rreq) } } } - -/* - * Determine how much we can admit to having read from a DIO read. - */ -static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) -{ - struct netfs_io_subrequest *subreq; - unsigned int i; - size_t transferred = 0; - - for (i = 0; i < rreq->direct_bv_count; i++) { - flush_dcache_page(rreq->direct_bv[i].bv_page); - // TODO: cifs marks pages in the destination buffer - // dirty under some circumstances after a read. Do we - // need to do that too? - set_page_dirty(rreq->direct_bv[i].bv_page); - } - - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { - if (subreq->error || subreq->transferred == 0) - break; - transferred += subreq->transferred; - if (subreq->transferred < subreq->len) - break; - } - - for (i = 0; i < rreq->direct_bv_count; i++) - flush_dcache_page(rreq->direct_bv[i].bv_page); - - rreq->transferred = transferred; - task_io_account_read(transferred); - - if (rreq->iocb) { - rreq->iocb->ki_pos += transferred; - if (rreq->iocb->ki_complete) - rreq->iocb->ki_complete( - rreq->iocb, rreq->error ? rreq->error : transferred); - } - if (rreq->netfs_ops->done) - rreq->netfs_ops->done(rreq); - inode_dio_end(rreq->inode); -} - -/* - * Assess the state of a read request and decide what to do next. - * - * Note that we could be in an ordinary kernel thread, on a workqueue or in - * softirq context at this point. We inherit a ref from the caller. 
- */ -static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async) -{ - trace_netfs_rreq(rreq, netfs_rreq_trace_assess); - -again: - netfs_rreq_is_still_valid(rreq); - - if (!test_bit(NETFS_RREQ_FAILED, &rreq->flags) && - test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags)) { - if (netfs_rreq_perform_resubmissions(rreq)) - goto again; - return; - } - - if (rreq->origin != NETFS_DIO_READ) - netfs_rreq_unlock_folios(rreq); - else - netfs_rreq_assess_dio(rreq); - - trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); - wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); - - netfs_rreq_completed(rreq, was_async); -} - -static void netfs_rreq_work(struct work_struct *work) -{ - struct netfs_io_request *rreq = - container_of(work, struct netfs_io_request, work); - netfs_rreq_assess(rreq, false); -} - -/* - * Handle the completion of all outstanding I/O operations on a read request. - * We inherit a ref from the caller. - */ -static void netfs_rreq_terminated(struct netfs_io_request *rreq, - bool was_async) -{ - if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) && - was_async) { - if (!queue_work(system_unbound_wq, &rreq->work)) - BUG(); - } else { - netfs_rreq_assess(rreq, was_async); - } -} - -/** - * netfs_subreq_terminated - Note the termination of an I/O operation. - * @subreq: The I/O request that has terminated. - * @transferred_or_error: The amount of data transferred or an error code. - * @was_async: The termination was asynchronous - * - * This tells the read helper that a contributory I/O operation has terminated, - * one way or another, and that it should integrate the results. - * - * The caller indicates in @transferred_or_error the outcome of the operation, - * supplying a positive value to indicate the number of bytes transferred, 0 to - * indicate a failure to transfer anything that should be retried or a negative - * error code. The helper will look after reissuing I/O operations as - * appropriate and writing downloaded data to the cache. - * - * If @was_async is true, the caller might be running in softirq or interrupt - * context and we can't sleep. 
- */ -void netfs_subreq_terminated(struct netfs_io_subrequest *subreq, - ssize_t transferred_or_error, - bool was_async) -{ - struct netfs_io_request *rreq = subreq->rreq; - int u; - - _enter("R=%x[%x]{%llx,%lx},%zd", - rreq->debug_id, subreq->debug_index, - subreq->start, subreq->flags, transferred_or_error); - - switch (subreq->source) { - case NETFS_READ_FROM_CACHE: - netfs_stat(&netfs_n_rh_read_done); - break; - case NETFS_DOWNLOAD_FROM_SERVER: - netfs_stat(&netfs_n_rh_download_done); - break; - default: - break; - } - - if (IS_ERR_VALUE(transferred_or_error)) { - subreq->error = transferred_or_error; - trace_netfs_failure(rreq, subreq, transferred_or_error, - netfs_fail_read); - goto failed; - } - - if (WARN(transferred_or_error > subreq->len - subreq->transferred, - "Subreq overread: R%x[%x] %zd > %zu - %zu", - rreq->debug_id, subreq->debug_index, - transferred_or_error, subreq->len, subreq->transferred)) - transferred_or_error = subreq->len - subreq->transferred; - - subreq->error = 0; - subreq->transferred += transferred_or_error; - if (subreq->transferred < subreq->len) - goto incomplete; - -complete: - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); - if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) - set_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags); - -out: - trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); - - /* If we decrement nr_outstanding to 0, the ref belongs to us. */ - u = atomic_dec_return(&rreq->nr_outstanding); - if (u == 0) - netfs_rreq_terminated(rreq, was_async); - else if (u == 1) - wake_up_var(&rreq->nr_outstanding); - - netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); - return; - -incomplete: - if (test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags)) { - netfs_clear_unread(subreq); - subreq->transferred = subreq->len; - goto complete; - } - - if (transferred_or_error == 0) { - if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { - subreq->error = -ENODATA; - goto failed; - } - } else { - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); - } - - __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); - set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); - goto out; - -failed: - if (subreq->source == NETFS_READ_FROM_CACHE) { - netfs_stat(&netfs_n_rh_read_failed); - set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); - } else { - netfs_stat(&netfs_n_rh_download_failed); - set_bit(NETFS_RREQ_FAILED, &rreq->flags); - rreq->error = subreq->error; - } - goto out; -} -EXPORT_SYMBOL(netfs_subreq_terminated); - -static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_subrequest *subreq, - loff_t i_size) -{ - struct netfs_io_request *rreq = subreq->rreq; - struct netfs_cache_resources *cres = &rreq->cache_resources; - - if (cres->ops) - return cres->ops->prepare_read(subreq, i_size); - if (subreq->start >= rreq->i_size) - return NETFS_FILL_WITH_ZEROES; - return NETFS_DOWNLOAD_FROM_SERVER; -} - -/* - * Work out what sort of subrequest the next one will be. 
- */ -static enum netfs_io_source -netfs_rreq_prepare_read(struct netfs_io_request *rreq, - struct netfs_io_subrequest *subreq, - struct iov_iter *io_iter) -{ - enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER; - struct netfs_inode *ictx = netfs_inode(rreq->inode); - size_t lsize; - - _enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size); - - if (rreq->origin != NETFS_DIO_READ) { - source = netfs_cache_prepare_read(subreq, rreq->i_size); - if (source == NETFS_INVALID_READ) - goto out; - } - - if (source == NETFS_DOWNLOAD_FROM_SERVER) { - /* Call out to the netfs to let it shrink the request to fit - * its own I/O sizes and boundaries. If it shinks it here, it - * will be called again to make simultaneous calls; if it wants - * to make serial calls, it can indicate a short read and then - * we will call it again. - */ - if (rreq->origin != NETFS_DIO_READ) { - if (subreq->start >= ictx->zero_point) { - source = NETFS_FILL_WITH_ZEROES; - goto set; - } - if (subreq->len > ictx->zero_point - subreq->start) - subreq->len = ictx->zero_point - subreq->start; - } - if (subreq->len > rreq->i_size - subreq->start) - subreq->len = rreq->i_size - subreq->start; - if (rreq->rsize && subreq->len > rreq->rsize) - subreq->len = rreq->rsize; - - if (rreq->netfs_ops->clamp_length && - !rreq->netfs_ops->clamp_length(subreq)) { - source = NETFS_INVALID_READ; - goto out; - } - - if (rreq->io_streams[0].sreq_max_segs) { - lsize = netfs_limit_iter(io_iter, 0, subreq->len, - rreq->io_streams[0].sreq_max_segs); - if (subreq->len > lsize) { - subreq->len = lsize; - trace_netfs_sreq(subreq, netfs_sreq_trace_limited); - } - } - } - -set: - if (subreq->len > rreq->len) - pr_warn("R=%08x[%u] SREQ>RREQ %zx > %llx\n", - rreq->debug_id, subreq->debug_index, - subreq->len, rreq->len); - - if (WARN_ON(subreq->len == 0)) { - source = NETFS_INVALID_READ; - goto out; - } - - subreq->source = source; - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); - - subreq->io_iter = *io_iter; - iov_iter_truncate(&subreq->io_iter, subreq->len); - iov_iter_advance(io_iter, subreq->len); -out: - subreq->source = source; - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); - return source; -} - -/* - * Slice off a piece of a read request and submit an I/O request for it. - */ -static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq, - struct iov_iter *io_iter) -{ - struct netfs_io_subrequest *subreq; - enum netfs_io_source source; - - subreq = netfs_alloc_subrequest(rreq); - if (!subreq) - return false; - - subreq->start = rreq->start + rreq->submitted; - subreq->len = io_iter->count; - - _debug("slice %llx,%zx,%llx", subreq->start, subreq->len, rreq->submitted); - list_add_tail(&subreq->rreq_link, &rreq->subrequests); - - /* Call out to the cache to find out what it can do with the remaining - * subset. It tells us in subreq->flags what it decided should be done - * and adjusts subreq->len down if the subset crosses a cache boundary. - * - * Then when we hand the subset, it can choose to take a subset of that - * (the starts must coincide), in which case, we go around the loop - * again and ask it to download the next piece. 
- */ - source = netfs_rreq_prepare_read(rreq, subreq, io_iter); - if (source == NETFS_INVALID_READ) - goto subreq_failed; - - atomic_inc(&rreq->nr_outstanding); - - rreq->submitted += subreq->len; - - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - switch (source) { - case NETFS_FILL_WITH_ZEROES: - netfs_fill_with_zeroes(rreq, subreq); - break; - case NETFS_DOWNLOAD_FROM_SERVER: - netfs_read_from_server(rreq, subreq); - break; - case NETFS_READ_FROM_CACHE: - netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_IGNORE); - break; - default: - BUG(); - } - - return true; - -subreq_failed: - rreq->error = subreq->error; - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_failed); - return false; -} - -/* - * Begin the process of reading in a chunk of data, where that data may be - * stitched together from multiple sources, including multiple servers and the - * local cache. - */ -int netfs_begin_read(struct netfs_io_request *rreq, bool sync) -{ - struct iov_iter io_iter; - int ret; - - _enter("R=%x %llx-%llx", - rreq->debug_id, rreq->start, rreq->start + rreq->len - 1); - - if (rreq->len == 0) { - pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); - return -EIO; - } - - if (rreq->origin == NETFS_DIO_READ) - inode_dio_begin(rreq->inode); - - // TODO: Use bounce buffer if requested - rreq->io_iter = rreq->iter; - - INIT_WORK(&rreq->work, netfs_rreq_work); - - /* Chop the read into slices according to what the cache and the netfs - * want and submit each one. - */ - netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding); - atomic_set(&rreq->nr_outstanding, 1); - io_iter = rreq->io_iter; - do { - _debug("submit %llx + %llx >= %llx", - rreq->start, rreq->submitted, rreq->i_size); - if (rreq->origin == NETFS_DIO_READ && - rreq->start + rreq->submitted >= rreq->i_size) - break; - if (!netfs_rreq_submit_slice(rreq, &io_iter)) - break; - if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && - test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) - break; - - } while (rreq->submitted < rreq->len); - - if (!rreq->submitted) { - netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); - if (rreq->origin == NETFS_DIO_READ) - inode_dio_end(rreq->inode); - ret = 0; - goto out; - } - - if (sync) { - /* Keep nr_outstanding incremented so that the ref always - * belongs to us, and the service code isn't punted off to a - * random thread pool to process. Note that this might start - * further work, such as writing to the cache. - */ - wait_var_event(&rreq->nr_outstanding, - atomic_read(&rreq->nr_outstanding) == 1); - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_assess(rreq, false); - - trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); - wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, - TASK_UNINTERRUPTIBLE); - - ret = rreq->error; - if (ret == 0 && rreq->submitted < rreq->len && - rreq->origin != NETFS_DIO_READ) { - trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); - ret = -EIO; - } - } else { - /* If we decrement nr_outstanding to 0, the ref belongs to us. */ - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_assess(rreq, false); - ret = -EIOCBQUEUED; - } - -out: - return ret; -} +#endif diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index b781bbbf1d8d..52b7a337e5cd 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -188,9 +188,59 @@ static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offse return min(span, max_size); } +/* + * Select the span of a sheaf iterator we're going to use. 
Limit it by both + * maximum size and maximum number of segments. Returns the size of the span + * in bytes. + */ +static size_t netfs_limit_sheaf(const struct iov_iter *iter, size_t start_offset, + size_t max_size, size_t max_segs) +{ + const struct sheaf *sheaf = iter->sheaf; + unsigned int nsegs = 0; + unsigned int slot = iter->sheaf_slot; + size_t span = 0, n = iter->count; + + if (WARN_ON(!iov_iter_is_sheaf(iter)) || + WARN_ON(start_offset > n) || + n == 0) + return 0; + max_size = umin(max_size, n - start_offset); + + if (slot >= sheaf_nr_slots(sheaf)) { + sheaf = sheaf->next; + slot = 0; + } + + start_offset += iter->iov_offset; + do { + size_t flen = sheaf_folio_size(sheaf, slot); + + if (start_offset < flen) { + span += flen - start_offset; + nsegs++; + start_offset = 0; + } else { + start_offset -= flen; + } + if (span >= max_size || nsegs >= max_segs) + break; + + slot++; + if (slot >= sheaf_nr_slots(sheaf)) { + sheaf = sheaf->next; + slot = 0; + } + } while (sheaf); + + return umin(span, max_size); +} + size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset, size_t max_size, size_t max_segs) { + if (iov_iter_is_sheaf(iter)) + return netfs_limit_sheaf(iter, start_offset, max_size, max_segs); if (iov_iter_is_bvec(iter)) return netfs_limit_bvec(iter, start_offset, max_size, max_segs); if (iov_iter_is_xarray(iter)) diff --git a/fs/netfs/main.c b/fs/netfs/main.c index 5f0f438e5d21..b28e601a8386 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -54,21 +54,20 @@ static int netfs_requests_seq_show(struct seq_file *m, void *v) if (v == &netfs_io_requests) { seq_puts(m, - "REQUEST OR REF FL ERR OPS COVERAGE\n" - "======== == === == ==== === =========\n" + "REQUEST OR REF FL ERR COVERAGE\n" + "======== == === == ==== =========\n" ); return 0; } rreq = list_entry(v, struct netfs_io_request, proc_link); seq_printf(m, - "%08x %s %3d %2lx %4d %3d @%04llx %llx/%llx", + "%08x %s %3d %2lx %4d @%04llx %llx/%llx", rreq->debug_id, netfs_origins[rreq->origin], refcount_read(&rreq->ref), rreq->flags, rreq->error, - atomic_read(&rreq->nr_outstanding), rreq->start, rreq->submitted, rreq->len); seq_putc(m, '\n'); return 0; diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index d148a955fa55..a70d8d092401 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -40,7 +40,6 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping, memset(rreq, 0, kmem_cache_size(cache)); rreq->start = start; rreq->len = len; - rreq->upper_len = len; rreq->origin = origin; rreq->netfs_ops = ctx->ops; rreq->mapping = mapping; diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c new file mode 100644 index 000000000000..c645fe5ba5b3 --- /dev/null +++ b/fs/netfs/read_collect.c @@ -0,0 +1,450 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Network filesystem read subrequest result collection, assessment and + * retrying. + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include +#include +#include "internal.h" + +/* + * Clear the unread part of an I/O request. + */ +static void netfs_clear_unread(struct netfs_io_subrequest *subreq) +{ + WARN_ON_ONCE(subreq->len - subreq->transferred != iov_iter_count(&subreq->io_iter)); + iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); +} + +/* + * Flush, mark and unlock a folio that's now completely read. 
If we want to + * cache the folio, we set the group to NETFS_FOLIO_COPY_TO_CACHE, mark it + * dirty and let writeback handle it. + */ +static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq, + struct netfs_io_request *rreq, + struct folio *folio) +{ + struct netfs_folio *finfo; + + flush_dcache_folio(folio); + folio_mark_uptodate(folio); + + if (!test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { + finfo = netfs_folio_info(folio); + if (finfo) { + trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); + if (finfo->netfs_group) + folio_change_private(folio, finfo->netfs_group); + else + folio_detach_private(folio); + kfree(finfo); + } + + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { + if (!WARN_ON_ONCE(folio_get_private(folio) != NULL)) { + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); + folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); + filemap_dirty_folio(folio->mapping, folio); + } + } else { + trace_netfs_folio(folio, netfs_folio_trace_read_done); + } + } else { + // TODO: Use of PG_private_2 is deprecated. + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); + folio_start_private_2(folio); + } + } + + if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { + if (folio->index == rreq->no_unlock_folio && + test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) + _debug("no unlock"); + else + folio_unlock(folio); + } +} + +/* + * Unlock any folios that are now completely read. Returns true if the + * subrequest is removed from the list. + */ +static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was_async) +{ + struct netfs_io_subrequest *prev, *next; + struct netfs_io_request *rreq = subreq->rreq; + struct sheaf *sheaf = subreq->curr_sheaf; + size_t avail, prev_donated, next_donated, fsize, part; + loff_t fpos, start; + loff_t fend; + int slot = subreq->curr_sheaf_slot; + + if (WARN(subreq->transferred > subreq->len, + "Subreq overread: R%x[%x] %zu > %zu", + rreq->debug_id, subreq->debug_index, + subreq->transferred, subreq->len)) + subreq->transferred = subreq->len; + +next_folio: + fsize = PAGE_SIZE << subreq->curr_folio_order; + fpos = round_down(subreq->start + subreq->consumed, fsize); + fend = fpos + fsize; + + if (WARN_ON_ONCE(sheaf_slot_folio(sheaf, slot)->index != fpos / PAGE_SIZE)) { + printk("R=%08x[%x] s=%llx-%llx ctl=%zx/%zx/%zx sl=%u\n", + rreq->debug_id, subreq->debug_index, + subreq->start, subreq->start + subreq->transferred, + subreq->consumed, subreq->transferred, subreq->len, + slot); + printk("folio: %llx-%llx ix=%llx\n", + fpos, fend - 1, folio_pos(sheaf_slot_folio(sheaf, slot))); + } + +donation_changed: + /* Try to consume the current folio if we've hit or passed the end of + * it. There's a possibility that this subreq doesn't start at the + * beginning of the folio, in which case we need to donate to/from the + * preceding subreq. + * + * We also need to include any potential donation back from the + * following subreq. 
+ */ + prev_donated = READ_ONCE(subreq->prev_donated); + next_donated = READ_ONCE(subreq->next_donated); + + avail = subreq->transferred; + if (avail == subreq->len) + avail += next_donated; + start = subreq->start; + if (subreq->consumed == 0) { + start -= prev_donated; + avail += prev_donated; + } else { + start += subreq->consumed; + avail -= subreq->consumed; + } + part = umin(avail, fsize); + + trace_netfs_progress(subreq, start, avail, part); + + if (start + avail >= fend) { + if (fpos == start) { + /* Flush, unlock and mark for caching any folio we've just read. */ + subreq->consumed = fend - subreq->start; + netfs_unlock_read_folio(subreq, rreq, sheaf_slot_folio(sheaf, slot)); + if (subreq->consumed >= subreq->len) + goto remove_subreq; + } else if (fpos < start) { + size_t excess = fend - subreq->start; + + spin_lock(&rreq->lock); + /* If we complete first on a folio split with the + * preceding subreq, donate to that subreq - otherwise + * we get the responsibility. + */ + if (subreq->prev_donated != prev_donated) { + spin_unlock(&rreq->lock); + goto donation_changed; + } + + prev = list_prev_entry(subreq, rreq_link); + WRITE_ONCE(prev->next_donated, prev->next_donated + excess); + subreq->consumed = fend - subreq->start; + trace_netfs_donate(rreq, subreq, prev, excess, + netfs_trace_donate_tail_to_prev); + + if (subreq->consumed >= subreq->len) + goto remove_subreq_locked; + spin_unlock(&rreq->lock); + } else { + pr_err("fpos > start\n"); + goto bad; + } + + /* Advance the rolling buffer to the next folio. */ + slot++; + if (slot >= sheaf_nr_slots(sheaf)) { + slot = 0; + sheaf = sheaf->next; + subreq->curr_sheaf = sheaf; + } + subreq->curr_sheaf_slot = slot; + if (sheaf && sheaf->slots[slot]) + subreq->curr_folio_order = sheaf->orders[slot]; + cond_resched(); + goto next_folio; + } + + /* Deal with partial progress. */ + if (subreq->transferred < subreq->len) + return false; + + /* Donate the remaining downloaded data to one of the neighbouring + * subrequests. Note that we may race with them doing the same thing. + */ + spin_lock(&rreq->lock); + + if (subreq->prev_donated != prev_donated || + subreq->next_donated != next_donated) { + spin_unlock(&rreq->lock); + cond_resched(); + goto donation_changed; + } + + /* Deal with the trickiest case: that this subreq is in the middle of a + * folio, not touching either edge, but finishes first. In such a + * case, we donate to the previous subreq, if there is one, so that the + * donation is only handled when that completes - and remove this + * subreq from the list. + * + * If the previous subreq finished first, we will have acquired their + * donation and should be able to unlock folios and/or donate nextwards. 
+ */ + if (!subreq->consumed) { + if (!prev_donated && !next_donated && + !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { + prev = list_prev_entry(subreq, rreq_link); + WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); + trace_netfs_donate(rreq, subreq, prev, prev->next_donated + subreq->len, + netfs_trace_donate_to_prev); + goto remove_subreq_locked; + } + } + + if (!next_donated) { + size_t excess = subreq->len - subreq->consumed; + + if (!subreq->consumed) + excess += prev_donated; + + if (list_is_last(&subreq->rreq_link, &rreq->subrequests)) { + rreq->prev_donated = excess; + trace_netfs_donate(rreq, subreq, NULL, excess, + netfs_trace_donate_to_deferred_next); + } else { + next = list_next_entry(subreq, rreq_link); + WRITE_ONCE(next->prev_donated, excess); + trace_netfs_donate(rreq, subreq, next, excess, + netfs_trace_donate_to_next); + } + goto remove_subreq_locked; + } + + spin_unlock(&rreq->lock); + +bad: + /* Errr... prev and next both donated to us, but insufficient to finish + * the folio. + */ + printk("R=%08x[%x] s=%llx-%llx %zx/%zx/%zx\n", + rreq->debug_id, subreq->debug_index, + subreq->start, subreq->start + subreq->transferred, + subreq->consumed, subreq->transferred, subreq->len); + printk("folio: %llx-%llx\n", fpos, fend - 1); + printk("donated: prev=%zx next=%zx\n", prev_donated, next_donated); + printk("s=%llx av=%zx part=%zx\n", start, avail, part); + BUG(); + +remove_subreq: + spin_lock(&rreq->lock); +remove_subreq_locked: + subreq->consumed = subreq->len; + list_del(&subreq->rreq_link); + spin_unlock(&rreq->lock); + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_consumed); + return true; +} + +/* + * Release those waiting. + */ +static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async) +{ + trace_netfs_rreq(rreq, netfs_rreq_trace_done); + netfs_clear_subrequests(rreq, was_async); +} + +/* + * Determine how much we can admit to having read from a DIO read. + */ +static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) +{ + struct netfs_io_subrequest *subreq; + unsigned int i; + size_t transferred = 0; + + for (i = 0; i < rreq->direct_bv_count; i++) { + flush_dcache_page(rreq->direct_bv[i].bv_page); + // TODO: cifs marks pages in the destination buffer + // dirty under some circumstances after a read. Do we + // need to do that too? + set_page_dirty(rreq->direct_bv[i].bv_page); + } + + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { + if (subreq->error || subreq->transferred == 0) + break; + transferred += subreq->transferred; + if (subreq->transferred < subreq->len) + break; + } + + for (i = 0; i < rreq->direct_bv_count; i++) + flush_dcache_page(rreq->direct_bv[i].bv_page); + + rreq->transferred = transferred; + + if (rreq->iocb) { + rreq->iocb->ki_pos += transferred; + if (rreq->iocb->ki_complete) + rreq->iocb->ki_complete( + rreq->iocb, rreq->error ? rreq->error : transferred); + } + if (rreq->netfs_ops->done) + rreq->netfs_ops->done(rreq); + inode_dio_end(rreq->inode); +} + +/* + * Assess the state of a read request and decide what to do next. + * + * Note that we could be in an ordinary kernel thread, on a workqueue or in + * softirq context at this point. We inherit a ref from the caller. 
+ */ +static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async) +{ + trace_netfs_rreq(rreq, netfs_rreq_trace_assess); + + //netfs_rreq_is_still_valid(rreq); + + if (rreq->origin == NETFS_DIO_READ) + netfs_rreq_assess_dio(rreq); + task_io_account_read(rreq->transferred); + + trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); + clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); + wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); + + netfs_rreq_completed(rreq, was_async); +} + +static void netfs_rreq_work(struct work_struct *work) +{ + struct netfs_io_request *rreq = + container_of(work, struct netfs_io_request, work); + netfs_rreq_assess(rreq, false); +} + +/* + * Handle the completion of all outstanding I/O operations on a read request. + * We inherit a ref from the caller. + */ +static void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async) +{ + if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) && + was_async) { + INIT_WORK(&rreq->work, netfs_rreq_work); + if (!queue_work(system_unbound_wq, &rreq->work)) + BUG(); + } else { + netfs_rreq_assess(rreq, was_async); + } +} + +/** + * netfs_read_subreq_progress - Note progress of a read operation. + * @subreq: The read subrequest that has made progress or terminated. + * @error: Completion code, or -EINPROGRESS if this is just a progress update. + * @was_async: The notification is being made asynchronously + * + * This tells the read side of the netfs library that a contributory I/O + * operation has made progress, one way or another, and that it may be possible + * to unlock some folios. + * + * The caller indicates in @error the state of the operation, supplying 0 to + * indicate successful completion of the operation, -EINPROGRESS to indicate + * that the operation is still ongoing, or some other negative error code on + * failure. The helper will look after reissuing I/O operations as appropriate + * and writing downloaded data to the cache. + * + * The filesystem should update subreq->transferred to track the amount of data + * copied into the output buffer. + * + * If @was_async is true, the caller might be running in softirq or interrupt + * context and we can't sleep. + */ +void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, + int error, bool was_async) +{ + struct netfs_io_request *rreq = subreq->rreq; + + /* If the read completed validly short, then we can clear the tail + * before going on to unlock the folios. + */ + if (error == 0 && subreq->transferred < subreq->len && + test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags)) { + netfs_clear_unread(subreq); + subreq->transferred = subreq->len; + } + + if (subreq->transferred > subreq->consumed) { + if (rreq->origin != NETFS_DIO_READ) + netfs_consume_read_data(subreq, was_async); + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + } + + /* If we had progress, but not completion, then we're done for now.
*/ + if (error == -EINPROGRESS) + return; + + switch (subreq->source) { + case NETFS_READ_FROM_CACHE: + netfs_stat(&netfs_n_rh_read_done); + break; + case NETFS_DOWNLOAD_FROM_SERVER: + netfs_stat(&netfs_n_rh_download_done); + break; + default: + break; + } + + if (subreq->transferred < subreq->len) { + __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); + set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); + if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) + error = -ENODATA; + } + + subreq->error = error; + if (error < 0) { + trace_netfs_failure(rreq, subreq, error, netfs_fail_read); + if (subreq->source == NETFS_READ_FROM_CACHE) { + netfs_stat(&netfs_n_rh_read_failed); + set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); + } else { + netfs_stat(&netfs_n_rh_download_failed); + set_bit(NETFS_RREQ_FAILED, &rreq->flags); + rreq->error = subreq->error; + } + } else { + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + } + + rreq->transferred += subreq->transferred; + trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); + + if (list_empty(&rreq->subrequests)) + netfs_rreq_terminated(rreq, was_async); + + netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); +} +EXPORT_SYMBOL(netfs_read_subreq_progress); diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index fb92dd8160f3..413199617476 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -158,10 +158,6 @@ static void netfs_prepare_write(struct netfs_io_request *wreq, _enter("R=%x[%x]", wreq->debug_id, subreq->debug_index); - trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, - refcount_read(&subreq->ref), - netfs_sreq_trace_new); - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); stream->sreq_max_len = UINT_MAX; diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index ddc1ee031955..6d7d04c9eff6 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -286,15 +286,7 @@ static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sre return netfs; } -static bool nfs_netfs_clamp_length(struct netfs_io_subrequest *sreq) -{ - size_t rsize = NFS_SB(sreq->rreq->inode->i_sb)->rsize; - - sreq->len = min(sreq->len, rsize); - return true; -} - -static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) +static ssize_t nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) { struct nfs_netfs_io_data *netfs; struct nfs_pageio_descriptor pgio; @@ -302,17 +294,26 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) struct nfs_open_context *ctx = sreq->rreq->netfs_priv; struct page *page; unsigned long idx; + pgoff_t start, last; + ssize_t len; int err; - pgoff_t start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; - pgoff_t last = ((sreq->start + sreq->len - - sreq->transferred - 1) >> PAGE_SHIFT); + + err = netfs_prepare_read_iterator(sreq, NFS_SB(sreq->rreq->inode->i_sb)->rsize, 0); + if (err < 0) + return err; + + len = sreq->len; + start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; + last = ((sreq->start + len - sreq->transferred - 1) >> PAGE_SHIFT); nfs_pageio_init_read(&pgio, inode, false, &nfs_async_read_completion_ops); netfs = nfs_netfs_alloc(sreq); - if (!netfs) - return netfs_subreq_terminated(sreq, -ENOMEM, false); + if (!netfs) { + netfs_read_subreq_progress(sreq, -ENOMEM, false); + return -ENOMEM; + } pgio.pg_netfs = netfs; /* used in completion */ @@ -327,6 +328,7 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) out: nfs_pageio_complete_read(&pgio); nfs_netfs_put(netfs); + return len; } void 
nfs_netfs_initiate_read(struct nfs_pgio_header *hdr) @@ -377,5 +379,4 @@ const struct netfs_request_ops nfs_netfs_ops = { .init_request = nfs_netfs_init_request, .free_request = nfs_netfs_free_request, .issue_read = nfs_netfs_issue_read, - .clamp_length = nfs_netfs_clamp_length }; diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index fbed0027996f..20c1d73085cd 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -60,8 +60,6 @@ static inline void nfs_netfs_get(struct nfs_netfs_io_data *netfs) static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs) { - ssize_t final_len; - /* Only the last RPC completion should call netfs_subreq_terminated() */ if (!refcount_dec_and_test(&netfs->refcount)) return; @@ -74,8 +72,9 @@ static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs) * Correct the final length here to be no larger than the netfs subrequest * length, and thus avoid netfs's "Subreq overread" warning message. */ - final_len = min_t(s64, netfs->sreq->len, atomic64_read(&netfs->transferred)); - netfs_subreq_terminated(netfs->sreq, netfs->error ?: final_len, false); + netfs->sreq->transferred = min_t(s64, netfs->sreq->len, + atomic64_read(&netfs->transferred)); + netfs_read_subreq_progress(netfs->sreq, netfs->error, false); kfree(netfs); } static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi) diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 595c4b673707..ff7f95395239 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1309,10 +1309,8 @@ cifs_readv_callback(struct mid_q_entry *mid) if (rdata->result == 0 || rdata->result == -EAGAIN) iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); rdata->credits.value = 0; - netfs_subreq_terminated(&rdata->subreq, - (rdata->result == 0 || rdata->result == -EAGAIN) ? - rdata->got_bytes : rdata->result, - false); + rdata->subreq.transferred += rdata->got_bytes; + netfs_read_subreq_progress(&rdata->subreq, rdata->result, false); release_mid(mid); add_credits(server, &credits, 0); } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 4732c63f7531..f3be9444465a 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include "cifsfs.h" #include "cifspdu.h" @@ -125,22 +126,21 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq) } /* - * Split the read up according to how many credits we can get for each piece. - * It's okay to sleep here if we need to wait for more credit to become - * available. - * - * We also choose the server and allocate an operation ID to be cleaned up - * later. + * Issue a read operation on behalf of the netfs helper functions. We're asked + * to make a read of a certain size at a point in the file. We are permitted + * to only read a portion of that, but as long as we read something, the netfs + * helper will call us again so that we can issue another read. 
*/ -static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) +static ssize_t cifs_issue_read(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq = subreq->rreq; - struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr]; struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); struct TCP_Server_Info *server = req->server; struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); - int rc; + unsigned int max_segs = 0; + size_t rsize, len; + int rc = 0; rdata->xid = get_xid(); rdata->have_xid = true; @@ -153,52 +153,48 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, - &stream->sreq_max_len, &rdata->credits); - if (rc) { - subreq->error = rc; - return false; - } + &rsize, &rdata->credits); + if (rc) + goto failed; - subreq->len = umin(subreq->len, stream->sreq_max_len); #ifdef CONFIG_CIFS_SMB_DIRECT if (server->smbd_conn) - stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; + max_segs = server->smbd_conn->max_frmr_depth; #endif - return true; -} -/* - * Issue a read operation on behalf of the netfs helper functions. We're asked - * to make a read of a certain size at a point in the file. We are permitted - * to only read a portion of that, but as long as we read something, the netfs - * helper will call us again so that we can issue another read. - */ -static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) -{ - struct netfs_io_request *rreq = subreq->rreq; - struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); - struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); - struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); - int rc = 0; + len = netfs_prepare_read_iterator(subreq, rsize, max_segs); + if (len < 0) { + rc = len; + goto failed; + } cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n", __func__, rreq->debug_id, subreq->debug_index, rreq->mapping, - subreq->transferred, subreq->len); + subreq->transferred, len); + + rc = adjust_credits(server, &rdata->credits, len); + if (rc) + goto failed; if (req->cfile->invalidHandle) { do { rc = cifs_reopen_file(req->cfile, true); } while (rc == -EAGAIN); if (rc) - goto out; + goto failed; } __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); rc = rdata->server->ops->async_readv(rdata); -out: if (rc) - netfs_subreq_terminated(subreq, rc, false); + goto failed; + return len; + +failed: + netfs_read_subreq_progress(subreq, rc, false); + return rc; } /* @@ -326,8 +322,7 @@ const struct netfs_request_ops cifs_req_ops = { .free_request = cifs_free_request, .free_subrequest = cifs_free_subrequest, .expand_readahead = cifs_expand_readahead, - .clamp_length = cifs_clamp_length, - .issue_read = cifs_req_issue_read, + .issue_read = cifs_issue_read, .done = cifs_rreq_done, .begin_writeback = cifs_begin_writeback, .prepare_write = cifs_prepare_write, diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 2ae2dbb6202b..14988281cce6 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4489,9 +4489,7 @@ static void smb2_readv_worker(struct work_struct *work) struct cifs_io_subrequest *rdata = container_of(work, struct cifs_io_subrequest, subreq.work); - netfs_subreq_terminated(&rdata->subreq, - (rdata->result == 0 || rdata->result == 
-EAGAIN) ? - rdata->got_bytes : rdata->result, true); + netfs_read_subreq_progress(&rdata->subreq, rdata->result, false); } static void @@ -4538,6 +4536,7 @@ smb2_readv_callback(struct mid_q_entry *mid) break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: + __set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags); rdata->result = -EAGAIN; if (server->sign && rdata->got_bytes) /* reset bytes number since we can not check a sign */ @@ -4588,6 +4587,8 @@ smb2_readv_callback(struct mid_q_entry *mid) rdata->result = 0; } rdata->credits.value = 0; + rdata->subreq.transferred += rdata->got_bytes; + trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_progress); INIT_WORK(&rdata->subreq.work, smb2_readv_worker); queue_work(cifsiod_wq, &rdata->subreq.work); release_mid(mid); @@ -4838,6 +4839,7 @@ smb2_writev_callback(struct mid_q_entry *mid) wdata->subreq.start, wdata->subreq.len); wdata->credits.value = 0; + trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress); cifs_write_subrequest_terminated(wdata, result ?: written, true); release_mid(mid); add_credits(server, &credits, 0); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index b880687bb932..6987c2c02074 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -183,12 +183,18 @@ struct netfs_io_subrequest { unsigned long long start; /* Where to start the I/O */ size_t len; /* Size of the I/O */ size_t transferred; /* Amount of data transferred */ + size_t consumed; /* Amount of read data consumed */ + unsigned int prev_donated; /* Amount of data donated from previous subreq */ + unsigned int next_donated; /* Amount of data donated from next subreq */ refcount_t ref; short error; /* 0 or error that occurred */ unsigned short debug_index; /* Index in list (for debugging output) */ unsigned int nr_segs; /* Number of segs in io_iter */ enum netfs_io_source source; /* Where to read from/write to */ unsigned char stream_nr; /* I/O stream this belongs to */ + unsigned char curr_sheaf_slot; /* Folio currently being read */ + unsigned char curr_folio_order; /* Order of folio */ + struct sheaf *curr_sheaf; /* Sheaf in which current folio resides */ unsigned long flags; #define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */ #define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */ @@ -229,6 +235,7 @@ struct netfs_io_request { struct address_space *mapping; /* The mapping being accessed */ struct kiocb *iocb; /* AIO completion vector */ struct netfs_cache_resources cache_resources; + struct readahead_control *ractl; /* Readahead descriptor */ struct list_head proc_link; /* Link in netfs_iorequests */ struct list_head subrequests; /* Contributory I/O operations */ struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operations */ @@ -248,9 +255,6 @@ struct netfs_io_request { atomic_t subreq_counter; /* Next subreq->debug_index */ unsigned int nr_group_rel; /* Number of refs to release on ->group */ spinlock_t lock; /* Lock for queuing subreqs */ - atomic_t nr_outstanding; /* Number of ops in progress */ - atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */ - size_t upper_len; /* Length can be extended to here */ unsigned long long submitted; /* Amount submitted for I/O so far */ unsigned long long len; /* Length of the request */ size_t transferred; /* Amount to be indicated as transferred */ @@ -265,6 +269,7 @@ struct netfs_io_request { unsigned long long collected_to; /* Point we've collected to */ unsigned long long cleaned_to; /* Position 
we've cleaned folios to */ pgoff_t no_unlock_folio; /* Don't unlock this folio after read */ + unsigned int prev_donated; /* Fallback for subreq->prev_donated */ refcount_t ref; unsigned long flags; #define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */ @@ -298,8 +303,7 @@ struct netfs_request_ops { /* Read request handling */ void (*expand_readahead)(struct netfs_io_request *rreq); - bool (*clamp_length)(struct netfs_io_subrequest *subreq); - void (*issue_read)(struct netfs_io_subrequest *subreq); + ssize_t (*issue_read)(struct netfs_io_subrequest *subreq); bool (*is_still_valid)(struct netfs_io_request *rreq); int (*check_write_begin)(struct file *file, loff_t pos, unsigned len, struct folio **foliop, void **_fsdata); @@ -428,7 +432,10 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp); vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group); /* (Sub)request management API. */ -void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); +ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq, size_t rsize, + unsigned int max_segs); +void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, + int error, bool was_async); void netfs_get_subrequest(struct netfs_io_subrequest *subreq, enum netfs_sreq_ref_trace what); void netfs_put_subrequest(struct netfs_io_subrequest *subreq, diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 64238a64ae5f..6d3403880e0b 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -68,10 +68,14 @@ E_(NETFS_INVALID_WRITE, "INVL") #define netfs_sreq_traces \ + EM(netfs_sreq_trace_added, "ADD ") \ EM(netfs_sreq_trace_discard, "DSCRD") \ + EM(netfs_sreq_trace_donate_to_prev, "DON-P") \ + EM(netfs_sreq_trace_donate_to_next, "DON-N") \ EM(netfs_sreq_trace_download_instead, "RDOWN") \ EM(netfs_sreq_trace_fail, "FAIL ") \ EM(netfs_sreq_trace_free, "FREE ") \ + EM(netfs_sreq_trace_io_progress, "IO ") \ EM(netfs_sreq_trace_limited, "LIMIT") \ EM(netfs_sreq_trace_prepare, "PREP ") \ EM(netfs_sreq_trace_prep_failed, "PRPFL") \ @@ -117,7 +121,7 @@ EM(netfs_sreq_trace_new, "NEW ") \ EM(netfs_sreq_trace_put_cancel, "PUT CANCEL ") \ EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \ - EM(netfs_sreq_trace_put_discard, "PUT DISCARD") \ + EM(netfs_sreq_trace_put_consumed, "PUT CONSUME") \ EM(netfs_sreq_trace_put_done, "PUT DONE ") \ EM(netfs_sreq_trace_put_failed, "PUT FAILED ") \ EM(netfs_sreq_trace_put_merged, "PUT MERGED ") \ @@ -151,7 +155,10 @@ EM(netfs_folio_trace_mkwrite, "mkwrite") \ EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ EM(netfs_folio_trace_not_under_wback, "!wback") \ + EM(netfs_folio_trace_read, "read") \ + EM(netfs_folio_trace_read_done, "read-done") \ EM(netfs_folio_trace_read_gaps, "read-gaps") \ + EM(netfs_folio_trace_read_put, "read-put") \ EM(netfs_folio_trace_redirtied, "redirtied") \ EM(netfs_folio_trace_store, "store") \ EM(netfs_folio_trace_store_copy, "store-copy") \ @@ -164,6 +171,12 @@ EM(netfs_contig_trace_jump, "-->JUMP-->") \ E_(netfs_contig_trace_unlock, "Unlock") +#define netfs_donate_traces \ + EM(netfs_trace_donate_tail_to_prev, "tail-to-prev") \ + EM(netfs_trace_donate_to_prev, "to-prev") \ + EM(netfs_trace_donate_to_next, "to-next") \ + E_(netfs_trace_donate_to_deferred_next, "defer-next") + #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY @@ -181,6 +194,7 @@ enum netfs_rreq_ref_trace { netfs_rreq_ref_traces } __mode(byte); enum 
netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte); enum netfs_folio_trace { netfs_folio_traces } __mode(byte); enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byte); +enum netfs_donate_trace { netfs_donate_traces } __mode(byte); #endif @@ -203,6 +217,7 @@ netfs_rreq_ref_traces; netfs_sreq_ref_traces; netfs_folio_traces; netfs_collect_contig_traces; +netfs_donate_traces; /* * Now redefine the EM() and E_() macros to map the enums to the strings that @@ -651,6 +666,71 @@ TRACE_EVENT(netfs_collect_stream, __entry->collected_to, __entry->front) ); +TRACE_EVENT(netfs_progress, + TP_PROTO(const struct netfs_io_subrequest *subreq, + unsigned long long start, size_t avail, size_t part), + + TP_ARGS(subreq, start, avail, part), + + TP_STRUCT__entry( + __field(unsigned int, rreq) + __field(unsigned int, subreq) + __field(unsigned int, consumed) + __field(unsigned int, transferred) + __field(unsigned long long, f_start) + __field(unsigned int, f_avail) + __field(unsigned int, f_part) + __field(unsigned char, slot) + ), + + TP_fast_assign( + __entry->rreq = subreq->rreq->debug_id; + __entry->subreq = subreq->debug_index; + __entry->consumed = subreq->consumed; + __entry->transferred = subreq->transferred; + __entry->f_start = start; + __entry->f_avail = avail; + __entry->f_part = part; + __entry->slot = subreq->curr_sheaf_slot; + ), + + TP_printk("R=%08x[%02x] s=%llx ct=%x/%x pa=%x/%x sl=%x", + __entry->rreq, __entry->subreq, __entry->f_start, + __entry->consumed, __entry->transferred, + __entry->f_part, __entry->f_avail, __entry->slot) + ); + +TRACE_EVENT(netfs_donate, + TP_PROTO(const struct netfs_io_request *rreq, + const struct netfs_io_subrequest *from, + const struct netfs_io_subrequest *to, + size_t amount, + enum netfs_donate_trace trace), + + TP_ARGS(rreq, from, to, amount, trace), + + TP_STRUCT__entry( + __field(unsigned int, rreq) + __field(unsigned int, from) + __field(unsigned int, to) + __field(unsigned int, amount) + __field(enum netfs_donate_trace, trace) + ), + + TP_fast_assign( + __entry->rreq = rreq->debug_id; + __entry->from = from->debug_index; + __entry->to = to ? to->debug_index : -1; + __entry->amount = amount; + __entry->trace = trace; + ), + + TP_printk("R=%08x[%02x] -> [%02x] %s am=%x", + __entry->rreq, __entry->from, __entry->to, + __print_symbolic(__entry->trace, netfs_donate_traces), + __entry->amount) + ); + #undef EM #undef E_ #endif /* _TRACE_NETFS_H */
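
To illustrate the prev_donated/next_donated bookkeeping that netfs_consume_read_data() performs above, here is a small, self-contained userspace sketch (not kernel code; the struct and sizes are invented for the example) of the case where a 4KiB folio straddles two 2KiB subrequests: the subrequest that finishes first cannot reach the end of the folio, so it donates its bytes forwards, and the neighbour that ends up covering the folio end does the unlock.

#include <stdio.h>
#include <stddef.h>

struct ex_subreq {
	unsigned long long start;	/* file position the subrequest starts at */
	size_t len;			/* amount of data it covers */
	size_t transferred;		/* amount the filesystem has read so far */
	size_t consumed;		/* amount already attributed to folios */
	size_t prev_donated;		/* bytes donated by the previous subreq */
	size_t next_donated;		/* bytes donated by the next subreq */
};

/* Can this subrequest account for the whole of the folio [fpos, fpos + fsize)?
 * This mirrors the start/avail arithmetic in netfs_consume_read_data().
 */
static int ex_can_finish_folio(const struct ex_subreq *s,
			       unsigned long long fpos, size_t fsize)
{
	unsigned long long start = s->start;
	size_t avail = s->transferred;

	if (avail == s->len)
		avail += s->next_donated;
	if (s->consumed == 0) {
		start -= s->prev_donated;
		avail += s->prev_donated;
	} else {
		start += s->consumed;
		avail -= s->consumed;
	}
	return start + avail >= fpos + fsize;
}

int main(void)
{
	/* A 4KiB folio at file position 0 is split across two 2KiB subreqs. */
	struct ex_subreq a = { .start = 0,    .len = 2048, .transferred = 2048 };
	struct ex_subreq b = { .start = 2048, .len = 2048, .transferred = 2048 };

	/* A finishes first but stops short of the folio end, so it cannot
	 * unlock the folio itself...
	 */
	printf("A can finish the folio: %d\n", ex_can_finish_folio(&a, 0, 4096));

	/* ...instead it donates its 2048 bytes forwards into B->prev_donated.
	 * When B completes, the donation extends its span back to the folio
	 * start and B unlocks the folio.
	 */
	b.prev_donated = a.transferred;
	printf("B can finish the folio: %d\n", ex_can_finish_folio(&b, 0, 4096));
	return 0;
}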
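
The DIO accounting in netfs_rreq_assess_dio() only counts subrequests from the front of the list up to and including the first short or failed one, since anything after a short transfer is not contiguous file data. A userspace sketch of that summation (sizes invented for the example):

#include <stdio.h>
#include <stddef.h>

struct ex_result {
	int error;		/* subrequest error, 0 on success */
	size_t len;		/* bytes requested */
	size_t transferred;	/* bytes actually read */
};

int main(void)
{
	/* Three 64KiB subrequests; the middle one came up short, so the last
	 * one cannot be counted even though it succeeded.
	 */
	struct ex_result subs[] = {
		{ 0, 65536, 65536 },
		{ 0, 65536, 16384 },
		{ 0, 65536, 65536 },
	};
	size_t transferred = 0;
	size_t i;

	for (i = 0; i < sizeof(subs) / sizeof(subs[0]); i++) {
		if (subs[i].error || subs[i].transferred == 0)
			break;
		transferred += subs[i].transferred;
		if (subs[i].transferred < subs[i].len)
			break;
	}
	printf("DIO read reports %zu bytes\n", transferred); /* 81920 */
	return 0;
}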
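
The nfs and cifs conversions above both follow the same shape, so as a rough sketch of what the new ssize_t ->issue_read() contract asks of a filesystem: clamp and set up the iterator, issue the RPC, and return how much was submitted. The foofs_* names, FOOFS_MAX_RSIZE and foofs_submit_read_rpc() are invented for illustration; only netfs_prepare_read_iterator() and netfs_read_subreq_progress() come from this series, and the error-path reporting here mirrors the cifs conversion rather than being a hard requirement.

static ssize_t foofs_issue_read(struct netfs_io_subrequest *subreq)
{
	ssize_t len;
	int ret;

	/* Trim the subrequest to what one RPC can carry and set up
	 * subreq->io_iter to cover just that slice; this replaces the old
	 * ->clamp_length() hook removed above.
	 */
	len = netfs_prepare_read_iterator(subreq, FOOFS_MAX_RSIZE, 0);
	if (len < 0) {
		ret = len;
		goto failed;
	}

	/* Fire off the RPC; the completion handler updates
	 * subreq->transferred and calls netfs_read_subreq_progress().
	 */
	ret = foofs_submit_read_rpc(subreq, len);
	if (ret < 0)
		goto failed;
	return len;

failed:
	netfs_read_subreq_progress(subreq, ret, false);
	return ret;
}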
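
And the completion side, modelled on the cifs_readv_callback()/smb2_readv_callback() changes above: accumulate into subreq->transferred and then report the state of the operation (again, the foofs_* name is illustrative only).

static void foofs_read_rpc_done(struct netfs_io_subrequest *subreq,
				size_t got_bytes, int result)
{
	/* Tell netfs how much of the destination buffer is now valid... */
	subreq->transferred += got_bytes;

	/* ...and how the operation went: 0 for completion, -EINPROGRESS for a
	 * pure progress report with more data still to come, or a negative
	 * error.  netfs may unlock and mark folios on the back of this call.
	 */
	netfs_read_subreq_progress(subreq, result, false);
}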