From patchwork Fri Aug 23 20:08:12 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13775934
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French
Cc: David Howells, Pankaj Raghav, Paulo Alcantara, Jeff Layton,
    Matthew Wilcox, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Marc Dionne
Subject: [PATCH 4/9] netfs: Fix trimming of streaming-write folios in netfs_inval_folio()
Date: Fri, 23 Aug 2024 21:08:12 +0100
Message-ID: <20240823200819.532106-5-dhowells@redhat.com>
In-Reply-To: <20240823200819.532106-1-dhowells@redhat.com>
References: <20240823200819.532106-1-dhowells@redhat.com>
MIME-Version: 1.0
When netfslib writes to a folio that it doesn't have data for, but that
data exists on the server, it will make a 'streaming write' whereby it
stores data in a folio
that is marked dirty, but not uptodate.  When it does this, it attaches
a record to folio->private to track the dirty region.

When truncate() or fallocate() wants to invalidate part of such a folio,
it will call into ->invalidate_folio(), specifying the part of the folio
that is to be invalidated.

netfs_invalidate_folio(), on behalf of the filesystem, must then
determine how to trim the streaming write record.  In a couple of cases,
however, it does this incorrectly (the reduce-length and move-start
cases are switched over and don't, in any case, calculate the value
correctly).

Fix this by making the logic tree more obvious and fixing the cases.

Fixes: 9ebff83e6481 ("netfs: Prep to use folio->private for write grouping and streaming write")
Signed-off-by: David Howells
cc: Matthew Wilcox (Oracle)
cc: Pankaj Raghav
cc: Jeff Layton
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/misc.c | 50 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 35 insertions(+), 15 deletions(-)

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 69324761fcf7..c1f321cf5999 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -97,10 +97,20 @@ EXPORT_SYMBOL(netfs_clear_inode_writeback);
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 {
 	struct netfs_folio *finfo;
+	struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
 	size_t flen = folio_size(folio);
 
 	_enter("{%lx},%zx,%zx", folio->index, offset, length);
 
+	if (offset == 0 && length == flen) {
+		unsigned long long i_size = i_size_read(&ctx->inode);
+		unsigned long long fpos = folio_pos(folio), end;
+
+		end = umin(fpos + flen, i_size);
+		if (fpos < i_size && end > ctx->zero_point)
+			ctx->zero_point = end;
+	}
+
 	folio_wait_private_2(folio); /* [DEPRECATED] */
 
 	if (!folio_test_private(folio))
@@ -115,18 +125,34 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 		/* We have a partially uptodate page from a streaming write. */
 		unsigned int fstart = finfo->dirty_offset;
 		unsigned int fend = fstart + finfo->dirty_len;
-		unsigned int end = offset + length;
+		unsigned int iend = offset + length;
 
 		if (offset >= fend)
 			return;
-		if (end <= fstart)
+		if (iend <= fstart)
+			return;
+
+		/* The invalidation region overlaps the data.  If the region
+		 * covers the start of the data, we either move along the start
+		 * or just erase the data entirely.
+		 */
+		if (offset <= fstart) {
+			if (iend >= fend)
+				goto erase_completely;
+			/* Move the start of the data. */
+			finfo->dirty_len = fend - iend;
+			finfo->dirty_offset = offset;
+			return;
+		}
+
+		/* Reduce the length of the data if the invalidation region
+		 * covers the tail part.
+		 */
+		if (iend >= fend) {
+			finfo->dirty_len = offset - fstart;
 			return;
-		if (offset <= fstart && end >= fend)
-			goto erase_completely;
-		if (offset <= fstart && end > fstart)
-			goto reduce_len;
-		if (offset > fstart && end >= fend)
-			goto move_start;
+		}
+
 		/* A partial write was split.  The caller has already zeroed
 		 * it, so just absorb the hole.
 		 */
@@ -139,12 +165,6 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 	folio_clear_uptodate(folio);
 	kfree(finfo);
 	return;
-reduce_len:
-	finfo->dirty_len = offset + length - finfo->dirty_offset;
-	return;
-move_start:
-	finfo->dirty_len -= offset - finfo->dirty_offset;
-	finfo->dirty_offset = offset;
 }
 EXPORT_SYMBOL(netfs_invalidate_folio);
 
@@ -164,7 +184,7 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
 	if (folio_test_dirty(folio))
 		return false;
 
-	end = folio_pos(folio) + folio_size(folio);
+	end = umin(folio_pos(folio) + folio_size(folio), i_size_read(&ctx->inode));
 	if (end > ctx->zero_point)
 		ctx->zero_point = end;