From patchwork Tue Nov 28 17:50:02 2023
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13471472
From: David Howells <dhowells@redhat.com>
To: Steve French
Cc: dhowells@redhat.com, Shyam Prasad N, Rohith Surabattula, Jeff Layton,
    Matthew Wilcox, linux-cifs@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] cifs: Fix issues with copy_file_range and
 FALLOC_FL_INSERT/ZERO_RANGE
Date: Tue, 28 Nov 2023 17:50:02 +0000
Message-ID: <2056123.1701193802@warthog.procyon.org.uk>

Fix a number of issues in the cifs filesystem implementations of
copy_file_range(), FALLOC_FL_INSERT_RANGE and FALLOC_FL_ZERO_RANGE:

 (1) In cifs_file_copychunk_range(), set i_size after doing the
     copychunk_range operation as this value may be used by various things
     internally.  stat() hides the issue because setting ->time to 0 causes
     cifs_getattr() to revalidate the attributes.

 (2) In smb3_zero_range(), set i_size after extending the file on the
     server.

 (3) In smb3_insert_range(), set i_size after extending the file on the
     server and before we do the copy to open the gap (as we don't clean up
     the EOF marker if the copy fails).

 (4) Add a new MM function, discard_inode_pages_range(), which is
     equivalent to truncate_inode_pages_range(), but rounds out the range
     to include the entire folio at each end.

     [!] This might be better done by adding a flag to
         truncate_inode_pages_range().

 (5) In cifs_file_copychunk_range(), fix the invalidation of the
     destination range.  We shouldn't just invalidate the whole file as
     dirty data in the file may get lost, and we can't just call
     truncate_inode_pages_range() as that will simply partially clear a
     partial folio at each end whilst invalidating and discarding all the
     folios in the middle.  We need to force all the folios covering the
     range to be reloaded.

     Further, we shouldn't simply round out the range to PAGE_SIZE at each
     end as cifs should move to support multipage folios.

     So change the invalidation to flush the folio at each end of the range
     (which we can do simply by asking to flush a byte in it), then use
     discard_inode_pages_range() to fully invalidate the entire range of
     folios (a sketch of this flush-then-discard pattern follows the patch
     below).

Fixes: 620d8745b35d ("Introduce cifs_copy_file_range()")
Fixes: 72c419d9b073 ("cifs: fix smb3_zero_range so it can expand the file-size when required")
Fixes: 7fe6fe95b936 ("cifs: add FALLOC_FL_INSERT_RANGE support")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French
cc: Paulo Alcantara
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: Matthew Wilcox
cc: linux-cifs@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsfs.c  | 24 +++++++++++++++++++++---
 fs/smb/client/smb2ops.c | 13 +++++++++++--
 include/linux/mm.h      |  2 ++
 mm/truncate.c           | 37 +++++++++++++++++++++++++++++++++----
 4 files changed, 67 insertions(+), 9 deletions(-)

diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index ea3a7a668b45..9b6e3cfd5a59 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -1267,6 +1267,7 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
 	struct cifsFileInfo *smb_file_target;
 	struct cifs_tcon *src_tcon;
 	struct cifs_tcon *target_tcon;
+	unsigned long long destend;
 	ssize_t rc;
 
 	cifs_dbg(FYI, "copychunk range\n");
@@ -1306,13 +1307,30 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
 	if (rc)
 		goto unlock;
 
-	/* should we flush first and last page first */
-	truncate_inode_pages(&target_inode->i_data, 0);
+	destend = destoff + len - 1;
+
+	/* Flush the folios at either end of the destination range to prevent
+	 * accidental loss of dirty data outside of the range.
+	 */
+	rc = filemap_write_and_wait_range(target_inode->i_mapping, destoff, destoff);
+	if (rc)
+		goto unlock;
+	if (destend > destoff) {
+		rc = filemap_write_and_wait_range(target_inode->i_mapping, destend, destend);
+		if (rc)
+			goto unlock;
+	}
+
+	/* Discard all the folios that overlap the destination region. */
+	discard_inode_pages_range(&target_inode->i_data, destoff, destend);
 
 	rc = file_modified(dst_file);
-	if (!rc)
+	if (!rc) {
 		rc = target_tcon->ses->server->ops->copychunk_range(xid,
 					smb_file_src, smb_file_target, off, len,
 					destoff);
+		if (rc > 0 && destoff + rc > i_size_read(target_inode))
+			truncate_setsize(target_inode, destoff + rc);
+	}
 
 	file_accessed(src_file);
 
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index a959ed2c9b22..65a00c8b8494 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -3307,6 +3307,7 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
 	struct inode *inode = file_inode(file);
 	struct cifsInodeInfo *cifsi = CIFS_I(inode);
 	struct cifsFileInfo *cfile = file->private_data;
+	unsigned long long new_size;
 	long rc;
 	unsigned int xid;
 	__le64 eof;
@@ -3337,10 +3338,15 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
 	/*
 	 * do we also need to change the size of the file?
 	 */
-	if (keep_size == false && i_size_read(inode) < offset + len) {
-		eof = cpu_to_le64(offset + len);
+	new_size = offset + len;
+	if (keep_size == false && (unsigned long long)i_size_read(inode) < new_size) {
+		eof = cpu_to_le64(new_size);
 		rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
 				  cfile->fid.volatile_fid, cfile->pid, &eof);
+		if (rc >= 0) {
+			truncate_setsize(inode, new_size);
+			fscache_resize_cookie(cifs_inode_cookie(inode), new_size);
+		}
 	}
 
  zero_range_exit:
@@ -3735,6 +3741,9 @@ static long smb3_insert_range(struct file *file, struct cifs_tcon *tcon,
 	if (rc < 0)
 		goto out_2;
 
+	truncate_setsize(inode, old_eof + len);
+	fscache_resize_cookie(cifs_inode_cookie(inode), i_size_read(inode));
+
 	rc = smb2_copychunk_range(xid, cfile, cfile, off, count, off + len);
 	if (rc < 0)
 		goto out_2;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 64cd1ee4aacc..e930c930e3f5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3416,6 +3416,8 @@ extern unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info);
 extern void truncate_inode_pages(struct address_space *, loff_t);
 extern void truncate_inode_pages_range(struct address_space *,
 				       loff_t lstart, loff_t lend);
+extern void discard_inode_pages_range(struct address_space *mapping,
+				      loff_t lstart, loff_t lend);
 extern void truncate_inode_pages_final(struct address_space *);
 
 /* generic vm_area_ops exported for stackable file systems */
diff --git a/mm/truncate.c b/mm/truncate.c
index 52e3a703e7b2..b91d67de449c 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -295,10 +295,11 @@ long mapping_evict_folio(struct address_space *mapping, struct folio *folio)
 }
 
 /**
- * truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
+ * __truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
  * @mapping: mapping to truncate
  * @lstart: offset from which to truncate
  * @lend: offset to which to truncate (inclusive)
+ * @round_out: Discard all overlapping folios
  *
  * Truncate the page cache, removing the pages that are between
  * specified offsets (and zeroing out partial pages
@@ -318,8 +319,9 @@ long mapping_evict_folio(struct address_space *mapping, struct folio *folio)
  * truncate_inode_pages_range is able to handle cases where lend + 1 is not
  * page aligned properly.
  */
-void truncate_inode_pages_range(struct address_space *mapping,
-				loff_t lstart, loff_t lend)
+static void __truncate_inode_pages_range(struct address_space *mapping,
+					 loff_t lstart, loff_t lend,
+					 bool round_out)
 {
 	pgoff_t		start;		/* inclusive */
 	pgoff_t		end;		/* exclusive */
@@ -367,7 +369,15 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
 	folio = __filemap_get_folio(mapping, lstart >> PAGE_SHIFT, FGP_LOCK, 0);
 	if (!IS_ERR(folio)) {
-		same_folio = lend < folio_pos(folio) + folio_size(folio);
+		loff_t fend = folio_pos(folio) + folio_size(folio) - 1;
+
+		if (unlikely(round_out)) {
+			if (folio_pos(folio) < lstart)
+				lstart = folio_pos(folio);
+			if (lend < fend)
+				lend = fend;
+		}
+		same_folio = lend <= fend;
 		if (!truncate_inode_partial_folio(folio, lstart, lend)) {
 			start = folio_next_index(folio);
 			if (same_folio)
@@ -382,6 +392,12 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		folio = __filemap_get_folio(mapping, lend >> PAGE_SHIFT,
 						FGP_LOCK, 0);
 		if (!IS_ERR(folio)) {
+			if (unlikely(round_out)) {
+				loff_t fend = folio_pos(folio) + folio_size(folio) - 1;
+
+				if (lend < fend)
+					lend = fend;
+			}
 			if (!truncate_inode_partial_folio(folio, lstart, lend))
 				end = folio->index;
 			folio_unlock(folio);
@@ -420,8 +436,21 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		folio_batch_release(&fbatch);
 	}
 }
+
+void truncate_inode_pages_range(struct address_space *mapping,
+				loff_t lstart, loff_t lend)
+{
+	__truncate_inode_pages_range(mapping, lstart, lend, false);
+}
 EXPORT_SYMBOL(truncate_inode_pages_range);
 
+void discard_inode_pages_range(struct address_space *mapping,
+			       loff_t lstart, loff_t lend)
+{
+	__truncate_inode_pages_range(mapping, lstart, lend, true);
+}
+EXPORT_SYMBOL(discard_inode_pages_range);
+
 /**
  * truncate_inode_pages - truncate *all* the pages from an offset
  * @mapping: mapping to truncate
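
As referenced in point (5) above, here is a minimal sketch of the
flush-then-discard pattern that cifs_file_copychunk_range() now follows.
invalidate_dest_range() is a hypothetical helper named purely for
illustration; it is not part of this patch, and it assumes the
discard_inode_pages_range() helper that the patch adds:

/* Illustrative only -- invalidate_dest_range() is not part of this patch. */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static int invalidate_dest_range(struct inode *inode,
				 loff_t destoff, loff_t destend)
{
	int rc;

	/* Flush the folio at each end of the range by asking to write back
	 * a single byte in it, so that dirty data outside of
	 * [destoff, destend] isn't lost when the folios are dropped.
	 */
	rc = filemap_write_and_wait_range(inode->i_mapping, destoff, destoff);
	if (rc)
		return rc;
	if (destend > destoff) {
		rc = filemap_write_and_wait_range(inode->i_mapping,
						  destend, destend);
		if (rc)
			return rc;
	}

	/* Unlike truncate_inode_pages_range(), this rounds the range out
	 * to whole folios, so everything overlapping [destoff, destend]
	 * has to be reloaded from the server on the next access.
	 */
	discard_inode_pages_range(inode->i_mapping, destoff, destend);
	return 0;
}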