From patchwork Wed Sep 25 22:47:15 2024

From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Matthew Wilcox, Omar Sandoval, Chris Mason,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH v2 1/2] mm: optimize truncation of shadow entries
Date: Wed, 25 Sep 2024 15:47:15 -0700
Message-ID: <20240925224716.2904498-2-shakeel.butt@linux.dev>
In-Reply-To: <20240925224716.2904498-1-shakeel.butt@linux.dev>
References: <20240925224716.2904498-1-shakeel.butt@linux.dev>
The kernel truncates the page cache in batches of PAGEVEC_SIZE. For each
batch, it traverses the page cache tree and collects the entries (folio
and shadow entries) in the struct folio_batch. For the shadow entries
present in the folio_batch, it has to traverse the page cache tree again
for each individual entry to remove it. This patch optimizes this by
removing them in a single tree traversal.

On large machines in our production which run workloads manipulating
large amounts of data, we have observed that a significant amount of CPU
time is spent on the truncation of very large files (hundreds of GiBs in
size).
More specifically, most of the time was spent on shadow entry cleanup, so
optimizing the shadow entry cleanup, even a little bit, has a good
impact.

To evaluate the changes, we created a 200GiB file on a fuse fs and in a
memcg. We created the shadow entries by triggering reclaim through
memory.reclaim in that specific memcg and measured the simple truncation
operation.

 # time truncate -s 0 file

              time (sec)
 Without      5.164 +- 0.059
 With-patch   4.21  +- 0.066 (18.47% decrease)

Acked-by: Johannes Weiner
Signed-off-by: Shakeel Butt
---
Changes since v1:
- Added a comment on the assumption of indices array (Johannes)

 mm/truncate.c | 53 +++++++++++++++++++++++++--------------------------
 1 file changed, 26 insertions(+), 27 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 0668cd340a46..1d51c023d9c5 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -68,54 +68,53 @@ static void clear_shadow_entries(struct address_space *mapping,
  * Unconditionally remove exceptional entries. Usually called from truncate
  * path. Note that the folio_batch may be altered by this function by removing
  * exceptional entries similar to what folio_batch_remove_exceptionals() does.
+ * Please note that indices[] has entries in ascending order as guaranteed by
+ * either find_get_entries() or find_lock_entries().
  */
 static void truncate_folio_batch_exceptionals(struct address_space *mapping,
 		struct folio_batch *fbatch, pgoff_t *indices)
 {
+	XA_STATE(xas, &mapping->i_pages, indices[0]);
+	int nr = folio_batch_count(fbatch);
+	struct folio *folio;
 	int i, j;
-	bool dax;

 	/* Handled by shmem itself */
 	if (shmem_mapping(mapping))
 		return;

-	for (j = 0; j < folio_batch_count(fbatch); j++)
+	for (j = 0; j < nr; j++)
 		if (xa_is_value(fbatch->folios[j]))
 			break;

-	if (j == folio_batch_count(fbatch))
+	if (j == nr)
 		return;

-	dax = dax_mapping(mapping);
-	if (!dax) {
-		spin_lock(&mapping->host->i_lock);
-		xa_lock_irq(&mapping->i_pages);
+	if (dax_mapping(mapping)) {
+		for (i = j; i < nr; i++) {
+			if (xa_is_value(fbatch->folios[i]))
+				dax_delete_mapping_entry(mapping, indices[i]);
+		}
+		goto out;
 	}

-	for (i = j; i < folio_batch_count(fbatch); i++) {
-		struct folio *folio = fbatch->folios[i];
-		pgoff_t index = indices[i];
-
-		if (!xa_is_value(folio)) {
-			fbatch->folios[j++] = folio;
-			continue;
-		}
+	xas_set(&xas, indices[j]);
+	xas_set_update(&xas, workingset_update_node);

-		if (unlikely(dax)) {
-			dax_delete_mapping_entry(mapping, index);
-			continue;
-		}
+	spin_lock(&mapping->host->i_lock);
+	xas_lock_irq(&xas);

-		__clear_shadow_entry(mapping, index, folio);
+	xas_for_each(&xas, folio, indices[nr-1]) {
+		if (xa_is_value(folio))
+			xas_store(&xas, NULL);
 	}

-	if (!dax) {
-		xa_unlock_irq(&mapping->i_pages);
-		if (mapping_shrinkable(mapping))
-			inode_add_lru(mapping->host);
-		spin_unlock(&mapping->host->i_lock);
-	}
-	fbatch->nr = j;
+	xas_unlock_irq(&xas);
+	if (mapping_shrinkable(mapping))
+		inode_add_lru(mapping->host);
+	spin_unlock(&mapping->host->i_lock);
+out:
+	folio_batch_remove_exceptionals(fbatch);
 }

 /**

From patchwork Wed Sep 25 22:47:16 2024
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Matthew Wilcox, Omar Sandoval, Chris Mason,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH v2 2/2] mm: optimize invalidation of shadow entries
Date: Wed, 25 Sep 2024 15:47:16 -0700
Message-ID: <20240925224716.2904498-3-shakeel.butt@linux.dev>
In-Reply-To: <20240925224716.2904498-1-shakeel.butt@linux.dev>
References: <20240925224716.2904498-1-shakeel.butt@linux.dev>
The kernel invalidates the page cache in batches of PAGEVEC_SIZE. For
each batch, it traverses the page cache tree and collects the entries
(folio and shadow entries) in the struct folio_batch. For the shadow
entries present in the folio_batch, it has to traverse the page cache
tree again for each individual entry to remove it. This patch optimizes
this by removing them in a single tree traversal.

To evaluate the changes, we created a 200GiB file on a fuse fs and in a
memcg. We created the shadow entries by triggering reclaim through
memory.reclaim in that specific memcg and measured the simple
fadvise(DONTNEED) operation.
 # time xfs_io -c 'fadvise -d 0 ${file_size}' file

              time (sec)
 Without      5.12 +- 0.061
 With-patch   4.19 +- 0.086 (18.16% decrease)

Signed-off-by: Shakeel Butt
---
Changes since v1:
- N/A

 mm/truncate.c | 46 ++++++++++++++++++----------------------------
 1 file changed, 18 insertions(+), 28 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 1d51c023d9c5..520c8cf8f58f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -23,42 +23,28 @@
 #include
 #include "internal.h"

-/*
- * Regular page slots are stabilized by the page lock even without the tree
- * itself locked. These unlocked entries need verification under the tree
- * lock.
- */
-static inline void __clear_shadow_entry(struct address_space *mapping,
-		pgoff_t index, void *entry)
-{
-	XA_STATE(xas, &mapping->i_pages, index);
-
-	xas_set_update(&xas, workingset_update_node);
-	if (xas_load(&xas) != entry)
-		return;
-	xas_store(&xas, NULL);
-}
-
 static void clear_shadow_entries(struct address_space *mapping,
-		struct folio_batch *fbatch, pgoff_t *indices)
+		unsigned long start, unsigned long max)
 {
-	int i;
+	XA_STATE(xas, &mapping->i_pages, start);
+	struct folio *folio;

 	/* Handled by shmem itself, or for DAX we do nothing. */
 	if (shmem_mapping(mapping) || dax_mapping(mapping))
 		return;

-	spin_lock(&mapping->host->i_lock);
-	xa_lock_irq(&mapping->i_pages);
+	xas_set_update(&xas, workingset_update_node);

-	for (i = 0; i < folio_batch_count(fbatch); i++) {
-		struct folio *folio = fbatch->folios[i];
+	spin_lock(&mapping->host->i_lock);
+	xas_lock_irq(&xas);

+	/* Clear all shadow entries from start to max */
+	xas_for_each(&xas, folio, max) {
 		if (xa_is_value(folio))
-			__clear_shadow_entry(mapping, indices[i], folio);
+			xas_store(&xas, NULL);
 	}

-	xa_unlock_irq(&mapping->i_pages);
+	xas_unlock_irq(&xas);
 	if (mapping_shrinkable(mapping))
 		inode_add_lru(mapping->host);
 	spin_unlock(&mapping->host->i_lock);
@@ -481,7 +467,9 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,

 	folio_batch_init(&fbatch);
 	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
-		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+		int nr = folio_batch_count(&fbatch);
+
+		for (i = 0; i < nr; i++) {
 			struct folio *folio = fbatch.folios[i];

 			/* We rely upon deletion not changing folio->index */
@@ -508,7 +496,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
 		}

 		if (xa_has_values)
-			clear_shadow_entries(mapping, &fbatch, indices);
+			clear_shadow_entries(mapping, indices[0], indices[nr-1]);

 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
@@ -612,7 +600,9 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
-		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+		int nr = folio_batch_count(&fbatch);
+
+		for (i = 0; i < nr; i++) {
 			struct folio *folio = fbatch.folios[i];

 			/* We rely upon deletion not changing folio->index */
@@ -658,7 +648,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		}

 		if (xa_has_values)
-			clear_shadow_entries(mapping, &fbatch, indices);
+			clear_shadow_entries(mapping, indices[0], indices[nr-1]);

 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);