From patchwork Tue Oct 11 21:56:32 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13004460
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 2/4] filemap: find_get_entries() now updates start offset
Date: Tue, 11 Oct 2022 14:56:32 -0700
Message-Id: <20221011215634.478330-3-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20221011215634.478330-1-vishal.moola@gmail.com>
References: <20221011215634.478330-1-vishal.moola@gmail.com>
Initially, find_get_entries() was passed the start offset as a value, which
left the calculation of the next offset to the callers. This led to
complexity in the callers trying to keep track of the index.

Now find_get_entries() takes in a pointer to the start offset and updates
the value to point directly after the last entry found. If no entry is
found, the offset is not changed. This gets rid of multiple hacky
calculations that kept track of the start offset.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/filemap.c  | 15 +++++++++++++--
 mm/internal.h |  2 +-
 mm/shmem.c    | 11 ++++-------
 mm/truncate.c | 23 +++++++++--------------
 4 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index e95500b07ee9..1b8022c18dc7 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2047,11 +2047,13 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * shmem/tmpfs, are included in the returned array.
  *
  * Return: The number of entries which were found.
+ * Also updates @start to be positioned after the last found entry
  */
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
-	XA_STATE(xas, &mapping->i_pages, start);
+	XA_STATE(xas, &mapping->i_pages, *start);
+	unsigned long nr;
 	struct folio *folio;
 
 	rcu_read_lock();
@@ -2061,7 +2063,16 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 	}
 	rcu_read_unlock();
 
+	nr = folio_batch_count(fbatch);
+
+	if (nr) {
+		folio = fbatch->folios[nr - 1];
+		nr = folio_nr_pages(folio);
+		if (folio_test_hugetlb(folio))
+			nr = 1;
+		*start = folio->index + nr;
+	}
 	return folio_batch_count(fbatch);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index c504ac7267e0..68afdbe7106e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -108,7 +108,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index ab4f6dfcf6bb..8240e066edfc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -973,7 +973,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, index, end - 1, &fbatch,
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
@@ -985,13 +985,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
-			index = indices[i];
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				if (shmem_free_swap(mapping, index, folio)) {
+				if (shmem_free_swap(mapping, folio->index, folio)) {
 					/* Swap was replaced by page: retry */
-					index--;
+					index = folio->index;
 					break;
 				}
 				nr_swaps_freed++;
@@ -1004,19 +1003,17 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				if (folio_mapping(folio) != mapping) {
 					/* Page was replaced by swap: retry */
 					folio_unlock(folio);
-					index--;
+					index = folio->index;
 					break;
 				}
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
 				truncate_inode_folio(mapping, folio);
 			}
-			index = folio->index + folio_nr_pages(folio) - 1;
 			folio_unlock(folio);
 		}
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
-		index++;
 	}
 
 	spin_lock_irq(&info->lock);
 
diff --git a/mm/truncate.c b/mm/truncate.c
index b0bd63b2359f..846ddbdb27a4 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -400,7 +400,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	while (index < end) {
 		cond_resched();
-		if (!find_get_entries(mapping, index, end - 1, &fbatch,
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
@@ -414,21 +414,18 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = indices[i];
-
 			if (xa_is_value(folio))
 				continue;
 
 			folio_lock(folio);
-			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, folio->index),
+					folio);
 			folio_wait_writeback(folio);
 			truncate_inode_folio(mapping, folio);
 			folio_unlock(folio);
-			index = folio_index(folio) + folio_nr_pages(folio) - 1;
 		}
 		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
 		folio_batch_release(&fbatch);
-		index++;
 	}
 }
 EXPORT_SYMBOL(truncate_inode_pages_range);
 
@@ -637,16 +634,14 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, index, end, &fbatch, indices)) {
+	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
-			index = indices[i];
-
 			if (xa_is_value(folio)) {
 				if (!invalidate_exceptional_entry2(mapping,
-						index, folio))
+						folio->index, folio))
 					ret = -EBUSY;
 				continue;
 			}
@@ -656,13 +651,14 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				 * If folio is mapped, before taking its lock,
 				 * zap the rest of the file in one hit.
 				 */
-				unmap_mapping_pages(mapping, index,
-						(1 + end - index), false);
+				unmap_mapping_pages(mapping, folio->index,
+						(1 + end - folio->index), false);
 				did_range_unmap = 1;
 			}
 
 			folio_lock(folio);
-			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, folio->index),
+					folio);
 			if (folio->mapping != mapping) {
 				folio_unlock(folio);
 				continue;
@@ -685,7 +681,6 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 	/*
 	 * For DAX we invalidate page tables after invalidating page cache. We