From patchwork Sat Oct 5 20:01:14 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" <willy@infradead.org>
X-Patchwork-Id: 13823459
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v2 3/7] mm: Renovate page_address_in_vma()
Date: Sat, 5 Oct 2024 21:01:14 +0100
Message-ID: <20241005200121.3231142-4-willy@infradead.org>
X-Mailer: git-send-email 2.46.0
In-Reply-To: <20241005200121.3231142-1-willy@infradead.org>
References: <20241005200121.3231142-1-willy@infradead.org>
MIME-Version: 1.0

This function doesn't modify any of its arguments, so if we make a few
other functions take const pointers, we can make page_address_in_vma()
take const pointers too. All of its callers have the containing folio
already, so pass that in as an argument instead of recalculating it.
Also add kernel-doc.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h |  7 ++-----
 mm/internal.h        |  4 ++--
 mm/ksm.c             |  7 +++----
 mm/memory-failure.c  |  2 +-
 mm/mempolicy.c       |  2 +-
 mm/rmap.c            | 27 ++++++++++++++++++++-------
 mm/util.c            |  2 +-
 7 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index d5e93e44322e..78923015a2e8 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -728,11 +728,8 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
 }
 
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
-
-/*
- * Used by swapoff to help locate where page is expected in vma.
- */
-unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
+unsigned long page_address_in_vma(const struct folio *folio,
+		const struct page *, const struct vm_area_struct *);
 
 /*
  * Cleans the PTEs of shared mappings.
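[Editor's illustration, not part of the patch: a brief sketch of the new
calling convention described in the commit message. The helper name
example_lookup_address() is invented; a caller that starts from a page
derives the folio once and passes it in, exactly as the ksm.c hunks
below do.]

/*
 * Illustration only (not part of this patch): the renovated
 * page_address_in_vma() takes the folio from the caller instead of
 * recalculating it with page_folio() internally.
 */
static unsigned long example_lookup_address(struct page *page,
		struct vm_area_struct *vma)
{
	const struct folio *folio = page_folio(page);

	/* New signature: (folio, page, vma), all pointers const-qualified. */
	return page_address_in_vma(folio, page, vma);
}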
diff --git a/mm/internal.h b/mm/internal.h
index 93083bbeeefa..fffa9df41495 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -796,7 +796,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
 }
 
 /* mm/util.c */
-struct anon_vma *folio_anon_vma(struct folio *folio);
+struct anon_vma *folio_anon_vma(const struct folio *folio);
 
 #ifdef CONFIG_MMU
 void unmap_mapping_folio(struct folio *folio);
@@ -914,7 +914,7 @@ extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
  * If any page in this range is mapped by this VMA, return the first address
  * where any of these pages appear. Otherwise, return -EFAULT.
  */
-static inline unsigned long vma_address(struct vm_area_struct *vma,
+static inline unsigned long vma_address(const struct vm_area_struct *vma,
 		pgoff_t pgoff, unsigned long nr_pages)
 {
 	unsigned long address;
diff --git a/mm/ksm.c b/mm/ksm.c
index a2e2a521df0a..2bbb321f92ac 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1257,7 +1257,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
 	if (WARN_ON_ONCE(folio_test_large(folio)))
 		return err;
 
-	pvmw.address = page_address_in_vma(&folio->page, vma);
+	pvmw.address = page_address_in_vma(folio, folio_page(folio, 0), vma);
 	if (pvmw.address == -EFAULT)
 		goto out;
 
@@ -1341,7 +1341,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 {
 	struct folio *kfolio = page_folio(kpage);
 	struct mm_struct *mm = vma->vm_mm;
-	struct folio *folio;
+	struct folio *folio = page_folio(page);
 	pmd_t *pmd;
 	pmd_t pmde;
 	pte_t *ptep;
@@ -1351,7 +1351,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
 
-	addr = page_address_in_vma(page, vma);
+	addr = page_address_in_vma(folio, page, vma);
 	if (addr == -EFAULT)
 		goto out;
 
@@ -1417,7 +1417,6 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	ptep_clear_flush(vma, addr, ptep);
 	set_pte_at(mm, addr, ptep, newpte);
 
-	folio = page_folio(page);
 	folio_remove_rmap_pte(folio, page, vma);
 	if (!folio_mapped(folio))
 		folio_free_swap(folio);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 58a3d80961a4..ea9d883c01c1 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -671,7 +671,7 @@ static void collect_procs_file(struct folio *folio, struct page *page,
 			 */
 			if (vma->vm_mm != t->mm)
 				continue;
-			addr = page_address_in_vma(page, vma);
+			addr = page_address_in_vma(folio, page, vma);
 			add_to_kill_anon_file(t, page, vma, to_kill, addr);
 		}
 	}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b646fab3e45e..b92113d27f63 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1367,7 +1367,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 		if (!list_entry_is_head(folio, &pagelist, lru)) {
 			vma_iter_init(&vmi, mm, start);
 			for_each_vma_range(vmi, vma, end) {
-				addr = page_address_in_vma(
+				addr = page_address_in_vma(folio,
 					folio_page(folio, 0), vma);
 				if (addr != -EFAULT)
 					break;
diff --git a/mm/rmap.c b/mm/rmap.c
index 90df71c640bf..a7b4f9ba9a14 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -768,14 +768,27 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 }
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
 
-/*
- * At what user virtual address is page expected in vma?
- * Caller should check the page is actually part of the vma.
+/**
+ * page_address_in_vma - The virtual address of a page in this VMA.
+ * @folio: The folio containing the page.
+ * @page: The page within the folio.
+ * @vma: The VMA we need to know the address in.
+ *
+ * Calculates the user virtual address of this page in the specified VMA.
+ * It is the caller's responsibility to check the page is actually
+ * within the VMA. There may not currently be a PTE pointing at this
+ * page, but if a page fault occurs at this address, this is the page
+ * which will be accessed.
+ *
+ * Context: Caller should hold a reference to the folio. Caller should
+ * hold a lock (eg the i_mmap_lock or the mmap_lock) which keeps the
+ * VMA from being altered.
+ *
+ * Return: The virtual address corresponding to this page in the VMA.
  */
-unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_address_in_vma(const struct folio *folio,
+		const struct page *page, const struct vm_area_struct *vma)
 {
-	struct folio *folio = page_folio(page);
-
 	if (folio_test_anon(folio)) {
 		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
 		/*
@@ -791,7 +804,7 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 		return -EFAULT;
 	}
 
-	/* The !page__anon_vma above handles KSM folios */
+	/* KSM folios don't reach here because of the !page__anon_vma check */
 	return vma_address(vma, page_pgoff(folio, page), 1);
 }
 
diff --git a/mm/util.c b/mm/util.c
index 4f1275023eb7..60017d2a9e48 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -820,7 +820,7 @@ void *vcalloc_noprof(size_t n, size_t size)
 }
 EXPORT_SYMBOL(vcalloc_noprof);
 
-struct anon_vma *folio_anon_vma(struct folio *folio)
+struct anon_vma *folio_anon_vma(const struct folio *folio)
 {
 	unsigned long mapping = (unsigned long)folio->mapping;
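[Editor's illustration, not part of the patch: a hypothetical caller
following the Context and Return rules in the new kernel-doc.
example_probe_mapping() is an invented name, and the -EFAULT handling
mirrors the existing callers updated above.]

/*
 * Hypothetical helper, illustration only: find where @page of @folio
 * would appear in @vma.  Per the kernel-doc above, the caller holds a
 * folio reference and a lock (mmap_lock or i_mmap_lock) that keeps the
 * VMA from being altered.
 */
static bool example_probe_mapping(const struct folio *folio,
		const struct page *page, const struct vm_area_struct *vma,
		unsigned long *addrp)
{
	unsigned long addr = page_address_in_vma(folio, page, vma);

	/* Existing callers treat -EFAULT as "not mapped in this VMA". */
	if (addr == -EFAULT)
		return false;

	*addrp = addr;
	return true;
}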