From patchwork Thu Feb 3 17:19:03 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Muchun Song
Subject: [PATCH 1/2] mm: Add pvmw_set_page()
Date: Thu, 3 Feb 2022 17:19:03 +0000
Message-Id: <20220203171904.609984-1-willy@infradead.org>

Instead of setting the page directly in struct page_vma_mapped_walk,
use this helper to allow us to transition to a PFN approach in the
next patch.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h    |  6 ++++++
 kernel/events/uprobes.c |  2 +-
 mm/damon/paddr.c        |  4 ++--
 mm/ksm.c                |  2 +-
 mm/migrate.c            |  2 +-
 mm/page_idle.c          |  2 +-
 mm/rmap.c               | 12 ++++++------
 7 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e704b1a4c06c..003bb5775bb1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -213,6 +213,12 @@ struct page_vma_mapped_walk {
 	unsigned int flags;
 };
 
+static inline void pvmw_set_page(struct page_vma_mapped_walk *pvmw,
+				 struct page *page)
+{
+	pvmw->page = page;
+}
+
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 6357c3580d07..5f74671b0066 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -156,13 +156,13 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = compound_head(old_page),
 		.vma = vma,
 		.address = addr,
 	};
 	int err;
 	struct mmu_notifier_range range;
 
+	pvmw_set_page(&pvmw, compound_head(old_page));
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr,
 				addr + PAGE_SIZE);
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 5e8244f65a1a..4e27d64abbb7 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -20,11 +20,11 @@ static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
 		unsigned long addr, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte)
@@ -94,11 +94,11 @@ static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma,
 {
 	struct damon_pa_access_chk_result *result = arg;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	result->accessed = false;
 	result->page_sz = PAGE_SIZE;
 	while (page_vma_mapped_walk(&pvmw)) {
diff --git a/mm/ksm.c b/mm/ksm.c
index c20bd4d9a0d9..1639160c9e9a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1035,13 +1035,13 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 	};
 	int swapped;
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
 
+	pvmw_set_page(&pvmw, page);
 	pvmw.address = page_address_in_vma(page, vma);
 	if (pvmw.address == -EFAULT)
 		goto out;
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..07464fd45925 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -177,7 +177,6 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 				 unsigned long addr, void *old)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = old,
 		.vma = vma,
 		.address = addr,
 		.flags = PVMW_SYNC | PVMW_MIGRATION,
@@ -187,6 +186,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 	swp_entry_t entry;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
+	pvmw_set_page(&pvmw, old);
 	while (page_vma_mapped_walk(&pvmw)) {
 		if (PageKsm(page))
 			new = page;
diff --git a/mm/page_idle.c b/mm/page_idle.c
index edead6a8a5f9..20d35d720872 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -49,12 +49,12 @@ static bool page_idle_clear_pte_refs_one(struct page *page,
 		unsigned long addr, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 	bool referenced = false;
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte) {
diff --git a/mm/rmap.c b/mm/rmap.c
index a531b64d53fa..fa8478372e94 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -803,12 +803,12 @@ static bool page_referenced_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct page_referenced_arg *pra = arg;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
 	int referenced = 0;
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;
@@ -932,7 +932,6 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 		.flags = PVMW_SYNC,
@@ -940,6 +939,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	int *cleaned = arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the page can not be free from this function.
@@ -1423,7 +1423,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -1433,6 +1432,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
@@ -1723,7 +1723,6 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -1733,6 +1732,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
@@ -2003,11 +2003,11 @@ static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
 				 unsigned long address, void *unused)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	/* An un-locked vma doesn't have any pages to lock, continue the scan */
 	if (!(vma->vm_flags & VM_LOCKED))
 		return true;
@@ -2078,7 +2078,6 @@ static bool page_make_device_exclusive_one(struct page *page,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -2090,6 +2089,7 @@ static bool page_make_device_exclusive_one(struct page *page,
 	swp_entry_t entry;
 	pte_t swp_pte;
 
+	pvmw_set_page(&pvmw, page);
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
 				      vma->vm_mm, address, min(vma->vm_end,
 				      address + page_size(page)), args->owner);

From patchwork Thu Feb 3 17:19:04 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Muchun Song
Subject: [PATCH 2/2] mm: Convert page_vma_mapped_walk to work on PFNs
Date: Thu, 3 Feb 2022 17:19:04 +0000
Message-Id: <20220203171904.609984-2-willy@infradead.org>
In-Reply-To: <20220203171904.609984-1-willy@infradead.org>
References: <20220203171904.609984-1-willy@infradead.org>
page_mapped_in_vma() really just wants to walk one page, but as the
code stands, if passed the head page of a compound page, it will walk
every page in the compound page. Extract pfn/nr_pages/pgoff from the
struct page early, so they can be overridden by page_mapped_in_vma().

Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 include/linux/hugetlb.h |  5 ++++
 include/linux/rmap.h    | 13 +++++++---
 mm/internal.h           | 15 ++++++-----
 mm/migrate.c            |  2 +-
 mm/page_vma_mapped.c    | 57 ++++++++++++++++++-----------------------
 mm/rmap.c               |  8 +++---
 6 files changed, 52 insertions(+), 48 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d1897a69c540..6ba2f8e74fbb 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -970,6 +970,11 @@ static inline struct hstate *page_hstate(struct page *page)
 	return NULL;
 }
 
+static inline struct hstate *size_to_hstate(unsigned long size)
+{
+	return NULL;
+}
+
 static inline unsigned long huge_page_size(struct hstate *h)
 {
 	return PAGE_SIZE;
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 003bb5775bb1..6c0ebbd96e95 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * The anon_vma heads a list of private "related" vmas, to scan if
@@ -200,11 +201,13 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 
 /* Avoid racy checks */
 #define PVMW_SYNC		(1 << 0)
-/* Look for migarion entries rather than present PTEs */
+/* Look for migration entries rather than present PTEs */
 #define PVMW_MIGRATION		(1 << 1)
 
 struct page_vma_mapped_walk {
-	struct page *page;
+	unsigned long pfn;
+	unsigned long nr_pages;
+	pgoff_t pgoff;
 	struct vm_area_struct *vma;
 	unsigned long address;
 	pmd_t *pmd;
@@ -216,13 +219,15 @@ struct page_vma_mapped_walk {
 static inline void pvmw_set_page(struct page_vma_mapped_walk *pvmw,
 				 struct page *page)
 {
-	pvmw->page = page;
+	pvmw->pfn = page_to_pfn(page);
+	pvmw->nr_pages = compound_nr(page);
+	pvmw->pgoff = page_to_pgoff(page);
 }
 
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
-	if (pvmw->pte && !PageHuge(pvmw->page))
+	if (pvmw->pte && !is_vm_hugetlb_page(pvmw->vma))
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);
diff --git a/mm/internal.h b/mm/internal.h
index b7a2195c12b1..7f1db0f1a8bc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 
 struct folio_batch;
@@ -459,18 +460,20 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
- * Then at what user virtual address will none of the page be found in vma?
+ * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
- * If page is a compound head, the entire compound page is considered.
  */
-static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+static inline unsigned long vma_address_end(struct page_vma_mapped_walk *pvmw)
 {
+	struct vm_area_struct *vma = pvmw->vma;
 	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page) + compound_nr(page);
+	/* Common case, plus ->pgoff is invalid for KSM */
+	if (pvmw->nr_pages == 1)
+		return pvmw->address + PAGE_SIZE;
+
+	pgoff = pvmw->pgoff + pvmw->nr_pages;
 	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	/* Check for address beyond vma (or wrapped through 0?) */
 	if (address < vma->vm_start || address > vma->vm_end)
diff --git a/mm/migrate.c b/mm/migrate.c
index 07464fd45925..766dc67874a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -191,7 +191,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		if (PageKsm(page))
 			new = page;
 		else
-			new = page - pvmw.page->index +
+			new = page - pvmw.pgoff +
 				linear_page_index(vma, pvmw.address);
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f7b331081791..228d5103e6d1 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -53,18 +53,6 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 	return true;
 }
 
-static inline bool pfn_is_match(struct page *page, unsigned long pfn)
-{
-	unsigned long page_pfn = page_to_pfn(page);
-
-	/* normal page and hugetlbfs page */
-	if (!PageTransCompound(page) || PageHuge(page))
-		return page_pfn == pfn;
-
-	/* THP can be referenced by any subpage */
-	return pfn >= page_pfn && pfn - page_pfn < thp_nr_pages(page);
-}
-
 /**
  * check_pte - check if @pvmw->page is mapped at the @pvmw->pte
  * @pvmw: page_vma_mapped_walk struct, includes a pair pte and page for checking
@@ -116,7 +104,17 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		pfn = pte_pfn(*pvmw->pte);
 	}
 
-	return pfn_is_match(pvmw->page, pfn);
+	return (pfn - pvmw->pfn) < pvmw->nr_pages;
+}
+
+/* Returns true if the two ranges overlap.  Careful to not overflow. */
+static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
+{
+	if ((pfn + HPAGE_PMD_NR - 1) < pvmw->pfn)
+		return false;
+	if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
+		return false;
+	return true;
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
@@ -127,7 +125,7 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 }
 
 /**
- * page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at
+ * page_vma_mapped_walk - check if @pvmw->pfn is mapped in @pvmw->vma at
  * @pvmw->address
  * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
  * must be set. pmd, pte and ptl must be NULL.
@@ -152,8 +150,8 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
  */
bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
{
-	struct mm_struct *mm = pvmw->vma->vm_mm;
-	struct page *page = pvmw->page;
+	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_struct *mm = vma->vm_mm;
 	unsigned long end;
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -164,32 +162,26 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pmd && !pvmw->pte)
 		return not_found(pvmw);
 
-	if (unlikely(PageHuge(page))) {
+	if (unlikely(is_vm_hugetlb_page(vma))) {
+		unsigned long size = pvmw->nr_pages * PAGE_SIZE;
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
 
 		/* when pud is not present, pte will be NULL */
-		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
+		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
 		if (!pvmw->pte)
 			return false;
 
-		pvmw->ptl = huge_pte_lockptr(page_hstate(page), mm, pvmw->pte);
+		pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm,
+						pvmw->pte);
 		spin_lock(pvmw->ptl);
 		if (!check_pte(pvmw))
 			return not_found(pvmw);
 		return true;
 	}
 
-	/*
-	 * Seek to next pte only makes sense for THP.
-	 * But more important than that optimization, is to filter out
-	 * any PageKsm page: whose page->index misleads vma_address()
-	 * and vma_address_end() to disaster.
-	 */
-	end = PageTransCompound(page) ?
-		vma_address_end(page, pvmw->vma) :
-		pvmw->address + PAGE_SIZE;
+	end = vma_address_end(pvmw);
 	if (pvmw->pte)
 		goto next_pte;
restart:
@@ -224,7 +216,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (likely(pmd_trans_huge(pmde))) {
 			if (pvmw->flags & PVMW_MIGRATION)
 				return not_found(pvmw);
-			if (pmd_page(pmde) != page)
+			if (!check_pmd(pmd_pfn(pmde), pvmw))
 				return not_found(pvmw);
 			return true;
 		}
@@ -236,7 +228,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				return not_found(pvmw);
 			entry = pmd_to_swp_entry(pmde);
 			if (!is_migration_entry(entry) ||
-			    pfn_swap_entry_to_page(entry) != page)
+			    !check_pmd(swp_offset(entry), pvmw))
 				return not_found(pvmw);
 			return true;
 		}
@@ -250,7 +242,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			 * cleared *pmd but not decremented compound_mapcount().
 			 */
 			if ((pvmw->flags & PVMW_SYNC) &&
-			    PageTransCompound(page)) {
+			    (pvmw->nr_pages >= HPAGE_PMD_NR)) {
 				spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
 
 				spin_unlock(ptl);
@@ -307,7 +299,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
+		.pfn = page_to_pfn(page),
+		.nr_pages = 1,
 		.vma = vma,
 		.flags = PVMW_SYNC,
 	};
diff --git a/mm/rmap.c b/mm/rmap.c
index fa8478372e94..d62a6fcef318 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -946,7 +946,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				vma_address_end(page, vma));
+				vma_address_end(&pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1453,8 +1453,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	range.end = PageKsm(page) ?
-			address + PAGE_SIZE : vma_address_end(page, vma);
+	range.end = vma_address_end(&pvmw);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address, range.end);
 	if (PageHuge(page)) {
@@ -1757,8 +1756,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	range.end = PageKsm(page) ?
-			address + PAGE_SIZE : vma_address_end(page, vma);
+	range.end = vma_address_end(&pvmw);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address, range.end);
 	if (PageHuge(page)) {