From patchwork Thu Feb  3 17:19:03 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12734402
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Muchun Song
Subject: [PATCH 1/2] mm: Add pvmw_set_page()
Date: Thu, 3 Feb 2022 17:19:03 +0000
Message-Id: <20220203171904.609984-1-willy@infradead.org>
X-Mailer: git-send-email 2.31.1

Instead of setting the page directly in struct page_vma_mapped_walk,
use this helper to allow us to transition to a PFN approach in the
next patch.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h    |  6 ++++++
 kernel/events/uprobes.c |  2 +-
 mm/damon/paddr.c        |  4 ++--
 mm/ksm.c                |  2 +-
 mm/migrate.c            |  2 +-
 mm/page_idle.c          |  2 +-
 mm/rmap.c               | 12 ++++++------
 7 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e704b1a4c06c..003bb5775bb1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -213,6 +213,12 @@ struct page_vma_mapped_walk {
 	unsigned int flags;
 };
 
+static inline void pvmw_set_page(struct page_vma_mapped_walk *pvmw,
+				 struct page *page)
+{
+	pvmw->page = page;
+}
+
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 6357c3580d07..5f74671b0066 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -156,13 +156,13 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = compound_head(old_page),
 		.vma = vma,
 		.address = addr,
 	};
 	int err;
 	struct mmu_notifier_range range;
 
+	pvmw_set_page(&pvmw, compound_head(old_page));
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr,
 				addr + PAGE_SIZE);
 
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 5e8244f65a1a..4e27d64abbb7 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -20,11 +20,11 @@ static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
 		unsigned long addr, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte)
@@ -94,11 +94,11 @@ static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma,
 {
 	struct damon_pa_access_chk_result *result = arg;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	result->accessed = false;
 	result->page_sz = PAGE_SIZE;
 	while (page_vma_mapped_walk(&pvmw)) {
diff --git a/mm/ksm.c b/mm/ksm.c
index c20bd4d9a0d9..1639160c9e9a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1035,13 +1035,13 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 	};
 	int swapped;
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
 
+	pvmw_set_page(&pvmw, page);
 	pvmw.address = page_address_in_vma(page, vma);
 	if (pvmw.address == -EFAULT)
 		goto out;
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..07464fd45925 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -177,7 +177,6 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 				 unsigned long addr, void *old)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = old,
 		.vma = vma,
 		.address = addr,
 		.flags = PVMW_SYNC | PVMW_MIGRATION,
@@ -187,6 +186,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 	swp_entry_t entry;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
+	pvmw_set_page(&pvmw, old);
 	while (page_vma_mapped_walk(&pvmw)) {
 		if (PageKsm(page))
 			new = page;
diff --git a/mm/page_idle.c b/mm/page_idle.c
index edead6a8a5f9..20d35d720872 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -49,12 +49,12 @@ static bool page_idle_clear_pte_refs_one(struct page *page,
 					unsigned long addr, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 	bool referenced = false;
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte) {
diff --git a/mm/rmap.c b/mm/rmap.c
index a531b64d53fa..fa8478372e94 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -803,12 +803,12 @@ static bool page_referenced_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct page_referenced_arg *pra = arg;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
 	int referenced = 0;
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;
 
@@ -932,7 +932,6 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 		.flags = PVMW_SYNC,
@@ -940,6 +939,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	int *cleaned = arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the page can not be free from this function.
@@ -1423,7 +1423,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -1433,6 +1432,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
@@ -1723,7 +1723,6 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -1733,6 +1732,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
@@ -2003,11 +2003,11 @@ static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
 				 unsigned long address, void *unused)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	/* An un-locked vma doesn't have any pages to lock, continue the scan */
 	if (!(vma->vm_flags & VM_LOCKED))
 		return true;
@@ -2078,7 +2078,6 @@ static bool page_make_device_exclusive_one(struct page *page,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -2090,6 +2089,7 @@ static bool page_make_device_exclusive_one(struct page *page,
 	swp_entry_t entry;
 	pte_t swp_pte;
 
+	pvmw_set_page(&pvmw, page);
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
 				      vma->vm_mm, address, min(vma->vm_end,
 				      address + page_size(page)), args->owner);
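
For readers new to page_vma_mapped_walk(), a minimal, self-contained sketch
of the calling pattern this helper establishes is below. It is not part of
the patch: the struct is a simplified stand-in for the kernel's
struct page_vma_mapped_walk (most members omitted) and example_caller() is a
hypothetical caller invented for illustration; only pvmw_set_page() itself
mirrors the hunk in include/linux/rmap.h.

	/*
	 * Minimal sketch, not kernel code: simplified stand-ins so the
	 * pattern compiles on its own.  Callers now initialise the walk
	 * without .page and call pvmw_set_page() before the first
	 * page_vma_mapped_walk().
	 */
	struct page;			/* opaque here; defined by the kernel */
	struct vm_area_struct;		/* likewise */

	struct page_vma_mapped_walk {	/* simplified; most members omitted */
		struct page *page;
		struct vm_area_struct *vma;
		unsigned long address;
		unsigned int flags;
	};

	static inline void pvmw_set_page(struct page_vma_mapped_walk *pvmw,
					 struct page *page)
	{
		pvmw->page = page;
	}

	/* Hypothetical caller showing the pattern after this patch. */
	void example_caller(struct page *page, struct vm_area_struct *vma,
			    unsigned long addr)
	{
		struct page_vma_mapped_walk pvmw = {
			.vma = vma,	/* .page is no longer set here ... */
			.address = addr,
		};

		pvmw_set_page(&pvmw, page);	/* ... but via the helper, before
						 * the walk loop would start. */
	}

Routing every assignment through one trivial helper means the next patch can
change what the walk stores (the PFN approach mentioned in the commit
message) without having to touch each of these call sites a second time.
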