From patchwork Thu Sep 21 16:20:05 2023
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, "James E.J. Bottomley", Helge Deller,
    Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Christian Borntraeger, Sven Schnelle, Gerald Schaefer,
    "David S. Miller", Arnd Bergmann, Mike Kravetz, Muchun Song,
    SeongJae Park, Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
    Lorenzo Stoakes, Anshuman Khandual, Peter Xu, Axel Rasmussen, Qi Zheng
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, stable@vger.kernel.org
Subject: [PATCH v1 6/8] mm: hugetlb: Convert set_huge_pte_at() to take vma
Date: Thu, 21 Sep 2023 17:20:05 +0100
Message-Id: <20230921162007.1630149-7-ryan.roberts@arm.com>
In-Reply-To: <20230921162007.1630149-1-ryan.roberts@arm.com>
References: <20230921162007.1630149-1-ryan.roberts@arm.com>

In order to fix a bug, arm64 needs access to the vma inside its
implementation of set_huge_pte_at(). Provide for this by converting the
mm parameter to a vma. Any implementations that require the mm can
access it via vma->vm_mm.

This commit makes the required modifications to the core mm. Separate
commits update the arches, before the actual bug is fixed in arm64.

No behavioral changes intended.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: SeongJae Park
---
 include/asm-generic/hugetlb.h |  6 +++---
 include/linux/hugetlb.h       |  6 +++---
 mm/damon/vaddr.c              |  2 +-
 mm/hugetlb.c                  | 30 +++++++++++++++---------------
 mm/migrate.c                  |  2 +-
 mm/rmap.c                     | 10 +++++-----
 mm/vmalloc.c                  |  5 ++++-
 7 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 4da02798a00b..515e4777fb65 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -75,10 +75,10 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 #endif
 
 #ifndef __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
-static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_huge_pte_at(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, pte_t pte)
 {
-	set_pte_at(mm, addr, ptep, pte);
+	set_pte_at(vma->vm_mm, addr, ptep, pte);
 }
 #endif
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5b2626063f4f..08184f32430c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -984,7 +984,7 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 						unsigned long addr, pte_t *ptep,
 						pte_t old_pte, pte_t pte)
 {
-	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
+	set_huge_pte_at(vma, addr, ptep, pte);
 }
 #endif
 
@@ -1172,8 +1172,8 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 #endif
 }
 
-static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-				   pte_t *ptep, pte_t pte)
+static inline void set_huge_pte_at(struct vm_area_struct *vma,
+				   unsigned long addr, pte_t *ptep, pte_t pte)
 {
 }
 
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 4c81a9dbd044..55da8cee8fbc 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -347,7 +347,7 @@ static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
 	if (pte_young(entry)) {
 		referenced = true;
 		entry = pte_mkold(entry);
-		set_huge_pte_at(mm, addr, pte, entry);
+		set_huge_pte_at(vma, addr, pte, entry);
 	}
 
 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba6d39b71cb1..bcc30cd62586 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4988,7 +4988,7 @@ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long add
 	hugepage_add_new_anon_rmap(new_folio, vma, addr);
 	if (userfaultfd_wp(vma) && huge_pte_uffd_wp(old))
 		newpte = huge_pte_mkuffd_wp(newpte);
-	set_huge_pte_at(vma->vm_mm, addr, ptep, newpte);
+	set_huge_pte_at(vma, addr, ptep, newpte);
 	hugetlb_count_add(pages_per_huge_page(hstate_vma(vma)), vma->vm_mm);
 	folio_set_hugetlb_migratable(new_folio);
 }
@@ -5065,7 +5065,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) {
 			if (!userfaultfd_wp(dst_vma))
 				entry = huge_pte_clear_uffd_wp(entry);
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_pte_at(dst_vma, addr, dst_pte, entry);
 		} else if (unlikely(is_hugetlb_entry_migration(entry))) {
 			swp_entry_t swp_entry = pte_to_swp_entry(entry);
 			bool uffd_wp = pte_swp_uffd_wp(entry);
@@ -5080,17 +5080,17 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				entry = swp_entry_to_pte(swp_entry);
 				if (userfaultfd_wp(src_vma) && uffd_wp)
 					entry = pte_swp_mkuffd_wp(entry);
-				set_huge_pte_at(src, addr, src_pte, entry);
+				set_huge_pte_at(src_vma, addr, src_pte, entry);
 			}
 			if (!userfaultfd_wp(dst_vma))
 				entry = huge_pte_clear_uffd_wp(entry);
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_pte_at(dst_vma, addr, dst_pte, entry);
 		} else if (unlikely(is_pte_marker(entry))) {
 			pte_marker marker = copy_pte_marker(
 						pte_to_swp_entry(entry), dst_vma);
 
 			if (marker)
-				set_huge_pte_at(dst, addr, dst_pte,
+				set_huge_pte_at(dst_vma, addr, dst_pte,
 						make_pte_marker(marker));
 		} else {
 			entry = huge_ptep_get(src_pte);
@@ -5166,7 +5166,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 
 			if (!userfaultfd_wp(dst_vma))
 				entry = huge_pte_clear_uffd_wp(entry);
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_pte_at(dst_vma, addr, dst_pte, entry);
 			hugetlb_count_add(npages, dst);
 		}
 		spin_unlock(src_ptl);
@@ -5202,7 +5202,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 
 	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
-	set_huge_pte_at(mm, new_addr, dst_pte, pte);
+	set_huge_pte_at(vma, new_addr, dst_pte, pte);
 
 	if (src_ptl != dst_ptl)
 		spin_unlock(src_ptl);
@@ -5336,7 +5336,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 			 */
 			if (pte_swp_uffd_wp_any(pte) &&
 			    !(zap_flags & ZAP_FLAG_DROP_MARKER))
-				set_huge_pte_at(mm, address, ptep,
+				set_huge_pte_at(vma, address, ptep,
 						make_pte_marker(PTE_MARKER_UFFD_WP));
 			else
 				huge_pte_clear(mm, address, ptep, sz);
@@ -5370,7 +5370,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 		/* Leave a uffd-wp pte marker if needed */
 		if (huge_pte_uffd_wp(pte) &&
 		    !(zap_flags & ZAP_FLAG_DROP_MARKER))
-			set_huge_pte_at(mm, address, ptep,
+			set_huge_pte_at(vma, address, ptep,
 					make_pte_marker(PTE_MARKER_UFFD_WP));
 		hugetlb_count_sub(pages_per_huge_page(h), mm);
 		page_remove_rmap(page, vma, true);
@@ -5676,7 +5676,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		hugepage_add_new_anon_rmap(new_folio, vma, haddr);
 		if (huge_pte_uffd_wp(pte))
 			newpte = huge_pte_mkuffd_wp(newpte);
-		set_huge_pte_at(mm, haddr, ptep, newpte);
+		set_huge_pte_at(vma, haddr, ptep, newpte);
 		folio_set_hugetlb_migratable(new_folio);
 		/* Make the old page be freed below */
 		new_folio = old_folio;
@@ -5972,7 +5972,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	 */
 	if (unlikely(pte_marker_uffd_wp(old_pte)))
 		new_pte = huge_pte_mkuffd_wp(new_pte);
-	set_huge_pte_at(mm, haddr, ptep, new_pte);
+	set_huge_pte_at(vma, haddr, ptep, new_pte);
 
 	hugetlb_count_add(pages_per_huge_page(h), mm);
 	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
@@ -6261,7 +6261,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		}
 
 		_dst_pte = make_pte_marker(PTE_MARKER_POISONED);
-		set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+		set_huge_pte_at(dst_vma, dst_addr, dst_pte, _dst_pte);
 
 		/* No need to invalidate - it was non-present before */
 		update_mmu_cache(dst_vma, dst_addr, dst_pte);
@@ -6412,7 +6412,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	if (wp_enabled)
 		_dst_pte = huge_pte_mkuffd_wp(_dst_pte);
 
-	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	set_huge_pte_at(dst_vma, dst_addr, dst_pte, _dst_pte);
 
 	hugetlb_count_add(pages_per_huge_page(h), dst_mm);
 
@@ -6598,7 +6598,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			else if (uffd_wp_resolve)
 				newpte = pte_swp_clear_uffd_wp(newpte);
 			if (!pte_same(pte, newpte))
-				set_huge_pte_at(mm, address, ptep, newpte);
+				set_huge_pte_at(vma, address, ptep, newpte);
 		} else if (unlikely(is_pte_marker(pte))) {
 			/* No other markers apply for now. */
 			WARN_ON_ONCE(!pte_marker_uffd_wp(pte));
@@ -6622,7 +6622,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			/* None pte */
 			if (unlikely(uffd_wp))
 				/* Safe to modify directly (none->non-present). */
-				set_huge_pte_at(mm, address, ptep,
+				set_huge_pte_at(vma, address, ptep,
 						make_pte_marker(PTE_MARKER_UFFD_WP));
 		}
 		spin_unlock(ptl);
diff --git a/mm/migrate.c b/mm/migrate.c
index b7fa020003f3..6aa752984f32 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -251,7 +251,7 @@ static bool remove_migration_pte(struct folio *folio,
 						rmap_flags);
 			else
 				page_dup_file_rmap(new, true);
-			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
+			set_huge_pte_at(vma, pvmw.address, pvmw.pte, pte);
 		} else
 #endif
 		{
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..a6353a0c67e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1628,7 +1628,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(folio_nr_pages(folio), mm);
-				set_huge_pte_at(mm, address, pvmw.pte, pteval);
+				set_huge_pte_at(vma, address, pvmw.pte, pteval);
 			} else {
 				dec_mm_counter(mm, mm_counter(&folio->page));
 				set_pte_at(mm, address, pvmw.pte, pteval);
@@ -2020,7 +2020,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(folio_nr_pages(folio), mm);
-				set_huge_pte_at(mm, address, pvmw.pte, pteval);
+				set_huge_pte_at(vma, address, pvmw.pte, pteval);
 			} else {
 				dec_mm_counter(mm, mm_counter(&folio->page));
 				set_pte_at(mm, address, pvmw.pte, pteval);
@@ -2044,7 +2044,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				if (folio_test_hugetlb(folio))
-					set_huge_pte_at(mm, address, pvmw.pte, pteval);
+					set_huge_pte_at(vma, address, pvmw.pte, pteval);
 				else
 					set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
@@ -2058,7 +2058,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			if (anon_exclusive &&
 			    page_try_share_anon_rmap(subpage)) {
 				if (folio_test_hugetlb(folio))
-					set_huge_pte_at(mm, address, pvmw.pte, pteval);
+					set_huge_pte_at(vma, address, pvmw.pte, pteval);
 				else
 					set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
@@ -2090,7 +2090,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			if (pte_uffd_wp(pteval))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			if (folio_test_hugetlb(folio))
-				set_huge_pte_at(mm, address, pvmw.pte, swp_pte);
+				set_huge_pte_at(vma, address, pvmw.pte, swp_pte);
 			else
 				set_pte_at(mm, address, pvmw.pte, swp_pte);
 			trace_set_migration_pte(address, pte_val(swp_pte),
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ef8599d394fd..10fa40222f30 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -94,6 +94,9 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
 			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
+#ifdef CONFIG_HUGETLB_PAGE
+	struct vm_area_struct vma = TLB_FLUSH_VMA(&init_mm, 0);
+#endif
 	pte_t *pte;
 	u64 pfn;
 	unsigned long size = PAGE_SIZE;
@@ -111,7 +114,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			pte_t entry = pfn_pte(pfn, prot);
 
 			entry = arch_make_huge_pte(entry, ilog2(size), 0);
-			set_huge_pte_at(&init_mm, addr, pte, entry);
+			set_huge_pte_at(&vma, addr, pte, entry);
 			pfn += PFN_DOWN(size);
 			continue;
 		}
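
For readers who want the shape of the interface change without wading
through the full diff, a minimal sketch follows. It is NOT kernel code:
the reduced mm_struct/vm_area_struct definitions and the demo main()
are hypothetical userspace stand-ins, kept only to show that callers
now hand down the vma, and that implementations which still need the mm
recover it in one step via vma->vm_mm (exactly as the generic fallback
in include/asm-generic/hugetlb.h does above).

/*
 * Illustrative sketch only -- hypothetical stand-ins for the kernel
 * types, reduced to the one relationship this series relies on.
 */
#include <stdio.h>

struct mm_struct {
	const char *name;		/* stand-in payload */
};

struct vm_area_struct {
	struct mm_struct *vm_mm;	/* back-pointer every vma carries */
};

typedef unsigned long pte_t;

/* New-style signature: the vma now travels down to the arch hook. */
static void set_huge_pte_at(struct vm_area_struct *vma, unsigned long addr,
			    pte_t *ptep, pte_t pte)
{
	/* An arch that only needs the mm recovers it trivially ... */
	struct mm_struct *mm = vma->vm_mm;

	/* ... while arm64 can additionally inspect other vma state. */
	printf("set pte %#lx at %#lx in %s\n", pte, addr, mm->name);
	*ptep = pte;
}

int main(void)
{
	struct mm_struct mm = { .name = "demo-mm" };
	struct vm_area_struct vma = { .vm_mm = &mm };
	pte_t slot = 0;

	/* Callers that used to pass 'mm' now pass the enclosing vma. */
	set_huge_pte_at(&vma, 0x200000UL, &slot, 0xabcUL);
	return 0;
}

One caller has no natural vma: vmap_pte_range() operates on init_mm for
kernel mappings, which is why the mm/vmalloc.c hunk above materializes
a stack-local vma with TLB_FLUSH_VMA(&init_mm, 0) under
CONFIG_HUGETLB_PAGE before calling set_huge_pte_at().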