| Message ID | 20240715192142.3241557-9-peterx@redhat.com (mailing list archive) |
| --- | --- |
| State | New |
| Series | mm/mprotect: Fix dax puds |
On Mon, Jul 15, 2024 at 12:22 PM Peter Xu <peterx@redhat.com> wrote:
> [...]
> -        pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
> +
> +        if (pud_leaf(pud)) {
> +                if ((next - addr != PUD_SIZE) ||
> +                    pgtable_split_needed(vma, cp_flags)) {
> +                        __split_huge_pud(vma, pudp, addr);
> +                        goto again;

IIUC, most of the time, we're just going to end up clearing the PUD in
this case. __split_huge_pud() will just clear it, then we goto again
and `continue` to the next pudp. Is that ok?

(I think it's ok as long as: you never map an anonymous page with a
PUD, and that uffd-wp is not usable with non-hugetlb PUD mappings of
user memory (which I think is only DAX?). So it seems ok today...?)

Also, does the comment in pgtable_split_needed() need updating?

Somewhat related question: change_huge_pmd() is very careful not to
clear the PMD before writing the new value. Yet change_pmd_range(),
when it calls into __split_huge_pmd(), will totally clear the PMD and
then populate the PTEs underneath (in some cases at least), seemingly
reintroducing the MADV_DONTNEED concern. But your PUD version, because
it never re-populates the PUD (or PMDs/PTEs underneath) does not have
this issue. WDYT?

Thanks for this series!
On Thu, Jul 25, 2024 at 11:29:49AM -0700, James Houghton wrote:
> > -        pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
> > +
> > +        if (pud_leaf(pud)) {
> > +                if ((next - addr != PUD_SIZE) ||
> > +                    pgtable_split_needed(vma, cp_flags)) {
> > +                        __split_huge_pud(vma, pudp, addr);
> > +                        goto again;
>
> IIUC, most of the time, we're just going to end up clearing the PUD in
> this case. __split_huge_pud() will just clear it, then we goto again
> and `continue` to the next pudp. Is that ok?
>
> (I think it's ok as long as: you never map an anonymous page with a
> PUD,

I think this is true.

> and that uffd-wp is not usable with non-hugetlb PUD mappings of
> user memory (which I think is only DAX?).

Uffd-wp has the async mode that can even work with dax puds.. even though I
don't think anyone should be using it. Just like I'm more sure that nobody
is using mprotect() too with dax pud, and it further justifies why nobody
cared this much..

What uffd-wp would do in this case is it'll make pgtable_split_needed()
returns true on this PUD, the PUD got wiped out, goto again, then
change_prepare() will populate this pud with a pgtable page. Then it goes
downwards, install PMD pgtable, then probably start installing pte markers
ultimately if it's a wr-protect operation.

> So it seems ok today...?)

Yes I think it's ok so far, unless you think it's not. :)

>
> Also, does the comment in pgtable_split_needed() need updating?

/*
 * Return true if we want to split THPs into PTE mappings in change
 * protection procedure, false otherwise.
 */

It looks to me it's ok for now to me? THP can represents PUD in dax, and we
indeed want to break THPs (no matter PUD/PMD) finally into PTE mappings.

>
> Somewhat related question: change_huge_pmd() is very careful not to
> clear the PMD before writing the new value. Yet change_pmd_range(),
> when it calls into __split_huge_pmd(), will totally clear the PMD and
> then populate the PTEs underneath (in some cases at least), seemingly
> reintroducing the MADV_DONTNEED concern. But your PUD version, because
> it never re-populates the PUD (or PMDs/PTEs underneath) does not have
> this issue. WDYT?

Could you elaborate more on the DONTNEED issue you're mentioning here?

>
> Thanks for this series!

Thanks for reviewing it, James.
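For context on the async mode referenced above, the sketch below shows roughly how userspace would arm async uffd-wp on an already established mapping at addr/len. It is an illustrative sketch only, not code from this series; it assumes a recent kernel that exposes UFFD_FEATURE_WP_ASYNC, and error handling is minimal.

/* Hedged sketch: arm async uffd-wp write-protect tracking on [addr, addr + len). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int uffd_wp_async_arm(void *addr, size_t len)
{
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = {
                .api = UFFD_API,
                .features = UFFD_FEATURE_WP_ASYNC,      /* async write-protect tracking */
        };
        struct uffdio_register reg = {
                .range = { .start = (unsigned long)addr, .len = len },
                .mode = UFFDIO_REGISTER_MODE_WP,
        };
        struct uffdio_writeprotect wp = {
                .range = { .start = (unsigned long)addr, .len = len },
                .mode = UFFDIO_WRITEPROTECT_MODE_WP,    /* wr-protect the whole range */
        };

        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
            ioctl(uffd, UFFDIO_REGISTER, &reg) ||
            ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
                return -1;
        return uffd;
}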
On Thu, Jul 25, 2024 at 3:41 PM Peter Xu <peterx@redhat.com> wrote:
> On Thu, Jul 25, 2024 at 11:29:49AM -0700, James Houghton wrote:
> > [...]
> > IIUC, most of the time, we're just going to end up clearing the PUD in
> > this case. __split_huge_pud() will just clear it, then we goto again
> > and `continue` to the next pudp. Is that ok?
> >
> > (I think it's ok as long as: you never map an anonymous page with a
> > PUD,
>
> I think this is true.
>
> > and that uffd-wp is not usable with non-hugetlb PUD mappings of
> > user memory (which I think is only DAX?).
>
> Uffd-wp has the async mode that can even work with dax puds.. even though I
> don't think anyone should be using it. Just like I'm more sure that nobody
> is using mprotect() too with dax pud, and it further justifies why nobody
> cared this much..
>
> What uffd-wp would do in this case is it'll make pgtable_split_needed()
> returns true on this PUD, the PUD got wiped out, goto again, then
> change_prepare() will populate this pud with a pgtable page. Then it goes
> downwards, install PMD pgtable, then probably start installing pte markers
> ultimately if it's a wr-protect operation.

Ah, I didn't understand this patch correctly. You're right, you'll
install PMDs underneath -- I thought we'd just find pud_none() and
bail out. So this all makes sense, thanks!

> > So it seems ok today...?)
>
> Yes I think it's ok so far, unless you think it's not. :)
>
> > Also, does the comment in pgtable_split_needed() need updating?
>
> /*
>  * Return true if we want to split THPs into PTE mappings in change
>  * protection procedure, false otherwise.
>  */
>
> It looks to me it's ok for now to me? THP can represents PUD in dax, and we
> indeed want to break THPs (no matter PUD/PMD) finally into PTE mappings.

Oh, heh I was thinking of the other comment:

/*
 * pte markers only resides in pte level, if we need pte markers,
 * we need to split.  We cannot wr-protect shmem thp because file
 * thp is handled differently when split by erasing the pmd so far.
 */

I guess this is fine too, it just kind of reads as if this function is
only called for PMDs. *shrug*

> > Somewhat related question: change_huge_pmd() is very careful not to
> > clear the PMD before writing the new value. Yet change_pmd_range(),
> > when it calls into __split_huge_pmd(), will totally clear the PMD and
> > then populate the PTEs underneath (in some cases at least), seemingly
> > reintroducing the MADV_DONTNEED concern. But your PUD version, because
> > it never re-populates the PUD (or PMDs/PTEs underneath) does not have
> > this issue. WDYT?

I guess I'm wrong about this -- your PUD version is the same as the
PMD version in this respect: both clear the PUD/PMD and then install
lower page table entries.

> Could you elaborate more on the DONTNEED issue you're mentioning here?

In change_huge_pmd(), there is a comment about not clearing the pmd
before setting the new value. I guess the issue is: if a user is
MADV_DONTNEEDing some memory and happens to see a cleared PMD, it will
just move on. But in this case, if change_huge_pmd() temporarily
cleared a PMD, then MADV_DONTNEED saw it and moved on, and then
change_huge_pmd() installed a new PMD, the MADV_DONTNEED has basically
skipped over the PMD, probably not what the user wanted. It seems like
we have the same issue when a PMD is cleared when we're splitting it.

But yeah, given that your PUD version is apparently no different from
the PMD version in this respect, maybe this question isn't very
interesting. :)

> > Thanks for this series!
>
> Thanks for reviewing it, James.
On Thu, Jul 25, 2024 at 05:23:48PM -0700, James Houghton wrote:
> On Thu, Jul 25, 2024 at 3:41 PM Peter Xu <peterx@redhat.com> wrote:
> [...]
> Oh, heh I was thinking of the other comment:
>
> /*
>  * pte markers only resides in pte level, if we need pte markers,
>  * we need to split.  We cannot wr-protect shmem thp because file
>  * thp is handled differently when split by erasing the pmd so far.
>  */
>
> I guess this is fine too, it just kind of reads as if this function is
> only called for PMDs. *shrug*

Ah ok, yeah it looks mostly fine here too. Let's make this easy by
keeping it as-is, but if there'll be a new version I can touch it up if
it helps the readings.

> > Could you elaborate more on the DONTNEED issue you're mentioning here?
>
> In change_huge_pmd(), there is a comment about not clearing the pmd
> before setting the new value. I guess the issue is: if a user is
> MADV_DONTNEEDing some memory and happens to see a cleared PMD, it will
> just move on. But in this case, if change_huge_pmd() temporarily
> cleared a PMD, then MADV_DONTNEED saw it and moved on, and then
> change_huge_pmd() installed a new PMD, the MADV_DONTNEED has basically
> skipped over the PMD, probably not what the user wanted. It seems like
> we have the same issue when a PMD is cleared when we're splitting it.
>
> But yeah, given that your PUD version is apparently no different from
> the PMD version in this respect, maybe this question isn't very
> interesting. :)

Ah that one, so yeah that's why I introduced pudp_invalidate() in this
series to make sure that issue isn't there.

Then I assume we're good.

Thanks,
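As background for the race discussed above: on private anonymous memory, MADV_DONTNEED guarantees that the next access after the call sees zero-filled pages, and that guarantee is what would be silently broken if madvise skipped over a PMD/PUD that a concurrent protection change had only transiently cleared (hence pmdp_invalidate()/pudp_invalidate() instead of clearing the entry). A minimal single-threaded sketch of the guarantee itself, with the 2M size and MADV_HUGEPAGE hint as illustrative assumptions:

/* Sketch: MADV_DONTNEED on private anonymous memory must read back as zeroes. */
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 2UL << 20;  /* one PMD-sized range on x86_64 (assumption) */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        assert(p != MAP_FAILED);
        madvise(p, len, MADV_HUGEPAGE);    /* hint only: may or may not get a THP */
        memset(p, 0xab, len);              /* populate the range */
        madvise(p, len, MADV_DONTNEED);    /* drop it: anon pages must zap to zero */
        assert(p[0] == 0 && p[len - 1] == 0);
        return 0;
}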
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cff002be83eb..6e742680590a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -336,6 +336,17 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
 void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
                unsigned long address);
 
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+                    pud_t *pudp, unsigned long addr, pgprot_t newprot,
+                    unsigned long cp_flags);
+#else
+static inline int
+change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+                pud_t *pudp, unsigned long addr, pgprot_t newprot,
+                unsigned long cp_flags) { return 0; }
+#endif
+
 #define split_huge_pud(__vma, __pud, __address)                         \
        do {                                                            \
                pud_t *____pud = (__pud);                               \
@@ -579,6 +590,19 @@ static inline int next_order(unsigned long *orders, int prev)
 {
        return 0;
 }
+
+static inline void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+                                    unsigned long address)
+{
+}
+
+static inline int change_huge_pud(struct mmu_gather *tlb,
+                                  struct vm_area_struct *vma, pud_t *pudp,
+                                  unsigned long addr, pgprot_t newprot,
+                                  unsigned long cp_flags)
+{
+        return 0;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline int split_folio_to_list_to_order(struct folio *folio,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c10247bef08a..9a00c5955c0c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2112,6 +2112,53 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
        return ret;
 }
 
+/*
+ * Returns:
+ *
+ * - 0: if pud leaf changed from under us
+ * - 1: if pud can be skipped
+ * - HPAGE_PUD_NR: if pud was successfully processed
+ */
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+                    pud_t *pudp, unsigned long addr, pgprot_t newprot,
+                    unsigned long cp_flags)
+{
+        struct mm_struct *mm = vma->vm_mm;
+        pud_t oldpud, entry;
+        spinlock_t *ptl;
+
+        tlb_change_page_size(tlb, HPAGE_PUD_SIZE);
+
+        /* NUMA balancing doesn't apply to dax */
+        if (cp_flags & MM_CP_PROT_NUMA)
+                return 1;
+
+        /*
+         * Huge entries on userfault-wp only works with anonymous, while we
+         * don't have anonymous PUDs yet.
+         */
+        if (WARN_ON_ONCE(cp_flags & MM_CP_UFFD_WP_ALL))
+                return 1;
+
+        ptl = __pud_trans_huge_lock(pudp, vma);
+        if (!ptl)
+                return 0;
+
+        /*
+         * Can't clear PUD or it can race with concurrent zapping.  See
+         * change_huge_pmd().
+         */
+        oldpud = pudp_invalidate(vma, addr, pudp);
+        entry = pud_modify(oldpud, newprot);
+        set_pud_at(mm, addr, pudp, entry);
+        tlb_flush_pud_range(tlb, addr, HPAGE_PUD_SIZE);
+
+        spin_unlock(ptl);
+        return HPAGE_PUD_NR;
+}
+#endif
+
 #ifdef CONFIG_USERFAULTFD
 /*
  * The PT lock for src_pmd and dst_vma/src_vma (for reading) are locked by
@@ -2342,6 +2389,11 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
        spin_unlock(ptl);
        mmu_notifier_invalidate_range_end(&range);
 }
+#else
+void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+                      unsigned long address)
+{
+}
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
 static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 2a81060b603d..694f13b83864 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -425,31 +425,53 @@ static inline long change_pud_range(struct mmu_gather *tlb,
                unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
        struct mmu_notifier_range range;
-       pud_t *pud;
+       pud_t *pudp, pud;
        unsigned long next;
        long pages = 0, ret;
 
        range.start = 0;
 
-       pud = pud_offset(p4d, addr);
+       pudp = pud_offset(p4d, addr);
        do {
+again:
                next = pud_addr_end(addr, end);
-               ret = change_prepare(vma, pud, pmd, addr, cp_flags);
+               ret = change_prepare(vma, pudp, pmd, addr, cp_flags);
                if (ret) {
                        pages = ret;
                        break;
                }
-               if (pud_none_or_clear_bad(pud))
+
+               pud = READ_ONCE(*pudp);
+               if (pud_none(pud))
                        continue;
+
                if (!range.start) {
                        mmu_notifier_range_init(&range,
                                                MMU_NOTIFY_PROTECTION_VMA, 0,
                                                vma->vm_mm, addr, end);
                        mmu_notifier_invalidate_range_start(&range);
                }
-               pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
+
+               if (pud_leaf(pud)) {
+                       if ((next - addr != PUD_SIZE) ||
+                           pgtable_split_needed(vma, cp_flags)) {
+                               __split_huge_pud(vma, pudp, addr);
+                               goto again;
+                       } else {
+                               ret = change_huge_pud(tlb, vma, pudp,
+                                                     addr, newprot, cp_flags);
+                               if (ret == 0)
+                                       goto again;
+                               /* huge pud was handled */
+                               if (ret == HPAGE_PUD_NR)
+                                       pages += HPAGE_PUD_NR;
+                               continue;
+                       }
+               }
+
+               pages += change_pmd_range(tlb, vma, pudp, addr, next, newprot,
                                          cp_flags);
-       } while (pud++, addr = next, addr != end);
+       } while (pudp++, addr = next, addr != end);
 
        if (range.start)
                mmu_notifier_invalidate_range_end(&range);
This is only relevant to the two archs that support PUD dax, aka, x86_64
and ppc64. PUD THPs do not yet exist elsewhere, and hugetlb PUDs do not
count in this case.

DAX has had PUD mappings for years, but the change protection path never
worked. When the path is triggered in any form (a simple test program
would be: call mprotect() on a 1G dev_dax mapping), the kernel will report
"bad pud". This patch should fix that.

The new change_huge_pud() tries to keep everything simple. For example, it
doesn't optimize the write bit, as that would need even more PUD helpers.
It's not too bad anyway to take one more write fault in the worst case,
once per 1G range; it would be a bigger deal if it happened for each
PAGE_SIZE, though. Neither does it support the userfault-wp bits, as no
such PUD mappings are supported; file mappings always need a split there.

The same goes for TLB shootdown: the pmd path (which was for x86 only) has
the trick of using the _ad() version of pmdp_invalidate*(), which can avoid
one redundant TLB flush, but let's also leave that for later. Again, the
larger the mapping, the smaller such an effect.

Another thing worth mentioning is that this path needs to be careful when
handling the "retry" event for change_huge_pud() (where it can return 0):
it isn't like change_huge_pmd(), as the pmd version is safe with all
conditions handled in change_pte_range() later, thanks to Hugh's new
pte_offset_map_lock(). In short, change_pte_range() is simply smarter than
change_pmd_range() now after the shmem thp collapse rework. For that
reason, change_pud_range() will need a proper retry if it races with
something else while a huge PUD changed from under us.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: x86@kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h | 24 +++++++++++++++++++
 mm/huge_memory.c        | 52 +++++++++++++++++++++++++++++++++++++++++
 mm/mprotect.c           | 34 ++++++++++++++++++++++-----
 3 files changed, 104 insertions(+), 6 deletions(-)
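For reference, a minimal reproducer along the lines of the "simple test program" mentioned in the commit message could look like the sketch below. The /dev/dax0.0 path and the assumption that the device is configured with 1G alignment (so the mapping can be backed by a PUD) are illustrative; on a kernel without this fix, the mprotect() call is what makes the kernel report "bad pud".

/* Hedged sketch: mprotect() a 1G dev_dax mapping (run as root on a 1G-aligned device). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/dev/dax0.0";  /* assumed device node */
        size_t len = 1UL << 30;                                 /* one PUD: 1G */
        int fd = open(path, O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* dev_dax requires MAP_SHARED; with 1G device alignment this can map a PUD */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        p[0] = 1;                          /* fault in the PUD mapping */
        if (mprotect(p, len, PROT_READ))   /* triggered "bad pud" before this fix */
                perror("mprotect");

        munmap(p, len);
        close(fd);
        return 0;
}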