Message ID | 20191220153826.24229-1-steven.price@arm.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm/hmm: Cleanup hmm_vma_walk_pud()/walk_pud_range() |
On 12/20/19 4:38 PM, Steven Price wrote:
> There are a number of minor misuses of the page table APIs in
> hmm_vma_walk_pud():
>
> If the pud_trans_huge_lock() hasn't been obtained it might be because
> the PUD is unstable, so we should retry.
>
> If it has been obtained then there's no need for a READ_ONCE, and the
> PUD cannot be pud_none() or !pud_present() so these paths are dead code.
>
> Finally in walk_pud_range(), after a call to split_huge_pud() the code
> should check pud_trans_unstable() rather than pud_none() to decide
> whether the PUD should be retried.
>
> Suggested-by: Thomas Hellström (VMware) <thomas_os@shipmail.org>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> This is based on top of my "Generic page walk and ptdump" series and
> fixes some pre-existing bugs spotted by Thomas.
>
>  mm/hmm.c      | 16 +++++-----------
>  mm/pagewalk.c |  2 +-
>  2 files changed, 6 insertions(+), 12 deletions(-)

LGTM.

Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
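For readers not steeped in the THP locking rules, the convention the quoted description relies on is roughly the following. This is a hedged sketch of a pud_entry() callback, not code from the patch: example_pud_entry() is an invented name, and it assumes the walk->action/ACTION_AGAIN retry mechanism from the "Generic page walk and ptdump" series the patch is based on.

```c
/*
 * Sketch of a pud_entry() callback following the convention described
 * above.  Illustration only: example_pud_entry() is a made-up name and
 * this assumes the walk->action/ACTION_AGAIN mechanism from the
 * "Generic page walk and ptdump" series.
 */
static int example_pud_entry(pud_t *pudp, unsigned long start,
			     unsigned long end, struct mm_walk *walk)
{
	/* Returns the PUD lock only if *pudp is a stable huge/devmap entry. */
	spinlock_t *ptl = pud_trans_huge_lock(pudp, walk->vma);

	if (!ptl) {
		/*
		 * Not a stable huge PUD: either an ordinary page table, or
		 * an entry in the middle of changing.  In the latter case
		 * ask the walker to re-read it and call us again.
		 */
		if (pud_trans_unstable(pudp))
			walk->action = ACTION_AGAIN;
		return 0;
	}

	/*
	 * Lock held: the entry cannot change underneath us, so a plain
	 * dereference is fine and pud_none()/!pud_present() checks would
	 * be dead code.
	 */
	/* ... operate on *pudp here ... */

	spin_unlock(ptl);
	return 0;
}
```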
diff --git a/mm/hmm.c b/mm/hmm.c
index a71295e99968..d4aae4dcc6e8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -480,28 +480,22 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 	int ret = 0;
 	spinlock_t *ptl = pud_trans_huge_lock(pudp, walk->vma);
 
-	if (!ptl)
+	if (!ptl) {
+		if (pud_trans_unstable(pudp))
+			walk->action = ACTION_AGAIN;
 		return 0;
+	}
 
 	/* Normally we don't want to split the huge page */
 	walk->action = ACTION_CONTINUE;
 
-	pud = READ_ONCE(*pudp);
-	if (pud_none(pud)) {
-		ret = hmm_vma_walk_hole(start, end, -1, walk);
-		goto out_unlock;
-	}
+	pud = *pudp;
 
 	if (pud_huge(pud) && pud_devmap(pud)) {
 		unsigned long i, npages, pfn;
 		uint64_t *pfns, cpu_flags;
 		bool fault, write_fault;
 
-		if (!pud_present(pud)) {
-			ret = hmm_vma_walk_hole(start, end, -1, walk);
-			goto out_unlock;
-		}
-
 		i = (addr - range->start) >> PAGE_SHIFT;
 		npages = (end - addr) >> PAGE_SHIFT;
 		pfns = &range->pfns[i];
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 5895ce4f1a85..4598f545b869 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -154,7 +154,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 
 		if (walk->vma)
 			split_huge_pud(walk->vma, pud, addr);
-		if (pud_none(*pud))
+		if (pud_trans_unstable(pud))
 			goto again;
 
 		err = walk_pmd_range(pud, addr, next, walk);
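For completeness, the retry side lives in the generic walker rather than in the callback. The loop below is a paraphrase of the shape of walk_pud_range() in the series this patch applies on top of (a fragment only, local declarations omitted, not verbatim kernel code); it shows why ACTION_AGAIN and the post-split pud_trans_unstable() check both end up re-reading the same entry.

```c
/*
 * Paraphrased fragment of walk_pud_range() (not verbatim): the walker
 * re-reads the PUD whenever the callback requests ACTION_AGAIN or the
 * entry is unstable after split_huge_pud().
 */
do {
again:
	next = pud_addr_end(addr, end);

	walk->action = ACTION_SUBTREE;
	if (ops->pud_entry)
		err = ops->pud_entry(pud, addr, next, walk);
	if (err)
		break;

	if (walk->action == ACTION_AGAIN)
		goto again;		/* callback saw an unstable entry */
	if (walk->action == ACTION_CONTINUE)
		continue;		/* callback handled it; skip PMD level */

	if (walk->vma)
		split_huge_pud(walk->vma, pud, addr);
	if (pud_trans_unstable(pud))
		goto again;		/* re-read the entry after the split */

	err = walk_pmd_range(pud, addr, next, walk);
	if (err)
		break;
} while (pud++, addr = next, addr != end);
```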
There are a number of minor misuses of the page table APIs in
hmm_vma_walk_pud():

If the pud_trans_huge_lock() hasn't been obtained it might be because
the PUD is unstable, so we should retry.

If it has been obtained then there's no need for a READ_ONCE, and the
PUD cannot be pud_none() or !pud_present() so these paths are dead code.

Finally in walk_pud_range(), after a call to split_huge_pud() the code
should check pud_trans_unstable() rather than pud_none() to decide
whether the PUD should be retried.

Suggested-by: Thomas Hellström (VMware) <thomas_os@shipmail.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
This is based on top of my "Generic page walk and ptdump" series and
fixes some pre-existing bugs spotted by Thomas.

 mm/hmm.c      | 16 +++++-----------
 mm/pagewalk.c |  2 +-
 2 files changed, 6 insertions(+), 12 deletions(-)
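The last point in the description hinges on the difference between the two predicates: pud_none() only catches a cleared entry, while pud_trans_unstable() also reports a transparent-huge or devmap PUD (or one being torn down), which must not be handed to walk_pmd_range(). A rough paraphrase follows (not the exact definitions in the kernel headers), with an invented helper name.

```c
/*
 * Rough paraphrase of the two checks, for illustration only (the real
 * definitions live in the generic pgtable headers and differ in detail).
 */
static bool example_pud_safe_to_walk_pmds(pud_t *pudp)
{
	/* pud_none(): only true for a completely empty entry. */
	if (pud_none(*pudp))
		return false;

	/*
	 * pud_trans_unstable(): a superset of the above; additionally true
	 * for a huge/devmap entry, or one that may still change under us,
	 * e.g. while being split.  Walking the PMD level under such an
	 * entry would be unsafe, so the caller should retry instead.
	 */
	if (pud_trans_unstable(pudp))
		return false;

	return true;
}
```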