Message ID: 20181111200443.10772-6-paulmck@linux.ibm.com (mailing list archive)
State: New, archived
On Sun, 11 Nov 2018, Paul E. McKenney wrote:
>From: Lance Roy <ldr709@gmail.com>
>
>lockdep_assert_held() is better suited to checking locking requirements,
>since it only checks if the current thread holds the lock regardless of
>whether someone else does. This is also a step towards possibly removing
>spin_is_locked().

So fyi I'm not crazy about these kinds of patches, simply because lockdep
sees a lot less use outside of lab environments, so we can end up missing
potential offenders. There's obviously nothing wrong with what you describe
above per se, just my two cents.

Thanks,
Davidlohr
On Thu, Nov 15, 2018 at 10:49:17AM -0800, Davidlohr Bueso wrote:
> On Sun, 11 Nov 2018, Paul E. McKenney wrote:
>
> >From: Lance Roy <ldr709@gmail.com>
> >
> >lockdep_assert_held() is better suited to checking locking requirements,
> >since it only checks if the current thread holds the lock regardless of
> >whether someone else does. This is also a step towards possibly removing
> >spin_is_locked().
>
> So fyi I'm not crazy about these kinds of patches, simply because lockdep
> sees a lot less use outside of lab environments, so we can end up missing
> potential offenders. There's obviously nothing wrong with what you describe
> above per se, just my two cents.

Fair point! One countervailing advantage of lockdep_assert_held() is that
it is not subject to the false negatives that a spin_is_locked()-based
assertion can produce when some other thread happens to be holding the
lock at the time of the check. But what would you suggest instead?

Thanx, Paul
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c13625c1ad5e..7b86600a47c9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1225,7 +1225,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 {
 	struct mm_struct *mm = mm_slot->mm;

-	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
+	lockdep_assert_held(&khugepaged_mm_lock);

 	if (khugepaged_test_exit(mm)) {
 		/* free mm_slot */
@@ -1631,7 +1631,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 	int progress = 0;

 	VM_BUG_ON(!pages);
-	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
+	lockdep_assert_held(&khugepaged_mm_lock);

 	if (khugepaged_scan.mm_slot)
 		mm_slot = khugepaged_scan.mm_slot;
diff --git a/mm/swap.c b/mm/swap.c
index aa483719922e..5d786019eab9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -823,8 +823,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
-	VM_BUG_ON(NR_CPUS != 1 &&
-		  !spin_is_locked(&lruvec_pgdat(lruvec)->lru_lock));
+	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);

 	if (!list)
 		SetPageLRU(page_tail);