| Message ID | 20200501210518.DA161B7E@viggo.jf.intel.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | mm: tweak page cache migration |
On 5/1/20 2:05 PM, Dave Hansen wrote:
> From: Dave Hansen <dave.hansen@linux.intel.com>
>
> This is not a bug fix.  It was found by inspection, but I believe
> that it is confusing as it stands.
>
> First, page_ref_freeze() is implemented internally with:
>
> 	atomic_cmpxchg(&page->_refcount, expected, 0) == expected
>
> The "cmp" part of cmpxchg is making sure that _refcount==expected,
> which means that there's an implicit check here, equivalent to:
>
> 	page_count(page) == expected_count
>
> This appears to have originated in "e286781: mm: speculative page
> references", which is pretty ancient.  This check is also somewhat
> dangerous to have here because it might lead someone to think that
> page_ref_freeze() *doesn't* do its own page_count() checking.
>
> Remove the unnecessary check.

Makes sense to me.

Acked-by: Yang Shi <yang.shi@linux.alibaba.com>

> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Yang Shi <yang.shi@linux.alibaba.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>
>  b/mm/migrate.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
> --- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
> +++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
> @@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
>  	newzone = page_zone(newpage);
>
>  	xas_lock_irq(&xas);
> -	if (page_count(page) != expected_count || xas_load(&xas) != page) {
> +	if (xas_load(&xas) != page) {
>  		xas_unlock_irq(&xas);
>  		return -EAGAIN;
>  	}
>
> +	/* Freezing will fail if page_count()!=expected_count */
>  	if (!page_ref_freeze(page, expected_count)) {
>  		xas_unlock_irq(&xas);
>  		return -EAGAIN;
> _
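For context, the implicit check Dave describes is visible in the kernel's own definition of page_ref_freeze() in include/linux/page_ref.h, which (tracepoint plumbing omitted) is essentially:

```c
/* include/linux/page_ref.h (simplified): the freeze succeeds only when
 * the refcount currently equals @count, swapping it to 0 atomically. */
static inline int page_ref_freeze(struct page *page, int count)
{
	return likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
}
```

So a caller that tests page_count(page) == expected_count immediately before calling page_ref_freeze(page, expected_count) is repeating a comparison the cmpxchg already performs, and performs atomically.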
```diff
diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
@@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
-	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+	if (xas_load(&xas) != page) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
+	/* Freezing will fail if page_count()!=expected_count */
 	if (!page_ref_freeze(page, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
```
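To see why the dropped check adds nothing, here is a minimal userspace sketch using C11 atomics in place of the kernel's primitives; ref_freeze() and refcount are hypothetical stand-ins, not kernel API. The compare-and-exchange refuses the freeze unless the count matches, so a separate equality test beforehand is pure duplication:

```c
/* Standalone sketch, not kernel code: C11 atomics stand in for
 * atomic_cmpxchg(), and ref_freeze()/refcount are hypothetical names. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcount = 3;	/* stand-in for page->_refcount */

/* Mirrors the page_ref_freeze() logic: atomically swap the count to 0,
 * but only if it currently equals @expected. */
static bool ref_freeze(atomic_int *ref, int expected)
{
	int old = expected;
	return atomic_compare_exchange_strong(ref, &old, 0);
}

int main(void)
{
	/* Wrong expected count: the CAS itself rejects the freeze --
	 * this corresponds to the -EAGAIN path in migrate_page_move_mapping(). */
	printf("freeze at 2: %s\n", ref_freeze(&refcount, 2) ? "frozen" : "rejected");

	/* Correct expected count: the check and the freeze happen in one step. */
	printf("freeze at 3: %s\n", ref_freeze(&refcount, 3) ? "frozen" : "rejected");
	printf("refcount now: %d\n", atomic_load(&refcount));	/* prints 0 */
	return 0;
}
```

Note that putting `if (atomic_load(&refcount) != expected) return false;` in front of the CAS would not make it any safer: the count could still change between the load and the exchange, which is exactly why the CAS must recheck, and why the kernel-side pre-check was removable.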