| Message ID | 161044408728.1482714.9086710868634042303.stgit@dwillia2-desk3.amr.corp.intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE |
On 12.01.21 10:34, Dan Williams wrote:
> pfn_section_valid() determines pfn validity on subsection granularity.
>
> pfn_valid_within() internally uses pfn_section_valid(), but gates it
> with early_section() to preserve the traditional behavior of pfn_valid()
> before subsection support was added.
>
> pfn_to_online_page() wants the explicit precision that pfn_valid() does
> not offer, so use pfn_section_valid() directly. Since
> pfn_to_online_page() already open codes the validity of the section
> number vs NR_MEM_SECTIONS, there's not much value to using
> pfn_valid_within(), just use pfn_section_valid(). This loses the
> valid_section() check that pfn_valid_within() was performing, but that
> was already redundant with the online check.
>
> Fixes: b13bc35193d9 ("mm/hotplug: invalid PFNs from pfn_to_online_page()")
> Cc: Qian Cai <cai@lca.pw>
> Cc: Michal Hocko <mhocko@suse.com>
> Reported-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/memory_hotplug.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 55a69d4396e7..a845b3979bc0 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -308,11 +308,19 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
>  struct page *pfn_to_online_page(unsigned long pfn)
>  {
>  	unsigned long nr = pfn_to_section_nr(pfn);
> +	struct mem_section *ms;
> +
> +	if (nr >= NR_MEM_SECTIONS)
> +		return NULL;
> +
> +	ms = __nr_to_section(nr);
> +	if (!online_section(ms))
> +		return NULL;
> +
> +	if (!pfn_section_valid(ms, pfn))
> +		return NULL;

That's not sufficient for alternative implementations of pfn_valid().

You still need some kind of pfn_valid(pfn) for alternative versions of
pfn_valid(). Consider arm64 memory holes in the memmap. See their
current (yet to be fixed/reworked) pfn_valid() implementation.
(pfn_valid_within() is implicitly active on arm64)

Actually, I think we should add something like the following, to make
this clearer (pfn_valid_within() is confusing)

#ifdef CONFIG_HAVE_ARCH_PFN_VALID
	/* We might have to check for holes inside the memmap. */
	if (!pfn_valid(pfn))
		return NULL;
#endif

> -	if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
> -	    pfn_valid_within(pfn))
> -		return pfn_to_page(pfn);
> -	return NULL;
> +	return pfn_to_page(pfn);
>  }
>  EXPORT_SYMBOL_GPL(pfn_to_online_page);
On Tue, Jan 12, 2021 at 10:53:17AM +0100, David Hildenbrand wrote:
> That's not sufficient for alternative implementations of pfn_valid().
>
> You still need some kind of pfn_valid(pfn) for alternative versions of
> pfn_valid(). Consider arm64 memory holes in the memmap. See their
> current (yet to be fixed/reworked) pfn_valid() implementation.
> (pfn_valid_within() is implicitly active on arm64)
>
> Actually, I think we should add something like the following, to make
> this clearer (pfn_valid_within() is confusing)
>
> #ifdef CONFIG_HAVE_ARCH_PFN_VALID
> 	/* We might have to check for holes inside the memmap. */
> 	if (!pfn_valid(pfn))
> 		return NULL;
> #endif

I have to confess that I was a bit confused by pfn_valid_within +
HOLES_IN_ZONES + HAVE_ARCH_PFN_VALID.

At first I thought that we should stick with pfn_valid_within, as we
also depend on HOLES_IN_ZONES, so it could be that

	if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID))
		...

would do too much work: if CONFIG_HOLES_IN_ZONES was not set but an
arch pfn_valid was provided, we would perform unneeded checks.

But on a closer look, CONFIG_HOLES_IN_ZONES is set by default on arm64,
and on ia64 when SPARSEMEM is set, so this looks fine.
On Tue, Jan 12, 2021 at 1:53 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 12.01.21 10:34, Dan Williams wrote:
> > pfn_section_valid() determines pfn validity on subsection granularity.
> >
> > pfn_valid_within() internally uses pfn_section_valid(), but gates it
> > with early_section() to preserve the traditional behavior of pfn_valid()
> > before subsection support was added.
> >
> > pfn_to_online_page() wants the explicit precision that pfn_valid() does
> > not offer, so use pfn_section_valid() directly. Since
> > pfn_to_online_page() already open codes the validity of the section
> > number vs NR_MEM_SECTIONS, there's not much value to using
> > pfn_valid_within(), just use pfn_section_valid(). This loses the
> > valid_section() check that pfn_valid_within() was performing, but that
> > was already redundant with the online check.
> >
> > Fixes: b13bc35193d9 ("mm/hotplug: invalid PFNs from pfn_to_online_page()")
> > Cc: Qian Cai <cai@lca.pw>
> > Cc: Michal Hocko <mhocko@suse.com>
> > Reported-by: David Hildenbrand <david@redhat.com>
> > ---
> >  mm/memory_hotplug.c | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 55a69d4396e7..a845b3979bc0 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -308,11 +308,19 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
> >  struct page *pfn_to_online_page(unsigned long pfn)
> >  {
> >  	unsigned long nr = pfn_to_section_nr(pfn);
> > +	struct mem_section *ms;
> > +
> > +	if (nr >= NR_MEM_SECTIONS)
> > +		return NULL;
> > +
> > +	ms = __nr_to_section(nr);
> > +	if (!online_section(ms))
> > +		return NULL;
> > +
> > +	if (!pfn_section_valid(ms, pfn))
> > +		return NULL;
>
> That's not sufficient for alternative implementations of pfn_valid().
>
> You still need some kind of pfn_valid(pfn) for alternative versions of
> pfn_valid(). Consider arm64 memory holes in the memmap. See their
> current (yet to be fixed/reworked) pfn_valid() implementation.
> (pfn_valid_within() is implicitly active on arm64)
>
> Actually, I think we should add something like the following, to make
> this clearer (pfn_valid_within() is confusing)
>
> #ifdef CONFIG_HAVE_ARCH_PFN_VALID
> 	/* We might have to check for holes inside the memmap. */
> 	if (!pfn_valid(pfn))
> 		return NULL;
> #endif

Looks good to me, I'll take Oscar's version that uses IS_ENABLED().
Skipping the call to pfn_valid() saves 16 bytes of code text on x86_64.
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 55a69d4396e7..a845b3979bc0 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -308,11 +308,19 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
 struct page *pfn_to_online_page(unsigned long pfn)
 {
 	unsigned long nr = pfn_to_section_nr(pfn);
+	struct mem_section *ms;
+
+	if (nr >= NR_MEM_SECTIONS)
+		return NULL;
+
+	ms = __nr_to_section(nr);
+	if (!online_section(ms))
+		return NULL;
+
+	if (!pfn_section_valid(ms, pfn))
+		return NULL;
 
-	if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
-	    pfn_valid_within(pfn))
-		return pfn_to_page(pfn);
-	return NULL;
+	return pfn_to_page(pfn);
 }
 EXPORT_SYMBOL_GPL(pfn_to_online_page);