Message ID | 20191006085646.5768-8-david@redhat.com (mailing list archive)
---|---
State | New, archived
Series | [v6,01/10] mm/memunmap: Don't access uninitialized memmap in memunmap_pages()
On Sun, Oct 06, 2019 at 10:56:43AM +0200, David Hildenbrand wrote:
> With shrink_pgdat_span() out of the way, we now always have a valid
> zone.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Wei Yang <richardw.yang@linux.intel.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Oscar Salvador <osalvador@suse.de>

> ---
>  mm/memory_hotplug.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index bf5173e7913d..f294918f7211 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -337,7 +337,7 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
>  		if (unlikely(pfn_to_nid(start_pfn) != nid))
>  			continue;
>
> -		if (zone && zone != page_zone(pfn_to_page(start_pfn)))
> +		if (zone != page_zone(pfn_to_page(start_pfn)))
>  			continue;
>
>  		return start_pfn;
> @@ -362,7 +362,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
>  		if (unlikely(pfn_to_nid(pfn) != nid))
>  			continue;
>
> -		if (zone && zone != page_zone(pfn_to_page(pfn)))
> +		if (zone != page_zone(pfn_to_page(pfn)))
>  			continue;
>
>  		return pfn;
> --
> 2.21.0
>
On Sun, Oct 06, 2019 at 10:56:43AM +0200, David Hildenbrand wrote:
>With shrink_pgdat_span() out of the way, we now always have a valid
>zone.
>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: David Hildenbrand <david@redhat.com>
>Cc: Michal Hocko <mhocko@suse.com>
>Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>Cc: Dan Williams <dan.j.williams@intel.com>
>Cc: Wei Yang <richardw.yang@linux.intel.com>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
On 05.02.20 09:57, Wei Yang wrote:
> On Sun, Oct 06, 2019 at 10:56:43AM +0200, David Hildenbrand wrote:
>> With shrink_pgdat_span() out of the way, we now always have a valid
>> zone.
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Cc: Wei Yang <richardw.yang@linux.intel.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>

Just FYI, the patches are now upstream, so the rb's can no longer be
applied. (but we can send fixes if we find that something is broken ;) ).

Thanks!
On Wed, Feb 05, 2020 at 09:59:41AM +0100, David Hildenbrand wrote:
>On 05.02.20 09:57, Wei Yang wrote:
>> On Sun, Oct 06, 2019 at 10:56:43AM +0200, David Hildenbrand wrote:
>>> With shrink_pgdat_span() out of the way, we now always have a valid
>>> zone.
>>>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Oscar Salvador <osalvador@suse.de>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>>> Cc: Dan Williams <dan.j.williams@intel.com>
>>> Cc: Wei Yang <richardw.yang@linux.intel.com>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>
>> Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
>
>Just FYI, the patches are now upstream, so the rb's can no longer be
>applied. (but we can send fixes if we find that something is broken ;)
>). Thanks!
>

Thanks for reminding. :-)

>--
>Thanks,
>
>David / dhildenb
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bf5173e7913d..f294918f7211 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -337,7 +337,7 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 		if (unlikely(pfn_to_nid(start_pfn) != nid))
 			continue;

-		if (zone && zone != page_zone(pfn_to_page(start_pfn)))
+		if (zone != page_zone(pfn_to_page(start_pfn)))
 			continue;

 		return start_pfn;
@@ -362,7 +362,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 		if (unlikely(pfn_to_nid(pfn) != nid))
 			continue;

-		if (zone && zone != page_zone(pfn_to_page(pfn)))
+		if (zone != page_zone(pfn_to_page(pfn)))
 			continue;

 		return pfn;
With shrink_pgdat_span() out of the way, we now always have a valid
zone.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)