| Message ID | 20190507183804.5512-2-david@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v2,1/8] mm/memory_hotplug: Simplify and fix check_hotplug_memory_range() |
On Tue, May 7, 2019 at 11:38 AM David Hildenbrand <david@redhat.com> wrote:
>
> By converting start and size to page granularity, we actually ignore
> unaligned parts within a page instead of properly bailing out with an
> error.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Qian Cai <cai@lca.pw>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Cc: Arun KS <arunks@codeaurora.org>
> Cc: Mathieu Malaterre <malat@debian.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
On Tue, May 07, 2019 at 08:37:57PM +0200, David Hildenbrand wrote:
>By converting start and size to page granularity, we actually ignore
>unaligned parts within a page instead of properly bailing out with an
>error.
>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: Michal Hocko <mhocko@suse.com>
>Cc: David Hildenbrand <david@redhat.com>
>Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>Cc: Qian Cai <cai@lca.pw>
>Cc: Wei Yang <richard.weiyang@gmail.com>
>Cc: Arun KS <arunks@codeaurora.org>
>Cc: Mathieu Malaterre <malat@debian.org>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 328878b6799d..202febe88b58 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1050,16 +1050,11 @@ int try_online_node(int nid)
 
 static int check_hotplug_memory_range(u64 start, u64 size)
 {
-	unsigned long block_sz = memory_block_size_bytes();
-	u64 block_nr_pages = block_sz >> PAGE_SHIFT;
-	u64 nr_pages = size >> PAGE_SHIFT;
-	u64 start_pfn = PFN_DOWN(start);
-
 	/* memory range must be block size aligned */
-	if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) ||
-	    !IS_ALIGNED(nr_pages, block_nr_pages)) {
+	if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
+	    !IS_ALIGNED(size, memory_block_size_bytes())) {
 		pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx",
-		       block_sz, start, size);
+		       memory_block_size_bytes(), start, size);
 		return -EINVAL;
 	}
By converting start and size to page granularity, we actually ignore
unaligned parts within a page instead of properly bailing out with an
error.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Mathieu Malaterre <malat@debian.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)