[2/2] mm: Add a bounds check in devm_memremap_pages()

Message ID 20190910025225.25904-3-alastair@au1.ibm.com (mailing list archive)
State New, archived
Series Add bounds check for Hotplugged memory

Commit Message

Alastair D'Silva Sept. 10, 2019, 2:52 a.m. UTC
From: Alastair D'Silva <alastair@d-silva.org>

The call to check_hotplug_memory_addressable() validates that the memory
is fully addressable.

Without this call, we may remap pages that are not physically
addressable, resulting in bogus section numbers being returned
from __section_nr().

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
---
 mm/memremap.c | 8 ++++++++
 1 file changed, 8 insertions(+)
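
For context, check_hotplug_memory_addressable() is introduced by patch
1/2 of this series (not shown on this page). A physical address wider
than MAX_PHYSMEM_BITS cannot be represented in the sparsemem section
maps, so pfn_to_section_nr() on such a range yields a section number at
or beyond NR_MEM_SECTIONS; that is where the bogus __section_nr()
results come from. A minimal sketch of the helper, with the signature
inferred from the call site below (the actual implementation in patch
1/2 may differ):

static int check_hotplug_memory_addressable(u64 start, u64 size)
{
	/* Last byte of the range being hotplugged/remapped. */
	const u64 max_addr = start + size - 1;

	/*
	 * Any bits set above MAX_PHYSMEM_BITS mean the address cannot
	 * be represented in the section maps; reject the range.
	 */
	if (max_addr >> MAX_PHYSMEM_BITS)
		return -E2BIG;

	return 0;
}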

Comments

David Hildenbrand Sept. 10, 2019, 7:38 a.m. UTC | #1
On 10.09.19 04:52, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@d-silva.org>
> 
> The call to check_hotplug_memory_addressable() validates that the memory
> is fully addressable.
> 
> Without this call, we may remap pages that are not physically
> addressable, resulting in bogus section numbers being returned
> from __section_nr().
> 
> Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
> ---
>  mm/memremap.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 86432650f829..fd00993caa3e 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
>  
>  	mem_hotplug_begin();
>  
> +	error = check_hotplug_memory_addressable(res->start,
> +						 resource_size(res));
> +	if (error) {
> +		mem_hotplug_done();
> +		goto err_checkrange;
> +	}
> +

No need to check under the memory hotplug lock.

>  	/*
>  	 * For device private memory we call add_pages() as we only need to
>  	 * allocate and initialize struct page for the device memory. More-
> @@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
>  
>   err_add_memory:
>  	kasan_remove_zero_shadow(__va(res->start), resource_size(res));
> + err_checkrange:
>   err_kasan:
>  	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
>   err_pfn_remap:
>
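
One way to act on David's locking comment above: the range check only
reads the resource bounds, so it can be hoisted above
mem_hotplug_begin(), which also removes the early mem_hotplug_done() on
the error path. A sketch of the reordering, not the final patch:

	error = check_hotplug_memory_addressable(res->start,
						 resource_size(res));
	if (error)
		goto err_checkrange;	/* no hotplug lock held yet */

	mem_hotplug_begin();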
Alastair D'Silva Sept. 10, 2019, 10:24 a.m. UTC | #2
> On 10.09.19 04:52, Alastair D'Silva wrote:
> > From: Alastair D'Silva <alastair@d-silva.org>
> >
> > The call to check_hotplug_memory_addressable() validates that the
> > memory is fully addressable.
> >
> > Without this call, we may remap pages that are not physically
> > addressable, resulting in bogus section numbers being returned
> > from __section_nr().
> >
> > Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
> > ---
> >  mm/memremap.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/mm/memremap.c b/mm/memremap.c index
> > 86432650f829..fd00993caa3e 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device
> *dev,
> > struct dev_pagemap *pgmap)
> >
> >  	mem_hotplug_begin();
> >
> > +	error = check_hotplug_memory_addressable(res->start,
> > +						 resource_size(res));
> > +	if (error) {
> > +		mem_hotplug_done();
> > +		goto err_checkrange;
> > +	}
> > +
> 
> No need to check under the memory hotplug lock.
> 

Thanks, I'll adjust it.

> >  	/*
> >  	 * For device private memory we call add_pages() as we only need to
> >  	 * allocate and initialize struct page for the device memory. More-
> > @@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev,
> > struct dev_pagemap *pgmap)
> >
> >   err_add_memory:
> >  	kasan_remove_zero_shadow(__va(res->start), resource_size(res));
> > + err_checkrange:
> >   err_kasan:
> >  	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
> >   err_pfn_remap:
> >
Patch

diff --git a/mm/memremap.c b/mm/memremap.c
index 86432650f829..fd00993caa3e 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 
 	mem_hotplug_begin();
 
+	error = check_hotplug_memory_addressable(res->start,
+						 resource_size(res));
+	if (error) {
+		mem_hotplug_done();
+		goto err_checkrange;
+	}
+
 	/*
 	 * For device private memory we call add_pages() as we only need to
 	 * allocate and initialize struct page for the device memory. More-
@@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 
  err_add_memory:
 	kasan_remove_zero_shadow(__va(res->start), resource_size(res));
+ err_checkrange:
  err_kasan:
 	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
  err_pfn_remap:
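
With the check in place, an attempt to remap a range that extends past
the maximum addressable physical address fails cleanly instead of
corrupting the section bookkeeping. A hypothetical call site, assuming
the helper returns -E2BIG as in the sketch above:

	/* devm_memremap_pages() returns an ERR_PTR on failure. */
	addr = devm_memremap_pages(dev, pgmap);
	if (IS_ERR(addr))
		return PTR_ERR(addr);	/* -E2BIG for an out-of-range res */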