| Message ID | 20190926013406.16133-2-alastair@au1.ibm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Add bounds check for Hotplugged memory |
On 26.09.19 03:34, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@d-silva.org>
>
> On PowerPC, the address ranges allocated to OpenCAPI LPC memory are allocated from firmware. These address ranges may be higher than what older kernels permit, as we increased the maximum permissible address in commit 4ffe713b7587 ("powerpc/mm: Increase the max addressable memory to 2PB"). It is possible that the addressable range may change again in the future.
>
> In this scenario, we end up with a bogus section returned from __section_nr (see the discussion on the thread "mm: Trigger bug on if a section is not found in __section_nr").
>
> Adding a check here means that we fail early and have an opportunity to handle the error gracefully, rather than rumbling on and potentially accessing an incorrect section.
>
> Further discussion is also on the thread ("powerpc: Perform a bounds check in arch_add_memory")
> http://lkml.kernel.org/r/20190827052047.31547-1-alastair@au1.ibm.com
>
> Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
> ---
>  mm/memory_hotplug.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c73f09913165..212804c0f7f5 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
>  	return 0;
>  }
>
> +static int check_hotplug_memory_addressable(unsigned long pfn,
> +					    unsigned long nr_pages)
> +{
> +	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
> +
> +	if (max_addr >> MAX_PHYSMEM_BITS) {
> +		WARN(1,
> +		     "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
> +		     pfn << PAGE_SHIFT, max_addr,
> +		     (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
> +		return -E2BIG;
> +	}
> +
> +	return 0;
> +}
> +
>  /*
>   * Reasonably generic function for adding memory. It is
>   * expected that archs that support memory hotplug will
> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>  	unsigned long nr, start_sec, end_sec;
>  	struct vmem_altmap *altmap = restrictions->altmap;
>
> +	err = check_hotplug_memory_addressable(pfn, nr_pages);
> +	if (err)
> +		return err;
> +
>  	if (altmap) {
>  		/*
>  		 * Validate altmap is within bounds of the total request

I know Michal suggested this, but I still prefer checking early instead of when we're knees-deep into adding of memory. But as I don't have any power here, the code looks fine, although I consider the computations in check_hotplug_memory_addressable() fairly ugly.
On 26.09.19 09:12, David Hildenbrand wrote:
> On 26.09.19 03:34, Alastair D'Silva wrote:
>> From: Alastair D'Silva <alastair@d-silva.org>
>>
>> [... patch quoted in full in the parent mail ...]
>
> I know Michal suggested this, but I still prefer checking early instead of when we're knees-deep into adding of memory. But as I don't have any power here, the code looks fine, although I consider the computations in check_hotplug_memory_addressable() fairly ugly.

Forgot to add

Acked-by: David Hildenbrand <david@redhat.com>

:)
On Thu, Sep 26, 2019 at 11:34:05AM +1000, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@d-silva.org>
> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>  	unsigned long nr, start_sec, end_sec;
>  	struct vmem_altmap *altmap = restrictions->altmap;
>
> +	err = check_hotplug_memory_addressable(pfn, nr_pages);
> +	if (err)
> +		return err;
> +

I am probably off here because 1) I am jumping blind in a middle of a discussion and 2) I got back from holidays yesterday, so bear with me.

Would not be better to just place the check in add_memory_resource instead?

Take into account that we create the memory mapping for this range in arch_add_memory, so it looks weird to me to create the mapping if we are going to fail right after because the range is simply off.

But as I said, I might be missing some previous discussion.
On 26.09.19 09:40, Oscar Salvador wrote:
> On Thu, Sep 26, 2019 at 11:34:05AM +1000, Alastair D'Silva wrote:
>> From: Alastair D'Silva <alastair@d-silva.org>
>> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>>  	unsigned long nr, start_sec, end_sec;
>>  	struct vmem_altmap *altmap = restrictions->altmap;
>>
>> +	err = check_hotplug_memory_addressable(pfn, nr_pages);
>> +	if (err)
>> +		return err;
>> +
>
> I am probably off here because 1) I am jumping blind in a middle of a discussion and 2) I got back from holidays yesterday, so bear with me.
>
> Would not be better to just place the check in add_memory_resource instead?

At least devmem/memremap needs special handling.

> Take into account that we create the memory mapping for this range in arch_add_memory, so it looks weird to me to create the mapping if we are going to fail right after because the range is simply off.
>
> But as I said, I might be missing some previous discussion.
On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
> On 26.09.19 03:34, Alastair D'Silva wrote:
> > From: Alastair D'Silva <alastair@d-silva.org>
> >
> > [... patch trimmed; quoted in full earlier in the thread ...]
>
> I know Michal suggested this, but I still prefer checking early instead of when we're knees-deep into adding of memory.

What is your concern here? Unwinding the state should be pretty straightforward from this failure path.
On Thu 26-09-19 11:34:05, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@d-silva.org>
>
> On PowerPC, the address ranges allocated to OpenCAPI LPC memory are allocated from firmware. [...]
>
> Signed-off-by: Alastair D'Silva <alastair@d-silva.org>

Yes, this looks better to me. E2BIG is a new error code for this path but no callers seem to be deeply concerned about specific error codes, so this should be safe.

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> [... diff trimmed; quoted in full earlier in the thread ...]
> --
> 2.21.0
On 26.09.19 09:43, Michal Hocko wrote:
> On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
>> On 26.09.19 03:34, Alastair D'Silva wrote:
>>> From: Alastair D'Silva <alastair@d-silva.org>
>>>
>>> [... patch trimmed; quoted in full earlier in the thread ...]
>>
>> I know Michal suggested this, but I still prefer checking early instead of when we're knees-deep into adding of memory.
>
> What is your concern here? Unwinding the state should be pretty straightforward from this failure path.

Just the general "check what you can check early without locks" approach. But yeah, this series is probably not worth a v5, so I can live with this change just fine :)
On Thu 26-09-19 09:40:05, Oscar Salvador wrote:
> On Thu, Sep 26, 2019 at 11:34:05AM +1000, Alastair D'Silva wrote:
> > From: Alastair D'Silva <alastair@d-silva.org>
> > @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
> >  	unsigned long nr, start_sec, end_sec;
> >  	struct vmem_altmap *altmap = restrictions->altmap;
> >
> > +	err = check_hotplug_memory_addressable(pfn, nr_pages);
> > +	if (err)
> > +		return err;
> > +
>
> I am probably off here because 1) I am jumping blind in a middle of a discussion and 2) I got back from holidays yesterday, so bear with me.
>
> Would not be better to just place the check in add_memory_resource instead?

This was the previous version of the patch. The argument is that we do not want each add_pages user to think of this special handling.
On Thu, Sep 26, 2019 at 11:34:05AM +1000, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@d-silva.org>
>
> On PowerPC, the address ranges allocated to OpenCAPI LPC memory are allocated from firmware. [...]
>
> Signed-off-by: Alastair D'Silva <alastair@d-silva.org>

Reviewed-by: Oscar Salvador <osalvador@suse.de>

Just a nit-picking below:

> ---
>  mm/memory_hotplug.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c73f09913165..212804c0f7f5 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
>  	return 0;
>  }
>
> +static int check_hotplug_memory_addressable(unsigned long pfn,
> +					    unsigned long nr_pages)
> +{
> +	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;

I would use PFN_PHYS instead:

	unsigned long max_addr = PFN_PHYS(pfn + nr_pages) - 1;

> +
> +	if (max_addr >> MAX_PHYSMEM_BITS) {
> +		WARN(1,
> +		     "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
> +		     pfn << PAGE_SHIFT, max_addr,

Same here.

> +		     (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);

I would use a local variable to hold this computation.

> +		return -E2BIG;
> +	}
> +
> +	return 0;
On Thu, 2019-09-26 at 09:53 +0200, Oscar Salvador wrote:
> On Thu, Sep 26, 2019 at 11:34:05AM +1000, Alastair D'Silva wrote:
> > From: Alastair D'Silva <alastair@d-silva.org>
> >
> > [... patch trimmed; quoted in full earlier in the thread ...]
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>
> Just a nit-picking below:
>
> > +	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
>
> I would use PFN_PHYS instead:
>
> 	unsigned long max_addr = PFN_PHYS(pfn + nr_pages) - 1;
>
> > +		     pfn << PAGE_SHIFT, max_addr,
>
> Same here.
>
> > +		     (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
>
> I would use a local variable to hold this computation.

Looks like I'll have to do another spin to change that to a ull anyway, so I'll implement those suggestions.
On Thu, 2019-09-26 at 09:46 +0200, David Hildenbrand wrote:
> On 26.09.19 09:43, Michal Hocko wrote:
> > On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
> > > On 26.09.19 03:34, Alastair D'Silva wrote:
> > > > From: Alastair D'Silva <alastair@d-silva.org>
> > > >
> > > > [... patch trimmed; quoted in full earlier in the thread ...]
> > >
> > > I know Michal suggested this, but I still prefer checking early instead of when we're knees-deep into adding of memory.
> >
> > What is your concern here? Unwinding the state should be pretty straightforward from this failure path.
>
> Just the general "check what you can check early without locks" approach. But yeah, this series is probably not worth a v5, so I can live with this change just fine :)

I'm going to spin a V5 anyway - where were you suggesting?

> --
> Thanks,
>   David / dhildenb
On 27.09.19 08:33, Alastair D'Silva wrote:
> On Thu, 2019-09-26 at 09:46 +0200, David Hildenbrand wrote:
>> On 26.09.19 09:43, Michal Hocko wrote:
>>> On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
>>>> On 26.09.19 03:34, Alastair D'Silva wrote:
>>>>> From: Alastair D'Silva <alastair@d-silva.org>
>>>>>
>>>>> [... patch trimmed; quoted in full earlier in the thread ...]
>>>>
>>>> I know Michal suggested this, but I still prefer checking early instead of when we're knees-deep into adding of memory.
>>>
>>> What is your concern here? Unwinding the state should be pretty straightforward from this failure path.
>>
>> Just the general "check what you can check early without locks" approach. But yeah, this series is probably not worth a v5, so I can live with this change just fine :)
>
> I'm going to spin a V5 anyway - where were you suggesting?

I preferred the previous places where we checked, but I think we settled on __add_pages(). So I am fine with the changes Oscar proposed.

You might want to turn "max_addr" into a const if you feel fancy. :)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c73f09913165..212804c0f7f5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 	return 0;
 }
 
+static int check_hotplug_memory_addressable(unsigned long pfn,
+					    unsigned long nr_pages)
+{
+	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
+
+	if (max_addr >> MAX_PHYSMEM_BITS) {
+		WARN(1,
+		     "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
+		     pfn << PAGE_SHIFT, max_addr,
+		     (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
+		return -E2BIG;
+	}
+
+	return 0;
+}
+
 /*
  * Reasonably generic function for adding memory. It is
  * expected that archs that support memory hotplug will
@@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
 	unsigned long nr, start_sec, end_sec;
 	struct vmem_altmap *altmap = restrictions->altmap;
 
+	err = check_hotplug_memory_addressable(pfn, nr_pages);
+	if (err)
+		return err;
+
 	if (altmap) {
 		/*
 		 * Validate altmap is within bounds of the total request