
[3/3] mm/sparsemem: avoid memmap overwrite for non-SPARSEMEM_VMEMMAP

Message ID 20200206231629.14151-4-richardw.yang@linux.intel.com (mailing list archive)
State New, archived
Series Fixes "mm/sparsemem: support sub-section hotplug"

Commit Message

Wei Yang Feb. 6, 2020, 11:16 p.m. UTC
In the classic SPARSEMEM case (without SPARSEMEM_VMEMMAP),
populate_section_memmap() allocates the memmap for the whole section,
even if we only want a sub-section. This would lead to a memmap
overwrite if we add a sub-section to an already populated section.

Just return the already populated memmap in the non-SPARSEMEM_VMEMMAP
case.

Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
CC: Dan Williams <dan.j.williams@intel.com>
---
 mm/sparse.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
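
[Editor's note] The populate path the commit message refers to is the
!CONFIG_SPARSEMEM_VMEMMAP variant of __populate_section_memmap() in
mm/sparse.c. A simplified sketch (paraphrased from kernels of roughly
this vintage, error handling trimmed, meant to be read in the
mm/sparse.c context) illustrates the point being made: the allocation
is sized by PAGES_PER_SECTION and ignores nr_pages.

/*
 * Simplified sketch of the !CONFIG_SPARSEMEM_VMEMMAP populate path
 * (paraphrased, not a verbatim copy).  The memmap is always sized for
 * a full section, regardless of the nr_pages that was asked for.
 */
struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
	struct page *page;

	page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, get_order(memmap_size));
	if (page)
		return (struct page *)pfn_to_kaddr(page_to_pfn(page));

	/* fall back to vmalloc when a high-order allocation fails */
	return vmalloc(memmap_size);
}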

Comments

Dan Williams Feb. 7, 2020, 2:06 a.m. UTC | #1
On Thu, Feb 6, 2020 at 3:17 PM Wei Yang <richardw.yang@linux.intel.com> wrote:
>
> In the classic SPARSEMEM case (without SPARSEMEM_VMEMMAP),
> populate_section_memmap() allocates the memmap for the whole section,
> even if we only want a sub-section. This would lead to a memmap
> overwrite if we add a sub-section to an already populated section.
>
> Just return the already populated memmap in the non-SPARSEMEM_VMEMMAP
> case.
>
> Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> CC: Dan Williams <dan.j.williams@intel.com>
> ---
>  mm/sparse.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 56816f653588..c75ca40db513 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -836,6 +836,16 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
>         if (nr_pages < PAGES_PER_SECTION && early_section(ms))
>                 return pfn_to_page(pfn);
>
> +       /*
> +        * If it is not SPARSEMEM_VMEMMAP, we always populate memmap for the
> +        * whole section, even for a sub-section.
> +        *
> +        * Return its memmap if already populated to avoid memmap overwrite.
> +        */
> +       if (!IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
> +               valid_section(ms))
> +               return __section_mem_map_addr(ms);

Again, is check_pfn_span() failing to prevent this path?
Baoquan He Feb. 7, 2020, 3:50 a.m. UTC | #2
On 02/06/20 at 06:06pm, Dan Williams wrote:
> On Thu, Feb 6, 2020 at 3:17 PM Wei Yang <richardw.yang@linux.intel.com> wrote:
> >
> > In the classic SPARSEMEM case (without SPARSEMEM_VMEMMAP),
> > populate_section_memmap() allocates the memmap for the whole section,
> > even if we only want a sub-section. This would lead to a memmap
> > overwrite if we add a sub-section to an already populated section.
> >
> > Just return the already populated memmap in the non-SPARSEMEM_VMEMMAP
> > case.
> >
> > Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> > CC: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  mm/sparse.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index 56816f653588..c75ca40db513 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -836,6 +836,16 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
> >         if (nr_pages < PAGES_PER_SECTION && early_section(ms))
> >                 return pfn_to_page(pfn);
> >
> > +       /*
> > +        * If it is not SPARSEMEM_VMEMMAP, we always populate memmap for the
> > +        * whole section, even for a sub-section.
> > +        *
> > +        * Return its memmap if already populated to avoid memmap overwrite.
> > +        */
> > +       if (!IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
> > +               valid_section(ms))
> > +               return __section_mem_map_addr(ms);
> 
> Again, is check_pfn_span() failing to prevent this path?

The answer should be yes; this patch is not needed.

>
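
[Editor's note] For context on the guard Dan points to: check_pfn_span()
lives in mm/memory_hotplug.c and is run by __add_pages() before any
section is activated. A paraphrased sketch of its alignment rule
(details approximate for kernels of this era) shows why a sub-section
hot-add on a !SPARSEMEM_VMEMMAP config is rejected long before
section_activate() could re-populate an existing memmap.

static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
		const char *reason)
{
	unsigned long min_align;

	/*
	 * Sub-section granularity is only allowed with SPARSEMEM_VMEMMAP;
	 * otherwise the whole request must be section-aligned.
	 */
	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
		min_align = PAGES_PER_SUBSECTION;
	else
		min_align = PAGES_PER_SECTION;

	if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
		WARN(1, "Misaligned __%s_pages start: %#lx end: %#lx\n",
				reason, pfn, pfn + nr_pages - 1);
		return -EINVAL;
	}
	return 0;
}
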
Wei Yang Feb. 7, 2020, 11:02 a.m. UTC | #3
On Thu, Feb 06, 2020 at 06:06:54PM -0800, Dan Williams wrote:
>On Thu, Feb 6, 2020 at 3:17 PM Wei Yang <richardw.yang@linux.intel.com> wrote:
>>
>> In the classic SPARSEMEM case (without SPARSEMEM_VMEMMAP),
>> populate_section_memmap() allocates the memmap for the whole section,
>> even if we only want a sub-section. This would lead to a memmap
>> overwrite if we add a sub-section to an already populated section.
>>
>> Just return the already populated memmap in the non-SPARSEMEM_VMEMMAP
>> case.
>>
>> Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
>> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> CC: Dan Williams <dan.j.williams@intel.com>
>> ---
>>  mm/sparse.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/mm/sparse.c b/mm/sparse.c
>> index 56816f653588..c75ca40db513 100644
>> --- a/mm/sparse.c
>> +++ b/mm/sparse.c
>> @@ -836,6 +836,16 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
>>         if (nr_pages < PAGES_PER_SECTION && early_section(ms))
>>                 return pfn_to_page(pfn);
>>
>> +       /*
>> +        * If it is not SPARSEMEM_VMEMMAP, we always populate memmap for the
>> +        * whole section, even for a sub-section.
>> +        *
>> +        * Return its memmap if already populated to avoid memmap overwrite.
>> +        */
>> +       if (!IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
>> +               valid_section(ms))
>> +               return __section_mem_map_addr(ms);
>
>Again, is check_pfn_span() failing to prevent this path?

Oh, you are right. Thanks

Patch

diff --git a/mm/sparse.c b/mm/sparse.c
index 56816f653588..c75ca40db513 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -836,6 +836,16 @@  static struct page * __meminit section_activate(int nid, unsigned long pfn,
 	if (nr_pages < PAGES_PER_SECTION && early_section(ms))
 		return pfn_to_page(pfn);
 
+	/*
+	 * If it is not SPARSEMEM_VMEMMAP, we always populate memmap for the
+	 * whole section, even for a sub-section.
+	 *
+	 * Return its memmap if already populated to avoid memmap overwrite.
+	 */
+	if (!IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
+		valid_section(ms))
+		return __section_mem_map_addr(ms);
+
 	memmap = populate_section_memmap(pfn, nr_pages, nid, altmap);
 	if (!memmap) {
 		section_deactivate(pfn, nr_pages, altmap);
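
[Editor's note] For completeness, a condensed, paraphrased sketch of the
hotplug call path (loop over sections elided, names from mainline around
this time) showing where that guard sits relative to the code the patch
touches:

/*
 * __add_pages(nid, pfn, nr_pages, ...)
 *   -> check_pfn_span(pfn, nr_pages, "add")   // -EINVAL if the span is
 *                                             // not section-aligned on
 *                                             // !SPARSEMEM_VMEMMAP
 *   -> sparse_add_section(nid, pfn, nr_pages, altmap)
 *        -> section_activate(nid, pfn, nr_pages, altmap)
 *             -> populate_section_memmap(pfn, nr_pages, nid, altmap)
 *
 * So the overwrite scenario in the commit message cannot be reached
 * through the in-tree hotplug entry points, which is why the patch was
 * dropped.
 */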