diff mbox series

[v2,1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps

Message ID 20200619125923.22602-2-david@redhat.com (mailing list archive)
State New, archived
Headers show
Series mm/shuffle: fix and cleanups | expand

Commit Message

David Hildenbrand June 19, 2020, 12:59 p.m. UTC
Especially with memory hotplug, we can have offline sections (with a
garbage memmap) and overlapping zones. We have to make sure to only
touch initialized memmaps (online sections managed by the buddy) and that
the zone matches, to not move pages between zones.

To test if this can actually happen, I added a simple
	BUG_ON(page_zone(page_i) != page_zone(page_j));
right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
onlining the first memory block "online_movable" and the second memory
block "online_kernel", it will trigger the BUG, as both zones (NORMAL
and MOVABLE) overlap.

This might result in all kinds of weird situations (e.g., double
allocations, list corruptions, unmovable allocations ending up in the
movable zone).

Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: stable@vger.kernel.org # v5.2+
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/shuffle.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

Comments

Dan Williams June 20, 2020, 1:37 a.m. UTC | #1
On Fri, 2020-06-19 at 14:59 +0200, David Hildenbrand wrote:
> Especially with memory hotplug, we can have offline sections (with a
> garbage memmap) and overlapping zones. We have to make sure to only
> touch initialized memmaps (online sections managed by the buddy) and
> that
> the zone matches, to not move pages between zones.
> 
> To test if this can actually happen, I added a simple
> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM
> and
> onlining the first memory block "online_movable" and the second
> memory
> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
> and MOVABLE) overlap.
> 
> This might result in all kinds of weird situations (e.g., double
> allocations, list corruptions, unmovable allocations ending up in the
> movable zone).
> 
> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve
> memory-side-cache utilization")
> Acked-by: Michal Hocko <mhocko@suse.com>
> Cc: stable@vger.kernel.org # v5.2+
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Looks good to me.

Acked-by: Dan Williams <dan.j.williams@intel.com>
Wei Yang June 22, 2020, 8:26 a.m. UTC | #2
On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>Especially with memory hotplug, we can have offline sections (with a
>garbage memmap) and overlapping zones. We have to make sure to only
>touch initialized memmaps (online sections managed by the buddy) and that
>the zone matches, to not move pages between zones.
>
>To test if this can actually happen, I added a simple
>	BUG_ON(page_zone(page_i) != page_zone(page_j));
>right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>onlining the first memory block "online_movable" and the second memory
>block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>and MOVABLE) overlap.
>
>This might result in all kinds of weird situations (e.g., double
>allocations, list corruptions, unmovable allocations ending up in the
>movable zone).
>
>Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>Acked-by: Michal Hocko <mhocko@suse.com>
>Cc: stable@vger.kernel.org # v5.2+
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Johannes Weiner <hannes@cmpxchg.org>
>Cc: Michal Hocko <mhocko@suse.com>
>Cc: Minchan Kim <minchan@kernel.org>
>Cc: Huang Ying <ying.huang@intel.com>
>Cc: Wei Yang <richard.weiyang@gmail.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Signed-off-by: David Hildenbrand <david@redhat.com>
>---
> mm/shuffle.c | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
>diff --git a/mm/shuffle.c b/mm/shuffle.c
>index 44406d9977c77..dd13ab851b3ee 100644
>--- a/mm/shuffle.c
>+++ b/mm/shuffle.c
>@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>  * For two pages to be swapped in the shuffle, they must be free (on a
>  * 'free_area' lru), have the same order, and have the same migratetype.
>  */
>-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>+static struct page * __meminit shuffle_valid_page(struct zone *zone,
>+						  unsigned long pfn, int order)
> {
>-	struct page *page;
>+	struct page *page = pfn_to_online_page(pfn);

Hi, David and Dan,

One thing I want to confirm here is that we won't have a partially online
section, right? We can add a sub-section to the system, but it won't be
managed by the buddy.

With this confirmed:

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

> 
> 	/*
> 	 * Given we're dealing with randomly selected pfns in a zone we
> 	 * need to ask questions like...
> 	 */
> 
>-	/* ...is the pfn even in the memmap? */
>-	if (!pfn_valid_within(pfn))
>+	/* ... is the page managed by the buddy? */
>+	if (!page)
> 		return NULL;
> 
>-	/* ...is the pfn in a present section or a hole? */
>-	if (!pfn_in_present_section(pfn))
>+	/* ... is the page assigned to the same zone? */
>+	if (page_zone(page) != zone)
> 		return NULL;
> 
> 	/* ...is the page free and currently on a free_area list? */
>-	page = pfn_to_page(pfn);
> 	if (!PageBuddy(page))
> 		return NULL;
> 
>@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
> 		 * page_j randomly selected in the span @zone_start_pfn to
> 		 * @spanned_pages.
> 		 */
>-		page_i = shuffle_valid_page(i, order);
>+		page_i = shuffle_valid_page(z, i, order);
> 		if (!page_i)
> 			continue;
> 
>@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
> 			j = z->zone_start_pfn +
> 				ALIGN_DOWN(get_random_long() % z->spanned_pages,
> 						order_pages);
>-			page_j = shuffle_valid_page(j, order);
>+			page_j = shuffle_valid_page(z, j, order);
> 			if (page_j && page_j != page_i)
> 				break;
> 		}
>-- 
>2.26.2
David Hildenbrand June 22, 2020, 8:43 a.m. UTC | #3
On 22.06.20 10:26, Wei Yang wrote:
> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>> Especially with memory hotplug, we can have offline sections (with a
>> garbage memmap) and overlapping zones. We have to make sure to only
>> touch initialized memmaps (online sections managed by the buddy) and that
>> the zone matches, to not move pages between zones.
>>
>> To test if this can actually happen, I added a simple
>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>> onlining the first memory block "online_movable" and the second memory
>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>> and MOVABLE) overlap.
>>
>> This might result in all kinds of weird situations (e.g., double
>> allocations, list corruptions, unmovable allocations ending up in the
>> movable zone).
>>
>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>> Acked-by: Michal Hocko <mhocko@suse.com>
>> Cc: stable@vger.kernel.org # v5.2+
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Cc: Huang Ying <ying.huang@intel.com>
>> Cc: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> mm/shuffle.c | 18 +++++++++---------
>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>> index 44406d9977c77..dd13ab851b3ee 100644
>> --- a/mm/shuffle.c
>> +++ b/mm/shuffle.c
>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>  */
>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>> +						  unsigned long pfn, int order)
>> {
>> -	struct page *page;
>> +	struct page *page = pfn_to_online_page(pfn);
> 
> Hi, David and Dan,
> 
> One thing I want to confirm here is we won't have partially online section,
> right? We can add a sub-section to system, but we won't manage it by buddy.

Hi,

there is still a BUG with sub-section hot-add (devmem), which broke
pfn_to_online_page() in corner cases (especially, see the description in
include/linux/mmzone.h). We can have a boot-memory section partially
populated and marked online. Then, we can hot-add devmem, marking the
remaining pfns valid - and, as the section is marked online, also as online.

This is, however, a different problem to solve and affects most other
pfn walkers as well. The "if (page_zone(page) != zone)" check guards us
from most harm, as the devmem zone won't match.
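
As an aside, a minimal sketch of that guard pattern, using the existing
pfn_to_online_page()/page_zone() helpers (the wrapper name is made up for
illustration and is not part of this patch):

	/* illustrative only: the defensive pattern generic pfn walkers use */
	static bool pfn_online_and_in_zone(struct zone *zone, unsigned long pfn)
	{
		struct page *page = pfn_to_online_page(pfn);

		/* offline section or memory hole: the memmap may be garbage */
		if (!page)
			return false;

		/* zones can overlap with memory hotplug, so re-check the zone */
		return page_zone(page) == zone;
	}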

Thanks!
Wei Yang June 22, 2020, 9:22 a.m. UTC | #4
On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>On 22.06.20 10:26, Wei Yang wrote:
>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>> Especially with memory hotplug, we can have offline sections (with a
>>> garbage memmap) and overlapping zones. We have to make sure to only
>>> touch initialized memmaps (online sections managed by the buddy) and that
>>> the zone matches, to not move pages between zones.
>>>
>>> To test if this can actually happen, I added a simple
>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>> onlining the first memory block "online_movable" and the second memory
>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>> and MOVABLE) overlap.
>>>
>>> This might result in all kinds of weird situations (e.g., double
>>> allocations, list corruptions, unmovable allocations ending up in the
>>> movable zone).
>>>
>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>> Cc: stable@vger.kernel.org # v5.2+
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Minchan Kim <minchan@kernel.org>
>>> Cc: Huang Ying <ying.huang@intel.com>
>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>> mm/shuffle.c | 18 +++++++++---------
>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>> index 44406d9977c77..dd13ab851b3ee 100644
>>> --- a/mm/shuffle.c
>>> +++ b/mm/shuffle.c
>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>  */
>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>> +						  unsigned long pfn, int order)
>>> {
>>> -	struct page *page;
>>> +	struct page *page = pfn_to_online_page(pfn);
>> 
>> Hi, David and Dan,
>> 
>> One thing I want to confirm here is we won't have partially online section,
>> right? We can add a sub-section to system, but we won't manage it by buddy.
>
>Hi,
>
>there is still a BUG with sub-section hot-add (devmem), which broke
>pfn_to_online_page() in corner cases (especially, see the description in
>include/linux/mmzone.h). We can have a boot-memory section partially
>populated and marked online. Then, we can hot-add devmem, marking the
>remaining pfns valid - and as the section is maked online, also as online.

Oh, yes, I see this description.

This means we could have a section marked as online even though one of its
sub-sections has not been added.

The good news is that even if the sub-section is not added, its memmap is
still populated for an early section. So the page returned from
pfn_to_online_page() is a valid one.

But what would happen if the sub-section is removed after being added? Would
section_deactivate() release the memmap backing this "struct page"?

>
>This is, however, a different problem to solve and affects most other
>pfn walkers as well. The "if (page_zone(page) != zone)" checks guards us
>from most harm, as the devmem zone won't match.
>

Yes, a different problem that just jumped into my mind. Hope this won't
affect this patch.

>Thanks!
>
>-- 
>Thanks,
>
>David / dhildenb
David Hildenbrand June 22, 2020, 9:51 a.m. UTC | #5
On 22.06.20 11:22, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>> On 22.06.20 10:26, Wei Yang wrote:
>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>> Especially with memory hotplug, we can have offline sections (with a
>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>> the zone matches, to not move pages between zones.
>>>>
>>>> To test if this can actually happen, I added a simple
>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>> onlining the first memory block "online_movable" and the second memory
>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>> and MOVABLE) overlap.
>>>>
>>>> This might result in all kinds of weird situations (e.g., double
>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>> movable zone).
>>>>
>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>> Cc: stable@vger.kernel.org # v5.2+
>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>> mm/shuffle.c | 18 +++++++++---------
>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>> --- a/mm/shuffle.c
>>>> +++ b/mm/shuffle.c
>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>  */
>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>> +						  unsigned long pfn, int order)
>>>> {
>>>> -	struct page *page;
>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>
>>> Hi, David and Dan,
>>>
>>> One thing I want to confirm here is we won't have partially online section,
>>> right? We can add a sub-section to system, but we won't manage it by buddy.
>>
>> Hi,
>>
>> there is still a BUG with sub-section hot-add (devmem), which broke
>> pfn_to_online_page() in corner cases (especially, see the description in
>> include/linux/mmzone.h). We can have a boot-memory section partially
>> populated and marked online. Then, we can hot-add devmem, marking the
>> remaining pfns valid - and as the section is maked online, also as online.
> 
> Oh, yes, I see this description.
> 
> This means we could have section marked as online, but with a sub-section even
> not added.
> 
> While the good news is even the sub-section is not added, but its memmap is
> populated for an early section. So the page returned from pfn_to_online_page()
> is a valid one.
> 
> But what would happen, if the sub-section is removed after added? Would
> section_deactivate() release related memmap to this "struct page"?

If devmem is removed, the memmap will be freed and the sub-sections are
marked as non-present. So this works as expected.
Wei Yang June 22, 2020, 1:10 p.m. UTC | #6
On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>On 22.06.20 11:22, Wei Yang wrote:
>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>> On 22.06.20 10:26, Wei Yang wrote:
>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>>> the zone matches, to not move pages between zones.
>>>>>
>>>>> To test if this can actually happen, I added a simple
>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>>> onlining the first memory block "online_movable" and the second memory
>>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>>> and MOVABLE) overlap.
>>>>>
>>>>> This might result in all kinds of weird situations (e.g., double
>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>> movable zone).
>>>>>
>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>>> Cc: stable@vger.kernel.org # v5.2+
>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>> ---
>>>>> mm/shuffle.c | 18 +++++++++---------
>>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>> --- a/mm/shuffle.c
>>>>> +++ b/mm/shuffle.c
>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>  */
>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>> +						  unsigned long pfn, int order)
>>>>> {
>>>>> -	struct page *page;
>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>
>>>> Hi, David and Dan,
>>>>
>>>> One thing I want to confirm here is we won't have partially online section,
>>>> right? We can add a sub-section to system, but we won't manage it by buddy.
>>>
>>> Hi,
>>>
>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>> pfn_to_online_page() in corner cases (especially, see the description in
>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>> populated and marked online. Then, we can hot-add devmem, marking the
>>> remaining pfns valid - and as the section is maked online, also as online.
>> 
>> Oh, yes, I see this description.
>> 
>> This means we could have section marked as online, but with a sub-section even
>> not added.
>> 
>> While the good news is even the sub-section is not added, but its memmap is
>> populated for an early section. So the page returned from pfn_to_online_page()
>> is a valid one.
>> 
>> But what would happen, if the sub-section is removed after added? Would
>> section_deactivate() release related memmap to this "struct page"?
>
>If devmem is removed, the memmap will be freed and the sub-sections are
>marked as non-present. So this works as expected.
>

Sorry, I may not have caught your point. If my understanding is correct, the
above behavior happens in section_deactivate().

Let me draw my understanding of function section_deactivate():

    section_deactivate(pfn, nr_pages)
        clear_subsection_map(pfn, nr_pages)
        depopulate_section_memmap(pfn, nr_pages)

Since we just remove a sub-section, I skipped some unrelated code. These two
functions would:

  * clear bitmap in ms->usage->subsection_map
  * free memmap for the sub-section

But since the section is not empty, ms->section_mem_map is not set to NULL.

Per my understanding, the section's present state is set in
ms->section_mem_map with SECTION_MARKED_PRESENT. It looks like we don't clear
it when we just remove a sub-section.
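
(For reference, the flag bits encoded in the low bits of ms->section_mem_map,
paraphrased from include/linux/mmzone.h - exact definitions may differ by
kernel version:)

	#define SECTION_MARKED_PRESENT	(1UL<<0)
	#define SECTION_HAS_MEM_MAP	(1UL<<1)
	#define SECTION_IS_ONLINE	(1UL<<2)
	#define SECTION_IS_EARLY	(1UL<<3)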

Do I miss something?

>-- 
>Thanks,
>
>David / dhildenb
David Hildenbrand June 22, 2020, 2:06 p.m. UTC | #7
On 22.06.20 15:10, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>> On 22.06.20 11:22, Wei Yang wrote:
>>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>>> On 22.06.20 10:26, Wei Yang wrote:
>>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>>>> the zone matches, to not move pages between zones.
>>>>>>
>>>>>> To test if this can actually happen, I added a simple
>>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>>>> onlining the first memory block "online_movable" and the second memory
>>>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>>>> and MOVABLE) overlap.
>>>>>>
>>>>>> This might result in all kinds of weird situations (e.g., double
>>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>>> movable zone).
>>>>>>
>>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>>>> Cc: stable@vger.kernel.org # v5.2+
>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>>> ---
>>>>>> mm/shuffle.c | 18 +++++++++---------
>>>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>>> --- a/mm/shuffle.c
>>>>>> +++ b/mm/shuffle.c
>>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>>  */
>>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>>> +						  unsigned long pfn, int order)
>>>>>> {
>>>>>> -	struct page *page;
>>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>>
>>>>> Hi, David and Dan,
>>>>>
>>>>> One thing I want to confirm here is we won't have partially online section,
>>>>> right? We can add a sub-section to system, but we won't manage it by buddy.
>>>>
>>>> Hi,
>>>>
>>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>>> pfn_to_online_page() in corner cases (especially, see the description in
>>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>>> populated and marked online. Then, we can hot-add devmem, marking the
>>>> remaining pfns valid - and as the section is maked online, also as online.
>>>
>>> Oh, yes, I see this description.
>>>
>>> This means we could have section marked as online, but with a sub-section even
>>> not added.
>>>
>>> While the good news is even the sub-section is not added, but its memmap is
>>> populated for an early section. So the page returned from pfn_to_online_page()
>>> is a valid one.
>>>
>>> But what would happen, if the sub-section is removed after added? Would
>>> section_deactivate() release related memmap to this "struct page"?
>>
>> If devmem is removed, the memmap will be freed and the sub-sections are
>> marked as non-present. So this works as expected.
>>
> 
> Sorry, I may not catch your point. If my understanding is correct, the
> above behavior happens in function section_deactivate().
> 
> Let me draw my understanding of function section_deactivate():
> 
>     section_deactivate(pfn, nr_pages)
>         clear_subsection_map(pfn, nr_pages)
> 	depopulate_section_memmap(pfn, nr_pages)
> 
> Since we just remove a sub-section, I skipped some un-related codes. These two
> functions would:
> 
>   * clear bitmap in ms->usage->subsection_map
>   * free memmap for the sub-section
> 
> While since the section is not empty, ms->section_mem_map is not set no null.

Let me clarify: sub-section hotremove works differently when overlapping
with (online) boot memory within a section.

Early sections (IOW, boot memory) are never partially removed. See
mm/sparse.c:section_deactivate(). We only free an early memmap when the
section is completely empty. Also see how
include/linux/mmzone.h:pfn_valid() handles early sections.
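
(Paraphrased from the generic SPARSEMEM pfn_valid() in include/linux/mmzone.h;
simplified, and the exact code differs between versions:)

	static inline int pfn_valid(unsigned long pfn)
	{
		struct mem_section *ms;

		if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
			return 0;
		ms = __nr_to_section(pfn_to_section_nr(pfn));
		if (!valid_section(ms))
			return 0;
		/* early sections are valid for the whole section-sized span */
		return early_section(ms) || pfn_section_valid(ms, pfn);
	}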

So when we have a partially present section with boot memory, we
a) marked the whole section present and online (there is only a single
   bit)
b) allocated the memmap for the whole section
c) only exposed the relevant pages to the buddy. The memmap of the
   non-present parts of the section was initialized and is reserved.

pfn_valid() will return true for all non-present pfns, because there is a
memmap. pfn_to_online_page() will return a page for all of these pfns,
because we only have a single bit for the whole section. This has been the
case since before sub-section hotplug and is still the case. It simply looks
like just another memory hole for which we have a memmap.

Now, with devmem it is possible to suddenly change these sub-section holes
(memmaps) to become ZONE_DEVICE memory. pfn_to_online_page() would have to
detect that and report the pfn as not online. Possible fixes were already
discussed (e.g., a sub-section online map instead of a single bit).
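
(One hypothetical shape of such a detection - subsection_online() does not
exist upstream, it is only meant to illustrate the sub-section online map
idea:)

	/* hypothetical sketch - subsection_online() does not exist upstream */
	struct mem_section *ms = __pfn_to_section(pfn);
	struct page *page = pfn_to_online_page(pfn);

	if (page && !subsection_online(ms, pfn))
		page = NULL;	/* devmem hole behind an "online" section bit */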

Again, the zone check saves us from the worst, just as in the case of all
other pfn walkers that use (as documented) pfn_to_online_page(). It still
needs a fix as discussed, but it seems to work reasonably well like that for
now.
David Hildenbrand June 22, 2020, 2:11 p.m. UTC | #8
On 22.06.20 11:22, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>> On 22.06.20 10:26, Wei Yang wrote:
>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>> Especially with memory hotplug, we can have offline sections (with a
>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>> the zone matches, to not move pages between zones.
>>>>
>>>> To test if this can actually happen, I added a simple
>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>> onlining the first memory block "online_movable" and the second memory
>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>> and MOVABLE) overlap.
>>>>
>>>> This might result in all kinds of weird situations (e.g., double
>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>> movable zone).
>>>>
>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>> Cc: stable@vger.kernel.org # v5.2+
>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>> mm/shuffle.c | 18 +++++++++---------
>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>> --- a/mm/shuffle.c
>>>> +++ b/mm/shuffle.c
>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>  */
>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>> +						  unsigned long pfn, int order)
>>>> {
>>>> -	struct page *page;
>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>
>>> Hi, David and Dan,
>>>
>>> One thing I want to confirm here is we won't have partially online section,
>>> right? We can add a sub-section to system, but we won't manage it by buddy.
>>
>> Hi,
>>
>> there is still a BUG with sub-section hot-add (devmem), which broke
>> pfn_to_online_page() in corner cases (especially, see the description in
>> include/linux/mmzone.h). We can have a boot-memory section partially
>> populated and marked online. Then, we can hot-add devmem, marking the
>> remaining pfns valid - and as the section is maked online, also as online.
> 
> Oh, yes, I see this description.
> 
> This means we could have section marked as online, but with a sub-section even
> not added.
> 
> While the good news is even the sub-section is not added, but its memmap is
> populated for an early section. So the page returned from pfn_to_online_page()
> is a valid one.
> 
> But what would happen, if the sub-section is removed after added? Would
> section_deactivate() release related memmap to this "struct page"?

Just to clarify now that I get your point: No it would not, as it is an
early section, and the early section is not completely empty.
Wei Yang June 22, 2020, 9:55 p.m. UTC | #9
On Mon, Jun 22, 2020 at 04:06:15PM +0200, David Hildenbrand wrote:
>On 22.06.20 15:10, Wei Yang wrote:
>> On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>>> On 22.06.20 11:22, Wei Yang wrote:
>>>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>>>> On 22.06.20 10:26, Wei Yang wrote:
>>>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>>>>> the zone matches, to not move pages between zones.
>>>>>>>
>>>>>>> To test if this can actually happen, I added a simple
>>>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>>>>> onlining the first memory block "online_movable" and the second memory
>>>>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>>>>> and MOVABLE) overlap.
>>>>>>>
>>>>>>> This might result in all kinds of weird situations (e.g., double
>>>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>>>> movable zone).
>>>>>>>
>>>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>>>>> Cc: stable@vger.kernel.org # v5.2+
>>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>>>> ---
>>>>>>> mm/shuffle.c | 18 +++++++++---------
>>>>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>>>
>>>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>>>> --- a/mm/shuffle.c
>>>>>>> +++ b/mm/shuffle.c
>>>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>>>  */
>>>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>>>> +						  unsigned long pfn, int order)
>>>>>>> {
>>>>>>> -	struct page *page;
>>>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>>>
>>>>>> Hi, David and Dan,
>>>>>>
>>>>>> One thing I want to confirm here is we won't have partially online section,
>>>>>> right? We can add a sub-section to system, but we won't manage it by buddy.
>>>>>
>>>>> Hi,
>>>>>
>>>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>>>> pfn_to_online_page() in corner cases (especially, see the description in
>>>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>>>> populated and marked online. Then, we can hot-add devmem, marking the
>>>>> remaining pfns valid - and as the section is maked online, also as online.
>>>>
>>>> Oh, yes, I see this description.
>>>>
>>>> This means we could have section marked as online, but with a sub-section even
>>>> not added.
>>>>
>>>> While the good news is even the sub-section is not added, but its memmap is
>>>> populated for an early section. So the page returned from pfn_to_online_page()
>>>> is a valid one.
>>>>
>>>> But what would happen, if the sub-section is removed after added? Would
>>>> section_deactivate() release related memmap to this "struct page"?
>>>
>>> If devmem is removed, the memmap will be freed and the sub-sections are
>>> marked as non-present. So this works as expected.
>>>
>> 
>> Sorry, I may not catch your point. If my understanding is correct, the
>> above behavior happens in function section_deactivate().
>> 
>> Let me draw my understanding of function section_deactivate():
>> 
>>     section_deactivate(pfn, nr_pages)
>>         clear_subsection_map(pfn, nr_pages)
>> 	depopulate_section_memmap(pfn, nr_pages)
>> 
>> Since we just remove a sub-section, I skipped some un-related codes. These two
>> functions would:
>> 
>>   * clear bitmap in ms->usage->subsection_map
>>   * free memmap for the sub-section
>> 
>> While since the section is not empty, ms->section_mem_map is not set no null.
>
>Let me clarify, sub-section hotremove works differently when overlying
>with (online) boot memory within a section.
>
>Early sections (IOW, boot memory) are never partially removed. See

Thanks for your time and patience. 

I looked into the comment of section_deactivate():

 * 1. deactivation of a partial hot-added section (only possible in
 *    the SPARSEMEM_VMEMMAP=y case).
 *      a) section was present at memory init.
 *      b) section was hot-added post memory init.

Case a) seems to do a partial remove of an early section?

>mm/sparse.c:section_deactivate(). We only free a early memmap when the
>section is completely empty. Also see how

Hmm.. I thought this was the behavior for an early section, but it looks
like the current code doesn't work like this:

       if (section_is_early && memmap)
               free_map_bootmem(memmap);
       else
	       depopulate_section_memmap(pfn, nr_pages, altmap);

section_is_early is always "true" for an early section, while memmap is
non-NULL only when the sub-section map is empty.

If my understanding is correct, when we remove a sub-section in an early
section, the code would call depopulate_section_memmap(), which in turn frees
the related memmap. With the memmap removed, the page returned from
pfn_to_online_page() is no longer a valid one.

Maybe we want to write the code like this:

       if (section_is_early)
               if (memmap)
                       free_map_bootmem(memmap);
       else
	       depopulate_section_memmap(pfn, nr_pages, altmap);

This makes sure we free the memmap for an early section only when the whole
section is removed.
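
Written with explicit braces, the intended control flow of the above proposal
is (just a sketch to make the binding of the "else" unambiguous, not the
final patch):

	if (section_is_early) {
		if (memmap)
			free_map_bootmem(memmap);
	} else {
		depopulate_section_memmap(pfn, nr_pages, altmap);
	}

Without the braces, the "else" would bind to the inner "if (memmap)".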

>include/linux/mmzone.h:pfn_valid() handles early sections.
>
>So when we have a partially present section with boot memory, we
>a) marked the whole section present and online (there is only a single
>   bit)
>b) allocated the memmap for the whole section
>c) Only exposed the relevant pages to the buddy. The memmap of non-
>   present parts in a section were initialized and are reserved.
>
>pfn_valid() will return for all non-present pfns valid, because there is
>a memmap. pfn_to_online_page() will return for all pfns "true", because
>we only have a single bit for the whole section. This has been the case
>before sub-section hotplug and is still the case. It simply looks like
>just another memory hole for which we have a memmap.
>
>Now, with devmem it is possible to suddenly change these sub-section
>holes (memmaps) to become ZONE_DEVICE memory. pfn_to_online_page() would
>have to detect that and report a "false". Possible fixes were already
>discussed (e.g., sub-section online map instead of a single bit).
>
>Again, the zone check safes us from the worst, just as in the case of
>all other pfn walkers that use (as documented) pfn_to_online_page(). It
>still needs a fix as dicussed, but it seems to work reasonably fine like
>that for now.
>
>-- 
>Thanks,
>
>David / dhildenb
David Hildenbrand June 23, 2020, 7:39 a.m. UTC | #10
> Hmm.. I thought this is the behavior for early section, while it looks current
> code doesn't work like this:
> 
>        if (section_is_early && memmap)
>                free_map_bootmem(memmap);
>        else
> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
> 
> section_is_early is always "true" for early section, while memmap is not-NULL
> only when sub-section map is empty.
> 
> If my understanding is correct, when we remove a sub-section in early section,
> the code would call depopulate_section_memmap(), which in turn free related
> memmap. By removing the memmap, the return value from pfn_to_online_page() is
> not a valid one.

I think you're right, and pfn_valid() would also return true, as it is
an early section. This looks broken.

> 
> Maybe we want to write the code like this:
> 
>        if (section_is_early)
>                if (memmap)
>                        free_map_bootmem(memmap);
>        else
> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
> 

I guess that should be the way to go

@Dan, I think what Wei proposes here is correct, right? Or how does it
work in the VMEMMAP case with early sections?
David Hildenbrand June 23, 2020, 7:55 a.m. UTC | #11
On 23.06.20 09:39, David Hildenbrand wrote:
>> Hmm.. I thought this is the behavior for early section, while it looks current
>> code doesn't work like this:
>>
>>        if (section_is_early && memmap)
>>                free_map_bootmem(memmap);
>>        else
>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>
>> section_is_early is always "true" for early section, while memmap is not-NULL
>> only when sub-section map is empty.
>>
>> If my understanding is correct, when we remove a sub-section in early section,
>> the code would call depopulate_section_memmap(), which in turn free related
>> memmap. By removing the memmap, the return value from pfn_to_online_page() is
>> not a valid one.
> 
> I think you're right, and pfn_valid() would also return true, as it is
> an early section. This looks broken.
> 
>>
>> Maybe we want to write the code like this:
>>
>>        if (section_is_early)
>>                if (memmap)
>>                        free_map_bootmem(memmap);
>>        else
>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>
> 
> I guess that should be the way to go
> 
> @Dan, I think what Wei proposes here is correct, right? Or how does it
> work in the VMEMMAP case with early sections?
> 

Especially, if you were to re-hot-add, section_activate() would assume there
is a memmap, so it must not be removed.
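
(For context, the relevant check in mm/sparse.c:section_activate(),
paraphrased - it reuses the existing early memmap instead of populating a new
one:)

	if (nr_pages < PAGES_PER_SECTION && early_section(ms))
		return pfn_to_page(pfn);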

@Wei, can you send a patch?
Wei Yang June 23, 2020, 9:30 a.m. UTC | #12
On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>On 23.06.20 09:39, David Hildenbrand wrote:
>>> Hmm.. I thought this is the behavior for early section, while it looks current
>>> code doesn't work like this:
>>>
>>>        if (section_is_early && memmap)
>>>                free_map_bootmem(memmap);
>>>        else
>>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>>
>>> section_is_early is always "true" for early section, while memmap is not-NULL
>>> only when sub-section map is empty.
>>>
>>> If my understanding is correct, when we remove a sub-section in early section,
>>> the code would call depopulate_section_memmap(), which in turn free related
>>> memmap. By removing the memmap, the return value from pfn_to_online_page() is
>>> not a valid one.
>> 
>> I think you're right, and pfn_valid() would also return true, as it is
>> an early section. This looks broken.
>> 
>>>
>>> Maybe we want to write the code like this:
>>>
>>>        if (section_is_early)
>>>                if (memmap)
>>>                        free_map_bootmem(memmap);
>>>        else
>>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>>
>> 
>> I guess that should be the way to go
>> 
>> @Dan, I think what Wei proposes here is correct, right? Or how does it
>> work in the VMEMMAP case with early sections?
>> 
>
>Especially, if you would re-hot-add, section_activate() would assume
>there is a memmap, it must not be removed.
>

You are right here. I didn't notice it.

>@Wei, can you send a patch?
>

Sure, let me prepare for it.

>-- 
>Thanks,
>
>David / dhildenb
Andrew Morton July 24, 2020, 3:08 a.m. UTC | #13
On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:

> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
> >On 23.06.20 09:39, David Hildenbrand wrote:
> >>> Hmm.. I thought this is the behavior for early section, while it looks current
> >>> code doesn't work like this:
> >>>
> >>>        if (section_is_early && memmap)
> >>>                free_map_bootmem(memmap);
> >>>        else
> >>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
> >>>
> >>> section_is_early is always "true" for early section, while memmap is not-NULL
> >>> only when sub-section map is empty.
> >>>
> >>> If my understanding is correct, when we remove a sub-section in early section,
> >>> the code would call depopulate_section_memmap(), which in turn free related
> >>> memmap. By removing the memmap, the return value from pfn_to_online_page() is
> >>> not a valid one.
> >> 
> >> I think you're right, and pfn_valid() would also return true, as it is
> >> an early section. This looks broken.
> >> 
> >>>
> >>> Maybe we want to write the code like this:
> >>>
> >>>        if (section_is_early)
> >>>                if (memmap)
> >>>                        free_map_bootmem(memmap);
> >>>        else
> >>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
> >>>
> >> 
> >> I guess that should be the way to go
> >> 
> >> @Dan, I think what Wei proposes here is correct, right? Or how does it
> >> work in the VMEMMAP case with early sections?
> >> 
> >
> >Especially, if you would re-hot-add, section_activate() would assume
> >there is a memmap, it must not be removed.
> >
> 
> You are right here. I didn't notice it.
> 
> >@Wei, can you send a patch?
> >
> 
> Sure, let me prepare for it.

Still awaiting this, and the v3 patch was identical to this v2 patch.

It's tagged for -stable, so there's some urgency.  Should we just go
ahead with the decently-tested v2?
Wei Yang July 24, 2020, 5:45 a.m. UTC | #14
On Thu, Jul 23, 2020 at 08:08:46PM -0700, Andrew Morton wrote:
>On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
>
>> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>> >On 23.06.20 09:39, David Hildenbrand wrote:
>> >>> Hmm.. I thought this is the behavior for early section, while it looks current
>> >>> code doesn't work like this:
>> >>>
>> >>>        if (section_is_early && memmap)
>> >>>                free_map_bootmem(memmap);
>> >>>        else
>> >>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>> >>>
>> >>> section_is_early is always "true" for early section, while memmap is not-NULL
>> >>> only when sub-section map is empty.
>> >>>
>> >>> If my understanding is correct, when we remove a sub-section in early section,
>> >>> the code would call depopulate_section_memmap(), which in turn free related
>> >>> memmap. By removing the memmap, the return value from pfn_to_online_page() is
>> >>> not a valid one.
>> >> 
>> >> I think you're right, and pfn_valid() would also return true, as it is
>> >> an early section. This looks broken.
>> >> 
>> >>>
>> >>> Maybe we want to write the code like this:
>> >>>
>> >>>        if (section_is_early)
>> >>>                if (memmap)
>> >>>                        free_map_bootmem(memmap);
>> >>>        else
>> >>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>> >>>
>> >> 
>> >> I guess that should be the way to go
>> >> 
>> >> @Dan, I think what Wei proposes here is correct, right? Or how does it
>> >> work in the VMEMMAP case with early sections?
>> >> 
>> >
>> >Especially, if you would re-hot-add, section_activate() would assume
>> >there is a memmap, it must not be removed.
>> >
>> 
>> You are right here. I didn't notice it.
>> 
>> >@Wei, can you send a patch?
>> >
>> 
>> Sure, let me prepare for it.
>
>Still awaiting this, and the v3 patch was identical to this v2 patch.
>
>It's tagged for -stable, so there's some urgency.  Should we just go
>ahead with the decently-tested v2?

This message is for me, right?

I thought the fix patch was already merged; the patch link may be
https://lkml.org/lkml/2020/6/23/380.

If I missed something, just let me know.
David Hildenbrand July 24, 2020, 8:20 a.m. UTC | #15
On 24.07.20 05:08, Andrew Morton wrote:
> On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
> 
>> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>>> On 23.06.20 09:39, David Hildenbrand wrote:
>>>>> Hmm.. I thought this is the behavior for early section, while it looks current
>>>>> code doesn't work like this:
>>>>>
>>>>>        if (section_is_early && memmap)
>>>>>                free_map_bootmem(memmap);
>>>>>        else
>>>>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>>>>
>>>>> section_is_early is always "true" for early section, while memmap is not-NULL
>>>>> only when sub-section map is empty.
>>>>>
>>>>> If my understanding is correct, when we remove a sub-section in early section,
>>>>> the code would call depopulate_section_memmap(), which in turn free related
>>>>> memmap. By removing the memmap, the return value from pfn_to_online_page() is
>>>>> not a valid one.
>>>>
>>>> I think you're right, and pfn_valid() would also return true, as it is
>>>> an early section. This looks broken.
>>>>
>>>>>
>>>>> Maybe we want to write the code like this:
>>>>>
>>>>>        if (section_is_early)
>>>>>                if (memmap)
>>>>>                        free_map_bootmem(memmap);
>>>>>        else
>>>>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>>>>
>>>>
>>>> I guess that should be the way to go
>>>>
>>>> @Dan, I think what Wei proposes here is correct, right? Or how does it
>>>> work in the VMEMMAP case with early sections?
>>>>
>>>
>>> Especially, if you would re-hot-add, section_activate() would assume
>>> there is a memmap, it must not be removed.
>>>
>>
>> You are right here. I didn't notice it.
>>
>>> @Wei, can you send a patch?
>>>
>>
>> Sure, let me prepare for it.
> 
> Still awaiting this, and the v3 patch was identical to this v2 patch.
> 
> It's tagged for -stable, so there's some urgency.  Should we just go
> ahead with the decently-tested v2?

This patch (mm/shuffle: don't move pages between zones and don't read
garbage memmaps) is good enough for upstream. While the issue reported
by Wei was valid (and needs to be fixed), the user in this patch is just
one of many affected users. Nothing special.
diff mbox series

Patch

diff --git a/mm/shuffle.c b/mm/shuffle.c
index 44406d9977c77..dd13ab851b3ee 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -58,25 +58,25 @@  module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
  * For two pages to be swapped in the shuffle, they must be free (on a
  * 'free_area' lru), have the same order, and have the same migratetype.
  */
-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
+static struct page * __meminit shuffle_valid_page(struct zone *zone,
+						  unsigned long pfn, int order)
 {
-	struct page *page;
+	struct page *page = pfn_to_online_page(pfn);
 
 	/*
 	 * Given we're dealing with randomly selected pfns in a zone we
 	 * need to ask questions like...
 	 */
 
-	/* ...is the pfn even in the memmap? */
-	if (!pfn_valid_within(pfn))
+	/* ... is the page managed by the buddy? */
+	if (!page)
 		return NULL;
 
-	/* ...is the pfn in a present section or a hole? */
-	if (!pfn_in_present_section(pfn))
+	/* ... is the page assigned to the same zone? */
+	if (page_zone(page) != zone)
 		return NULL;
 
 	/* ...is the page free and currently on a free_area list? */
-	page = pfn_to_page(pfn);
 	if (!PageBuddy(page))
 		return NULL;
 
@@ -123,7 +123,7 @@  void __meminit __shuffle_zone(struct zone *z)
 		 * page_j randomly selected in the span @zone_start_pfn to
 		 * @spanned_pages.
 		 */
-		page_i = shuffle_valid_page(i, order);
+		page_i = shuffle_valid_page(z, i, order);
 		if (!page_i)
 			continue;
 
@@ -137,7 +137,7 @@  void __meminit __shuffle_zone(struct zone *z)
 			j = z->zone_start_pfn +
 				ALIGN_DOWN(get_random_long() % z->spanned_pages,
 						order_pages);
-			page_j = shuffle_valid_page(j, order);
+			page_j = shuffle_valid_page(z, j, order);
 			if (page_j && page_j != page_i)
 				break;
 		}