
[RFC] mm: compaction: avoid fast_isolate_freepages blindly choose improper pageblock

Message ID 20231129104530.63787-1-v-songbaohua@oppo.com (mailing list archive)
State New

Commit Message

Barry Song Nov. 29, 2023, 10:45 a.m. UTC
Testing shows that fast_isolate_freepages() can blindly choose an
unsuitable pageblock from time to time, particularly when the min mark
is used from the path marked XXX below:
 if (!page) {
         cc->fast_search_fail++;
         if (scan_start) {
                 /*
                  * Use the highest PFN found above min. If one was
                  * not found, be pessimistic for direct compaction
                  * and use the min mark.
                  */
                 if (highest >= min_pfn) {
                         page = pfn_to_page(highest);
                         cc->free_pfn = highest;
                 } else {
                         if (cc->direct_compaction && pfn_valid(min_pfn)) { /* XXX */
                                 page = pageblock_pfn_to_page(min_pfn,
                                         min(pageblock_end_pfn(min_pfn),
                                             zone_end_pfn(cc->zone)),
                                         cc->zone);
                                 cc->free_pfn = min_pfn;
                         }
                 }
         }
 }

In contrast, the slow path skips unsuitable pageblocks properly.

I don't know whether this is an intended design or just an oversight,
but it seems more sensible to skip unsuitable pageblocks.
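
For reference, the slow path (isolate_freepages()) rejects such blocks
before isolating from them, roughly like this (a simplified sketch of
the mainline loop, not a verbatim copy):

 /* Walk pageblocks backwards from the free scanner position */
 for (; block_start_pfn >= low_pfn;
			block_end_pfn = block_start_pfn,
			block_start_pfn -= pageblock_nr_pages) {
	page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
					zone);
	if (!page)
		continue;

	/* Skip pageblocks that are not suitable migration targets */
	if (!suitable_migration_target(cc, page))
		continue;

	/* ... isolate free pages within this suitable pageblock ... */
 }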

Reported-by: Zhanyuan Hu <huzhanyuan@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 mm/compaction.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Baolin Wang Dec. 6, 2023, 9:54 a.m. UTC | #1
On 11/29/2023 6:45 PM, Barry Song wrote:
> Testing shows that fast_isolate_freepages() can blindly choose an
> unsuitable pageblock from time to time, particularly when the min mark
> is used from the path marked XXX below:
>   if (!page) {
>           cc->fast_search_fail++;
>           if (scan_start) {
>                   /*
>                    * Use the highest PFN found above min. If one was
>                    * not found, be pessimistic for direct compaction
>                    * and use the min mark.
>                    */
>                   if (highest >= min_pfn) {
>                           page = pfn_to_page(highest);
>                           cc->free_pfn = highest;
>                   } else {
>                           if (cc->direct_compaction && pfn_valid(min_pfn)) { /* XXX */
>                                   page = pageblock_pfn_to_page(min_pfn,
>                                           min(pageblock_end_pfn(min_pfn),
>                                               zone_end_pfn(cc->zone)),
>                                           cc->zone);
>                                   cc->free_pfn = min_pfn;
>                           }
>                   }
>           }
>   }

Yes, min_pfn can be an unsuitable migration target. But I think we
can just add the suitable_migration_target() validation to the 'min_pfn'
case, since the other cases must already be suitable targets found on the
MIGRATE_MOVABLE free lists. Something like below:

diff --git a/mm/compaction.c b/mm/compaction.c
index 01ba298739dd..4e8eb4571909 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1611,6 +1611,8 @@ static void fast_isolate_freepages(struct compact_control *cc)
 						min(pageblock_end_pfn(min_pfn),
 						    zone_end_pfn(cc->zone)),
 						cc->zone);
+					if (!suitable_migration_target(cc, page))
+						page = NULL;
 					cc->free_pfn = min_pfn;
 				}
 			}

By the way, I wonder if this patch can improve the efficiency of 
compaction in your test case?
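
For context, suitable_migration_target() essentially only accepts
pageblocks that are usable as migration targets; simplified, it does
roughly the following (a sketch of the logic in this area, not a
verbatim copy):

 static bool suitable_migration_target(struct compact_control *cc,
				       struct page *page)
 {
	/* Don't break up a high-order free page by targeting its block */
	if (PageBuddy(page) &&
	    buddy_order_unsafe(page) >= pageblock_order)
		return false;

	if (cc->ignore_block_suitable)
		return true;

	/* Only MIGRATE_MOVABLE/MIGRATE_CMA pageblocks are allowed */
	if (is_migrate_movable(get_pageblock_migratetype(page)))
		return true;

	return false;
 }

With the change above, a min_pfn that lands in an unsuitable pageblock
leaves page NULL, so fast_isolate_freepages() returns without calling
fast_isolate_around().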

> In contrast, the slow path skips unsuitable pageblocks properly.
> 
> I don't know whether this is an intended design or just an oversight,
> but it seems more sensible to skip unsuitable pageblocks.
> 
> Reported-by: Zhanyuan Hu <huzhanyuan@oppo.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
>   mm/compaction.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 01ba298739dd..98c485a25614 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1625,6 +1625,12 @@ static void fast_isolate_freepages(struct compact_control *cc)
>   	cc->total_free_scanned += nr_scanned;
>   	if (!page)
>   		return;
> +	/*
> +	 * Otherwise, we can blindly choose an improper pageblock especially
> +	 * while using the min mark
> +	 */
> +	if (!suitable_migration_target(cc, page))
> +		return;
>   
>   	low_pfn = page_to_pfn(page);
>   	fast_isolate_around(cc, low_pfn);
Barry Song Dec. 6, 2023, 10:18 a.m. UTC | #2
On Wed, Dec 6, 2023 at 10:54 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 11/29/2023 6:45 PM, Barry Song wrote:
> > Testing shows that fast_isolate_freepages() can blindly choose an
> > unsuitable pageblock from time to time, particularly when the min mark
> > is used from the path marked XXX below:
> >   if (!page) {
> >           cc->fast_search_fail++;
> >           if (scan_start) {
> >                   /*
> >                    * Use the highest PFN found above min. If one was
> >                    * not found, be pessimistic for direct compaction
> >                    * and use the min mark.
> >                    */
> >                   if (highest >= min_pfn) {
> >                           page = pfn_to_page(highest);
> >                           cc->free_pfn = highest;
> >                   } else {
> >                           if (cc->direct_compaction && pfn_valid(min_pfn)) { /* XXX */
> >                                   page = pageblock_pfn_to_page(min_pfn,
> >                                           min(pageblock_end_pfn(min_pfn),
> >                                               zone_end_pfn(cc->zone)),
> >                                           cc->zone);
> >                                   cc->free_pfn = min_pfn;
> >                           }
> >                   }
> >           }
> >   }
>
> Yes, min_pfn can be an unsuitable migration target. But I think we
> can just add the suitable_migration_target() validation to the 'min_pfn'
> case, since the other cases must already be suitable targets found on the
> MIGRATE_MOVABLE free lists. Something like below:
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 01ba298739dd..4e8eb4571909 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1611,6 +1611,8 @@ static void fast_isolate_freepages(struct compact_control *cc)
>  						min(pageblock_end_pfn(min_pfn),
>  						    zone_end_pfn(cc->zone)),
>  						cc->zone);
> +					if (!suitable_migration_target(cc, page))
> +						page = NULL;
>  					cc->free_pfn = min_pfn;
>  				}
>  			}
>

Yes, this makes more sense.

> By the way, I wonder if this patch can improve the efficiency of
> compaction in your test case?

This doesn't happen very often. When running 25 machines for one
night, most of them hit this unexpected code path, but the frequency
isn't many times per second; it might be once in a couple of hours.

So it is very difficult to measure a visible performance impact on my
machines, though the effect of choosing an unsuitable migration target
should be negative.

I still feel it's worth fixing this, at least to make the code
self-explanatory, as it is quite odd that a block which is not a
suitable migration target can still end up being used as one.

>
> > In contrast, the slow path skips unsuitable pageblocks properly.
> >
> > I don't know whether this is an intended design or just an oversight,
> > but it seems more sensible to skip unsuitable pageblocks.
> >
> > Reported-by: Zhanyuan Hu <huzhanyuan@oppo.com>
> > Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> > ---
> >   mm/compaction.c | 6 ++++++
> >   1 file changed, 6 insertions(+)
> >
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 01ba298739dd..98c485a25614 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -1625,6 +1625,12 @@ static void fast_isolate_freepages(struct compact_control *cc)
> >       cc->total_free_scanned += nr_scanned;
> >       if (!page)
> >               return;
> > +     /*
> > +      * Otherwise, we can blindly choose an improper pageblock especially
> > +      * while using the min mark
> > +      */
> > +     if (!suitable_migration_target(cc, page))
> > +             return;
> >
> >       low_pfn = page_to_pfn(page);
> >       fast_isolate_around(cc, low_pfn);

Thanks
Barry
Baolin Wang Dec. 7, 2023, 1:50 a.m. UTC | #3
On 12/6/2023 6:18 PM, Barry Song wrote:
> On Wed, Dec 6, 2023 at 10:54 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>>
>>
>> On 11/29/2023 6:45 PM, Barry Song wrote:
>>> Testing shows that fast_isolate_freepages() can blindly choose an
>>> unsuitable pageblock from time to time, particularly when the min mark
>>> is used from the path marked XXX below:
>>>    if (!page) {
>>>            cc->fast_search_fail++;
>>>            if (scan_start) {
>>>                    /*
>>>                     * Use the highest PFN found above min. If one was
>>>                     * not found, be pessimistic for direct compaction
>>>                     * and use the min mark.
>>>                     */
>>>                    if (highest >= min_pfn) {
>>>                            page = pfn_to_page(highest);
>>>                            cc->free_pfn = highest;
>>>                    } else {
>>>                            if (cc->direct_compaction && pfn_valid(min_pfn)) { /* XXX */
>>>                                    page = pageblock_pfn_to_page(min_pfn,
>>>                                            min(pageblock_end_pfn(min_pfn),
>>>                                                zone_end_pfn(cc->zone)),
>>>                                            cc->zone);
>>>                                    cc->free_pfn = min_pfn;
>>>                            }
>>>                    }
>>>            }
>>>    }
>>
>> Yes, min_pfn can be an unsuitable migration target. But I think we
>> can just add the suitable_migration_target() validation to the 'min_pfn'
>> case, since the other cases must already be suitable targets found on the
>> MIGRATE_MOVABLE free lists. Something like below:
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 01ba298739dd..4e8eb4571909 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1611,6 +1611,8 @@ static void fast_isolate_freepages(struct compact_control *cc)
>>  						min(pageblock_end_pfn(min_pfn),
>>  						    zone_end_pfn(cc->zone)),
>>  						cc->zone);
>> +					if (!suitable_migration_target(cc, page))
>> +						page = NULL;
>>  					cc->free_pfn = min_pfn;
>>  				}
>>  			}
>>
> 
> Yes, this makes more sense.
> 
>> By the way, I wonder if this patch can improve the efficiency of
>> compaction in your test case?
> 
> This doesn't happen very often. When running 25 machines for one
> night, most of them hit this unexpected code path, but the frequency
> isn't many times per second; it might be once in a couple of hours.
> 
> So it is very difficult to measure a visible performance impact on my
> machines, though the effect of choosing an unsuitable migration target
> should be negative.

OK. Fair enough.

> 
> I still feel it's worth fixing this, at least to make the code
> self-explanatory, as it is quite odd that a block which is not a
> suitable migration target can still end up being used as one.
> 
>>
>>> In contrast, the slow path skips unsuitable pageblocks properly.
>>>
>>> I don't know whether this is an intended design or just an oversight,
>>> but it seems more sensible to skip unsuitable pageblocks.
>>>
>>> Reported-by: Zhanyuan Hu <huzhanyuan@oppo.com>
>>> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
>>> ---
>>>    mm/compaction.c | 6 ++++++
>>>    1 file changed, 6 insertions(+)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index 01ba298739dd..98c485a25614 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -1625,6 +1625,12 @@ static void fast_isolate_freepages(struct compact_control *cc)
>>>        cc->total_free_scanned += nr_scanned;
>>>        if (!page)
>>>                return;
>>> +     /*
>>> +      * Otherwise, we can blindly choose an improper pageblock especially
>>> +      * while using the min mark
>>> +      */
>>> +     if (!suitable_migration_target(cc, page))
>>> +             return;
>>>
>>>        low_pfn = page_to_pfn(page);
>>>        fast_isolate_around(cc, low_pfn);
> 
> Thanks
> Barry

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index 01ba298739dd..98c485a25614 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1625,6 +1625,12 @@ static void fast_isolate_freepages(struct compact_control *cc)
 	cc->total_free_scanned += nr_scanned;
 	if (!page)
 		return;
+	/*
+	 * Otherwise, we can blindly choose an improper pageblock especially
+	 * while using the min mark
+	 */
+	if (!suitable_migration_target(cc, page))
+		return;
 
 	low_pfn = page_to_pfn(page);
 	fast_isolate_around(cc, low_pfn);