mm, vmscan: prevent useless kswapd loops

Message ID 20190628015520.13357-1-shakeelb@google.com
State New
Series
  • mm, vmscan: prevent useless kswapd loops

Commit Message

Shakeel Butt June 28, 2019, 1:55 a.m. UTC
On production we have noticed hard lockups on large machines running
large jobs, caused by kswapd hoarding the LRU lock within
isolate_lru_pages() when sc->reclaim_idx is 0, which is a small zone.
The LRU list was a couple hundred GiBs and the condition
(page_zonenum(page) > sc->reclaim_idx) in isolate_lru_pages() was
skipping GiBs of pages while holding the LRU spinlock with interrupts
disabled.
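
For illustration, a paraphrased sketch of the scan loop in question
(simplified; not the exact mm/vmscan.c source) shows why an ineligible
reclaim_idx turns isolation into a long walk under the lock:

	/*
	 * Sketch only: skipped pages do not count toward the nr_to_scan
	 * budget, so with sc->reclaim_idx == 0 the loop can walk hundreds
	 * of GiBs of higher-zone pages under the LRU spinlock before it
	 * finds enough eligible pages to stop.
	 */
	for (total_scan = 0, scan = 0;
	     scan < nr_to_scan && !list_empty(src); total_scan++) {
		struct page *page = lru_to_page(src);

		if (page_zonenum(page) > sc->reclaim_idx) {
			/* ineligible zone: park the page and keep scanning */
			list_move(&page->lru, &pages_skipped);
			continue;
		}

		/* ... try to isolate the page onto the private list ... */
		scan++;
	}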

On further inspection, it seems like there are two issues:

1) If kswapd could not sleep on return from balance_pgdat() (for
example because all zones are still unbalanced), classzone_idx is
unintentionally reset to 0, and the next reclaim cycle of kswapd will
target only the lowest and smallest zone while traversing all of
memory.

2) Fundamentally, isolate_lru_pages() behaves very badly when an
allocation has woken kswapd for a small zone on a very large machine
running very large jobs: it can hoard the LRU spinlock while skipping
over hundreds of GiBs of pages.

This patch only fixes (1). Issue (2) needs a more fundamental solution.
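
The root of (1) is the hard-coded fallback of 0 passed to
kswapd_classzone_idx(). A paraphrased sketch of that helper at the time
of this patch (not the exact mm/vmscan.c source):

	static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
						   enum zone_type classzone_idx)
	{
		/*
		 * Sketch only: pgdat->kswapd_classzone_idx is reset to
		 * MAX_NR_ZONES once kswapd has consumed it, so if kswapd
		 * fails to sleep and re-reads it with a fallback of 0,
		 * the whole next cycle targets the lowest zone.
		 */
		if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
			return classzone_idx;	/* no pending wakeup: use fallback */

		return max(pgdat->kswapd_classzone_idx, classzone_idx);
	}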

Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Yang Shi June 28, 2019, 6:52 p.m. UTC | #1
On 6/27/19 6:55 PM, Shakeel Butt wrote:
> On production we have noticed hard lockups on large machines running
> large jobs, caused by kswapd hoarding the LRU lock within
> isolate_lru_pages() when sc->reclaim_idx is 0, which is a small zone.
> The LRU list was a couple hundred GiBs and the condition
> (page_zonenum(page) > sc->reclaim_idx) in isolate_lru_pages() was
> skipping GiBs of pages while holding the LRU spinlock with interrupts
> disabled.
>
> On further inspection, it seems like there are two issues:
>
> 1) If kswapd could not sleep on return from balance_pgdat() (for
> example because all zones are still unbalanced), classzone_idx is
> unintentionally reset to 0, and the next reclaim cycle of kswapd will
> target only the lowest and smallest zone while traversing all of
> memory.
>
> 2) Fundamentally, isolate_lru_pages() behaves very badly when an
> allocation has woken kswapd for a small zone on a very large machine
> running very large jobs: it can hoard the LRU spinlock while skipping
> over hundreds of GiBs of pages.
>
> This patch only fixes (1). Issue (2) needs a more fundamental solution.
>
> Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> ---
>   mm/vmscan.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9e3292ee5c7c..786dacfdfe29 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3908,7 +3908,7 @@ static int kswapd(void *p)
>   
>   		/* Read the new order and classzone_idx */
>   		alloc_order = reclaim_order = pgdat->kswapd_order;
> -		classzone_idx = kswapd_classzone_idx(pgdat, 0);
> +		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);

I'm a little bit confused by the fix. What happens if kswapd is woken
for a lower zone? It looks like kswapd may just reclaim the higher zone
instead of the lower zone.

For example, after bootup classzone_idx should be (MAX_NR_ZONES - 1);
if GFP_DMA is used for an allocation and kswapd is woken up for
ZONE_DMA, kswapd_classzone_idx would still return (MAX_NR_ZONES - 1)
since kswapd_classzone_idx(pgdat, classzone_idx) returns the max
classzone_idx.
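
Walking that scenario through the helper (values purely illustrative):

	/*
	 * kswapd's local classzone_idx        = MAX_NR_ZONES - 1  (from bootup)
	 * waker: wakeup for ZONE_DMA records
	 * pgdat->kswapd_classzone_idx         = 0
	 *
	 * kswapd_classzone_idx(pgdat, classzone_idx)
	 *     = max(0, MAX_NR_ZONES - 1)
	 *     = MAX_NR_ZONES - 1   -- the ZONE_DMA request is effectively lost
	 */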

>   		pgdat->kswapd_order = 0;
>   		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
>
Shakeel Butt June 28, 2019, 7:27 p.m. UTC | #2
On Fri, Jun 28, 2019 at 11:53 AM Yang Shi <yang.shi@linux.alibaba.com> wrote:
> [...]
> >               /* Read the new order and classzone_idx */
> >               alloc_order = reclaim_order = pgdat->kswapd_order;
> > -             classzone_idx = kswapd_classzone_idx(pgdat, 0);
> > +             classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
>
> I'm a little bit confused by the fix. What happens if kswapd is woken
> for a lower zone? It looks like kswapd may just reclaim the higher
> zone instead of the lower zone.
>
> For example, after bootup classzone_idx should be (MAX_NR_ZONES - 1);
> if GFP_DMA is used for an allocation and kswapd is woken up for
> ZONE_DMA, kswapd_classzone_idx would still return (MAX_NR_ZONES - 1)
> since kswapd_classzone_idx(pgdat, classzone_idx) returns the max
> classzone_idx.
>

Indeed you are right. I think kswapd_classzone_idx() is too
convoluted: it has different semantics when called from the wakers
than when called from kswapd(). Let me see if we can decouple the
logic in this function based on the context (or have two separate
functions, one for each context).

thanks,
Shakeel
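
A hypothetical sketch of the kind of split being described (helper
names and details invented here purely for illustration; not the actual
follow-up patch):

	/* Waker side (hypothetical helper): merge the new request with any
	 * pending one so that concurrent wakeups are not lost. */
	static void kswapd_record_classzone_idx(pg_data_t *pgdat,
						enum zone_type classzone_idx)
	{
		if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES ||
		    classzone_idx > pgdat->kswapd_classzone_idx)
			pgdat->kswapd_classzone_idx = classzone_idx;
	}

	/* kswapd side (hypothetical helper): consume the pending request,
	 * falling back to kswapd's previous classzone_idx instead of 0. */
	static enum zone_type kswapd_read_classzone_idx(pg_data_t *pgdat,
							enum zone_type prev)
	{
		enum zone_type idx = pgdat->kswapd_classzone_idx;

		return (idx == MAX_NR_ZONES) ? prev : idx;
	}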

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9e3292ee5c7c..786dacfdfe29 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3908,7 +3908,7 @@ static int kswapd(void *p)
 
 		/* Read the new order and classzone_idx */
 		alloc_order = reclaim_order = pgdat->kswapd_order;
-		classzone_idx = kswapd_classzone_idx(pgdat, 0);
+		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
 		pgdat->kswapd_order = 0;
 		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;