
[v2] mm: thp: update split_queue_len correctly

Message ID 20211123190916.1738458-1-shakeelb@google.com (mailing list archive)
State New
Series [v2] mm: thp: update split_queue_len correctly

Commit Message

Shakeel Butt Nov. 23, 2021, 7:09 p.m. UTC
The deferred THPs are split under memory pressure through the shrinker
callback, and splitting a THP during reclaim can fail for several
reasons, such as failing to lock the THP, the THP being under
writeback, or an unexpected number of pins on the THP. Such pages are
put back on the deferred split list to be considered later. However,
the kernel does not update the deferred queue size when putting back
pages whose split failed. This patch fixes that.

Without this patch, split_queue_len can underflow. The shrinker will
then always see THPs to split even when there are none and waste CPU
scanning the empty list.

Fixes: 364c1eebe453 ("mm: thp: extract split_queue_* into a struct")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
Changes since v1:
- updated commit message
- incorporated Yang Shi's suggestion

 mm/huge_memory.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
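
To make the symptom described above concrete, here is a toy userspace
illustration (not kernel code; the counter name is only borrowed for
readability). split_queue_len is an unsigned long, so decrementing it
more times than pages were ever queued wraps it to a huge value, and a
shrinker-style "is there work?" check never reads zero:

	#include <stdio.h>

	int main(void)
	{
		unsigned long split_queue_len = 2;	/* two THPs queued */

		split_queue_len -= 3;			/* one decrement too many */

		/* A count callback reporting this value always claims pending work. */
		printf("reported queue length: %lu\n", split_queue_len);
		return 0;
	}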

Comments

Yang Shi Nov. 23, 2021, 8 p.m. UTC | #1
On Tue, Nov 23, 2021 at 11:09 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> The deferred THPs are split on memory pressure through shrinker
> callback and splitting of THP during reclaim can fail for several
> reasons like unable to lock the THP, under writeback or unexpected
> number of pins on the THP. Such pages are put back on the deferred split
> list for consideration later. However kernel does not update the
> deferred queue size on putting back the pages whose split was failed.
> This patch fixes that.
>
> Without this patch the split_queue_len can underflow. Shrinker will
> always get that there are some THPs to split even if there are not and
> waste some cpu to scan the empty list.
>
> Fixes: 364c1eebe453 ("mm: thp: extract split_queue_* into a struct")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> ---
> Changes since v1:
> - updated commit message
> - incorporated Yang Shi's suggestion

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
>  mm/huge_memory.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e5483347291c..d393028681e2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2809,7 +2809,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>         unsigned long flags;
>         LIST_HEAD(list), *pos, *next;
>         struct page *page;
> -       int split = 0;
> +       unsigned long split = 0;
>
>  #ifdef CONFIG_MEMCG
>         if (sc->memcg)
> @@ -2847,6 +2847,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>
>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>         list_splice_tail(&list, &ds_queue->split_queue);
> +       ds_queue->split_queue_len -= split;
>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>
>         /*
> --
> 2.34.0.rc2.393.gf8c9666880-goog
>
Kirill A. Shutemov Nov. 24, 2021, 8:12 p.m. UTC | #2
On Tue, Nov 23, 2021 at 11:09:16AM -0800, Shakeel Butt wrote:
> The deferred THPs are split on memory pressure through shrinker
> callback and splitting of THP during reclaim can fail for several
> reasons like unable to lock the THP, under writeback or unexpected
> number of pins on the THP. Such pages are put back on the deferred split
> list for consideration later. However kernel does not update the
> deferred queue size on putting back the pages whose split was failed.
> This patch fixes that.

Hm. No. split_huge_page_to_list() updates the queue size on split success.

NAK.
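
In other words, split_huge_page_to_list() already takes a successfully
split page off its deferred queue and decrements the length itself, so
the extra subtraction of the success count after the splice would
account for those splits twice. A toy userspace model of that double
accounting (illustrative names only, not kernel code):

	#include <stdio.h>

	struct toy_queue {
		unsigned long len;	/* models ds_queue->split_queue_len */
	};

	/* Models a successful split: the split path fixes up the length itself. */
	static void split_one(struct toy_queue *q)
	{
		q->len--;
	}

	int main(void)
	{
		struct toy_queue q = { .len = 4 };	/* four THPs scanned */
		unsigned long split = 0;
		int i;

		for (i = 0; i < 4; i++) {
			if (i != 3) {			/* pretend the last split fails */
				split_one(&q);		/* length already adjusted here */
				split++;
			}
			/* a failed split stays queued and stays counted */
		}

		printf("len after scan: %lu\n", q.len);	/* 1, which is correct */

		q.len -= split;			/* the proposed extra subtraction */
		printf("len after extra subtraction: %lu\n", q.len);	/* wrapped */
		return 0;
	}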
Shakeel Butt Nov. 24, 2021, 8:44 p.m. UTC | #3
On Wed, Nov 24, 2021 at 12:12 PM Kirill A. Shutemov
<kirill@shutemov.name> wrote:
>
> On Tue, Nov 23, 2021 at 11:09:16AM -0800, Shakeel Butt wrote:
> > The deferred THPs are split on memory pressure through shrinker
> > callback and splitting of THP during reclaim can fail for several
> > reasons like unable to lock the THP, under writeback or unexpected
> > number of pins on the THP. Such pages are put back on the deferred split
> > list for consideration later. However kernel does not update the
> > deferred queue size on putting back the pages whose split was failed.
> > This patch fixes that.
>
> Hm. No. split_huge_page_to_list() updates the queue size on split success.
>

Right. This is really convoluted. split_huge_page_to_list() is just
assuming that if the given page is on a deferred list then it must be
on the list returned by get_deferred_split_queue(page). The
interaction of move_charge and deferred split seems broken.

Andrew, can you please drop this patch?
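
For reference, a paraphrase of the queue-selection helper being
discussed, assuming roughly the mm/huge_memory.c of that era (accessor
names such as page_memcg() vary across kernel versions, so treat this
as a sketch rather than the exact in-tree code). A THP charged to a
memcg is accounted on that memcg's deferred queue, otherwise on the
node's queue, and the split path re-derives the queue from the page
while the scan path uses the queue it started from:

	static struct deferred_split *get_deferred_split_queue(struct page *page)
	{
		struct mem_cgroup *memcg = page_memcg(compound_head(page));
		struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));

		if (memcg)
			return &memcg->deferred_split_queue;
		return &pgdat->deferred_split_queue;
	}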
Yang Shi Nov. 24, 2021, 9:17 p.m. UTC | #4
On Wed, Nov 24, 2021 at 12:44 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Wed, Nov 24, 2021 at 12:12 PM Kirill A. Shutemov
> <kirill@shutemov.name> wrote:
> >
> > On Tue, Nov 23, 2021 at 11:09:16AM -0800, Shakeel Butt wrote:
> > > The deferred THPs are split on memory pressure through shrinker
> > > callback and splitting of THP during reclaim can fail for several
> > > reasons like unable to lock the THP, under writeback or unexpected
> > > number of pins on the THP. Such pages are put back on the deferred split
> > > list for consideration later. However kernel does not update the
> > > deferred queue size on putting back the pages whose split was failed.
> > > This patch fixes that.
> >
> > Hm. No. split_huge_page_to_list() updates the queue size on split success.
> >
>
> Right. This is really convoluted. split_huge_page_to_list() is just
> assuming that if the given page is on a deferred list then it must be
> on the list returned by get_deferred_split_queue(page). The
> interaction of move_charge and deferred split seems broken.

Because memcg code doesn't move charge for PTE mapped THP at all. See
the below comment from mem_cgroup_move_charge_pte_range():

"We can have a part of the split pmd here. Moving it can be done but
it would be too convoluted so simply ignore such a partial THP and
keep it in original memcg. There should be somebody mapping the head."

BTW, did you run into any problem related to this?

>
> Andrew, can you please drop this patch?
Shakeel Butt Nov. 24, 2021, 9:19 p.m. UTC | #5
On Wed, Nov 24, 2021 at 1:17 PM Yang Shi <shy828301@gmail.com> wrote:
>
> On Wed, Nov 24, 2021 at 12:44 PM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > On Wed, Nov 24, 2021 at 12:12 PM Kirill A. Shutemov
> > <kirill@shutemov.name> wrote:
> > >
> > > On Tue, Nov 23, 2021 at 11:09:16AM -0800, Shakeel Butt wrote:
> > > > The deferred THPs are split on memory pressure through shrinker
> > > > callback and splitting of THP during reclaim can fail for several
> > > > reasons like unable to lock the THP, under writeback or unexpected
> > > > number of pins on the THP. Such pages are put back on the deferred split
> > > > list for consideration later. However kernel does not update the
> > > > deferred queue size on putting back the pages whose split was failed.
> > > > This patch fixes that.
> > >
> > > Hm. No. split_huge_page_to_list() updates the queue size on split success.
> > >
> >
> > Right. This is really convoluted. split_huge_page_to_list() is just
> > assuming that if the given page is on a deferred list then it must be
> > on the list returned by get_deferred_split_queue(page). The
> > interaction of move_charge and deferred split seems broken.
>
> Because memcg code doesn't move charge for PTE mapped THP at all. See
> the below comment from mem_cgroup_move_charge_pte_range():
>
> "We can have a part of the split pmd here. Moving it can be done but
> it would be too convoluted so simply ignore such a partial THP and
> keep it in original memcg. There should be somebody mapping the head."
>
> BTW, did you run into any problem related to this?
>

No, just reading code to see if I can share code for the sync splitting of THPs.

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..d393028681e2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2809,7 +2809,7 @@  static unsigned long deferred_split_scan(struct shrinker *shrink,
 	unsigned long flags;
 	LIST_HEAD(list), *pos, *next;
 	struct page *page;
-	int split = 0;
+	unsigned long split = 0;
 
 #ifdef CONFIG_MEMCG
 	if (sc->memcg)
@@ -2847,6 +2847,7 @@  static unsigned long deferred_split_scan(struct shrinker *shrink,
 
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	list_splice_tail(&list, &ds_queue->split_queue);
+	ds_queue->split_queue_len -= split;
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 
 	/*