[4.18] Revert "mm: slowly shrink slabs with a relatively small number of objects"

Message ID 20181026111859.23807-1-sashal@kernel.org (mailing list archive)
State New, archived
Series [4.18] Revert "mm: slowly shrink slabs with a relatively small number of objects"

Commit Message

Sasha Levin Oct. 26, 2018, 11:18 a.m. UTC
This reverts commit 62aad93f09c1952ede86405894df1b22012fd5ab.

Which was upstream commit 172b06c32b94 ("mm: slowly shrink slabs with a
relatively small number of objects").

The upstream commit was found to cause regressions. While there is a
proposed fix upstream, revert this patch from stable trees for now, as
testing the fix will take some time.

Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/vmscan.c | 11 -----------
 1 file changed, 11 deletions(-)

Comments

Sasha Levin Oct. 31, 2018, 1:52 p.m. UTC | #1
On Fri, Oct 26, 2018 at 07:18:59AM -0400, Sasha Levin wrote:
>This reverts commit 62aad93f09c1952ede86405894df1b22012fd5ab.
>
>Which was upstream commit 172b06c32b94 ("mm: slowly shrink slabs with a
>relatively small number of objects").
>
>The upstream commit was found to cause regressions. While there is a
>proposed fix upstream, revert this patch from stable trees for now, as
>testing the fix will take some time.
>
>Signed-off-by: Sasha Levin <sashal@kernel.org>
>---
> mm/vmscan.c | 11 -----------
> 1 file changed, 11 deletions(-)
>
>diff --git a/mm/vmscan.c b/mm/vmscan.c
>index fc0436407471..03822f86f288 100644
>--- a/mm/vmscan.c
>+++ b/mm/vmscan.c
>@@ -386,17 +386,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> 	delta = freeable >> priority;
> 	delta *= 4;
> 	do_div(delta, shrinker->seeks);
>-
>-	/*
>-	 * Make sure we apply some minimal pressure on default priority
>-	 * even on small cgroups. Stale objects are not only consuming memory
>-	 * by themselves, but can also hold a reference to a dying cgroup,
>-	 * preventing it from being reclaimed. A dying cgroup with all
>-	 * corresponding structures like per-cpu stats and kmem caches
>-	 * can be really big, so it may lead to a significant waste of memory.
>-	 */
>-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
>-
> 	total_scan += delta;
> 	if (total_scan < 0) {
> 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",

I've queued it up for 4.18.

--
Thanks,
Sasha

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index fc0436407471..03822f86f288 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -386,17 +386,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	delta = freeable >> priority;
 	delta *= 4;
 	do_div(delta, shrinker->seeks);
-
-	/*
-	 * Make sure we apply some minimal pressure on default priority
-	 * even on small cgroups. Stale objects are not only consuming memory
-	 * by themselves, but can also hold a reference to a dying cgroup,
-	 * preventing it from being reclaimed. A dying cgroup with all
-	 * corresponding structures like per-cpu stats and kmem caches
-	 * can be really big, so it may lead to a significant waste of memory.
-	 */
-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
-
 	total_scan += delta;
 	if (total_scan < 0) {
 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",