From patchwork Wed Dec 2 18:27:25 2020
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 9/9] mm: vmscan: shrink deferred objects proportional to priority
Date: Wed, 2 Dec 2020 10:27:25 -0800
Message-Id: <20201202182725.265020-10-shy828301@gmail.com>
In-Reply-To: <20201202182725.265020-1-shy828301@gmail.com>
References: <20201202182725.265020-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd value, and when a shrink that can make progress finally comes along it clamps the slab caches down to almost nothing. That is undesirable for sustaining the working set. So shrink the deferred objects proportionally to the reclaim priority, and cap nr_deferred at twice the number of cache items.
Signed-off-by: Yang Shi
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7dc8075c371b..9d2a6485e982 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);

-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}

+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));

 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta,
 				   total_scan, priority);
@@ -608,10 +579,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}

-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.