From patchwork Wed Feb 17 00:13:10 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090739
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 01/13] mm: vmscan: use nid from shrink_control for tracepoint
Date: Tue, 16 Feb 2021 16:13:10 -0800
Message-Id: <20210217001322.2226796-2-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The tracepoint's nid should show which node the shrink happens on. The
start tracepoint uses the nid from shrinkctl, but the local nid may have
been reset to 0 before the end tracepoint fires if the shrinker is not
NUMA aware, so the trace log can show a shrink starting on one node but
ending on another, which is confusing. In addition, a following patch
will stop using nid directly in do_shrink_slab(), so this change also
helps clean up the code.

Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b1b574ad199d..b512dd5e3a1c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -535,7 +535,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	else
 		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
 
-	trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan);
+	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 
 	return freed;
 }
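For context, the inconsistency comes from do_shrink_slab() zeroing its
local nid for shrinkers that are not NUMA aware. A minimal sketch of
that pattern, simplified from kernel source of this era (argument lists
abbreviated, so illustrative rather than verbatim):

static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
				    struct shrinker *shrinker, int priority)
{
	unsigned long freed = 0;
	int nid = shrinkctl->nid;

	/* Non-NUMA-aware shrinkers keep all their state in slot 0 ... */
	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
		nid = 0;

	/* ... so the local nid may no longer name the node the shrink
	 * was invoked for by the time the end tracepoint fires.
	 * Reporting shrinkctl->nid keeps start and end consistent.
	 */
	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, /* ... */);
	return freed;
}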
From patchwork Wed Feb 17 00:13:11 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090745
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 02/13] mm: vmscan: consolidate shrinker_maps handling code
Date: Tue, 16 Feb 2021 16:13:11 -0800
Message-Id: <20210217001322.2226796-3-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The shrinker map management is not purely memcg specific; it sits at the
intersection between memory cgroups and shrinkers. It is the allocation
and assignment of a structure, and the only memcg-specific bit is that
the map is stored in a memcg structure. So move the shrinker_maps
handling code into vmscan.c for tighter integration with the shrinker
code, and remove the "memcg_" prefix. There is no functional change.
Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
Signed-off-by: Yang Shi
---
 include/linux/memcontrol.h |  11 ++--
 mm/huge_memory.c           |   4 +-
 mm/list_lru.c              |   6 +-
 mm/memcontrol.c            | 129 +-----------------------------------
 mm/vmscan.c                | 131 ++++++++++++++++++++++++++++++++++++-
 5 files changed, 141 insertions(+), 140 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index eeb0b52203e9..1739f17e0939 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1581,10 +1581,9 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 	return false;
 }
 
-extern int memcg_expand_shrinker_maps(int new_id);
-
-extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
-				   int nid, int shrinker_id);
+int alloc_shrinker_maps(struct mem_cgroup *memcg);
+void free_shrinker_maps(struct mem_cgroup *memcg);
+void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
@@ -1594,8 +1593,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 	return false;
 }
 
-static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
-					  int nid, int shrinker_id)
+static inline void set_shrinker_bit(struct mem_cgroup *memcg,
+				    int nid, int shrinker_id)
 {
 }
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 91ca9b103ee5..1c2ee6ecd6cf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2832,8 +2832,8 @@ void deferred_split_huge_page(struct page *page)
 		ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG
 		if (memcg)
-			memcg_set_shrinker_bit(memcg, page_to_nid(page),
-					       deferred_split_shrinker.id);
+			set_shrinker_bit(memcg, page_to_nid(page),
+					 deferred_split_shrinker.id);
 #endif
 	}
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index fe230081690b..628030fa5f69 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -125,8 +125,8 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 		list_add_tail(item, &l->list);
 		/* Set shrinker bit if the first element was added */
 		if (!l->nr_items++)
-			memcg_set_shrinker_bit(memcg, nid,
-					       lru_shrinker_id(lru));
+			set_shrinker_bit(memcg, nid,
+					 lru_shrinker_id(lru));
 		nlru->nr_items++;
 		spin_unlock(&nlru->lock);
 		return true;
@@ -548,7 +548,7 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 
 	if (src->nr_items) {
 		dst->nr_items += src->nr_items;
-		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
+		set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
 		src->nr_items = 0;
 	}
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1bdb93ee8e72..f5c9a0d2160b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -397,129 +397,6 @@ DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key);
 EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
-static int memcg_shrinker_map_size;
-static DEFINE_MUTEX(memcg_shrinker_map_mutex);
-
-static void memcg_free_shrinker_map_rcu(struct rcu_head *head)
-{
-	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
-}
-
-static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
-					 int size, int old_size)
-{
-	struct memcg_shrinker_map *new, *old;
-	int nid;
-
-	lockdep_assert_held(&memcg_shrinker_map_mutex);
-
-	for_each_node(nid) {
-		old = rcu_dereference_protected(
-			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
-		/* Not yet online memcg */
-		if (!old)
-			return 0;
-
-		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
-		if (!new)
-			return -ENOMEM;
-
-		/* Set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_size);
-		memset((void *)new->map + old_size, 0, size - old_size);
-
-		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
-		call_rcu(&old->rcu, memcg_free_shrinker_map_rcu);
-	}
-
-	return 0;
-}
-
-static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
-{
-	struct mem_cgroup_per_node *pn;
-	struct memcg_shrinker_map *map;
-	int nid;
-
-	if (mem_cgroup_is_root(memcg))
-		return;
-
-	for_each_node(nid) {
-		pn = mem_cgroup_nodeinfo(memcg, nid);
-		map = rcu_dereference_protected(pn->shrinker_map, true);
-		kvfree(map);
-		rcu_assign_pointer(pn->shrinker_map, NULL);
-	}
-}
-
-static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
-{
-	struct memcg_shrinker_map *map;
-	int nid, size, ret = 0;
-
-	if (mem_cgroup_is_root(memcg))
-		return 0;
-
-	mutex_lock(&memcg_shrinker_map_mutex);
-	size = memcg_shrinker_map_size;
-	for_each_node(nid) {
-		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
-		if (!map) {
-			memcg_free_shrinker_maps(memcg);
-			ret = -ENOMEM;
-			break;
-		}
-		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
-	}
-	mutex_unlock(&memcg_shrinker_map_mutex);
-
-	return ret;
-}
-
-int memcg_expand_shrinker_maps(int new_id)
-{
-	int size, old_size, ret = 0;
-	struct mem_cgroup *memcg;
-
-	size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
-	old_size = memcg_shrinker_map_size;
-	if (size <= old_size)
-		return 0;
-
-	mutex_lock(&memcg_shrinker_map_mutex);
-	if (!root_mem_cgroup)
-		goto unlock;
-
-	for_each_mem_cgroup(memcg) {
-		if (mem_cgroup_is_root(memcg))
-			continue;
-		ret = memcg_expand_one_shrinker_map(memcg, size, old_size);
-		if (ret) {
-			mem_cgroup_iter_break(NULL, memcg);
-			goto unlock;
-		}
-	}
-unlock:
-	if (!ret)
-		memcg_shrinker_map_size = size;
-	mutex_unlock(&memcg_shrinker_map_mutex);
-	return ret;
-}
-
-void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
-{
-	if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
-		struct memcg_shrinker_map *map;
-
-		rcu_read_lock();
-		map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
-		/* Pairs with smp mb in shrink_slab() */
-		smp_mb__before_atomic();
-		set_bit(shrinker_id, map->map);
-		rcu_read_unlock();
-	}
-}
-
 /**
  * mem_cgroup_css_from_page - css of the memcg associated with a page
  * @page: page of interest
@@ -5369,11 +5246,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
 	/*
-	 * A memcg must be visible for memcg_expand_shrinker_maps()
+	 * A memcg must be visible for expand_shrinker_maps()
 	 * by the time the maps are allocated. So, we allocate maps
 	 * here, when for_each_mem_cgroup() can't skip it.
	 */
-	if (memcg_alloc_shrinker_maps(memcg)) {
+	if (alloc_shrinker_maps(memcg)) {
 		mem_cgroup_id_remove(memcg);
 		return -ENOMEM;
 	}
@@ -5437,7 +5314,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
 	vmpressure_cleanup(&memcg->vmpressure);
 	cancel_work_sync(&memcg->high_work);
 	mem_cgroup_remove_from_trees(memcg);
-	memcg_free_shrinker_maps(memcg);
+	free_shrinker_maps(memcg);
 	memcg_free_kmem(memcg);
 	mem_cgroup_free(memcg);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b512dd5e3a1c..96b08c79f18d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -185,6 +185,131 @@ static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
 
 #ifdef CONFIG_MEMCG
+
+static int memcg_shrinker_map_size;
+static DEFINE_MUTEX(memcg_shrinker_map_mutex);
+
+static void free_shrinker_map_rcu(struct rcu_head *head)
+{
+	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
+}
+
+static int expand_one_shrinker_map(struct mem_cgroup *memcg,
+				   int size, int old_size)
+{
+	struct memcg_shrinker_map *new, *old;
+	int nid;
+
+	lockdep_assert_held(&memcg_shrinker_map_mutex);
+
+	for_each_node(nid) {
+		old = rcu_dereference_protected(
+			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
+		/* Not yet online memcg */
+		if (!old)
+			return 0;
+
+		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+		if (!new)
+			return -ENOMEM;
+
+		/* Set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_size);
+		memset((void *)new->map + old_size, 0, size - old_size);
+
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
+		call_rcu(&old->rcu, free_shrinker_map_rcu);
+	}
+
+	return 0;
+}
+
+void free_shrinker_maps(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_per_node *pn;
+	struct memcg_shrinker_map *map;
+	int nid;
+
+	if (mem_cgroup_is_root(memcg))
+		return;
+
+	for_each_node(nid) {
+		pn = mem_cgroup_nodeinfo(memcg, nid);
+		map = rcu_dereference_protected(pn->shrinker_map, true);
+		kvfree(map);
+		rcu_assign_pointer(pn->shrinker_map, NULL);
+	}
+}
+
+int alloc_shrinker_maps(struct mem_cgroup *memcg)
+{
+	struct memcg_shrinker_map *map;
+	int nid, size, ret = 0;
+
+	if (mem_cgroup_is_root(memcg))
+		return 0;
+
+	mutex_lock(&memcg_shrinker_map_mutex);
+	size = memcg_shrinker_map_size;
+	for_each_node(nid) {
+		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
+		if (!map) {
+			free_shrinker_maps(memcg);
+			ret = -ENOMEM;
+			break;
+		}
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
+	}
+	mutex_unlock(&memcg_shrinker_map_mutex);
+
+	return ret;
+}
+
+static int expand_shrinker_maps(int new_id)
+{
+	int size, old_size, ret = 0;
+	struct mem_cgroup *memcg;
+
+	size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
+	old_size = memcg_shrinker_map_size;
+	if (size <= old_size)
+		return 0;
+
+	mutex_lock(&memcg_shrinker_map_mutex);
+	if (!root_mem_cgroup)
+		goto unlock;
+
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+	do {
+		if (mem_cgroup_is_root(memcg))
+			continue;
+		ret = expand_one_shrinker_map(memcg, size, old_size);
+		if (ret) {
+			mem_cgroup_iter_break(NULL, memcg);
+			goto unlock;
+		}
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
+unlock:
+	if (!ret)
+		memcg_shrinker_map_size = size;
+	mutex_unlock(&memcg_shrinker_map_mutex);
+	return ret;
+}
+
+void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
+{
+	if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
+		struct memcg_shrinker_map *map;
+
+		rcu_read_lock();
+		map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+		/* Pairs with smp mb in shrink_slab() */
+		smp_mb__before_atomic();
+		set_bit(shrinker_id, map->map);
+		rcu_read_unlock();
+	}
+}
+
 /*
  * We allow subsystems to populate their shrinker-related
  * LRU lists before register_shrinker_prepared() is called
@@ -212,7 +337,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 		goto unlock;
 
 	if (id >= shrinker_nr_max) {
-		if (memcg_expand_shrinker_maps(id)) {
+		if (expand_shrinker_maps(id)) {
 			idr_remove(&shrinker_idr, id);
 			goto unlock;
 		}
@@ -589,7 +714,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			 * case, we invoke the shrinker one more time and reset
 			 * the bit if it reports that it is not empty anymore.
 			 * The memory barrier here pairs with the barrier in
-			 * memcg_set_shrinker_bit():
+			 * set_shrinker_bit():
 			 *
 			 * list_lru_add()     shrink_slab_memcg()
 			 *   list_add_tail()    clear_bit()
@@ -601,7 +726,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			if (ret == SHRINK_EMPTY)
 				ret = 0;
 			else
-				memcg_set_shrinker_bit(memcg, nid, i);
+				set_shrinker_bit(memcg, nid, i);
 		}
 
 		freed += ret;
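The code being consolidated is, at its core, a per-memcg, per-node
bitmap with one bit per registered shrinker, allocated as a
flexible-array member. A rough self-contained user-space analog of the
sizing and bit-setting involved (names such as demo_map are invented for
illustration; the kernel version additionally embeds a struct rcu_head
for RCU-deferred freeing):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

struct demo_map {
	int nr_bits;		/* stands in for the kernel's rcu_head */
	unsigned long map[];	/* one bit per shrinker id */
};

int main(void)
{
	int nr_shrinkers = 100, id = 42;
	size_t size = DIV_ROUND_UP(nr_shrinkers, BITS_PER_LONG) * sizeof(unsigned long);
	struct demo_map *m = calloc(1, sizeof(*m) + size);

	if (!m)
		return 1;
	m->nr_bits = nr_shrinkers;
	/* set_shrinker_bit() analog: mark shrinker `id` as having work */
	m->map[id / BITS_PER_LONG] |= 1UL << (id % BITS_PER_LONG);
	printf("map bytes: %zu, bit %d set: %d\n", size, id,
	       !!(m->map[id / BITS_PER_LONG] & (1UL << (id % BITS_PER_LONG))));
	free(m);
	return 0;
}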
From patchwork Wed Feb 17 00:13:12 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090743
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 03/13] mm: vmscan: use shrinker_rwsem to protect shrinker_maps allocation
Date: Tue, 16 Feb 2021 16:13:12 -0800
Message-Id: <20210217001322.2226796-4-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Since memcg_shrinker_map_size can only be changed while holding
shrinker_rwsem exclusively, the read side could in principle be
protected by holding the read lock, which makes a dedicated mutex
superfluous. Kirill Tkhai suggested using the write lock instead, since:

* We want the assignment to shrinker_maps to be visible to
  shrink_slab_memcg().
* shrink_slab_memcg() dereferences the maps via
  rcu_dereference_protected(); if alloc_shrinker_maps() only took the
  read lock, that dereference would not actually be protected.
* The read lock would make alloc_shrinker_info() racy against memory
  allocation failure: alloc_shrinker_info()->free_shrinker_info() may
  free the memory right after shrink_slab_memcg() dereferenced it. One
  could argue that shrink_slab_memcg()->mem_cgroup_online() protects
  against this, but that is not a dependency we want to have to
  remember in the future, since it spreads modularity.

A test with a heavy paging workload did not show the write lock making
things worse.
Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 96b08c79f18d..543af6ec1e02 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -187,7 +187,6 @@ static DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 
 static int memcg_shrinker_map_size;
-static DEFINE_MUTEX(memcg_shrinker_map_mutex);
 
 static void free_shrinker_map_rcu(struct rcu_head *head)
 {
@@ -200,8 +199,6 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg,
 	struct memcg_shrinker_map *new, *old;
 	int nid;
 
-	lockdep_assert_held(&memcg_shrinker_map_mutex);
-
 	for_each_node(nid) {
 		old = rcu_dereference_protected(
 			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
@@ -249,7 +246,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
 	if (mem_cgroup_is_root(memcg))
 		return 0;
 
-	mutex_lock(&memcg_shrinker_map_mutex);
+	down_write(&shrinker_rwsem);
 	size = memcg_shrinker_map_size;
 	for_each_node(nid) {
 		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
@@ -260,7 +257,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
 		}
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
 	}
-	mutex_unlock(&memcg_shrinker_map_mutex);
+	up_write(&shrinker_rwsem);
 
 	return ret;
 }
@@ -275,9 +272,10 @@ static int expand_shrinker_maps(int new_id)
 	if (size <= old_size)
 		return 0;
 
-	mutex_lock(&memcg_shrinker_map_mutex);
 	if (!root_mem_cgroup)
-		goto unlock;
+		goto out;
+
+	lockdep_assert_held(&shrinker_rwsem);
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
@@ -286,13 +284,13 @@ static int expand_shrinker_maps(int new_id)
 		ret = expand_one_shrinker_map(memcg, size, old_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
-			goto unlock;
+			goto out;
 		}
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
-unlock:
+out:
 	if (!ret)
 		memcg_shrinker_map_size = size;
-	mutex_unlock(&memcg_shrinker_map_mutex);
+
 	return ret;
 }
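The resulting locking rule, roughly (a sketch assuming all writers of
the per-memcg map pointers now serialize on shrinker_rwsem; not verbatim
kernel code):

	/* Writers: allocating or expanding the per-memcg shrinker maps. */
	down_write(&shrinker_rwsem);
	/* ... assign or grow shrinker maps for each node/memcg ... */
	up_write(&shrinker_rwsem);

	/* Reader: shrink_slab_memcg() walks the map under the read lock. */
	if (!down_read_trylock(&shrinker_rwsem))
		return 0;
	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map, true);
	/* ... iterate set bits and invoke the shrinkers ... */
	up_read(&shrinker_rwsem);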
From patchwork Wed Feb 17 00:13:13 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090747
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 04/13] mm: vmscan: remove memcg_shrinker_map_size
Date: Tue, 16 Feb 2021 16:13:13 -0800
Message-Id: <20210217001322.2226796-5-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Both memcg_shrinker_map_size and shrinker_nr_max are maintained, but the
map size can be calculated from shrinker_nr_max, so keeping both is
unnecessary. Remove memcg_shrinker_map_size; shrinker_nr_max is the one
to keep, since it is also used for iterating the bit map.
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 543af6ec1e02..2e753c2516fa 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -185,8 +185,12 @@ static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
 
 #ifdef CONFIG_MEMCG
+static int shrinker_nr_max;
 
-static int memcg_shrinker_map_size;
+static inline int shrinker_map_size(int nr_items)
+{
+	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
+}
 
 static void free_shrinker_map_rcu(struct rcu_head *head)
 {
@@ -247,7 +251,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
 		return 0;
 
 	down_write(&shrinker_rwsem);
-	size = memcg_shrinker_map_size;
+	size = shrinker_map_size(shrinker_nr_max);
 	for_each_node(nid) {
 		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
 		if (!map) {
@@ -265,12 +269,13 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
 static int expand_shrinker_maps(int new_id)
 {
 	int size, old_size, ret = 0;
+	int new_nr_max = new_id + 1;
 	struct mem_cgroup *memcg;
 
-	size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
-	old_size = memcg_shrinker_map_size;
+	size = shrinker_map_size(new_nr_max);
+	old_size = shrinker_map_size(shrinker_nr_max);
 	if (size <= old_size)
-		return 0;
+		goto out;
 
 	if (!root_mem_cgroup)
 		goto out;
@@ -289,7 +294,7 @@ static int expand_shrinker_maps(int new_id)
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 out:
 	if (!ret)
-		memcg_shrinker_map_size = size;
+		shrinker_nr_max = new_nr_max;
 
 	return ret;
 }
@@ -322,7 +327,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 
 #define SHRINKER_REGISTERING ((struct shrinker *)~0UL)
 static DEFINE_IDR(shrinker_idr);
-static int shrinker_nr_max;
 
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
@@ -339,8 +343,6 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 			idr_remove(&shrinker_idr, id);
 			goto unlock;
 		}
-
-		shrinker_nr_max = id + 1;
 	}
 	shrinker->id = id;
 	ret = 0;
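To make the sizing arithmetic concrete, here is a tiny self-contained
user-space check of the new helper (BITS_PER_LONG and DIV_ROUND_UP are
redefined locally only because the kernel headers are unavailable
outside the tree):

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Same arithmetic as the shrinker_map_size() helper added above. */
static int shrinker_map_size(int nr_items)
{
	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
}

int main(void)
{
	/* On a 64-bit machine: 1 -> 8, 64 -> 8, 65 -> 16, 192 -> 24 bytes. */
	printf("%d %d %d %d\n", shrinker_map_size(1), shrinker_map_size(64),
	       shrinker_map_size(65), shrinker_map_size(192));
	return 0;
}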
From patchwork Wed Feb 17 00:13:14 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090749
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 05/13] mm: vmscan: use kvfree_rcu instead of call_rcu
Date: Tue, 16 Feb 2021 16:13:14 -0800
Message-Id: <20210217001322.2226796-6-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Use kvfree_rcu() instead of call_rcu() to free the old shrinker_maps;
this removes the need to define a dedicated callback for call_rcu().
Signed-off-by: Yang Shi
Acked-by: Roman Gushchin
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e753c2516fa..c2a309acd86b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,11 +192,6 @@ static inline int shrinker_map_size(int nr_items)
 	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
 }
 
-static void free_shrinker_map_rcu(struct rcu_head *head)
-{
-	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
-}
-
 static int expand_one_shrinker_map(struct mem_cgroup *memcg,
 				   int size, int old_size)
 {
@@ -219,7 +214,7 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg,
 		memset((void *)new->map + old_size, 0, size - old_size);
 
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
-		call_rcu(&old->rcu, free_shrinker_map_rcu);
+		kvfree_rcu(old);
 	}
 
 	return 0;
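The before/after freeing pattern, as a sketch (kvfree_rcu() here is the
single-argument form, which hands the object to the RCU core to kvfree()
after a grace period; a two-argument form, kvfree_rcu(old, rcu), also
exists that names the embedded rcu_head field):

	/* Before: a dedicated callback existed only to kvfree() the object. */
	static void free_shrinker_map_rcu(struct rcu_head *head)
	{
		kvfree(container_of(head, struct memcg_shrinker_map, rcu));
	}
	...
	call_rcu(&old->rcu, free_shrinker_map_rcu);

	/* After: no callback needed; freed once a grace period has elapsed. */
	kvfree_rcu(old);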
From patchwork Wed Feb 17 00:13:15 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090753
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 06/13] mm: memcontrol: rename shrinker_map to shrinker_info
Date: Tue, 16 Feb 2021 16:13:15 -0800
Message-Id: <20210217001322.2226796-7-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The following patch is going to add nr_deferred into shrinker_map; after
that change shrinker_map will no longer contain only a map, so rename it
to "memcg_shrinker_info" and, while at it, also remove the "memcg_"
prefix, yielding "shrinker_info". This should make the patch adding
nr_deferred cleaner and more readable, and make review easier.

Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 include/linux/memcontrol.h |  8 +++---
 mm/memcontrol.c            |  6 ++--
 mm/vmscan.c                | 58 +++++++++++++++++++-------------------
 3 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1739f17e0939..4c9253896e25 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -96,7 +96,7 @@ struct lruvec_stat {
  * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
  * which have elements charged to this memcg.
  */
-struct memcg_shrinker_map {
+struct shrinker_info {
 	struct rcu_head rcu;
 	unsigned long map[];
 };
@@ -118,7 +118,7 @@ struct mem_cgroup_per_node {
 
 	struct mem_cgroup_reclaim_iter	iter;
 
-	struct memcg_shrinker_map __rcu	*shrinker_map;
+	struct shrinker_info __rcu	*shrinker_info;
 
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long		usage_in_excess;/* Set to the value by which */
@@ -1581,8 +1581,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 	return false;
 }
 
-int alloc_shrinker_maps(struct mem_cgroup *memcg);
-void free_shrinker_maps(struct mem_cgroup *memcg);
+int alloc_shrinker_info(struct mem_cgroup *memcg);
+void free_shrinker_info(struct mem_cgroup *memcg);
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
 #else
 #define mem_cgroup_sockets_enabled 0
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f5c9a0d2160b..f64ad0d044d9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5246,11 +5246,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
 	/*
-	 * A memcg must be visible for expand_shrinker_maps()
+	 * A memcg must be visible for expand_shrinker_info()
 	 * by the time the maps are allocated. So, we allocate maps
 	 * here, when for_each_mem_cgroup() can't skip it.
 	 */
-	if (alloc_shrinker_maps(memcg)) {
+	if (alloc_shrinker_info(memcg)) {
 		mem_cgroup_id_remove(memcg);
 		return -ENOMEM;
 	}
@@ -5314,7 +5314,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
 	vmpressure_cleanup(&memcg->vmpressure);
 	cancel_work_sync(&memcg->high_work);
 	mem_cgroup_remove_from_trees(memcg);
-	free_shrinker_maps(memcg);
+	free_shrinker_info(memcg);
 	memcg_free_kmem(memcg);
 	mem_cgroup_free(memcg);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c2a309acd86b..c94861a3ea3e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,15 +192,15 @@ static inline int shrinker_map_size(int nr_items)
 	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
 }
 
-static int expand_one_shrinker_map(struct mem_cgroup *memcg,
-				   int size, int old_size)
+static int expand_one_shrinker_info(struct mem_cgroup *memcg,
+				    int size, int old_size)
 {
-	struct memcg_shrinker_map *new, *old;
+	struct shrinker_info *new, *old;
 	int nid;
 
 	for_each_node(nid) {
 		old = rcu_dereference_protected(
-			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
+			mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true);
 		/* Not yet online memcg */
 		if (!old)
 			return 0;
@@ -213,17 +213,17 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg,
 		memset(new->map, (int)0xff, old_size);
 		memset((void *)new->map + old_size, 0, size - old_size);
 
-		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
 		kvfree_rcu(old);
 	}
 
 	return 0;
 }
 
-void free_shrinker_maps(struct mem_cgroup *memcg)
+void free_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct mem_cgroup_per_node *pn;
-	struct memcg_shrinker_map *map;
+	struct shrinker_info *info;
 	int nid;
 
 	if (mem_cgroup_is_root(memcg))
@@ -231,15 +231,15 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
-		map = rcu_dereference_protected(pn->shrinker_map, true);
-		kvfree(map);
-		rcu_assign_pointer(pn->shrinker_map, NULL);
+		info = rcu_dereference_protected(pn->shrinker_info, true);
+		kvfree(info);
+		rcu_assign_pointer(pn->shrinker_info, NULL);
 	}
 }
 
-int alloc_shrinker_maps(struct mem_cgroup *memcg)
+int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
-	struct memcg_shrinker_map *map;
+	struct shrinker_info *info;
 	int nid, size, ret = 0;
 
 	if (mem_cgroup_is_root(memcg))
@@ -248,20 +248,20 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 	down_write(&shrinker_rwsem);
 	size = shrinker_map_size(shrinker_nr_max);
 	for_each_node(nid) {
-		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
-		if (!map) {
-			free_shrinker_maps(memcg);
+		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
+		if (!info) {
+			free_shrinker_info(memcg);
 			ret = -ENOMEM;
 			break;
 		}
-		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_write(&shrinker_rwsem);
 
 	return ret;
 }
 
-static int expand_shrinker_maps(int new_id)
+static int expand_shrinker_info(int new_id)
 {
 	int size, old_size, ret = 0;
 	int new_nr_max = new_id + 1;
@@ -281,7 +281,7 @@ static int expand_shrinker_info(int new_id)
 	do {
 		if (mem_cgroup_is_root(memcg))
 			continue;
-		ret = expand_one_shrinker_map(memcg, size, old_size);
+		ret = expand_one_shrinker_info(memcg, size, old_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
 			goto out;
@@ -297,13 +297,13 @@ static int expand_shrinker_info(int new_id)
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 {
 	if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
-		struct memcg_shrinker_map *map;
+		struct shrinker_info *info;
 
 		rcu_read_lock();
-		map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+		info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
 		/* Pairs with smp mb in shrink_slab() */
 		smp_mb__before_atomic();
-		set_bit(shrinker_id, map->map);
+		set_bit(shrinker_id, info->map);
 		rcu_read_unlock();
 	}
 }
@@ -334,7 +334,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 		goto unlock;
 
 	if (id >= shrinker_nr_max) {
-		if (expand_shrinker_maps(id)) {
+		if (expand_shrinker_info(id)) {
 			idr_remove(&shrinker_idr, id);
 			goto unlock;
 		}
@@ -663,7 +663,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			struct mem_cgroup *memcg, int priority)
 {
-	struct memcg_shrinker_map *map;
+	struct shrinker_info *info;
 	unsigned long ret, freed = 0;
 	int i;
 
@@ -673,12 +673,12 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 	if (!down_read_trylock(&shrinker_rwsem))
 		return 0;
 
-	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
-					true);
-	if (unlikely(!map))
+	info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+					 true);
+	if (unlikely(!info))
 		goto unlock;
 
-	for_each_set_bit(i, map->map, shrinker_nr_max) {
+	for_each_set_bit(i, info->map, shrinker_nr_max) {
 		struct shrink_control sc = {
 			.gfp_mask = gfp_mask,
 			.nid = nid,
@@ -689,7 +689,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 		shrinker = idr_find(&shrinker_idr, i);
 		if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) {
 			if (!shrinker)
-				clear_bit(i, map->map);
+				clear_bit(i, info->map);
 			continue;
 		}
 
@@ -700,7 +700,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
 		if (ret == SHRINK_EMPTY) {
-			clear_bit(i, map->map);
+			clear_bit(i, info->map);
 			/*
 			 * After the shrinker reported that it had no objects to
 			 * free, but before we cleared the corresponding bit in
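To see why the rename pays off, it helps to picture where the series is
heading: shrinker_info is about to carry more than the bitmap. One
plausible eventual shape, shown purely as an assumption for illustration
(the exact layout is established by the later nr_deferred patches, not
by this one):

	struct shrinker_info {
		struct rcu_head rcu;
		atomic_long_t *nr_deferred;	/* per-memcg deferred work, added later */
		unsigned long *map;		/* bitmap of shrinkers that have work */
	};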
From patchwork Wed Feb 17 00:13:16 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090751
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 07/13] mm: vmscan: add shrinker_info_protected() helper
Date: Tue, 16 Feb 2021 16:13:16 -0800
Message-Id: <20210217001322.2226796-8-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The shrinker_info is dereferenced in a couple of places via
rcu_dereference_protected() with different calling conventions: for
example, one place uses the mem_cgroup_nodeinfo() helper while another
dereferences memcg->nodeinfo[nid]->shrinker_info directly. A later patch
will add more dereference sites, so extract the dereference into a
helper to make the code more readable. No functional change.

Acked-by: Roman Gushchin
Acked-by: Kirill Tkhai
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index c94861a3ea3e..fe6e25f46b55 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,6 +192,13 @@ static inline int shrinker_map_size(int nr_items)
 	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
 }
 
+static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
+						     int nid)
+{
+	return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+					 lockdep_is_held(&shrinker_rwsem));
+}
+
 static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 				    int size, int old_size)
 {
@@ -199,8 +206,7 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 	int nid;
 
 	for_each_node(nid) {
-		old = rcu_dereference_protected(
-			mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true);
+		old = shrinker_info_protected(memcg, nid);
 		/* Not yet online memcg */
 		if (!old)
 			return 0;
@@ -231,7 +237,7 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
-		info = rcu_dereference_protected(pn->shrinker_info, true);
+		info = shrinker_info_protected(memcg, nid);
 		kvfree(info);
 		rcu_assign_pointer(pn->shrinker_info, NULL);
 	}
@@ -673,8 +679,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 	if (!down_read_trylock(&shrinker_rwsem))
 		return 0;
 
-	info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
-					 true);
+	info = shrinker_info_protected(memcg, nid);
 	if (unlikely(!info))
 		goto unlock;
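Beyond removing duplication, the helper also strengthens the annotation:
it passes lockdep_is_held(&shrinker_rwsem) where the open-coded sites
passed a bare true, so lockdep can now flag a dereference made without
the lock held. A generic sketch of the difference (demo_obj and
demo_lock are invented names):

	/* Weak: "trust me" - the check passes unconditionally for any caller. */
	p = rcu_dereference_protected(demo_obj->ptr, true);

	/* Stronger: lockdep verifies the caller actually holds demo_lock. */
	p = rcu_dereference_protected(demo_obj->ptr, lockdep_is_held(&demo_lock));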
From patchwork Wed Feb 17 00:13:17 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090755
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v8 PATCH 08/13] mm: vmscan: use a new flag to indicate shrinker is registered
Date: Tue, 16 Feb 2021 16:13:17 -0800
Message-Id: <20210217001322.2226796-9-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Currently a registered shrinker is indicated by a non-NULL
shrinker->nr_deferred. This approach is fine while nr_deferred lives at
the shrinker level, but the following patches will move the MEMCG_AWARE
shrinkers' nr_deferred to the memcg level, so their shrinker->nr_deferred
would always be NULL, which would prevent those shrinkers from
unregistering correctly.

Use a new SHRINKER_REGISTERED flag to indicate a registered shrinker
instead, and remove SHRINKER_REGISTERING since whether a shrinker was
registered successfully can now be checked via the new flag.

Acked-by: Kirill Tkhai
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
---
 include/linux/shrinker.h |  7 ++++---
 mm/vmscan.c              | 40 +++++++++++++++-------------------
 2 files changed, 19 insertions(+), 28 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123650e2..1eac79ce57d4 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -79,13 +79,14 @@ struct shrinker {
 #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
 
 /* Flags */
-#define SHRINKER_NUMA_AWARE	(1 << 0)
-#define SHRINKER_MEMCG_AWARE	(1 << 1)
+#define SHRINKER_REGISTERED	(1 << 0)
+#define SHRINKER_NUMA_AWARE	(1 << 1)
+#define SHRINKER_MEMCG_AWARE	(1 << 2)
 /*
  * It just makes sense when the shrinker is also MEMCG_AWARE for now,
  * non-MEMCG_AWARE shrinker should not have this flag set.
  */
-#define SHRINKER_NONSLAB	(1 << 2)
+#define SHRINKER_NONSLAB	(1 << 3)
 
 extern int prealloc_shrinker(struct shrinker *shrinker);
 extern void register_shrinker_prepared(struct shrinker *shrinker);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fe6e25f46b55..a1047ea60ecf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -314,19 +314,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 	}
 }
 
-/*
- * We allow subsystems to populate their shrinker-related
- * LRU lists before register_shrinker_prepared() is called
- * for the shrinker, since we don't want to impose
- * restrictions on their internal registration order.
- * In this case shrink_slab_memcg() may find corresponding
- * bit is set in the shrinkers map.
- *
- * This value is used by the function to detect registering
- * shrinkers and to skip do_shrink_slab() calls for them.
- */
-#define SHRINKER_REGISTERING ((struct shrinker *)~0UL)
-
 static DEFINE_IDR(shrinker_idr);
 
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
@@ -335,7 +322,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 
 	down_write(&shrinker_rwsem);
 	/* This may call shrinker, so it must use down_read_trylock() */
-	id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);
+	id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
 	if (id < 0)
 		goto unlock;
 
@@ -358,9 +345,9 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 
 	BUG_ON(id < 0);
 
-	down_write(&shrinker_rwsem);
+	lockdep_assert_held(&shrinker_rwsem);
+
 	idr_remove(&shrinker_idr, id);
-	up_write(&shrinker_rwsem);
 }
 
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
@@ -487,8 +474,11 @@ void free_prealloced_shrinker(struct shrinker *shrinker)
 	if (!shrinker->nr_deferred)
 		return;
 
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
+		down_write(&shrinker_rwsem);
 		unregister_memcg_shrinker(shrinker);
+		up_write(&shrinker_rwsem);
+	}
 
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
@@ -498,10 +488,7 @@ void register_shrinker_prepared(struct shrinker *shrinker)
 {
 	down_write(&shrinker_rwsem);
 	list_add_tail(&shrinker->list, &shrinker_list);
-#ifdef CONFIG_MEMCG
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
-		idr_replace(&shrinker_idr, shrinker, shrinker->id);
-#endif
+	shrinker->flags |= SHRINKER_REGISTERED;
 	up_write(&shrinker_rwsem);
 }
 
@@ -521,13 +508,16 @@
  */
 void unregister_shrinker(struct shrinker *shrinker)
 {
-	if (!shrinker->nr_deferred)
+	if (!(shrinker->flags & SHRINKER_REGISTERED))
 		return;
 
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
-		unregister_memcg_shrinker(shrinker);
+	down_write(&shrinker_rwsem);
 	list_del(&shrinker->list);
+	shrinker->flags &= ~SHRINKER_REGISTERED;
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+		unregister_memcg_shrinker(shrinker);
 	up_write(&shrinker_rwsem);
+
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
 }
@@ -692,7 +682,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 		struct shrinker *shrinker;
 
 		shrinker = idr_find(&shrinker_idr, i);
-		if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) {
+		if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
 			if (!shrinker)
 				clear_bit(i, info->map);
 			continue;
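
[To make the new protocol concrete, here is a reader-side sketch (not
additional code from the patch) of the three states a shrinker_idr slot
can now be in, as seen by shrink_slab_memcg():]

/* Illustrative sketch of the SHRINKER_REGISTERED protocol. */
shrinker = idr_find(&shrinker_idr, i);
if (!shrinker) {
	/* Slot freed: the shrinker is gone, so clear the stale map bit. */
	clear_bit(i, info->map);
} else if (!(shrinker->flags & SHRINKER_REGISTERED)) {
	/*
	 * Preallocated but not yet registered: skip it, but keep the
	 * bit so objects queued before registration still get scanned
	 * once registration completes. The magic SHRINKER_REGISTERING
	 * pointer used to encode this state; the flag now does.
	 */
} else {
	/* Fully registered: safe to call do_shrink_slab(). */
}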
From patchwork Wed Feb 17 00:13:18 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090757
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v8 PATCH 09/13] mm: vmscan: add per memcg shrinker nr_deferred
Date: Tue, 16 Feb 2021 16:13:18 -0800
Message-Id: <20210217001322.2226796-10-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Currently the number of deferred objects is tracked per shrinker, but
some slabs, for example the vfs inode/dentry caches, are per memcg, so
this results in poor isolation among memcgs. The deferred objects are
typically generated by __GFP_NOFS allocations: one memcg with excessive
__GFP_NOFS allocations may blow up the deferred count, and other innocent
memcgs then suffer from over-shrinking, excessive reclaim latency, etc.

For example, say two workloads run in memcgA and memcgB respectively, and
the workload in B is vfs heavy. If the workload in A generates excessive
deferred objects, B's vfs cache might be hit heavily (half of the caches
dropped) by B's limit reclaim or by global reclaim.

We observed this hit in our production environment, which was running a
vfs heavy workload, shown in the tracing log below:

<...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start:
  super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1
  objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO
  pgs_scanned 1 lru_pgs 15721 cache items 246404277 delta 31345
  total_scan 123202138

<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end:
  super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1
  unused scan count 3641681686040 new scan count 3641798379189
  total_scan 602 last shrinker return val 123186855

The vfs cache to page cache ratio was 10:1 on this machine, and half of
the caches were dropped. This also caused a significant amount of page
cache to be dropped due to inode eviction.

Making nr_deferred per memcg for memcg aware shrinkers solves the
unfairness and brings better isolation. When memcg is not enabled
(!CONFIG_MEMCG or memcg disabled), the shrinker's own nr_deferred is
used, and non memcg aware shrinkers use the shrinker's nr_deferred all
the time.

Signed-off-by: Yang Shi
Acked-by: Roman Gushchin
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
---
 include/linux/memcontrol.h |  7 +++--
 mm/vmscan.c                | 60 ++++++++++++++++++++++++++------------
 2 files changed, 46 insertions(+), 21 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4c9253896e25..c457fc7bc631 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -93,12 +93,13 @@ struct lruvec_stat {
 };
 
 /*
- * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
- * which have elements charged to this memcg.
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to this memcg.
  */
 struct shrinker_info {
 	struct rcu_head rcu;
-	unsigned long map[];
+	atomic_long_t *nr_deferred;
+	unsigned long *map;
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a1047ea60ecf..fcb399e18fc3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -187,11 +187,17 @@ static DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
 
+/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
 static inline int shrinker_map_size(int nr_items)
 {
 	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
 }
 
+static inline int shrinker_defer_size(int nr_items)
+{
+	return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+}
+
 static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
 						     int nid)
 {
@@ -200,10 +206,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
 }
 
 static int expand_one_shrinker_info(struct mem_cgroup *memcg,
-				    int size, int old_size)
+				    int map_size, int defer_size,
+				    int old_map_size, int old_defer_size)
 {
 	struct shrinker_info *new, *old;
 	int nid;
+	int size = map_size + defer_size;
 
 	for_each_node(nid) {
 		old = shrinker_info_protected(memcg, nid);
@@ -215,9 +223,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 		if (!new)
 			return -ENOMEM;
 
-		/* Set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_size);
-		memset((void *)new->map + old_size, 0, size - old_size);
+		new->nr_deferred = (atomic_long_t *)(new + 1);
+		new->map = (void *)new->nr_deferred + defer_size;
+
+		/* map: set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_map_size);
+		memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
+		/* nr_deferred: copy old values, clear all new values */
+		memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
+		memset((void *)new->nr_deferred + old_defer_size, 0,
+		       defer_size - old_defer_size);
 
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
 		kvfree_rcu(old);
@@ -232,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 	struct shrinker_info *info;
 	int nid;
 
-	if (mem_cgroup_is_root(memcg))
-		return;
-
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
 		info = shrinker_info_protected(memcg, nid);
@@ -247,12 +259,12 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
 	int nid, size, ret = 0;
-
-	if (mem_cgroup_is_root(memcg))
-		return 0;
+	int map_size, defer_size = 0;
 
 	down_write(&shrinker_rwsem);
-	size = shrinker_map_size(shrinker_nr_max);
+	map_size = shrinker_map_size(shrinker_nr_max);
+	defer_size = shrinker_defer_size(shrinker_nr_max);
+	size = map_size + defer_size;
 	for_each_node(nid) {
 		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
 		if (!info) {
@@ -260,6 +272,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 			ret = -ENOMEM;
 			break;
 		}
+		info->nr_deferred = (atomic_long_t *)(info + 1);
+		info->map = (void *)info->nr_deferred + defer_size;
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_write(&shrinker_rwsem);
@@ -267,15 +281,21 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 	return ret;
 }
 
+static inline bool need_expand(int nr_max)
+{
+	return round_up(nr_max, BITS_PER_LONG) >
+	       round_up(shrinker_nr_max, BITS_PER_LONG);
+}
+
 static int expand_shrinker_info(int new_id)
 {
-	int size, old_size, ret = 0;
+	int ret = 0;
 	int new_nr_max = new_id + 1;
+	int map_size, defer_size = 0;
+	int old_map_size, old_defer_size = 0;
 	struct mem_cgroup *memcg;
 
-	size = shrinker_map_size(new_nr_max);
-	old_size = shrinker_map_size(shrinker_nr_max);
-	if (size <= old_size)
+	if (!need_expand(new_nr_max))
 		goto out;
 
 	if (!root_mem_cgroup)
@@ -283,11 +303,15 @@ static int expand_shrinker_info(int new_id)
 
 	lockdep_assert_held(&shrinker_rwsem);
 
+	map_size = shrinker_map_size(new_nr_max);
+	defer_size = shrinker_defer_size(new_nr_max);
+	old_map_size = shrinker_map_size(shrinker_nr_max);
+	old_defer_size = shrinker_defer_size(shrinker_nr_max);
+
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		if (mem_cgroup_is_root(memcg))
-			continue;
-		ret = expand_one_shrinker_info(memcg, size, old_size);
+		ret = expand_one_shrinker_info(memcg, map_size, defer_size,
+					       old_map_size, old_defer_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
 			goto out;
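
[The key trick above is that the bitmap and the deferred counters share
one allocation per node. A compressed sketch of the resulting layout; the
diagram is illustrative, the code lines mirror the patch:]

/*
 * One kvzalloc_node() per node lays the struct, the deferred counters
 * and the bitmap out back to back:
 *
 *   info              -> struct shrinker_info (rcu, nr_deferred, map)
 *   info + 1          -> atomic_long_t nr_deferred[]  (defer_size bytes)
 *   ...  + defer_size -> unsigned long map[]          (map_size bytes)
 */
info = kvzalloc_node(sizeof(*info) + map_size + defer_size, GFP_KERNEL, nid);
info->nr_deferred = (atomic_long_t *)(info + 1);
info->map = (void *)info->nr_deferred + defer_size;

[Because both arrays are rounded up to BITS_PER_LONG entries, an
expansion is only needed when the rounded sizes actually grow, which is
exactly what need_expand() checks.]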
From patchwork Wed Feb 17 00:13:19 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090759
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v8 PATCH 10/13] mm: vmscan: use per memcg nr_deferred of shrinker
Date: Tue, 16 Feb 2021 16:13:19 -0800
Message-Id: <20210217001322.2226796-11-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Use the per-memcg nr_deferred for memcg aware shrinkers. The shrinker's
own nr_deferred is still used in the following cases:
    1. Non memcg aware shrinkers
    2. !CONFIG_MEMCG
    3. memcg is disabled by boot parameter

Signed-off-by: Yang Shi
Acked-by: Roman Gushchin
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 78 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 66 insertions(+), 12 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index fcb399e18fc3..57cbc6bc8a49 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -374,6 +374,24 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 	idr_remove(&shrinker_idr, id);
 }
 
+static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+				   struct mem_cgroup *memcg)
+{
+	struct shrinker_info *info;
+
+	info = shrinker_info_protected(memcg, nid);
+	return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
+}
+
+static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+				  struct mem_cgroup *memcg)
+{
+	struct shrinker_info *info;
+
+	info = shrinker_info_protected(memcg, nid);
+	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
@@ -412,6 +430,18 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 {
 }
 
+static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+				   struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
+static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+				  struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return false;
@@ -423,6 +453,39 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+static long xchg_nr_deferred(struct shrinker *shrinker,
+			     struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (sc->memcg &&
+	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
+		return xchg_nr_deferred_memcg(nid, shrinker,
+					      sc->memcg);
+
+	return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+}
+
+
+static long add_nr_deferred(long nr, struct shrinker *shrinker,
+			    struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (sc->memcg &&
+	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
+		return add_nr_deferred_memcg(nr, nid, shrinker,
+					     sc->memcg);
+
+	return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -558,14 +621,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	long freeable;
 	long nr;
 	long new_nr;
-	int nid = shrinkctl->nid;
 	long batch_size = shrinker->batch ? shrinker->batch
 					  : SHRINK_BATCH;
 	long scanned = 0, next_deferred;
 
-	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
-		nid = 0;
-
 	freeable = shrinker->count_objects(shrinker, shrinkctl);
 	if (freeable == 0 || freeable == SHRINK_EMPTY)
 		return freeable;
@@ -575,7 +634,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * and zero it so that other concurrent shrinker invocations
 	 * don't also do this scanning work.
 	 */
-	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+	nr = xchg_nr_deferred(shrinker, shrinkctl);
 
 	total_scan = nr;
 	if (shrinker->seeks) {
@@ -666,14 +725,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		next_deferred = 0;
 	/*
 	 * move the unused scan count back into the shrinker in a
-	 * manner that handles concurrent updates. If we exhausted the
-	 * scan, there is no need to do an update.
+	 * manner that handles concurrent updates.
 	 */
-	if (next_deferred > 0)
-		new_nr = atomic_long_add_return(next_deferred,
-						&shrinker->nr_deferred[nid]);
-	else
-		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
+	new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
 
 	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 	return freed;
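
[For clarity, the routing that the two wrappers above implement can be
summarized as follows; this is a sketch, not additional code from the
patch:]

/*
 * Which nr_deferred does do_shrink_slab() use after this patch?
 *
 *   sc->memcg set && SHRINKER_MEMCG_AWARE
 *       -> per-memcg:   info->nr_deferred[shrinker->id] on node sc->nid
 *   otherwise (no memcg in the shrink_control, !CONFIG_MEMCG, memcg
 *   disabled, or a non memcg aware shrinker)
 *       -> per-shrinker: shrinker->nr_deferred[nid]
 *
 * In both cases nid is forced to 0 for !SHRINKER_NUMA_AWARE shrinkers.
 */
nr = xchg_nr_deferred(shrinker, shrinkctl);	/* claim the backlog */
/* ... scan up to total_scan objects ... */
new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl); /* give back */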
From patchwork Wed Feb 17 00:13:20 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090761
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v8 PATCH 11/13] mm: vmscan: don't need allocate shrinker->nr_deferred for memcg aware shrinkers
Date: Tue, 16 Feb 2021 16:13:20 -0800
Message-Id: <20210217001322.2226796-12-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Now that nr_deferred is available at the per-memcg level for memcg aware
shrinkers, there is no need to allocate shrinker->nr_deferred for such
shrinkers anymore.

prealloc_memcg_shrinker() returns -ENOSYS if !CONFIG_MEMCG or memcg is
disabled on the kernel command line, and the shrinker's
SHRINKER_MEMCG_AWARE flag is then cleared. This keeps the implementation
of this patch simple.

Acked-by: Vlastimil Babka
Reviewed-by: Kirill Tkhai
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 57cbc6bc8a49..d8800e4da67d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -344,6 +344,9 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
 	int id, ret = -ENOMEM;
 
+	if (mem_cgroup_disabled())
+		return -ENOSYS;
+
 	down_write(&shrinker_rwsem);
 	/* This may call shrinker, so it must use down_read_trylock() */
 	id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
@@ -423,7 +426,7 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 #else
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
-	return 0;
+	return -ENOSYS;
 }
 
 static void unregister_memcg_shrinker(struct shrinker *shrinker)
@@ -534,8 +537,18 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone
  */
 int prealloc_shrinker(struct shrinker *shrinker)
 {
-	unsigned int size = sizeof(*shrinker->nr_deferred);
+	unsigned int size;
+	int err;
+
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
+		err = prealloc_memcg_shrinker(shrinker);
+		if (err != -ENOSYS)
+			return err;
 
+		shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
+	}
+
+	size = sizeof(*shrinker->nr_deferred);
 	if (shrinker->flags & SHRINKER_NUMA_AWARE)
 		size *= nr_node_ids;
 
@@ -543,28 +556,16 @@ int prealloc_shrinker(struct shrinker *shrinker)
 	if (!shrinker->nr_deferred)
 		return -ENOMEM;
 
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
-		if (prealloc_memcg_shrinker(shrinker))
-			goto free_deferred;
-	}
-
 	return 0;
-
-free_deferred:
-	kfree(shrinker->nr_deferred);
-	shrinker->nr_deferred = NULL;
-	return -ENOMEM;
 }
 
 void free_prealloced_shrinker(struct shrinker *shrinker)
 {
-	if (!shrinker->nr_deferred)
-		return;
-
 	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
 		down_write(&shrinker_rwsem);
 		unregister_memcg_shrinker(shrinker);
 		up_write(&shrinker_rwsem);
+		return;
 	}
 
 	kfree(shrinker->nr_deferred);
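
[From a shrinker author's point of view nothing changes. A hypothetical
registration is sketched below; example_count/example_scan and the struct
are invented for illustration, only register_shrinker() and the flags are
real:]

/* Hypothetical example shrinker registration. */
static struct shrinker example_shrinker = {
	.count_objects	= example_count,
	.scan_objects	= example_scan,
	.seeks		= DEFAULT_SEEKS,
	.flags		= SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE,
};

/*
 * With memcg disabled, prealloc_memcg_shrinker() returns -ENOSYS, so
 * prealloc_shrinker() clears SHRINKER_MEMCG_AWARE and falls back to the
 * plain per-node nr_deferred array; the caller never notices which path
 * was taken.
 */
ret = register_shrinker(&example_shrinker);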
From patchwork Wed Feb 17 00:13:21 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090763
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v8 PATCH 12/13] mm: memcontrol: reparent nr_deferred when memcg offline
Date: Tue, 16 Feb 2021 16:13:21 -0800
Message-Id: <20210217001322.2226796-13-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Now that the shrinker's nr_deferred is per memcg for memcg aware
shrinkers, add the child's counts to the parent's corresponding
nr_deferred when the memcg goes offline.

Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 include/linux/memcontrol.h |  1 +
 mm/memcontrol.c            |  1 +
 mm/vmscan.c                | 24 ++++++++++++++++++++++++
 3 files changed, 26 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c457fc7bc631..e1c4b93889ad 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1585,6 +1585,7 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 int alloc_shrinker_info(struct mem_cgroup *memcg);
 void free_shrinker_info(struct mem_cgroup *memcg);
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
+void reparent_shrinker_deferred(struct mem_cgroup *memcg);
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f64ad0d044d9..21f36b73f36a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5282,6 +5282,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_low(&memcg->memory, 0);
 
 	memcg_offline_kmem(memcg);
+	reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
 	drain_all_stock(memcg);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d8800e4da67d..4247a3568585 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -395,6 +395,30 @@ static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
 	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
 }
 
+void reparent_shrinker_deferred(struct mem_cgroup *memcg)
+{
+	int i, nid;
+	long nr;
+	struct mem_cgroup *parent;
+	struct shrinker_info *child_info, *parent_info;
+
+	parent = parent_mem_cgroup(memcg);
+	if (!parent)
+		parent = root_mem_cgroup;
+
+	/* Prevent from concurrent shrinker_info expand */
+	down_read(&shrinker_rwsem);
+	for_each_node(nid) {
+		child_info = shrinker_info_protected(memcg, nid);
+		parent_info = shrinker_info_protected(parent, nid);
+		for (i = 0; i < shrinker_nr_max; i++) {
+			nr = atomic_long_read(&child_info->nr_deferred[i]);
+			atomic_long_add(nr, &parent_info->nr_deferred[i]);
+		}
+	}
+	up_read(&shrinker_rwsem);
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
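
[The effect of the reparenting is purely additive. With hypothetical
numbers, for some shrinker id i on one node:]

/*
 * Hypothetical numbers, shrinker id i, one node:
 *
 *   child_info->nr_deferred[i]  = 128
 *   parent_info->nr_deferred[i] = 512    before offline
 *   parent_info->nr_deferred[i] = 640    after reparent_shrinker_deferred()
 *
 * so deferred work queued against a dying memcg is inherited by the
 * parent rather than leaked when the child's shrinker_info is freed.
 */
nr = atomic_long_read(&child_info->nr_deferred[i]);
atomic_long_add(nr, &parent_info->nr_deferred[i]);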
From patchwork Wed Feb 17 00:13:22 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090765
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v8 PATCH 13/13] mm: vmscan: shrink deferred objects proportional to priority
Date: Tue, 16 Feb 2021 16:13:22 -0800
Message-Id: <20210217001322.2226796-14-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd value, which
results in the slab caches getting clamped; this is undesirable for
sustaining the working set. So shrink the deferred objects in proportion
to the reclaim priority, and cap nr_deferred at twice the number of cache
items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with a kernel build and a vfs metadata heavy workload in our
production environment; no regression has been spotted so far.

Signed-off-by: Yang Shi
---
 mm/vmscan.c | 46 +++++++++++-----------------------------------
 1 file changed, 11 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4247a3568585..b3bdc3ba8edc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -661,7 +661,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = xchg_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -675,37 +674,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
 				   freeable, delta, total_scan, priority);
@@ -744,10 +715,15 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	/*
+	 * The deferred work is increased by any new work (delta) that wasn't
+	 * done, decreased by old deferred work that was done now.
+	 *
+	 * And it is capped to two times of the freeable items.
+	 */
+	next_deferred = max_t(long, (nr + delta - scanned), 0);
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.
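
[A worked example of the new math above, with hypothetical numbers:]

/*
 * Hypothetical numbers: freeable = 10000, nr = 100000 (deferred),
 * priority = 4, seeks = DEFAULT_SEEKS = 2.
 *
 *   delta      = 4 * (freeable >> priority) / seeks
 *              = 4 * (10000 >> 4) / 2           = 1250
 *   total_scan = (nr >> priority) + delta
 *              = (100000 >> 4) + 1250           = 7500
 *   total_scan = min(total_scan, 2 * freeable)  = 7500
 *
 * Assuming all 7500 objects get scanned:
 *
 *   next_deferred = max(nr + delta - scanned, 0)     = 93750
 *   next_deferred = min(next_deferred, 2 * freeable) = 20000
 *
 * The old code would have clamped total_scan to freeable / 2 = 5000
 * (since delta < freeable / 4) while letting the deferred count keep
 * growing; now the backlog is both worked off in proportion to the
 * reclaim priority and hard-capped at twice the cache size.
 */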