From patchwork Thu Jan 21 23:06:11 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12037987
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 01/11] mm: vmscan: use nid from shrink_control for tracepoint
Date: Thu, 21 Jan 2021 15:06:11 -0800
Message-Id: <20210121230621.654304-2-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

The tracepoint's nid should show which node the shrink happens on. The
start tracepoint uses the nid from shrinkctl, but the local nid may be
reset to 0 before the end tracepoint fires if the shrinker is not NUMA
aware, so the tracing log may show a shrink starting on one node but
ending on another, which is confusing. The following patch will also
stop using nid directly in do_shrink_slab(), so this change helps clean
up the code.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b1b574ad199d..b512dd5e3a1c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -535,7 +535,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
         else
                 new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
 
-        trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan);
+        trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 
         return freed;
 }
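
For context: the mismatch comes from do_shrink_slab() clamping its local
copy of the node id for shrinkers that are not NUMA aware. Roughly, as an
abbreviated paraphrase of the surrounding mm/vmscan.c code of this era
(not the full function):

    int nid = shrinkctl->nid;

    /* Non-NUMA-aware shrinkers keep all of their work on node 0 */
    if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
            nid = 0;

    trace_mm_shrink_slab_start(shrinker, shrinkctl, ...);
    /* ... scan ... */
    /* before this patch, 'nid' here was the clamped local copy, so the
     * start and end events could disagree about the node: */
    trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan);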
From patchwork Thu Jan 21 23:06:12 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12037917
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 02/11] mm: vmscan: consolidate shrinker_maps handling code
Date: Thu, 21 Jan 2021 15:06:12 -0800
Message-Id: <20210121230621.654304-3-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

The shrinker map management is not purely memcg specific; it sits at the
intersection between memory cgroups and shrinkers. It is just the
allocation and assignment of a structure, and the only memcg-specific
part is that the map is stored in a memcg structure. So move the
shrinker_maps handling code into vmscan.c for tighter integration with
the shrinker code, and remove the "memcg_" prefix. There is no
functional change.
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/memcontrol.h |  12 ++--
 mm/huge_memory.c           |   4 +-
 mm/list_lru.c              |   6 +-
 mm/memcontrol.c            | 130 +------------------------------------
 mm/vmscan.c                | 130 ++++++++++++++++++++++++++++++++++++-
 5 files changed, 142 insertions(+), 140 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index eeb0b52203e9..0ee2924991fb 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1581,10 +1581,10 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
         return false;
 }
 
-extern int memcg_expand_shrinker_maps(int new_id);
-
-extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
-                                   int nid, int shrinker_id);
+extern int alloc_shrinker_maps(struct mem_cgroup *memcg);
+extern void free_shrinker_maps(struct mem_cgroup *memcg);
+extern void set_shrinker_bit(struct mem_cgroup *memcg,
+                             int nid, int shrinker_id);
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
@@ -1594,8 +1594,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
         return false;
 }
 
-static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
-                                          int nid, int shrinker_id)
+static inline void set_shrinker_bit(struct mem_cgroup *memcg,
+                                    int nid, int shrinker_id)
 {
 }
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9237976abe72..05190d7f32ae 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2823,8 +2823,8 @@ void deferred_split_huge_page(struct page *page)
                 ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG
                 if (memcg)
-                        memcg_set_shrinker_bit(memcg, page_to_nid(page),
-                                               deferred_split_shrinker.id);
+                        set_shrinker_bit(memcg, page_to_nid(page),
+                                         deferred_split_shrinker.id);
 #endif
         }
         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index fe230081690b..628030fa5f69 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -125,8 +125,8 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
                 list_add_tail(item, &l->list);
                 /* Set shrinker bit if the first element was added */
                 if (!l->nr_items++)
-                        memcg_set_shrinker_bit(memcg, nid,
-                                               lru_shrinker_id(lru));
+                        set_shrinker_bit(memcg, nid,
+                                         lru_shrinker_id(lru));
                 nlru->nr_items++;
                 spin_unlock(&nlru->lock);
                 return true;
@@ -548,7 +548,7 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 
         if (src->nr_items) {
                 dst->nr_items += src->nr_items;
-                memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
+                set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
                 src->nr_items = 0;
         }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 605f671203ef..76a557520a1a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -397,130 +397,6 @@ DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key);
 EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
-static int memcg_shrinker_map_size;
-static DEFINE_MUTEX(memcg_shrinker_map_mutex);
-
-static void memcg_free_shrinker_map_rcu(struct rcu_head *head)
-{
-        kvfree(container_of(head, struct memcg_shrinker_map, rcu));
-}
-
-static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
-                                         int size, int old_size)
-{
-        struct memcg_shrinker_map *new, *old;
-        int nid;
-
-        lockdep_assert_held(&memcg_shrinker_map_mutex);
-
-        for_each_node(nid) {
-                old = rcu_dereference_protected(
-                        mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
-                /* Not yet online memcg */
-                if (!old)
-                        return 0;
-
-                new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
-                if (!new)
-                        return -ENOMEM;
-
-                /* Set all old bits, clear all new bits */
-                memset(new->map, (int)0xff, old_size);
-                memset((void *)new->map + old_size, 0, size - old_size);
-
-                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
-                call_rcu(&old->rcu, memcg_free_shrinker_map_rcu);
-        }
-
-        return 0;
-}
-
-static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
-{
-        struct mem_cgroup_per_node *pn;
-        struct memcg_shrinker_map *map;
-        int nid;
-
-        if (mem_cgroup_is_root(memcg))
-                return;
-
-        for_each_node(nid) {
-                pn = mem_cgroup_nodeinfo(memcg, nid);
-                map = rcu_dereference_protected(pn->shrinker_map, true);
-                if (map)
-                        kvfree(map);
-                rcu_assign_pointer(pn->shrinker_map, NULL);
-        }
-}
-
-static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
-{
-        struct memcg_shrinker_map *map;
-        int nid, size, ret = 0;
-
-        if (mem_cgroup_is_root(memcg))
-                return 0;
-
-        mutex_lock(&memcg_shrinker_map_mutex);
-        size = memcg_shrinker_map_size;
-        for_each_node(nid) {
-                map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
-                if (!map) {
-                        memcg_free_shrinker_maps(memcg);
-                        ret = -ENOMEM;
-                        break;
-                }
-                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
-        }
-        mutex_unlock(&memcg_shrinker_map_mutex);
-
-        return ret;
-}
-
-int memcg_expand_shrinker_maps(int new_id)
-{
-        int size, old_size, ret = 0;
-        struct mem_cgroup *memcg;
-
-        size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
-        old_size = memcg_shrinker_map_size;
-        if (size <= old_size)
-                return 0;
-
-        mutex_lock(&memcg_shrinker_map_mutex);
-        if (!root_mem_cgroup)
-                goto unlock;
-
-        for_each_mem_cgroup(memcg) {
-                if (mem_cgroup_is_root(memcg))
-                        continue;
-                ret = memcg_expand_one_shrinker_map(memcg, size, old_size);
-                if (ret) {
-                        mem_cgroup_iter_break(NULL, memcg);
-                        goto unlock;
-                }
-        }
-unlock:
-        if (!ret)
-                memcg_shrinker_map_size = size;
-        mutex_unlock(&memcg_shrinker_map_mutex);
-        return ret;
-}
-
-void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
-{
-        if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
-                struct memcg_shrinker_map *map;
-
-                rcu_read_lock();
-                map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
-                /* Pairs with smp mb in shrink_slab() */
-                smp_mb__before_atomic();
-                set_bit(shrinker_id, map->map);
-                rcu_read_unlock();
-        }
-}
-
 /**
  * mem_cgroup_css_from_page - css of the memcg associated with a page
  * @page: page of interest
@@ -5372,11 +5248,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
         struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
         /*
-         * A memcg must be visible for memcg_expand_shrinker_maps()
+         * A memcg must be visible for expand_shrinker_maps()
          * by the time the maps are allocated. So, we allocate maps
         * here, when for_each_mem_cgroup() can't skip it.
          */
-        if (memcg_alloc_shrinker_maps(memcg)) {
+        if (alloc_shrinker_maps(memcg)) {
                 mem_cgroup_id_remove(memcg);
                 return -ENOMEM;
         }
@@ -5440,7 +5316,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
         vmpressure_cleanup(&memcg->vmpressure);
         cancel_work_sync(&memcg->high_work);
         mem_cgroup_remove_from_trees(memcg);
-        memcg_free_shrinker_maps(memcg);
+        free_shrinker_maps(memcg);
         memcg_free_kmem(memcg);
         mem_cgroup_free(memcg);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b512dd5e3a1c..d950cead66ca 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -185,6 +185,132 @@ static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
 
 #ifdef CONFIG_MEMCG
+
+static int memcg_shrinker_map_size;
+static DEFINE_MUTEX(memcg_shrinker_map_mutex);
+
+static void free_shrinker_map_rcu(struct rcu_head *head)
+{
+        kvfree(container_of(head, struct memcg_shrinker_map, rcu));
+}
+
+static int expand_one_shrinker_map(struct mem_cgroup *memcg,
+                                   int size, int old_size)
+{
+        struct memcg_shrinker_map *new, *old;
+        int nid;
+
+        lockdep_assert_held(&memcg_shrinker_map_mutex);
+
+        for_each_node(nid) {
+                old = rcu_dereference_protected(
+                        mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
+                /* Not yet online memcg */
+                if (!old)
+                        return 0;
+
+                new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+                if (!new)
+                        return -ENOMEM;
+
+                /* Set all old bits, clear all new bits */
+                memset(new->map, (int)0xff, old_size);
+                memset((void *)new->map + old_size, 0, size - old_size);
+
+                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
+                call_rcu(&old->rcu, free_shrinker_map_rcu);
+        }
+
+        return 0;
+}
+
+void free_shrinker_maps(struct mem_cgroup *memcg)
+{
+        struct mem_cgroup_per_node *pn;
+        struct memcg_shrinker_map *map;
+        int nid;
+
+        if (mem_cgroup_is_root(memcg))
+                return;
+
+        for_each_node(nid) {
+                pn = mem_cgroup_nodeinfo(memcg, nid);
+                map = rcu_dereference_protected(pn->shrinker_map, true);
+                if (map)
+                        kvfree(map);
+                rcu_assign_pointer(pn->shrinker_map, NULL);
+        }
+}
+
+int alloc_shrinker_maps(struct mem_cgroup *memcg)
+{
+        struct memcg_shrinker_map *map;
+        int nid, size, ret = 0;
+
+        if (mem_cgroup_is_root(memcg))
+                return 0;
+
+        mutex_lock(&memcg_shrinker_map_mutex);
+        size = memcg_shrinker_map_size;
+        for_each_node(nid) {
+                map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
+                if (!map) {
+                        free_shrinker_maps(memcg);
+                        ret = -ENOMEM;
+                        break;
+                }
+                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
+        }
+        mutex_unlock(&memcg_shrinker_map_mutex);
+
+        return ret;
+}
+
+static int expand_shrinker_maps(int new_id)
+{
+        int size, old_size, ret = 0;
+        struct mem_cgroup *memcg;
+
+        size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
+        old_size = memcg_shrinker_map_size;
+        if (size <= old_size)
+                return 0;
+
+        mutex_lock(&memcg_shrinker_map_mutex);
+        if (!root_mem_cgroup)
+                goto unlock;
+
+        memcg = mem_cgroup_iter(NULL, NULL, NULL);
+        do {
+                if (mem_cgroup_is_root(memcg))
+                        continue;
+                ret = expand_one_shrinker_map(memcg, size, old_size);
+                if (ret) {
+                        mem_cgroup_iter_break(NULL, memcg);
+                        goto unlock;
+                }
+        } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
+unlock:
+        if (!ret)
+                memcg_shrinker_map_size = size;
+        mutex_unlock(&memcg_shrinker_map_mutex);
+        return ret;
+}
+
+void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
+{
+        if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
+                struct memcg_shrinker_map *map;
+
+                rcu_read_lock();
+                map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+                /* Pairs with smp mb in shrink_slab() */
+                smp_mb__before_atomic();
+                set_bit(shrinker_id, map->map);
+                rcu_read_unlock();
+        }
+}
+
 /*
  * We allow subsystems to populate their shrinker-related
  * LRU lists before register_shrinker_prepared() is called
@@ -212,7 +338,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
                 goto unlock;
 
         if (id >= shrinker_nr_max) {
-                if (memcg_expand_shrinker_maps(id)) {
+                if (expand_shrinker_maps(id)) {
                         idr_remove(&shrinker_idr, id);
                         goto unlock;
                 }
@@ -601,7 +727,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
                         if (ret == SHRINK_EMPTY)
                                 ret = 0;
                         else
-                                memcg_set_shrinker_bit(memcg, nid, i);
+                                set_shrinker_bit(memcg, nid, i);
                 }
                 freed += ret;
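
A quick aside on the grow policy that this move preserves:
expand_one_shrinker_map() memsets the old bytes to 0xff instead of
copying the old map, presumably so that a bit set concurrently under
rcu_read_lock() by set_shrinker_bit() cannot be lost across the resize;
a spuriously set bit only costs an extra do_shrink_slab() call. A
minimal userspace sketch of that policy (illustrative names, not kernel
API):

    #include <stdlib.h>
    #include <string.h>

    /* Grow a shrinker bitmap from old_size to new_size bytes.
     * Old bytes become 0xff ("may have objects"), new bytes 0. */
    static unsigned long *grow_map(unsigned long *old, size_t old_size,
                                   size_t new_size)
    {
            unsigned long *newmap = malloc(new_size);

            if (!newmap)
                    return NULL;
            memset(newmap, 0xff, old_size);
            memset((char *)newmap + old_size, 0, new_size - old_size);
            free(old);      /* the kernel frees the old map via call_rcu() */
            return newmap;
    }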
From patchwork Thu Jan 21 23:06:13 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12037967
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 03/11] mm: vmscan: use shrinker_rwsem to protect shrinker_maps allocation
Date: Thu, 21 Jan 2021 15:06:13 -0800
Message-Id: <20210121230621.654304-4-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

Since memcg_shrinker_map_size can only be changed while holding
shrinker_rwsem exclusively, the read side can be protected by holding a
read lock, so a dedicated mutex is superfluous. Kirill Tkhai suggested
using the write lock because:

* We want the assignment to shrinker_maps to be visible to
  shrink_slab_memcg().
* shrink_slab_memcg() dereferences the map with
  rcu_dereference_protected(); if alloc_shrinker_maps() only took the
  READ lock, that dereference would not actually be protected.
* The READ lock would make alloc_shrinker_info() racy against memory
  allocation failure: alloc_shrinker_info()->free_shrinker_info() may
  free the memory right after shrink_slab_memcg() dereferenced it. You
  may say shrink_slab_memcg()->mem_cgroup_online() protects us from it?
  Yes, sure, but this is not the kind of thing we want to have to
  remember in the future, since it erodes modularity.

And a test with a heavy paging workload did not show that the write
lock makes things worse.
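
To make the invariant explicit: after this change the map pointer is
published under the write side of shrinker_rwsem, so the protected
dereference in shrink_slab_memcg() (which holds the read side) is
justified by the same lock. A hypothetical lockdep-annotated form of the
two sides (a sketch only; the patch itself passes `true` as the
condition):

    /* Writer: publish a new map with shrinker_rwsem held for write. */
    down_write(&shrinker_rwsem);
    rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
    up_write(&shrinker_rwsem);

    /* Reader: shrink_slab_memcg() runs with shrinker_rwsem held for
     * read, which is what would let lockdep verify the dereference: */
    map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
                                    lockdep_is_held(&shrinker_rwsem));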
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d950cead66ca..d3f3701dfcd2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -187,7 +187,6 @@ static DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 
 static int memcg_shrinker_map_size;
-static DEFINE_MUTEX(memcg_shrinker_map_mutex);
 
 static void free_shrinker_map_rcu(struct rcu_head *head)
 {
@@ -200,8 +199,6 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg,
         struct memcg_shrinker_map *new, *old;
         int nid;
 
-        lockdep_assert_held(&memcg_shrinker_map_mutex);
-
         for_each_node(nid) {
                 old = rcu_dereference_protected(
                         mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
@@ -250,7 +247,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
         if (mem_cgroup_is_root(memcg))
                 return 0;
 
-        mutex_lock(&memcg_shrinker_map_mutex);
+        down_write(&shrinker_rwsem);
         size = memcg_shrinker_map_size;
         for_each_node(nid) {
                 map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
@@ -261,7 +258,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
                 }
                 rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
         }
-        mutex_unlock(&memcg_shrinker_map_mutex);
+        up_write(&shrinker_rwsem);
 
         return ret;
 }
@@ -276,9 +273,8 @@ static int expand_shrinker_maps(int new_id)
         if (size <= old_size)
                 return 0;
 
-        mutex_lock(&memcg_shrinker_map_mutex);
         if (!root_mem_cgroup)
-                goto unlock;
+                goto out;
 
         memcg = mem_cgroup_iter(NULL, NULL, NULL);
         do {
@@ -287,13 +283,13 @@ static int expand_shrinker_maps(int new_id)
                 ret = expand_one_shrinker_map(memcg, size, old_size);
                 if (ret) {
                         mem_cgroup_iter_break(NULL, memcg);
-                        goto unlock;
+                        goto out;
                 }
         } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
-unlock:
+out:
         if (!ret)
                 memcg_shrinker_map_size = size;
-        mutex_unlock(&memcg_shrinker_map_mutex);
+
         return ret;
 }
From patchwork Thu Jan 21 23:06:14 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12037963
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 04/11] mm: vmscan: remove memcg_shrinker_map_size
Date: Thu, 21 Jan 2021 15:06:14 -0800
Message-Id: <20210121230621.654304-5-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

Both memcg_shrinker_map_size and shrinker_nr_max are maintained, but the
map size can be calculated from shrinker_nr_max, so keeping both seems
unnecessary. Remove memcg_shrinker_map_size, since shrinker_nr_max is
also needed for iterating the bitmap.
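
The size math this relies on, as a small self-contained sketch (it
mirrors the expression the patch uses below; note that expression rounds
up to one extra long when nr_max is a multiple of BITS_PER_LONG):

    #include <stdio.h>

    #define BITS_PER_LONG (8 * (int)sizeof(long))

    /* Bytes needed for a bitmap covering shrinker IDs 0..nr_max, as
     * computed in alloc_shrinker_maps()/expand_shrinker_maps(). */
    static int shrinker_map_size(int nr_max)
    {
            return (nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
    }

    int main(void)
    {
            /* On a 64-bit build: 10 -> 8, 64 -> 16, 200 -> 32 bytes. */
            printf("%d %d %d\n", shrinker_map_size(10),
                   shrinker_map_size(64), shrinker_map_size(200));
            return 0;
    }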
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d3f3701dfcd2..40e7751ef961 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -185,8 +185,7 @@ static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
 
 #ifdef CONFIG_MEMCG
-
-static int memcg_shrinker_map_size;
+static int shrinker_nr_max;
 
 static void free_shrinker_map_rcu(struct rcu_head *head)
 {
@@ -248,7 +247,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
                 return 0;
 
         down_write(&shrinker_rwsem);
-        size = memcg_shrinker_map_size;
+        size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
         for_each_node(nid) {
                 map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
                 if (!map) {
@@ -266,10 +265,11 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
 static int expand_shrinker_maps(int new_id)
 {
         int size, old_size, ret = 0;
+        int new_nr_max = new_id + 1;
         struct mem_cgroup *memcg;
 
-        size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
-        old_size = memcg_shrinker_map_size;
+        size = (new_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
+        old_size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
         if (size <= old_size)
                 return 0;
 
@@ -286,9 +286,10 @@ static int expand_shrinker_maps(int new_id)
                         goto out;
                 }
         } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
+
 out:
         if (!ret)
-                memcg_shrinker_map_size = size;
+                shrinker_nr_max = new_nr_max;
 
         return ret;
 }
@@ -321,7 +322,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 #define SHRINKER_REGISTERING ((struct shrinker *)~0UL)
 
 static DEFINE_IDR(shrinker_idr);
-static int shrinker_nr_max;
 
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
@@ -338,8 +338,6 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
                         idr_remove(&shrinker_idr, id);
                         goto unlock;
                 }
-
-                shrinker_nr_max = id + 1;
         }
         shrinker->id = id;
         ret = 0;
From patchwork Thu Jan 21 23:06:15 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12037965
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 05/11] mm: memcontrol: rename shrinker_map to shrinker_info
Date: Thu, 21 Jan 2021 15:06:15 -0800
Message-Id: <20210121230621.654304-6-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

The following patch is going to add nr_deferred into shrinker_map; after
that change the structure will no longer hold only a map, so rename it
to the more general shrinker_info. This should also make the patch that
adds nr_deferred cleaner and easier to review. Rename the "shrinker_map"
members and helpers to "shrinker_info" as well.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/memcontrol.h |  8 ++---
 mm/memcontrol.c            |  6 ++--
 mm/vmscan.c                | 64 +++++++++++++++++++-------------------
 3 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0ee2924991fb..62b888b88a5f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -96,7 +96,7 @@ struct lruvec_stat {
  * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
  * which have elements charged to this memcg.
  */
-struct memcg_shrinker_map {
+struct shrinker_info {
         struct rcu_head rcu;
         unsigned long map[];
 };
@@ -118,7 +118,7 @@ struct mem_cgroup_per_node {
 
         struct mem_cgroup_reclaim_iter  iter;
 
-        struct memcg_shrinker_map __rcu *shrinker_map;
+        struct shrinker_info __rcu      *shrinker_info;
 
         struct rb_node          tree_node;      /* RB tree node */
         unsigned long           usage_in_excess;/* Set to the value by which */
@@ -1581,8 +1581,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
         return false;
 }
 
-extern int alloc_shrinker_maps(struct mem_cgroup *memcg);
-extern void free_shrinker_maps(struct mem_cgroup *memcg);
+extern int alloc_shrinker_info(struct mem_cgroup *memcg);
+extern void free_shrinker_info(struct mem_cgroup *memcg);
 extern void set_shrinker_bit(struct mem_cgroup *memcg,
                              int nid, int shrinker_id);
 #else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 76a557520a1a..65d9eb0215b5 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5248,11 +5248,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
         struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
         /*
-         * A memcg must be visible for expand_shrinker_maps()
+         * A memcg must be visible for expand_shrinker_info()
          * by the time the maps are allocated. So, we allocate maps
          * here, when for_each_mem_cgroup() can't skip it.
          */
-        if (alloc_shrinker_maps(memcg)) {
+        if (alloc_shrinker_info(memcg)) {
                 mem_cgroup_id_remove(memcg);
                 return -ENOMEM;
         }
@@ -5316,7 +5316,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
         vmpressure_cleanup(&memcg->vmpressure);
         cancel_work_sync(&memcg->high_work);
         mem_cgroup_remove_from_trees(memcg);
-        free_shrinker_maps(memcg);
+        free_shrinker_info(memcg);
         memcg_free_kmem(memcg);
         mem_cgroup_free(memcg);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 40e7751ef961..dcb7f2913ace 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -187,20 +187,20 @@ static DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
 
-static void free_shrinker_map_rcu(struct rcu_head *head)
+static void free_shrinker_info_rcu(struct rcu_head *head)
 {
-        kvfree(container_of(head, struct memcg_shrinker_map, rcu));
+        kvfree(container_of(head, struct shrinker_info, rcu));
 }
 
-static int expand_one_shrinker_map(struct mem_cgroup *memcg,
-                                   int size, int old_size)
+static int expand_one_shrinker_info(struct mem_cgroup *memcg,
+                                    int size, int old_size)
 {
-        struct memcg_shrinker_map *new, *old;
+        struct shrinker_info *new, *old;
         int nid;
 
         for_each_node(nid) {
                 old = rcu_dereference_protected(
-                        mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
+                        mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true);
                 /* Not yet online memcg */
                 if (!old)
                         return 0;
@@ -213,17 +213,17 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
                 memset(new->map, (int)0xff, old_size);
                 memset((void *)new->map + old_size, 0, size - old_size);
 
-                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
-                call_rcu(&old->rcu, free_shrinker_map_rcu);
+                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
+                call_rcu(&old->rcu, free_shrinker_info_rcu);
         }
 
         return 0;
 }
 
-void free_shrinker_maps(struct mem_cgroup *memcg)
+void free_shrinker_info(struct mem_cgroup *memcg)
 {
         struct mem_cgroup_per_node *pn;
-        struct memcg_shrinker_map *map;
+        struct shrinker_info *info;
         int nid;
 
         if (mem_cgroup_is_root(memcg))
@@ -231,16 +231,16 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 
         for_each_node(nid) {
                 pn = mem_cgroup_nodeinfo(memcg, nid);
-                map = rcu_dereference_protected(pn->shrinker_map, true);
-                if (map)
-                        kvfree(map);
-                rcu_assign_pointer(pn->shrinker_map, NULL);
+                info = rcu_dereference_protected(pn->shrinker_info, true);
+                if (info)
+                        kvfree(info);
+                rcu_assign_pointer(pn->shrinker_info, NULL);
         }
 }
 
-int alloc_shrinker_maps(struct mem_cgroup *memcg)
+int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
-        struct memcg_shrinker_map *map;
+        struct shrinker_info *info;
         int nid, size, ret = 0;
 
         if (mem_cgroup_is_root(memcg))
@@ -249,20 +249,20 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
         down_write(&shrinker_rwsem);
         size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
         for_each_node(nid) {
-                map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
-                if (!map) {
-                        free_shrinker_maps(memcg);
+                info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
+                if (!info) {
+                        free_shrinker_info(memcg);
                         ret = -ENOMEM;
                         break;
                 }
-                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
+                rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
         }
         up_write(&shrinker_rwsem);
 
         return ret;
 }
 
-static int expand_shrinker_maps(int new_id)
+static int expand_shrinker_info(int new_id)
 {
         int size, old_size, ret = 0;
         int new_nr_max = new_id + 1;
@@ -280,7 +280,7 @@ static int expand_shrinker_info(int new_id)
         do {
                 if (mem_cgroup_is_root(memcg))
                         continue;
-                ret = expand_one_shrinker_map(memcg, size, old_size);
+                ret = expand_one_shrinker_info(memcg, size, old_size);
                 if (ret) {
                         mem_cgroup_iter_break(NULL, memcg);
                         goto out;
@@ -297,13 +297,13 @@ static int expand_shrinker_info(int new_id)
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 {
         if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
-                struct memcg_shrinker_map *map;
+                struct shrinker_info *info;
 
                 rcu_read_lock();
-                map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+                info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
                 /* Pairs with smp mb in shrink_slab() */
                 smp_mb__before_atomic();
-                set_bit(shrinker_id, map->map);
+                set_bit(shrinker_id, info->map);
                 rcu_read_unlock();
         }
 }
@@ -334,7 +334,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
                 goto unlock;
 
         if (id >= shrinker_nr_max) {
-                if (expand_shrinker_maps(id)) {
+                if (expand_shrinker_info(id)) {
                         idr_remove(&shrinker_idr, id);
                         goto unlock;
                 }
@@ -663,7 +663,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
                         struct mem_cgroup *memcg, int priority)
 {
-        struct memcg_shrinker_map *map;
+        struct shrinker_info *info;
         unsigned long ret, freed = 0;
         int i;
 
@@ -673,12 +673,12 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
         if (!down_read_trylock(&shrinker_rwsem))
                 return 0;
 
-        map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
-                                        true);
-        if (unlikely(!map))
+        info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+                                         true);
+        if (unlikely(!info))
                 goto unlock;
 
-        for_each_set_bit(i, map->map, shrinker_nr_max) {
+        for_each_set_bit(i, info->map, shrinker_nr_max) {
                 struct shrink_control sc = {
                         .gfp_mask = gfp_mask,
                         .nid = nid,
@@ -689,7 +689,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
                 shrinker = idr_find(&shrinker_idr, i);
                 if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) {
                         if (!shrinker)
-                                clear_bit(i, map->map);
+                                clear_bit(i, info->map);
                         continue;
                 }
 
@@ -700,7 +700,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 
                 ret = do_shrink_slab(&sc, shrinker, priority);
                 if (ret == SHRINK_EMPTY) {
-                        clear_bit(i, map->map);
+                        clear_bit(i, info->map);
                         /*
                          * After the shrinker reported that it had no objects to
                          * free, but before we cleared the corresponding bit in
From patchwork Thu Jan 21 23:06:16 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12037961
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 06/11] mm: vmscan: use a new flag to indicate shrinker is registered
Date: Thu, 21 Jan 2021 15:06:16 -0800
Message-Id: <20210121230621.654304-7-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

Currently a registered shrinker is indicated by a non-NULL
shrinker->nr_deferred. This approach is fine while nr_deferred lives at
the shrinker level, but the following patches will move the MEMCG_AWARE
shrinkers' nr_deferred to the memcg level, so their
shrinker->nr_deferred would always be NULL, which would prevent those
shrinkers from unregistering correctly. So add a new SHRINKER_REGISTERED
flag, and remove SHRINKER_REGISTERING, since successful registration can
now be checked via the new flag.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/shrinker.h |  7 ++++---
 mm/vmscan.c              | 27 +++++++++------------------
 2 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123650e2..1eac79ce57d4 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -79,13 +79,14 @@ struct shrinker {
 #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
 
 /* Flags */
-#define SHRINKER_NUMA_AWARE     (1 << 0)
-#define SHRINKER_MEMCG_AWARE    (1 << 1)
+#define SHRINKER_REGISTERED     (1 << 0)
+#define SHRINKER_NUMA_AWARE     (1 << 1)
+#define SHRINKER_MEMCG_AWARE    (1 << 2)
 /*
  * It just makes sense when the shrinker is also MEMCG_AWARE for now,
  * non-MEMCG_AWARE shrinker should not have this flag set.
  */
-#define SHRINKER_NONSLAB        (1 << 2)
+#define SHRINKER_NONSLAB        (1 << 3)
 
 extern int prealloc_shrinker(struct shrinker *shrinker);
 extern void register_shrinker_prepared(struct shrinker *shrinker);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index dcb7f2913ace..018e1beb24c9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -308,19 +308,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
         }
 }
 
-/*
- * We allow subsystems to populate their shrinker-related
- * LRU lists before register_shrinker_prepared() is called
- * for the shrinker, since we don't want to impose
- * restrictions on their internal registration order.
- * In this case shrink_slab_memcg() may find corresponding
- * bit is set in the shrinkers map.
- *
- * This value is used by the function to detect registering
- * shrinkers and to skip do_shrink_slab() calls for them.
- */
-#define SHRINKER_REGISTERING ((struct shrinker *)~0UL)
-
 static DEFINE_IDR(shrinker_idr);
 
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
@@ -329,7 +316,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 
         down_write(&shrinker_rwsem);
         /* This may call shrinker, so it must use down_read_trylock() */
-        id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);
+        id = idr_alloc(&shrinker_idr, NULL, 0, 0, GFP_KERNEL);
         if (id < 0)
                 goto unlock;
 
@@ -496,6 +483,7 @@ void register_shrinker_prepared(struct shrinker *shrinker)
         if (shrinker->flags & SHRINKER_MEMCG_AWARE)
                 idr_replace(&shrinker_idr, shrinker, shrinker->id);
 #endif
+        shrinker->flags |= SHRINKER_REGISTERED;
         up_write(&shrinker_rwsem);
 }
 
@@ -515,13 +503,16 @@ EXPORT_SYMBOL(register_shrinker);
  */
 void unregister_shrinker(struct shrinker *shrinker)
 {
-        if (!shrinker->nr_deferred)
+        if (!(shrinker->flags & SHRINKER_REGISTERED))
                 return;
-        if (shrinker->flags & SHRINKER_MEMCG_AWARE)
-                unregister_memcg_shrinker(shrinker);
+
         down_write(&shrinker_rwsem);
         list_del(&shrinker->list);
+        shrinker->flags &= ~SHRINKER_REGISTERED;
         up_write(&shrinker_rwsem);
+
+        if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+                unregister_memcg_shrinker(shrinker);
         kfree(shrinker->nr_deferred);
         shrinker->nr_deferred = NULL;
 }
@@ -687,7 +678,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
                 struct shrinker *shrinker;
 
                 shrinker = idr_find(&shrinker_idr, i);
-                if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) {
+                if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
                         if (!shrinker)
                                 clear_bit(i, info->map);
                         continue;
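
For reference, an annotated restatement of the shrink_slab_memcg() hunk
above, showing the states a memcg-aware shrinker ID can be in after this
patch (descriptive sketch only, comments are not from the patch):

    shrinker = idr_find(&shrinker_idr, i);
    if (!shrinker) {
            /* No shrinker object published for this ID: drop the bit. */
            clear_bit(i, info->map);
            continue;
    }
    if (!(shrinker->flags & SHRINKER_REGISTERED)) {
            /* Object present but registration not finished: keep the
             * bit for a later pass, do not call do_shrink_slab() yet. */
            continue;
    }
    /* Fully registered: safe to shrink. */
    ret = do_shrink_slab(&sc, shrinker, priority);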
b=GlhuPtTKpp+g6fUASFTUMs4Bb0pbNFyKH/ey8j7wkrJ5+/BiUjBScVmpiPSJz1XdWf nUgGb4U8fSndmG0bt2fkdYVjfX9NUdeLUsONeTCMeVJBf4d7iIgzKLVpiwMI70et7N84 iHnRJOFinZKVhpzgZ6ADdroGQi+pQUZAMuKM/kZTqu6X4ZkZj1TPDA8SDeGl8ZDsZ8Ac ejKFXZXF5YgILGGukXt7waNCrbz5tgdWNgMt5mJHqgNErLWkcX9oqmiQoVMYy/PgA1NJ MFvDYjFLHpiCr+Gt1aO3hm0rXywaNaaT+BPH9WYIxtho2C9oJ8ERsmmmL6kEUtcmQXSA +unQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=L2xKRAJQ2s+bzAbA86jVWah9Pcyn3h9mN21LmxROmMg=; b=HtmP//6lNICnYQL3I2urrPfFIrBHOToemSMEnYF6ygTxIODsviMhHtkGWRHam7ffVP YyVSdgG9G55Lj2362IVtW7QsghIiotWRDwX5BjJXarbChtLp5iDiZqVtmTApmNsGZZXM H+F8n3GcbGgh83Z6yLW1T8+uKGRleAxo/4/e4XUDqGEIOKWEWC31+NSkx6wj2NXmnInG k7GVpHun6fMVRuJnsrOjFc6pk2c779YrwLiZ7V6iYa5C7lKEtHoRcUgjMtmz+4VJEOiN jWQy6aCJ1OX+FRsewiZd1JV68Se+JY2rE/QyBZg7ml/f6lLF++0lmvkiyyaJIlhxkYni mkzw== X-Gm-Message-State: AOAM530uYuQz5dQcGztFI4IYFJtRceJ9cg0Lq5/J480BF37/OZFKV1bv MtAlaRmmeEQJRdC80pNTsyw= X-Google-Smtp-Source: ABdhPJyBLeofWI4NXFnspAnBQ9Hx7qB8cYn0QlzQ5sRwqGqOdV/hluZalstoKB27N9LtctDa9Erq/g== X-Received: by 2002:a17:90b:30d4:: with SMTP id hi20mr1857070pjb.41.1611270411631; Thu, 21 Jan 2021 15:06:51 -0800 (PST) Received: from localhost.localdomain (c-73-93-239-127.hsd1.ca.comcast.net. [73.93.239.127]) by smtp.gmail.com with ESMTPSA id y16sm6722921pfb.83.2021.01.21.15.06.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 Jan 2021 15:06:50 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v4 PATCH 07/11] mm: vmscan: add per memcg shrinker nr_deferred Date: Thu, 21 Jan 2021 15:06:17 -0800 Message-Id: <20210121230621.654304-8-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com> References: <20210121230621.654304-1-shy828301@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Currently the number of deferred objects are per shrinker, but some slabs, for example, vfs inode/dentry cache are per memcg, this would result in poor isolation among memcgs. The deferred objects typically are generated by __GFP_NOFS allocations, one memcg with excessive __GFP_NOFS allocations may blow up deferred objects, then other innocent memcgs may suffer from over shrink, excessive reclaim latency, etc. For example, two workloads run in memcgA and memcgB respectively, workload in B is vfs heavy workload. Workload in A generates excessive deferred objects, then B's vfs cache might be hit heavily (drop half of caches) by B's limit reclaim or global reclaim. We observed this hit in our production environment which was running vfs heavy workload shown as the below tracing log: <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721 cache items 246404277 delta 31345 total_scan 123202138 <...>-409454 [022] .... 
<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602 last shrinker return val 123186855

The vfs cache to page cache ratio was 10:1 on this machine, and half of the caches were dropped. This also caused a significant amount of page cache to be dropped due to inode eviction.

Making nr_deferred per memcg for memcg aware shrinkers solves the unfairness and brings better isolation. When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's own nr_deferred is used, and non memcg aware shrinkers use the shrinker's nr_deferred all the time.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/memcontrol.h |  7 +++---
 mm/vmscan.c                | 49 +++++++++++++++++++++++++-------------
 2 files changed, 36 insertions(+), 20 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 62b888b88a5f..e0384367e07d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -93,12 +93,13 @@ struct lruvec_stat {
 };
 
 /*
- * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
- * which have elements charged to this memcg.
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to this memcg.
  */
 struct shrinker_info {
 	struct rcu_head rcu;
-	unsigned long map[];
+	unsigned long *map;
+	atomic_long_t *nr_deferred;
 };
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 018e1beb24c9..722aa71b13b2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,11 +192,13 @@ static void free_shrinker_info_rcu(struct rcu_head *head)
 	kvfree(container_of(head, struct shrinker_info, rcu));
 }
 
-static int expand_one_shrinker_info(struct mem_cgroup *memcg,
-				    int size, int old_size)
+static int expand_one_shrinker_info(struct mem_cgroup *memcg, int nr_max,
+				    int m_size, int d_size,
+				    int old_m_size, int old_d_size)
 {
 	struct shrinker_info *new, *old;
 	int nid;
+	int size = m_size + d_size;
 
 	for_each_node(nid) {
 		old = rcu_dereference_protected(
@@ -209,9 +211,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 		if (!new)
 			return -ENOMEM;
 
-		/* Set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_size);
-		memset((void *)new->map + old_size, 0, size - old_size);
+		new->map = (unsigned long *)(new + 1);
+		new->nr_deferred = (atomic_long_t *)(new->map +
+					nr_max / BITS_PER_LONG + 1);
+
+		/* map: set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_m_size);
+		memset((void *)new->map + old_m_size, 0, m_size - old_m_size);
+		/* nr_deferred: copy old values, clear all new values */
+		memcpy(new->nr_deferred, old->nr_deferred, old_d_size);
+		memset((void *)new->nr_deferred + old_d_size, 0, d_size - old_d_size);
 
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
 		call_rcu(&old->rcu, free_shrinker_info_rcu);
@@ -226,9 +235,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 	struct shrinker_info *info;
 	int nid;
 
-	if (mem_cgroup_is_root(memcg))
-		return;
-
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
 		info = rcu_dereference_protected(pn->shrinker_info, true);
@@ -242,12 +248,13 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
 	int nid, size, ret = 0;
-
-	if (mem_cgroup_is_root(memcg))
-		return 0;
+	int m_size, d_size = 0;
 
 	down_write(&shrinker_rwsem);
-	size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
+	m_size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
+	d_size = shrinker_nr_max * sizeof(atomic_long_t);
+	size = m_size + d_size;
+
 	for_each_node(nid) {
 		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
 		if (!info) {
@@ -255,6 +262,9 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 			ret = -ENOMEM;
 			break;
 		}
+		info->map = (unsigned long *)(info + 1);
+		info->nr_deferred = (atomic_long_t *)(info->map +
+					shrinker_nr_max / BITS_PER_LONG + 1);
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_write(&shrinker_rwsem);
@@ -266,10 +276,16 @@ static int expand_shrinker_info(int new_id)
 {
 	int size, old_size, ret = 0;
 	int new_nr_max = new_id + 1;
+	int m_size, d_size = 0;
+	int old_m_size, old_d_size = 0;
 	struct mem_cgroup *memcg;
 
-	size = (new_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
-	old_size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
+	m_size = (new_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
+	d_size = new_nr_max * sizeof(atomic_long_t);
+	size = m_size + d_size;
+	old_m_size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
+	old_d_size = shrinker_nr_max * sizeof(atomic_long_t);
+	old_size = old_m_size + old_d_size;
 	if (size <= old_size)
 		return 0;
 
@@ -278,9 +294,8 @@ static int expand_shrinker_info(int new_id)
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		if (mem_cgroup_is_root(memcg))
-			continue;
-		ret = expand_one_shrinker_info(memcg, size, old_size);
+		ret = expand_one_shrinker_info(memcg, new_nr_max, m_size, d_size,
+					       old_m_size, old_d_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
 			goto out;
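The map/nr_deferred carving in this patch is easy to get wrong by one element, so here is a self-contained sketch of the same single-allocation layout that can be compiled and stepped through. It is a simplification, not kernel code: calloc() stands in for kvzalloc_node(), C11 atomic_long stands in for atomic_long_t, and the RCU head is omitted; the rounding expressions mirror the patch.

#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Same shape as the patched struct shrinker_info, minus the RCU head. */
struct shrinker_info {
	unsigned long *map;		/* one bit per shrinker id */
	atomic_long *nr_deferred;	/* one counter per shrinker id */
};

/* One allocation holds the struct, the bitmap, and the counter array:
 *   [ struct shrinker_info | map words ... | nr_deferred counters ... ]
 */
static struct shrinker_info *alloc_info(int shrinker_nr_max)
{
	size_t m_size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
	size_t d_size = shrinker_nr_max * sizeof(atomic_long);
	struct shrinker_info *info = calloc(1, sizeof(*info) + m_size + d_size);

	if (!info)
		return NULL;
	/* map starts right after the struct, nr_deferred right after map */
	info->map = (unsigned long *)(info + 1);
	info->nr_deferred = (atomic_long *)(info->map +
				shrinker_nr_max / BITS_PER_LONG + 1);
	return info;
}

int main(void)
{
	struct shrinker_info *info = alloc_info(100);

	if (!info)
		return 1;
	printf("struct %p map %p nr_deferred %p\n",
	       (void *)info, (void *)info->map, (void *)info->nr_deferred);
	free(info);
	return 0;
}

One allocation per node instead of two keeps the bitmap and the counters on the same cache-friendly chunk and means expand_one_shrinker_info() only has one old/new pair to swap under RCU.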
From patchwork Thu Jan 21 23:06:18 2021
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 08/11] mm: vmscan: use per memcg nr_deferred of shrinker
Date: Thu, 21 Jan 2021 15:06:18 -0800
Message-Id: <20210121230621.654304-9-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

Use the per memcg nr_deferred for memcg aware shrinkers. The shrinker's own nr_deferred is still used in the following cases:
    1. Non memcg aware shrinkers
    2. !CONFIG_MEMCG
    3. memcg is disabled by boot parameter
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 81 +++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 69 insertions(+), 12 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 722aa71b13b2..d8e77ea13815 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -359,6 +359,27 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 	up_write(&shrinker_rwsem);
 }
 
+static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+				    struct mem_cgroup *memcg)
+{
+	struct shrinker_info *info;
+
+	info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+					 true);
+	return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
+}
+
+static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+				  struct mem_cgroup *memcg)
+{
+	struct shrinker_info *info;
+
+	info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+					 true);
+
+	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
@@ -397,6 +418,18 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 {
 }
 
+static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+				    struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
+static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+				  struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return false;
@@ -408,6 +441,39 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+static long count_nr_deferred(struct shrinker *shrinker,
+			      struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (sc->memcg &&
+	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
+		return count_nr_deferred_memcg(nid, shrinker,
+					       sc->memcg);
+
+	return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+}
+
+
+static long set_nr_deferred(long nr, struct shrinker *shrinker,
+			    struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (sc->memcg &&
+	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
+		return set_nr_deferred_memcg(nr, nid, shrinker,
+					     sc->memcg);
+
+	return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -544,14 +610,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	long freeable;
 	long nr;
 	long new_nr;
-	int nid = shrinkctl->nid;
 	long batch_size = shrinker->batch ? shrinker->batch
 					  : SHRINK_BATCH;
 	long scanned = 0, next_deferred;
 
-	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
-		nid = 0;
-
 	freeable = shrinker->count_objects(shrinker, shrinkctl);
 	if (freeable == 0 || freeable == SHRINK_EMPTY)
 		return freeable;
@@ -561,7 +623,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * and zero it so that other concurrent shrinker invocations
 	 * don't also do this scanning work.
 	 */
-	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+	nr = count_nr_deferred(shrinker, shrinkctl);
 
 	total_scan = nr;
 	if (shrinker->seeks) {
@@ -652,14 +714,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		next_deferred = 0;
 	/*
 	 * move the unused scan count back into the shrinker in a
-	 * manner that handles concurrent updates. If we exhausted the
-	 * scan, there is no need to do an update.
+	 * manner that handles concurrent updates.
 	 */
-	if (next_deferred > 0)
-		new_nr = atomic_long_add_return(next_deferred,
-						&shrinker->nr_deferred[nid]);
-	else
-		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
+	new_nr = set_nr_deferred(next_deferred, shrinker, shrinkctl);
 
 	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 	return freed;
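The count/set pairing above preserves an invariant that is easy to miss in diff form: a reclaimer claims the deferred count with an atomic exchange-to-zero, so two concurrent reclaimers can never both scan the same deferred objects, and each one adds only its own unscanned remainder back. A minimal threaded userspace model of that handoff follows; pthreads and C11 atomics stand in for concurrent reclaim and for atomic_long_xchg()/atomic_long_add_return() (all assumptions of this sketch).

#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>

static atomic_long nr_deferred = 1000;	/* one shrinker's deferred count */

static void *reclaimer(void *arg)
{
	long id = (long)arg;
	/* count_nr_deferred(): claim everything deferred so far */
	long nr = atomic_exchange(&nr_deferred, 0);
	long scanned = nr / 2;		/* pretend we only scanned half */

	/* set_nr_deferred(): return what we did not get to */
	long new_nr = atomic_fetch_add(&nr_deferred, nr - scanned) + (nr - scanned);

	printf("reclaimer %ld: claimed %ld, returned %ld, saw total %ld\n",
	       id, nr, nr - scanned, new_nr);
	return NULL;
}

int main(void)
{
	pthread_t t[2];

	for (long i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, reclaimer, (void *)i);
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	printf("final deferred: %ld\n", atomic_load(&nr_deferred));
	return 0;
}

Built with cc -pthread, the final total always equals the sum of what the workers returned, regardless of interleaving; that is the property the exchange buys over a plain read-then-clear.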
From patchwork Thu Jan 21 23:06:19 2021
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 09/11] mm: vmscan: don't need allocate shrinker->nr_deferred for memcg aware shrinkers
Date: Thu, 21 Jan 2021 15:06:19 -0800
Message-Id: <20210121230621.654304-10-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

Now that nr_deferred is available at the per memcg level for memcg aware shrinkers, there is no need to allocate shrinker->nr_deferred for such shrinkers anymore.

prealloc_memcg_shrinker() returns -ENOSYS if !CONFIG_MEMCG or if memcg is disabled on the kernel command line, and the shrinker's SHRINKER_MEMCG_AWARE flag is then cleared. This keeps the implementation simple.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d8e77ea13815..ea1402e7b968 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -329,6 +329,9 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
 	int id, ret = -ENOMEM;
 
+	if (mem_cgroup_disabled())
+		return -ENOSYS;
+
 	down_write(&shrinker_rwsem);
 	/* This may call shrinker, so it must use down_read_trylock() */
 	id = idr_alloc(&shrinker_idr, NULL, 0, 0, GFP_KERNEL);
@@ -411,7 +414,7 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 #else
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
-	return 0;
+	return -ENOSYS;
 }
 
 static void unregister_memcg_shrinker(struct shrinker *shrinker)
@@ -522,8 +525,20 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone
  */
 int prealloc_shrinker(struct shrinker *shrinker)
 {
-	unsigned int size = sizeof(*shrinker->nr_deferred);
+	unsigned int size;
+	int err;
+
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
+		err = prealloc_memcg_shrinker(shrinker);
+		if (!err)
+			return 0;
+		if (err != -ENOSYS)
+			return err;
+
+		shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
+	}
 
+	size = sizeof(*shrinker->nr_deferred);
 	if (shrinker->flags & SHRINKER_NUMA_AWARE)
 		size *= nr_node_ids;
 
@@ -531,26 +546,14 @@ int prealloc_shrinker(struct shrinker *shrinker)
 	if (!shrinker->nr_deferred)
 		return -ENOMEM;
 
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
-		if (prealloc_memcg_shrinker(shrinker))
-			goto free_deferred;
-	}
-
 	return 0;
-
-free_deferred:
-	kfree(shrinker->nr_deferred);
-	shrinker->nr_deferred = NULL;
-	return -ENOMEM;
 }
 
 void free_prealloced_shrinker(struct shrinker *shrinker)
 {
-	if (!shrinker->nr_deferred)
-		return;
-
 	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
-		unregister_memcg_shrinker(shrinker);
+		return unregister_memcg_shrinker(shrinker);
 
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
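The -ENOSYS convention does the heavy lifting in this patch: it lets prealloc_shrinker() distinguish "memcg support is absent, degrade to a private nr_deferred" from a genuine allocation failure. A compilable sketch of just that decision follows; the error-code flow mirrors the patch, while the flag value, node count, and calloc() stand-in are assumptions made for brevity.

#include <errno.h>
#include <stdlib.h>

#define SHRINKER_MEMCG_AWARE	(1 << 1)	/* illustrative bit value */
static const int nr_node_ids = 4;		/* placeholder node count */
static int memcg_enabled;			/* flip to 1 to model CONFIG_MEMCG */

struct shrinker {
	unsigned int flags;
	long *nr_deferred;
};

static int prealloc_memcg_shrinker(struct shrinker *s)
{
	(void)s;	/* the real code allocates an id for s here */
	if (!memcg_enabled)
		return -ENOSYS;	/* !CONFIG_MEMCG or cgroup_disable=memory */
	return 0;	/* per-memcg nr_deferred lives in shrinker_info */
}

static int prealloc_shrinker(struct shrinker *s)
{
	if (s->flags & SHRINKER_MEMCG_AWARE) {
		int err = prealloc_memcg_shrinker(s);

		if (!err)
			return 0;	/* per-memcg counters suffice */
		if (err != -ENOSYS)
			return err;	/* genuine failure: propagate */
		s->flags &= ~SHRINKER_MEMCG_AWARE;	/* degrade gracefully */
	}
	/* fall back to a plain per-node deferred array */
	s->nr_deferred = calloc(nr_node_ids, sizeof(*s->nr_deferred));
	return s->nr_deferred ? 0 : -ENOMEM;
}

int main(void)
{
	struct shrinker s = { .flags = SHRINKER_MEMCG_AWARE };
	return prealloc_shrinker(&s);
}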
From patchwork Thu Jan 21 23:06:20 2021
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 10/11] mm: memcontrol: reparent nr_deferred when memcg offline
Date: Thu, 21 Jan 2021 15:06:20 -0800
Message-Id: <20210121230621.654304-11-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

Now that the shrinker's nr_deferred is per memcg for memcg aware shrinkers, add the child's counts to the parent's corresponding nr_deferred when the memcg goes offline.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/memcontrol.h |  1 +
 mm/memcontrol.c            |  1 +
 mm/vmscan.c                | 31 +++++++++++++++++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e0384367e07d..fe1375f08881 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1586,6 +1586,7 @@ extern int alloc_shrinker_info(struct mem_cgroup *memcg);
 extern void free_shrinker_info(struct mem_cgroup *memcg);
 extern void set_shrinker_bit(struct mem_cgroup *memcg,
 			     int nid, int shrinker_id);
+extern void reparent_shrinker_deferred(struct mem_cgroup *memcg);
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 65d9eb0215b5..cccf2bacb147 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5284,6 +5284,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_low(&memcg->memory, 0);
 
 	memcg_offline_kmem(memcg);
+	reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
 
 	drain_all_stock(memcg);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ea1402e7b968..e73f200ffd2d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -383,6 +383,37 @@ static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
 	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
 }
 
+static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
+						     int nid)
+{
+	return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+					 lockdep_is_held(&shrinker_rwsem));
+}
+
+void reparent_shrinker_deferred(struct mem_cgroup *memcg)
+{
+	int i, nid;
+	long nr;
+	struct mem_cgroup *parent;
+	struct shrinker_info *child_info, *parent_info;
+
+	parent = parent_mem_cgroup(memcg);
+	if (!parent)
+		parent = root_mem_cgroup;
+
+	/* Prevent from concurrent shrinker_info expand */
+	down_read(&shrinker_rwsem);
+	for_each_node(nid) {
+		child_info = shrinker_info_protected(memcg, nid);
+		parent_info = shrinker_info_protected(parent, nid);
+		for (i = 0; i < shrinker_nr_max; i++) {
+			nr = atomic_long_read(&child_info->nr_deferred[i]);
+			atomic_long_add(nr, &parent_info->nr_deferred[i]);
+		}
+	}
+	up_read(&shrinker_rwsem);
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
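Because the child's shrinker_info is freed once the memcg is torn down, the loop above must fold every per-shrinker counter into the parent first, or that deferred work would be lost. A small userspace model of the accumulation follows; flat arrays replace the per-node, RCU-protected shrinker_info, and the two-level hierarchy is an assumption of the sketch.

#include <stdio.h>
#include <stdatomic.h>

#define SHRINKER_NR_MAX 4	/* stands in for shrinker_nr_max */

struct memcg {
	atomic_long nr_deferred[SHRINKER_NR_MAX];
};

/* Fold the child's deferred work into the parent before the child's
 * counters go away, so no deferred scanning is silently dropped. */
static void reparent_deferred(struct memcg *child, struct memcg *parent)
{
	for (int i = 0; i < SHRINKER_NR_MAX; i++) {
		long nr = atomic_load(&child->nr_deferred[i]);
		atomic_fetch_add(&parent->nr_deferred[i], nr);
	}
}

int main(void)
{
	struct memcg parent = { 0 }, child = { 0 };

	atomic_store(&child.nr_deferred[1], 500);
	atomic_store(&parent.nr_deferred[1], 100);

	reparent_deferred(&child, &parent);	/* child going offline */

	printf("parent nr_deferred[1] = %ld\n",
	       atomic_load(&parent.nr_deferred[1]));	/* prints 600 */
	return 0;
}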
From patchwork Thu Jan 21 23:06:21 2021
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 11/11] mm: vmscan: shrink deferred objects proportional to priority
Date: Thu, 21 Jan 2021 15:06:21 -0800
Message-Id: <20210121230621.654304-12-shy828301@gmail.com>
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

The number of deferred objects can wind up to an absurd value, which results in the slab caches being clamped and is undesirable for sustaining the working set. So shrink the deferred objects proportionally to the reclaim priority, and cap nr_deferred to twice the number of cache items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with a kernel build and a vfs metadata heavy workload; no regression has been spotted so far, but some corner cases may still regress.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index e73f200ffd2d..bb254d39339f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -659,7 +659,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -673,37 +672,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
 				   freeable, delta, total_scan, priority);
@@ -742,10 +713,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.