From patchwork Sat Aug 14 05:25:08 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12436489
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
    akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
    smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
Date: Sat, 14 Aug 2021 13:25:08 +0800
Message-Id: <20210814052519.86679-2-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until they are reclaimed. Depending on memory
pressure, the patterns of page sharing between different cgroups, and
the cgroup creation and destruction rates, a large number of dying
memory cgroups can be pinned by pagecache pages. This makes page reclaim
less efficient and wastes memory.

We can fix this by converting LRU pages and most other raw memcg pins to
the objcg direction, after which page->memcg always points to an object
cgroup. The objcg infrastructure then no longer serves only
CONFIG_MEMCG_KMEM. This patch moves the objcg infrastructure out of the
scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it to charge
pages.

LRU pages are not accounted at the root level, yet their
page->memcg_data points to root_mem_cgroup, so it always holds a valid
pointer. However, root_mem_cgroup does not have an object cgroup. If we
use the obj_cgroup APIs to charge LRU pages, page->memcg_data must be
set to a root object cgroup, so we also allocate an object cgroup for
root_mem_cgroup.
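To illustrate the indirection (a simplified sketch, not the actual
kernel definitions, and the helper name is illustrative): a page pins an
obj_cgroup instead of a mem_cgroup, and reparenting only has to re-point
objcg->memcg, so a dying memcg is no longer pinned by its pages, only by
its small obj_cgroup:

    /* Sketch only: simplified from the real struct obj_cgroup. */
    struct obj_cgroup {
            struct percpu_ref refcnt;       /* pinned by pages */
            struct mem_cgroup __rcu *memcg; /* re-pointed on reparent */
    };

    /* Readers resolve the memcg under RCU, so objcg->memcg can be
     * switched to the parent cgroup at any time. */
    static struct mem_cgroup *objcg_memcg(struct obj_cgroup *objcg)
    {
            return rcu_dereference(objcg->memcg);
    }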
Signed-off-by: Muchun Song
Reported-by: kernel test robot
---
 include/linux/memcontrol.h |  4 ++-
 mm/memcontrol.c            | 66 +++++++++++++++++++++++++++++-----------------
 2 files changed, 45 insertions(+), 25 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0ff146486aed..41a35de93d75 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -221,7 +221,9 @@ struct memcg_cgwb_frn {
 struct obj_cgroup {
         struct percpu_ref refcnt;
         struct mem_cgroup *memcg;
+#ifdef CONFIG_MEMCG_KMEM
         atomic_t nr_charged_bytes;
+#endif
         union {
                 struct list_head list;
                 struct rcu_head rcu;
@@ -319,9 +321,9 @@ struct mem_cgroup {
 #ifdef CONFIG_MEMCG_KMEM
         int kmemcg_id;
         enum memcg_kmem_state kmem_state;
+#endif
         struct obj_cgroup __rcu *objcg;
         struct list_head objcg_list; /* list of inherited objcgs */
-#endif
 
         MEMCG_PADDING(_pad2_);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e7c205a1852..7df2176e4f02 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -261,7 +261,6 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
         return container_of(vmpr, struct mem_cgroup, vmpressure);
 }
 
-#ifdef CONFIG_MEMCG_KMEM
 extern spinlock_t css_set_lock;
 
 bool mem_cgroup_kmem_disabled(void)
@@ -269,15 +268,14 @@ bool mem_cgroup_kmem_disabled(void)
         return cgroup_memory_nokmem;
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
                                       unsigned int nr_pages);
 
-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_kmem(struct obj_cgroup *objcg)
 {
-        struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
         unsigned int nr_bytes;
         unsigned int nr_pages;
-        unsigned long flags;
 
         /*
          * At this point all allocated objects are freed, and
@@ -291,9 +289,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
          * 3) CPU1: a process from another memcg is allocating something,
          *          the stock if flushed,
          *          objcg->nr_charged_bytes = PAGE_SIZE - 92
-         * 5) CPU0: we do release this object,
+         * 4) CPU0: we do release this object,
          *          92 bytes are added to stock->nr_bytes
-         * 6) CPU0: stock is flushed,
+         * 5) CPU0: stock is flushed,
          *          92 bytes are added to objcg->nr_charged_bytes
          *
          * In the result, nr_charged_bytes == PAGE_SIZE.
@@ -305,6 +303,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 
         if (nr_pages)
                 obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_kmem(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+        struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+        unsigned long flags;
+
+        obj_cgroup_release_kmem(objcg);
 
         spin_lock_irqsave(&css_set_lock, flags);
         list_del(&objcg->list);
@@ -333,10 +344,14 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
         return objcg;
 }
 
-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
-                                  struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
         struct obj_cgroup *objcg, *iter;
+        struct mem_cgroup *parent;
+
+        parent = parent_mem_cgroup(memcg);
+        if (!parent)
+                parent = root_mem_cgroup;
 
         objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
@@ -355,6 +370,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
         percpu_ref_kill(&objcg->refcnt);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 /*
  * This will be used as a shrinker list's index.
  * The main reason for not using cgroup id for this:
@@ -3579,7 +3595,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
-        struct obj_cgroup *objcg;
         int memcg_id;
 
         if (cgroup_memory_nokmem)
@@ -3592,14 +3607,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
         if (memcg_id < 0)
                 return memcg_id;
 
-        objcg = obj_cgroup_alloc();
-        if (!objcg) {
-                memcg_free_cache_id(memcg_id);
-                return -ENOMEM;
-        }
-        objcg->memcg = memcg;
-        rcu_assign_pointer(memcg->objcg, objcg);
-
         static_branch_enable(&memcg_kmem_enabled_key);
 
         memcg->kmemcg_id = memcg_id;
@@ -3623,8 +3630,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
         if (!parent)
                 parent = root_mem_cgroup;
 
-        memcg_reparent_objcgs(memcg, parent);
-
         kmemcg_id = memcg->kmemcg_id;
         BUG_ON(kmemcg_id < 0);
 
@@ -5151,8 +5156,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
         memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
         memcg->kmemcg_id = -1;
-        INIT_LIST_HEAD(&memcg->objcg_list);
 #endif
+        INIT_LIST_HEAD(&memcg->objcg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
         INIT_LIST_HEAD(&memcg->cgwb_list);
         for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5224,16 +5229,22 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
         struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+        struct obj_cgroup *objcg;
 
         /*
          * A memcg must be visible for expand_shrinker_info()
          * by the time the maps are allocated. So, we allocate maps
          * here, when for_each_mem_cgroup() can't skip it.
          */
-        if (alloc_shrinker_info(memcg)) {
-                mem_cgroup_id_remove(memcg);
-                return -ENOMEM;
-        }
+        if (alloc_shrinker_info(memcg))
+                goto remove_id;
+
+        objcg = obj_cgroup_alloc();
+        if (!objcg)
+                goto free_shrinker;
+
+        objcg->memcg = memcg;
+        rcu_assign_pointer(memcg->objcg, objcg);
 
         /* Online state pins memcg ID, memcg ID pins CSS */
         refcount_set(&memcg->id.ref, 1);
@@ -5243,6 +5254,12 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
         queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
 
         return 0;
+
+free_shrinker:
+        free_shrinker_info(memcg);
+remove_id:
+        mem_cgroup_id_remove(memcg);
+        return -ENOMEM;
 }
 
 static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
@@ -5266,6 +5283,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
         page_counter_set_low(&memcg->memory, 0);
 
         memcg_offline_kmem(memcg);
+        memcg_reparent_objcgs(memcg);
         reparent_shrinker_deferred(memcg);
         wb_memcg_offline(memcg);
 

From patchwork Sat Aug 14 05:25:09 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12436491
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
    akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
    smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
Date: Sat, 14 Aug 2021 13:25:09 +0800
Message-Id: <20210814052519.86679-3-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

If we reuse the objcg APIs to charge LRU pages, folio_memcg() can
change while the LRU pages are being reparented, in which case we need
to acquire the new lruvec lock.

    lruvec = folio_lruvec(folio);

    // The page is reparented.

    compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

    // Acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter,
so it cannot detect this change. If it instead took the folio as a
parameter to acquire the lruvec lock, then whenever the folio's memcg
changes we could use folio_memcg() to detect that the new lruvec lock
must be reacquired. So compact_lock_irqsave() is not suitable for us.
Similar to folio_lruvec_lock_irqsave(), introduce
compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in the
compaction routine.
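The recheck itself only lands in a later patch of this series; as a
rough sketch of the pattern that taking the folio (rather than the
lruvec) enables, with an illustrative helper name and the compaction
contention tracking omitted:

    static struct lruvec *lock_folio_lruvec(struct folio *folio)
    {
            struct lruvec *lruvec;
    retry:
            lruvec = folio_lruvec(folio);
            spin_lock_irq(&lruvec->lru_lock);
            if (lruvec_memcg(lruvec) != folio_memcg(folio)) {
                    /* Reparented after the lookup; locked a stale lruvec. */
                    spin_unlock_irq(&lruvec->lru_lock);
                    goto retry;
            }
            return lruvec;
    }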
Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
---
 mm/compaction.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index fbc60f964c38..3131558a7c31 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
         return true;
 }
 
+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+                                  struct compact_control *cc)
+{
+        struct lruvec *lruvec;
+
+        lruvec = folio_lruvec(folio);
+
+        /* Track if the lock is contended in async mode */
+        if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+                if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+                        goto out;
+
+                cc->contended = true;
+        }
+
+        spin_lock_irqsave(&lruvec->lru_lock, *flags);
+out:
+        lruvec_memcg_debug(lruvec, folio);
+
+        return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -837,6 +860,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
         /* Time to isolate some pages for migration */
         for (; low_pfn < end_pfn; low_pfn++) {
+                struct folio *folio;
 
                 if (skip_on_failure && low_pfn >= next_skip_pfn) {
                         /*
@@ -1022,18 +1046,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
                         if (!TestClearPageLRU(page))
                                 goto isolate_fail_put;
 
-                        lruvec = folio_lruvec(page_folio(page));
+                        folio = page_folio(page);
+                        lruvec = folio_lruvec(folio);
 
                         /* If we already hold the lock, we can skip some rechecking */
                         if (lruvec != locked) {
                                 if (locked)
                                         unlock_page_lruvec_irqrestore(locked, flags);
 
-                                compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+                                lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
                                 locked = lruvec;
 
-                                lruvec_memcg_debug(lruvec, page_folio(page));
-
                                 /* Try get exclusive access under lock */
                                 if (!skip_updated) {
                                         skip_updated = true;

From patchwork Sat Aug 14 05:25:10 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12436493
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
    akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
    smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
Date: Sat, 14 Aug 2021 13:25:10 +0800
Message-Id: <20210814052519.86679-4-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

The diagram below shows how to make the folio lruvec lock safe when LRU
pages are reparented.

    folio_lruvec_lock(folio)
    retry:
        lruvec = folio_lruvec(folio);

        // The folio is reparented at this time.
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
            // Acquired the wrong lruvec lock and need to retry.
            // Because this folio is on the parent memcg lruvec list.
            goto retry;

        // If we reach here, it means that folio_memcg(folio) is stable.

    memcg_reparent_objcgs(memcg)
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        // Move all the pages from the lruvec list to the parent lruvec list.

        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio has
been reparented. If so, we need to reacquire the new lruvec lock. The
LRU-page reparenting routine will also acquire the lruvec lock
(implemented in a later patch), so folio_memcg() cannot change while we
hold the lruvec lock.

Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
remove it.

This is a preparation for reparenting the LRU pages.
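For reference, a simplified sketch of the reparenting side that makes
the retry above sufficient (the function name, the lock-nesting
annotation, and the elided list splicing are illustrative; the real
implementation arrives in a later patch):

    static void reparent_lru_pages(struct lruvec *child, struct lruvec *parent)
    {
            spin_lock(&child->lru_lock);
            spin_lock_nested(&parent->lru_lock, SINGLE_DEPTH_NESTING);
            /* Splice the child's LRU lists onto the parent's lists. */
            spin_unlock(&parent->lru_lock);
            spin_unlock(&child->lru_lock);
    }

Because both sides serialize on the child's lru_lock, once
folio_lruvec_lock() holds a lock whose lruvec_memcg() matches
folio_memcg(), the folio can no longer be moved away underneath it.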
Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
---
 include/linux/memcontrol.h | 18 +++-----------
 mm/compaction.c            | 10 +++++++-
 mm/memcontrol.c            | 62 +++++++++++++++++++++++++++++-----------------
 mm/swap.c                  |  4 +++
 4 files changed, 55 insertions(+), 39 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 41a35de93d75..5a8f85bd9bbf 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -769,7 +769,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
  * folio_lruvec - return lruvec for isolating/putting an LRU folio
  * @folio: Pointer to the folio.
  *
- * This function relies on folio->mem_cgroup being stable.
+ * The lruvec can be changed to its parent lruvec when the page reparented.
+ * The caller need to recheck if it cares about this changes (just like
+ * folio_lruvec_lock() does).
  */
 static inline struct lruvec *folio_lruvec(struct folio *folio)
 {
@@ -788,15 +790,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
                                          unsigned long *flags);
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
-#else
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-#endif
-
 static inline
 struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
 {
         return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -1243,11 +1236,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
         return &pgdat->__lruvec;
 }
 
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
         return NULL;
diff --git a/mm/compaction.c b/mm/compaction.c
index 3131558a7c31..97d72f6b3dd5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -515,6 +515,8 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
 {
         struct lruvec *lruvec;
 
+        rcu_read_lock();
+retry:
         lruvec = folio_lruvec(folio);
 
         /* Track if the lock is contended in async mode */
@@ -527,7 +529,13 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
 
         spin_lock_irqsave(&lruvec->lru_lock, *flags);
 out:
-        lruvec_memcg_debug(lruvec, folio);
+        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+                spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+                goto retry;
+        }
+
+        /* See the comments in folio_lruvec_lock(). */
+        rcu_read_unlock();
 
         return lruvec;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7df2176e4f02..7955da38e385 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1146,23 +1146,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
         return ret;
 }
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-        struct mem_cgroup *memcg;
-
-        if (mem_cgroup_disabled())
-                return;
-
-        memcg = folio_memcg(folio);
-
-        if (!memcg)
-                VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
-        else
-                VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
-}
-#endif
-
 /**
  * folio_lruvec_lock - lock and return lruvec for a given folio.
  * @folio: Pointer to the folio.
@@ -1175,20 +1158,43 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
  */
 struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-        struct lruvec *lruvec = folio_lruvec(folio);
+        struct lruvec *lruvec;
 
+        rcu_read_lock();
+retry:
+        lruvec = folio_lruvec(folio);
         spin_lock(&lruvec->lru_lock);
-        lruvec_memcg_debug(lruvec, folio);
+
+        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+                spin_unlock(&lruvec->lru_lock);
+                goto retry;
+        }
+
+        /*
+         * Preemption is disabled in the internal of spin_lock, which can serve
+         * as RCU read-side critical sections.
+         */
+        rcu_read_unlock();
 
         return lruvec;
 }
 
 struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-        struct lruvec *lruvec = folio_lruvec(folio);
+        struct lruvec *lruvec;
 
+        rcu_read_lock();
+retry:
+        lruvec = folio_lruvec(folio);
         spin_lock_irq(&lruvec->lru_lock);
-        lruvec_memcg_debug(lruvec, folio);
+
+        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+                spin_unlock_irq(&lruvec->lru_lock);
+                goto retry;
+        }
+
+        /* See the comments in folio_lruvec_lock(). */
+        rcu_read_unlock();
 
         return lruvec;
 }
@@ -1196,10 +1202,20 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
                 unsigned long *flags)
 {
-        struct lruvec *lruvec = folio_lruvec(folio);
+        struct lruvec *lruvec;
 
+        rcu_read_lock();
+retry:
+        lruvec = folio_lruvec(folio);
         spin_lock_irqsave(&lruvec->lru_lock, *flags);
-        lruvec_memcg_debug(lruvec, folio);
+
+        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+                spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+                goto retry;
+        }
+
+        /* See the comments in folio_lruvec_lock(). */
+        rcu_read_unlock();
 
         return lruvec;
 }
diff --git a/mm/swap.c b/mm/swap.c
index 2bca632b73df..9554ff008fe6 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -295,6 +295,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
 
 void lru_note_cost_folio(struct folio *folio)
 {
+        /*
+         * The rcu read lock is held by the caller, so we do not need to
+         * care about the lruvec returned by folio_lruvec() being released.
+         */
         lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio),
                       folio_nr_pages(folio));
 }

From patchwork Sat Aug 14 05:25:11 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12436495
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
    akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
    smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 04/12] mm: vmscan: rework move_pages_to_lru()
Date: Sat, 14 Aug 2021 13:25:11 +0800
Message-Id: <20210814052519.86679-5-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

In a later patch, we will reparent the LRU pages. The pages being moved
to the appropriate LRU list can be reparented while move_pages_to_lru()
is running, so it is wrong for the caller to hold a single lruvec lock
across the whole operation. Use the more general interface
folio_lruvec_relock_irq() instead to acquire the correct lruvec lock
for each folio.
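A condensed sketch of the loop structure this change produces (the
bookkeeping and the unevictable/compound corner cases are omitted; the
real code is in the diff below):

    struct lruvec *lruvec = NULL;

    while (!list_empty(list)) {
            struct folio *folio = lru_to_folio(list);

            /* Drops any previously held lock if the folio belongs to a
             * different lruvec, then takes the right one. */
            lruvec = folio_lruvec_relock_irq(folio, lruvec);
            list_del(&folio->lru);
            /* ... move the folio to the appropriate LRU list ... */
    }
    if (lruvec)
            unlock_page_lruvec_irq(lruvec);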
Signed-off-by: Muchun Song
---
 include/linux/mm.h |  1 +
 mm/vmscan.c        | 49 +++++++++++++++++++++++++------------------------
 2 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ce8fc0fd6d6e..1e7f06bc5f2d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -227,6 +227,7 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 #define PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
 
 #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
+#define lru_to_folio(head) (list_entry((head)->prev, struct folio, lru))
 
 void setup_initial_init_mm(void *start_code, void *end_code,
                            void *end_data, void *brk);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 403a175a720f..8ce42858ad5d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2153,23 +2153,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
  * On return, @list is reused as a list of pages to be freed by the caller.
  *
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate LRU list.
+ *
+ * Note: The caller must not hold any lruvec lock.
  */
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
-                                      struct list_head *list)
+static unsigned int move_pages_to_lru(struct list_head *list)
 {
-        int nr_pages, nr_moved = 0;
+        int nr_moved = 0;
+        struct lruvec *lruvec = NULL;
         LIST_HEAD(pages_to_free);
-        struct page *page;
 
         while (!list_empty(list)) {
-                page = lru_to_page(list);
+                int nr_pages;
+                struct folio *folio = lru_to_folio(list);
+                struct page *page = &folio->page;
+
+                lruvec = folio_lruvec_relock_irq(folio, lruvec);
                 VM_BUG_ON_PAGE(PageLRU(page), page);
                 list_del(&page->lru);
                 if (unlikely(!page_evictable(page))) {
-                        spin_unlock_irq(&lruvec->lru_lock);
+                        unlock_page_lruvec_irq(lruvec);
                         putback_lru_page(page);
-                        spin_lock_irq(&lruvec->lru_lock);
+                        lruvec = NULL;
                         continue;
                 }
 
@@ -2190,20 +2195,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
                         __clear_page_lru_flags(page);
 
                         if (unlikely(PageCompound(page))) {
-                                spin_unlock_irq(&lruvec->lru_lock);
+                                unlock_page_lruvec_irq(lruvec);
                                 destroy_compound_page(page);
-                                spin_lock_irq(&lruvec->lru_lock);
+                                lruvec = NULL;
                         } else
                                 list_add(&page->lru, &pages_to_free);
 
                         continue;
                 }
 
-                /*
-                 * All pages were isolated from the same lruvec (and isolation
-                 * inhibits memcg migration).
-                 */
-                VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
+                VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
                 add_page_to_lru_list(page, lruvec);
                 nr_pages = thp_nr_pages(page);
                 nr_moved += nr_pages;
@@ -2211,6 +2212,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
                 workingset_age_nonresident(lruvec, nr_pages);
         }
 
+        if (lruvec)
+                unlock_page_lruvec_irq(lruvec);
         /*
          * To save our caller's stack, now use input list for pages to free.
          */
@@ -2284,16 +2287,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
         nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
 
-        spin_lock_irq(&lruvec->lru_lock);
-        move_pages_to_lru(lruvec, &page_list);
+        move_pages_to_lru(&page_list);
 
+        local_irq_disable();
         __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
         item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
         if (!cgroup_reclaim(sc))
                 __count_vm_events(item, nr_reclaimed);
         __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
         __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
-        spin_unlock_irq(&lruvec->lru_lock);
+        local_irq_enable();
 
         lru_note_cost(lruvec, file, stat.nr_pageout);
         mem_cgroup_uncharge_list(&page_list);
@@ -2420,18 +2423,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
         /*
          * Move pages back to the lru list.
          */
-        spin_lock_irq(&lruvec->lru_lock);
-
-        nr_activate = move_pages_to_lru(lruvec, &l_active);
-        nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+        nr_activate = move_pages_to_lru(&l_active);
+        nr_deactivate = move_pages_to_lru(&l_inactive);
 
         /* Keep all free pages in l_active list */
         list_splice(&l_inactive, &l_active);
 
+        local_irq_disable();
         __count_vm_events(PGDEACTIVATE, nr_deactivate);
         __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
-
         __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-        spin_unlock_irq(&lruvec->lru_lock);
+        local_irq_enable();
 
         mem_cgroup_uncharge_list(&l_active);
         free_unref_page_list(&l_active);

From patchwork Sat Aug 14 05:25:12 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12436497
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
    akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
    smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}()
Date: Sat, 14 Aug 2021 13:25:12 +0800
Message-Id: <20210814052519.86679-6-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

We should make the THP deferred split queue lock safe when LRU pages
are reparented. Similar to folio_lruvec_lock{_irqsave,_irq}(),
introduce folio_split_queue_lock{_irqsave}() so that the deferred split
queue is looked up and locked through the folio. In the next patch, we
can then use the same retry approach as the lruvec lock to make the
deferred split queue lock safe when LRU pages are reparented.
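A before/after sketch of the calling convention (bodies elided;
get_deferred_split_queue() is the helper this patch replaces):

    /* Before: look up the queue, then lock it in the caller. */
    struct deferred_split *q = get_deferred_split_queue(page);
    spin_lock(&q->split_queue_lock);
    /* ... */
    spin_unlock(&q->split_queue_lock);

    /* After: look up and lock through the folio in one helper. */
    struct deferred_split *q = folio_split_queue_lock(page_folio(page));
    /* ... */
    split_queue_unlock(q);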
Signed-off-by: Muchun Song
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 mm/huge_memory.c | 93 +++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 69 insertions(+), 24 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ade81c123d87..c49ef28e48c1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -499,25 +499,70 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue)
 {
-        struct mem_cgroup *memcg = page_memcg(compound_head(page));
-        struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+        if (mem_cgroup_disabled())
+                return NULL;
+        return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
 
-        if (memcg)
-                return &memcg->deferred_split_queue;
-        else
-                return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+        struct mem_cgroup *memcg = folio_memcg(folio);
+
+        return memcg ? &memcg->deferred_split_queue : NULL;
 }
 #else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue)
 {
-        struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+        return NULL;
+}
 
-        return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+        return NULL;
 }
 #endif
 
+static struct deferred_split *folio_split_queue(struct folio *folio)
+{
+        struct deferred_split *queue = folio_memcg_split_queue(folio);
+
+        return queue ? : &NODE_DATA(folio_nid(folio))->deferred_split_queue;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+        struct deferred_split *queue;
+
+        queue = folio_split_queue(folio);
+        spin_lock(&queue->split_queue_lock);
+
+        return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+        struct deferred_split *queue;
+
+        queue = folio_split_queue(folio);
+        spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+        return queue;
+}
+
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+        spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+                                                 unsigned long flags)
+{
+        spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
 void prep_transhuge_page(struct page *page)
 {
         /*
@@ -2610,8 +2655,9 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
  */
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
-        struct page *head = compound_head(page);
-        struct deferred_split *ds_queue = get_deferred_split_queue(head);
+        struct folio *folio = page_folio(page);
+        struct page *head = &folio->page;
+        struct deferred_split *ds_queue;
         struct anon_vma *anon_vma = NULL;
         struct address_space *mapping = NULL;
         int extra_pins, ret;
@@ -2689,13 +2735,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
         }
 
         /* Prevent deferred_split_scan() touching ->_refcount */
-        spin_lock(&ds_queue->split_queue_lock);
+        ds_queue = folio_split_queue_lock(folio);
         if (page_ref_freeze(head, 1 + extra_pins)) {
                 if (!list_empty(page_deferred_list(head))) {
                         ds_queue->split_queue_len--;
                         list_del(page_deferred_list(head));
                 }
-                spin_unlock(&ds_queue->split_queue_lock);
+                split_queue_unlock(ds_queue);
                 if (mapping) {
                         int nr = thp_nr_pages(head);
@@ -2710,7 +2756,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
                 __split_huge_page(page, list, end);
                 ret = 0;
         } else {
-                spin_unlock(&ds_queue->split_queue_lock);
+                split_queue_unlock(ds_queue);
 fail:
                 if (mapping)
                         xa_unlock(&mapping->i_pages);
@@ -2733,24 +2779,22 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 void free_transhuge_page(struct page *page)
 {
-        struct deferred_split *ds_queue = get_deferred_split_queue(page);
+        struct deferred_split *ds_queue;
         unsigned long flags;
 
-        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+        ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
         if (!list_empty(page_deferred_list(page))) {
                 ds_queue->split_queue_len--;
                 list_del(page_deferred_list(page));
         }
-        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+        split_queue_unlock_irqrestore(ds_queue, flags);
         free_compound_page(page);
 }
 
 void deferred_split_huge_page(struct page *page)
 {
-        struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
-        struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+        struct deferred_split *ds_queue;
+        struct mem_cgroup *memcg;
         unsigned long flags;
 
         VM_BUG_ON_PAGE(!PageTransHuge(page), page);
@@ -2768,7 +2812,8 @@ void deferred_split_huge_page(struct page *page)
         if (PageSwapCache(page))
                 return;
 
-        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+        ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
+        memcg = split_queue_memcg(ds_queue);
         if (list_empty(page_deferred_list(page))) {
                 count_vm_event(THP_DEFERRED_SPLIT_PAGE);
                 list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
@@ -2779,7 +2824,7 @@ void deferred_split_huge_page(struct page *page)
                                                deferred_split_shrinker.id);
 #endif
         }
-        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+        split_queue_unlock_irqrestore(ds_queue, flags);
 }
 
 static unsigned long deferred_split_count(struct shrinker *shrink,

From patchwork Sat Aug 14 05:25:13 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12436499
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
    akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
    smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented
Date: Sat, 14 Aug 2021 13:25:13 +0800
Message-Id: <20210814052519.86679-7-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>
Similar to the lruvec lock, we use the same approach to make the split queue lock safe when LRU pages are reparented: take the lock under RCU, then re-check that the queue still belongs to the folio's memcg and retry if the binding was switched to the parent in the meantime.

Signed-off-by: Muchun Song
---
 mm/huge_memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c49ef28e48c1..22fbf2c74d49 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -535,9 +535,22 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock(&queue->split_queue_lock);
+	if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled inside spin_lock(), so this region also
+	 * serves as an RCU read-side critical section.
+	 */
+	rcu_read_unlock();
+
 	return queue;
 }
 
@@ -546,9 +559,19 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+	if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+
+	/* See the comments in folio_split_queue_lock(). */
+	rcu_read_unlock();
+
 	return queue;
 }
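The hunks above are an instance of a general lock-and-revalidate idiom: take the lock of the object the item currently points to, then confirm the item still points to it, because the binding may have been switched to the parent before the lock was acquired. A minimal userspace sketch of the same idiom, with hypothetical names, a mutex standing in for the spinlock, and the simplifying assumption that queues are never freed (in the kernel, RCU provides that guarantee):

	#include <pthread.h>
	#include <stdatomic.h>

	struct queue {
		pthread_mutex_t lock;
		/* ... the protected list would live here ... */
	};

	struct item {
		/* May be switched to the parent queue at any time. */
		_Atomic(struct queue *) owner;
	};

	/* Lock the queue that 'it' currently belongs to, retrying on races. */
	static struct queue *item_queue_lock(struct item *it)
	{
		struct queue *q;

		for (;;) {
			q = atomic_load(&it->owner);
			pthread_mutex_lock(&q->lock);
			/* Re-check: did the item move while we were locking? */
			if (q == atomic_load(&it->owner))
				return q;	/* binding is now stable */
			pthread_mutex_unlock(&q->lock);
		}
	}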
From patchwork Sat Aug 14 05:25:14 2021
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 07/12] mm: memcontrol: make all the callers of {folio,page}_memcg() safe
Date: Sat, 14 Aug 2021 13:25:14 +0800
Message-Id: <20210814052519.86679-8-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>
MIME-Version: 1.0
When we use the objcg APIs to charge the LRU pages, a page will no longer hold a reference to the memcg associated with it. So the callers of {folio,page}_memcg() should hold an rcu read lock or obtain a reference to the memcg to protect it from being released.

Introduce get_mem_cgroup_from_{page,folio}() to obtain a reference to the memory cgroup associated with a page. In this patch, make all the callers hold an rcu read lock or obtain a reference to the memcg, so the memcg cannot be released while the LRU pages are being reparented.

We do not need to adjust the callers of {folio,page}_memcg() during the whole process of mem_cgroup_move_task(), because cgroup migration and memory cgroup offlining are serialized by cgroup_mutex. In this routine, the LRU pages cannot be reparented to their parent memory cgroup, so the memcg returned by {folio,page}_memcg() is stable and cannot be released.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song
---
 fs/buffer.c                |  3 ++-
 fs/fs-writeback.c          | 23 +++++++++++----------
 include/linux/memcontrol.h | 49 ++++++++++++++++++++++++++++++++++++++++++---
 mm/memcontrol.c            | 50 ++++++++++++++++++++++++++++++++++++----------
 mm/migrate.c               |  4 ++++
 mm/page_io.c               |  5 +++--
 6 files changed, 106 insertions(+), 28 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index f5384cff7e0c..88123f84885a 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -823,7 +823,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		gfp |= __GFP_NOFAIL;
 
 	/* The page lock pins the memcg */
-	memcg = page_memcg(page);
+	memcg = get_mem_cgroup_from_page(page);
 	old_memcg = set_active_memcg(memcg);
 
 	head = NULL;
@@ -843,6 +843,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		set_bh_page(bh, page, offset);
 	}
 out:
+	mem_cgroup_put(memcg);
 	set_active_memcg(old_memcg);
 	return head;
 /*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 81ec192ce067..d9a67fffcc78 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
 	if (inode_cgwb_enabled(inode)) {
 		struct cgroup_subsys_state *memcg_css;
 
-		if (page) {
-			memcg_css = mem_cgroup_css_from_page(page);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-		} else {
-			/* must pin memcg_css, see wb_get_create() */
+		/* must pin memcg_css, see wb_get_create() */
+		if (page)
+			memcg_css = get_mem_cgroup_css_from_page(page);
+		else
 			memcg_css = task_get_css(current, memory_cgrp_id);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-			css_put(memcg_css);
-		}
+		wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
+		css_put(memcg_css);
 	}
 
 	if (!wb)
@@ -866,16 +864,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 	if (!wbc->wb || wbc->no_cgroup_owner)
 		return;
 
-	css = mem_cgroup_css_from_page(page);
+	css = get_mem_cgroup_css_from_page(page);
 	/* dead cgroups shouldn't contribute to inode ownership arbitration */
 	if (!(css->flags & CSS_ONLINE))
-		return;
+		goto out;
 
 	id = css->id;
 
 	if (id == wbc->wb_id) {
 		wbc->wb_bytes += bytes;
-		return;
+		goto out;
 	}
 
 	if (id == wbc->wb_lcand_id)
@@ -888,6 +886,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 		wbc->wb_tcand_bytes += bytes;
 	else
 		wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
+
+out:
+	css_put(css);
 }
 EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);
 
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 5a8f85bd9bbf..431fc606f6f9 100644 ---
a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -378,7 +378,7 @@ static inline bool folio_memcg_kmem(struct folio *folio); * a valid memcg, but can be atomically swapped to the parent memcg. * * The caller must ensure that the returned memcg won't be released: - * e.g. acquire the rcu_read_lock or css_set_lock. + * e.g. acquire the rcu_read_lock or css_set_lock or cgroup_mutex. */ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) { @@ -460,6 +460,36 @@ static inline struct mem_cgroup *page_memcg(struct page *page) } /* + * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup + * associated with a folio. + * @folio: Pointer to the folio. + * + * Returns a pointer to the memory cgroup (and obtain a reference on it) + * associated with the folio, or NULL. This function assumes that the + * folio is known to have a proper memory cgroup pointer. It's not safe + * to call this function against some type of pages, e.g. slab pages or + * ex-slab pages. + */ +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) +{ + struct mem_cgroup *memcg; + + rcu_read_lock(); +retry: + memcg = folio_memcg(folio); + if (unlikely(memcg && !css_tryget(&memcg->css))) + goto retry; + rcu_read_unlock(); + + return memcg; +} + +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return get_mem_cgroup_from_folio(page_folio(page)); +} + +/* * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio. * @folio: Pointer to the folio. * @@ -893,7 +923,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm, return match; } -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page); +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page); ino_t page_cgroup_ino(struct page *page); static inline bool mem_cgroup_online(struct mem_cgroup *memcg) @@ -1051,10 +1081,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg, static inline void count_memcg_page_event(struct page *page, enum vm_event_item idx) { - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = page_memcg(page); if (memcg) count_memcg_events(memcg, idx, 1); + rcu_read_unlock(); } static inline void count_memcg_event_mm(struct mm_struct *mm, @@ -1133,6 +1166,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) +{ + return NULL; +} + +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return NULL; +} + static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { WARN_ON_ONCE(!rcu_read_lock_held()); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 7955da38e385..0eca3cf6cede 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -424,7 +424,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif /** - * mem_cgroup_css_from_page - css of the memcg associated with a page + * get_mem_cgroup_css_from_page - get css of the memcg associated with a page * @page: page of interest * * If memcg is bound to the default hierarchy, css of the memcg associated @@ -434,13 +434,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup * is returned. 
*/ -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page) +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page) { struct mem_cgroup *memcg; - memcg = page_memcg(page); + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) + return &root_mem_cgroup->css; - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + memcg = get_mem_cgroup_from_page(page); + if (!memcg) memcg = root_mem_cgroup; return &memcg->css; @@ -1983,7 +1985,9 @@ void folio_memcg_lock(struct folio *folio) * The RCU lock is held throughout the transaction. The fast * path can get away without acquiring the memcg->move_lock * because page moving starts with an RCU grace period. - */ + * + * The RCU lock also protects the memcg from being freed. + */ rcu_read_lock(); if (mem_cgroup_disabled()) @@ -4553,7 +4557,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb) { - struct mem_cgroup *memcg = folio_memcg(folio); + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); struct memcg_cgwb_frn *frn; u64 now = get_jiffies_64(); u64 oldest_at = now; @@ -4600,6 +4604,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, frn->memcg_id = wb->memcg_css->id; frn->at = now; } + css_put(&memcg->css); } /* issue foreign writeback flushes for recorded foreign dirtying events */ @@ -6169,6 +6174,14 @@ static void mem_cgroup_move_charge(void) atomic_dec(&mc.from->moving_account); } +/* + * The cgroup migration and memory cgroup offlining are serialized by + * @cgroup_mutex. If we reach here, it means that the LRU pages cannot + * be reparented to its parent memory cgroup. So during the whole process + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not + * need to worry about the memcg (returned from page_memcg()) being + * released even if we do not hold an rcu read lock. + */ static void mem_cgroup_move_task(void) { if (mc.to) { @@ -6991,7 +7004,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) if (folio_memcg(new)) return; - memcg = folio_memcg(old); + memcg = get_mem_cgroup_from_folio(old); VM_WARN_ON_ONCE_FOLIO(!memcg, old); if (!memcg) return; @@ -7010,6 +7023,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); + + css_put(&memcg->css); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7198,6 +7213,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + /* + * Interrupts should be disabled by the caller (see the comments below), + * which can serve as RCU read-side critical sections. 
+ */ memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); @@ -7262,15 +7281,16 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) return 0; + rcu_read_lock(); memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); if (!memcg) - return 0; + goto out; if (!entry.val) { memcg_memory_event(memcg, MEMCG_SWAP_FAIL); - return 0; + goto out; } memcg = mem_cgroup_id_get_online(memcg); @@ -7280,6 +7300,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) memcg_memory_event(memcg, MEMCG_SWAP_MAX); memcg_memory_event(memcg, MEMCG_SWAP_FAIL); mem_cgroup_id_put(memcg); + rcu_read_unlock(); return -ENOMEM; } @@ -7289,6 +7310,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages); VM_BUG_ON_PAGE(oldid, page); mod_memcg_state(memcg, MEMCG_SWAP, nr_pages); +out: + rcu_read_unlock(); return 0; } @@ -7343,17 +7366,22 @@ bool mem_cgroup_swap_full(struct page *page) if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return false; + rcu_read_lock(); memcg = page_memcg(page); if (!memcg) - return false; + goto out; for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { unsigned long usage = page_counter_read(&memcg->swap); if (usage * 2 >= READ_ONCE(memcg->swap.high) || - usage * 2 >= READ_ONCE(memcg->swap.max)) + usage * 2 >= READ_ONCE(memcg->swap.max)) { + rcu_read_unlock(); return true; + } } +out: + rcu_read_unlock(); return false; } diff --git a/mm/migrate.c b/mm/migrate.c index 7a03a61bbcd8..cc7f5ecfd4a2 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -473,6 +473,10 @@ int folio_migrate_mapping(struct address_space *mapping, struct lruvec *old_lruvec, *new_lruvec; struct mem_cgroup *memcg; + /* + * Irq is disabled, which can serve as RCU read-side critical + * sections. 
+	 * sections.
+	 */
 	memcg = folio_memcg(folio);
 	old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 	new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);

diff --git a/mm/page_io.c b/mm/page_io.c
index d597bc6e6e45..63014c61932a 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -269,13 +269,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return;
+		goto out;
 
-	rcu_read_lock();
 	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
 	bio_associate_blkg_from_css(bio, css);
+out:
 	rcu_read_unlock();
 }
 #else
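The pattern behind get_mem_cgroup_from_folio(), added in the memcontrol.h hunk above, combines two races: the page-to-memcg binding may move to the parent, and the old memcg may already be dying so its reference count cannot be taken. A rough userspace analogue with hypothetical names, a plain atomic counter standing in for css_tryget(), and the assumption that memcg objects themselves are never freed (the role RCU plays in the kernel version):

	#include <stdatomic.h>
	#include <stdbool.h>

	struct memcg {
		_Atomic long refs;	/* drops to <= 0 once the memcg starts dying */
	};

	/* Analogue of css_tryget(): only succeed on a live object. */
	static bool memcg_tryget(struct memcg *m)
	{
		long old = atomic_load(&m->refs);

		while (old > 0)
			if (atomic_compare_exchange_weak(&m->refs, &old, old + 1))
				return true;
		return false;
	}

	/* Analogue of get_mem_cgroup_from_folio(): reload the binding and
	 * retry whenever the tryget loses the race against reparenting. */
	static struct memcg *memcg_get(_Atomic(struct memcg *) *binding)
	{
		struct memcg *m;

		do {
			m = atomic_load(binding);	/* folio_memcg() under RCU */
		} while (m && !memcg_tryget(m));

		return m;
	}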
From patchwork Sat Aug 14 05:25:15 2021
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 08/12] mm: memcontrol: introduce memcg_reparent_ops
Date: Sat, 14 Aug 2021 13:25:15 +0800
Message-Id: <20210814052519.86679-9-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>
MIME-Version: 1.0

From the previous patch, we know how to make the lruvec lock safe when LRU pages are reparented. We need to do something like the following:

  memcg_reparent_objcgs(memcg)
      1) lock
      // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
      spin_lock(&lruvec->lru_lock);
      spin_lock(&lruvec_parent->lru_lock);

      2) do reparent
      // Move all the pages from the lruvec list to the parent lruvec list.

      3) unlock
      spin_unlock(&lruvec_parent->lru_lock);
      spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP only) also needs to do something similar. So we extract the three necessary steps into memcg_reparent_objcgs():

  memcg_reparent_objcgs(memcg)
      1) lock
      memcg_reparent_ops->lock(memcg, parent);

      2) reparent
      memcg_reparent_ops->reparent(memcg, parent);

      3) unlock
      memcg_reparent_ops->unlock(memcg, parent);

Now there are two different locks (the lruvec lock and the deferred split queue lock) that need to use this infrastructure; a skeleton of the scheme follows.
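A compilable skeleton of this callback scheme, using hypothetical names (the real structure, memcg_reparent_ops, and its users appear in the diff below and in the following patches):

	#include <stddef.h>

	struct memcg;	/* opaque in this sketch */

	struct reparent_ops {
		/* irqs are disabled before the callbacks are invoked */
		void (*lock)(struct memcg *memcg, struct memcg *parent);
		void (*reparent)(struct memcg *memcg, struct memcg *parent);
		void (*unlock)(struct memcg *memcg, struct memcg *parent);
	};

	/* One entry per lock type: page lruvec, THP deferred split queue. */
	extern const struct reparent_ops lruvec_ops, split_queue_ops;

	static const struct reparent_ops *ops[] = { &lruvec_ops, &split_queue_ops };
	#define N_OPS (sizeof(ops) / sizeof(ops[0]))

	static void reparent_all(struct memcg *memcg, struct memcg *parent)
	{
		size_t i;

		for (i = 0; i < N_OPS; i++)	/* 1) take every lock */
			ops[i]->lock(memcg, parent);
		for (i = 0; i < N_OPS; i++)	/* 2) move the objects */
			ops[i]->reparent(memcg, parent);
		for (i = 0; i < N_OPS; i++)	/* 3) drop every lock */
			ops[i]->unlock(memcg, parent);
	}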
In the next patch, we will use those APIs to make those locks safe when the LRU pages are reparented.

Signed-off-by: Muchun Song
---
 include/linux/memcontrol.h |  7 +++++++
 mm/memcontrol.c            | 43 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 431fc606f6f9..d85c03f2d76d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -352,6 +352,13 @@ struct mem_cgroup {
 	struct mem_cgroup_per_node *nodeinfo[];
 };
 
+struct memcg_reparent_ops {
+	/* Irq is disabled before calling those callbacks. */
+	void (*lock)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+	void (*unlock)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+	void (*reparent)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+};
+
 /*
  * size of first charge trial. "32" comes from vmscan.c's magic value.
  * TODO: maybe necessary to use big numbers in big irons.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0eca3cf6cede..8cfb4221c36a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -344,6 +344,35 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
+static const struct memcg_reparent_ops *memcg_reparent_ops[] = {};
+
+static void memcg_reparent_lock(struct mem_cgroup *memcg,
+				struct mem_cgroup *parent)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)
+		memcg_reparent_ops[i]->lock(memcg, parent);
+}
+
+static void memcg_reparent_unlock(struct mem_cgroup *memcg,
+				  struct mem_cgroup *parent)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)
+		memcg_reparent_ops[i]->unlock(memcg, parent);
+}
+
+static void memcg_do_reparent(struct mem_cgroup *memcg,
+			      struct mem_cgroup *parent)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)
+		memcg_reparent_ops[i]->reparent(memcg, parent);
+}
+
 static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg, *iter;
@@ -353,9 +382,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	local_irq_disable();
+
+	memcg_reparent_lock(memcg, parent);
+
 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
-	spin_lock_irq(&css_set_lock);
+	spin_lock(&css_set_lock);
 
 	/* 1) Ready to reparent active objcg.
 	 */
 	list_add(&objcg->list, &memcg->objcg_list);
@@ -365,7 +398,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 	/* 3) Move already reparented objcgs to the parent's list */
 	list_splice(&memcg->objcg_list, &parent->objcg_list);
 
-	spin_unlock_irq(&css_set_lock);
+	spin_unlock(&css_set_lock);
+
+	memcg_do_reparent(memcg, parent);
+
+	memcg_reparent_unlock(memcg, parent);
+
+	local_irq_enable();
 
 	percpu_ref_kill(&objcg->refcnt);
 }

From patchwork Sat Aug 14 05:25:16 2021
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
Date: Sat, 14 Aug 2021 13:25:16 +0800
Message-Id: <20210814052519.86679-10-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>
MIME-Version: 1.0

We will reuse the obj_cgroup APIs to charge the LRU pages. Afterwards, page->memcg_data will have two different meanings:

  - For the slab pages, page->memcg_data points to an object cgroups
    vector.
  - For the kmem pages (excluding the slab pages) and the LRU pages,
    page->memcg_data points to an object cgroup.

In this patch, we reuse the obj_cgroup APIs to charge LRU pages. In the end, the page cache can no longer pin the original memory cgroup in memory via long-living objects.

At the same time, we also change the rules for the stability of the page-to-objcg and page-to-memcg bindings. The new rules are as follows.

For a page, any of the following ensures the stability of the page-to-objcg binding:

  - the page lock
  - LRU isolation
  - lock_page_memcg()
  - exclusive reference

Based on the stable binding of page and objcg, for a page, any of the following ensures the stability of the page-to-memcg binding:

  - css_set_lock
  - cgroup_mutex
  - the lruvec lock
  - the split queue lock (THP pages only)

If the caller only wants to ensure that the page counters of the memcg are updated correctly, the stability of the page-to-objcg binding is sufficient. A sketch of the resulting pointer chain follows.
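For illustration (not part of the patch itself): after this change every charged page reaches its memcg only through an objcg, so reparenting never has to touch the page again:

	/*
	 *        page->memcg_data            objcg->memcg
	 *  page ------------------> objcg ----------------> memcg
	 *
	 * The left pointer is stable under the rules listed above; the
	 * right pointer is the only thing memcg_reparent_objcgs() has to
	 * redirect to the parent when a memcg is offlined.
	 */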
Signed-off-by: Muchun Song Reported-by: kernel test robot --- include/linux/memcontrol.h | 94 ++++++-------- mm/huge_memory.c | 42 +++++++ mm/memcontrol.c | 296 ++++++++++++++++++++++++++++++++------------- 3 files changed, 293 insertions(+), 139 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d85c03f2d76d..92c98a952bab 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -378,8 +378,6 @@ enum page_memcg_data_flags { #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) -static inline bool folio_memcg_kmem(struct folio *folio); - /* * After the initialization objcg->memcg is always pointing at * a valid memcg, but can be atomically swapped to the parent memcg. @@ -393,43 +391,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) } /* - * __folio_memcg - Get the memory cgroup associated with a non-kmem folio - * @folio: Pointer to the folio. - * - * Returns a pointer to the memory cgroup associated with the folio, - * or NULL. This function assumes that the folio is known to have a - * proper memory cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * kmem folios. - */ -static inline struct mem_cgroup *__folio_memcg(struct folio *folio) -{ - unsigned long memcg_data = folio->memcg_data; - - VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); - - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); -} - -/* - * __folio_objcg - get the object cgroup associated with a kmem folio. + * folio_objcg - get the object cgroup associated with a folio. * @folio: Pointer to the folio. * * Returns a pointer to the object cgroup associated with the folio, * or NULL. This function assumes that the folio is known to have a - * proper object cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * LRU folios. + * proper object cgroup pointer. */ -static inline struct obj_cgroup *__folio_objcg(struct folio *folio) +static inline struct obj_cgroup *folio_objcg(struct folio *folio) { unsigned long memcg_data = folio->memcg_data; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); - VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } @@ -443,7 +417,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) * proper memory cgroup pointer. It's not safe to call this function * against some type of folios, e.g. slab folios or ex-slab folios. * - * For a non-kmem folio any of the following ensures folio and memcg binding + * For a folio any of the following ensures folio and memcg binding * stability: * * - the folio lock @@ -451,14 +425,28 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) * - lock_page_memcg() * - exclusive reference * - * For a kmem folio a caller should hold an rcu read lock to protect memcg - * associated with a kmem folio from being released. 
+ * Based on the stable binding of folio and objcg, for a folio any of the + * following ensures folio and memcg binding stability: + * + * - css_set_lock + * - cgroup_mutex + * - the lruvec lock + * - the split queue lock (only THP page) + * + * If the caller only want to ensure that the page counters of memcg are + * updated correctly, ensure that the binding stability of folio and objcg + * is sufficient. + * + * A caller should hold an rcu read lock (In addition, regions of code across + * which interrupts, preemption, or softirqs have been disabled also serve as + * RCU read-side critical sections) to protect memcg associated with a folio + * from being released. */ static inline struct mem_cgroup *folio_memcg(struct folio *folio) { - if (folio_memcg_kmem(folio)) - return obj_cgroup_memcg(__folio_objcg(folio)); - return __folio_memcg(folio); + struct obj_cgroup *objcg = folio_objcg(folio); + + return objcg ? obj_cgroup_memcg(objcg) : NULL; } static inline struct mem_cgroup *page_memcg(struct page *page) @@ -476,6 +464,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page) * folio is known to have a proper memory cgroup pointer. It's not safe * to call this function against some type of pages, e.g. slab pages or * ex-slab pages. + * + * The page and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) { @@ -506,22 +496,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) * * Return: A pointer to the memory cgroup associated with the folio, * or NULL. + * + * The folio and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { unsigned long memcg_data = READ_ONCE(folio->memcg_data); + struct obj_cgroup *objcg; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); WARN_ON_ONCE(!rcu_read_lock_held()); - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } /* @@ -534,16 +522,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) * has an associated memory cgroup pointer or an object cgroups vector or * an object cgroup. * - * For a non-kmem page any of the following ensures page and memcg binding - * stability: + * The page and objcg or memcg binding rules can refer to page_memcg(). * - * - the page lock - * - LRU isolation - * - lock_page_memcg() - * - exclusive reference - * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * A caller should hold an rcu read lock to protect memcg associated with a + * page from being released. */ static inline struct mem_cgroup *page_memcg_check(struct page *page) { @@ -552,18 +534,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) * for slab pages, READ_ONCE() should be used here. 
*/ unsigned long memcg_data = READ_ONCE(page->memcg_data); + struct obj_cgroup *objcg; if (memcg_data & MEMCG_DATA_OBJCGS) return NULL; - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } #ifdef CONFIG_MEMCG_KMEM diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 22fbf2c74d49..731e1b894407 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -499,6 +499,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG +static struct shrinker deferred_split_shrinker; + static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue) { if (mem_cgroup_disabled()) @@ -512,6 +514,46 @@ static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio return memcg ? &memcg->deferred_split_queue : NULL; } + +static void memcg_reparent_split_queue_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + spin_lock(&memcg->deferred_split_queue.split_queue_lock); + spin_lock(&parent->deferred_split_queue.split_queue_lock); +} + +static void memcg_reparent_split_queue_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + spin_unlock(&parent->deferred_split_queue.split_queue_lock); + spin_unlock(&memcg->deferred_split_queue.split_queue_lock); +} + +static void memcg_reparent_split_queue(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int nid; + struct deferred_split *src, *dst; + + src = &memcg->deferred_split_queue; + dst = &parent->deferred_split_queue; + + if (!src->split_queue_len) + return; + + list_splice_tail_init(&src->split_queue, &dst->split_queue); + dst->split_queue_len += src->split_queue_len; + src->split_queue_len = 0; + + for_each_node(nid) + set_shrinker_bit(parent, nid, deferred_split_shrinker.id); +} + +const struct memcg_reparent_ops split_queue_reparent_ops = { + .lock = memcg_reparent_split_queue_lock, + .unlock = memcg_reparent_split_queue_unlock, + .reparent = memcg_reparent_split_queue, +}; #else static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 8cfb4221c36a..52a878081452 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -75,6 +75,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly; EXPORT_SYMBOL(memory_cgrp_subsys); struct mem_cgroup *root_mem_cgroup __read_mostly; +static struct obj_cgroup *root_obj_cgroup __read_mostly; /* Active memory cgroup to use from an interrupt context */ DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg); @@ -263,6 +264,11 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr) extern spinlock_t css_set_lock; +static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg) +{ + return objcg == root_obj_cgroup; +} + bool mem_cgroup_kmem_disabled(void) { return cgroup_memory_nokmem; @@ -344,7 +350,81 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static const struct memcg_reparent_ops *memcg_reparent_ops[] = {}; +static void memcg_reparent_lruvec_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + spin_lock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock); + spin_lock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock); + } +} + +static void memcg_reparent_lruvec_unlock(struct mem_cgroup *memcg, + 
struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + spin_unlock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock); + spin_unlock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock); + } +} + +static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, + enum lru_list lru) +{ + int zid; + struct mem_cgroup_per_node *mz_src, *mz_dst; + + mz_src = container_of(src, struct mem_cgroup_per_node, lruvec); + mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec); + + list_splice_tail_init(&src->lists[lru], &dst->lists[lru]); + + for (zid = 0; zid < MAX_NR_ZONES; zid++) { + mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru]; + mz_src->lru_zone_size[zid][lru] = 0; + } +} + +static void memcg_reparent_lruvec(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + enum lru_list lru; + struct lruvec *src, *dst; + + src = mem_cgroup_lruvec(memcg, NODE_DATA(i)); + dst = mem_cgroup_lruvec(parent, NODE_DATA(i)); + + dst->anon_cost += src->anon_cost; + dst->file_cost += src->file_cost; + + for_each_lru(lru) + lruvec_reparent_lru(src, dst, lru); + } +} + +static const struct memcg_reparent_ops lruvec_reparent_ops = { + .lock = memcg_reparent_lruvec_lock, + .unlock = memcg_reparent_lruvec_unlock, + .reparent = memcg_reparent_lruvec, +}; + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +extern struct memcg_reparent_ops split_queue_reparent_ops; +#endif + +static const struct memcg_reparent_ops *memcg_reparent_ops[] = { + &lruvec_reparent_ops, +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + &split_queue_reparent_ops, +#endif +}; static void memcg_reparent_lock(struct mem_cgroup *memcg, struct mem_cgroup *parent) @@ -2799,18 +2879,18 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages) } #endif -static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) +static void commit_charge(struct folio *folio, struct obj_cgroup *objcg) { - VM_BUG_ON_FOLIO(folio_memcg(folio), folio); + VM_BUG_ON_FOLIO(folio_objcg(folio), folio); /* - * Any of the following ensures page's memcg stability: + * Any of the following ensures page's objcg stability: * * - the page lock * - LRU isolation * - lock_page_memcg() * - exclusive reference */ - folio->memcg_data = (unsigned long)memcg; + folio->memcg_data = (unsigned long)objcg; } static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) @@ -2827,6 +2907,21 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) return memcg; } +static struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgroup *memcg) +{ + struct obj_cgroup *objcg = NULL; + + rcu_read_lock(); + for (; memcg; memcg = parent_mem_cgroup(memcg)) { + objcg = rcu_dereference(memcg->objcg); + if (objcg && obj_cgroup_tryget(objcg)) + break; + } + rcu_read_unlock(); + + return objcg; +} + #ifdef CONFIG_MEMCG_KMEM /* * The allocated objcg pointers array is not accounted directly. 
@@ -2932,12 +3027,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) else memcg = mem_cgroup_from_task(current); - for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { - objcg = rcu_dereference(memcg->objcg); - if (objcg && obj_cgroup_tryget(objcg)) - break; + if (mem_cgroup_is_root(memcg)) + goto out; + + objcg = get_obj_cgroup_from_memcg(memcg); + if (obj_cgroup_is_root(objcg)) { + obj_cgroup_put(objcg); objcg = NULL; } +out: rcu_read_unlock(); return objcg; @@ -3081,13 +3179,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order) void __memcg_kmem_uncharge_page(struct page *page, int order) { struct folio *folio = page_folio(page); - struct obj_cgroup *objcg; + struct obj_cgroup *objcg = folio_objcg(folio); unsigned int nr_pages = 1 << order; - if (!folio_memcg_kmem(folio)) + if (!objcg) return; - objcg = __folio_objcg(folio); + VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio); obj_cgroup_uncharge_pages(objcg, nr_pages); folio->memcg_data = 0; obj_cgroup_put(objcg); @@ -3319,24 +3417,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) #endif /* CONFIG_MEMCG_KMEM */ /* - * Because page_memcg(head) is not set on tails, set it now. + * Because page_objcg(head) is not set on tails, set it now. */ void split_page_memcg(struct page *head, unsigned int nr) { struct folio *folio = page_folio(head); - struct mem_cgroup *memcg = folio_memcg(folio); + struct obj_cgroup *objcg = folio_objcg(folio); int i; - if (mem_cgroup_disabled() || !memcg) + if (mem_cgroup_disabled() || !objcg) return; for (i = 1; i < nr; i++) folio_page(folio, i)->memcg_data = folio->memcg_data; - if (folio_memcg_kmem(folio)) - obj_cgroup_get_many(__folio_objcg(folio), nr - 1); - else - css_get_many(&memcg->css, nr - 1); + obj_cgroup_get_many(objcg, nr - 1); } #ifdef CONFIG_MEMCG_SWAP @@ -5306,6 +5401,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) objcg->memcg = memcg; rcu_assign_pointer(memcg->objcg, objcg); + if (unlikely(mem_cgroup_is_root(memcg))) + root_obj_cgroup = objcg; + /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); css_get(css); @@ -5735,10 +5833,10 @@ static int mem_cgroup_move_account(struct page *page, */ smp_mb(); - css_get(&to->css); - css_put(&from->css); + obj_cgroup_get(to->objcg); + obj_cgroup_put(from->objcg); - folio->memcg_data = (unsigned long)to; + folio->memcg_data = (unsigned long)to->objcg; __folio_memcg_unlock(from); @@ -6211,6 +6309,42 @@ static void mem_cgroup_move_charge(void) mmap_read_unlock(mc.mm); atomic_dec(&mc.from->moving_account); + + /* + * Moving its pages to another memcg is finished. Wait for already + * started RCU-only updates to finish to make sure that the caller + * of lock_page_memcg() can unlock the correct move_lock. The + * possible bad scenario would like: + * + * CPU0: CPU1: + * mem_cgroup_move_charge() + * walk_page_range() + * + * lock_page_memcg(page) + * memcg = folio_memcg() + * spin_lock_irqsave(&memcg->move_lock) + * memcg->move_lock_task = current + * + * atomic_dec(&mc.from->moving_account) + * + * mem_cgroup_css_offline() + * memcg_offline_kmem() + * memcg_reparent_objcgs() <== reparented + * + * unlock_page_memcg(page) + * memcg = folio_memcg() <== memcg has been changed + * if (memcg->move_lock_task == current) <== false + * spin_unlock_irqrestore(&memcg->move_lock) + * + * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex + * would be released soon), the page can be reparented to its parent + * memcg. 
When the unlock_page_memcg() is called for the page, we will + * miss unlock the move_lock. So using synchronize_rcu to wait for + * already started RCU-only updates to finish before this function + * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are + * serialized by cgroup_mutex). + */ + synchronize_rcu(); } /* @@ -6768,21 +6902,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg, gfp_t gfp) { + struct obj_cgroup *objcg; long nr_pages = folio_nr_pages(folio); - int ret; + int ret = 0; - ret = try_charge(memcg, gfp, nr_pages); + objcg = get_obj_cgroup_from_memcg(memcg); + /* Do not account at the root objcg level. */ + if (!obj_cgroup_is_root(objcg)) + ret = try_charge(memcg, gfp, nr_pages); if (ret) goto out; - css_get(&memcg->css); - commit_charge(folio, memcg); + obj_cgroup_get(objcg); + commit_charge(folio, objcg); local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(folio)); local_irq_enable(); out: + obj_cgroup_put(objcg); return ret; } @@ -6882,7 +7021,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry) } struct uncharge_gather { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; @@ -6897,84 +7036,73 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug) static void uncharge_batch(const struct uncharge_gather *ug) { unsigned long flags; + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = obj_cgroup_memcg(ug->objcg); if (ug->nr_memory) { - page_counter_uncharge(&ug->memcg->memory, ug->nr_memory); + page_counter_uncharge(&memcg->memory, ug->nr_memory); if (do_memsw_account()) - page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory); + page_counter_uncharge(&memcg->memsw, ug->nr_memory); if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem) - page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem); - memcg_oom_recover(ug->memcg); + page_counter_uncharge(&memcg->kmem, ug->nr_kmem); + memcg_oom_recover(memcg); } local_irq_save(flags); - __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); - __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory); - memcg_check_events(ug->memcg, ug->nid); + __count_memcg_events(memcg, PGPGOUT, ug->pgpgout); + __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory); + memcg_check_events(memcg, ug->nid); local_irq_restore(flags); + rcu_read_unlock(); /* drop reference from uncharge_folio */ - css_put(&ug->memcg->css); + css_put(&memcg->css); + obj_cgroup_put(ug->objcg); } static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) { long nr_pages; - struct mem_cgroup *memcg; struct obj_cgroup *objcg; - bool use_objcg = folio_memcg_kmem(folio); VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); /* * Nobody should be changing or seriously looking at - * folio memcg or objcg at this point, we have fully - * exclusive access to the folio. + * folio objcg at this point, we have fully exclusive + * access to the folio. */ - if (use_objcg) { - objcg = __folio_objcg(folio); - /* - * This get matches the put at the end of the function and - * kmem pages do not hold memcg references anymore. 
- */ - memcg = get_mem_cgroup_from_objcg(objcg); - } else { - memcg = __folio_memcg(folio); - } - - if (!memcg) + objcg = folio_objcg(folio); + if (!objcg) return; - if (ug->memcg != memcg) { - if (ug->memcg) { + if (ug->objcg != objcg) { + if (ug->objcg) { uncharge_batch(ug); uncharge_gather_clear(ug); } - ug->memcg = memcg; + ug->objcg = objcg; ug->nid = folio_nid(folio); - /* pairs with css_put in uncharge_batch */ - css_get(&memcg->css); + /* pairs with obj_cgroup_put in uncharge_batch */ + obj_cgroup_get(objcg); } nr_pages = folio_nr_pages(folio); - if (use_objcg) { + if (folio_memcg_kmem(folio)) { ug->nr_memory += nr_pages; ug->nr_kmem += nr_pages; - - folio->memcg_data = 0; - obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) ug->nr_memory += nr_pages; ug->pgpgout++; - - folio->memcg_data = 0; } - css_put(&memcg->css); + folio->memcg_data = 0; + obj_cgroup_put(objcg); } /** @@ -6988,7 +7116,7 @@ void __mem_cgroup_uncharge(struct folio *folio) struct uncharge_gather ug; /* Don't touch folio->lru of any random page, pre-check: */ - if (!folio_memcg(folio)) + if (!folio_objcg(folio)) return; uncharge_gather_clear(&ug); @@ -7011,7 +7139,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list) uncharge_gather_clear(&ug); list_for_each_entry(folio, page_list, lru) uncharge_folio(folio, &ug); - if (ug.memcg) + if (ug.objcg) uncharge_batch(&ug); } @@ -7028,6 +7156,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list) void mem_cgroup_migrate(struct folio *old, struct folio *new) { struct mem_cgroup *memcg; + struct obj_cgroup *objcg; long nr_pages = folio_nr_pages(new); unsigned long flags; @@ -7040,30 +7169,33 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) return; /* Page cache replacement: new folio already charged? */ - if (folio_memcg(new)) + if (folio_objcg(new)) return; - memcg = get_mem_cgroup_from_folio(old); - VM_WARN_ON_ONCE_FOLIO(!memcg, old); - if (!memcg) + objcg = folio_objcg(old); + VM_WARN_ON_ONCE_FOLIO(!objcg, old); + if (!objcg) return; + rcu_read_lock(); + memcg = obj_cgroup_memcg(objcg); + /* Force-charge the new page. The old one will be freed soon */ - if (!mem_cgroup_is_root(memcg)) { + if (!obj_cgroup_is_root(objcg)) { page_counter_charge(&memcg->memory, nr_pages); if (do_memsw_account()) page_counter_charge(&memcg->memsw, nr_pages); } - css_get(&memcg->css); - commit_charge(new, memcg); + obj_cgroup_get(objcg); + commit_charge(new, objcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); - css_put(&memcg->css); + rcu_read_unlock(); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7240,6 +7372,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg) void mem_cgroup_swapout(struct page *page, swp_entry_t entry) { struct mem_cgroup *memcg, *swap_memcg; + struct obj_cgroup *objcg; unsigned int nr_entries; unsigned short oldid; @@ -7252,15 +7385,16 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + objcg = folio_objcg(page_folio(page)); + VM_WARN_ON_ONCE_PAGE(!objcg, page); + if (!objcg) + return; + /* * Interrupts should be disabled by the caller (see the comments below), * which can serve as RCU read-side critical sections. 
 */
-	memcg = page_memcg(page);
-
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
-	if (!memcg)
-		return;
+	memcg = obj_cgroup_memcg(objcg);

	/*
	 * In case the memcg owning these pages has been offlined and doesn't
@@ -7279,7 +7413,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)

	page->memcg_data = 0;

-	if (!mem_cgroup_is_root(memcg))
+	if (!obj_cgroup_is_root(objcg))
		page_counter_uncharge(&memcg->memory, nr_entries);

	if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7298,7 +7432,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
	mem_cgroup_charge_statistics(memcg, -nr_entries);
	memcg_check_events(memcg, page_to_nid(page));

-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }

 /**

From patchwork Sat Aug 14 05:25:17 2021
From: Muchun Song
Subject: [PATCH v1 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
Date: Sat, 14 Aug 2021 13:25:17 +0800
Message-Id: <20210814052519.86679-11-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

Now lock_page_memcg() no longer locks the binding between a page and
its memcg; it actually locks the binding between a page and its objcg.
So rename lock_page_memcg() to lock_page_objcg(). This is just a code
cleanup without any functional change.
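For illustration only, a minimal sketch of the renamed API in a
dirty-page path, mirroring the fs/buffer.c callers converted below
(example_set_page_dirty() is a made-up helper, not part of this patch):

	/*
	 * Hypothetical caller: lock_page_objcg() pins the page<->objcg
	 * binding so that the dirty accounting done under it cannot
	 * race with the page's cgroup changing underneath us.
	 */
	static int example_set_page_dirty(struct page *page,
					  struct address_space *mapping)
	{
		int newly_dirty;

		lock_page_objcg(page);
		newly_dirty = !TestSetPageDirty(page);
		if (newly_dirty)
			__set_page_dirty(page, mapping, 0);
		unlock_page_objcg(page);

		return newly_dirty;
	}

The locking pattern is unchanged by this patch; only the names now
reflect that it is the objcg binding which stays stable under the lock.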
Signed-off-by: Muchun Song --- Documentation/admin-guide/cgroup-v1/memory.rst | 2 +- fs/buffer.c | 8 ++++---- include/linux/memcontrol.h | 14 +++++++------- mm/filemap.c | 2 +- mm/huge_memory.c | 4 ++-- mm/memcontrol.c | 24 ++++++++++++------------ mm/page-writeback.c | 6 +++--- mm/rmap.c | 14 +++++++------- 8 files changed, 37 insertions(+), 37 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst index 41191b5fb69d..dd582312b91a 100644 --- a/Documentation/admin-guide/cgroup-v1/memory.rst +++ b/Documentation/admin-guide/cgroup-v1/memory.rst @@ -291,7 +291,7 @@ Lock order is as follows: Page lock (PG_locked bit of page->flags) mm->page_table_lock or split pte_lock - lock_page_memcg (memcg->move_lock) + lock_page_objcg (memcg->move_lock) mapping->i_pages lock lruvec->lru_lock. diff --git a/fs/buffer.c b/fs/buffer.c index 88123f84885a..c3d20ebb1d0b 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -635,14 +635,14 @@ int __set_page_dirty_buffers(struct page *page) * Lock out page's memcg migration to keep PageDirty * synchronized with per-memcg dirty page counters. */ - lock_page_memcg(page); + lock_page_objcg(page); newly_dirty = !TestSetPageDirty(page); spin_unlock(&mapping->private_lock); if (newly_dirty) __set_page_dirty(page, mapping, 1); - unlock_page_memcg(page); + unlock_page_objcg(page); if (newly_dirty) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); @@ -1139,13 +1139,13 @@ void mark_buffer_dirty(struct buffer_head *bh) struct page *page = bh->b_page; struct address_space *mapping = NULL; - lock_page_memcg(page); + lock_page_objcg(page); if (!TestSetPageDirty(page)) { mapping = page_mapping(page); if (mapping) __set_page_dirty(page, mapping, 0); } - unlock_page_memcg(page); + unlock_page_objcg(page); if (mapping) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); } diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 92c98a952bab..611e6a6d7b00 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -417,12 +417,12 @@ static inline struct obj_cgroup *folio_objcg(struct folio *folio) * proper memory cgroup pointer. It's not safe to call this function * against some type of folios, e.g. slab folios or ex-slab folios. 
 *
- * For a folio any of the following ensures folio and memcg binding
- * stability:
+ * For a folio any of the following ensures folio and objcg binding
+ * stability (but the folio can be reparented to its parent memcg):
 *
 * - the folio lock
 * - LRU isolation
- * - lock_page_memcg()
+ * - lock_page_objcg()
 * - exclusive reference
 *
 * Based on the stable binding of folio and objcg, for a folio any of the
@@ -970,8 +970,8 @@ extern bool cgroup_memory_noswap;
 void folio_memcg_lock(struct folio *folio);
 void folio_memcg_unlock(struct folio *folio);
-void lock_page_memcg(struct page *page);
-void unlock_page_memcg(struct page *page);
+void lock_page_objcg(struct page *page);
+void unlock_page_objcg(struct page *page);

 void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);

@@ -1388,11 +1388,11 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
 {
 }

-static inline void lock_page_memcg(struct page *page)
+static inline void lock_page_objcg(struct page *page)
 {
 }

-static inline void unlock_page_memcg(struct page *page)
+static inline void unlock_page_objcg(struct page *page)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 53913fced7ae..b5298fd17d14 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,7 +112,7 @@
 *    ->i_pages lock		(page_remove_rmap->set_page_dirty)
 *    bdi.wb->list_lock		(page_remove_rmap->set_page_dirty)
 *    ->inode->i_lock		(page_remove_rmap->set_page_dirty)
- *    ->memcg->move_lock	(page_remove_rmap->lock_page_memcg)
+ *    ->memcg->move_lock	(page_remove_rmap->lock_page_objcg)
 *    bdi.wb->list_lock		(zap_pte_range->set_page_dirty)
 *    ->inode->i_lock		(zap_pte_range->set_page_dirty)
 *    ->private_lock		(zap_pte_range->__set_page_dirty_buffers)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 731e1b894407..eb3c07c00d2c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2227,7 +2227,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
			atomic_inc(&page[i]._mapcount);
	}

-	lock_page_memcg(page);
+	lock_page_objcg(page);
	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
		/* Last compound_mapcount is gone. */
		__mod_lruvec_page_state(page, NR_ANON_THPS,
@@ -2238,7 +2238,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
				atomic_dec(&page[i]._mapcount);
		}
	}
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);

	smp_wmb(); /* make pte visible before pmd */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 52a878081452..9464e6d2d735 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2135,18 +2135,18 @@ void folio_memcg_lock(struct folio *folio)
	 * When charge migration first begins, we can have multiple
	 * critical sections holding the fast-path RCU lock and one
	 * holding the slowpath move_lock. Track the task who has the
-	 * move_lock for unlock_page_memcg().
+	 * move_lock for unlock_page_objcg().
 */
	memcg->move_lock_task = current;
	memcg->move_lock_flags = flags;
 }
 EXPORT_SYMBOL(folio_memcg_lock);

-void lock_page_memcg(struct page *page)
+void lock_page_objcg(struct page *page)
 {
	folio_memcg_lock(page_folio(page));
 }
-EXPORT_SYMBOL(lock_page_memcg);
+EXPORT_SYMBOL(lock_page_objcg);

 static void __folio_memcg_unlock(struct mem_cgroup *memcg)
 {
@@ -2176,11 +2176,11 @@ void folio_memcg_unlock(struct folio *folio)
 }
 EXPORT_SYMBOL(folio_memcg_unlock);

-void unlock_page_memcg(struct page *page)
+void unlock_page_objcg(struct page *page)
 {
	folio_memcg_unlock(page_folio(page));
 }
-EXPORT_SYMBOL(unlock_page_memcg);
+EXPORT_SYMBOL(unlock_page_objcg);

 struct obj_stock {
 #ifdef CONFIG_MEMCG_KMEM
@@ -2887,7 +2887,7 @@ static void commit_charge(struct folio *folio, struct obj_cgroup *objcg)
	 *
	 * - the page lock
	 * - LRU isolation
-	 * - lock_page_memcg()
+	 * - lock_page_objcg()
	 * - exclusive reference
	 */
	folio->memcg_data = (unsigned long)objcg;
@@ -5826,7 +5826,7 @@ static int mem_cgroup_move_account(struct page *page,
	 * with (un)charging, migration, LRU putback, or anything else
	 * that would rely on a stable page's memory cgroup.
	 *
-	 * Note that lock_page_memcg is a memcg lock, not a page lock,
+	 * Note that lock_page_objcg is a memcg lock, not a page lock,
	 * to save space. As soon as we switch page's memory cgroup to a
	 * new memcg that isn't locked, the above state can change
	 * concurrently again. Make sure we're truly done with it.
@@ -6281,7 +6281,7 @@ static void mem_cgroup_move_charge(void)
 {
	lru_add_drain_all();
	/*
-	 * Signal lock_page_memcg() to take the memcg's move_lock
+	 * Signal lock_page_objcg() to take the memcg's move_lock
	 * while we're moving its pages to another memcg. Then wait
	 * for already started RCU-only updates to finish.
	 */
@@ -6313,14 +6313,14 @@ static void mem_cgroup_move_charge(void)
	/*
	 * Moving its pages to another memcg is finished. Wait for already
	 * started RCU-only updates to finish to make sure that the caller
-	 * of lock_page_memcg() can unlock the correct move_lock. The
+	 * of lock_page_objcg() can unlock the correct move_lock. The
	 * possible bad scenario would look like:
	 *
	 * CPU0:                CPU1:
	 * mem_cgroup_move_charge()
	 *     walk_page_range()
	 *
-	 *                      lock_page_memcg(page)
+	 *                      lock_page_objcg(page)
	 *                      memcg = folio_memcg()
	 *                      spin_lock_irqsave(&memcg->move_lock)
	 *                      memcg->move_lock_task = current
@@ -6331,14 +6331,14 @@ static void mem_cgroup_move_charge(void)
	 * memcg_offline_kmem()
	 *     memcg_reparent_objcgs() <== reparented
	 *
-	 *                      unlock_page_memcg(page)
+	 *                      unlock_page_objcg(page)
	 *                      memcg = folio_memcg() <== memcg has been changed
	 *                      if (memcg->move_lock_task == current) <== false
	 *                      spin_unlock_irqrestore(&memcg->move_lock)
	 *
	 * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex
	 * would be released soon), the page can be reparented to its parent
-	 * memcg. When the unlock_page_memcg() is called for the page, we will
+	 * memcg. When the unlock_page_objcg() is called for the page, we will
	 * fail to unlock the move_lock. So use synchronize_rcu() to wait for
	 * already started RCU-only updates to finish before this function
	 * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 5b171aa067c1..d8fd7308dd39 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2434,7 +2434,7 @@ EXPORT_SYMBOL(__set_page_dirty_no_writeback);
 /*
 * Helper function for set_page_dirty family.
 *
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
* * NOTE: This relies on being atomic wrt interrupts. */ @@ -2468,7 +2468,7 @@ static void folio_account_dirtied(struct folio *folio, /* * Helper function for deaccounting dirty page without writeback. * - * Caller must hold lock_page_memcg(). + * Caller must hold lock_page_objcg(). */ void folio_account_cleaned(struct folio *folio, struct address_space *mapping, struct bdi_writeback *wb) @@ -2489,7 +2489,7 @@ void folio_account_cleaned(struct folio *folio, struct address_space *mapping, * If warn is true, then emit a warning if the folio is not uptodate and has * not been truncated. * - * The caller must hold lock_page_memcg(). + * The caller must hold lock_page_objcg(). */ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping, int warn) diff --git a/mm/rmap.c b/mm/rmap.c index 09c41e1f44d8..232494888628 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -32,7 +32,7 @@ * swap_lock (in swap_duplicate, swap_info_get) * mmlist_lock (in mmput, drain_mmlist and others) * mapping->private_lock (in __set_page_dirty_buffers) - * lock_page_memcg move_lock (in __set_page_dirty_buffers) + * lock_page_objcg move_lock (in __set_page_dirty_buffers) * i_pages lock (widely used) * lruvec->lru_lock (in folio_lruvec_lock_irq) * inode->i_lock (in set_page_dirty's __mark_inode_dirty) @@ -1125,7 +1125,7 @@ void do_page_add_anon_rmap(struct page *page, bool first; if (unlikely(PageKsm(page))) - lock_page_memcg(page); + lock_page_objcg(page); else VM_BUG_ON_PAGE(!PageLocked(page), page); @@ -1153,7 +1153,7 @@ void do_page_add_anon_rmap(struct page *page, } if (unlikely(PageKsm(page))) { - unlock_page_memcg(page); + unlock_page_objcg(page); return; } @@ -1213,7 +1213,7 @@ void page_add_file_rmap(struct page *page, bool compound) int i, nr = 1; VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page); - lock_page_memcg(page); + lock_page_objcg(page); if (compound && PageTransHuge(page)) { int nr_pages = thp_nr_pages(page); @@ -1242,7 +1242,7 @@ void page_add_file_rmap(struct page *page, bool compound) } __mod_lruvec_page_state(page, NR_FILE_MAPPED, nr); out: - unlock_page_memcg(page); + unlock_page_objcg(page); } static void page_remove_file_rmap(struct page *page, bool compound) @@ -1343,7 +1343,7 @@ static void page_remove_anon_compound_rmap(struct page *page) */ void page_remove_rmap(struct page *page, bool compound) { - lock_page_memcg(page); + lock_page_objcg(page); if (!PageAnon(page)) { page_remove_file_rmap(page, compound); @@ -1382,7 +1382,7 @@ void page_remove_rmap(struct page *page, bool compound) * faster for those pages still in swapcache. 
 */
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }

 /*

From patchwork Sat Aug 14 05:25:18 2021
From: Muchun Song
Subject: [PATCH v1 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
Date: Sat, 14 Aug 2021 13:25:18 +0800
Message-Id: <20210814052519.86679-12-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>

We need to make sure that the page is deleted from or added to the
correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
users, as sketched below.
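For illustration only, a sketch of the invariant the new assertions
enforce: the lruvec handed to the lru list helpers must be the one the
folio actually belongs to, i.e. folio_matches_lruvec() must hold
(example_rotate_folio() is a made-up caller, not part of this patch):

	/*
	 * Hypothetical caller: folio_lruvec_lock_irq() returns the
	 * lruvec the folio currently belongs to, so the new
	 * VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio)
	 * in the helpers is satisfied. Passing any other lruvec would
	 * trigger it when CONFIG_DEBUG_VM is enabled.
	 */
	static void example_rotate_folio(struct folio *folio)
	{
		struct lruvec *lruvec = folio_lruvec_lock_irq(folio);

		lruvec_del_folio(lruvec, folio);
		lruvec_add_folio_tail(lruvec, folio);
		unlock_page_lruvec_irq(lruvec);
	}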
Signed-off-by: Muchun Song
---
 include/linux/mm_inline.h | 15 ++++++++++++---
 mm/vmscan.c               |  1 -
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index e2ec68b0515c..60eb827a41fe 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -103,7 +103,10 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void add_page_to_lru_list(struct page *page,
				struct lruvec *lruvec)
 {
-	lruvec_add_folio(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_add_folio(lruvec, folio);
 }

 static __always_inline
@@ -119,7 +122,10 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
				struct lruvec *lruvec)
 {
-	lruvec_add_folio_tail(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_add_folio_tail(lruvec, folio);
 }

 static __always_inline
@@ -133,6 +139,9 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void del_page_from_lru_list(struct page *page,
				struct lruvec *lruvec)
 {
-	lruvec_del_folio(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_del_folio(lruvec, folio);
 }
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8ce42858ad5d..902d36ec91a3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2204,7 +2204,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
			continue;
		}

-		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
		add_page_to_lru_list(page, lruvec);
		nr_pages = thp_nr_pages(page);
		nr_moved += nr_pages;

From patchwork Sat Aug 14 05:25:19 2021
From: Muchun Song
Subject: [PATCH v1 12/12] mm: lru: use lruvec lock to serialize memcg changes
Date: Sat, 14 Aug 2021 13:25:19 +0800
Message-Id: <20210814052519.86679-13-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>
References: <20210814052519.86679-1-songmuchun@bytedance.com>
As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now folio_lruvec_lock*() can detect whether the page memcg has been
changed, so we can use the lruvec lock to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn() instead. This
change is a partial revert of commit fc574c23558c ("mm/swap.c:
serialize memcg changes in pagevec_lru_move_fn"). Since
pagevec_lru_move_fn() is hotter than mem_cgroup_move_account(),
removing an atomic operation from it is an optimization. This change
also avoids dirtying the cacheline of a page that is not on the LRU.

Signed-off-by: Muchun Song
---
 mm/memcontrol.c | 31 +++++++++++++++++++++++++++++++
 mm/swap.c       | 45 ++++++++++++++-------------------------------
 mm/vmscan.c     |  9 ++++-----
 3 files changed, 49 insertions(+), 36 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9464e6d2d735..7732ccf7d180 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1286,12 +1286,38 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);

+	/*
+	 * The memcg of the page can be changed by any of the following routines:
+	 *
+	 * 1) mem_cgroup_move_account() or
+	 * 2) memcg_reparent_objcgs()
+	 *
+	 * The possible bad scenario would look like:
+	 *
+	 * CPU0:                CPU1:                CPU2:
+	 * lruvec = folio_lruvec()
+	 *
+	 *                      if (!isolate_lru_page())
+	 *                              mem_cgroup_move_account()
+	 *
+	 *                                           memcg_reparent_objcgs()
+	 *
+	 * spin_lock(&lruvec->lru_lock)
+	 *                ^^^^^^
+	 *              wrong lock
+	 *
+	 * Either CPU1 or CPU2 can change the page memcg, so we need to
+	 * check whether the page memcg is changed; if so, we should
+	 * reacquire the new lruvec lock.
+	 */
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}

	/*
+	 * When we reach here, it means that folio_memcg(folio) is stable.
+	 *
	 * Preemption is disabled in the internal of spin_lock, which can serve
	 * as RCU read-side critical sections.
	 */
@@ -1309,6 +1335,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
	lruvec = folio_lruvec(folio);
	spin_lock_irq(&lruvec->lru_lock);

+	/* See the comments in folio_lruvec_lock(). */
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		spin_unlock_irq(&lruvec->lru_lock);
		goto retry;
@@ -1330,6 +1357,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
	lruvec = folio_lruvec(folio);
	spin_lock_irqsave(&lruvec->lru_lock, *flags);

+	/* See the comments in folio_lruvec_lock(). */
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
		goto retry;
@@ -5836,7 +5864,10 @@ static int mem_cgroup_move_account(struct page *page,
	obj_cgroup_get(to->objcg);
	obj_cgroup_put(from->objcg);

+	/* See the comments in folio_lruvec_lock().
*/ + spin_lock(&from_vec->lru_lock); folio->memcg_data = (unsigned long)to->objcg; + spin_unlock(&from_vec->lru_lock); __folio_memcg_unlock(from); diff --git a/mm/swap.c b/mm/swap.c index 9554ff008fe6..00b6776860e8 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -191,14 +191,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, struct page *page = pvec->pages[i]; struct folio *folio = page_folio(page); - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) - continue; - lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); (*move_fn)(page, lruvec); - - SetPageLRU(page); } if (lruvec) unlock_page_lruvec_irqrestore(lruvec, flags); @@ -210,7 +204,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec) { struct folio *folio = page_folio(page); - if (!folio_test_unevictable(folio)) { + if (folio_test_lru(folio) && !folio_test_unevictable(folio)) { lruvec_del_folio(lruvec, folio); folio_clear_active(folio); lruvec_add_folio_tail(lruvec, folio); @@ -305,7 +299,8 @@ void lru_note_cost_folio(struct folio *folio) static void __folio_activate(struct folio *folio, struct lruvec *lruvec) { - if (!folio_test_active(folio) && !folio_test_unevictable(folio)) { + if (folio_test_lru(folio) && !folio_test_active(folio) && + !folio_test_unevictable(folio)) { long nr_pages = folio_nr_pages(folio); lruvec_del_folio(lruvec, folio); @@ -362,12 +357,9 @@ static void folio_activate(struct folio *folio) { struct lruvec *lruvec; - if (folio_test_clear_lru(folio)) { - lruvec = folio_lruvec_lock_irq(folio); - __folio_activate(folio, lruvec); - unlock_page_lruvec_irq(lruvec); - folio_set_lru(folio); - } + lruvec = folio_lruvec_lock_irq(folio); + __folio_activate(folio, lruvec); + unlock_page_lruvec_irq(lruvec); } #endif @@ -520,6 +512,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) bool active = PageActive(page); int nr_pages = thp_nr_pages(page); + if (!PageLRU(page)) + return; + if (PageUnevictable(page)) return; @@ -557,7 +552,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) { - if (PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -573,7 +568,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec) { - if (PageAnon(page) && PageSwapBacked(page) && + if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); @@ -983,8 +978,9 @@ void __pagevec_release(struct pagevec *pvec) } EXPORT_SYMBOL(__pagevec_release); -static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec) +static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) { + struct folio *folio = page_folio(page); int was_unevictable = folio_test_clear_unevictable(folio); long nr_pages = folio_nr_pages(folio); @@ -1040,20 +1036,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec) */ void __pagevec_lru_add(struct pagevec *pvec) { - int i; - struct lruvec *lruvec = NULL; - unsigned long flags = 0; - - for (i = 0; i < pagevec_count(pvec); i++) { - struct folio *folio = page_folio(pvec->pages[i]); - - lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); - 
__pagevec_lru_add_fn(folio, lruvec); - } - if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); - release_pages(pvec->pages, pvec->nr); - pagevec_reinit(pvec); + pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn); } /** diff --git a/mm/vmscan.c b/mm/vmscan.c index 902d36ec91a3..7cff2f748df8 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4667,18 +4667,17 @@ void check_move_unevictable_pages(struct pagevec *pvec) nr_pages = thp_nr_pages(page); pgscanned += nr_pages; - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) + lruvec = folio_lruvec_relock_irq(folio, lruvec); + + if (!PageLRU(page) || !PageUnevictable(page)) continue; - lruvec = folio_lruvec_relock_irq(folio, lruvec); - if (page_evictable(page) && PageUnevictable(page)) { + if (page_evictable(page)) { del_page_from_lru_list(page, lruvec); ClearPageUnevictable(page); add_page_to_lru_list(page, lruvec); pgrescued += nr_pages; } - SetPageLRU(page); } if (lruvec) {