From patchwork Thu May 27 09:33:25 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283851
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
Date: Thu, 27 May 2021 17:33:25 +0800
Message-Id: <20210527093336.14895-2-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

Pagecache pages are charged at allocation time and hold a reference to the
original memory cgroup until they are reclaimed. Depending on the memory
pressure, the patterns of page sharing between cgroups, and the cgroup
creation and destruction rates, a large number of dying memory cgroups can
be pinned by pagecache pages. This makes page reclaim less efficient and
wastes memory.

We can fix this problem by converting LRU pages and most other raw memcg
pins to the objcg direction, after which page->memcg always points to an
object cgroup. The objcg infrastructure then no longer serves only
CONFIG_MEMCG_KMEM. This patch moves the objcg infrastructure out of the
scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for charging.

LRU pages are not accounted at the root level, yet their page->memcg_data
points to the root_mem_cgroup, so page->memcg_data of an LRU page always
holds a valid pointer. However, the root_mem_cgroup does not have an object
cgroup. If we use the obj_cgroup APIs to charge LRU pages, we have to set
page->memcg_data to a root object cgroup, so we also allocate an object
cgroup for the root_mem_cgroup.
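As an illustration of where this is heading (this sketch is not part of the
patch; the helper name and the exact memcg_data layout details are
assumptions made only for illustration), once every memcg, including
root_mem_cgroup, owns an objcg, a later patch can make page->memcg_data
store an objcg pointer and resolve the memcg through it, so that
reparenting only has to update objcg->memcg:

        /*
         * Illustrative sketch only, not part of this patch.  The helper name
         * is made up; it assumes page->memcg_data holds an obj_cgroup pointer
         * with the usual flag bits in the low bits.
         */
        static inline struct mem_cgroup *page_memcg_from_objcg(struct page *page)
        {
                struct obj_cgroup *objcg;

                objcg = (struct obj_cgroup *)(page->memcg_data & ~MEMCG_DATA_FLAGS_MASK);

                /* objcg->memcg is only stable under rcu_read_lock() or the LRU lock */
                return obj_cgroup_memcg(objcg);
        }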
Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 4 ++- mm/memcontrol.c | 66 +++++++++++++++++++++++++++++----------------- 2 files changed, 45 insertions(+), 25 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 3cc18c2176e7..0159e1191a86 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -223,7 +223,9 @@ struct memcg_cgwb_frn { struct obj_cgroup { struct percpu_ref refcnt; struct mem_cgroup *memcg; +#ifdef CONFIG_MEMCG_KMEM atomic_t nr_charged_bytes; +#endif union { struct list_head list; struct rcu_head rcu; @@ -321,9 +323,9 @@ struct mem_cgroup { #ifdef CONFIG_MEMCG_KMEM int kmemcg_id; enum memcg_kmem_state kmem_state; +#endif struct obj_cgroup __rcu *objcg; struct list_head objcg_list; /* list of inherited objcgs */ -#endif MEMCG_PADDING(_pad2_); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 70a7faa733b3..66f6ad1cc8e4 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -252,18 +252,16 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr) return &container_of(vmpr, struct mem_cgroup, vmpressure)->css; } -#ifdef CONFIG_MEMCG_KMEM extern spinlock_t css_set_lock; +#ifdef CONFIG_MEMCG_KMEM static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg, unsigned int nr_pages); -static void obj_cgroup_release(struct percpu_ref *ref) +static void obj_cgroup_release_kmem(struct obj_cgroup *objcg) { - struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt); unsigned int nr_bytes; unsigned int nr_pages; - unsigned long flags; /* * At this point all allocated objects are freed, and @@ -277,9 +275,9 @@ static void obj_cgroup_release(struct percpu_ref *ref) * 3) CPU1: a process from another memcg is allocating something, * the stock if flushed, * objcg->nr_charged_bytes = PAGE_SIZE - 92 - * 5) CPU0: we do release this object, + * 4) CPU0: we do release this object, * 92 bytes are added to stock->nr_bytes - * 6) CPU0: stock is flushed, + * 5) CPU0: stock is flushed, * 92 bytes are added to objcg->nr_charged_bytes * * In the result, nr_charged_bytes == PAGE_SIZE. @@ -291,6 +289,19 @@ static void obj_cgroup_release(struct percpu_ref *ref) if (nr_pages) obj_cgroup_uncharge_pages(objcg, nr_pages); +} +#else +static inline void obj_cgroup_release_kmem(struct obj_cgroup *objcg) +{ +} +#endif + +static void obj_cgroup_release(struct percpu_ref *ref) +{ + struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt); + unsigned long flags; + + obj_cgroup_release_kmem(objcg); spin_lock_irqsave(&css_set_lock, flags); list_del(&objcg->list); @@ -319,10 +330,14 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static void memcg_reparent_objcgs(struct mem_cgroup *memcg, - struct mem_cgroup *parent) +static void memcg_reparent_objcgs(struct mem_cgroup *memcg) { struct obj_cgroup *objcg, *iter; + struct mem_cgroup *parent; + + parent = parent_mem_cgroup(memcg); + if (!parent) + parent = root_mem_cgroup; objcg = rcu_replace_pointer(memcg->objcg, NULL, true); @@ -341,6 +356,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg, percpu_ref_kill(&objcg->refcnt); } +#ifdef CONFIG_MEMCG_KMEM /* * This will be used as a shrinker list's index. 
* The main reason for not using cgroup id for this: @@ -3623,7 +3639,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css, #ifdef CONFIG_MEMCG_KMEM static int memcg_online_kmem(struct mem_cgroup *memcg) { - struct obj_cgroup *objcg; int memcg_id; if (cgroup_memory_nokmem) @@ -3636,14 +3651,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) if (memcg_id < 0) return memcg_id; - objcg = obj_cgroup_alloc(); - if (!objcg) { - memcg_free_cache_id(memcg_id); - return -ENOMEM; - } - objcg->memcg = memcg; - rcu_assign_pointer(memcg->objcg, objcg); - static_branch_enable(&memcg_kmem_enabled_key); memcg->kmemcg_id = memcg_id; @@ -3667,8 +3674,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) if (!parent) parent = root_mem_cgroup; - memcg_reparent_objcgs(memcg, parent); - kmemcg_id = memcg->kmemcg_id; BUG_ON(kmemcg_id < 0); @@ -5212,8 +5217,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void) memcg->socket_pressure = jiffies; #ifdef CONFIG_MEMCG_KMEM memcg->kmemcg_id = -1; - INIT_LIST_HEAD(&memcg->objcg_list); #endif + INIT_LIST_HEAD(&memcg->objcg_list); #ifdef CONFIG_CGROUP_WRITEBACK INIT_LIST_HEAD(&memcg->cgwb_list); for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++) @@ -5285,21 +5290,33 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) static int mem_cgroup_css_online(struct cgroup_subsys_state *css) { struct mem_cgroup *memcg = mem_cgroup_from_css(css); + struct obj_cgroup *objcg; /* * A memcg must be visible for expand_shrinker_info() * by the time the maps are allocated. So, we allocate maps * here, when for_each_mem_cgroup() can't skip it. */ - if (alloc_shrinker_info(memcg)) { - mem_cgroup_id_remove(memcg); - return -ENOMEM; - } + if (alloc_shrinker_info(memcg)) + goto remove_id; + + objcg = obj_cgroup_alloc(); + if (!objcg) + goto free_shrinker; + + objcg->memcg = memcg; + rcu_assign_pointer(memcg->objcg, objcg); /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); css_get(css); return 0; + +free_shrinker: + free_shrinker_info(memcg); +remove_id: + mem_cgroup_id_remove(memcg); + return -ENOMEM; } static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) @@ -5323,6 +5340,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_low(&memcg->memory, 0); memcg_offline_kmem(memcg); + memcg_reparent_objcgs(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg);

From patchwork Thu May 27 09:33:26 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283853
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 02/12] mm: memcontrol: introduce compact_lock_page_irqsave
Date: Thu, 27 May 2021 17:33:26 +0800
Message-Id: <20210527093336.14895-3-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

If we reuse the objcg APIs to charge LRU pages, page_memcg() can change
while the LRU pages are being reparented. In that case, we need to acquire
the new lruvec lock.

        lruvec = mem_cgroup_page_lruvec(page);

        // The page is reparented here.

        compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

        // We acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter, so it
cannot notice this change. If it took the page as a parameter instead, it
could use page_memcg() after taking the lock to detect whether the memcg
changed and the new lruvec lock needs to be reacquired. So
compact_lock_irqsave() is not suitable for us. Similar to
lock_page_lruvec_irqsave(), introduce compact_lock_page_irqsave() to
acquire the lruvec lock in the compaction routine.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
--- mm/compaction.c | 27 ++++++++++++++++++++++++--- 1 file changed, 24 insertions(+), 3 deletions(-) diff --git a/mm/compaction.c b/mm/compaction.c index 7d41b58fb17c..851fd8f62695 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -511,6 +511,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags, return true; } +static struct lruvec *compact_lock_page_irqsave(struct page *page, + unsigned long *flags, + struct compact_control *cc) +{ + struct lruvec *lruvec; + + lruvec = mem_cgroup_page_lruvec(page); + + /* Track if the lock is contended in async mode */ + if (cc->mode == MIGRATE_ASYNC && !cc->contended) { + if (spin_trylock_irqsave(&lruvec->lru_lock, *flags)) + goto out; + + cc->contended = true; + } + + spin_lock_irqsave(&lruvec->lru_lock, *flags); +out: + lruvec_memcg_debug(lruvec, page); + + return lruvec; +} + /* * Compaction requires the taking of some coarse locks that are potentially * very heavily contended.
The lock should be periodically unlocked to avoid @@ -1035,11 +1058,9 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (locked) unlock_page_lruvec_irqrestore(locked, flags); - compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); + lruvec = compact_lock_page_irqsave(page, &flags, cc); locked = lruvec; - lruvec_memcg_debug(lruvec, page); - /* Try get exclusive access under lock */ if (!skip_updated) { skip_updated = true;

From patchwork Thu May 27 09:33:27 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283855
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 03/12] mm: memcontrol: make lruvec lock safe when the LRU pages reparented
Date: Thu, 27 May 2021 17:33:27 +0800
Message-Id: <20210527093336.14895-4-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

The diagram below shows how to make the page lruvec lock safe when the LRU
pages are reparented. The two sides run concurrently: one CPU takes the
lruvec lock for a page while another reparents the pages of a memcg.

    lock_page_lruvec(page)
        retry:
        lruvec = mem_cgroup_page_lruvec(page);

        // The page is reparented at this time.

        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != page_memcg(page)))
                // Acquired the wrong lruvec lock and need to retry.
                // Because this page is on the parent memcg lruvec list.
                goto retry;

        // If we reach here, it means that page_memcg(page) is stable.

    memcg_reparent_objcgs(memcg)
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        // Move all the pages from the lruvec list to the parent lruvec list.

        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the page has
been reparented. If so, we need to reacquire the new lruvec lock. The LRU
page reparenting path will also acquire the lruvec lock (implemented in a
later patch), so page_memcg() cannot change while we hold the lruvec lock.
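For readability, the pattern this patch adds to lock_page_lruvec{,_irq,_irqsave}()
can be condensed into the sketch below (illustrative only; the function name
is made up and the actual hunks are in the diff that follows):

        static struct lruvec *lock_page_lruvec_sketch(struct page *page)
        {
                struct lruvec *lruvec;

                /* Keep the (possibly old) memcg and its lruvec alive across the lookup. */
                rcu_read_lock();
        retry:
                lruvec = mem_cgroup_page_lruvec(page);
                spin_lock(&lruvec->lru_lock);
                if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
                        /* The page was reparented before we got the lock; retry. */
                        spin_unlock(&lruvec->lru_lock);
                        goto retry;
                }
                /*
                 * Holding lru_lock blocks further reparenting of this page, so
                 * page_memcg(page) is now stable and RCU is no longer needed.
                 */
                rcu_read_unlock();
                return lruvec;
        }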
Since lruvec_memcg(lruvec) is always equal to page_memcg(page) after we hold the lruvec lock, lruvec_memcg_debug() check is pointless. So remove it. This is a preparation for reparenting the LRU pages. Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- include/linux/memcontrol.h | 16 +++------------ mm/compaction.c | 10 +++++++++- mm/memcontrol.c | 50 +++++++++++++++++++++++++++------------------- mm/swap.c | 5 +++++ 4 files changed, 47 insertions(+), 34 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 0159e1191a86..961ca3126d7f 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -745,7 +745,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page * @page: the page * - * This function relies on page->mem_cgroup being stable. + * The lruvec can be changed to its parent lruvec when the page reparented. + * The caller need to recheck if it cares about this change (just like + * lock_page_lruvec() does). */ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page) { @@ -765,14 +767,6 @@ struct lruvec *lock_page_lruvec_irq(struct page *page); struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags); -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page); -#else -static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) -{ -} -#endif - static inline struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){ return css ? container_of(css, struct mem_cgroup, css) : NULL; @@ -1212,10 +1206,6 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page) return &pgdat->__lruvec; } -static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) -{ -} - static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg) { return NULL; diff --git a/mm/compaction.c b/mm/compaction.c index 851fd8f62695..225e06c63ec1 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -517,6 +517,8 @@ static struct lruvec *compact_lock_page_irqsave(struct page *page, { struct lruvec *lruvec; + rcu_read_lock(); +retry: lruvec = mem_cgroup_page_lruvec(page); /* Track if the lock is contended in async mode */ @@ -529,7 +531,13 @@ static struct lruvec *compact_lock_page_irqsave(struct page *page, spin_lock_irqsave(&lruvec->lru_lock, *flags); out: - lruvec_memcg_debug(lruvec, page); + if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + + /* See the comments in lock_page_lruvec(). */ + rcu_read_unlock(); return lruvec; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 66f6ad1cc8e4..06c6bcaa265a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1178,23 +1178,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg, return ret; } -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) -{ - struct mem_cgroup *memcg; - - if (mem_cgroup_disabled()) - return; - - memcg = page_memcg(page); - - if (!memcg) - VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page); - else - VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != memcg, page); -} -#endif - /** * lock_page_lruvec - lock and return lruvec for a given page. 
* @page: the page @@ -1209,10 +1192,21 @@ struct lruvec *lock_page_lruvec(struct page *page) { struct lruvec *lruvec; + rcu_read_lock(); +retry: lruvec = mem_cgroup_page_lruvec(page); spin_lock(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, page); + if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) { + spin_unlock(&lruvec->lru_lock); + goto retry; + } + + /* + * Preemption is disabled in the internal of spin_lock, which can serve + * as RCU read-side critical sections. + */ + rcu_read_unlock(); return lruvec; } @@ -1221,10 +1215,18 @@ struct lruvec *lock_page_lruvec_irq(struct page *page) { struct lruvec *lruvec; + rcu_read_lock(); +retry: lruvec = mem_cgroup_page_lruvec(page); spin_lock_irq(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, page); + if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) { + spin_unlock_irq(&lruvec->lru_lock); + goto retry; + } + + /* See the comments in lock_page_lruvec(). */ + rcu_read_unlock(); return lruvec; } @@ -1233,10 +1235,18 @@ struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags) { struct lruvec *lruvec; + rcu_read_lock(); +retry: lruvec = mem_cgroup_page_lruvec(page); spin_lock_irqsave(&lruvec->lru_lock, *flags); - lruvec_memcg_debug(lruvec, page); + if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + + /* See the comments in lock_page_lruvec(). */ + rcu_read_unlock(); return lruvec; } diff --git a/mm/swap.c b/mm/swap.c index 1958d5feb148..6260e0e11268 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -313,6 +313,11 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) void lru_note_cost_page(struct page *page) { + /* + * The rcu read lock is held by the caller, so we do not need to + * care about the lruvec returned by mem_cgroup_page_lruvec() being + * released. 
+ */ lru_note_cost(mem_cgroup_page_lruvec(page), page_is_file_lru(page), thp_nr_pages(page)); }

From patchwork Thu May 27 09:33:28 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283857
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 04/12] mm: vmscan: rework move_pages_to_lru()
Date: Thu, 27 May 2021 17:33:28 +0800
Message-Id: <20210527093336.14895-5-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

In a later patch, we will reparent the LRU pages. The pages that are moved
to the appropriate LRU list can be reparented during the execution of
move_pages_to_lru(), so it is wrong for the caller to hold a lruvec lock.
Instead, we should use the more general interface relock_page_lruvec_irq()
to acquire the correct lruvec lock.

Signed-off-by: Muchun Song
--- mm/vmscan.c | 46 +++++++++++++++++++++++----------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 5a15748f9faf..731a8f5a4128 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2013,23 +2013,27 @@ static int too_many_isolated(struct pglist_data *pgdat, int file, * move_pages_to_lru() moves pages from private @list to appropriate LRU list. * On return, @list is reused as a list of pages to be freed by the caller. * - * Returns the number of pages moved to the given lruvec. + * Returns the number of pages moved to the appropriate LRU list. + * + * Note: The caller must not hold any lruvec lock.
*/ -static unsigned int move_pages_to_lru(struct lruvec *lruvec, - struct list_head *list) +static unsigned int move_pages_to_lru(struct list_head *list) { - int nr_pages, nr_moved = 0; + int nr_moved = 0; + struct lruvec *lruvec = NULL; LIST_HEAD(pages_to_free); - struct page *page; while (!list_empty(list)) { - page = lru_to_page(list); + int nr_pages; + struct page *page = lru_to_page(list); + + lruvec = relock_page_lruvec_irq(page, lruvec); VM_BUG_ON_PAGE(PageLRU(page), page); list_del(&page->lru); if (unlikely(!page_evictable(page))) { - spin_unlock_irq(&lruvec->lru_lock); + unlock_page_lruvec_irq(lruvec); putback_lru_page(page); - spin_lock_irq(&lruvec->lru_lock); + lruvec = NULL; continue; } @@ -2050,19 +2054,15 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, __clear_page_lru_flags(page); if (unlikely(PageCompound(page))) { - spin_unlock_irq(&lruvec->lru_lock); + unlock_page_lruvec_irq(lruvec); destroy_compound_page(page); - spin_lock_irq(&lruvec->lru_lock); + lruvec = NULL; } else list_add(&page->lru, &pages_to_free); continue; } - /* - * All pages were isolated from the same lruvec (and isolation - * inhibits memcg migration). - */ VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page); add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); @@ -2071,6 +2071,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, workingset_age_nonresident(lruvec, nr_pages); } + if (lruvec) + unlock_page_lruvec_irq(lruvec); /* * To save our caller's stack, now use input list for pages to free. */ @@ -2144,16 +2146,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec, nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false); - spin_lock_irq(&lruvec->lru_lock); - move_pages_to_lru(lruvec, &page_list); + move_pages_to_lru(&page_list); + local_irq_disable(); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed); __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed); - spin_unlock_irq(&lruvec->lru_lock); + local_irq_enable(); lru_note_cost(lruvec, file, stat.nr_pageout); mem_cgroup_uncharge_list(&page_list); @@ -2280,18 +2282,16 @@ static void shrink_active_list(unsigned long nr_to_scan, /* * Move pages back to the lru list. 
*/ - spin_lock_irq(&lruvec->lru_lock); - - nr_activate = move_pages_to_lru(lruvec, &l_active); - nr_deactivate = move_pages_to_lru(lruvec, &l_inactive); + nr_activate = move_pages_to_lru(&l_active); + nr_deactivate = move_pages_to_lru(&l_inactive); /* Keep all free pages in l_active list */ list_splice(&l_inactive, &l_active); + local_irq_disable(); __count_vm_events(PGDEACTIVATE, nr_deactivate); __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate); - __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - spin_unlock_irq(&lruvec->lru_lock); + local_irq_enable(); mem_cgroup_uncharge_list(&l_active); free_unref_page_list(&l_active);

From patchwork Thu May 27 09:33:29 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283859
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 05/12] mm: thp: introduce lock/unlock_split_queue{_irqsave}()
Date: Thu, 27 May 2021 17:33:29 +0800
Message-Id: <20210527093336.14895-6-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

We should make the THP deferred split queue lock safe when the LRU pages
are reparented. Similar to lock_page_lruvec{_irqsave, _irq}(), we introduce
lock/unlock_split_queue{_irqsave}() so that the deferred split queue lock
is easier to reparent. In the next patch, we can use a similar approach
(just like the lruvec lock) to make the THP deferred split queue lock safe
when the LRU pages are reparented.
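To illustrate the new calling convention (this is condensed from the
free_transhuge_page() hunk in the diff below, with a made-up function name;
it is not additional code in this patch), callers stop looking up the queue
themselves and let the lock helper pick and return it:

        void sketch_free_transhuge_page(struct page *page)
        {
                struct deferred_split *ds_queue;
                unsigned long flags;

                /* The helper chooses the memcg or node queue and takes its lock. */
                ds_queue = lock_split_queue_irqsave(page, &flags);
                if (!list_empty(page_deferred_list(page))) {
                        ds_queue->split_queue_len--;
                        list_del(page_deferred_list(page));
                }
                unlock_split_queue_irqrestore(ds_queue, flags);
                free_compound_page(page);
        }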
Signed-off-by: Muchun Song --- mm/huge_memory.c | 96 +++++++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 74 insertions(+), 22 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 233474770424..d8590408abbb 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -496,25 +496,76 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG -static inline struct deferred_split *get_deferred_split_queue(struct page *page) +static inline struct mem_cgroup *split_queue_to_memcg(struct deferred_split *queue) { - struct mem_cgroup *memcg = page_memcg(compound_head(page)); - struct pglist_data *pgdat = NODE_DATA(page_to_nid(page)); + return container_of(queue, struct mem_cgroup, deferred_split_queue); +} + +static struct deferred_split *lock_split_queue(struct page *page) +{ + struct deferred_split *queue; + struct mem_cgroup *memcg; + + memcg = page_memcg(compound_head(page)); + if (memcg) + queue = &memcg->deferred_split_queue; + else + queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue; + spin_lock(&queue->split_queue_lock); + + return queue; +} +static struct deferred_split *lock_split_queue_irqsave(struct page *page, + unsigned long *flags) +{ + struct deferred_split *queue; + struct mem_cgroup *memcg; + + memcg = page_memcg(compound_head(page)); if (memcg) - return &memcg->deferred_split_queue; + queue = &memcg->deferred_split_queue; else - return &pgdat->deferred_split_queue; + queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue; + spin_lock_irqsave(&queue->split_queue_lock, *flags); + + return queue; } #else -static inline struct deferred_split *get_deferred_split_queue(struct page *page) +static struct deferred_split *lock_split_queue(struct page *page) +{ + struct deferred_split *queue; + + queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue; + spin_lock(&queue->split_queue_lock); + + return queue; +} + +static struct deferred_split *lock_split_queue_irqsave(struct page *page, + unsigned long *flags) + { - struct pglist_data *pgdat = NODE_DATA(page_to_nid(page)); + struct deferred_split *queue; + + queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue; + spin_lock_irqsave(&queue->split_queue_lock, *flags); - return &pgdat->deferred_split_queue; + return queue; } #endif +static inline void unlock_split_queue(struct deferred_split *queue) +{ + spin_unlock(&queue->split_queue_lock); +} + +static inline void unlock_split_queue_irqrestore(struct deferred_split *queue, + unsigned long flags) +{ + spin_unlock_irqrestore(&queue->split_queue_lock, flags); +} + void prep_transhuge_page(struct page *page) { /* @@ -2610,7 +2661,7 @@ bool can_split_huge_page(struct page *page, int *pextra_pins) int split_huge_page_to_list(struct page *page, struct list_head *list) { struct page *head = compound_head(page); - struct deferred_split *ds_queue = get_deferred_split_queue(head); + struct deferred_split *ds_queue; struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; int mapcount, extra_pins, ret; @@ -2689,14 +2740,14 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) } /* Prevent deferred_split_scan() touching ->_refcount */ - spin_lock(&ds_queue->split_queue_lock); + ds_queue = lock_split_queue(head); mapcount = total_mapcount(head); if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) { if (!list_empty(page_deferred_list(head))) { ds_queue->split_queue_len--; list_del(page_deferred_list(head)); } - spin_unlock(&ds_queue->split_queue_lock); + unlock_split_queue(ds_queue); 
if (mapping) { int nr = thp_nr_pages(head); @@ -2711,7 +2762,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) __split_huge_page(page, list, end); ret = 0; } else { - spin_unlock(&ds_queue->split_queue_lock); + unlock_split_queue(ds_queue); fail: if (mapping) xa_unlock(&mapping->i_pages); local_irq_enable(); @@ -2733,24 +2784,21 @@ fail: if (mapping) void free_transhuge_page(struct page *page) { - struct deferred_split *ds_queue = get_deferred_split_queue(page); + struct deferred_split *ds_queue; unsigned long flags; - spin_lock_irqsave(&ds_queue->split_queue_lock, flags); + ds_queue = lock_split_queue_irqsave(page, &flags); if (!list_empty(page_deferred_list(page))) { ds_queue->split_queue_len--; list_del(page_deferred_list(page)); } - spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); + unlock_split_queue_irqrestore(ds_queue, flags); free_compound_page(page); } void deferred_split_huge_page(struct page *page) { - struct deferred_split *ds_queue = get_deferred_split_queue(page); -#ifdef CONFIG_MEMCG - struct mem_cgroup *memcg = page_memcg(compound_head(page)); -#endif + struct deferred_split *ds_queue; unsigned long flags; VM_BUG_ON_PAGE(!PageTransHuge(page), page); @@ -2768,18 +2816,22 @@ void deferred_split_huge_page(struct page *page) if (PageSwapCache(page)) return; - spin_lock_irqsave(&ds_queue->split_queue_lock, flags); + ds_queue = lock_split_queue_irqsave(page, &flags); if (list_empty(page_deferred_list(page))) { count_vm_event(THP_DEFERRED_SPLIT_PAGE); list_add_tail(page_deferred_list(page), &ds_queue->split_queue); ds_queue->split_queue_len++; #ifdef CONFIG_MEMCG - if (memcg) + if (page_memcg(page)) { + struct mem_cgroup *memcg; + + memcg = split_queue_to_memcg(ds_queue); set_shrinker_bit(memcg, page_to_nid(page), deferred_split_shrinker.id); + } #endif } - spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); + unlock_split_queue_irqrestore(ds_queue, flags); } static unsigned long deferred_split_count(struct shrinker *shrink,

From patchwork Thu May 27 09:33:30 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283861
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 06/12] mm: thp: make deferred split queue lock safe when the LRU pages reparented
Date: Thu, 27 May 2021 17:33:30 +0800
Message-Id: <20210527093336.14895-7-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>
Similar to the lruvec lock, we use the same approach to make the deferred split queue lock safe when the LRU pages are reparented.

Signed-off-by: Muchun Song
---
 mm/huge_memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d8590408abbb..8f0761563d46 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -506,6 +506,8 @@ static struct deferred_split *lock_split_queue(struct page *page)
 	struct deferred_split *queue;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+retry:
 	memcg = page_memcg(compound_head(page));
 	if (memcg)
 		queue = &memcg->deferred_split_queue;
@@ -513,6 +515,17 @@ static struct deferred_split *lock_split_queue(struct page *page)
 		queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
 
 	spin_lock(&queue->split_queue_lock);
+	if (unlikely(memcg != page_memcg(page))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled under spin_lock, which can serve as an
+	 * RCU read-side critical section.
+	 */
+	rcu_read_unlock();
+
 	return queue;
 }
 
@@ -522,6 +535,8 @@ static struct deferred_split *lock_split_queue_irqsave(struct page *page,
 	struct deferred_split *queue;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+retry:
 	memcg = page_memcg(compound_head(page));
 	if (memcg)
 		queue = &memcg->deferred_split_queue;
@@ -529,6 +544,14 @@ static struct deferred_split *lock_split_queue_irqsave(struct page *page,
 		queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
 
 	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+	if (unlikely(memcg != page_memcg(page))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+
+	/* See the comments in lock_split_queue(). */
+	rcu_read_unlock();
+
 	return queue;
 }
 
 #else

From patchwork Thu May 27 09:33:31 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283863
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 07/12] mm: memcontrol: make all the callers of page_memcg() safe
Date: Thu, 27 May 2021 17:33:31 +0800
Message-Id: <20210527093336.14895-8-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

When we use the objcg APIs to charge the LRU pages, the page will not hold a reference to the memcg associated with the page. So the callers of page_memcg() should hold an RCU read lock or obtain a reference to the memcg associated with the page to protect the memcg from being released. Introduce get_mem_cgroup_from_page() to obtain such a reference to the memory cgroup associated with the page.

In this patch, make all the callers hold an RCU read lock or obtain a reference to the memcg to protect the memcg from being released when the LRU pages are reparented.

We do not need to adjust the callers of page_memcg() during the whole process of mem_cgroup_move_task(), because the cgroup migration and memory cgroup offlining are serialized by @cgroup_mutex. In this routine, the LRU pages cannot be reparented to their parent memory cgroup, so page_memcg(page) is stable and cannot be released.

This is a preparation for reparenting the LRU pages.
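
For illustration only (these helpers are not part of the patch), the two calling conventions described above might look like the sketch below; page_memcg(), get_mem_cgroup_from_page(), count_memcg_events() and mem_cgroup_put() are the interfaces used by this series, while the surrounding functions are hypothetical.

	/* Hypothetical caller: a short, non-sleeping read under RCU. */
	static void page_stat_example(struct page *page)
	{
		struct mem_cgroup *memcg;

		rcu_read_lock();	/* pins the memcg returned by page_memcg() */
		memcg = page_memcg(page);
		if (memcg)
			count_memcg_events(memcg, PGPGIN, 1);
		rcu_read_unlock();
	}

	/* Hypothetical caller: the memcg is used across a section that may sleep. */
	static void page_ref_example(struct page *page)
	{
		struct mem_cgroup *memcg = get_mem_cgroup_from_page(page);

		if (memcg) {
			/* ... may sleep; the reference keeps the memcg alive ... */
		}
		mem_cgroup_put(memcg);	/* mem_cgroup_put() tolerates NULL */
	}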
Signed-off-by: Muchun Song --- fs/buffer.c | 3 ++- fs/fs-writeback.c | 23 +++++++++++---------- include/linux/memcontrol.h | 39 ++++++++++++++++++++++++++++++++--- mm/memcontrol.c | 51 ++++++++++++++++++++++++++++++++++++---------- mm/migrate.c | 4 ++++ mm/page_io.c | 5 +++-- 6 files changed, 97 insertions(+), 28 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 673cfbef9eec..a542a47f6e27 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -848,7 +848,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, gfp |= __GFP_NOFAIL; /* The page lock pins the memcg */ - memcg = page_memcg(page); + memcg = get_mem_cgroup_from_page(page); old_memcg = set_active_memcg(memcg); head = NULL; @@ -868,6 +868,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, set_bh_page(bh, page, offset); } out: + mem_cgroup_put(memcg); set_active_memcg(old_memcg); return head; /* diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 7c46d1588a19..52b7d4563528 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -255,15 +255,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page) if (inode_cgwb_enabled(inode)) { struct cgroup_subsys_state *memcg_css; - if (page) { - memcg_css = mem_cgroup_css_from_page(page); - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); - } else { - /* must pin memcg_css, see wb_get_create() */ + /* must pin memcg_css, see wb_get_create() */ + if (page) + memcg_css = get_mem_cgroup_css_from_page(page); + else memcg_css = task_get_css(current, memory_cgrp_id); - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); - css_put(memcg_css); - } + wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); + css_put(memcg_css); } if (!wb) @@ -736,16 +734,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page, if (!wbc->wb || wbc->no_cgroup_owner) return; - css = mem_cgroup_css_from_page(page); + css = get_mem_cgroup_css_from_page(page); /* dead cgroups shouldn't contribute to inode ownership arbitration */ if (!(css->flags & CSS_ONLINE)) - return; + goto out; id = css->id; if (id == wbc->wb_id) { wbc->wb_bytes += bytes; - return; + goto out; } if (id == wbc->wb_lcand_id) @@ -758,6 +756,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page, wbc->wb_tcand_bytes += bytes; else wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes); + +out: + css_put(css); } EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner); diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 961ca3126d7f..99f745e23607 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -380,7 +380,7 @@ static inline bool PageMemcgKmem(struct page *page); * a valid memcg, but can be atomically swapped to the parent memcg. * * The caller must ensure that the returned memcg won't be released: - * e.g. acquire the rcu_read_lock or css_set_lock. + * e.g. acquire the rcu_read_lock or css_set_lock or cgroup_mutex. */ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) { @@ -458,6 +458,31 @@ static inline struct mem_cgroup *page_memcg(struct page *page) } /* + * get_mem_cgroup_from_page - Obtain a reference on the memory cgroup associated + * with a page + * @page: a pointer to the page struct + * + * Returns a pointer to the memory cgroup (and obtain a reference on it) + * associated with the page, or NULL. This function assumes that the page + * is known to have a proper memory cgroup pointer. It's not safe to call + * this function against some type of pages, e.g. 
slab pages or ex-slab + * pages. + */ +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + struct mem_cgroup *memcg; + + rcu_read_lock(); +retry: + memcg = page_memcg(page); + if (unlikely(memcg && !css_tryget(&memcg->css))) + goto retry; + rcu_read_unlock(); + + return memcg; +} + +/* * page_memcg_rcu - locklessly get the memory cgroup associated with a page * @page: a pointer to the page struct * @@ -870,7 +895,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm, return match; } -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page); +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page); ino_t page_cgroup_ino(struct page *page); static inline bool mem_cgroup_online(struct mem_cgroup *memcg) @@ -1030,10 +1055,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg, static inline void count_memcg_page_event(struct page *page, enum vm_event_item idx) { - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = page_memcg(page); if (memcg) count_memcg_events(memcg, idx, 1); + rcu_read_unlock(); } static inline void count_memcg_event_mm(struct mm_struct *mm, @@ -1107,6 +1135,11 @@ static inline struct mem_cgroup *page_memcg(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return NULL; +} + static inline struct mem_cgroup *page_memcg_rcu(struct page *page) { WARN_ON_ONCE(!rcu_read_lock_held()); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 06c6bcaa265a..d294504aea0c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -410,7 +410,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif /** - * mem_cgroup_css_from_page - css of the memcg associated with a page + * get_mem_cgroup_css_from_page - get css of the memcg associated with a page * @page: page of interest * * If memcg is bound to the default hierarchy, css of the memcg associated @@ -420,13 +420,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup * is returned. */ -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page) +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page) { struct mem_cgroup *memcg; - memcg = page_memcg(page); + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) + return &root_mem_cgroup->css; - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + memcg = get_mem_cgroup_from_page(page); + if (!memcg) memcg = root_mem_cgroup; return &memcg->css; @@ -2015,7 +2017,9 @@ void lock_page_memcg(struct page *page) * The RCU lock is held throughout the transaction. The fast * path can get away without acquiring the memcg->move_lock * because page moving starts with an RCU grace period. - */ + * + * The RCU lock also protects the memcg from being freed. + */ rcu_read_lock(); if (mem_cgroup_disabled()) @@ -4591,7 +4595,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, void mem_cgroup_track_foreign_dirty_slowpath(struct page *page, struct bdi_writeback *wb) { - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg; struct memcg_cgwb_frn *frn; u64 now = get_jiffies_64(); u64 oldest_at = now; @@ -4600,6 +4604,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct page *page, trace_track_foreign_dirty(page, wb); + memcg = get_mem_cgroup_from_page(page); /* * Pick the slot to use. If there is already a slot for @wb, keep * using it. 
If not replace the oldest one which isn't being @@ -4638,6 +4643,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct page *page, frn->memcg_id = wb->memcg_css->id; frn->at = now; } + css_put(&memcg->css); } /* issue foreign writeback flushes for recorded foreign dirtying events */ @@ -6168,6 +6174,14 @@ static void mem_cgroup_move_charge(void) atomic_dec(&mc.from->moving_account); } +/* + * The cgroup migration and memory cgroup offlining are serialized by + * @cgroup_mutex. If we reach here, it means that the LRU pages cannot + * be reparented to its parent memory cgroup. So during the whole process + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not + * need to worry about the memcg (returned from page_memcg()) being + * released even if we do not hold an rcu read lock. + */ static void mem_cgroup_move_task(void) { if (mc.to) { @@ -6995,7 +7009,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) if (page_memcg(newpage)) return; - memcg = page_memcg(oldpage); + memcg = get_mem_cgroup_from_page(oldpage); VM_WARN_ON_ONCE_PAGE(!memcg, oldpage); if (!memcg) return; @@ -7016,6 +7030,8 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) mem_cgroup_charge_statistics(memcg, newpage, nr_pages); memcg_check_events(memcg, newpage); local_irq_restore(flags); + + css_put(&memcg->css); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7204,6 +7220,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + /* + * Interrupts should be disabled by the caller (see the comments below), + * which can serve as RCU read-side critical sections. + */ memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); @@ -7271,15 +7291,16 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) return 0; + rcu_read_lock(); memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); if (!memcg) - return 0; + goto out; if (!entry.val) { memcg_memory_event(memcg, MEMCG_SWAP_FAIL); - return 0; + goto out; } memcg = mem_cgroup_id_get_online(memcg); @@ -7289,6 +7310,7 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) memcg_memory_event(memcg, MEMCG_SWAP_MAX); memcg_memory_event(memcg, MEMCG_SWAP_FAIL); mem_cgroup_id_put(memcg); + rcu_read_unlock(); return -ENOMEM; } @@ -7298,6 +7320,8 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages); VM_BUG_ON_PAGE(oldid, page); mod_memcg_state(memcg, MEMCG_SWAP, nr_pages); +out: + rcu_read_unlock(); return 0; } @@ -7352,17 +7376,22 @@ bool mem_cgroup_swap_full(struct page *page) if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return false; + rcu_read_lock(); memcg = page_memcg(page); if (!memcg) - return false; + goto out; for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { unsigned long usage = page_counter_read(&memcg->swap); if (usage * 2 >= READ_ONCE(memcg->swap.high) || - usage * 2 >= READ_ONCE(memcg->swap.max)) + usage * 2 >= READ_ONCE(memcg->swap.max)) { + rcu_read_unlock(); return true; + } } +out: + rcu_read_unlock(); return false; } diff --git a/mm/migrate.c b/mm/migrate.c index 76cb9b422dc2..c4594b20dcec 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -465,6 +465,10 @@ int migrate_page_move_mapping(struct address_space *mapping, struct lruvec *old_lruvec, *new_lruvec; struct mem_cgroup *memcg; + /* + * Irq is disabled, which can 
serve as RCU read-side critical
+	 * sections.
+	 */
 	memcg = page_memcg(page);
 	old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 	new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index c493ce9ebcf5..81744777ab76 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -269,13 +269,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return;
+		goto out;
 
-	rcu_read_lock();
 	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
 	bio_associate_blkg_from_css(bio, css);
 
+out:
 	rcu_read_unlock();
 }
 
 #else

From patchwork Thu May 27 09:33:32 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283865
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 08/12] mm: memcontrol: introduce memcg_reparent_ops
Date: Thu, 27 May 2021 17:33:32 +0800
Message-Id: <20210527093336.14895-9-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

In the previous patch, we know how to make the lruvec lock safe when the LRU pages are reparented. We should do something like the following.

    memcg_reparent_objcgs(memcg)
        1) lock
           // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
           spin_lock(&lruvec->lru_lock);
           spin_lock(&lruvec_parent->lru_lock);

        2) do reparent
           // Move all the pages from the lruvec list to the parent lruvec list.

        3) unlock
           spin_unlock(&lruvec_parent->lru_lock);
           spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP only) also needs to do something similar. So we extract the necessary three steps into memcg_reparent_objcgs().
memcg_reparent_objcgs(memcg) 1) lock memcg_reparent_ops->lock(memcg, parent); 2) reparent memcg_reparent_ops->reparent(memcg, reparent); 3) unlock memcg_reparent_ops->unlock(memcg, reparent); Now there are two different locks (e.g. lruvec lock and deferred split queue lock) need to use this infrastructure. In the next patch, we will use those APIs to make those locks safe when the LRU pages reparented. Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 7 +++++++ mm/memcontrol.c | 43 +++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 48 insertions(+), 2 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 99f745e23607..336d80605763 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -354,6 +354,13 @@ struct mem_cgroup { struct mem_cgroup_per_node *nodeinfo[]; }; +struct memcg_reparent_ops { + /* Irq is disabled before calling those callbacks. */ + void (*lock)(struct mem_cgroup *memcg, struct mem_cgroup *parent); + void (*unlock)(struct mem_cgroup *memcg, struct mem_cgroup *parent); + void (*reparent)(struct mem_cgroup *memcg, struct mem_cgroup *parent); +}; + /* * size of first charge trial. "32" comes from vmscan.c's magic value. * TODO: maybe necessary to use big numbers in big irons. diff --git a/mm/memcontrol.c b/mm/memcontrol.c index d294504aea0c..470cdf0fbff1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -330,6 +330,35 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } +static const struct memcg_reparent_ops *memcg_reparent_ops[] = {}; + +static void memcg_reparent_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) + memcg_reparent_ops[i]->lock(memcg, parent); +} + +static void memcg_reparent_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) + memcg_reparent_ops[i]->unlock(memcg, parent); +} + +static void memcg_do_reparent(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) + memcg_reparent_ops[i]->reparent(memcg, parent); +} + static void memcg_reparent_objcgs(struct mem_cgroup *memcg) { struct obj_cgroup *objcg, *iter; @@ -339,9 +368,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg) if (!parent) parent = root_mem_cgroup; + local_irq_disable(); + + memcg_reparent_lock(memcg, parent); + objcg = rcu_replace_pointer(memcg->objcg, NULL, true); - spin_lock_irq(&css_set_lock); + spin_lock(&css_set_lock); /* 1) Ready to reparent active objcg. 
 */
 	list_add(&objcg->list, &memcg->objcg_list);
@@ -351,7 +384,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 	/* 3) Move already reparented objcgs to the parent's list */
 	list_splice(&memcg->objcg_list, &parent->objcg_list);
-	spin_unlock_irq(&css_set_lock);
+	spin_unlock(&css_set_lock);
+
+	memcg_do_reparent(memcg, parent);
+
+	memcg_reparent_unlock(memcg, parent);
+
+	local_irq_enable();
 
 	percpu_ref_kill(&objcg->refcnt);
 }

From patchwork Thu May 27 09:33:33 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12283867
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [RFC PATCH v4 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
Date: Thu, 27 May 2021 17:33:33 +0800
Message-Id: <20210527093336.14895-10-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

We will reuse the obj_cgroup APIs to charge the LRU pages. Finally, page->memcg_data will have two different meanings:

- For the slab pages, page->memcg_data points to an object cgroups vector.
- For the kmem pages (excluding the slab pages) and the LRU pages, page->memcg_data points to an object cgroup.

In this patch, we reuse the obj_cgroup APIs to charge the LRU pages. In the end, long-living page cache pages can no longer pin the original memory cgroup in memory.

At the same time, we also change the rules for page and objcg or memcg binding stability. The new rules are as follows.

For a page, any of the following ensures page and objcg binding stability:

- the page lock
- LRU isolation
- lock_page_memcg()
- exclusive reference

Based on the stable binding of page and objcg, for a page, any of the following ensures page and memcg binding stability:

- css_set_lock
- cgroup_mutex
- the lruvec lock
- the split queue lock (THP pages only)

If the caller only wants to ensure that the page counters of the memcg are updated correctly, the binding stability of page and objcg is sufficient.
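
For illustration only (this walker is not part of the patch), the sketch below shows how one of the locks listed above pins the page and memcg binding; it assumes the lruvec lock, because the reparenting side takes the same lock through the lruvec reparent ops added in this series.

	/* Hypothetical LRU walker relying on the lruvec lock for stability. */
	static void walk_lru_example(struct lruvec *lruvec, enum lru_list lru)
	{
		struct page *page;

		spin_lock_irq(&lruvec->lru_lock);
		list_for_each_entry(page, &lruvec->lists[lru], lru) {
			struct mem_cgroup *memcg = page_memcg(page);

			/*
			 * The memcg cannot be switched to its parent here:
			 * reparenting must take this lru_lock first.
			 */
			(void)memcg;	/* ... per-memcg processing ... */
		}
		spin_unlock_irq(&lruvec->lru_lock);
	}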
Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 96 ++++++--------- mm/huge_memory.c | 42 +++++++ mm/memcontrol.c | 293 ++++++++++++++++++++++++++++++++------------- 3 files changed, 291 insertions(+), 140 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 336d80605763..25777e39bc34 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -380,8 +380,6 @@ enum page_memcg_data_flags { #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) -static inline bool PageMemcgKmem(struct page *page); - /* * After the initialization objcg->memcg is always pointing at * a valid memcg, but can be atomically swapped to the parent memcg. @@ -395,43 +393,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) } /* - * __page_memcg - get the memory cgroup associated with a non-kmem page - * @page: a pointer to the page struct - * - * Returns a pointer to the memory cgroup associated with the page, - * or NULL. This function assumes that the page is known to have a - * proper memory cgroup pointer. It's not safe to call this function - * against some type of pages, e.g. slab pages or ex-slab pages or - * kmem pages. - */ -static inline struct mem_cgroup *__page_memcg(struct page *page) -{ - unsigned long memcg_data = page->memcg_data; - - VM_BUG_ON_PAGE(PageSlab(page), page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page); - - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); -} - -/* - * __page_objcg - get the object cgroup associated with a kmem page + * page_objcg - get the object cgroup associated with page * @page: a pointer to the page struct * * Returns a pointer to the object cgroup associated with the page, * or NULL. This function assumes that the page is known to have a - * proper object cgroup pointer. It's not safe to call this function - * against some type of pages, e.g. slab pages or ex-slab pages or - * LRU pages. + * proper object cgroup pointer. */ -static inline struct obj_cgroup *__page_objcg(struct page *page) +static inline struct obj_cgroup *page_objcg(struct page *page) { unsigned long memcg_data = page->memcg_data; VM_BUG_ON_PAGE(PageSlab(page), page); VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page); - VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page); return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } @@ -445,23 +419,35 @@ static inline struct obj_cgroup *__page_objcg(struct page *page) * proper memory cgroup pointer. It's not safe to call this function * against some type of pages, e.g. slab pages or ex-slab pages. * - * For a non-kmem page any of the following ensures page and memcg binding - * stability: + * For a page any of the following ensures page and objcg binding stability: * * - the page lock * - LRU isolation * - lock_page_memcg() * - exclusive reference * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * Based on the stable binding of page and objcg, for a page any of the + * following ensures page and memcg binding stability: + * + * - css_set_lock + * - cgroup_mutex + * - the lruvec lock + * - the split queue lock (only THP page) + * + * If the caller only want to ensure that the page counters of memcg are + * updated correctly, ensure that the binding stability of page and objcg + * is sufficient. 
+ * + * A caller should hold an rcu read lock (In addition, regions of code across + * which interrupts, preemption, or softirqs have been disabled also serve as + * RCU read-side critical sections) to protect memcg associated with a page + * from being released. */ static inline struct mem_cgroup *page_memcg(struct page *page) { - if (PageMemcgKmem(page)) - return obj_cgroup_memcg(__page_objcg(page)); - else - return __page_memcg(page); + struct obj_cgroup *objcg = page_objcg(page); + + return objcg ? obj_cgroup_memcg(objcg) : NULL; } /* @@ -474,6 +460,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page) * is known to have a proper memory cgroup pointer. It's not safe to call * this function against some type of pages, e.g. slab pages or ex-slab * pages. + * + * The page and objcg or memcg binding rules can refer to page_memcg(). */ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) { @@ -497,22 +485,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) * or NULL. This function assumes that the page is known to have a * proper memory cgroup pointer. It's not safe to call this function * against some type of pages, e.g. slab pages or ex-slab pages. + * + * The page and objcg or memcg binding rules can refer to page_memcg(). */ static inline struct mem_cgroup *page_memcg_rcu(struct page *page) { unsigned long memcg_data = READ_ONCE(page->memcg_data); + struct obj_cgroup *objcg; VM_BUG_ON_PAGE(PageSlab(page), page); WARN_ON_ONCE(!rcu_read_lock_held()); - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } /* @@ -525,16 +511,10 @@ static inline struct mem_cgroup *page_memcg_rcu(struct page *page) * has an associated memory cgroup pointer or an object cgroups vector or * an object cgroup. * - * For a non-kmem page any of the following ensures page and memcg binding - * stability: - * - * - the page lock - * - LRU isolation - * - lock_page_memcg() - * - exclusive reference + * The page and objcg or memcg binding rules can refer to page_memcg(). * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * A caller should hold an rcu read lock to protect memcg associated with a + * page from being released. */ static inline struct mem_cgroup *page_memcg_check(struct page *page) { @@ -543,18 +523,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) * for slab pages, READ_ONCE() should be used here. */ unsigned long memcg_data = READ_ONCE(page->memcg_data); + struct obj_cgroup *objcg; if (memcg_data & MEMCG_DATA_OBJCGS) return NULL; - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? 
obj_cgroup_memcg(objcg) : NULL; } #ifdef CONFIG_MEMCG_KMEM diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 8f0761563d46..78cf65c29336 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -496,6 +496,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG +static struct shrinker deferred_split_shrinker; + static inline struct mem_cgroup *split_queue_to_memcg(struct deferred_split *queue) { return container_of(queue, struct mem_cgroup, deferred_split_queue); @@ -554,6 +556,46 @@ static struct deferred_split *lock_split_queue_irqsave(struct page *page, return queue; } + +static void memcg_reparent_split_queue_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + spin_lock(&memcg->deferred_split_queue.split_queue_lock); + spin_lock(&parent->deferred_split_queue.split_queue_lock); +} + +static void memcg_reparent_split_queue_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + spin_unlock(&parent->deferred_split_queue.split_queue_lock); + spin_unlock(&memcg->deferred_split_queue.split_queue_lock); +} + +static void memcg_reparent_split_queue(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int nid; + struct deferred_split *src, *dst; + + src = &memcg->deferred_split_queue; + dst = &parent->deferred_split_queue; + + if (!src->split_queue_len) + return; + + list_splice_tail_init(&src->split_queue, &dst->split_queue); + dst->split_queue_len += src->split_queue_len; + src->split_queue_len = 0; + + for_each_node(nid) + set_shrinker_bit(parent, nid, deferred_split_shrinker.id); +} + +const struct memcg_reparent_ops split_queue_reparent_ops = { + .lock = memcg_reparent_split_queue_lock, + .unlock = memcg_reparent_split_queue_unlock, + .reparent = memcg_reparent_split_queue, +}; #else static struct deferred_split *lock_split_queue(struct page *page) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 470cdf0fbff1..48d40764ed49 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -75,6 +75,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly; EXPORT_SYMBOL(memory_cgrp_subsys); struct mem_cgroup *root_mem_cgroup __read_mostly; +static struct obj_cgroup *root_obj_cgroup __read_mostly; /* Active memory cgroup to use from an interrupt context */ DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg); @@ -254,6 +255,11 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr) extern spinlock_t css_set_lock; +static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg) +{ + return objcg == root_obj_cgroup; +} + #ifdef CONFIG_MEMCG_KMEM static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg, unsigned int nr_pages); @@ -330,7 +336,81 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static const struct memcg_reparent_ops *memcg_reparent_ops[] = {}; +static void memcg_reparent_lruvec_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + spin_lock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock); + spin_lock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock); + } +} + +static void memcg_reparent_lruvec_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + spin_unlock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock); + spin_unlock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock); + } +} + +static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, + enum lru_list lru) +{ + int zid; + struct mem_cgroup_per_node *mz_src, *mz_dst; + + mz_src = container_of(src, struct 
mem_cgroup_per_node, lruvec); + mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec); + + list_splice_tail_init(&src->lists[lru], &dst->lists[lru]); + + for (zid = 0; zid < MAX_NR_ZONES; zid++) { + mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru]; + mz_src->lru_zone_size[zid][lru] = 0; + } +} + +static void memcg_reparent_lruvec(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + enum lru_list lru; + struct lruvec *src, *dst; + + src = mem_cgroup_lruvec(memcg, NODE_DATA(i)); + dst = mem_cgroup_lruvec(parent, NODE_DATA(i)); + + dst->anon_cost += src->anon_cost; + dst->file_cost += src->file_cost; + + for_each_lru(lru) + lruvec_reparent_lru(src, dst, lru); + } +} + +static const struct memcg_reparent_ops lruvec_reparent_ops = { + .lock = memcg_reparent_lruvec_lock, + .unlock = memcg_reparent_lruvec_unlock, + .reparent = memcg_reparent_lruvec, +}; + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +extern struct memcg_reparent_ops split_queue_reparent_ops; +#endif + +static const struct memcg_reparent_ops *memcg_reparent_ops[] = { + &lruvec_reparent_ops, +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + &split_queue_reparent_ops, +#endif +}; static void memcg_reparent_lock(struct mem_cgroup *memcg, struct mem_cgroup *parent) @@ -2842,18 +2922,18 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages) } #endif -static void commit_charge(struct page *page, struct mem_cgroup *memcg) +static void commit_charge(struct page *page, struct obj_cgroup *objcg) { - VM_BUG_ON_PAGE(page_memcg(page), page); + VM_BUG_ON_PAGE(page_objcg(page), page); /* - * Any of the following ensures page's memcg stability: + * Any of the following ensures page's objcg stability: * * - the page lock * - LRU isolation * - lock_page_memcg() * - exclusive reference */ - page->memcg_data = (unsigned long)memcg; + page->memcg_data = (unsigned long)objcg; } static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) @@ -2870,6 +2950,21 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) return memcg; } +static struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgroup *memcg) +{ + struct obj_cgroup *objcg = NULL; + + rcu_read_lock(); + for (; memcg; memcg = parent_mem_cgroup(memcg)) { + objcg = rcu_dereference(memcg->objcg); + if (objcg && obj_cgroup_tryget(objcg)) + break; + } + rcu_read_unlock(); + + return objcg; +} + #ifdef CONFIG_MEMCG_KMEM /* * The allocated objcg pointers array is not accounted directly. 
@@ -2975,12 +3070,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) else memcg = mem_cgroup_from_task(current); - for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { - objcg = rcu_dereference(memcg->objcg); - if (objcg && obj_cgroup_tryget(objcg)) - break; + if (mem_cgroup_is_root(memcg)) + goto out; + + objcg = get_obj_cgroup_from_memcg(memcg); + if (obj_cgroup_is_root(objcg)) { + obj_cgroup_put(objcg); objcg = NULL; } +out: rcu_read_unlock(); return objcg; @@ -3123,13 +3221,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order) */ void __memcg_kmem_uncharge_page(struct page *page, int order) { - struct obj_cgroup *objcg; + struct obj_cgroup *objcg = page_objcg(page); unsigned int nr_pages = 1 << order; - if (!PageMemcgKmem(page)) + if (!objcg) return; - objcg = __page_objcg(page); + VM_BUG_ON_PAGE(!PageMemcgKmem(page), page); obj_cgroup_uncharge_pages(objcg, nr_pages); page->memcg_data = 0; obj_cgroup_put(objcg); @@ -3359,23 +3457,20 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) #endif /* CONFIG_MEMCG_KMEM */ /* - * Because page_memcg(head) is not set on tails, set it now. + * Because page_objcg(head) is not set on tails, set it now. */ void split_page_memcg(struct page *head, unsigned int nr) { - struct mem_cgroup *memcg = page_memcg(head); + struct obj_cgroup *objcg = page_objcg(head); int i; - if (mem_cgroup_disabled() || !memcg) + if (mem_cgroup_disabled() || !objcg) return; for (i = 1; i < nr; i++) head[i].memcg_data = head->memcg_data; - if (PageMemcgKmem(head)) - obj_cgroup_get_many(__page_objcg(head), nr - 1); - else - css_get_many(&memcg->css, nr - 1); + obj_cgroup_get_many(objcg, nr - 1); } #ifdef CONFIG_MEMCG_SWAP @@ -5362,6 +5457,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) objcg->memcg = memcg; rcu_assign_pointer(memcg->objcg, objcg); + if (unlikely(mem_cgroup_is_root(memcg))) + root_obj_cgroup = objcg; + /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); css_get(css); @@ -5736,10 +5834,10 @@ static int mem_cgroup_move_account(struct page *page, */ smp_mb(); - css_get(&to->css); - css_put(&from->css); + obj_cgroup_get(to->objcg); + obj_cgroup_put(from->objcg); - page->memcg_data = (unsigned long)to; + page->memcg_data = (unsigned long)to->objcg; __unlock_page_memcg(from); @@ -6211,6 +6309,42 @@ static void mem_cgroup_move_charge(void) mmap_read_unlock(mc.mm); atomic_dec(&mc.from->moving_account); + + /* + * Moving its pages to another memcg is finished. Wait for already + * started RCU-only updates to finish to make sure that the caller + * of lock_page_memcg() can unlock the correct move_lock. The + * possible bad scenario would like: + * + * CPU0: CPU1: + * mem_cgroup_move_charge() + * walk_page_range() + * + * lock_page_memcg(page) + * memcg = page_memcg(page) + * spin_lock_irqsave(&memcg->move_lock) + * memcg->move_lock_task = current + * + * atomic_dec(&mc.from->moving_account) + * + * mem_cgroup_css_offline() + * memcg_offline_kmem() + * memcg_reparent_objcgs() <== reparented + * + * unlock_page_memcg(page) + * memcg = page_memcg(page) <== memcg has been changed + * if (memcg->move_lock_task == current) <== false + * spin_unlock_irqrestore(&memcg->move_lock) + * + * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex + * would be released soon), the page can be reparented to its parent + * memcg. When the unlock_page_memcg() is called for the page, we will + * miss unlock the move_lock. 
So using synchronize_rcu to wait for + * already started RCU-only updates to finish before this function + * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are + * serialized by cgroup_mutex). + */ + synchronize_rcu(); } /* @@ -6766,21 +6900,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, gfp_t gfp) { + struct obj_cgroup *objcg; unsigned int nr_pages = thp_nr_pages(page); - int ret; + int ret = 0; - ret = try_charge(memcg, gfp, nr_pages); + objcg = get_obj_cgroup_from_memcg(memcg); + /* Do not account at the root objcg level. */ + if (!obj_cgroup_is_root(objcg)) + ret = try_charge(memcg, gfp, nr_pages); if (ret) goto out; - css_get(&memcg->css); - commit_charge(page, memcg); + obj_cgroup_get(objcg); + commit_charge(page, objcg); local_irq_disable(); mem_cgroup_charge_statistics(memcg, page, nr_pages); memcg_check_events(memcg, page); local_irq_enable(); out: + obj_cgroup_put(objcg); return ret; } @@ -6881,7 +7020,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry) } struct uncharge_gather { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; @@ -6896,63 +7035,56 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug) static void uncharge_batch(const struct uncharge_gather *ug) { unsigned long flags; + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = obj_cgroup_memcg(ug->objcg); if (ug->nr_memory) { - page_counter_uncharge(&ug->memcg->memory, ug->nr_memory); + page_counter_uncharge(&memcg->memory, ug->nr_memory); if (do_memsw_account()) - page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory); + page_counter_uncharge(&memcg->memsw, ug->nr_memory); if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem) - page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem); - memcg_oom_recover(ug->memcg); + page_counter_uncharge(&memcg->kmem, ug->nr_kmem); + memcg_oom_recover(memcg); } local_irq_save(flags); - __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); - __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory); - memcg_check_events(ug->memcg, ug->dummy_page); + __count_memcg_events(memcg, PGPGOUT, ug->pgpgout); + __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory); + memcg_check_events(memcg, ug->dummy_page); local_irq_restore(flags); + rcu_read_unlock(); /* drop reference from uncharge_page */ - css_put(&ug->memcg->css); + obj_cgroup_put(ug->objcg); } static void uncharge_page(struct page *page, struct uncharge_gather *ug) { unsigned long nr_pages; - struct mem_cgroup *memcg; struct obj_cgroup *objcg; VM_BUG_ON_PAGE(PageLRU(page), page); /* * Nobody should be changing or seriously looking at - * page memcg or objcg at this point, we have fully - * exclusive access to the page. + * page objcg at this point, we have fully exclusive + * access to the page. */ - if (PageMemcgKmem(page)) { - objcg = __page_objcg(page); - /* - * This get matches the put at the end of the function and - * kmem pages do not hold memcg references anymore. 
- */ - memcg = get_mem_cgroup_from_objcg(objcg); - } else { - memcg = __page_memcg(page); - } - - if (!memcg) + objcg = page_objcg(page); + if (!objcg) return; - if (ug->memcg != memcg) { - if (ug->memcg) { + if (ug->objcg != objcg) { + if (ug->objcg) { uncharge_batch(ug); uncharge_gather_clear(ug); } - ug->memcg = memcg; + ug->objcg = objcg; ug->dummy_page = page; - /* pairs with css_put in uncharge_batch */ - css_get(&memcg->css); + /* pairs with obj_cgroup_put in uncharge_batch */ + obj_cgroup_get(objcg); } nr_pages = compound_nr(page); @@ -6960,19 +7092,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug) if (PageMemcgKmem(page)) { ug->nr_memory += nr_pages; ug->nr_kmem += nr_pages; - - page->memcg_data = 0; - obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) ug->nr_memory += nr_pages; ug->pgpgout++; - - page->memcg_data = 0; } - css_put(&memcg->css); + page->memcg_data = 0; + obj_cgroup_put(objcg); } /** @@ -6989,7 +7117,7 @@ void mem_cgroup_uncharge(struct page *page) return; /* Don't touch page->lru of any random page, pre-check: */ - if (!page_memcg(page)) + if (!page_objcg(page)) return; uncharge_gather_clear(&ug); @@ -7015,7 +7143,7 @@ void mem_cgroup_uncharge_list(struct list_head *page_list) uncharge_gather_clear(&ug); list_for_each_entry(page, page_list, lru) uncharge_page(page, &ug); - if (ug.memcg) + if (ug.objcg) uncharge_batch(&ug); } @@ -7032,6 +7160,7 @@ void mem_cgroup_uncharge_list(struct list_head *page_list) void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) { struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned int nr_pages; unsigned long flags; @@ -7045,32 +7174,34 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) return; /* Page cache replacement: new page already charged? */ - if (page_memcg(newpage)) + if (page_objcg(newpage)) return; - memcg = get_mem_cgroup_from_page(oldpage); - VM_WARN_ON_ONCE_PAGE(!memcg, oldpage); - if (!memcg) + objcg = page_objcg(oldpage); + VM_WARN_ON_ONCE_PAGE(!objcg, oldpage); + if (!objcg) return; /* Force-charge the new page. The old one will be freed soon */ nr_pages = thp_nr_pages(newpage); - if (!mem_cgroup_is_root(memcg)) { + rcu_read_lock(); + memcg = obj_cgroup_memcg(objcg); + + if (!obj_cgroup_is_root(objcg)) { page_counter_charge(&memcg->memory, nr_pages); if (do_memsw_account()) page_counter_charge(&memcg->memsw, nr_pages); } - css_get(&memcg->css); - commit_charge(newpage, memcg); + obj_cgroup_get(objcg); + commit_charge(newpage, objcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, newpage, nr_pages); memcg_check_events(memcg, newpage); local_irq_restore(flags); - - css_put(&memcg->css); + rcu_read_unlock(); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7247,6 +7378,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg) void mem_cgroup_swapout(struct page *page, swp_entry_t entry) { struct mem_cgroup *memcg, *swap_memcg; + struct obj_cgroup *objcg; unsigned int nr_entries; unsigned short oldid; @@ -7259,15 +7391,16 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + objcg = page_objcg(page); + VM_WARN_ON_ONCE_PAGE(!objcg, page); + if (!objcg) + return; + /* * Interrupts should be disabled by the caller (see the comments below), * which can serve as RCU read-side critical sections. 
 	 */
-	memcg = page_memcg(page);
-
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
-	if (!memcg)
-		return;
+	memcg = obj_cgroup_memcg(objcg);
 
 	/*
 	 * In case the memcg owning these pages has been offlined and doesn't
@@ -7286,7 +7419,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 
 	page->memcg_data = 0;
 
-	if (!mem_cgroup_is_root(memcg))
+	if (!obj_cgroup_is_root(objcg))
 		page_counter_uncharge(&memcg->memory, nr_entries);
 
 	if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7305,7 +7438,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	mem_cgroup_charge_statistics(memcg, page, -nr_entries);
 	memcg_check_events(memcg, page);
 
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 /**

From patchwork Thu May 27 09:33:34 2021
From: Muchun Song
Subject: [RFC PATCH v4 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
Date: Thu, 27 May 2021 17:33:34 +0800
Message-Id: <20210527093336.14895-11-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

Now lock_page_memcg() does not lock a page and memcg binding; it
actually locks a page and objcg binding. So rename lock_page_memcg()
to lock_page_objcg(). This is just a code cleanup without any
functional changes.
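For readers unfamiliar with the API, here is a minimal sketch of a
caller after this rename. The helper below is hypothetical and simply
mirrors the __set_page_dirty_buffers() hunk in this patch:

static int mark_page_dirty_sketch(struct page *page,
				  struct address_space *mapping)
{
	int newly_dirty;

	/*
	 * lock_page_objcg() pins the page and objcg binding so that the
	 * per-memcg dirty statistics updated by __set_page_dirty()
	 * cannot race with charge migration.
	 */
	lock_page_objcg(page);
	newly_dirty = !TestSetPageDirty(page);
	if (newly_dirty)
		__set_page_dirty(page, mapping, 1);
	unlock_page_objcg(page);

	return newly_dirty;
}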
Signed-off-by: Muchun Song --- Documentation/admin-guide/cgroup-v1/memory.rst | 2 +- fs/buffer.c | 10 +++---- fs/iomap/buffered-io.c | 4 +-- include/linux/memcontrol.h | 18 +++++++---- mm/filemap.c | 2 +- mm/huge_memory.c | 4 +-- mm/memcontrol.c | 41 ++++++++++++++++---------- mm/page-writeback.c | 24 +++++++-------- mm/rmap.c | 14 ++++----- 9 files changed, 67 insertions(+), 52 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst index 41191b5fb69d..dd582312b91a 100644 --- a/Documentation/admin-guide/cgroup-v1/memory.rst +++ b/Documentation/admin-guide/cgroup-v1/memory.rst @@ -291,7 +291,7 @@ Lock order is as follows: Page lock (PG_locked bit of page->flags) mm->page_table_lock or split pte_lock - lock_page_memcg (memcg->move_lock) + lock_page_objcg (memcg->move_lock) mapping->i_pages lock lruvec->lru_lock. diff --git a/fs/buffer.c b/fs/buffer.c index a542a47f6e27..6935f12d23f8 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -595,7 +595,7 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode); * If warn is true, then emit a warning if the page is not uptodate and has * not been truncated. * - * The caller must hold lock_page_memcg(). + * The caller must hold lock_page_objcg(). */ void __set_page_dirty(struct page *page, struct address_space *mapping, int warn) @@ -660,14 +660,14 @@ int __set_page_dirty_buffers(struct page *page) * Lock out page's memcg migration to keep PageDirty * synchronized with per-memcg dirty page counters. */ - lock_page_memcg(page); + lock_page_objcg(page); newly_dirty = !TestSetPageDirty(page); spin_unlock(&mapping->private_lock); if (newly_dirty) __set_page_dirty(page, mapping, 1); - unlock_page_memcg(page); + unlock_page_objcg(page); if (newly_dirty) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); @@ -1164,13 +1164,13 @@ void mark_buffer_dirty(struct buffer_head *bh) struct page *page = bh->b_page; struct address_space *mapping = NULL; - lock_page_memcg(page); + lock_page_objcg(page); if (!TestSetPageDirty(page)) { mapping = page_mapping(page); if (mapping) __set_page_dirty(page, mapping, 0); } - unlock_page_memcg(page); + unlock_page_objcg(page); if (mapping) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); } diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 9023717c5188..de6d07fe5e07 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -653,11 +653,11 @@ iomap_set_page_dirty(struct page *page) * Lock out page's memcg migration to keep PageDirty * synchronized with per-memcg dirty page counters. */ - lock_page_memcg(page); + lock_page_objcg(page); newly_dirty = !TestSetPageDirty(page); if (newly_dirty) __set_page_dirty(page, mapping, 0); - unlock_page_memcg(page); + unlock_page_objcg(page); if (newly_dirty) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 25777e39bc34..76d6b82fec15 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -419,11 +419,12 @@ static inline struct obj_cgroup *page_objcg(struct page *page) * proper memory cgroup pointer. It's not safe to call this function * against some type of pages, e.g. slab pages or ex-slab pages. 
* - * For a page any of the following ensures page and objcg binding stability: + * For a page any of the following ensures page and objcg binding stability + * (But the page can be reparented to its parent memcg): * * - the page lock * - LRU isolation - * - lock_page_memcg() + * - lock_page_objcg() * - exclusive reference * * Based on the stable binding of page and objcg, for a page any of the @@ -943,8 +944,8 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg); extern bool cgroup_memory_noswap; #endif -void lock_page_memcg(struct page *page); -void unlock_page_memcg(struct page *page); +void lock_page_objcg(struct page *page); +void unlock_page_objcg(struct page *page); void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val); @@ -1113,6 +1114,11 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, #define MEM_CGROUP_ID_SHIFT 0 #define MEM_CGROUP_ID_MAX 0 +static inline struct obj_cgroup *page_objcg(struct page *page) +{ + return NULL; +} + static inline struct mem_cgroup *page_memcg(struct page *page) { return NULL; @@ -1340,11 +1346,11 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) { } -static inline void lock_page_memcg(struct page *page) +static inline void lock_page_objcg(struct page *page) { } -static inline void unlock_page_memcg(struct page *page) +static inline void unlock_page_objcg(struct page *page) { } diff --git a/mm/filemap.c b/mm/filemap.c index ba1068a1837f..85a1bdc86d3d 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -110,7 +110,7 @@ * ->i_pages lock (page_remove_rmap->set_page_dirty) * bdi.wb->list_lock (page_remove_rmap->set_page_dirty) * ->inode->i_lock (page_remove_rmap->set_page_dirty) - * ->memcg->move_lock (page_remove_rmap->lock_page_memcg) + * ->memcg->move_lock (page_remove_rmap->lock_page_objcg) * bdi.wb->list_lock (zap_pte_range->set_page_dirty) * ->inode->i_lock (zap_pte_range->set_page_dirty) * ->private_lock (zap_pte_range->__set_page_dirty_buffers) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 78cf65c29336..6548c9b8c0b3 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2244,7 +2244,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, atomic_inc(&page[i]._mapcount); } - lock_page_memcg(page); + lock_page_objcg(page); if (atomic_add_negative(-1, compound_mapcount_ptr(page))) { /* Last compound_mapcount is gone. */ __mod_lruvec_page_state(page, NR_ANON_THPS, @@ -2255,7 +2255,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, atomic_dec(&page[i]._mapcount); } } - unlock_page_memcg(page); + unlock_page_objcg(page); } smp_wmb(); /* make pte visible before pmd */ diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 48d40764ed49..33aad9ed5071 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1306,7 +1306,7 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg, * These functions are safe to use under any of the following conditions: * - page locked * - PageLRU cleared - * - lock_page_memcg() + * - lock_page_objcg() * - page->_refcount is zero */ struct lruvec *lock_page_lruvec(struct page *page) @@ -2117,16 +2117,16 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg) } /** - * lock_page_memcg - lock a page and memcg binding + * lock_page_objcg - lock a page and objcg binding * @page: the page * * This function protects unlocked LRU pages from being moved to * another cgroup. * - * It ensures lifetime of the locked memcg. Caller is responsible + * It ensures lifetime of the locked objcg. 
Caller is responsible * for the lifetime of the page. */ -void lock_page_memcg(struct page *page) +void lock_page_objcg(struct page *page) { struct page *head = compound_head(page); /* rmap on tail pages */ struct mem_cgroup *memcg; @@ -2164,18 +2164,27 @@ void lock_page_memcg(struct page *page) } /* + * The cgroup migration and memory cgroup offlining are serialized by + * cgroup_mutex. If we reach here, it means that we are race with cgroup + * migration (or we are cgroup migration) and the @page cannot be + * reparented to its parent memory cgroup. So during the whole process + * from lock_page_objcg(page) to unlock_page_objcg(page), page_memcg(page) + * and obj_cgroup_memcg(objcg) are stable. + * * When charge migration first begins, we can have multiple * critical sections holding the fast-path RCU lock and one * holding the slowpath move_lock. Track the task who has the - * move_lock for unlock_page_memcg(). + * move_lock for unlock_page_objcg(). */ memcg->move_lock_task = current; memcg->move_lock_flags = flags; } -EXPORT_SYMBOL(lock_page_memcg); +EXPORT_SYMBOL(lock_page_objcg); -static void __unlock_page_memcg(struct mem_cgroup *memcg) +static void __unlock_page_objcg(struct obj_cgroup *objcg) { + struct mem_cgroup *memcg = objcg ? obj_cgroup_memcg(objcg) : NULL; + if (memcg && memcg->move_lock_task == current) { unsigned long flags = memcg->move_lock_flags; @@ -2189,16 +2198,16 @@ static void __unlock_page_memcg(struct mem_cgroup *memcg) } /** - * unlock_page_memcg - unlock a page and memcg binding + * unlock_page_objcg - unlock a page and memcg binding * @page: the page */ -void unlock_page_memcg(struct page *page) +void unlock_page_objcg(struct page *page) { struct page *head = compound_head(page); - __unlock_page_memcg(page_memcg(head)); + __unlock_page_objcg(page_objcg(head)); } -EXPORT_SYMBOL(unlock_page_memcg); +EXPORT_SYMBOL(unlock_page_objcg); struct obj_stock { #ifdef CONFIG_MEMCG_KMEM @@ -2930,7 +2939,7 @@ static void commit_charge(struct page *page, struct obj_cgroup *objcg) * * - the page lock * - LRU isolation - * - lock_page_memcg() + * - lock_page_objcg() * - exclusive reference */ page->memcg_data = (unsigned long)objcg; @@ -5775,7 +5784,7 @@ static int mem_cgroup_move_account(struct page *page, from_vec = mem_cgroup_lruvec(from, pgdat); to_vec = mem_cgroup_lruvec(to, pgdat); - lock_page_memcg(page); + lock_page_objcg(page); if (PageAnon(page)) { if (page_mapped(page)) { @@ -5827,7 +5836,7 @@ static int mem_cgroup_move_account(struct page *page, * with (un)charging, migration, LRU putback, or anything else * that would rely on a stable page's memory cgroup. * - * Note that lock_page_memcg is a memcg lock, not a page lock, + * Note that lock_page_objcg is a memcg lock, not a page lock, * to save space. As soon as we switch page's memory cgroup to a * new memcg that isn't locked, the above state can change * concurrently again. Make sure we're truly done with it. @@ -5839,7 +5848,7 @@ static int mem_cgroup_move_account(struct page *page, page->memcg_data = (unsigned long)to->objcg; - __unlock_page_memcg(from); + __unlock_page_objcg(from->objcg); ret = 0; @@ -6281,7 +6290,7 @@ static void mem_cgroup_move_charge(void) { lru_add_drain_all(); /* - * Signal lock_page_memcg() to take the memcg's move_lock + * Signal lock_page_objcg() to take the memcg's move_lock * while we're moving its pages to another memcg. Then wait * for already started RCU-only updates to finish. 
*/ diff --git a/mm/page-writeback.c b/mm/page-writeback.c index f3bcd2bb00a6..285ba4e1306a 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2417,7 +2417,7 @@ int __set_page_dirty_no_writeback(struct page *page) /* * Helper function for set_page_dirty family. * - * Caller must hold lock_page_memcg(). + * Caller must hold lock_page_objcg(). * * NOTE: This relies on being atomic wrt interrupts. */ @@ -2449,7 +2449,7 @@ void account_page_dirtied(struct page *page, struct address_space *mapping) /* * Helper function for deaccounting dirty page without writeback. * - * Caller must hold lock_page_memcg(). + * Caller must hold lock_page_objcg(). */ void account_page_cleaned(struct page *page, struct address_space *mapping, struct bdi_writeback *wb) @@ -2476,13 +2476,13 @@ void account_page_cleaned(struct page *page, struct address_space *mapping, */ int __set_page_dirty_nobuffers(struct page *page) { - lock_page_memcg(page); + lock_page_objcg(page); if (!TestSetPageDirty(page)) { struct address_space *mapping = page_mapping(page); unsigned long flags; if (!mapping) { - unlock_page_memcg(page); + unlock_page_objcg(page); return 1; } @@ -2493,7 +2493,7 @@ int __set_page_dirty_nobuffers(struct page *page) __xa_set_mark(&mapping->i_pages, page_index(page), PAGECACHE_TAG_DIRTY); xa_unlock_irqrestore(&mapping->i_pages, flags); - unlock_page_memcg(page); + unlock_page_objcg(page); if (mapping->host) { /* !PageAnon && !swapper_space */ @@ -2501,7 +2501,7 @@ int __set_page_dirty_nobuffers(struct page *page) } return 1; } - unlock_page_memcg(page); + unlock_page_objcg(page); return 0; } EXPORT_SYMBOL(__set_page_dirty_nobuffers); @@ -2634,14 +2634,14 @@ void __cancel_dirty_page(struct page *page) struct bdi_writeback *wb; struct wb_lock_cookie cookie = {}; - lock_page_memcg(page); + lock_page_objcg(page); wb = unlocked_inode_to_wb_begin(inode, &cookie); if (TestClearPageDirty(page)) account_page_cleaned(page, mapping, wb); unlocked_inode_to_wb_end(inode, &cookie); - unlock_page_memcg(page); + unlock_page_objcg(page); } else { ClearPageDirty(page); } @@ -2728,7 +2728,7 @@ int test_clear_page_writeback(struct page *page) struct address_space *mapping = page_mapping(page); int ret; - lock_page_memcg(page); + lock_page_objcg(page); if (mapping && mapping_use_writeback_tags(mapping)) { struct inode *inode = mapping->host; struct backing_dev_info *bdi = inode_to_bdi(inode); @@ -2760,7 +2760,7 @@ int test_clear_page_writeback(struct page *page) dec_zone_page_state(page, NR_ZONE_WRITE_PENDING); inc_node_page_state(page, NR_WRITTEN); } - unlock_page_memcg(page); + unlock_page_objcg(page); return ret; } @@ -2769,7 +2769,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write) struct address_space *mapping = page_mapping(page); int ret, access_ret; - lock_page_memcg(page); + lock_page_objcg(page); if (mapping && mapping_use_writeback_tags(mapping)) { XA_STATE(xas, &mapping->i_pages, page_index(page)); struct inode *inode = mapping->host; @@ -2809,7 +2809,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write) inc_lruvec_page_state(page, NR_WRITEBACK); inc_zone_page_state(page, NR_ZONE_WRITE_PENDING); } - unlock_page_memcg(page); + unlock_page_objcg(page); access_ret = arch_make_page_accessible(page); /* * If writeback has been triggered on a page that cannot be made diff --git a/mm/rmap.c b/mm/rmap.c index f3860e46a14d..867ac600286a 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -31,7 +31,7 @@ * swap_lock (in swap_duplicate, swap_info_get) * mmlist_lock (in mmput, drain_mmlist and 
others)
  *   mapping->private_lock (in __set_page_dirty_buffers)
- *     lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ *     lock_page_objcg move_lock (in __set_page_dirty_buffers)
  *       i_pages lock (widely used)
  *         lruvec->lru_lock (in lock_page_lruvec_irq)
  *       inode->i_lock (in set_page_dirty's __mark_inode_dirty)
@@ -1127,7 +1127,7 @@ void do_page_add_anon_rmap(struct page *page,
 	bool first;
 
 	if (unlikely(PageKsm(page)))
-		lock_page_memcg(page);
+		lock_page_objcg(page);
 	else
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 
@@ -1155,7 +1155,7 @@ void do_page_add_anon_rmap(struct page *page,
 	}
 
 	if (unlikely(PageKsm(page))) {
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
 		return;
 	}
 
@@ -1215,7 +1215,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 	int i, nr = 1;
 
 	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 	if (compound && PageTransHuge(page)) {
 		int nr_pages = thp_nr_pages(page);
 
@@ -1244,7 +1244,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 	}
 	__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }
 
 static void page_remove_file_rmap(struct page *page, bool compound)
@@ -1345,7 +1345,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
  */
 void page_remove_rmap(struct page *page, bool compound)
 {
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 
 	if (!PageAnon(page)) {
 		page_remove_file_rmap(page, compound);
@@ -1384,7 +1384,7 @@ void page_remove_rmap(struct page *page, bool compound)
 	 * faster for those pages still in swapcache.
 	 */
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }
 
 /*

From patchwork Thu May 27 09:33:35 2021
From: Muchun Song
Subject: [RFC PATCH v4 11/12] mm: lru: add VM_BUG_ON_PAGE to lru maintenance function
Date: Thu, 27 May 2021 17:33:35 +0800
Message-Id: <20210527093336.14895-12-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>

We need to make sure that the page is deleted from or added to the
correct lruvec list. So add a VM_BUG_ON_PAGE() to catch invalid users.
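For reference, the invariant these new assertions check is, roughly,
that the lruvec passed in is the one the page currently belongs to.
A sketch of what page_matches_lruvec() verifies (hypothetical helper
name, paraphrased rather than copied from mm_inline.h):

/* A page matches a lruvec when they agree on NUMA node and memcg. */
static inline bool page_belongs_to_lruvec(struct page *page,
					  struct lruvec *lruvec)
{
	return lruvec_pgdat(lruvec) == page_pgdat(page) &&
	       lruvec_memcg(lruvec) == page_memcg(page);
}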
Signed-off-by: Muchun Song
---
 include/linux/mm_inline.h | 6 ++++++
 mm/vmscan.c               | 1 -
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..1ca1e2ab8565 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -84,6 +84,8 @@ static __always_inline void add_page_to_lru_list(struct page *page,
 {
 	enum lru_list lru = page_lru(page);
 
+	VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }
@@ -93,6 +95,8 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 {
 	enum lru_list lru = page_lru(page);
 
+	VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add_tail(&page->lru, &lruvec->lists[lru]);
 }
@@ -100,6 +104,8 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 static __always_inline void del_page_from_lru_list(struct page *page,
 						   struct lruvec *lruvec)
 {
+	VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page);
+
 	list_del(&page->lru);
 	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
 			-thp_nr_pages(page));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 731a8f5a4128..5c30dffce768 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2063,7 +2063,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			continue;
 		}
 
-		VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
From patchwork Thu May 27 09:33:36 2021
From: Muchun Song
Subject: [RFC PATCH v4 12/12] mm: lru: use lruvec lock to serialize memcg changes
Date: Thu, 27 May 2021 17:33:36 +0800
Message-Id: <20210527093336.14895-13-songmuchun@bytedance.com>
In-Reply-To: <20210527093336.14895-1-songmuchun@bytedance.com>
References: <20210527093336.14895-1-songmuchun@bytedance.com>
As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now lock_page_lruvec*() can detect whether the page memcg has been
changed, so we can rely on the lruvec lock to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). This is a
partial revert of commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn").

pagevec_lru_move_fn() is hotter than mem_cgroup_move_account(), so
removing an atomic operation from it is a worthwhile optimization.
This change also avoids dirtying the cacheline of a page that is not
on the LRU.

Signed-off-by: Muchun Song
---
 mm/compaction.c |  1 +
 mm/memcontrol.c | 31 +++++++++++++++++++++++++++++++
 mm/swap.c       | 41 +++++++++++------------------------------
 mm/vmscan.c     |  9 ++++-----
 4 files changed, 47 insertions(+), 35 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 225e06c63ec1..c8505542c33e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -531,6 +531,7 @@ static struct lruvec *compact_lock_page_irqsave(struct page *page,
 		spin_lock_irqsave(&lruvec->lru_lock, *flags);
 out:
+		/* See the comments in lock_page_lruvec(). */
 		if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
 			spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 			goto retry;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 33aad9ed5071..289524aaf629 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1318,12 +1318,38 @@ struct lruvec *lock_page_lruvec(struct page *page)
 	lruvec = mem_cgroup_page_lruvec(page);
 	spin_lock(&lruvec->lru_lock);
 
+	/*
+	 * The memcg of the page can be changed by any of the following
+	 * routines:
+	 *
+	 * 1) mem_cgroup_move_account() or
+	 * 2) memcg_reparent_objcgs()
+	 *
+	 * A possible bad scenario looks like:
+	 *
+	 * CPU0:                CPU1:                CPU2:
+	 * lruvec = mem_cgroup_page_lruvec()
+	 *
+	 *                      if (!isolate_lru_page())
+	 *                              mem_cgroup_move_account()
+	 *
+	 *                                           memcg_reparent_objcgs()
+	 *
+	 * spin_lock(&lruvec->lru_lock)
+	 *                ^^^^^^
+	 *                wrong lock
+	 *
+	 * Either CPU1 or CPU2 can change the page memcg, so we need to
+	 * check whether the page memcg has changed and, if so, reacquire
+	 * the new lruvec lock.
+	 */
 	if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
 		spin_unlock(&lruvec->lru_lock);
 		goto retry;
 	}
 
 	/*
+	 * When we reach here, it means that page_memcg(page) is stable.
+	 *
 	 * Preemption is disabled in the internal of spin_lock, which can serve
 	 * as RCU read-side critical sections.
 	 */
@@ -1341,6 +1367,7 @@ struct lruvec *lock_page_lruvec_irq(struct page *page)
 	lruvec = mem_cgroup_page_lruvec(page);
 	spin_lock_irq(&lruvec->lru_lock);
 
+	/* See the comments in lock_page_lruvec(). */
 	if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
 		spin_unlock_irq(&lruvec->lru_lock);
 		goto retry;
@@ -1361,6 +1388,7 @@ struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags)
 	lruvec = mem_cgroup_page_lruvec(page);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 
+	/* See the comments in lock_page_lruvec(). */
 	if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
 		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 		goto retry;
@@ -5846,7 +5874,10 @@ static int mem_cgroup_move_account(struct page *page,
 	obj_cgroup_get(to->objcg);
 	obj_cgroup_put(from->objcg);
 
+	/* See the comments in lock_page_lruvec().
*/ + spin_lock(&from_vec->lru_lock); page->memcg_data = (unsigned long)to->objcg; + spin_unlock(&from_vec->lru_lock); __unlock_page_objcg(from->objcg); diff --git a/mm/swap.c b/mm/swap.c index 6260e0e11268..a254f0dcfe1d 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -211,14 +211,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, for (i = 0; i < pagevec_count(pvec); i++) { struct page *page = pvec->pages[i]; - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) - continue; - lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags); (*move_fn)(page, lruvec); - - SetPageLRU(page); } if (lruvec) unlock_page_lruvec_irqrestore(lruvec, flags); @@ -228,7 +222,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec) { - if (!PageUnevictable(page)) { + if (PageLRU(page) && !PageUnevictable(page)) { del_page_from_lru_list(page, lruvec); ClearPageActive(page); add_page_to_lru_list_tail(page, lruvec); @@ -324,7 +318,7 @@ void lru_note_cost_page(struct page *page) static void __activate_page(struct page *page, struct lruvec *lruvec) { - if (!PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -377,12 +371,9 @@ static void activate_page(struct page *page) struct lruvec *lruvec; page = compound_head(page); - if (TestClearPageLRU(page)) { - lruvec = lock_page_lruvec_irq(page); - __activate_page(page, lruvec); - unlock_page_lruvec_irq(lruvec); - SetPageLRU(page); - } + lruvec = lock_page_lruvec_irq(page); + __activate_page(page, lruvec); + unlock_page_lruvec_irq(lruvec); } #endif @@ -537,6 +528,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) bool active = PageActive(page); int nr_pages = thp_nr_pages(page); + if (!PageLRU(page)) + return; + if (PageUnevictable(page)) return; @@ -574,7 +568,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) { - if (PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -590,7 +584,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec) { - if (PageAnon(page) && PageSwapBacked(page) && + if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); @@ -1055,20 +1049,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) */ void __pagevec_lru_add(struct pagevec *pvec) { - int i; - struct lruvec *lruvec = NULL; - unsigned long flags = 0; - - for (i = 0; i < pagevec_count(pvec); i++) { - struct page *page = pvec->pages[i]; - - lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags); - __pagevec_lru_add_fn(page, lruvec); - } - if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); - release_pages(pvec->pages, pvec->nr); - pagevec_reinit(pvec); + pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn); } /** diff --git a/mm/vmscan.c b/mm/vmscan.c index 5c30dffce768..a16b28da0878 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4469,18 +4469,17 @@ void check_move_unevictable_pages(struct pagevec *pvec) nr_pages = thp_nr_pages(page); pgscanned += nr_pages; - /* 
block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) + lruvec = relock_page_lruvec_irq(page, lruvec); + + if (!PageLRU(page) || !PageUnevictable(page)) continue; - lruvec = relock_page_lruvec_irq(page, lruvec); - if (page_evictable(page) && PageUnevictable(page)) { + if (page_evictable(page)) { del_page_from_lru_list(page, lruvec); ClearPageUnevictable(page); add_page_to_lru_list(page, lruvec); pgrescued += nr_pages; } - SetPageLRU(page); } if (lruvec) {
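To recap the pattern this patch converges on, the retry loop in
lock_page_lruvec() shown above can be condensed into the following
sketch (illustration only, simplified from the mm/memcontrol.c hunk;
the _irq and _irqsave variants follow the same shape):

static struct lruvec *lock_page_lruvec_sketch(struct page *page)
{
	struct lruvec *lruvec;

retry:
	lruvec = mem_cgroup_page_lruvec(page);
	spin_lock(&lruvec->lru_lock);

	/*
	 * The page may have been moved to another memcg, and therefore
	 * to another lruvec, before the lock was taken; re-check the
	 * binding under the lock and retry with the new lruvec if so.
	 */
	if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}

	return lruvec;
}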