From patchwork Wed Feb 16 11:51:21 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12748458
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org,
    shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v3 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
Date: Wed, 16 Feb 2022 19:51:21 +0800
Message-Id: <20220216115132.52602-2-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>

Pagecache pages are charged at allocation time and hold a reference to the original memory cgroup until they are reclaimed. Depending on the memory pressure, the specific patterns of page sharing between different cgroups, and the cgroup creation and destruction rates, a large number of dying memory cgroups can be pinned by pagecache pages. This makes page reclaim less efficient and wastes memory.

We can fix this by converting LRU pages and most other raw memcg pins to the objcg direction, so that page->memcg always points to an object cgroup. The objcg infrastructure then no longer serves only CONFIG_MEMCG_KMEM. This patch moves the objcg infrastructure out of the scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for charging.

LRU pages are not accounted at the root level, but their page->memcg_data still points to the root_mem_cgroup, so page->memcg_data of an LRU page always points to a valid pointer. However, the root_mem_cgroup does not have an object cgroup. If we use the obj_cgroup APIs to charge LRU pages, we have to set page->memcg_data to a root object cgroup, so we also allocate an object cgroup for the root_mem_cgroup.
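As a rough illustration of why this indirection helps (a minimal userspace model, not the kernel code; the structure names below only mirror the kernel's), routing page->memcg_data through an obj_cgroup means that offlining a cgroup only has to re-point objcg->memcg at the parent instead of waiting for every page to be reclaimed:

    #include <stdio.h>

    struct mem_cgroup { const char *name; };
    struct obj_cgroup { struct mem_cgroup *memcg; };
    struct page { struct obj_cgroup *objcg; };   /* models page->memcg_data */

    /* models memcg_reparent_objcgs(): pages keep their objcg, only the target memcg changes */
    static void reparent_objcg(struct obj_cgroup *objcg, struct mem_cgroup *parent)
    {
        objcg->memcg = parent;
    }

    int main(void)
    {
        struct mem_cgroup root = { "root" }, child = { "child" };
        struct obj_cgroup child_objcg = { &child };
        struct page page = { &child_objcg };

        printf("page charged to %s\n", page.objcg->memcg->name);
        reparent_objcg(&child_objcg, &root);    /* "child" goes offline */
        printf("page now charged to %s\n", page.objcg->memcg->name);
        return 0;
    }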
Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 2 +- mm/memcontrol.c | 66 +++++++++++++++++++++++++++++----------------- 2 files changed, 43 insertions(+), 25 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 0abbd685703b..81a2720653d0 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -314,10 +314,10 @@ struct mem_cgroup { #ifdef CONFIG_MEMCG_KMEM int kmemcg_id; +#endif struct obj_cgroup __rcu *objcg; /* list of inherited objcgs, protected by objcg_lock */ struct list_head objcg_list; -#endif MEMCG_PADDING(_pad2_); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 36e9f38c919d..6501f5b6df4b 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -253,9 +253,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr) return container_of(vmpr, struct mem_cgroup, vmpressure); } -#ifdef CONFIG_MEMCG_KMEM static DEFINE_SPINLOCK(objcg_lock); +#ifdef CONFIG_MEMCG_KMEM bool mem_cgroup_kmem_disabled(void) { return cgroup_memory_nokmem; @@ -264,12 +264,10 @@ bool mem_cgroup_kmem_disabled(void) static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg, unsigned int nr_pages); -static void obj_cgroup_release(struct percpu_ref *ref) +static void obj_cgroup_release_bytes(struct obj_cgroup *objcg) { - struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt); unsigned int nr_bytes; unsigned int nr_pages; - unsigned long flags; /* * At this point all allocated objects are freed, and @@ -283,9 +281,9 @@ static void obj_cgroup_release(struct percpu_ref *ref) * 3) CPU1: a process from another memcg is allocating something, * the stock if flushed, * objcg->nr_charged_bytes = PAGE_SIZE - 92 - * 5) CPU0: we do release this object, + * 4) CPU0: we do release this object, * 92 bytes are added to stock->nr_bytes - * 6) CPU0: stock is flushed, + * 5) CPU0: stock is flushed, * 92 bytes are added to objcg->nr_charged_bytes * * In the result, nr_charged_bytes == PAGE_SIZE. @@ -297,6 +295,19 @@ static void obj_cgroup_release(struct percpu_ref *ref) if (nr_pages) obj_cgroup_uncharge_pages(objcg, nr_pages); +} +#else +static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg) +{ +} +#endif + +static void obj_cgroup_release(struct percpu_ref *ref) +{ + struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt); + unsigned long flags; + + obj_cgroup_release_bytes(objcg); spin_lock_irqsave(&objcg_lock, flags); list_del(&objcg->list); @@ -325,10 +336,14 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static void memcg_reparent_objcgs(struct mem_cgroup *memcg, - struct mem_cgroup *parent) +static void memcg_reparent_objcgs(struct mem_cgroup *memcg) { struct obj_cgroup *objcg, *iter; + struct mem_cgroup *parent; + + parent = parent_mem_cgroup(memcg); + if (!parent) + parent = root_mem_cgroup; objcg = rcu_replace_pointer(memcg->objcg, NULL, true); @@ -347,6 +362,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg, percpu_ref_kill(&objcg->refcnt); } +#ifdef CONFIG_MEMCG_KMEM /* * This will be used as a shrinker list's index. 
* The main reason for not using cgroup id for this: @@ -3624,7 +3640,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css, #ifdef CONFIG_MEMCG_KMEM static int memcg_online_kmem(struct mem_cgroup *memcg) { - struct obj_cgroup *objcg; int memcg_id; if (cgroup_memory_nokmem) @@ -3636,14 +3651,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg) if (memcg_id < 0) return memcg_id; - objcg = obj_cgroup_alloc(); - if (!objcg) { - memcg_free_cache_id(memcg_id); - return -ENOMEM; - } - objcg->memcg = memcg; - rcu_assign_pointer(memcg->objcg, objcg); - static_branch_enable(&memcg_kmem_enabled_key); memcg->kmemcg_id = memcg_id; @@ -3663,8 +3670,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg) if (!parent) parent = root_mem_cgroup; - memcg_reparent_objcgs(memcg, parent); - kmemcg_id = memcg->kmemcg_id; BUG_ON(kmemcg_id < 0); @@ -5166,8 +5171,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void) memcg->socket_pressure = jiffies; #ifdef CONFIG_MEMCG_KMEM memcg->kmemcg_id = -1; - INIT_LIST_HEAD(&memcg->objcg_list); #endif + INIT_LIST_HEAD(&memcg->objcg_list); #ifdef CONFIG_CGROUP_WRITEBACK INIT_LIST_HEAD(&memcg->cgwb_list); for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++) @@ -5239,16 +5244,22 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) static int mem_cgroup_css_online(struct cgroup_subsys_state *css) { struct mem_cgroup *memcg = mem_cgroup_from_css(css); + struct obj_cgroup *objcg; /* * A memcg must be visible for expand_shrinker_info() * by the time the maps are allocated. So, we allocate maps * here, when for_each_mem_cgroup() can't skip it. */ - if (alloc_shrinker_info(memcg)) { - mem_cgroup_id_remove(memcg); - return -ENOMEM; - } + if (alloc_shrinker_info(memcg)) + goto remove_id; + + objcg = obj_cgroup_alloc(); + if (!objcg) + goto free_shrinker; + + objcg->memcg = memcg; + rcu_assign_pointer(memcg->objcg, objcg); /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); @@ -5258,6 +5269,12 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ); return 0; + +free_shrinker: + free_shrinker_info(memcg); +remove_id: + mem_cgroup_id_remove(memcg); + return -ENOMEM; } static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) @@ -5281,6 +5298,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_low(&memcg->memory, 0); memcg_offline_kmem(memcg); + memcg_reparent_objcgs(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); From patchwork Wed Feb 16 11:51:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12748459 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50015C433F5 for ; Wed, 16 Feb 2022 11:52:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 909B06B0078; Wed, 16 Feb 2022 06:52:15 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 891E16B007B; Wed, 16 Feb 2022 06:52:15 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6E5646B007D; Wed, 16 Feb 2022 06:52:15 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0235.hostedemail.com [216.40.44.235]) by kanga.kvack.org (Postfix) with 
ESMTP id 5D8976B0078 for ; Wed, 16 Feb 2022 06:52:15 -0500 (EST)
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v3 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
Date: Wed, 16 Feb 2022 19:51:22 +0800
Message-Id: <20220216115132.52602-3-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>
If we reuse the objcg APIs to charge LRU pages, the result of folio_memcg() can change when the LRU pages are reparented. In this case, we need to acquire the new lruvec lock.

    lruvec = folio_lruvec(folio);

    // The page is reparented.

    compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

    // Acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter, so we cannot detect this change. If it took the folio as a parameter to acquire the lruvec lock, then when the page's memcg is changed we could use folio_memcg() to detect whether we need to reacquire the new lruvec lock. So compact_lock_irqsave() is not suitable for us. Similar to folio_lruvec_lock_irqsave(), introduce compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in the compaction routine.

Signed-off-by: Muchun Song
--- mm/compaction.c | 31 +++++++++++++++++++++++++++---- 1 file changed, 27 insertions(+), 4 deletions(-) diff --git a/mm/compaction.c b/mm/compaction.c index b4e94cda3019..58d0e91cde49 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags, return true; } +static struct lruvec * +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags, + struct compact_control *cc) +{ + struct lruvec *lruvec; + + lruvec = folio_lruvec(folio); + + /* Track if the lock is contended in async mode */ + if (cc->mode == MIGRATE_ASYNC && !cc->contended) { + if (spin_trylock_irqsave(&lruvec->lru_lock, *flags)) + goto out; + + cc->contended = true; + } + + spin_lock_irqsave(&lruvec->lru_lock, *flags); +out: + lruvec_memcg_debug(lruvec, folio); + + return lruvec; +} + /* * Compaction requires the taking of some coarse locks that are potentially * very heavily contended.
The lock should be periodically unlocked to avoid @@ -843,6 +866,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, /* Time to isolate some pages for migration */ for (; low_pfn < end_pfn; low_pfn++) { + struct folio *folio; if (skip_on_failure && low_pfn >= next_skip_pfn) { /* @@ -1028,18 +1052,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (!TestClearPageLRU(page)) goto isolate_fail_put; - lruvec = folio_lruvec(page_folio(page)); + folio = page_folio(page); + lruvec = folio_lruvec(folio); /* If we already hold the lock, we can skip some rechecking */ if (lruvec != locked) { if (locked) unlock_page_lruvec_irqrestore(locked, flags); - compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); + lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc); locked = lruvec; - lruvec_memcg_debug(lruvec, page_folio(page)); - /* Try get exclusive access under lock */ if (!skip_updated) { skip_updated = true;
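The "lock, then recheck" discipline that the next patch adds on top of this helper can be modeled in userspace like this (illustration only; the real kernel code additionally relies on RCU to keep the old lruvec structure alive across the retry, which a mutex-based model cannot show):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct lruvec { pthread_mutex_t lru_lock; const char *owner; };
    struct folio { _Atomic(struct lruvec *) lruvec; };   /* models folio_lruvec()/folio_memcg() */

    static struct lruvec child_lruvec = { PTHREAD_MUTEX_INITIALIZER, "child memcg" };
    static struct lruvec parent_lruvec = { PTHREAD_MUTEX_INITIALIZER, "parent memcg" };

    /* models folio_lruvec_lock(): retry until the locked lruvec is still the folio's lruvec */
    static struct lruvec *folio_lruvec_lock(struct folio *folio)
    {
        struct lruvec *lruvec;

    retry:
        lruvec = atomic_load(&folio->lruvec);
        pthread_mutex_lock(&lruvec->lru_lock);
        if (lruvec != atomic_load(&folio->lruvec)) {   /* reparented in the meantime */
            pthread_mutex_unlock(&lruvec->lru_lock);
            goto retry;
        }
        return lruvec;
    }

    int main(void)
    {
        struct folio folio;
        struct lruvec *locked;

        atomic_init(&folio.lruvec, &child_lruvec);
        locked = folio_lruvec_lock(&folio);
        printf("locked lruvec of %s\n", locked->owner);
        pthread_mutex_unlock(&locked->lru_lock);

        atomic_store(&folio.lruvec, &parent_lruvec);   /* models reparenting the folio */
        locked = folio_lruvec_lock(&folio);
        printf("locked lruvec of %s\n", locked->owner);
        pthread_mutex_unlock(&locked->lru_lock);
        return 0;
    }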
From patchwork Wed Feb 16 11:51:23 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12748460
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v3 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
Date: Wed, 16 Feb 2022 19:51:23 +0800
Message-Id: <20220216115132.52602-4-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>

The diagram below shows how to make the folio lruvec lock safe when LRU pages are reparented.

    folio_lruvec_lock(folio)
    retry:
        lruvec = folio_lruvec(folio);

        // The folio is reparented at this time.
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
            // Acquired the wrong lruvec lock and need to retry.
            // Because this folio is on the parent memcg lruvec list.
            goto retry;

        // If we reach here, it means that folio_memcg(folio) is stable.

    memcg_reparent_objcgs(memcg)
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        // Move all the pages from the lruvec list to the parent lruvec list.

        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio has been reparented. If so, we need to reacquire the new lruvec lock. On the LRU page reparenting path we will also acquire the lruvec lock (implemented in a later patch), so folio_memcg() cannot change while we hold the lruvec lock. Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we hold the lruvec lock, the lruvec_memcg_debug() check is pointless.
So remove it. This is a preparation for reparenting the LRU pages. Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 18 +++---------- mm/compaction.c | 10 +++++++- mm/memcontrol.c | 63 +++++++++++++++++++++++++++++----------------- mm/swap.c | 4 +++ 4 files changed, 56 insertions(+), 39 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 81a2720653d0..961e9f9b6567 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -737,7 +737,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, * folio_lruvec - return lruvec for isolating/putting an LRU folio * @folio: Pointer to the folio. * - * This function relies on folio->mem_cgroup being stable. + * The lruvec can be changed to its parent lruvec when the page reparented. + * The caller need to recheck if it cares about this changes (just like + * folio_lruvec_lock() does). */ static inline struct lruvec *folio_lruvec(struct folio *folio) { @@ -756,15 +758,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio); struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags); -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio); -#else -static inline -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ -} -#endif - static inline struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){ return css ? container_of(css, struct mem_cgroup, css) : NULL; @@ -1227,11 +1220,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio) return &pgdat->__lruvec; } -static inline -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ -} - static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg) { return NULL; diff --git a/mm/compaction.c b/mm/compaction.c index 58d0e91cde49..eebe55e596fd 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -515,6 +515,8 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags, { struct lruvec *lruvec; + rcu_read_lock(); +retry: lruvec = folio_lruvec(folio); /* Track if the lock is contended in async mode */ @@ -527,7 +529,13 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags, spin_lock_irqsave(&lruvec->lru_lock, *flags); out: - lruvec_memcg_debug(lruvec, folio); + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + + /* See the comments in folio_lruvec_lock(). */ + rcu_read_unlock(); return lruvec; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 6501f5b6df4b..7c7672631456 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1178,23 +1178,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg, return ret; } -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ - struct mem_cgroup *memcg; - - if (mem_cgroup_disabled()) - return; - - memcg = folio_memcg(folio); - - if (!memcg) - VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio); - else - VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio); -} -#endif - /** * folio_lruvec_lock - Lock the lruvec for a folio. * @folio: Pointer to the folio. 
@@ -1209,10 +1192,24 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) */ struct lruvec *folio_lruvec_lock(struct folio *folio) { - struct lruvec *lruvec = folio_lruvec(folio); + struct lruvec *lruvec; + + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); spin_lock(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, folio); + + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock(&lruvec->lru_lock); + goto retry; + } + + /* + * Preemption is disabled in the internal of spin_lock, which can serve + * as RCU read-side critical sections. + */ + rcu_read_unlock(); return lruvec; } @@ -1232,10 +1229,20 @@ struct lruvec *folio_lruvec_lock(struct folio *folio) */ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) { - struct lruvec *lruvec = folio_lruvec(folio); + struct lruvec *lruvec; + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, folio); + + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock_irq(&lruvec->lru_lock); + goto retry; + } + + /* See the comments in folio_lruvec_lock(). */ + rcu_read_unlock(); return lruvec; } @@ -1257,10 +1264,20 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags) { - struct lruvec *lruvec = folio_lruvec(folio); + struct lruvec *lruvec; + rcu_read_lock(); +retry: + lruvec = folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); - lruvec_memcg_debug(lruvec, folio); + + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + + /* See the comments in folio_lruvec_lock(). */ + rcu_read_unlock(); return lruvec; } diff --git a/mm/swap.c b/mm/swap.c index bcf3ac288b56..9c2bcc2651c6 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -305,6 +305,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) void lru_note_cost_folio(struct folio *folio) { + /* + * The rcu read lock is held by the caller, so we do not need to + * care about the lruvec returned by folio_lruvec() being released. 
+ */ lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio), folio_nr_pages(folio)); }
From patchwork Wed Feb 16 11:51:24 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12748461
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v3 04/12] mm: vmscan: rework move_pages_to_lru()
Date: Wed, 16 Feb 2022 19:51:24 +0800
Message-Id: <20220216115132.52602-5-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>

In a later patch, we will reparent LRU pages. The pages being moved to the appropriate LRU list can be reparented during move_pages_to_lru(), so it is wrong for the caller to hold a single lruvec lock for the whole list; we should use the more general interface folio_lruvec_relock_irq() to acquire the correct lruvec lock.

Signed-off-by: Muchun Song
--- include/linux/mm.h | 1 + mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------ 2 files changed, 26 insertions(+), 24 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 213cc569b192..fc270e52a9a3 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -227,6 +227,7 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *, #define PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE) #define lru_to_page(head) (list_entry((head)->prev, struct page, lru)) +#define lru_to_folio(head) (list_entry((head)->prev, struct folio, lru)) void setup_initial_init_mm(void *start_code, void *end_code, void *end_data, void *brk); diff --git a/mm/vmscan.c b/mm/vmscan.c index 59b14e0d696c..7beed9041e0a 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2305,23 +2305,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file, * move_pages_to_lru() moves pages from private @list to appropriate LRU list. * On return, @list is reused as a list of pages to be freed by the caller. * - * Returns the number of pages moved to the given lruvec. + * Returns the number of pages moved to the appropriate LRU list. + * + * Note: The caller must not hold any lruvec lock.
*/ -static unsigned int move_pages_to_lru(struct lruvec *lruvec, - struct list_head *list) +static unsigned int move_pages_to_lru(struct list_head *list) { - int nr_pages, nr_moved = 0; + int nr_moved = 0; + struct lruvec *lruvec = NULL; LIST_HEAD(pages_to_free); - struct page *page; while (!list_empty(list)) { - page = lru_to_page(list); + int nr_pages; + struct folio *folio = lru_to_folio(list); + struct page *page = &folio->page; + + lruvec = folio_lruvec_relock_irq(folio, lruvec); VM_BUG_ON_PAGE(PageLRU(page), page); list_del(&page->lru); if (unlikely(!page_evictable(page))) { - spin_unlock_irq(&lruvec->lru_lock); + unlock_page_lruvec_irq(lruvec); putback_lru_page(page); - spin_lock_irq(&lruvec->lru_lock); + lruvec = NULL; continue; } @@ -2342,20 +2347,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, __clear_page_lru_flags(page); if (unlikely(PageCompound(page))) { - spin_unlock_irq(&lruvec->lru_lock); + unlock_page_lruvec_irq(lruvec); destroy_compound_page(page); - spin_lock_irq(&lruvec->lru_lock); + lruvec = NULL; } else list_add(&page->lru, &pages_to_free); continue; } - /* - * All pages were isolated from the same lruvec (and isolation - * inhibits memcg migration). - */ - VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page); + VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page); add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); nr_moved += nr_pages; @@ -2363,6 +2364,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, workingset_age_nonresident(lruvec, nr_pages); } + if (lruvec) + unlock_page_lruvec_irq(lruvec); /* * To save our caller's stack, now use input list for pages to free. */ @@ -2436,16 +2439,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec, nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false); - spin_lock_irq(&lruvec->lru_lock); - move_pages_to_lru(lruvec, &page_list); + move_pages_to_lru(&page_list); + local_irq_disable(); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed); __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed); - spin_unlock_irq(&lruvec->lru_lock); + local_irq_enable(); lru_note_cost(lruvec, file, stat.nr_pageout); mem_cgroup_uncharge_list(&page_list); @@ -2572,18 +2575,16 @@ static void shrink_active_list(unsigned long nr_to_scan, /* * Move pages back to the lru list. 
*/ - spin_lock_irq(&lruvec->lru_lock); - - nr_activate = move_pages_to_lru(lruvec, &l_active); - nr_deactivate = move_pages_to_lru(lruvec, &l_inactive); + nr_activate = move_pages_to_lru(&l_active); + nr_deactivate = move_pages_to_lru(&l_inactive); /* Keep all free pages in l_active list */ list_splice(&l_inactive, &l_active); + local_irq_disable(); __count_vm_events(PGDEACTIVATE, nr_deactivate); __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate); - __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - spin_unlock_irq(&lruvec->lru_lock); + local_irq_enable(); mem_cgroup_uncharge_list(&l_active); free_unref_page_list(&l_active);
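The per-folio relocking that this rework enables can be sketched as a small userspace model (illustration only; the kernel helper folio_lruvec_relock_irq() additionally disables interrupts and rechecks the lruvec after locking, as in the previous patch):

    #include <pthread.h>
    #include <stdio.h>

    struct lruvec { pthread_mutex_t lru_lock; const char *name; };
    struct folio { struct lruvec *lruvec; };

    /* models folio_lruvec_relock_irq(): only switch locks when the owning lruvec changes */
    static struct lruvec *relock(struct folio *folio, struct lruvec *locked)
    {
        if (locked == folio->lruvec)
            return locked;
        if (locked)
            pthread_mutex_unlock(&locked->lru_lock);
        pthread_mutex_lock(&folio->lruvec->lru_lock);
        return folio->lruvec;
    }

    int main(void)
    {
        static struct lruvec a = { PTHREAD_MUTEX_INITIALIZER, "memcg A" };
        static struct lruvec b = { PTHREAD_MUTEX_INITIALIZER, "parent memcg B" };
        struct folio folios[3] = { { &a }, { &b }, { &b } };
        struct lruvec *locked = NULL;

        for (int i = 0; i < 3; i++) {
            locked = relock(&folios[i], locked);
            printf("folio %d put back under %s\n", i, locked->name);
        }
        if (locked)
            pthread_mutex_unlock(&locked->lru_lock);
        return 0;
    }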
From patchwork Wed Feb 16 11:51:25 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12748462
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v3 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}()
Date: Wed, 16 Feb 2022 19:51:25 +0800
Message-Id: <20220216115132.52602-6-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>

We should make the THP deferred split queue lock safe when LRU pages are reparented. Similar to folio_lruvec_lock{_irqsave,_irq}(), we introduce folio_split_queue_lock{_irqsave}() so that the deferred split queue lock is easier to handle when pages are reparented. In the next patch, we can then use a similar approach (just like the lruvec lock) to make the THP deferred split queue lock safe when LRU pages are reparented.
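The idea behind the wrapper can be modeled in userspace as follows (illustration only, with simplified types: the folio selects either its memcg's deferred split queue or the per-node one, and the lock/unlock pair is centralized so a later patch can add the recheck in exactly one place):

    #include <pthread.h>
    #include <stdio.h>

    struct deferred_split { pthread_mutex_t split_queue_lock; const char *name; };
    struct folio { struct deferred_split *memcg_queue; int nid; };

    static struct deferred_split node_queue[1] = {
        { PTHREAD_MUTEX_INITIALIZER, "node 0 queue" },
    };

    /* models folio_split_queue(): per-memcg queue if charged, otherwise per-node queue */
    static struct deferred_split *folio_split_queue(struct folio *folio)
    {
        return folio->memcg_queue ? folio->memcg_queue : &node_queue[folio->nid];
    }

    /* models folio_split_queue_lock() */
    static struct deferred_split *folio_split_queue_lock(struct folio *folio)
    {
        struct deferred_split *queue = folio_split_queue(folio);

        pthread_mutex_lock(&queue->split_queue_lock);
        return queue;
    }

    int main(void)
    {
        static struct deferred_split memcg_queue = { PTHREAD_MUTEX_INITIALIZER, "memcg queue" };
        struct folio charged = { &memcg_queue, 0 }, uncharged = { NULL, 0 };
        struct deferred_split *q;

        q = folio_split_queue_lock(&charged);
        printf("locked %s\n", q->name);
        pthread_mutex_unlock(&q->split_queue_lock);

        q = folio_split_queue_lock(&uncharged);
        printf("locked %s\n", q->name);
        pthread_mutex_unlock(&q->split_queue_lock);
        return 0;
    }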
Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 10 +++++ mm/huge_memory.c | 97 +++++++++++++++++++++++++++++++++------------- 2 files changed, 80 insertions(+), 27 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 961e9f9b6567..df607c9de500 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1633,6 +1633,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg); void free_shrinker_info(struct mem_cgroup *memcg); void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); void reparent_shrinker_deferred(struct mem_cgroup *memcg); + +static inline int shrinker_id(struct shrinker *shrinker) +{ + return shrinker->id; +} #else #define mem_cgroup_sockets_enabled 0 static inline void mem_cgroup_sk_alloc(struct sock *sk) { }; @@ -1646,6 +1651,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) { } + +static inline int shrinker_id(struct shrinker *shrinker) +{ + return -1; +} #endif #ifdef CONFIG_MEMCG_KMEM diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 406a3c28c026..a227731988b3 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -499,25 +499,70 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG -static inline struct deferred_split *get_deferred_split_queue(struct page *page) +static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue) { - struct mem_cgroup *memcg = page_memcg(compound_head(page)); - struct pglist_data *pgdat = NODE_DATA(page_to_nid(page)); + if (mem_cgroup_disabled()) + return NULL; + return container_of(queue, struct mem_cgroup, deferred_split_queue); +} - if (memcg) - return &memcg->deferred_split_queue; - else - return &pgdat->deferred_split_queue; +static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio) +{ + struct mem_cgroup *memcg = folio_memcg(folio); + + return memcg ? &memcg->deferred_split_queue : NULL; } #else -static inline struct deferred_split *get_deferred_split_queue(struct page *page) +static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue) { - struct pglist_data *pgdat = NODE_DATA(page_to_nid(page)); + return NULL; +} - return &pgdat->deferred_split_queue; +static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio) +{ + return NULL; } #endif +static struct deferred_split *folio_split_queue(struct folio *folio) +{ + struct deferred_split *queue = folio_memcg_split_queue(folio); + + return queue ? 
: &NODE_DATA(folio_nid(folio))->deferred_split_queue; +} + +static struct deferred_split *folio_split_queue_lock(struct folio *folio) +{ + struct deferred_split *queue; + + queue = folio_split_queue(folio); + spin_lock(&queue->split_queue_lock); + + return queue; +} + +static struct deferred_split * +folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags) +{ + struct deferred_split *queue; + + queue = folio_split_queue(folio); + spin_lock_irqsave(&queue->split_queue_lock, *flags); + + return queue; +} + +static inline void split_queue_unlock(struct deferred_split *queue) +{ + spin_unlock(&queue->split_queue_lock); +} + +static inline void split_queue_unlock_irqrestore(struct deferred_split *queue, + unsigned long flags) +{ + spin_unlock_irqrestore(&queue->split_queue_lock, flags); +} + void prep_transhuge_page(struct page *page) { /* @@ -2602,8 +2647,9 @@ bool can_split_huge_page(struct page *page, int *pextra_pins) */ int split_huge_page_to_list(struct page *page, struct list_head *list) { - struct page *head = compound_head(page); - struct deferred_split *ds_queue = get_deferred_split_queue(head); + struct folio *folio = page_folio(page); + struct page *head = &folio->page; + struct deferred_split *ds_queue; XA_STATE(xas, &head->mapping->i_pages, head->index); struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; @@ -2690,13 +2736,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) } /* Prevent deferred_split_scan() touching ->_refcount */ - spin_lock(&ds_queue->split_queue_lock); + ds_queue = folio_split_queue_lock(folio); if (page_ref_freeze(head, 1 + extra_pins)) { if (!list_empty(page_deferred_list(head))) { ds_queue->split_queue_len--; list_del(page_deferred_list(head)); } - spin_unlock(&ds_queue->split_queue_lock); + split_queue_unlock(ds_queue); if (mapping) { int nr = thp_nr_pages(head); @@ -2714,7 +2760,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) __split_huge_page(page, list, end); ret = 0; } else { - spin_unlock(&ds_queue->split_queue_lock); + split_queue_unlock(ds_queue); fail: if (mapping) xas_unlock(&xas); @@ -2739,24 +2785,21 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) void free_transhuge_page(struct page *page) { - struct deferred_split *ds_queue = get_deferred_split_queue(page); + struct deferred_split *ds_queue; unsigned long flags; - spin_lock_irqsave(&ds_queue->split_queue_lock, flags); + ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags); if (!list_empty(page_deferred_list(page))) { ds_queue->split_queue_len--; list_del(page_deferred_list(page)); } - spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); + split_queue_unlock_irqrestore(ds_queue, flags); free_compound_page(page); } void deferred_split_huge_page(struct page *page) { - struct deferred_split *ds_queue = get_deferred_split_queue(page); -#ifdef CONFIG_MEMCG - struct mem_cgroup *memcg = page_memcg(compound_head(page)); -#endif + struct deferred_split *ds_queue; unsigned long flags; VM_BUG_ON_PAGE(!PageTransHuge(page), page); @@ -2774,18 +2817,18 @@ void deferred_split_huge_page(struct page *page) if (PageSwapCache(page)) return; - spin_lock_irqsave(&ds_queue->split_queue_lock, flags); + ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags); if (list_empty(page_deferred_list(page))) { + struct mem_cgroup *memcg = split_queue_memcg(ds_queue); + count_vm_event(THP_DEFERRED_SPLIT_PAGE); list_add_tail(page_deferred_list(page), &ds_queue->split_queue); 
ds_queue->split_queue_len++; -#ifdef CONFIG_MEMCG if (memcg) set_shrinker_bit(memcg, page_to_nid(page), - deferred_split_shrinker.id); -#endif + shrinker_id(&deferred_split_shrinker)); } - spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); + split_queue_unlock_irqrestore(ds_queue, flags); } static unsigned long deferred_split_count(struct shrinker *shrink,
From patchwork Wed Feb 16 11:51:26 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12748463
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v3 06/12] mm: thp: make split queue lock safe when LRU pages are reparented
Date: Wed, 16 Feb 2022 19:51:26 +0800
Message-Id: <20220216115132.52602-7-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>

Similar to the lruvec lock, we use the same approach to make the split queue lock safe when LRU pages are reparented.

Signed-off-by: Muchun Song
--- mm/huge_memory.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index a227731988b3..b8c6e766c91c 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -535,9 +535,22 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio) { struct deferred_split *queue; + rcu_read_lock(); +retry: queue = folio_split_queue(folio); spin_lock(&queue->split_queue_lock); + if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) { + spin_unlock(&queue->split_queue_lock); + goto retry; + } + + /* + * Preemption is disabled in the internal of spin_lock, which can serve + * as RCU read-side critical sections. + */ + rcu_read_unlock(); + return queue; } @@ -546,9 +559,19 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags) { struct deferred_split *queue; + rcu_read_lock(); +retry: queue = folio_split_queue(folio); spin_lock_irqsave(&queue->split_queue_lock, *flags); + if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) { + spin_unlock_irqrestore(&queue->split_queue_lock, *flags); + goto retry; + } + + /* See the comments in folio_split_queue_lock().
*/ + rcu_read_unlock(); + return queue; } From patchwork Wed Feb 16 11:51:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12748464 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3400AC433F5 for ; Wed, 16 Feb 2022 11:52:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id BEEAE6B0081; Wed, 16 Feb 2022 06:52:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B78726B0082; Wed, 16 Feb 2022 06:52:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9F1826B0083; Wed, 16 Feb 2022 06:52:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0108.hostedemail.com [216.40.44.108]) by kanga.kvack.org (Postfix) with ESMTP id 8CF346B0081 for ; Wed, 16 Feb 2022 06:52:45 -0500 (EST) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 4FBEB181AC9C6 for ; Wed, 16 Feb 2022 11:52:45 +0000 (UTC) X-FDA: 79148481090.20.26122D4 Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com [209.85.210.171]) by imf08.hostedemail.com (Postfix) with ESMTP id C1572160003 for ; Wed, 16 Feb 2022 11:52:44 +0000 (UTC) Received: by mail-pf1-f171.google.com with SMTP id d187so1919549pfa.10 for ; Wed, 16 Feb 2022 03:52:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ehgm4bZ9T8zfc3OBtMyS8a0oj2CsPXwDpvgWKkCOIvY=; b=NKqOxmOesPE3d6DrmZfDGvssc9NxEN64pl3DAetzC31QXJ7FUXMKfTKLb7DsI9vo1X d4moMpjaj7HOs/Fwj1N6RhiVF4UU1a9x7XfUTdZmVazgMR2Pbo7vCycvYS7U/uSN3gNW Kxoo4pzmQY+YS5RNNT/qvjfKxv2NkuwdqzHiHxRlWCny6eNsm7lOoTMHwhi3YA9ycgnv v2KiPqEXhMWliQ0oOB9tLT7cUvudWcfQmw6VdyBgwOE8jldGKC0Vkh0+iTfiLAPKbM30 YjQ/oTh2tUvzPh86jbpUGVfhxn28hvdvuYycUqw+GFaO3wxSvbnKdkkqjEALLEzgvAcW yaXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ehgm4bZ9T8zfc3OBtMyS8a0oj2CsPXwDpvgWKkCOIvY=; b=jmIH9vWySbibu0y0sPWUnxafhpUzkIZ8mdhK6G9g3ujldHli+QLlbeC5G7Rs2iCuaM YtV725zKDlIgq/FvxLRK4fJA4BN/GavnV7uzYemKwqEnT3bFYppT27Sps7NJ8A6TUxe0 f7xgT6njTq0DQhOi0973AkcZp/F6CJEoddH68+uD/w6+4ilrN9io6Vsjf0IlvUNHMkJE Q3An8VzyNiXrPTS0zms6MALAmX4Csz3g/l6X6d6gpQssm7GT6GFgmsxvdAPqxqPR8HWv fBQqRoXTSU7E6etgyletoeFr6k8snKc8HhGmJjpUAyO1mDRvHnvcxgJXY21WiPgQ+kwh 1xHg== X-Gm-Message-State: AOAM530xWX0oNT2EH6u2kzfsyfXD6t5cf5QZ0h4tsq80EFnltp+2lnhw qiLYaQtKkUTU4E5TAhnxdINHJg== X-Google-Smtp-Source: ABdhPJxSgZOdBmBUY7aCW2+/epfBVifBjz/zVVrsZjAeQuXY9TGBCuNC8vx35vvo6Pu9PdBzcs+Zeg== X-Received: by 2002:a05:6a00:13a8:b0:4e1:6da6:b7a8 with SMTP id t40-20020a056a0013a800b004e16da6b7a8mr2715170pfg.27.1645012363781; Wed, 16 Feb 2022 03:52:43 -0800 (PST) Received: from FVFYT0MHHV2J.tiktokcdn.com ([139.177.225.249]) by smtp.gmail.com with ESMTPSA id m16sm14790221pfc.156.2022.02.16.03.52.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Feb 2022 03:52:43 -0800 (PST) From: Muchun Song To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, 
akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song Subject: [PATCH v3 07/12] mm: memcontrol: make all the callers of {folio,page}_memcg() safe Date: Wed, 16 Feb 2022 19:51:27 +0800 Message-Id: <20220216115132.52602-8-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.0 (Apple Git-132) In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com> References: <20220216115132.52602-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: C1572160003 X-Rspam-User: Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=NKqOxmOe; spf=pass (imf08.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.210.171 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com X-Stat-Signature: 6h9hi1stronf1ieo9rpy3u1dobdbkukt X-Rspamd-Server: rspam11 X-HE-Tag: 1645012364-613942 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When we use the objcg APIs to charge LRU pages, the page no longer holds a reference to the memcg associated with it. So a caller of {folio,page}_memcg() should hold an RCU read lock or obtain a reference to the memcg associated with the page to protect the memcg from being released. Therefore, introduce get_mem_cgroup_from_{page,folio}() to obtain a reference to the memory cgroup associated with a page. In this patch, make all the callers hold an RCU read lock or obtain a reference to the memcg to protect the memcg from being released when the LRU pages are reparented. We do not need to adjust the callers of {folio,page}_memcg() during the whole process of mem_cgroup_move_task(), because the cgroup migration and memory cgroup offlining are serialized by @cgroup_mutex; in this routine the LRU pages cannot be reparented to their parent memory cgroup, so {folio,page}_memcg() is stable and the memcg cannot be released. This is a preparation for reparenting the LRU pages.
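
As a minimal sketch (illustrative only, not part of the patch), the two calling conventions described above look roughly like the following; the example_* helpers are hypothetical, while page_memcg(), count_memcg_events(), get_mem_cgroup_from_page() and mem_cgroup_put() are the interfaces this series relies on.

/* Pattern 1: a short, non-sleeping access stays inside an RCU read-side section. */
static void example_count_event(struct page *page)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = page_memcg(page);	/* may be NULL; cannot be freed under RCU */
	if (memcg)
		count_memcg_events(memcg, PGFAULT, 1);
	rcu_read_unlock();
}

/* Pattern 2: longer-lived (possibly sleeping) users take a reference instead. */
static void example_use_memcg(struct page *page)
{
	struct mem_cgroup *memcg = get_mem_cgroup_from_page(page);

	/* ... work that needs a stable memcg, possibly sleeping ... */

	mem_cgroup_put(memcg);	/* NULL-safe */
}

Either form keeps the memcg alive across the access, which is what the hunks below do for fs/buffer.c, fs/fs-writeback.c, mm/memcontrol.c, mm/migrate.c and mm/page_io.c.
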
Signed-off-by: Muchun Song --- fs/buffer.c | 4 +-- fs/fs-writeback.c | 23 ++++++++-------- include/linux/memcontrol.h | 49 ++++++++++++++++++++++++++++++--- include/trace/events/writeback.h | 5 ++++ mm/memcontrol.c | 58 ++++++++++++++++++++++++++++++---------- mm/migrate.c | 4 +++ mm/page_io.c | 5 ++-- 7 files changed, 116 insertions(+), 32 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 8e112b6bd371..30a6e7aa6b7d 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -822,8 +822,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, if (retry) gfp |= __GFP_NOFAIL; - /* The page lock pins the memcg */ - memcg = page_memcg(page); + memcg = get_mem_cgroup_from_page(page); old_memcg = set_active_memcg(memcg); head = NULL; @@ -843,6 +842,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, set_bh_page(bh, page, offset); } out: + mem_cgroup_put(memcg); set_active_memcg(old_memcg); return head; /* diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index f8d7fe6db989..d6059676fc7b 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page) if (inode_cgwb_enabled(inode)) { struct cgroup_subsys_state *memcg_css; - if (page) { - memcg_css = mem_cgroup_css_from_page(page); - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); - } else { - /* must pin memcg_css, see wb_get_create() */ + /* must pin memcg_css, see wb_get_create() */ + if (page) + memcg_css = get_mem_cgroup_css_from_page(page); + else memcg_css = task_get_css(current, memory_cgrp_id); - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); - css_put(memcg_css); - } + wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC); + css_put(memcg_css); } if (!wb) @@ -868,16 +866,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page, if (!wbc->wb || wbc->no_cgroup_owner) return; - css = mem_cgroup_css_from_page(page); + css = get_mem_cgroup_css_from_page(page); /* dead cgroups shouldn't contribute to inode ownership arbitration */ if (!(css->flags & CSS_ONLINE)) - return; + goto out; id = css->id; if (id == wbc->wb_id) { wbc->wb_bytes += bytes; - return; + goto out; } if (id == wbc->wb_lcand_id) @@ -890,6 +888,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page, wbc->wb_tcand_bytes += bytes; else wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes); + +out: + css_put(css); } EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner); diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index df607c9de500..6e0f7104f2fa 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -372,7 +372,7 @@ static inline bool folio_memcg_kmem(struct folio *folio); * a valid memcg, but can be atomically swapped to the parent memcg. * * The caller must ensure that the returned memcg won't be released: - * e.g. acquire the rcu_read_lock or css_set_lock. + * e.g. acquire the rcu_read_lock or css_set_lock or cgroup_mutex. */ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) { @@ -454,6 +454,36 @@ static inline struct mem_cgroup *page_memcg(struct page *page) } /** + * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup + * associated with a folio. + * @folio: Pointer to the folio. + * + * Returns a pointer to the memory cgroup (and obtain a reference on it) + * associated with the folio, or NULL. This function assumes that the + * folio is known to have a proper memory cgroup pointer. 
It's not safe + * to call this function against some type of pages, e.g. slab pages or + * ex-slab pages. + */ +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) +{ + struct mem_cgroup *memcg; + + rcu_read_lock(); +retry: + memcg = folio_memcg(folio); + if (unlikely(memcg && !css_tryget(&memcg->css))) + goto retry; + rcu_read_unlock(); + + return memcg; +} + +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return get_mem_cgroup_from_folio(page_folio(page)); +} + +/** * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio. * @folio: Pointer to the folio. * @@ -861,7 +891,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm, return match; } -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page); +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page); ino_t page_cgroup_ino(struct page *page); static inline bool mem_cgroup_online(struct mem_cgroup *memcg) @@ -1034,10 +1064,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg, static inline void count_memcg_page_event(struct page *page, enum vm_event_item idx) { - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg; + rcu_read_lock(); + memcg = page_memcg(page); if (memcg) count_memcg_events(memcg, idx, 1); + rcu_read_unlock(); } static inline void count_memcg_event_mm(struct mm_struct *mm, @@ -1116,6 +1149,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) +{ + return NULL; +} + +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) +{ + return NULL; +} + static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { WARN_ON_ONCE(!rcu_read_lock_held()); diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h index a345b1e12daf..b5e9b07e5edb 100644 --- a/include/trace/events/writeback.h +++ b/include/trace/events/writeback.h @@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty, __entry->ino = inode ? inode->i_ino : 0; __entry->memcg_id = wb->memcg_css->id; __entry->cgroup_ino = __trace_wb_assign_cgroup(wb); + /* + * TP_fast_assign() is under preemption disabled which can + * serve as an RCU read-side critical section so that the + * memcg returned by folio_memcg() cannot be freed. + */ __entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup); ), diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 7c7672631456..dd2602149ef3 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -416,7 +416,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif /** - * mem_cgroup_css_from_page - css of the memcg associated with a page + * get_mem_cgroup_css_from_page - get css of the memcg associated with a page * @page: page of interest * * If memcg is bound to the default hierarchy, css of the memcg associated @@ -426,13 +426,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup * is returned. 
*/ -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page) +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page) { struct mem_cgroup *memcg; - memcg = page_memcg(page); + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) + return &root_mem_cgroup->css; - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + memcg = get_mem_cgroup_from_page(page); + if (!memcg) memcg = root_mem_cgroup; return &memcg->css; @@ -754,13 +756,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx, void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx, int val) { - struct page *head = compound_head(page); /* rmap on tail pages */ + struct folio *folio = page_folio(page); /* rmap on tail pages */ struct mem_cgroup *memcg; pg_data_t *pgdat = page_pgdat(page); struct lruvec *lruvec; rcu_read_lock(); - memcg = page_memcg(head); + memcg = folio_memcg(folio); /* Untracked pages have no memcg, no lruvec. Update only the node */ if (!memcg) { rcu_read_unlock(); @@ -2046,7 +2048,9 @@ void folio_memcg_lock(struct folio *folio) * The RCU lock is held throughout the transaction. The fast * path can get away without acquiring the memcg->move_lock * because page moving starts with an RCU grace period. - */ + * + * The RCU lock also protects the memcg from being freed. + */ rcu_read_lock(); if (mem_cgroup_disabled()) @@ -3336,7 +3340,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) void split_page_memcg(struct page *head, unsigned int nr) { struct folio *folio = page_folio(head); - struct mem_cgroup *memcg = folio_memcg(folio); + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); int i; if (mem_cgroup_disabled() || !memcg) @@ -3349,6 +3353,8 @@ void split_page_memcg(struct page *head, unsigned int nr) obj_cgroup_get_many(__folio_objcg(folio), nr - 1); else css_get_many(&memcg->css, nr - 1); + + css_put(&memcg->css); } #ifdef CONFIG_MEMCG_SWAP @@ -4562,7 +4568,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb) { - struct mem_cgroup *memcg = folio_memcg(folio); + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); struct memcg_cgwb_frn *frn; u64 now = get_jiffies_64(); u64 oldest_at = now; @@ -4609,6 +4615,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, frn->memcg_id = wb->memcg_css->id; frn->at = now; } + css_put(&memcg->css); } /* issue foreign writeback flushes for recorded foreign dirtying events */ @@ -6167,6 +6174,14 @@ static void mem_cgroup_move_charge(void) atomic_dec(&mc.from->moving_account); } +/* + * The cgroup migration and memory cgroup offlining are serialized by + * @cgroup_mutex. If we reach here, it means that the LRU pages cannot + * be reparented to its parent memory cgroup. So during the whole process + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not + * need to worry about the memcg (returned from page_memcg()) being + * released even if we do not hold an rcu read lock. 
+ */ static void mem_cgroup_move_task(void) { if (mc.to) { @@ -6971,7 +6986,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) if (folio_memcg(new)) return; - memcg = folio_memcg(old); + memcg = get_mem_cgroup_from_folio(old); VM_WARN_ON_ONCE_FOLIO(!memcg, old); if (!memcg) return; @@ -6990,6 +7005,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); + + css_put(&memcg->css); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7176,6 +7193,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + /* + * Interrupts should be disabled by the caller (see the comments below), + * which can serve as RCU read-side critical sections. + */ memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); @@ -7240,15 +7261,16 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) return 0; + rcu_read_lock(); memcg = page_memcg(page); VM_WARN_ON_ONCE_PAGE(!memcg, page); if (!memcg) - return 0; + goto out; if (!entry.val) { memcg_memory_event(memcg, MEMCG_SWAP_FAIL); - return 0; + goto out; } memcg = mem_cgroup_id_get_online(memcg); @@ -7258,6 +7280,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) memcg_memory_event(memcg, MEMCG_SWAP_MAX); memcg_memory_event(memcg, MEMCG_SWAP_FAIL); mem_cgroup_id_put(memcg); + rcu_read_unlock(); return -ENOMEM; } @@ -7267,6 +7290,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages); VM_BUG_ON_PAGE(oldid, page); mod_memcg_state(memcg, MEMCG_SWAP, nr_pages); +out: + rcu_read_unlock(); return 0; } @@ -7321,17 +7346,22 @@ bool mem_cgroup_swap_full(struct page *page) if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return false; + rcu_read_lock(); memcg = page_memcg(page); if (!memcg) - return false; + goto out; for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { unsigned long usage = page_counter_read(&memcg->swap); if (usage * 2 >= READ_ONCE(memcg->swap.high) || - usage * 2 >= READ_ONCE(memcg->swap.max)) + usage * 2 >= READ_ONCE(memcg->swap.max)) { + rcu_read_unlock(); return true; + } } +out: + rcu_read_unlock(); return false; } diff --git a/mm/migrate.c b/mm/migrate.c index c7da064b4781..04838c98b2ea 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -442,6 +442,10 @@ int folio_migrate_mapping(struct address_space *mapping, struct lruvec *old_lruvec, *new_lruvec; struct mem_cgroup *memcg; + /* + * Irq is disabled, which can serve as RCU read-side critical + * sections. 
+ */ memcg = folio_memcg(folio); old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat); new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat); diff --git a/mm/page_io.c b/mm/page_io.c index 0bf8e40f4e57..3d823be3445a 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -270,13 +270,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page) struct cgroup_subsys_state *css; struct mem_cgroup *memcg; + rcu_read_lock(); memcg = page_memcg(page); if (!memcg) - return; + goto out; - rcu_read_lock(); css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys); bio_associate_blkg_from_css(bio, css); +out: rcu_read_unlock(); } #else From patchwork Wed Feb 16 11:51:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12748465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C4AF3C433F5 for ; Wed, 16 Feb 2022 11:52:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 586296B0082; Wed, 16 Feb 2022 06:52:51 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 50F3C6B0083; Wed, 16 Feb 2022 06:52:51 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3AFF76B0085; Wed, 16 Feb 2022 06:52:51 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0146.hostedemail.com [216.40.44.146]) by kanga.kvack.org (Postfix) with ESMTP id 2C66A6B0082 for ; Wed, 16 Feb 2022 06:52:51 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id E742E8249980 for ; Wed, 16 Feb 2022 11:52:50 +0000 (UTC) X-FDA: 79148481300.08.5CD8AF5 Received: from mail-pf1-f182.google.com (mail-pf1-f182.google.com [209.85.210.182]) by imf25.hostedemail.com (Postfix) with ESMTP id 786E7A0002 for ; Wed, 16 Feb 2022 11:52:50 +0000 (UTC) Received: by mail-pf1-f182.google.com with SMTP id i6so1929746pfc.9 for ; Wed, 16 Feb 2022 03:52:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=35RlTAsUx3yejoEebytGRZ97Bh68Tl3UTQMeyrDekNI=; b=moRystZEHt66uvzmqSShBkGW+R2V6RH38qwB8xS4uNWWO9+kZx/5LLX6Q70OvVONti vslfKpv5q4ggM2+6oYPcXECgDEeMDjLXaqiXEnvyMCkcwWQs9TrQFf0yqliG+xu3h/G7 i/hpaJyUaAeN0zFzNJDRgtYG4oGSUW5MdPaJbpXYanP90+ohvdiR23D7bLCjoo/gdSAV BYB8GOnTlqICbtj6kyQnFfTDwm/9MnySMsLHwOa69XQLlBg6Eo6JwezYXMMjcNiajB5i jOTxdvt5LXbumEnI8vA5mahfALOtx9DnYod3Je86BPoFIdqglsjq87OTSdlzYmv1P4Ea F8Hw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=35RlTAsUx3yejoEebytGRZ97Bh68Tl3UTQMeyrDekNI=; b=yuyIuEuI6EOpvq106ZzpXYP2VylaB8+EDK8u2gtin/69N7liqBEb4MgkTdeXAutR8j zJwU6QQCap03sWkV/K6fJMMt3y4HfoiPN+OCxK5vsJNp3poLTCX21KrpK3lwBS2pA7zl jUCo4vD/Q0zSfRSHBfk6rdFg/7/IfB1NRpS/rvDXafXoW6yxXT1Ql06ynyla1GbbdhSi TFJs8vLlWX21ceP3iQ95+tgTma2yAH8dWssIoqVkCRHev8QDCZauQy4hEN1gze/TGY3U 9ja+2+BOATbRi4MNFC9HsRKGeQdZ2TUCa4VTTEAyQdkUMAfpzxUMS0bFJGYGpxlffJEK vw2A== X-Gm-Message-State: AOAM532fTW8+o14oGpYjNZ2a3f6I5ZbU4Pv6h4ixAfey6UBRKErtNisO 
vgGq/GVo5mg6FrBwflpvb8EYGQ== X-Google-Smtp-Source: ABdhPJwvmMrCkqNlkgisvdhcfgXRCaZkt9on70NqNBs9//l0VrAzUX8Dbh/M/8lvPW2VFtZFdTuf6g== X-Received: by 2002:a63:5f97:0:b0:372:f7dc:6cf6 with SMTP id t145-20020a635f97000000b00372f7dc6cf6mr1949242pgb.315.1645012369596; Wed, 16 Feb 2022 03:52:49 -0800 (PST) Received: from FVFYT0MHHV2J.tiktokcdn.com ([139.177.225.249]) by smtp.gmail.com with ESMTPSA id m16sm14790221pfc.156.2022.02.16.03.52.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Feb 2022 03:52:49 -0800 (PST) From: Muchun Song To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song Subject: [PATCH v3 08/12] mm: memcontrol: introduce memcg_reparent_ops Date: Wed, 16 Feb 2022 19:51:28 +0800 Message-Id: <20220216115132.52602-9-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.0 (Apple Git-132) In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com> References: <20220216115132.52602-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 786E7A0002 X-Stat-Signature: xtgffqzy4bbmbs99kmm57ao4h36fohxo X-Rspam-User: Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=moRystZE; spf=pass (imf25.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.210.182 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com X-HE-Tag: 1645012370-844302 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the previous patch, we saw how to make the lruvec lock safe when LRU pages are reparented. We should do something like the following. memcg_reparent_objcgs(memcg) 1) lock // lruvec belongs to memcg and lruvec_parent belongs to parent memcg. spin_lock(&lruvec->lru_lock); spin_lock(&lruvec_parent->lru_lock); 2) do reparent // Move all the pages from the lruvec list to the parent lruvec list. 3) unlock spin_unlock(&lruvec_parent->lru_lock); spin_unlock(&lruvec->lru_lock); Apart from the page lruvec lock, the deferred split queue lock (THP only) also needs to do something similar. So we extract those three steps of memcg_reparent_objcgs() into a set of callbacks. memcg_reparent_objcgs(memcg) 1) lock memcg_reparent_ops->lock(memcg, parent); 2) reparent memcg_reparent_ops->reparent(memcg, parent); 3) unlock memcg_reparent_ops->unlock(memcg, parent); Now there are two different locks (the lruvec lock and the deferred split queue lock) that need to use this infrastructure. In the next patch, we will use these APIs to make those locks safe when the LRU pages are reparented. Signed-off-by: Muchun Song --- include/linux/memcontrol.h | 7 +++++++ mm/memcontrol.c | 39 ++++++++++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 44 insertions(+), 2 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 6e0f7104f2fa..3c841c155f0d 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -346,6 +346,13 @@ struct mem_cgroup { struct mem_cgroup_per_node *nodeinfo[]; }; +struct memcg_reparent_ops { + /* Irq is disabled before calling those callbacks.
*/ + void (*lock)(struct mem_cgroup *memcg, struct mem_cgroup *parent); + void (*unlock)(struct mem_cgroup *memcg, struct mem_cgroup *parent); + void (*reparent)(struct mem_cgroup *memcg, struct mem_cgroup *parent); +}; + /* * size of first charge trial. "32" comes from vmscan.c's magic value. * TODO: maybe necessary to use big numbers in big irons. diff --git a/mm/memcontrol.c b/mm/memcontrol.c index dd2602149ef3..6a393fe8e589 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -336,6 +336,35 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } +static const struct memcg_reparent_ops *memcg_reparent_ops[] = {}; + +static void memcg_reparent_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) + memcg_reparent_ops[i]->lock(memcg, parent); +} + +static void memcg_reparent_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) + memcg_reparent_ops[i]->unlock(memcg, parent); +} + +static void memcg_do_reparent(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) + memcg_reparent_ops[i]->reparent(memcg, parent); +} + static void memcg_reparent_objcgs(struct mem_cgroup *memcg) { struct obj_cgroup *objcg, *iter; @@ -345,9 +374,11 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg) if (!parent) parent = root_mem_cgroup; + local_irq_disable(); + memcg_reparent_lock(memcg, parent); objcg = rcu_replace_pointer(memcg->objcg, NULL, true); - spin_lock_irq(&objcg_lock); + spin_lock(&objcg_lock); /* 1) Ready to reparent active objcg. */ list_add(&objcg->list, &memcg->objcg_list); @@ -357,7 +388,11 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg) /* 3) Move already reparented objcgs to the parent's list */ list_splice(&memcg->objcg_list, &parent->objcg_list); - spin_unlock_irq(&objcg_lock); + spin_unlock(&objcg_lock); + + memcg_do_reparent(memcg, parent); + memcg_reparent_unlock(memcg, parent); + local_irq_enable(); percpu_ref_kill(&objcg->refcnt); } From patchwork Wed Feb 16 11:51:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12748466 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03A7AC433EF for ; Wed, 16 Feb 2022 11:52:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8E17E6B0083; Wed, 16 Feb 2022 06:52:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 86A3E6B0085; Wed, 16 Feb 2022 06:52:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 695DC6B0087; Wed, 16 Feb 2022 06:52:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0155.hostedemail.com [216.40.44.155]) by kanga.kvack.org (Postfix) with ESMTP id 566596B0083 for ; Wed, 16 Feb 2022 06:52:57 -0500 (EST) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 0EBD7180AC339 for ; Wed, 16 Feb 2022 11:52:57 +0000 (UTC) X-FDA: 79148481594.09.0D8A8F4 Received: from mail-pg1-f171.google.com (mail-pg1-f171.google.com [209.85.215.171]) by imf23.hostedemail.com (Postfix) with ESMTP id 97C1014000A 
for ; Wed, 16 Feb 2022 11:52:56 +0000 (UTC) Received: by mail-pg1-f171.google.com with SMTP id 75so1943902pgb.4 for ; Wed, 16 Feb 2022 03:52:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=NkR7CAUXx8sPBMjiWASiXAODM2d/YWmlFk9BEdP/gb4=; b=AZsymvIE6FFKXS8HN6Pj1ZjXKJ56fRBcV2LY/jokCPLSI+rvZO7J7IP/eq2SIapYmI rgQgPaHf6mrkZlacPrOuMjXgJ1vOzUJTjS5DjTLHZobDnOWznXqh3IJ3ADNZi0n1kzl1 KSWhS6w39CnNF/8R4WOkOTadDaTB6RQV7sKKxx4bdydxAK3rodPWPZlqNUC0lKdliorq eJemzLJz93vmTjrGKcVIfaWFk6zanaZp0N7qwKVYdmJnVN8ndUMbjqYZTo0sjKU+N/dY AIv4+Zo/kJga6Bp419fD+P0lfW92WrgMHsL0BjjVaTHCVeFF27rsnH3wXb3Ps1M0Pn+r KL1Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=NkR7CAUXx8sPBMjiWASiXAODM2d/YWmlFk9BEdP/gb4=; b=bHwt9CJsfrQI5hYOgw7DX/AZ20Cp3GB+3vXmQr4DFrvBVEloRg72usElwevF13gm32 f74MaFi6L6ZaP1PzbW4NY58FxrHsgNsXezwaemP6hu6zIuTaxiQuXSaNxdIfaN3lfTtc mggj51Nkk+cTeQZE+k53un9jvJsh1C7ZUgTH9qWZLxDH97iz91BdhmDCjJOiZF4/DOfk yWUN8hj7OjBKUx9TNw35TsF9PKugZvi0yO0dJqxR6XLd3pW5ZJ43Ym7T65Tkr4nk3vdd 8xYMEi2QRER305yH88P0Ox/jxbuFzxuEnS/RL/AUwF4b4WCf6FTzvAezT1dSWEhcW7Qi 2iPg== X-Gm-Message-State: AOAM531rp6WFZQRIYpQG5pM/LfX4KiUJwecY17A8E2OUKiSJ1CJgkoHA KFHEWkJPr1EqOn4VX68QWnZ5BQ== X-Google-Smtp-Source: ABdhPJznX8lQY4KbWiMUpxqGZ1nAbSX3kOffF+ZO1iGazc9lq+xCm9VFCHQW2xaVFQ3aUhvWK0k6CQ== X-Received: by 2002:a05:6a00:2311:b0:4e1:52bf:e466 with SMTP id h17-20020a056a00231100b004e152bfe466mr2517019pfh.77.1645012375473; Wed, 16 Feb 2022 03:52:55 -0800 (PST) Received: from FVFYT0MHHV2J.tiktokcdn.com ([139.177.225.249]) by smtp.gmail.com with ESMTPSA id m16sm14790221pfc.156.2022.02.16.03.52.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 16 Feb 2022 03:52:55 -0800 (PST) From: Muchun Song To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song Subject: [PATCH v3 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Date: Wed, 16 Feb 2022 19:51:29 +0800 Message-Id: <20220216115132.52602-10-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.0 (Apple Git-132) In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com> References: <20220216115132.52602-1-songmuchun@bytedance.com> MIME-Version: 1.0 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=AZsymvIE; dmarc=pass (policy=none) header.from=bytedance.com; spf=pass (imf23.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.215.171 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 97C1014000A X-Stat-Signature: p8pbs1ywrci63ro78xoex9j7r98tem8q X-HE-Tag: 1645012376-817671 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We will reuse the obj_cgroup APIs to charge the LRU pages. Finally, page->memcg_data will have 2 different meanings. 
- For the slab pages, page->memcg_data points to an object cgroups vector. - For the kmem pages (excluding the slab pages) and the LRU pages, page->memcg_data points to an object cgroup. In this patch, we reuse the obj_cgroup APIs to charge LRU pages. In the end, the page cache cannot prevent long-living objects from pinning the original memory cgroup in memory. At the same time, we also change the rules of page and objcg or memcg binding stability. The new rules are as follows. For a page, any of the following ensures page and objcg binding stability: - the page lock - LRU isolation - lock_page_memcg() - exclusive reference Based on the stable binding of page and objcg, for a page, any of the following ensures page and memcg binding stability: - css_set_lock - cgroup_mutex - the lruvec lock - the split queue lock (only THP page) If the caller only wants to ensure that the page counters of the memcg are updated correctly, the binding stability of page and objcg is sufficient. Signed-off-by: Muchun Song Reported-by: kernel test robot --- include/linux/memcontrol.h | 94 ++++++-------- mm/huge_memory.c | 42 +++++++ mm/memcontrol.c | 307 ++++++++++++++++++++++++++++++++------------- 3 files changed, 300 insertions(+), 143 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 3c841c155f0d..551fd8b76f9d 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -372,8 +372,6 @@ enum page_memcg_data_flags { #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) -static inline bool folio_memcg_kmem(struct folio *folio); - /* * After the initialization objcg->memcg is always pointing at * a valid memcg, but can be atomically swapped to the parent memcg. @@ -387,43 +385,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) } /* - * __folio_memcg - Get the memory cgroup associated with a non-kmem folio - * @folio: Pointer to the folio. - * - * Returns a pointer to the memory cgroup associated with the folio, - * or NULL. This function assumes that the folio is known to have a - * proper memory cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * kmem folios. - */ -static inline struct mem_cgroup *__folio_memcg(struct folio *folio) -{ - unsigned long memcg_data = folio->memcg_data; - - VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); - - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); -} - -/* - * __folio_objcg - get the object cgroup associated with a kmem folio. + * folio_objcg - get the object cgroup associated with a folio. * @folio: Pointer to the folio. * * Returns a pointer to the object cgroup associated with the folio, * or NULL. This function assumes that the folio is known to have a - * proper object cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * LRU folios. + * proper object cgroup pointer.
*/ -static inline struct obj_cgroup *__folio_objcg(struct folio *folio) +static inline struct obj_cgroup *folio_objcg(struct folio *folio) { unsigned long memcg_data = folio->memcg_data; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); - VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } @@ -437,7 +411,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) * proper memory cgroup pointer. It's not safe to call this function * against some type of folios, e.g. slab folios or ex-slab folios. * - * For a non-kmem folio any of the following ensures folio and memcg binding + * For a folio any of the following ensures folio and memcg binding * stability: * * - the folio lock @@ -445,14 +419,28 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) * - lock_page_memcg() * - exclusive reference * - * For a kmem folio a caller should hold an rcu read lock to protect memcg - * associated with a kmem folio from being released. + * Based on the stable binding of folio and objcg, for a folio any of the + * following ensures folio and memcg binding stability: + * + * - css_set_lock + * - cgroup_mutex + * - the lruvec lock + * - the split queue lock (only THP page) + * + * If the caller only want to ensure that the page counters of memcg are + * updated correctly, ensure that the binding stability of folio and objcg + * is sufficient. + * + * A caller should hold an rcu read lock (In addition, regions of code across + * which interrupts, preemption, or softirqs have been disabled also serve as + * RCU read-side critical sections) to protect memcg associated with a folio + * from being released. */ static inline struct mem_cgroup *folio_memcg(struct folio *folio) { - if (folio_memcg_kmem(folio)) - return obj_cgroup_memcg(__folio_objcg(folio)); - return __folio_memcg(folio); + struct obj_cgroup *objcg = folio_objcg(folio); + + return objcg ? obj_cgroup_memcg(objcg) : NULL; } static inline struct mem_cgroup *page_memcg(struct page *page) @@ -470,6 +458,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page) * folio is known to have a proper memory cgroup pointer. It's not safe * to call this function against some type of pages, e.g. slab pages or * ex-slab pages. + * + * The page and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) { @@ -500,22 +490,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page) * * Return: A pointer to the memory cgroup associated with the folio, * or NULL. + * + * The folio and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { unsigned long memcg_data = READ_ONCE(folio->memcg_data); + struct obj_cgroup *objcg; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); WARN_ON_ONCE(!rcu_read_lock_held()); - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } /* @@ -528,16 +516,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) * has an associated memory cgroup pointer or an object cgroups vector or * an object cgroup. 
* - * For a non-kmem page any of the following ensures page and memcg binding - * stability: + * The page and objcg or memcg binding rules can refer to page_memcg(). * - * - the page lock - * - LRU isolation - * - lock_page_memcg() - * - exclusive reference - * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * A caller should hold an rcu read lock to protect memcg associated with a + * page from being released. */ static inline struct mem_cgroup *page_memcg_check(struct page *page) { @@ -546,18 +528,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) * for slab pages, READ_ONCE() should be used here. */ unsigned long memcg_data = READ_ONCE(page->memcg_data); + struct obj_cgroup *objcg; if (memcg_data & MEMCG_DATA_OBJCGS) return NULL; - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } #ifdef CONFIG_MEMCG_KMEM diff --git a/mm/huge_memory.c b/mm/huge_memory.c index b8c6e766c91c..d80afc5f14da 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -499,6 +499,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) } #ifdef CONFIG_MEMCG +static struct shrinker deferred_split_shrinker; + static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue) { if (mem_cgroup_disabled()) @@ -512,6 +514,46 @@ static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio return memcg ? &memcg->deferred_split_queue : NULL; } + +static void memcg_reparent_split_queue_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + spin_lock(&memcg->deferred_split_queue.split_queue_lock); + spin_lock(&parent->deferred_split_queue.split_queue_lock); +} + +static void memcg_reparent_split_queue_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + spin_unlock(&parent->deferred_split_queue.split_queue_lock); + spin_unlock(&memcg->deferred_split_queue.split_queue_lock); +} + +static void memcg_reparent_split_queue(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int nid; + struct deferred_split *src, *dst; + + src = &memcg->deferred_split_queue; + dst = &parent->deferred_split_queue; + + if (!src->split_queue_len) + return; + + list_splice_tail_init(&src->split_queue, &dst->split_queue); + dst->split_queue_len += src->split_queue_len; + src->split_queue_len = 0; + + for_each_node(nid) + set_shrinker_bit(parent, nid, deferred_split_shrinker.id); +} + +const struct memcg_reparent_ops split_queue_reparent_ops = { + .lock = memcg_reparent_split_queue_lock, + .unlock = memcg_reparent_split_queue_unlock, + .reparent = memcg_reparent_split_queue, +}; #else static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 6a393fe8e589..e4e490690e33 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -75,6 +75,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly; EXPORT_SYMBOL(memory_cgrp_subsys); struct mem_cgroup *root_mem_cgroup __read_mostly; +static struct obj_cgroup *root_obj_cgroup __read_mostly; /* Active memory cgroup to use from an interrupt context */ DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg); @@ -240,6 +241,11 @@ static inline bool task_is_dying(void) 
(current->flags & PF_EXITING); } +static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg) +{ + return objcg == root_obj_cgroup; +} + /* Some nice accessors for the vmpressure. */ struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg) { @@ -336,7 +342,81 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } -static const struct memcg_reparent_ops *memcg_reparent_ops[] = {}; +static void memcg_reparent_lruvec_lock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + spin_lock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock); + spin_lock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock); + } +} + +static void memcg_reparent_lruvec_unlock(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + spin_unlock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock); + spin_unlock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock); + } +} + +static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, + enum lru_list lru) +{ + int zid; + struct mem_cgroup_per_node *mz_src, *mz_dst; + + mz_src = container_of(src, struct mem_cgroup_per_node, lruvec); + mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec); + + list_splice_tail_init(&src->lists[lru], &dst->lists[lru]); + + for (zid = 0; zid < MAX_NR_ZONES; zid++) { + mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru]; + mz_src->lru_zone_size[zid][lru] = 0; + } +} + +static void memcg_reparent_lruvec(struct mem_cgroup *memcg, + struct mem_cgroup *parent) +{ + int i; + + for_each_node(i) { + enum lru_list lru; + struct lruvec *src, *dst; + + src = mem_cgroup_lruvec(memcg, NODE_DATA(i)); + dst = mem_cgroup_lruvec(parent, NODE_DATA(i)); + + dst->anon_cost += src->anon_cost; + dst->file_cost += src->file_cost; + + for_each_lru(lru) + lruvec_reparent_lru(src, dst, lru); + } +} + +static const struct memcg_reparent_ops lruvec_reparent_ops = { + .lock = memcg_reparent_lruvec_lock, + .unlock = memcg_reparent_lruvec_unlock, + .reparent = memcg_reparent_lruvec, +}; + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +extern struct memcg_reparent_ops split_queue_reparent_ops; +#endif + +static const struct memcg_reparent_ops *memcg_reparent_ops[] = { + &lruvec_reparent_ops, +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + &split_queue_reparent_ops, +#endif +}; static void memcg_reparent_lock(struct mem_cgroup *memcg, struct mem_cgroup *parent) @@ -2806,18 +2886,18 @@ static inline void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages page_counter_uncharge(&memcg->memsw, nr_pages); } -static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) +static void commit_charge(struct folio *folio, struct obj_cgroup *objcg) { - VM_BUG_ON_FOLIO(folio_memcg(folio), folio); + VM_BUG_ON_FOLIO(folio_objcg(folio), folio); /* - * Any of the following ensures page's memcg stability: + * Any of the following ensures page's objcg stability: * * - the page lock * - LRU isolation * - lock_page_memcg() * - exclusive reference */ - folio->memcg_data = (unsigned long)memcg; + folio->memcg_data = (unsigned long)objcg; } static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) @@ -2834,6 +2914,21 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) return memcg; } +static struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgroup *memcg) +{ + struct obj_cgroup *objcg = NULL; + + rcu_read_lock(); + for (; memcg; memcg = parent_mem_cgroup(memcg)) { + objcg = rcu_dereference(memcg->objcg); + if 
(objcg && obj_cgroup_tryget(objcg)) + break; + } + rcu_read_unlock(); + + return objcg; +} + #ifdef CONFIG_MEMCG_KMEM /* * The allocated objcg pointers array is not accounted directly. @@ -2997,12 +3092,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void) else memcg = mem_cgroup_from_task(current); - for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) { - objcg = rcu_dereference(memcg->objcg); - if (objcg && obj_cgroup_tryget(objcg)) - break; + if (mem_cgroup_is_root(memcg)) + goto out; + + objcg = get_obj_cgroup_from_memcg(memcg); + if (obj_cgroup_is_root(objcg)) { + obj_cgroup_put(objcg); objcg = NULL; } +out: rcu_read_unlock(); return objcg; @@ -3132,13 +3230,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order) void __memcg_kmem_uncharge_page(struct page *page, int order) { struct folio *folio = page_folio(page); - struct obj_cgroup *objcg; + struct obj_cgroup *objcg = folio_objcg(folio); unsigned int nr_pages = 1 << order; - if (!folio_memcg_kmem(folio)) + if (!objcg) return; - objcg = __folio_objcg(folio); + VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio); obj_cgroup_uncharge_pages(objcg, nr_pages); folio->memcg_data = 0; obj_cgroup_put(objcg); @@ -3370,26 +3468,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) #endif /* CONFIG_MEMCG_KMEM */ /* - * Because page_memcg(head) is not set on tails, set it now. + * Because page_objcg(head) is not set on tails, set it now. */ void split_page_memcg(struct page *head, unsigned int nr) { struct folio *folio = page_folio(head); - struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio); + struct obj_cgroup *objcg = folio_objcg(folio); int i; - if (mem_cgroup_disabled() || !memcg) + if (mem_cgroup_disabled() || !objcg) return; for (i = 1; i < nr; i++) folio_page(folio, i)->memcg_data = folio->memcg_data; - if (folio_memcg_kmem(folio)) - obj_cgroup_get_many(__folio_objcg(folio), nr - 1); - else - css_get_many(&memcg->css, nr - 1); - - css_put(&memcg->css); + obj_cgroup_get_many(objcg, nr - 1); } #ifdef CONFIG_MEMCG_SWAP @@ -5320,6 +5413,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) objcg->memcg = memcg; rcu_assign_pointer(memcg->objcg, objcg); + if (unlikely(mem_cgroup_is_root(memcg))) + root_obj_cgroup = objcg; + /* Online state pins memcg ID, memcg ID pins CSS */ refcount_set(&memcg->id.ref, 1); css_get(css); @@ -5629,6 +5725,8 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma, linear_page_index(vma, addr)); } +extern struct mutex cgroup_mutex; + /** * mem_cgroup_move_account - move account of the page * @page: the page @@ -5731,10 +5829,12 @@ static int mem_cgroup_move_account(struct page *page, */ smp_mb(); - css_get(&to->css); - css_put(&from->css); + rcu_read_lock(); + obj_cgroup_get(rcu_dereference(to->objcg)); + obj_cgroup_put(rcu_dereference(from->objcg)); + rcu_read_unlock(); - folio->memcg_data = (unsigned long)to; + folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg); __folio_memcg_unlock(from); @@ -6207,6 +6307,42 @@ static void mem_cgroup_move_charge(void) mmap_read_unlock(mc.mm); atomic_dec(&mc.from->moving_account); + + /* + * Moving its pages to another memcg is finished. Wait for already + * started RCU-only updates to finish to make sure that the caller + * of lock_page_memcg() can unlock the correct move_lock. 
The + * possible bad scenario would like: + * + * CPU0: CPU1: + * mem_cgroup_move_charge() + * walk_page_range() + * + * lock_page_memcg(page) + * memcg = folio_memcg() + * spin_lock_irqsave(&memcg->move_lock) + * memcg->move_lock_task = current + * + * atomic_dec(&mc.from->moving_account) + * + * mem_cgroup_css_offline() + * memcg_offline_kmem() + * memcg_reparent_objcgs() <== reparented + * + * unlock_page_memcg(page) + * memcg = folio_memcg() <== memcg has been changed + * if (memcg->move_lock_task == current) <== false + * spin_unlock_irqrestore(&memcg->move_lock) + * + * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex + * would be released soon), the page can be reparented to its parent + * memcg. When the unlock_page_memcg() is called for the page, we will + * miss unlock the move_lock. So using synchronize_rcu to wait for + * already started RCU-only updates to finish before this function + * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are + * serialized by cgroup_mutex). + */ + synchronize_rcu(); } /* @@ -6766,21 +6902,27 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg, gfp_t gfp) { + struct obj_cgroup *objcg; long nr_pages = folio_nr_pages(folio); - int ret; + int ret = 0; - ret = try_charge(memcg, gfp, nr_pages); - if (ret) - goto out; + objcg = get_obj_cgroup_from_memcg(memcg); + /* Do not account at the root objcg level. */ + if (!obj_cgroup_is_root(objcg)) { + ret = try_charge(memcg, gfp, nr_pages); + if (ret) + goto out; + } - css_get(&memcg->css); - commit_charge(folio, memcg); + obj_cgroup_get(objcg); + commit_charge(folio, objcg); local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(folio)); local_irq_enable(); out: + obj_cgroup_put(objcg); return ret; } @@ -6866,7 +7008,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry) } struct uncharge_gather { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; @@ -6881,84 +7023,73 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug) static void uncharge_batch(const struct uncharge_gather *ug) { unsigned long flags; + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = obj_cgroup_memcg(ug->objcg); if (ug->nr_memory) { - page_counter_uncharge(&ug->memcg->memory, ug->nr_memory); + page_counter_uncharge(&memcg->memory, ug->nr_memory); if (do_memsw_account()) - page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory); + page_counter_uncharge(&memcg->memsw, ug->nr_memory); if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem) - page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem); - memcg_oom_recover(ug->memcg); + page_counter_uncharge(&memcg->kmem, ug->nr_kmem); + memcg_oom_recover(memcg); } local_irq_save(flags); - __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); - __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory); - memcg_check_events(ug->memcg, ug->nid); + __count_memcg_events(memcg, PGPGOUT, ug->pgpgout); + __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory); + memcg_check_events(memcg, ug->nid); local_irq_restore(flags); + rcu_read_unlock(); /* drop reference from uncharge_folio */ - css_put(&ug->memcg->css); + obj_cgroup_put(ug->objcg); } static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) { long nr_pages; - struct mem_cgroup *memcg; struct obj_cgroup *objcg; - bool use_objcg 
= folio_memcg_kmem(folio); VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); /* * Nobody should be changing or seriously looking at - * folio memcg or objcg at this point, we have fully - * exclusive access to the folio. + * folio objcg at this point, we have fully exclusive + * access to the folio. */ - if (use_objcg) { - objcg = __folio_objcg(folio); - /* - * This get matches the put at the end of the function and - * kmem pages do not hold memcg references anymore. - */ - memcg = get_mem_cgroup_from_objcg(objcg); - } else { - memcg = __folio_memcg(folio); - } - - if (!memcg) + objcg = folio_objcg(folio); + if (!objcg) return; - if (ug->memcg != memcg) { - if (ug->memcg) { + if (ug->objcg != objcg) { + if (ug->objcg) { uncharge_batch(ug); uncharge_gather_clear(ug); } - ug->memcg = memcg; + ug->objcg = objcg; ug->nid = folio_nid(folio); - /* pairs with css_put in uncharge_batch */ - css_get(&memcg->css); + /* pairs with obj_cgroup_put in uncharge_batch */ + obj_cgroup_get(objcg); } nr_pages = folio_nr_pages(folio); - if (use_objcg) { + if (folio_memcg_kmem(folio)) { ug->nr_memory += nr_pages; ug->nr_kmem += nr_pages; - - folio->memcg_data = 0; - obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) ug->nr_memory += nr_pages; ug->pgpgout++; - - folio->memcg_data = 0; } - css_put(&memcg->css); + folio->memcg_data = 0; + obj_cgroup_put(objcg); } void __mem_cgroup_uncharge(struct folio *folio) @@ -6966,7 +7097,7 @@ void __mem_cgroup_uncharge(struct folio *folio) struct uncharge_gather ug; /* Don't touch folio->lru of any random page, pre-check: */ - if (!folio_memcg(folio)) + if (!folio_objcg(folio)) return; uncharge_gather_clear(&ug); @@ -6989,7 +7120,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list) uncharge_gather_clear(&ug); list_for_each_entry(folio, page_list, lru) uncharge_folio(folio, &ug); - if (ug.memcg) + if (ug.objcg) uncharge_batch(&ug); } @@ -7006,6 +7137,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list) void mem_cgroup_migrate(struct folio *old, struct folio *new) { struct mem_cgroup *memcg; + struct obj_cgroup *objcg; long nr_pages = folio_nr_pages(new); unsigned long flags; @@ -7018,30 +7150,33 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new) return; /* Page cache replacement: new folio already charged? */ - if (folio_memcg(new)) + if (folio_objcg(new)) return; - memcg = get_mem_cgroup_from_folio(old); - VM_WARN_ON_ONCE_FOLIO(!memcg, old); - if (!memcg) + objcg = folio_objcg(old); + VM_WARN_ON_ONCE_FOLIO(!objcg, old); + if (!objcg) return; + rcu_read_lock(); + memcg = obj_cgroup_memcg(objcg); + /* Force-charge the new page. 
The old one will be freed soon */ - if (!mem_cgroup_is_root(memcg)) { + if (!obj_cgroup_is_root(objcg)) { page_counter_charge(&memcg->memory, nr_pages); if (do_memsw_account()) page_counter_charge(&memcg->memsw, nr_pages); } - css_get(&memcg->css); - commit_charge(new, memcg); + obj_cgroup_get(objcg); + commit_charge(new, objcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); - css_put(&memcg->css); + rcu_read_unlock(); } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7216,6 +7351,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg) void mem_cgroup_swapout(struct page *page, swp_entry_t entry) { struct mem_cgroup *memcg, *swap_memcg; + struct obj_cgroup *objcg; unsigned int nr_entries; unsigned short oldid; @@ -7228,15 +7364,16 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; + objcg = folio_objcg(page_folio(page)); + VM_WARN_ON_ONCE_PAGE(!objcg, page); + if (!objcg) + return; + /* * Interrupts should be disabled by the caller (see the comments below), * which can serve as RCU read-side critical sections. */ - memcg = page_memcg(page); - - VM_WARN_ON_ONCE_PAGE(!memcg, page); - if (!memcg) - return; + memcg = obj_cgroup_memcg(objcg); /* * In case the memcg owning these pages has been offlined and doesn't @@ -7255,7 +7392,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) page->memcg_data = 0; - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) page_counter_uncharge(&memcg->memory, nr_entries); if (!cgroup_memory_noswap && memcg != swap_memcg) { @@ -7274,7 +7411,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) mem_cgroup_charge_statistics(memcg, -nr_entries); memcg_check_events(memcg, page_to_nid(page)); - css_put(&memcg->css); + obj_cgroup_put(objcg); } /**
From patchwork Wed Feb 16 11:51:30 2022 X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12748467 From: Muchun Song To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song Subject: [PATCH v3 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg() Date: Wed, 16 Feb 2022 19:51:30 +0800 Message-Id: <20220216115132.52602-11-songmuchun@bytedance.com> In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com> References: <20220216115132.52602-1-songmuchun@bytedance.com>

Now lock_page_memcg() does not lock a page and memcg binding; it actually locks a page and objcg binding. So rename lock_page_memcg() and unlock_page_memcg() to lock_page_objcg() and unlock_page_objcg(). This is just a code cleanup without any functional changes.
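For illustration only (this snippet is not part of the patch): the caller pattern is unchanged by the rename. A minimal sketch, modeled on the mark_buffer_dirty() hunk below, where the lock now stabilizes the page and objcg binding rather than the page and memcg binding:

	lock_page_objcg(page);
	if (!TestSetPageDirty(page)) {
		struct address_space *mapping = page_mapping(page);

		/* Dirty accounting happens under the page<->objcg lock. */
		if (mapping)
			__set_page_dirty(page, mapping, 0);
	}
	unlock_page_objcg(page);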
Signed-off-by: Muchun Song --- Documentation/admin-guide/cgroup-v1/memory.rst | 2 +- fs/buffer.c | 8 ++++---- include/linux/memcontrol.h | 14 +++++++------- mm/filemap.c | 2 +- mm/huge_memory.c | 4 ++-- mm/memcontrol.c | 20 ++++++++++---------- mm/page-writeback.c | 6 +++--- mm/rmap.c | 14 +++++++------- 8 files changed, 35 insertions(+), 35 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst index faac50149a22..ddb795b2ec7e 100644 --- a/Documentation/admin-guide/cgroup-v1/memory.rst +++ b/Documentation/admin-guide/cgroup-v1/memory.rst @@ -289,7 +289,7 @@ Lock order is as follows: Page lock (PG_locked bit of page->flags) mm->page_table_lock or split pte_lock - lock_page_memcg (memcg->move_lock) + lock_page_objcg (memcg->move_lock) mapping->i_pages lock lruvec->lru_lock. diff --git a/fs/buffer.c b/fs/buffer.c index 30a6e7aa6b7d..3fa1492f057b 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -635,14 +635,14 @@ int __set_page_dirty_buffers(struct page *page) * Lock out page's memcg migration to keep PageDirty * synchronized with per-memcg dirty page counters. */ - lock_page_memcg(page); + lock_page_objcg(page); newly_dirty = !TestSetPageDirty(page); spin_unlock(&mapping->private_lock); if (newly_dirty) __set_page_dirty(page, mapping, 1); - unlock_page_memcg(page); + unlock_page_objcg(page); if (newly_dirty) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); @@ -1101,13 +1101,13 @@ void mark_buffer_dirty(struct buffer_head *bh) struct page *page = bh->b_page; struct address_space *mapping = NULL; - lock_page_memcg(page); + lock_page_objcg(page); if (!TestSetPageDirty(page)) { mapping = page_mapping(page); if (mapping) __set_page_dirty(page, mapping, 0); } - unlock_page_memcg(page); + unlock_page_objcg(page); if (mapping) __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); } diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 551fd8b76f9d..9ec428fc4c0b 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -411,12 +411,12 @@ static inline struct obj_cgroup *folio_objcg(struct folio *folio) * proper memory cgroup pointer. It's not safe to call this function * against some type of folios, e.g. slab folios or ex-slab folios. 
* - * For a folio any of the following ensures folio and memcg binding - * stability: + * For a page any of the following ensures page and objcg binding + * stability (But the folio can be reparented to its parent memcg): * * - the folio lock * - LRU isolation - * - lock_page_memcg() + * - lock_page_objcg() * - exclusive reference * * Based on the stable binding of folio and objcg, for a folio any of the @@ -938,8 +938,8 @@ extern bool cgroup_memory_noswap; void folio_memcg_lock(struct folio *folio); void folio_memcg_unlock(struct folio *folio); -void lock_page_memcg(struct page *page); -void unlock_page_memcg(struct page *page); +void lock_page_objcg(struct page *page); +void unlock_page_objcg(struct page *page); void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val); @@ -1372,11 +1372,11 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) { } -static inline void lock_page_memcg(struct page *page) +static inline void lock_page_objcg(struct page *page) { } -static inline void unlock_page_memcg(struct page *page) +static inline void unlock_page_objcg(struct page *page) { } diff --git a/mm/filemap.c b/mm/filemap.c index ad8c39d90bf9..065aee19e168 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -112,7 +112,7 @@ * ->i_pages lock (page_remove_rmap->set_page_dirty) * bdi.wb->list_lock (page_remove_rmap->set_page_dirty) * ->inode->i_lock (page_remove_rmap->set_page_dirty) - * ->memcg->move_lock (page_remove_rmap->lock_page_memcg) + * ->memcg->move_lock (page_remove_rmap->lock_page_objcg) * bdi.wb->list_lock (zap_pte_range->set_page_dirty) * ->inode->i_lock (zap_pte_range->set_page_dirty) * ->private_lock (zap_pte_range->__set_page_dirty_buffers) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index d80afc5f14da..4b4af06a1cff 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2227,7 +2227,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, atomic_inc(&page[i]._mapcount); } - lock_page_memcg(page); + lock_page_objcg(page); if (atomic_add_negative(-1, compound_mapcount_ptr(page))) { /* Last compound_mapcount is gone. */ __mod_lruvec_page_state(page, NR_ANON_THPS, @@ -2238,7 +2238,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, atomic_dec(&page[i]._mapcount); } } - unlock_page_memcg(page); + unlock_page_objcg(page); } smp_wmb(); /* make pte visible before pmd */ diff --git a/mm/memcontrol.c b/mm/memcontrol.c index e4e490690e33..9531bdb6ede3 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2194,13 +2194,13 @@ void folio_memcg_lock(struct folio *folio) * When charge migration first begins, we can have multiple * critical sections holding the fast-path RCU lock and one * holding the slowpath move_lock. Track the task who has the - * move_lock for unlock_page_memcg(). + * move_lock for unlock_page_objcg(). 
*/ memcg->move_lock_task = current; memcg->move_lock_flags = flags; } -void lock_page_memcg(struct page *page) +void lock_page_objcg(struct page *page) { folio_memcg_lock(page_folio(page)); } @@ -2232,7 +2232,7 @@ void folio_memcg_unlock(struct folio *folio) __folio_memcg_unlock(folio_memcg(folio)); } -void unlock_page_memcg(struct page *page) +void unlock_page_objcg(struct page *page) { folio_memcg_unlock(page_folio(page)); } @@ -2894,7 +2894,7 @@ static void commit_charge(struct folio *folio, struct obj_cgroup *objcg) * * - the page lock * - LRU isolation - * - lock_page_memcg() + * - lock_page_objcg() * - exclusive reference */ folio->memcg_data = (unsigned long)objcg; @@ -5822,7 +5822,7 @@ static int mem_cgroup_move_account(struct page *page, * with (un)charging, migration, LRU putback, or anything else * that would rely on a stable page's memory cgroup. * - * Note that lock_page_memcg is a memcg lock, not a page lock, + * Note that lock_page_objcg is a memcg lock, not a page lock, * to save space. As soon as we switch page's memory cgroup to a * new memcg that isn't locked, the above state can change * concurrently again. Make sure we're truly done with it. @@ -6279,7 +6279,7 @@ static void mem_cgroup_move_charge(void) { lru_add_drain_all(); /* - * Signal lock_page_memcg() to take the memcg's move_lock + * Signal lock_page_objcg() to take the memcg's move_lock * while we're moving its pages to another memcg. Then wait * for already started RCU-only updates to finish. */ @@ -6311,14 +6311,14 @@ static void mem_cgroup_move_charge(void) /* * Moving its pages to another memcg is finished. Wait for already * started RCU-only updates to finish to make sure that the caller - * of lock_page_memcg() can unlock the correct move_lock. The + * of lock_page_objcg() can unlock the correct move_lock. The * possible bad scenario would like: * * CPU0: CPU1: * mem_cgroup_move_charge() * walk_page_range() * - * lock_page_memcg(page) + * unlock_page_objcg(page) * memcg = folio_memcg() * spin_lock_irqsave(&memcg->move_lock) * memcg->move_lock_task = current @@ -6329,14 +6329,14 @@ static void mem_cgroup_move_charge(void) * memcg_offline_kmem() * memcg_reparent_objcgs() <== reparented * - * unlock_page_memcg(page) + * unlock_page_objcg(page) * memcg = folio_memcg() <== memcg has been changed * if (memcg->move_lock_task == current) <== false * spin_unlock_irqrestore(&memcg->move_lock) * * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex * would be released soon), the page can be reparented to its parent - * memcg. When the unlock_page_memcg() is called for the page, we will + * memcg. When the unlock_page_objcg() is called for the page, we will * miss unlock the move_lock. So using synchronize_rcu to wait for * already started RCU-only updates to finish before this function * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 91d163f8d36b..9886da05ca7f 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2441,7 +2441,7 @@ EXPORT_SYMBOL(__set_page_dirty_no_writeback); /* * Helper function for set_page_dirty family. * - * Caller must hold lock_page_memcg(). + * Caller must hold lock_page_objcg(). * * NOTE: This relies on being atomic wrt interrupts. */ @@ -2475,7 +2475,7 @@ static void folio_account_dirtied(struct folio *folio, /* * Helper function for deaccounting dirty page without writeback. * - * Caller must hold lock_page_memcg(). + * Caller must hold lock_page_objcg(). 
*/ void folio_account_cleaned(struct folio *folio, struct address_space *mapping, struct bdi_writeback *wb) @@ -2496,7 +2496,7 @@ void folio_account_cleaned(struct folio *folio, struct address_space *mapping, * If warn is true, then emit a warning if the folio is not uptodate and has * not been truncated. * - * The caller must hold lock_page_memcg(). Most callers have the folio + * The caller must hold lock_page_objcg(). Most callers have the folio * locked. A few have the folio blocked from truncation through other * means (eg zap_page_range() has it mapped and is holding the page table * lock). This can also be called from mark_buffer_dirty(), which I diff --git a/mm/rmap.c b/mm/rmap.c index 6a1e8c7f6213..29dcdd4eb76f 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -32,7 +32,7 @@ * swap_lock (in swap_duplicate, swap_info_get) * mmlist_lock (in mmput, drain_mmlist and others) * mapping->private_lock (in __set_page_dirty_buffers) - * lock_page_memcg move_lock (in __set_page_dirty_buffers) + * lock_page_objcg move_lock (in __set_page_dirty_buffers) * i_pages lock (widely used) * lruvec->lru_lock (in folio_lruvec_lock_irq) * inode->i_lock (in set_page_dirty's __mark_inode_dirty) @@ -1154,7 +1154,7 @@ void do_page_add_anon_rmap(struct page *page, bool first; if (unlikely(PageKsm(page))) - lock_page_memcg(page); + lock_page_objcg(page); else VM_BUG_ON_PAGE(!PageLocked(page), page); @@ -1182,7 +1182,7 @@ void do_page_add_anon_rmap(struct page *page, } if (unlikely(PageKsm(page))) { - unlock_page_memcg(page); + unlock_page_objcg(page); return; } @@ -1242,7 +1242,7 @@ void page_add_file_rmap(struct page *page, bool compound) int i, nr = 1; VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page); - lock_page_memcg(page); + lock_page_objcg(page); if (compound && PageTransHuge(page)) { int nr_pages = thp_nr_pages(page); @@ -1273,7 +1273,7 @@ void page_add_file_rmap(struct page *page, bool compound) } __mod_lruvec_page_state(page, NR_FILE_MAPPED, nr); out: - unlock_page_memcg(page); + unlock_page_objcg(page); } static void page_remove_file_rmap(struct page *page, bool compound) @@ -1374,7 +1374,7 @@ static void page_remove_anon_compound_rmap(struct page *page) */ void page_remove_rmap(struct page *page, bool compound) { - lock_page_memcg(page); + lock_page_objcg(page); if (!PageAnon(page)) { page_remove_file_rmap(page, compound); @@ -1413,7 +1413,7 @@ void page_remove_rmap(struct page *page, bool compound) * faster for those pages still in swapcache. 
*/ out: - unlock_page_memcg(page); + unlock_page_objcg(page); } /*
From patchwork Wed Feb 16 11:51:31 2022 X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12748468 From: Muchun Song To: guro@fb.com,
hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song Subject: [PATCH v3 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function Date: Wed, 16 Feb 2022 19:51:31 +0800 Message-Id: <20220216115132.52602-12-songmuchun@bytedance.com> X-Mailer: git-send-email 2.32.0 (Apple Git-132) In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com> References: <20220216115132.52602-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 4EC60180003 X-Rspam-User: Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b="Hv/yV6E/"; spf=pass (imf24.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.216.53 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com X-Stat-Signature: mm6pndn4a7b9sk5xcydnmqxycg5mwgbw X-HE-Tag: 1645012389-472624 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We need to make sure that the page is deleted from or added to the correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid users. Signed-off-by: Muchun Song --- include/linux/mm_inline.h | 15 ++++++++++++--- mm/vmscan.c | 1 - 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index b725839dfe71..8cc134fd3f0b 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -105,7 +105,10 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio) static __always_inline void add_page_to_lru_list(struct page *page, struct lruvec *lruvec) { - lruvec_add_folio(lruvec, page_folio(page)); + struct folio *folio = page_folio(page); + + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + lruvec_add_folio(lruvec, folio); } static __always_inline @@ -121,7 +124,10 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio) static __always_inline void add_page_to_lru_list_tail(struct page *page, struct lruvec *lruvec) { - lruvec_add_folio_tail(lruvec, page_folio(page)); + struct folio *folio = page_folio(page); + + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + lruvec_add_folio_tail(lruvec, folio); } static __always_inline @@ -135,7 +141,10 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio) static __always_inline void del_page_from_lru_list(struct page *page, struct lruvec *lruvec) { - lruvec_del_folio(lruvec, page_folio(page)); + struct folio *folio = page_folio(page); + + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + lruvec_del_folio(lruvec, folio); } #ifdef CONFIG_ANON_VMA_NAME diff --git a/mm/vmscan.c b/mm/vmscan.c index 7beed9041e0a..00207553c419 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2356,7 +2356,6 @@ static unsigned int move_pages_to_lru(struct list_head *list) continue; } - VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page); add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); nr_moved += nr_pages; From patchwork Wed Feb 16 11:51:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Muchun Song X-Patchwork-Id: 12748469 From: Muchun Song To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com,
alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song Subject: [PATCH v3 12/12] mm: lru: use lruvec lock to serialize memcg changes Date: Wed, 16 Feb 2022 19:51:32 +0800 Message-Id: <20220216115132.52602-13-songmuchun@bytedance.com> In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com> References: <20220216115132.52602-1-songmuchun@bytedance.com>

As described by commit fc574c23558c ("mm/swap.c: serialize memcg changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to serialize mem_cgroup_move_account() during pagevec_lru_move_fn(). Now folio_lruvec_lock*() can detect whether the page memcg has been changed, so we can use the lruvec lock to serialize mem_cgroup_move_account() during pagevec_lru_move_fn(). This change is a partial revert of commit fc574c23558c ("mm/swap.c: serialize memcg changes in pagevec_lru_move_fn"). pagevec_lru_move_fn() is much hotter than mem_cgroup_move_account(), so removing an atomic operation from that path is a worthwhile optimization. This change also avoids dirtying the cacheline of a page which isn't on the LRU.

Signed-off-by: Muchun Song --- mm/memcontrol.c | 32 +++++++++++++++++++++++++++++++- mm/swap.c | 45 ++++++++++++++------------------------------- mm/vmscan.c | 9 ++++----- 3 files changed, 49 insertions(+), 37 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 9531bdb6ede3..0a28f87b68c0 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1316,13 +1316,38 @@ struct lruvec *folio_lruvec_lock(struct folio *folio) lruvec = folio_lruvec(folio); spin_lock(&lruvec->lru_lock); - + /* + * The memcg of the page can be changed by any of the following routines: + * + * 1) mem_cgroup_move_account() or + * 2) memcg_reparent_objcgs() + * + * The possible bad scenario would look like: + * + * CPU0: CPU1: CPU2: + * lruvec = folio_lruvec() + * + * if (!isolate_lru_page()) + * mem_cgroup_move_account() + * + * memcg_reparent_objcgs() + * + * spin_lock(&lruvec->lru_lock) + * ^^^^^^ + * wrong lock + * + * Either CPU1 or CPU2 can change page memcg, so we need to check + * whether page memcg is changed, if so, we should reacquire the + * new lruvec lock. + */ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { spin_unlock(&lruvec->lru_lock); goto retry; } /* + * When we reach here, it means that the folio_memcg(folio) is stable. + * * Preemption is disabled in the internal of spin_lock, which can serve * as RCU read-side critical sections. */ @@ -1353,6 +1378,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) lruvec = folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); + /* See the comments in folio_lruvec_lock().
*/ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { spin_unlock_irq(&lruvec->lru_lock); goto retry; @@ -1388,6 +1414,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, lruvec = folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); + /* See the comments in folio_lruvec_lock(). */ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) { spin_unlock_irqrestore(&lruvec->lru_lock, *flags); goto retry; @@ -5834,7 +5861,10 @@ static int mem_cgroup_move_account(struct page *page, obj_cgroup_put(rcu_dereference(from->objcg)); rcu_read_unlock(); + /* See the comments in folio_lruvec_lock(). */ + spin_lock(&from_vec->lru_lock); folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg); + spin_unlock(&from_vec->lru_lock); __folio_memcg_unlock(from); diff --git a/mm/swap.c b/mm/swap.c index 9c2bcc2651c6..b9022fbbb70f 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -201,14 +201,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, struct page *page = pvec->pages[i]; struct folio *folio = page_folio(page); - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) - continue; - lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); (*move_fn)(page, lruvec); - - SetPageLRU(page); } if (lruvec) unlock_page_lruvec_irqrestore(lruvec, flags); @@ -220,7 +214,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec) { struct folio *folio = page_folio(page); - if (!folio_test_unevictable(folio)) { + if (folio_test_lru(folio) && !folio_test_unevictable(folio)) { lruvec_del_folio(lruvec, folio); folio_clear_active(folio); lruvec_add_folio_tail(lruvec, folio); @@ -315,7 +309,8 @@ void lru_note_cost_folio(struct folio *folio) static void __folio_activate(struct folio *folio, struct lruvec *lruvec) { - if (!folio_test_active(folio) && !folio_test_unevictable(folio)) { + if (folio_test_lru(folio) && !folio_test_active(folio) && + !folio_test_unevictable(folio)) { long nr_pages = folio_nr_pages(folio); lruvec_del_folio(lruvec, folio); @@ -372,12 +367,9 @@ static void folio_activate(struct folio *folio) { struct lruvec *lruvec; - if (folio_test_clear_lru(folio)) { - lruvec = folio_lruvec_lock_irq(folio); - __folio_activate(folio, lruvec); - unlock_page_lruvec_irq(lruvec); - folio_set_lru(folio); - } + lruvec = folio_lruvec_lock_irq(folio); + __folio_activate(folio, lruvec); + unlock_page_lruvec_irq(lruvec); } #endif @@ -530,6 +522,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) bool active = PageActive(page); int nr_pages = thp_nr_pages(page); + if (!PageLRU(page)) + return; + if (PageUnevictable(page)) return; @@ -567,7 +562,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) { - if (PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -583,7 +578,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec) { - if (PageAnon(page) && PageSwapBacked(page) && + if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page) && !PageUnevictable(page)) { int nr_pages = thp_nr_pages(page); @@ -1006,8 +1001,9 @@ void __pagevec_release(struct pagevec *pvec) } EXPORT_SYMBOL(__pagevec_release); -static void __pagevec_lru_add_fn(struct folio *folio, 
struct lruvec *lruvec) +static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) { + struct folio *folio = page_folio(page); int was_unevictable = folio_test_clear_unevictable(folio); long nr_pages = folio_nr_pages(folio); @@ -1064,20 +1060,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec) */ void __pagevec_lru_add(struct pagevec *pvec) { - int i; - struct lruvec *lruvec = NULL; - unsigned long flags = 0; - - for (i = 0; i < pagevec_count(pvec); i++) { - struct folio *folio = page_folio(pvec->pages[i]); - - lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); - __pagevec_lru_add_fn(folio, lruvec); - } - if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); - release_pages(pvec->pages, pvec->nr); - pagevec_reinit(pvec); + pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn); } /** diff --git a/mm/vmscan.c b/mm/vmscan.c index 00207553c419..23d6f91b483a 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4868,18 +4868,17 @@ void check_move_unevictable_pages(struct pagevec *pvec) nr_pages = thp_nr_pages(page); pgscanned += nr_pages; - /* block memcg migration during page moving between lru */ - if (!TestClearPageLRU(page)) + lruvec = folio_lruvec_relock_irq(folio, lruvec); + + if (!PageLRU(page) || !PageUnevictable(page)) continue; - lruvec = folio_lruvec_relock_irq(folio, lruvec); - if (page_evictable(page) && PageUnevictable(page)) { + if (page_evictable(page)) { del_page_from_lru_list(page, lruvec); ClearPageUnevictable(page); add_page_to_lru_list(page, lruvec); pgrescued += nr_pages; } - SetPageLRU(page); } if (lruvec) {