From patchwork Thu Mar  4 01:42:29 2021
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12115317
Date: Wed,  3 Mar 2021 17:42:29 -0800
Message-Id: <20210304014229.521351-1-shakeelb@google.com>
X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog
Subject: [PATCH v3] memcg: charge before adding to swapcache on swapin
From: Shakeel Butt
To: Hugh Dickins, Johannes Weiner
Cc: Roman Gushchin, Michal Hocko, Andrew Morton, cgroups@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt

Currently the kernel adds a page allocated for swapin to the swapcache
before charging it. This works, but we now want a per-memcg swapcache
stat, which is essential for folks who want to transparently migrate
from cgroup v1's memsw to cgroup v2's memory and swap counters. In
addition, charging a page before exposing it to other parts of the
kernel is a step in the right direction.

To correctly maintain the per-memcg swapcache stat, this patch charges
the page before adding it to the swapcache. The challenge with this
ordering is the failure case of add_to_swap_cache(), after which the
mem_cgroup_charge() has to be undone; specifically, undoing the
mem_cgroup_uncharge_swap() it performs is not simple. To resolve this,
the patch introduces a transaction-like interface for charging a page
for swapin: mem_cgroup_charge_swapin_page() initiates the charge and
mem_cgroup_finish_swapin_page() completes it. So the kernel starts the
charging process of a page for swapin with
mem_cgroup_charge_swapin_page(), adds the page to the swapcache, and on
success completes the process with mem_cgroup_finish_swapin_page().
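To illustrate, the resulting caller pattern looks roughly like this (a
minimal sketch, not itself part of the patch; it assumes a freshly
allocated, locked page and mirrors the __read_swap_cache_async() hunk
below):

	/* Begin the swapin charge transaction before the page is visible. */
	if (mem_cgroup_charge_swapin_page(page, mm, gfp_mask, entry))
		goto fail;	/* charge failed, nothing to undo */

	/* Expose the already-charged page to the rest of the kernel. */
	if (add_to_swap_cache(page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
		goto fail;	/* freeing the page drops the charge */

	/*
	 * Success: complete the transaction; for cgroup1's memsw this
	 * drops the now-duplicate swap entry charge.
	 */
	mem_cgroup_finish_swapin_page(page, entry);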
Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
Acked-by: Hugh Dickins
---
Changes since v2:
- fixed build for !CONFIG_MEMCG
- simplified failure path from add_to_swap_cache()

Changes since v1:
- Removed __GFP_NOFAIL and introduced transaction interface for charging
  (suggested by Johannes)
- Updated the commit message

 include/linux/memcontrol.h |  14 +++++
 mm/memcontrol.c            | 116 +++++++++++++++++++++++--------------
 mm/memory.c                |  14 ++---
 mm/swap_state.c            |  13 ++---
 4 files changed, 97 insertions(+), 60 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e6dc793d587d..d31e6dca397f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -596,6 +596,9 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 }

 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+int mem_cgroup_charge_swapin_page(struct page *page, struct mm_struct *mm,
+				  gfp_t gfp, swp_entry_t entry);
+void mem_cgroup_finish_swapin_page(struct page *page, swp_entry_t entry);
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);

@@ -1141,6 +1144,17 @@ static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
 	return 0;
 }

+static inline int mem_cgroup_charge_swapin_page(struct page *page,
+			struct mm_struct *mm, gfp_t gfp, swp_entry_t entry)
+{
+	return 0;
+}
+
+static inline void mem_cgroup_finish_swapin_page(struct page *page,
+						 swp_entry_t entry)
+{
+}
+
 static inline void mem_cgroup_uncharge(struct page *page)
 {
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2db2aeac8a9e..226b7bccb44c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6690,6 +6690,27 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }

+static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			       gfp_t gfp)
+{
+	unsigned int nr_pages = thp_nr_pages(page);
+	int ret;
+
+	ret = try_charge(memcg, gfp, nr_pages);
+	if (ret)
+		goto out;
+
+	css_get(&memcg->css);
+	commit_charge(page, memcg);
+
+	local_irq_disable();
+	mem_cgroup_charge_statistics(memcg, page, nr_pages);
+	memcg_check_events(memcg, page);
+	local_irq_enable();
+out:
+	return ret;
+}
+
 /**
  * mem_cgroup_charge - charge a newly allocated page to a cgroup
  * @page: page to charge
@@ -6699,55 +6720,70 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
  * pages according to @gfp_mask if necessary.
  *
+ * Do not use this for pages allocated for swapin.
+ *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
-	unsigned int nr_pages = thp_nr_pages(page);
-	struct mem_cgroup *memcg = NULL;
-	int ret = 0;
+	struct mem_cgroup *memcg;
+	int ret;

 	if (mem_cgroup_disabled())
-		goto out;
+		return 0;

-	if (PageSwapCache(page)) {
-		swp_entry_t ent = { .val = page_private(page), };
-		unsigned short id;
+	memcg = get_mem_cgroup_from_mm(mm);
+	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	css_put(&memcg->css);

-		/*
-		 * Every swap fault against a single page tries to charge the
-		 * page, bail as early as possible.  shmem_unuse() encounters
-		 * already charged pages, too.  page and memcg binding is
-		 * protected by the page lock, which serializes swap cache
-		 * removal, which in turn serializes uncharging.
-		 */
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		if (page_memcg(compound_head(page)))
-			goto out;
+	return ret;
+}

-		id = lookup_swap_cgroup_id(ent);
-		rcu_read_lock();
-		memcg = mem_cgroup_from_id(id);
-		if (memcg && !css_tryget_online(&memcg->css))
-			memcg = NULL;
-		rcu_read_unlock();
-	}
+/**
+ * mem_cgroup_charge_swapin_page - charge a newly allocated page for swapin
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp: reclaim mode
+ * @entry: swap entry for which the page is allocated
+ *
+ * This function marks the start of the transaction of charging the page for
+ * swapin. Complete the transaction with mem_cgroup_finish_swapin_page().
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+int mem_cgroup_charge_swapin_page(struct page *page, struct mm_struct *mm,
+				  gfp_t gfp, swp_entry_t entry)
+{
+	struct mem_cgroup *memcg;
+	unsigned short id;
+	int ret;

-	if (!memcg)
-		memcg = get_mem_cgroup_from_mm(mm);
+	if (mem_cgroup_disabled())
+		return 0;

-	ret = try_charge(memcg, gfp_mask, nr_pages);
-	if (ret)
-		goto out_put;
+	id = lookup_swap_cgroup_id(entry);
+	rcu_read_lock();
+	memcg = mem_cgroup_from_id(id);
+	if (!memcg || !css_tryget_online(&memcg->css))
+		memcg = get_mem_cgroup_from_mm(mm);
+	rcu_read_unlock();

-	css_get(&memcg->css);
-	commit_charge(page, memcg);
+	ret = __mem_cgroup_charge(page, memcg, gfp);

-	local_irq_disable();
-	mem_cgroup_charge_statistics(memcg, page, nr_pages);
-	memcg_check_events(memcg, page);
-	local_irq_enable();
+	css_put(&memcg->css);
+	return ret;
+}

+/*
+ * mem_cgroup_finish_swapin_page - complete the swapin page charge transaction
+ * @page: page charged for swapin
+ * @entry: swap entry for which the page is charged
+ *
+ * This function completes the transaction of charging the page allocated for
+ * swapin.
+ */
+void mem_cgroup_finish_swapin_page(struct page *page, swp_entry_t entry)
+{
 	/*
 	 * Cgroup1's unified memory+swap counter has been charged with the
 	 * new swapcache page, finish the transfer by uncharging the swap
@@ -6760,20 +6796,14 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 	 * correspond 1:1 to page and swap slot lifetimes: we charge the
 	 * page to memory here, and uncharge swap when the slot is freed.
 	 */
-	if (do_memsw_account() && PageSwapCache(page)) {
-		swp_entry_t entry = { .val = page_private(page) };
+	if (!mem_cgroup_disabled() && do_memsw_account()) {
 		/*
 		 * The swap entry might not get freed for a long time,
 		 * let's not wait for it. The page already received a
 		 * memory+swap charge, drop the swap entry duplicate.
		 */
-		mem_cgroup_uncharge_swap(entry, nr_pages);
+		mem_cgroup_uncharge_swap(entry, thp_nr_pages(page));
 	}
-
-out_put:
-	css_put(&memcg->css);
-out:
-	return ret;
 }

 struct uncharge_gather {
diff --git a/mm/memory.c b/mm/memory.c
index c8e357627318..4cd3cd95bb70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3307,21 +3307,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
 							vmf->address);
 			if (page) {
-				int err;
-
 				__SetPageLocked(page);
 				__SetPageSwapBacked(page);
-				set_page_private(page, entry.val);
-
-				/* Tell memcg to use swap ownership records */
-				SetPageSwapCache(page);
-				err = mem_cgroup_charge(page, vma->vm_mm,
-							GFP_KERNEL);
-				ClearPageSwapCache(page);
-				if (err) {
+
+				if (mem_cgroup_charge_swapin_page(page,
+					vma->vm_mm, GFP_KERNEL, entry)) {
 					ret = VM_FAULT_OOM;
 					goto out_page;
 				}
+				mem_cgroup_finish_swapin_page(page, entry);

 				shadow = get_shadow_from_swap_cache(entry);
 				if (shadow)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3cdee7b11da9..e69a8df7da33 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -497,16 +497,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	__SetPageLocked(page);
 	__SetPageSwapBacked(page);

-	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow)) {
-		put_swap_page(page, entry);
+	if (mem_cgroup_charge_swapin_page(page, NULL, gfp_mask, entry))
 		goto fail_unlock;
-	}

-	if (mem_cgroup_charge(page, NULL, gfp_mask)) {
-		delete_from_swap_cache(page);
+	/* May fail (-ENOMEM) if XArray node allocation failed. */
+	if (add_to_swap_cache(page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;
-	}
+
+	mem_cgroup_finish_swapin_page(page, entry);

 	if (shadow)
 		workingset_refault(page, shadow);
@@ -517,6 +515,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	return page;

 fail_unlock:
+	put_swap_page(page, entry);
 	unlock_page(page);
 	put_page(page);
 	return NULL;
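An aside on the simplified failure path, as I read the code (the patch
itself does not spell this out): because the charge now precedes
add_to_swap_cache(), a swapcache insertion failure needs no explicit
uncharge. The common fail_unlock path releases the swap slot reference
with put_swap_page() and frees the page with put_page(); freeing the
charged page drops the memcg charge through the regular
uncharge-on-free path, and since mem_cgroup_finish_swapin_page() is
never reached, the cgroup1 swap entry charge stays in place to match
the still-occupied swap slot.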