From patchwork Fri Jul 9 00:05:07 2021
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 12366349
Date: Thu, 8 Jul 2021 17:05:07 -0700
In-Reply-To: <20210709000509.2618345-1-surenb@google.com>
Message-Id: <20210709000509.2618345-2-surenb@google.com>
References: <20210709000509.2618345-1-surenb@google.com>
Subject: [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure
 and swap-related functions
From: Suren Baghdasaryan
To: tj@kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
 akpm@linux-foundation.org, shakeelb@google.com, guro@fb.com,
 songmuchun@bytedance.com, shy828301@gmail.com, alexs@kernel.org,
 alexander.h.duyck@linux.intel.com, richard.weiyang@gmail.com,
 vbabka@suse.cz, axboe@kernel.dk, iamjoonsoo.kim@lge.com,
 david@redhat.com, willy@infradead.org, apopple@nvidia.com,
 minchan@kernel.org, linmiaohe@huawei.com, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, kernel-team@android.com,
 surenb@google.com

Add mem_cgroup_disabled() checks in the vmpressure(),
mem_cgroup_uncharge_swap() and cgroup_throttle_swaprate() functions.
This minimizes the memcg overhead in the pagefault and exit_mmap paths
when memcgs are disabled using the cgroup_disable=memory command-line
option.

This change results in ~2.1% overhead reduction when running the PFT
test, comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against
{CONFIG_MEMCG=y, CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} on an
8-core ARM64 Android device.
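As a note for reviewers: the pattern this patch adds is an early
bail-out behind a cheap check. The few lines of plain C below are an
illustrative sketch of that shape, not kernel code; memcg_disabled,
charge() and charge_slow_path() are hypothetical stand-ins. In the
kernel the check is mem_cgroup_disabled(), a static-branch test that
costs a single patched jump when memcgs are enabled.

  #include <stdbool.h>
  #include <stdio.h>

  static bool memcg_disabled = true;	/* models cgroup_disable=memory */

  static void charge_slow_path(int nr_pages)
  {
  	/* stands in for the real accounting work */
  	printf("charging %d page(s)\n", nr_pages);
  }

  static void charge(int nr_pages)
  {
  	if (memcg_disabled)	/* the early return this patch adds */
  		return;
  	charge_slow_path(nr_pages);
  }

  int main(void)
  {
  	charge(1);		/* no-op: memcg disabled */
  	memcg_disabled = false;
  	charge(1);		/* takes the accounting path */
  	return 0;
  }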
Signed-off-by: Suren Baghdasaryan
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
---
 mm/memcontrol.c | 3 +++
 mm/swapfile.c   | 3 +++
 mm/vmpressure.c | 7 ++++++-
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ae1f5d0cb581..a228cd51c4bd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
+	if (mem_cgroup_disabled())
+		return;
+
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1e07d1c776f2..707fa0481bb4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
+	if (mem_cgroup_disabled())
+		return;
+
 	if (!(gfp_mask & __GFP_IO))
 		return;
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index d69019fc3789..9b172561fded 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
 void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
 		unsigned long scanned, unsigned long reclaimed)
 {
-	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+	struct vmpressure *vmpr;
+
+	if (mem_cgroup_disabled())
+		return;
+
+	vmpr = memcg_to_vmpressure(memcg);
 
 	/*
 	 * Here we only want to account pressure that userland is able to

From patchwork Fri Jul 9 00:05:08 2021
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 12366351
Date: Thu, 8 Jul 2021 17:05:08 -0700
In-Reply-To: <20210709000509.2618345-1-surenb@google.com>
Message-Id: <20210709000509.2618345-3-surenb@google.com>
References: <20210709000509.2618345-1-surenb@google.com>
Subject: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to
 improve disabled memcg config
From: Suren Baghdasaryan
To: tj@kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
 akpm@linux-foundation.org, shakeelb@google.com, guro@fb.com,
 songmuchun@bytedance.com, shy828301@gmail.com, alexs@kernel.org,
 alexander.h.duyck@linux.intel.com, richard.weiyang@gmail.com,
 vbabka@suse.cz, axboe@kernel.dk, iamjoonsoo.kim@lge.com,
 david@redhat.com, willy@infradead.org, apopple@nvidia.com,
 minchan@kernel.org, linmiaohe@huawei.com, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, kernel-team@android.com,
 surenb@google.com
Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list so that
the mem_cgroup_disabled() static key check is performed inline, before
calling the main body of each function. This minimizes the memcg
overhead in the pagefault and exit_mmap paths when memcgs are disabled
using the cgroup_disable=memory command-line option.

This change results in ~0.4% overhead reduction when running the PFT
test, comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
cgroup_disable=memory} on an 8-core ARM64 Android device.
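The split used here is a general technique: keep a cheap inline wrapper
in the header and move the heavy body out of line behind a "__" prefix,
so a disabled subsystem costs one predictable branch and no function
call at every call site. The snippet below is an illustrative plain-C
sketch of that shape under hypothetical names (feature_disabled,
do_work, __do_work); it is not the kernel implementation.

  #include <stdbool.h>
  #include <stdio.h>

  static bool feature_disabled = true;

  /* out-of-line slow path (the "__"-prefixed body in the patch) */
  static void __do_work(int arg)
  {
  	printf("slow path: %d\n", arg);
  }

  /* header-style inline wrapper: no call at all when disabled */
  static inline void do_work(int arg)
  {
  	if (feature_disabled)
  		return;
  	__do_work(arg);
  }

  int main(void)
  {
  	do_work(42);		/* disabled: folds to a single branch */
  	feature_disabled = false;
  	do_work(42);		/* enabled: calls the out-of-line body */
  	return 0;
  }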
Signed-off-by: Suren Baghdasaryan
---
 include/linux/memcontrol.h | 54 ++++++++++++++++++++++++++++++++++----
 mm/memcontrol.c            | 43 +++---------------------------
 2 files changed, 53 insertions(+), 44 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfe5c486f4ad..480815feb116 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
+
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			gfp_t gfp);
+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
+ *
+ * Do not use this for pages allocated for swapin.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	if (mem_cgroup_disabled())
+		return 0;
+
+	memcg = get_mem_cgroup_from_mm(mm);
+	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	css_put(&memcg->css);
+
+	return ret;
+}
+
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 
-void mem_cgroup_uncharge(struct page *page);
-void mem_cgroup_uncharge_list(struct list_head *page_list);
+void __mem_cgroup_uncharge(struct page *page);
+static inline void mem_cgroup_uncharge(struct page *page)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge(page);
+}
+
+void __mem_cgroup_uncharge_list(struct list_head *page_list);
+static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_list(page_list);
+}
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
 
@@ -756,8 +802,6 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
 
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
-struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
-
 struct lruvec *lock_page_lruvec(struct page *page);
 struct lruvec *lock_page_lruvec_irq(struct page *page);
 struct lruvec *lock_page_lruvec_irqsave(struct page *page,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a228cd51c4bd..da677b55b2fe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6701,8 +6701,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
 
-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
-			       gfp_t gfp)
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			gfp_t gfp)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	int ret;
@@ -6722,35 +6722,6 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 	return ret;
 }
 
-/**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
- *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary. if @mm is NULL, try to
- * charge to the active memcg.
- *
- * Do not use this for pages allocated for swapin.
- *
- * Returns 0 on success. Otherwise, an error code is returned.
- */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
-{
-	struct mem_cgroup *memcg;
-	int ret;
-
-	if (mem_cgroup_disabled())
-		return 0;
-
-	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
-	css_put(&memcg->css);
-
-	return ret;
-}
-
 /**
  * mem_cgroup_swapin_charge_page - charge a newly allocated page for swapin
  * @page: page to charge
@@ -6921,13 +6892,10 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
  *
  * Uncharge a page previously charged with mem_cgroup_charge().
  */
-void mem_cgroup_uncharge(struct page *page)
+void __mem_cgroup_uncharge(struct page *page)
 {
 	struct uncharge_gather ug;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	/* Don't touch page->lru of any random page, pre-check: */
 	if (!page_memcg(page))
 		return;
@@ -6944,14 +6912,11 @@ void mem_cgroup_uncharge(struct page *page)
  * Uncharge a list of pages previously charged with
  * mem_cgroup_charge().
  */
-void mem_cgroup_uncharge_list(struct list_head *page_list)
+void __mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 	struct uncharge_gather ug;
 	struct page *page;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(page, page_list, lru)
 		uncharge_page(page, &ug);

From patchwork Fri Jul 9 00:05:09 2021
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 12366353
Date: Thu, 8 Jul 2021 17:05:09 -0700
In-Reply-To: <20210709000509.2618345-1-surenb@google.com>
Message-Id: <20210709000509.2618345-4-surenb@google.com>
References: <20210709000509.2618345-1-surenb@google.com>
Subject: [PATCH 3/3] mm, memcg: inline swap-related functions to improve
 disabled memcg config
From: Suren Baghdasaryan
To: tj@kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
 akpm@linux-foundation.org, shakeelb@google.com, guro@fb.com,
 songmuchun@bytedance.com, shy828301@gmail.com, alexs@kernel.org,
 alexander.h.duyck@linux.intel.com, richard.weiyang@gmail.com,
 vbabka@suse.cz, axboe@kernel.dk, iamjoonsoo.kim@lge.com,
 david@redhat.com, willy@infradead.org, apopple@nvidia.com,
 minchan@kernel.org, linmiaohe@huawei.com, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, kernel-team@android.com,
 surenb@google.com
Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
cgroup_throttle_swaprate so that the mem_cgroup_disabled() static key
check is performed inline, before calling the main body of each
function. This minimizes the memcg overhead in the pagefault and
exit_mmap paths when memcgs are disabled using the cgroup_disable=memory
command-line option.

This change results in ~1% overhead reduction when running the PFT test,
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
cgroup_disable=memory} on an 8-core ARM64 Android device.
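One detail worth noting is the return-value variant of the wrapper:
mem_cgroup_try_charge_swap() must report success (0) when memcgs are
disabled, so the inline fast path returns a value instead of merely
bailing out. The plain-C sketch below illustrates that shape under
hypothetical names (memcg_disabled, try_charge_swap,
__try_charge_swap); it is not the kernel code.

  #include <stdbool.h>
  #include <stdio.h>

  static bool memcg_disabled = true;

  static int __try_charge_swap(int nr_pages)
  {
  	printf("charging %d swap page(s)\n", nr_pages);
  	return 0;	/* the real body can also fail with -ENOMEM */
  }

  static inline int try_charge_swap(int nr_pages)
  {
  	if (memcg_disabled)
  		return 0;	/* disabled: report success, charge nothing */
  	return __try_charge_swap(nr_pages);
  }

  int main(void)
  {
  	printf("disabled -> %d\n", try_charge_swap(1));
  	memcg_disabled = false;
  	printf("enabled  -> %d\n", try_charge_swap(1));
  	return 0;
  }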
Signed-off-by: Suren Baghdasaryan
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
---
 include/linux/swap.h | 26 +++++++++++++++++++++++---
 mm/memcontrol.c      | 12 +++---------
 mm/swapfile.c        |  5 +----
 3 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 6f5a43251593..f30d26b0f71d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -721,7 +721,13 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
 #endif
 
 #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__cgroup_throttle_swaprate(page, gfp_mask);
+}
 #else
 static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
@@ -730,8 +736,22 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 
 #ifdef CONFIG_MEMCG_SWAP
 extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry);
-extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
-extern void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
+static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+{
+	if (mem_cgroup_disabled())
+		return 0;
+	return __mem_cgroup_try_charge_swap(page, entry);
+}
+
+extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_swap(entry, nr_pages);
+}
+
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
 extern bool mem_cgroup_swap_full(struct page *page);
 #else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index da677b55b2fe..43f3f50a4751 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7208,7 +7208,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 }
 
 /**
- * mem_cgroup_try_charge_swap - try charging swap space for a page
+ * __mem_cgroup_try_charge_swap - try charging swap space for a page
  * @page: page being added to swap
  * @entry: swap entry to charge
  *
@@ -7216,16 +7216,13 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
  *
  * Returns 0 on success, -ENOMEM on failure.
  */
-int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
-	if (mem_cgroup_disabled())
-		return 0;
-
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
@@ -7265,14 +7262,11 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
  * @entry: swap entry to uncharge
  * @nr_pages: the amount of swap space to uncharge
  */
-void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 {
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 707fa0481bb4..04a0c83f1313 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3773,14 +3773,11 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
 }
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
-	if (mem_cgroup_disabled())
-		return;
-
 	if (!(gfp_mask & __GFP_IO))
 		return;