From patchwork Fri Jul 31 21:14:16 2020
X-Patchwork-Submitter: "Duyck, Alexander H"
X-Patchwork-Id: 11695467
Subject: [PATCH RFC] mm: Add function for testing if the current lruvec lock is valid
From: alexander.h.duyck@intel.com
To: alex.shi@linux.alibaba.com
Cc: akpm@linux-foundation.org, alexander.duyck@gmail.com,
    aryabinin@virtuozzo.com, cgroups@vger.kernel.org, daniel.m.jordan@oracle.com,
    hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com,
    khlebnikov@yandex-team.ru, kirill@shutemov.name, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, lkp@intel.com, mgorman@techsingularity.net,
    richard.weiyang@gmail.com, rong.a.chen@intel.com, shakeelb@google.com,
    tglx@linutronix.de, tj@kernel.org, willy@infradead.org,
    yang.shi@linux.alibaba.com
Date: Fri, 31 Jul 2020 14:14:16 -0700
Message-ID: <159622999150.2576729.14455020813024958573.stgit@ahduyck-desk1.jf.intel.com>
In-Reply-To: <1595681998-19193-19-git-send-email-alex.shi@linux.alibaba.com>
References: <1595681998-19193-19-git-send-email-alex.shi@linux.alibaba.com>
User-Agent: StGit/0.19

From: Alexander Duyck <alexander.h.duyck@intel.com>

When testing for relock we can avoid the need for RCU locking if we
simply compare the page's pgdat and memcg pointers against those that
the lruvec is holding. By doing this we can avoid the extra pointer
walks and accesses of the memory cgroup.

In addition, we can skip the checks entirely if the lruvec is currently
NULL.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 include/linux/memcontrol.h |   52 +++++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 20 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6e670f991b42..7a02f00bf3de 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -405,6 +405,22 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
 
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
+static inline bool lruvec_holds_page_lru_lock(struct page *page,
+					      struct lruvec *lruvec)
+{
+	pg_data_t *pgdat = page_pgdat(page);
+	const struct mem_cgroup *memcg;
+	struct mem_cgroup_per_node *mz;
+
+	if (mem_cgroup_disabled())
+		return lruvec == &pgdat->__lruvec;
+
+	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+	memcg = page->mem_cgroup ? : root_mem_cgroup;
+
+	return lruvec->pgdat == pgdat && mz->memcg == memcg;
+}
+
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
@@ -880,6 +896,14 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
 	return &pgdat->__lruvec;
 }
 
+static inline bool lruvec_holds_page_lru_lock(struct page *page,
+					      struct lruvec *lruvec)
+{
+	pg_data_t *pgdat = page_pgdat(page);
+
+	return lruvec == &pgdat->__lruvec;
+}
+
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
 	return NULL;
 }
@@ -1317,18 +1341,12 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
 static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 		struct lruvec *locked_lruvec)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
-	bool locked;
+	if (locked_lruvec) {
+		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
+			return locked_lruvec;
 
-	rcu_read_lock();
-	locked = mem_cgroup_page_lruvec(page, pgdat) == locked_lruvec;
-	rcu_read_unlock();
-
-	if (locked)
-		return locked_lruvec;
-
-	if (locked_lruvec)
 		unlock_page_lruvec_irq(locked_lruvec);
+	}
 
 	return lock_page_lruvec_irq(page);
 }
@@ -1337,18 +1355,12 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 		struct lruvec *locked_lruvec, unsigned long *flags)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
-	bool locked;
-
-	rcu_read_lock();
-	locked = mem_cgroup_page_lruvec(page, pgdat) == locked_lruvec;
-	rcu_read_unlock();
-
-	if (locked)
-		return locked_lruvec;
+	if (locked_lruvec) {
+		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
+			return locked_lruvec;
 
-	if (locked_lruvec)
 		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+	}
 
 	return lock_page_lruvec_irqsave(page, flags);
 }
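
For context (not part of the patch): below is a hedged sketch of how a
caller typically uses the relock helpers when walking a batch of pages,
which is the path this change is meant to speed up. The function name
example_move_pages() and the move_fn callback are hypothetical and only
for illustration; relock_page_lruvec_irqsave(),
unlock_page_lruvec_irqrestore() and struct pagevec come from the
per-memcg lru_lock series this patch applies on top of.

#include <linux/memcontrol.h>
#include <linux/pagevec.h>

/* Hypothetical caller sketch; not part of this patch. */
static void example_move_pages(struct pagevec *pvec,
			       void (*move_fn)(struct page *page,
					       struct lruvec *lruvec))
{
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];

		/*
		 * Fast path: if the lruvec lock we already hold covers
		 * this page (same pgdat and memcg), keep holding it.
		 * With this patch that check reduces to the pointer
		 * comparisons in lruvec_holds_page_lru_lock() and needs
		 * no RCU read-side section or memcg lookup.
		 */
		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);

		move_fn(page, lruvec);
	}

	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);
}

The design point is that the per-page check in the relock helpers no
longer walks to the memory cgroup under rcu_read_lock(); it only
compares the lruvec's pgdat and memcg pointers against the page's own,
and skips even that when no lock is held yet.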