From patchwork Wed Nov 24 15:19:15 2021
X-Patchwork-Submitter: Hao Lee
X-Patchwork-Id: 12637081
Date: Wed, 24 Nov 2021 15:19:15 +0000
From: Hao Lee
To: linux-mm@kvack.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
    shakeelb@google.com, haolee.swjtu@gmail.com, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH] mm: reduce spinlock contention in release_pages()
Message-ID: <20211124151915.GA6163@haolee.io>
When several tasks exit simultaneously, a large number of pages is
released at once, which can cause severe contention on the LRU
spinlock. Other tasks running on the same core are seriously affected.
Yield the CPU when the lock cannot be acquired, instead of spinning, to
mitigate this problem.

Signed-off-by: Hao Lee
---
 include/linux/memcontrol.h | 26 ++++++++++++++++++++++++++
 mm/memcontrol.c            | 12 ++++++++++++
 mm/swap.c                  |  8 +++++++-
 3 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0c5c403f4be6..b06a5bcfd8f6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -800,6 +800,8 @@ struct lruvec *folio_lruvec_lock(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 						unsigned long *flags);
+struct lruvec *folio_lruvec_trylock_irqsave(struct folio *folio,
+						unsigned long *flags);
 
 #ifdef CONFIG_DEBUG_VM
 void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
@@ -1313,6 +1315,17 @@ static inline struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 	return &pgdat->__lruvec;
 }
 
+static inline struct lruvec *folio_lruvec_trylock_irqsave(struct folio *folio,
+		unsigned long *flagsp)
+{
+	struct pglist_data *pgdat = folio_pgdat(folio);
+
+	if (spin_trylock_irqsave(&pgdat->__lruvec.lru_lock, *flagsp))
+		return &pgdat->__lruvec;
+	else
+		return NULL;
+}
+
 static inline struct mem_cgroup *
 mem_cgroup_iter(struct mem_cgroup *root,
 		struct mem_cgroup *prev,
@@ -1598,6 +1611,19 @@ static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
 	return folio_lruvec_lock_irqsave(folio, flags);
 }
 
+static inline struct lruvec *folio_lruvec_tryrelock_irqsave(struct folio *folio,
+		struct lruvec *locked_lruvec, unsigned long *flags)
+{
+	if (locked_lruvec) {
+		if (folio_matches_lruvec(folio, locked_lruvec))
+			return locked_lruvec;
+
+		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+	}
+
+	return folio_lruvec_trylock_irqsave(folio, flags);
+}
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 781605e92015..b60ba54e2337 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1261,6 +1261,18 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 	return lruvec;
 }
 
+struct lruvec *folio_lruvec_trylock_irqsave(struct folio *folio,
+					unsigned long *flags)
+{
+	struct lruvec *lruvec = folio_lruvec(folio);
+
+	if (spin_trylock_irqsave(&lruvec->lru_lock, *flags)) {
+		lruvec_memcg_debug(lruvec, folio);
+		return lruvec;
+	}
+
+	return NULL;
+}
 /**
  * mem_cgroup_update_lru_size - account for adding or removing an lru page
  * @lruvec: mem_cgroup per zone lru vector
diff --git a/mm/swap.c b/mm/swap.c
index e8c9dc6d0377..91850d51a5a5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -960,8 +960,14 @@ void release_pages(struct page **pages, int nr)
 		if (PageLRU(page)) {
 			struct lruvec *prev_lruvec = lruvec;
 
-			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
+retry:
+			lruvec = folio_lruvec_tryrelock_irqsave(folio, lruvec,
 							&flags);
+			if (!lruvec) {
+				cond_resched();
+				goto retry;
+			}
+
 			if (prev_lruvec != lruvec)
 				lock_batch = 0;