From patchwork Tue Mar 22 21:45:44 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12789226
Date: Tue, 22 Mar 2022 14:45:44 -0700
From: Andrew Morton
To: songmuchun@bytedance.com, shakeelb@google.com, roman.gushchin@linux.dev,
 mhocko@suse.com, hannes@cmpxchg.org, longman@redhat.com,
 akpm@linux-foundation.org, patches@lists.linux.dev, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 142/227] mm/list_lru: optimize memcg_reparent_list_lru_node()
Message-Id: <20220322214545.0AF3BC340EC@smtp.kernel.org>

From: Waiman Long
Subject: mm/list_lru: optimize memcg_reparent_list_lru_node()

Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node() to be
race free"), we are tracking the total number of lru
entries in a list_lru_node in its nr_items field.  In the case of
memcg_reparent_list_lru_node(), there is nothing to be done if nr_items is
0.  We don't even need to take the nlru->lock, as no new lru entry could be
added by a racing list_lru_add() to the draining src_idx memcg at this
point.

On systems that serve a lot of containers, there can be thousands of
list_lrus present because each container may mount its own
container-specific filesystems.  As a typical container uses only a few
cpus, it is likely that only the list_lru_node that contains those cpus
will be utilized while the rest may be empty.  In other words, there can be
a lot of list_lru_nodes with 0 nr_items.

By skipping the lock/unlock operation and the load of a cacheline from
memcg_lrus, a sizeable number of cpu cycles can be saved.  That can be
substantial if we are talking about thousands of list_lru_nodes with 0
nr_items.

Link: https://lkml.kernel.org/r/20220309144000.1470138-1-longman@redhat.com
Signed-off-by: Waiman Long
Reviewed-by: Roman Gushchin
Cc: Muchun Song
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shakeel Butt
Signed-off-by: Andrew Morton
---

 mm/list_lru.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/list_lru.c~mm-list_lru-optimize-memcg_reparent_list_lru_node
+++ a/mm/list_lru.c
@@ -395,6 +395,12 @@ static void memcg_reparent_list_lru_node
 	struct list_lru_one *src, *dst;
 
 	/*
+	 * If there is no lru entry in this nlru, we can skip it immediately.
+	 */
+	if (!READ_ONCE(nlru->nr_items))
+		return;
+
+	/*
 	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
 	 * we have to use IRQ-safe primitives here to avoid deadlock.
 	 */
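
For readers less familiar with the list_lru internals, here is a minimal
userspace sketch of the fast-path pattern the patch applies: do an
unlocked (READ_ONCE-style) load of the per-node item count and skip the
lock entirely when it is zero.  The names below (lru_node, reparent_node,
READ_ONCE_LONG) are simplified stand-ins for illustration only, not the
kernel's real list_lru code; build with -pthread.

#include <pthread.h>
#include <stdio.h>

struct lru_node {
	pthread_mutex_t lock;
	long nr_items;			/* total entries on this node */
};

/* Stand-in for the kernel's READ_ONCE(): a single volatile load. */
#define READ_ONCE_LONG(x)	(*(volatile long *)&(x))

static void reparent_node(struct lru_node *nlru)
{
	/*
	 * Fast path: an empty node has nothing to reparent, and no new
	 * entry can show up on the draining source list, so return
	 * without touching the lock.
	 */
	if (!READ_ONCE_LONG(nlru->nr_items))
		return;

	pthread_mutex_lock(&nlru->lock);
	/* ... splice entries from the source list to the destination ... */
	pthread_mutex_unlock(&nlru->lock);
}

int main(void)
{
	struct lru_node node = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.nr_items = 0,
	};

	reparent_node(&node);		/* empty node: returns without locking */
	node.nr_items = 3;
	reparent_node(&node);		/* non-empty node: takes the lock */
	printf("done\n");
	return 0;
}

The point of the pattern is that the unlocked read is only used to decide
whether the slow path is worth entering at all; correctness still relies on
the lock taken in the non-empty case.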