From patchwork Tue Apr 16 17:51:33 2024
X-Patchwork-Submitter: Jesper Dangaard Brouer
X-Patchwork-Id: 13632262
Subject: [PATCH v1 2/3] cgroup/rstat: convert cgroup_rstat_lock back to mutex
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: tj@kernel.org, hannes@cmpxchg.org, lizefan.x@bytedance.com, cgroups@vger.kernel.org, yosryahmed@google.com, longman@redhat.com
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, shakeel.butt@linux.dev, kernel-team@cloudflare.com, Arnaldo Carvalho de Melo, Sebastian Andrzej Siewior, mhocko@kernel.org
Date: Tue, 16 Apr 2024 19:51:33 +0200
Message-ID: <171328989335.3930751.3091577850420501533.stgit@firesoul>
In-Reply-To: <171328983017.3930751.9484082608778623495.stgit@firesoul>
References: <171328983017.3930751.9484082608778623495.stgit@firesoul>
User-Agent: StGit/1.5

Since kernel v4.18, cgroup_rstat_lock has been an IRQ-disabling spinlock, as introduced by commit 0fa294fb1985 ("cgroup:
Replace cgroup_rstat_mutex with a spinlock"). Despite efforts in cgroup_rstat_flush_locked() to yield the lock when necessary during the collection of per-CPU stats, this approach has led to several scaling issues observed in production environments. Holding this IRQ lock has caused starvation of other critical kernel functions, such as softirq (e.g., timers and the network stack). Although kernel v6.8 introduced optimizations in this area, we continue to observe cases in production where the spinlock is held for 64-128 ms.

This patch converts cgroup_rstat_lock back to being a mutex. This change is made possible by the significant effort of Yosry Ahmed to eliminate all atomic-context use-cases through multiple commits, ending in 0a2dc6ac3329 ("cgroup: remove cgroup_rstat_flush_atomic()"), included in kernel v6.5.

After this patch, lock contention will be less obvious: converting the lock to a mutex avoids multiple CPUs spinning while waiting for it, but it does not remove the contention itself. It is recommended to use the tracepoints to diagnose this.
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
 kernel/cgroup/rstat.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index ff68c904e647..a90d68a7c27f 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -9,7 +9,7 @@
 #include

-static DEFINE_SPINLOCK(cgroup_rstat_lock);
+static DEFINE_MUTEX(cgroup_rstat_lock);
 static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);

 static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
@@ -238,10 +238,10 @@ static inline void __cgroup_rstat_lock(struct cgroup *cgrp, int cpu_in_loop)
 {
 	bool contended;

-	contended = !spin_trylock_irq(&cgroup_rstat_lock);
+	contended = !mutex_trylock(&cgroup_rstat_lock);
 	if (contended) {
 		trace_cgroup_rstat_lock_contended(cgrp, cpu_in_loop, contended);
-		spin_lock_irq(&cgroup_rstat_lock);
+		mutex_lock(&cgroup_rstat_lock);
 	}
 	trace_cgroup_rstat_locked(cgrp, cpu_in_loop, contended);
 }
@@ -250,7 +250,7 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
 	__releases(&cgroup_rstat_lock)
 {
 	trace_cgroup_rstat_unlock(cgrp, cpu_in_loop, false);
-	spin_unlock_irq(&cgroup_rstat_lock);
+	mutex_unlock(&cgroup_rstat_lock);
 }

 /* see cgroup_rstat_flush() */
@@ -278,7 +278,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
 	}

 	/* play nice and yield if necessary */
-	if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
+	if (need_resched()) {
 		__cgroup_rstat_unlock(cgrp, cpu);
 		if (!cond_resched())
 			cpu_relax();