From patchwork Fri Oct 25 01:23:00 2024
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 13849919
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
 Hugh Dickins, Yosry Ahmed, linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-doc@vger.kernel.org, Meta kernel team
Subject: [PATCH v1 3/6] memcg-v1: no need for memcg locking for dirty tracking
Date: Thu, 24 Oct 2024 18:23:00 -0700
Message-ID: <20241025012304.2473312-4-shakeel.butt@linux.dev>
In-Reply-To: <20241025012304.2473312-1-shakeel.butt@linux.dev>
References: <20241025012304.2473312-1-shakeel.butt@linux.dev>
MIME-Version: 1.0
During the era of memcg charge migration, the kernel had to make sure that
dirty stat updates did not race with charge migration; otherwise it could
update the dirty stats of the wrong memcg. Now that memcg charge migration
is deprecated, there is no longer any race for dirty stat updates, and the
previous locking can be removed.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
Reviewed-by: Roman Gushchin
---
 fs/buffer.c         |  5 -----
 mm/page-writeback.c | 16 +++-------------
 2 files changed, 3 insertions(+), 18 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 1fc9a50def0b..88e765b0699f 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -736,15 +736,12 @@ bool block_dirty_folio(struct address_space *mapping, struct folio *folio)
 	 * Lock out page's memcg migration to keep PageDirty
 	 * synchronized with per-memcg dirty page counters.
 	 */
-	folio_memcg_lock(folio);
 	newly_dirty = !folio_test_set_dirty(folio);
 	spin_unlock(&mapping->i_private_lock);
 
 	if (newly_dirty)
 		__folio_mark_dirty(folio, mapping, 1);
 
-	folio_memcg_unlock(folio);
-
 	if (newly_dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 
@@ -1194,13 +1191,11 @@ void mark_buffer_dirty(struct buffer_head *bh)
 		struct folio *folio = bh->b_folio;
 		struct address_space *mapping = NULL;
 
-		folio_memcg_lock(folio);
 		if (!folio_test_set_dirty(folio)) {
 			mapping = folio->mapping;
 			if (mapping)
 				__folio_mark_dirty(folio, mapping, 0);
 		}
-		folio_memcg_unlock(folio);
 		if (mapping)
 			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 1d7179aba8e3..a76a73529fd9 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2743,8 +2743,6 @@ EXPORT_SYMBOL(noop_dirty_folio);
 /*
  * Helper function for set_page_dirty family.
  *
- * Caller must hold folio_memcg_lock().
- *
  * NOTE: This relies on being atomic wrt interrupts.
  */
 static void folio_account_dirtied(struct folio *folio,
@@ -2777,7 +2775,6 @@ static void folio_account_dirtied(struct folio *folio,
 /*
  * Helper function for deaccounting dirty page without writeback.
  *
- * Caller must hold folio_memcg_lock().
  */
 void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
 {
@@ -2795,9 +2792,8 @@ void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
  * If warn is true, then emit a warning if the folio is not uptodate and has
  * not been truncated.
  *
- * The caller must hold folio_memcg_lock().  It is the caller's
- * responsibility to prevent the folio from being truncated while
- * this function is in progress, although it may have been truncated
+ * It is the caller's responsibility to prevent the folio from being truncated
+ * while this function is in progress, although it may have been truncated
  * before this function is called. Most callers have the folio locked.
  * A few have the folio blocked from truncation through other means (e.g.
  * zap_vma_pages() has it mapped and is holding the page table lock).
@@ -2841,14 +2837,10 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
  */
 bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	folio_memcg_lock(folio);
-	if (folio_test_set_dirty(folio)) {
-		folio_memcg_unlock(folio);
+	if (folio_test_set_dirty(folio))
 		return false;
-	}
 
 	__folio_mark_dirty(folio, mapping, !folio_test_private(folio));
-	folio_memcg_unlock(folio);
 
 	if (mapping->host) {
 		/* !PageAnon && !swapper_space */
@@ -2975,14 +2967,12 @@ void __folio_cancel_dirty(struct folio *folio)
 		struct bdi_writeback *wb;
 		struct wb_lock_cookie cookie = {};
 
-		folio_memcg_lock(folio);
 		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 
 		if (folio_test_clear_dirty(folio))
 			folio_account_cleaned(folio, wb);
 
 		unlocked_inode_to_wb_end(inode, &cookie);
-		folio_memcg_unlock(folio);
 	} else {
 		folio_clear_dirty(folio);
 	}
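A note on the resulting pattern: once charge migration cannot move a folio
between memcgs, the atomic test-and-set of the dirty bit alone is enough to
serialize accounting, with no folio_memcg_lock()/unlock() bracket. Below is a
minimal user-space C sketch of that shape, not kernel code: mark_dirty() and
memcg_dirty_stat are hypothetical stand-ins for folio_test_set_dirty() and the
per-memcg dirty counter.

```c
#include <stdatomic.h>

/* Toy model: the dirty bit is one atomic flag, standing in for the
 * already-atomic folio_test_set_dirty(). With charge migration gone,
 * the folio's memcg cannot change underneath us, so no extra lock is
 * needed around the test-and-set plus the stat update. */
static atomic_flag dirty = ATOMIC_FLAG_INIT;
static long memcg_dirty_stat;	/* stand-in for the per-memcg counter */

/* Post-patch shape of filemap_dirty_folio(): test-and-set, then account,
 * with no lock/unlock pair around the sequence. Returns 1 if the folio
 * was newly dirtied, 0 if it was already dirty. */
static int mark_dirty(void)
{
	if (atomic_flag_test_and_set(&dirty))
		return 0;		/* already dirty: nothing to account */
	memcg_dirty_stat++;		/* folio_account_dirtied() analogue */
	return 1;
}
```

Because atomic_flag_test_and_set() returns the previous value, only the one
caller that flips the bit from clear to set performs the accounting, so the
counter is bumped exactly once per dirtying, which is the invariant the removed
locking used to protect against migration.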