From patchwork Fri Jul 19 10:28:39 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13737164
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
    shakeel.butt@linux.dev, muchun.song@linux.dev, cgroups@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v7 1/3] memcontrol: define root_mem_cgroup for CONFIG_MEMCG=n cases
Date: Fri, 19 Jul 2024 19:58:39 +0930
Message-ID: <2050f8a1bc181a9aaf01e0866e230e23216000f4.1721384771.git.wqu@suse.com>
X-Mailer: git-send-email 2.45.2
There is an incoming btrfs patchset which will use @root_mem_cgroup as
the active cgroup when attaching metadata folios to its internal btree
inode, so that btrfs can skip the possibly costly charge for an internal
inode that is only accessible by btrfs itself.

However @root_mem_cgroup is not always defined (it is not defined for
the CONFIG_MEMCG=n case), thus all such callers would need extra
handling for the different CONFIG_MEMCG settings.

So add a special macro definition of root_mem_cgroup which is always
NULL for CONFIG_MEMCG=n.  The advantage of this, other than avoiding a
real pointer definition, is that we do not waste global data section
space on such a pointer.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 include/linux/memcontrol.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 030d34e9d117..ae5c78719454 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -346,6 +346,12 @@ enum page_memcg_data_flags {
 
 #define __FIRST_OBJEXT_FLAG	(1UL << 0)
 
+/*
+ * For CONFIG_MEMCG=n case, still define a root_mem_cgroup, but that will
+ * always be NULL and not taking any global data section space.
+ */
+#define root_mem_cgroup (NULL)
+
 #endif /* CONFIG_MEMCG */
 
 enum objext_flags {
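For illustration, below is a minimal, hypothetical caller sketch (not
part of this patch) of the call-site shape the macro enables; patch 2 of
this series applies the same pattern to the btrfs btree inode.  It
assumes the existing set_active_memcg() helper from linux/sched/mm.h,
whose CONFIG_MEMCG=n variant is a no-op stub, so the code below builds
the same way whether or not memcg is configured and needs no #ifdef at
the call site.

/*
 * Hypothetical example: add a folio to an internal inode's mapping
 * without charging the current task's memory cgroup.
 * Assumes <linux/pagemap.h>, <linux/memcontrol.h> and <linux/sched/mm.h>.
 */
static int example_add_internal_folio(struct address_space *mapping,
				      struct folio *folio, pgoff_t index)
{
	struct mem_cgroup *old_memcg;
	int ret;

	/* NULL on CONFIG_MEMCG=n builds thanks to the macro above. */
	old_memcg = set_active_memcg(root_mem_cgroup);
	ret = filemap_add_folio(mapping, folio, index, GFP_NOFS);
	set_active_memcg(old_memcg);
	return ret;
}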
From patchwork Fri Jul 19 10:28:40 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13737165
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
    shakeel.butt@linux.dev, muchun.song@linux.dev, cgroups@vger.kernel.org,
    linux-mm@kvack.org, Michal Hocko, Vlastimil Babka
Subject: [PATCH v7 2/3] btrfs: always uses root memcgroup for filemap_add_folio()
Date: Fri, 19 Jul 2024 19:58:40 +0930
Message-ID: <6a9ba2c8e70c7b5c4316404612f281a031f847da.1721384771.git.wqu@suse.com>
X-Mailer: git-send-email 2.45.2
[BACKGROUND]
The function filemap_add_folio() charges the memory cgroup, as we assume
all page cache is accessible by user space processes and thus needs the
cgroup accounting.

However btrfs is a special case: it has a very large amount of metadata,
thanks to its support of data csums (by default 4 bytes per 4K of data,
and as large as 32 bytes per 4K of data).  For example, 1 TiB of data
needs 1 GiB of csum items with the default 4-byte csum, or 8 GiB with
32-byte csums.  This means btrfs has to keep its metadata pages in the
page cache, to take advantage of both the caching and the reclaim
abilities of the filemap.

This has a small problem: all btrfs metadata pages have to go through
the memory cgroup charge, even though those metadata pages are not
accessible by user space at all, and the charging can introduce extra
latency if a memory limit is set.

Btrfs currently uses the __GFP_NOFAIL flag as a workaround for this
cgroup charge situation, so that metadata pages are not really limited
by the memory cgroup.

[ENHANCEMENT]
Instead of relying on __GFP_NOFAIL to avoid charge failure, use the root
memory cgroup to attach metadata pages.

With the root memory cgroup, we directly skip the charging part, and
only rely on __GFP_NOFAIL for the real memory allocation part.
Suggested-by: Michal Hocko
Suggested-by: Vlastimil Babka (SUSE)
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/extent_io.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index aa7f8148cd0d..cfeed7673009 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2971,6 +2971,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 	struct btrfs_fs_info *fs_info = eb->fs_info;
 	struct address_space *mapping = fs_info->btree_inode->i_mapping;
+	struct mem_cgroup *old_memcg;
 	const unsigned long index = eb->start >> PAGE_SHIFT;
 	struct folio *existing_folio = NULL;
 	int ret;
@@ -2981,8 +2982,17 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 	ASSERT(eb->folios[i]);
 
 retry:
+	/*
+	 * Btree inode is a btrfs internal inode, and not exposed to any
+	 * user.
+	 * Furthermore we do not want any cgroup limits on this inode.
+	 * So we always use root_mem_cgroup as our active memcg when attaching
+	 * the folios.
+	 */
+	old_memcg = set_active_memcg(root_mem_cgroup);
 	ret = filemap_add_folio(mapping, eb->folios[i], index + i,
 				GFP_NOFS | __GFP_NOFAIL);
+	set_active_memcg(old_memcg);
 	if (!ret)
 		goto finish;
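One design note on the hunk above (illustration only, not part of the
series): as far as I understand the active-memcg override, any other
charged allocation made while it is set would also be attributed to the
root cgroup, so the set/restore pair should stay as tight as possible
around the filemap_add_folio() call.  If more btree-inode call sites
ever need the same treatment, the pair could be factored into a small,
hypothetical helper like the sketch below.

/*
 * Hypothetical helper, not in this patch: attach a metadata folio with
 * the charge going to the root cgroup instead of the current task's.
 */
static int btrfs_attach_metadata_folio(struct address_space *mapping,
				       struct folio *folio, pgoff_t index,
				       gfp_t gfp)
{
	struct mem_cgroup *old_memcg = set_active_memcg(root_mem_cgroup);
	int ret = filemap_add_folio(mapping, folio, index, gfp);

	set_active_memcg(old_memcg);
	return ret;
}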
From patchwork Fri Jul 19 10:28:41 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13737166
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
    shakeel.butt@linux.dev, muchun.song@linux.dev, cgroups@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v7 3/3] btrfs: prefer to allocate larger folio for metadata
Date: Fri, 19 Jul 2024 19:58:41 +0930
X-Mailer: git-send-email 2.45.2
For btrfs metadata, high order folios are only utilized when all of the
following conditions are met:

- The extent buffer start is aligned to nodesize.
  This should be the common case for any btrfs created in the last 5
  years.

- The nodesize is larger than page size.
  Otherwise there is no need to use larger folios at all.

- The MM layer can fulfill our folio allocation request.

- The larger folio must exactly cover the extent buffer.
  No larger, no smaller, it must be an exact fit.  This is to make the
  extent buffer accessors much easier: they only need to check the first
  slot in eb->folios[] to determine their access unit (per-page handling,
  or one large folio covering the whole eb).

There is another small blockage: the filemap APIs can not guarantee the
folio size.  For example, by default we use a 16K nodesize on x86_64,
meaning the larger folio we expect is order 2 (size 16K).  We can not
accept two order 1 (8K sized) folios, nor a fallback to four order 0
(page sized) folios.

So here we use a different workaround: allocate an order 2 folio first,
then attach it to the filemap of the btree inode.

Thus the attach attempt of the eb folios can have several results:

1) We can attach the pre-allocated eb folio to the filemap
   This is the simplest and hottest path; we just continue our work
   setting up the extent buffer.

2) There is an existing folio in the filemap

 2.0) Subpage case
      We reuse the folio no matter what, since subpage handles
      folio->private differently (a bitmap instead of a pointer to an
      existing eb).

 2.1) There is already a live extent buffer attached to the filemap folio
      This should be a more or less hot path; we grab the existing eb and
      free the current one.

 2.2) No live eb

  2.2.1) The filemap folio is larger than the eb folio
         This is the better case; we can reuse the filemap folio, but we
         need to clean up all the pre-allocated folios of the new eb
         before reusing it.  Later code should take the folio size change
         into consideration.
  2.2.2) The filemap folio is the same size as the eb folio
         We just free the current folio and reuse the filemap one.  No
         other special handling is needed.

  2.2.3) The filemap folio is smaller than the eb folio
         This is the trickiest corner case; we can not easily replace the
         folio in the filemap with our eb folio.  Thus we return -EAGAIN
         to inform the caller to retry with order 0 (of course with our
         larger folio freed).

Otherwise all the needed infrastructure is already here; we only need to
try allocating a larger folio as our first attempt in
alloc_eb_folio_array().

For now, the higher order allocation is only a preferred attempt for
debug builds, until we have enough test coverage to push it to end
users.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/extent_io.c | 102 ++++++++++++++++++++++++++++---------------
 1 file changed, 68 insertions(+), 34 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index cfeed7673009..d7824644d593 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -719,12 +719,28 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
  *
  * For now, the folios populated are always in order 0 (aka, single page).
  */
-static int alloc_eb_folio_array(struct extent_buffer *eb, bool nofail)
+static int alloc_eb_folio_array(struct extent_buffer *eb, int order,
+				bool nofail)
 {
 	struct page *page_array[INLINE_EXTENT_BUFFER_PAGES] = { 0 };
 	int num_pages = num_extent_pages(eb);
 	int ret;
 
+	if (order) {
+		gfp_t gfp;
+
+		if (order > 0)
+			gfp = GFP_NOFS | __GFP_NORETRY | __GFP_NOWARN;
+		else
+			gfp = nofail ? (GFP_NOFS | __GFP_NOFAIL) : GFP_NOFS;
+		eb->folios[0] = folio_alloc(gfp, order);
+		if (likely(eb->folios[0])) {
+			eb->folio_size = folio_size(eb->folios[0]);
+			eb->folio_shift = folio_shift(eb->folios[0]);
+			return 0;
+		}
+		/* Fallback to 0 order (single page) allocation. */
+	}
 	ret = btrfs_alloc_page_array(num_pages, page_array, nofail);
 	if (ret < 0)
 		return ret;
@@ -2707,7 +2723,7 @@ struct extent_buffer *btrfs_clone_extent_buffer(const struct extent_buffer *src)
 	 */
 	set_bit(EXTENT_BUFFER_UNMAPPED, &new->bflags);
 
-	ret = alloc_eb_folio_array(new, false);
+	ret = alloc_eb_folio_array(new, 0, false);
 	if (ret) {
 		btrfs_release_extent_buffer(new);
 		return NULL;
@@ -2740,7 +2756,7 @@ struct extent_buffer *__alloc_dummy_extent_buffer(struct btrfs_fs_info *fs_info,
 	if (!eb)
 		return NULL;
 
-	ret = alloc_eb_folio_array(eb, false);
+	ret = alloc_eb_folio_array(eb, 0, false);
 	if (ret)
 		goto err;
 
@@ -2955,6 +2971,14 @@ static int check_eb_alignment(struct btrfs_fs_info *fs_info, u64 start)
 	return 0;
 }
 
+static void free_all_eb_folios(struct extent_buffer *eb)
+{
+	for (int i = 0; i < INLINE_EXTENT_BUFFER_PAGES; i++) {
+		if (eb->folios[i])
+			folio_put(eb->folios[i]);
+		eb->folios[i] = NULL;
+	}
+}
 
 /*
  * Return 0 if eb->folios[i] is attached to btree inode successfully.
@@ -2974,6 +2998,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 	struct mem_cgroup *old_memcg;
 	const unsigned long index = eb->start >> PAGE_SHIFT;
 	struct folio *existing_folio = NULL;
+	const int eb_order = folio_order(eb->folios[0]);
 	int ret;
 
 	ASSERT(found_eb_ret);
@@ -3003,15 +3028,6 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 		goto retry;
 	}
 
-	/* For now, we should only have single-page folios for btree inode. */
-	ASSERT(folio_nr_pages(existing_folio) == 1);
-
-	if (folio_size(existing_folio) != eb->folio_size) {
-		folio_unlock(existing_folio);
-		folio_put(existing_folio);
-		return -EAGAIN;
-	}
-
 finish:
 	spin_lock(&mapping->i_private_lock);
 	if (existing_folio && fs_info->nodesize < PAGE_SIZE) {
@@ -3020,6 +3036,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 		eb->folios[i] = existing_folio;
 	} else if (existing_folio) {
 		struct extent_buffer *existing_eb;
+		int existing_order = folio_order(existing_folio);
 
 		existing_eb = grab_extent_buffer(fs_info,
 						 folio_page(existing_folio, 0));
@@ -3031,9 +3048,34 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 			folio_put(existing_folio);
 			return 1;
 		}
-		/* The extent buffer no longer exists, we can reuse the folio. */
-		__free_page(folio_page(eb->folios[i], 0));
-		eb->folios[i] = existing_folio;
+		if (existing_order > eb_order) {
+			/*
+			 * The existing one has higher order, we need to drop
+			 * all eb folios before reusing it.
+			 * And this should only happen for the first folio.
+			 */
+			ASSERT(i == 0);
+			free_all_eb_folios(eb);
+			eb->folios[i] = existing_folio;
+		} else if (existing_order == eb_order) {
+			/*
+			 * Can safely reuse the filemap folio, just
+			 * release the eb one.
+			 */
+			folio_put(eb->folios[i]);
+			eb->folios[i] = existing_folio;
+		} else {
+			/*
+			 * The existing one has lower order.
+			 *
+			 * Just retry and fallback to order 0.
+			 */
+			ASSERT(i == 0);
+			folio_unlock(existing_folio);
+			folio_put(existing_folio);
+			spin_unlock(&mapping->i_private_lock);
+			return -EAGAIN;
+		}
 	}
 	eb->folio_size = folio_size(eb->folios[i]);
 	eb->folio_shift = folio_shift(eb->folios[i]);
@@ -3066,6 +3108,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 	u64 lockdep_owner = owner_root;
 	bool page_contig = true;
 	int uptodate = 1;
+	int order = 0;
 	int ret;
 
 	if (check_eb_alignment(fs_info, start))
@@ -3082,6 +3125,10 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 	btrfs_warn_32bit_limit(fs_info);
 #endif
 
+	if (IS_ENABLED(CONFIG_BTRFS_DEBUG) && fs_info->nodesize > PAGE_SIZE &&
+	    IS_ALIGNED(start, fs_info->nodesize))
+		order = ilog2(fs_info->nodesize >> PAGE_SHIFT);
+
 	eb = find_extent_buffer(fs_info, start);
 	if (eb)
 		return eb;
@@ -3116,7 +3163,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 
 reallocate:
 	/* Allocate all pages first. */
-	ret = alloc_eb_folio_array(eb, true);
+	ret = alloc_eb_folio_array(eb, order, true);
 	if (ret < 0) {
 		btrfs_free_subpage(prealloc);
 		goto out;
@@ -3134,26 +3181,12 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 		}
 
 		/*
-		 * TODO: Special handling for a corner case where the order of
-		 * folios mismatch between the new eb and filemap.
-		 *
-		 * This happens when:
-		 *
-		 * - the new eb is using higher order folio
-		 *
-		 * - the filemap is still using 0-order folios for the range
-		 *   This can happen at the previous eb allocation, and we don't
-		 *   have higher order folio for the call.
-		 *
-		 * - the existing eb has already been freed
-		 *
-		 * In this case, we have to free the existing folios first, and
-		 * re-allocate using the same order.
-		 * Thankfully this is not going to happen yet, as we're still
-		 * using 0-order folios.
+		 * Got a corner case where the existing folio is lower order,
+		 * fallback to 0 order and retry.
 		 */
 		if (unlikely(ret == -EAGAIN)) {
-			ASSERT(0);
+			order = 0;
+			free_all_eb_folios(eb);
 			goto reallocate;
 		}
 		attached++;
@@ -3164,6 +3197,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 		 * and free the allocated page.
 		 */
 		folio = eb->folios[i];
+		num_folios = num_extent_folios(eb);
 		WARN_ON(btrfs_folio_test_dirty(fs_info, folio, eb->start, eb->len));
 		/*