From patchwork Mon Mar 11 01:07:41 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10846539
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] slub: Add comments to endif pre-processor macros
Date: Mon, 11 Mar 2019 12:07:41 +1100
Message-Id: <20190311010744.5862-2-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190311010744.5862-1-tobin@kernel.org>
References: <20190311010744.5862-1-tobin@kernel.org>
MIME-Version: 1.0

The SLUB allocator makes heavy use of ifdef/endif pre-processor macros.
The pairing of these statements is at times hard to follow, e.g. if the
pair are further than a screen apart or if there are nested pairs.
We can reduce cognitive load by adding a comment to the endif statement
of form

	#ifdef CONFIG_FOO
	...
	#endif /* CONFIG_FOO */

Add comments to endif pre-processor macros if ifdef/endif pair is not
immediately apparent.

Signed-off-by: Tobin C. Harding
---
 mm/slub.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1b08fbcb7e61..b282e22885cd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1951,7 +1951,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 			}
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
-#endif
+#endif /* CONFIG_NUMA */
 	return NULL;
 }
 
@@ -2249,7 +2249,7 @@ static void unfreeze_partials(struct kmem_cache *s,
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
-#endif
+#endif /* CONFIG_SLUB_CPU_PARTIAL */
 }
 
 /*
@@ -2308,7 +2308,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 			local_irq_restore(flags);
 	}
 	preempt_enable();
-#endif
+#endif /* CONFIG_SLUB_CPU_PARTIAL */
 }
 
 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
@@ -2813,7 +2813,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif
+#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -3845,7 +3845,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif
+#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4063,7 +4063,7 @@ void __kmemcg_cache_deactivate(struct kmem_cache *s)
 	 */
 	slab_deactivate_memcg_cache_rcu_sched(s, kmemcg_cache_deact_after_rcu);
 }
-#endif
+#endif /* CONFIG_MEMCG */
 
 static int slab_mem_going_offline_callback(void *arg)
 {
@@ -4696,7 +4696,7 @@ static int list_locations(struct kmem_cache *s, char *buf,
 		len += sprintf(buf, "No data\n");
 	return len;
 }
-#endif
+#endif /* CONFIG_SLUB_DEBUG */
 
 #ifdef SLUB_RESILIENCY_TEST
 static void __init resiliency_test(void)
@@ -4756,7 +4756,7 @@ static void __init resiliency_test(void)
 #ifdef CONFIG_SYSFS
 static void resiliency_test(void) {};
 #endif
-#endif
+#endif /* SLUB_RESILIENCY_TEST */
 
 #ifdef CONFIG_SYSFS
 enum slab_stat_type {
@@ -5413,7 +5413,7 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
 STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
 STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
-#endif
+#endif /* CONFIG_SLUB_STATS */
 
 static struct attribute *slab_attrs[] = {
 	&slab_size_attr.attr,
@@ -5614,7 +5614,7 @@ static void memcg_propagate_slab_attrs(struct kmem_cache *s)
 	if (buffer)
 		free_page((unsigned long)buffer);
-#endif
+#endif /* CONFIG_MEMCG */
 }
 
 static void kmem_cache_release(struct kobject *k)

From patchwork Mon Mar 11 01:07:42 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10846541
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] slub: Use slab_list instead of lru
Date: Mon, 11 Mar 2019 12:07:42 +1100
Message-Id: <20190311010744.5862-3-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190311010744.5862-1-tobin@kernel.org>
References: <20190311010744.5862-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs. We
have a list in the page structure (slab_list) that can be used for this
purpose. Doing so makes the code cleaner since we are not overloading
the lru list.

Use the slab_list instead of the lru list for maintaining lists of
slabs.

Signed-off-by: Tobin C. Harding
---
 mm/slub.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b282e22885cd..d692b5e0163d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1023,7 +1023,7 @@ static void add_full(struct kmem_cache *s,
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_add(&page->lru, &n->full);
+	list_add(&page->slab_list, &n->full);
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
@@ -1032,7 +1032,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 }
 
 /* Tracking of the number of slabs for debugging purposes */
@@ -1773,9 +1773,9 @@ __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
-		list_add_tail(&page->lru, &n->partial);
+		list_add_tail(&page->slab_list, &n->partial);
 	else
-		list_add(&page->lru, &n->partial);
+		list_add(&page->slab_list, &n->partial);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -1789,7 +1789,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 					struct page *page)
 {
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	n->nr_partial--;
 }
 
@@ -1863,7 +1863,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;
 
 	spin_lock(&n->list_lock);
-	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
 		void *t;
 
 		if (!pfmemalloc_match(page, flags))
@@ -2407,7 +2407,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, lru)
+	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
@@ -3702,10 +3702,10 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, lru) {
+	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
-			list_add(&page->lru, &discard);
+			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
 			"Objects remaining in %s on __kmem_cache_shutdown()");
@@ -3713,7 +3713,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	}
 	spin_unlock_irq(&n->list_lock);
 
-	list_for_each_entry_safe(page, h, &discard, lru)
+	list_for_each_entry_safe(page, h, &discard, slab_list)
 		discard_slab(s, page);
 }
 
@@ -3993,7 +3993,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Note that concurrent frees may occur while we hold the
 		 * list_lock. page->inuse here is the upper limit.
 		 */
-		list_for_each_entry_safe(page, t, &n->partial, lru) {
+		list_for_each_entry_safe(page, t, &n->partial, slab_list) {
 			int free = page->objects - page->inuse;
 
 			/* Do not reread page->inuse */
@@ -4003,10 +4003,10 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			BUG_ON(free <= 0);
 
 			if (free == page->objects) {
-				list_move(&page->lru, &discard);
+				list_move(&page->slab_list, &discard);
 				n->nr_partial--;
 			} else if (free <= SHRINK_PROMOTE_MAX)
-				list_move(&page->lru, promote + free - 1);
+				list_move(&page->slab_list, promote + free - 1);
 		}
 
 		/*
@@ -4019,7 +4019,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
-		list_for_each_entry_safe(page, t, &discard, lru)
+		list_for_each_entry_safe(page, t, &discard, slab_list)
 			discard_slab(s, page);
 
 		if (slabs_node(s, node))
@@ -4211,11 +4211,11 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;
 
-		list_for_each_entry(p, &n->partial, lru)
+		list_for_each_entry(p, &n->partial, slab_list)
 			p->slab_cache = s;
 
 #ifdef CONFIG_SLUB_DEBUG
-		list_for_each_entry(p, &n->full, lru)
+		list_for_each_entry(p, &n->full, slab_list)
 			p->slab_cache = s;
 #endif
 	}
@@ -4432,7 +4432,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	spin_lock_irqsave(&n->list_lock, flags);
 
-	list_for_each_entry(page, &n->partial, lru) {
+	list_for_each_entry(page, &n->partial, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4443,7 +4443,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
 
-	list_for_each_entry(page, &n->full, lru) {
+	list_for_each_entry(page, &n->full, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4639,9 +4639,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 			continue;
 
 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, lru)
+		list_for_each_entry(page, &n->partial, slab_list)
 			process_slab(&t, s, page, alloc, map);
-		list_for_each_entry(page, &n->full, lru)
+		list_for_each_entry(page, &n->full, slab_list)
 			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}

From patchwork Mon Mar 11 01:07:43 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10846543
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] slab: Use slab_list instead of lru
Date: Mon, 11 Mar 2019 12:07:43 +1100
Message-Id: <20190311010744.5862-4-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190311010744.5862-1-tobin@kernel.org>
References: <20190311010744.5862-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs. We
have a list in the page structure (slab_list) that can be used for this
purpose. Doing so makes the code cleaner since we are not overloading
the lru list.

Use the slab_list instead of the lru list for maintaining lists of
slabs.

Signed-off-by: Tobin C. Harding
---
 mm/slab.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 28652e4218e0..09cc64ef9613 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1710,8 +1710,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
 {
 	struct page *page, *n;
 
-	list_for_each_entry_safe(page, n, list, lru) {
-		list_del(&page->lru);
+	list_for_each_entry_safe(page, n, list, slab_list) {
+		list_del(&page->slab_list);
 		slab_destroy(cachep, page);
 	}
 }
@@ -2265,8 +2265,8 @@ static int drain_freelist(struct kmem_cache *cache,
 			goto out;
 		}
 
-		page = list_entry(p, struct page, lru);
-		list_del(&page->lru);
+		page = list_entry(p, struct page, slab_list);
+		list_del(&page->slab_list);
 		n->free_slabs--;
 		n->total_slabs--;
 		/*
@@ -2726,13 +2726,13 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page)
 	if (!page)
 		return;
 
-	INIT_LIST_HEAD(&page->lru);
+	INIT_LIST_HEAD(&page->slab_list);
 	n = get_node(cachep, page_to_nid(page));
 	spin_lock(&n->list_lock);
 	n->total_slabs++;
 	if (!page->active) {
-		list_add_tail(&page->lru, &(n->slabs_free));
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
 		fixup_slab_list(cachep, n, page, &list);
@@ -2841,9 +2841,9 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 				void **list)
 {
 	/* move slabp to correct slabp list: */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (page->active == cachep->num) {
-		list_add(&page->lru, &n->slabs_full);
+		list_add(&page->slab_list, &n->slabs_full);
 		if (OBJFREELIST_SLAB(cachep)) {
 #if DEBUG
 			/* Poisoning will be done without holding the lock */
@@ -2857,7 +2857,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 			page->freelist = NULL;
 		}
 	} else
-		list_add(&page->lru, &n->slabs_partial);
+		list_add(&page->slab_list, &n->slabs_partial);
 }
 
 /* Try to find non-pfmemalloc slab if needed */
@@ -2880,20 +2880,20 @@ static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n,
 	}
 
 	/* Move pfmemalloc slab to the end of list to speed up next search */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (!page->active) {
-		list_add_tail(&page->lru, &n->slabs_free);
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
-		list_add_tail(&page->lru, &n->slabs_partial);
+		list_add_tail(&page->slab_list, &n->slabs_partial);
 
-	list_for_each_entry(page, &n->slabs_partial, lru) {
+	list_for_each_entry(page, &n->slabs_partial, slab_list) {
 		if (!PageSlabPfmemalloc(page))
 			return page;
 	}
 
 	n->free_touched = 1;
-	list_for_each_entry(page, &n->slabs_free, lru) {
+	list_for_each_entry(page, &n->slabs_free, slab_list) {
 		if (!PageSlabPfmemalloc(page)) {
 			n->free_slabs--;
 			return page;
@@ -2908,11 +2908,12 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc)
 	struct page *page;
 
 	assert_spin_locked(&n->list_lock);
-	page = list_first_entry_or_null(&n->slabs_partial, struct page, lru);
+	page = list_first_entry_or_null(&n->slabs_partial, struct page,
+					slab_list);
 	if (!page) {
 		n->free_touched = 1;
 		page = list_first_entry_or_null(&n->slabs_free, struct page,
-						lru);
+						slab_list);
 		if (page)
 			n->free_slabs--;
 	}
@@ -3413,29 +3414,29 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
 		objp = objpp[i];
 
 		page = virt_to_head_page(objp);
-		list_del(&page->lru);
+		list_del(&page->slab_list);
 		check_spinlock_acquired_node(cachep, node);
 		slab_put_obj(cachep, page, objp);
 		STATS_DEC_ACTIVE(cachep);
 
 		/* fixup slab chains */
 		if (page->active == 0) {
-			list_add(&page->lru, &n->slabs_free);
+			list_add(&page->slab_list, &n->slabs_free);
 			n->free_slabs++;
 		} else {
 			/* Unconditionally move a slab to the end of the
 			 * partial list on free - maximum time for the
 			 * other objects to be freed, too.
 			 */
-			list_add_tail(&page->lru, &n->slabs_partial);
+			list_add_tail(&page->slab_list, &n->slabs_partial);
 		}
 	}
 
 	while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) {
 		n->free_objects -= cachep->num;
 
-		page = list_last_entry(&n->slabs_free, struct page, lru);
-		list_move(&page->lru, list);
+		page = list_last_entry(&n->slabs_free, struct page, slab_list);
+		list_move(&page->slab_list, list);
 		n->free_slabs--;
 		n->total_slabs--;
 	}
@@ -3473,7 +3474,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 		int i = 0;
 		struct page *page;
 
-		list_for_each_entry(page, &n->slabs_free, lru) {
+		list_for_each_entry(page, &n->slabs_free, slab_list) {
 			BUG_ON(page->active);
 
 			i++;
@@ -4336,9 +4337,9 @@ static int leaks_show(struct seq_file *m, void *p)
 		check_irq_on();
 		spin_lock_irq(&n->list_lock);
-		list_for_each_entry(page, &n->slabs_full, lru)
+		list_for_each_entry(page, &n->slabs_full, slab_list)
 			handle_slab(x, cachep, page);
-		list_for_each_entry(page, &n->slabs_partial, lru)
+		list_for_each_entry(page, &n->slabs_partial, slab_list)
 			handle_slab(x, cachep, page);
 		spin_unlock_irq(&n->list_lock);
 	}

From patchwork Mon Mar 11 01:07:44 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Tobin C. Harding"
From patchwork Mon Mar 11 01:07:44 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10846545
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christoph Lameter, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] slob: Use slab_list instead of lru
Date: Mon, 11 Mar 2019 12:07:44 +1100
Message-Id: <20190311010744.5862-5-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190311010744.5862-1-tobin@kernel.org>
References: <20190311010744.5862-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list in the page structure (slab_list) that can be used for this
purpose.  Doing so makes the code cleaner since we are not overloading
the lru list.

Use the slab_list instead of the lru list for maintaining lists of
slabs.

Signed-off-by: Tobin C. Harding
---
 mm/slob.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..ee68ff2a2833 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
 
 static void set_slob_page_free(struct page *sp, struct list_head *list)
 {
-	list_add(&sp->lru, list);
+	list_add(&sp->slab_list, list);
 	__SetPageSlobFree(sp);
 }
 
 static inline void clear_slob_page_free(struct page *sp)
 {
-	list_del(&sp->lru);
+	list_del(&sp->slab_list);
 	__ClearPageSlobFree(sp);
 }
 
@@ -283,7 +283,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 
 	spin_lock_irqsave(&slob_lock, flags);
 	/* Iterate through each partially free page, try to find room */
-	list_for_each_entry(sp, slob_list, lru) {
+	list_for_each_entry(sp, slob_list, slab_list) {
 #ifdef CONFIG_NUMA
 		/*
 		 * If there's a node specification, search for a partial
@@ -297,7 +297,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 			continue;
 
 		/* Attempt to alloc */
-		prev = sp->lru.prev;
+		prev = sp->slab_list.prev;
 		b = slob_page_alloc(sp, size, align);
 		if (!b)
 			continue;
@@ -323,7 +323,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		spin_lock_irqsave(&slob_lock, flags);
 		sp->units = SLOB_UNITS(PAGE_SIZE);
 		sp->freelist = b;
-		INIT_LIST_HEAD(&sp->lru);
+		INIT_LIST_HEAD(&sp->slab_list);
 		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
 		set_slob_page_free(sp, slob_list);
 		b = slob_page_alloc(sp, size, align);