From patchwork Mon Aug 19 18:54:05 2024
X-Patchwork-Submitter: Christoph Lameter via B4 Relay
X-Patchwork-Id: 13768876
From: Christoph Lameter via B4 Relay
Date: Mon, 19 Aug 2024 11:54:05 -0700
Subject: [PATCH] Reenable NUMA policy support in the slab allocator
Message-Id: <20240819-numa_policy-v1-1-f096cff543ee@gentwo.org>
To: "Matthew Wilcox (Oracle)", Johannes Weiner, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
 Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Yang Shi
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@kernel.org,
 Christoph Lameter
Reply-To: cl@gentwo.org

From: Christoph Lameter

Revert
commit 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()").

The patch disabled the NUMA policy support in the slab allocator. It did
not consider that alloc_pages() uses memory policies but
alloc_pages_node() does not.

As a result of this patch, slab memory allocations are no longer spread
via the interleave policy across all available NUMA nodes on bootup.
Instead, all slab memory is allocated close to the boot processor. This
leads to an imbalance of memory accesses on NUMA systems.

Also, applications using MPOL_INTERLEAVE as a memory policy will no
longer spread slab allocations over all nodes in the interleave set but
allocate memory locally. This may also result in unbalanced allocations
on a single NUMA node.

SLUB does not apply memory policies to individual object allocations.
However, it relies on the page allocator's support of memory policies
through alloc_pages() to do the NUMA memory allocations on a per-folio
or per-page level. SLUB also applies memory policies when retrieving
partially allocated slab pages from the partial list.

Fixes: 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()")
Cc: stable@kernel.org
Signed-off-by: Christoph Lameter
Reviewed-by: Yang Shi
---
 mm/slub.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
---
base-commit: b0da640826ba3b6506b4996a6b23a429235e6923
change-id: 20240806-numa_policy-5188f44ba0d8

Best regards,

diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..4dea3c7df5ad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2318,7 +2318,11 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
-	folio = (struct folio *)alloc_pages_node(node, flags, order);
+	if (node == NUMA_NO_NODE)
+		folio = (struct folio *)alloc_pages(flags, order);
+	else
+		folio = (struct folio *)__alloc_pages_node(node, flags, order);
+
 	if (!folio)
 		return NULL;
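
For illustration only, not part of the patch: a minimal, hypothetical
userspace sketch of the MPOL_INTERLEAVE case described in the commit
message. The two-node mask and the surrounding program are assumptions;
set_mempolicy(2) is declared in <numaif.h> and needs -lnuma to link. With
the policy-aware alloc_pages() path restored in alloc_slab_page(), slab
pages allocated on such a task's behalf for NUMA_NO_NODE requests are
spread over the interleave mask again instead of all landing on the
local node.

/* Hypothetical example: install an interleave policy over nodes 0 and 1. */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long nodemask = 0x3;	/* nodes 0 and 1; assumes >= 2 nodes */

	/* All further allocations charged to this task follow the policy. */
	if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
			  sizeof(nodemask) * 8) != 0) {
		perror("set_mempolicy");
		return EXIT_FAILURE;
	}

	/*
	 * Kernel objects allocated on behalf of this task (dentries,
	 * inodes, network buffers, ...) now sit on slabs whose backing
	 * pages are interleaved across nodes 0 and 1.
	 */
	return EXIT_SUCCESS;
}

The bootup interleaving mentioned in the commit message flows through the
same alloc_pages() path, only driven by the kernel's default boot-time
interleave policy rather than an explicit set_mempolicy() call.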