From patchwork Wed Oct 23 16:27:05 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13847499
From: Mike Rapoport
To: Andrew Morton, Luis Chamberlain
Cc: Andreas Larsson, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
    Borislav Petkov, Brian Cain, Catalin Marinas, Christoph Hellwig,
    Christophe Leroy, Dave Hansen, Dinh Nguyen, Geert Uytterhoeven,
    Guo Ren, Helge Deller, Huacai Chen, Ingo Molnar, Johannes Berg,
    John Paul Adrian Glaubitz, Kent Overstreet, "Liam R. Howlett",
    Mark Rutland, Masami Hiramatsu, Matt Turner, Max Filippov,
    Michael Ellerman, Michal Simek, Mike Rapoport, Oleg Nesterov,
    Palmer Dabbelt, Peter Zijlstra, Richard Weinberger, Russell King,
    Song Liu, Stafford Horne, Steven Rostedt, Suren Baghdasaryan,
    Thomas Bogendoerfer, Thomas Gleixner, Uladzislau Rezki,
    Vineet Gupta, Will Deacon, bpf@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
    linux-hexagon@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org, linux-modules@vger.kernel.org,
    linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-sh@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-trace-kernel@vger.kernel.org,
    linux-um@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    loongarch@lists.linux.dev, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 2/8] mm: vmalloc: don't account for number of nodes for HUGE_VMAP allocations
Date: Wed, 23 Oct 2024 19:27:05 +0300
Message-ID: <20241023162711.2579610-3-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241023162711.2579610-1-rppt@kernel.org>
References: <20241023162711.2579610-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify a node ID use huge pages only if size_per_node is larger than
a huge page.
However, the allocated memory is not actually distributed between
nodes, so there is no advantage in this approach. On the contrary,
BPF allocates SZ_2M * num_possible_nodes() for each new bpf_prog_pack,
while it could do with a single huge page per pack.

Don't account for the number of nodes for VM_ALLOW_HUGE_VMAP with
NUMA_NO_NODE and use huge pages whenever the requested allocation size
is larger than a huge page.

Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Christoph Hellwig
Reviewed-by: Uladzislau Rezki (Sony)
Reviewed-by: Luis Chamberlain
Tested-by: kdevops
---
 mm/vmalloc.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 634162271c00..86b2344d7461 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3763,8 +3763,6 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	}
 
 	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
-		unsigned long size_per_node;
-
 		/*
 		 * Try huge pages. Only try for PAGE_KERNEL allocations,
 		 * others like modules don't yet expect huge pages in
@@ -3772,13 +3770,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 		 * supporting them.
 		 */
 
-		size_per_node = size;
-		if (node == NUMA_NO_NODE)
-			size_per_node /= num_online_nodes();
-		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
+		if (arch_vmap_pmd_supported(prot) && size >= PMD_SIZE)
 			shift = PMD_SHIFT;
 		else
-			shift = arch_vmap_pte_supported_shift(size_per_node);
+			shift = arch_vmap_pte_supported_shift(size);
 
 		align = max(real_align, 1UL << shift);
 		size = ALIGN(real_size, 1UL << shift);
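
[Editor's note] To illustrate the caller-visible effect, here is a minimal
sketch (not part of the patch) of a hypothetical caller in the spirit of
bpf_prog_pack. It assumes an architecture where arch_vmap_pmd_supported()
returns true for PAGE_KERNEL and PMD_SIZE is 2M. vmalloc_huge() passes
VM_ALLOW_HUGE_VMAP and NUMA_NO_NODE down to __vmalloc_node_range_noprof(),
so it goes through the code path changed above:

	#include <linux/vmalloc.h>
	#include <linux/sizes.h>
	#include <linux/gfp.h>

	/* Hypothetical helper, for illustration only. */
	static void *alloc_one_pack(void)
	{
		/*
		 * Before this patch, with NUMA_NO_NODE the size was first
		 * divided by num_online_nodes(), so on a machine with more
		 * than one node SZ_2M / nr_nodes < PMD_SIZE and the area
		 * fell back to base pages.  After this patch the check is
		 * simply size >= PMD_SIZE, so this request can be mapped
		 * with a single PMD-sized huge page.
		 */
		return vmalloc_huge(SZ_2M, GFP_KERNEL);
	}

In other words, a caller that wants one huge mapping no longer has to
inflate its request by the node count just to pass the per-node size check.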