From patchwork Mon Oct 7 06:28:52 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13824139
From: Mike Rapoport
To: Andrew Morton
Cc: Andreas Larsson, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
    Borislav Petkov, Brian Cain, Catalin Marinas, Christoph Hellwig,
    Christophe Leroy, Dave Hansen, Dinh Nguyen, Geert Uytterhoeven,
    Guo Ren, Helge Deller, Huacai Chen, Ingo Molnar, Johannes Berg,
    John Paul Adrian Glaubitz, Kent Overstreet, "Liam R. Howlett",
    Luis Chamberlain, Mark Rutland, Masami Hiramatsu, Matt Turner,
    Max Filippov, Michael Ellerman, Michal Simek, Mike Rapoport,
    Oleg Nesterov, Palmer Dabbelt, Peter Zijlstra, Richard Weinberger,
    Russell King, Song Liu, Stafford Horne, Steven Rostedt,
    Thomas Bogendoerfer, Thomas Gleixner, Uladzislau Rezki, Vineet Gupta,
    Will Deacon, bpf@vger.kernel.org, linux-alpha@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-mm@kvack.org,
    linux-modules@vger.kernel.org, linux-openrisc@vger.kernel.org,
    linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org,
    linux-trace-kernel@vger.kernel.org, linux-um@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev,
    sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v4 2/8] mm: vmalloc: don't account for number of nodes for
 HUGE_VMAP allocations
Date: Mon, 7 Oct 2024 09:28:52 +0300
Message-ID: <20241007062858.44248-3-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241007062858.44248-1-rppt@kernel.org>
References: <20241007062858.44248-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (Microsoft)"

vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify a node ID will use huge pages only if size_per_node is larger
than a huge page.
Still, the actual allocated memory is not distributed between nodes and
there is no advantage in such an approach.
On the contrary, BPF allocates SZ_2M * num_possible_nodes() for each
new bpf_prog_pack, while it could do with a single huge page per pack.

Don't account for the number of nodes for VM_ALLOW_HUGE_VMAP with
NUMA_NO_NODE and use huge pages whenever the requested allocation size
is larger than a huge page.

Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/vmalloc.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 634162271c00..86b2344d7461 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3763,8 +3763,6 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	}
 
 	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
-		unsigned long size_per_node;
-
 		/*
 		 * Try huge pages. Only try for PAGE_KERNEL allocations,
 		 * others like modules don't yet expect huge pages in
@@ -3772,13 +3770,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 		 * supporting them.
 		 */
 
-		size_per_node = size;
-		if (node == NUMA_NO_NODE)
-			size_per_node /= num_online_nodes();
-		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
+		if (arch_vmap_pmd_supported(prot) && size >= PMD_SIZE)
 			shift = PMD_SHIFT;
 		else
-			shift = arch_vmap_pte_supported_shift(size_per_node);
+			shift = arch_vmap_pte_supported_shift(size);
 
 		align = max(real_align, 1UL << shift);
 		size = ALIGN(real_size, 1UL << shift);
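
For anyone who wants to see the effect of dropping the size_per_node
heuristic in isolation, below is a small standalone userspace sketch of
the before/after decision (not part of the patch): the 4-node machine,
the 2M request and the helper names are illustrative assumptions, and
the arch_vmap_pmd_supported() check is assumed to return true.

/*
 * Illustrative sketch only, not kernel code: compares the old
 * per-node sizing heuristic with the new total-size check for a
 * NUMA_NO_NODE, VM_ALLOW_HUGE_VMAP allocation (the bpf_prog_pack case).
 */
#include <stdbool.h>
#include <stdio.h>

#define SZ_2M		(2UL << 20)
#define PMD_SIZE	SZ_2M
#define NUMA_NO_NODE	(-1)

static bool old_uses_huge_page(unsigned long size, int node, int online_nodes)
{
	unsigned long size_per_node = size;

	/* Before: the request was notionally split across all online nodes. */
	if (node == NUMA_NO_NODE)
		size_per_node /= online_nodes;
	return size_per_node >= PMD_SIZE;
}

static bool new_uses_huge_page(unsigned long size)
{
	/* After: only the total requested size matters. */
	return size >= PMD_SIZE;
}

int main(void)
{
	/* A 2M request on a 4-node machine: 0 (small pages) before, 1 (huge page) after. */
	printf("old: %d, new: %d\n",
	       old_uses_huge_page(SZ_2M, NUMA_NO_NODE, 4),
	       new_uses_huge_page(SZ_2M));
	return 0;
}

On such a configuration the old logic computes size_per_node = 512K,
falls below PMD_SIZE and falls back to small pages, while the new logic
sees the full 2M request and selects PMD_SHIFT.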