From patchwork Wed Apr 13 15:33:37 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 12812187
From: Song Liu
Subject: [PATCH v3 bpf 1/4] vmalloc: replace VM_NO_HUGE_VMAP with VM_ALLOW_HUGE_VMAP
Date: Wed, 13 Apr 2022 08:33:37 -0700
Message-ID: <20220413153340.326834-2-song@kernel.org>
In-Reply-To: <20220413153340.326834-1-song@kernel.org>
References: <20220413153340.326834-1-song@kernel.org>
Huge-page-backed vmalloc memory can benefit performance in many cases.
However, some users of vmalloc may not be ready to handle huge pages for
various reasons: hardware constraints, potential page splits, etc.
VM_NO_HUGE_VMAP was introduced to let vmalloc users opt out of huge pages.
However, it is not easy to track down all the users that require the
opt-out, as the allocations are passed down different call stacks and may
cause issues in different layers.

To address this, replace VM_NO_HUGE_VMAP with an opt-in flag,
VM_ALLOW_HUGE_VMAP, so that users that benefit from huge pages can ask for
them explicitly. Also, replace vmalloc_no_huge() with the opt-in helpers
vmalloc_huge() and __vmalloc_huge().

Fixes: fac54e2bfb5b ("x86/Kconfig: Select HAVE_ARCH_HUGE_VMALLOC with HAVE_ARCH_HUGE_VMAP")
Link: https://lore.kernel.org/netdev/14444103-d51b-0fb3-ee63-c3f182f0b546@molgen.mpg.de/
Signed-off-by: Song Liu
---
 arch/Kconfig                 |  6 ++----
 arch/powerpc/kernel/module.c |  2 +-
 arch/s390/kvm/pv.c           |  2 +-
 include/linux/vmalloc.h      |  5 +++--
 mm/vmalloc.c                 | 34 ++++++++++++++++++++++++++++------
 5 files changed, 35 insertions(+), 14 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 29b0167c088b..31c4fdc4a4ba 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -854,10 +854,8 @@ config HAVE_ARCH_HUGE_VMAP
 
 #
 #  Archs that select this would be capable of PMD-sized vmaps (i.e.,
-#  arch_vmap_pmd_supported() returns true), and they must make no assumptions
-#  that vmalloc memory is mapped with PAGE_SIZE ptes. The VM_NO_HUGE_VMAP flag
-#  can be used to prohibit arch-specific allocations from using hugepages to
-#  help with this (e.g., modules may require it).
+#  arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
+#  must be used to enable allocations to use hugepages.
 #
 config HAVE_ARCH_HUGE_VMALLOC
 	depends on HAVE_ARCH_HUGE_VMAP
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 40a583e9d3c7..97a76a8619fb 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -101,7 +101,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
 	 * too.
 	 */
 	return __vmalloc_node_range(size, 1, start, end, gfp, prot,
-				    VM_FLUSH_RESET_PERMS | VM_NO_HUGE_VMAP,
+				    VM_FLUSH_RESET_PERMS,
 				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index 7f7c0d6af2ce..8afede243903 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -142,7 +142,7 @@ static int kvm_s390_pv_alloc_vm(struct kvm *kvm)
 	 * using large pages for the virtual memory area.
 	 * This is a hardware limitation.
 	 */
-	kvm->arch.pv.stor_var = vmalloc_no_huge(vlen);
+	kvm->arch.pv.stor_var = vmalloc(vlen);
 	if (!kvm->arch.pv.stor_var)
 		goto out_err;
 	return 0;
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 3b1df7da402d..20205c4e3b23 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -26,7 +26,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
 #define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
 #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
-#define VM_NO_HUGE_VMAP		0x00000400	/* force PAGE_SIZE pte mapping */
+#define VM_ALLOW_HUGE_VMAP	0x00000400	/* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
 	!defined(CONFIG_KASAN_VMALLOC)
@@ -153,7 +153,8 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller) __alloc_size(1);
 void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
 		int node, const void *caller) __alloc_size(1);
-void *vmalloc_no_huge(unsigned long size) __alloc_size(1);
+void *vmalloc_huge(unsigned long size) __alloc_size(1);
+void *__vmalloc_huge(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
 
 extern void *__vmalloc_array(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
 extern void *vmalloc_array(size_t n, size_t size) __alloc_size(1, 2);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e163372d3967..1dac30c0ea41 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3106,7 +3106,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		return NULL;
 	}
 
-	if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
+	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
 		unsigned long size_per_node;
 
 		/*
@@ -3273,21 +3273,43 @@ void *vmalloc(unsigned long size)
 EXPORT_SYMBOL(vmalloc);
 
 /**
- * vmalloc_no_huge - allocate virtually contiguous memory using small pages
+ * vmalloc_huge - allocate virtually contiguous memory, allow huge pages
  * @size: allocation size
  *
- * Allocate enough non-huge pages to cover @size from the page level
+ * Allocate enough pages to cover @size from the page level
+ * allocator and map them into contiguous kernel virtual space.
+ * If @size is greater than or equal to PMD_SIZE, allow using
+ * huge pages for the memory
+ *
+ * Return: pointer to the allocated memory or %NULL on error
+ */
+void *vmalloc_huge(unsigned long size)
+{
+	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+				    GFP_KERNEL, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
+				    NUMA_NO_NODE, __builtin_return_address(0));
+}
+EXPORT_SYMBOL_GPL(vmalloc_huge);
+
+/**
+ * __vmalloc_huge - allocate virtually contiguous memory, allow huge pages
+ * @size: allocation size
+ * @gfp_mask: flags for the page level allocator
+ *
+ * Allocate enough pages to cover @size from the page level
  * allocator and map them into contiguous kernel virtual space.
+ * If @size is greater than or equal to PMD_SIZE, allow using
+ * huge pages for the memory
  *
  * Return: pointer to the allocated memory or %NULL on error
  */
-void *vmalloc_no_huge(unsigned long size)
+void *__vmalloc_huge(unsigned long size, gfp_t gfp_mask)
 {
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
-				    GFP_KERNEL, PAGE_KERNEL, VM_NO_HUGE_VMAP,
+				    gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
 				    NUMA_NO_NODE, __builtin_return_address(0));
 }
-EXPORT_SYMBOL(vmalloc_no_huge);
+EXPORT_SYMBOL_GPL(__vmalloc_huge);
 
 /**
  * vzalloc - allocate virtually contiguous memory with zero fill
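
Illustration, not part of the patch: with the opt-in scheme, a caller that
can tolerate PMD-sized mappings asks for them explicitly via the new
helpers; everyone else keeps getting PAGE_SIZE ptes. A minimal sketch,
assuming the helpers above (the example function names are hypothetical):

/* Hypothetical caller that opts in to huge-page-backed vmalloc. */
static void *example_alloc_big_buffer(unsigned long size)
{
	/*
	 * Huge pages are only used when the arch selects
	 * HAVE_ARCH_HUGE_VMALLOC and the size is large enough (PMD_SIZE);
	 * otherwise this behaves like plain vmalloc().
	 */
	return vmalloc_huge(size);
}

/* Same opt-in, with a caller-chosen gfp mask and zeroed memory. */
static void *example_zalloc_big_buffer(unsigned long size)
{
	return __vmalloc_huge(size, GFP_KERNEL | __GFP_ZERO);
}

The memory is freed with vfree() as usual.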
From patchwork Wed Apr 13 15:33:38 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 12812202
From: Song Liu
Subject: [PATCH v3 bpf 2/4] page_alloc: use __vmalloc_huge for large system hash
Date: Wed, 13 Apr 2022 08:33:38 -0700
Message-ID: <20220413153340.326834-3-song@kernel.org>
In-Reply-To: <20220413153340.326834-1-song@kernel.org>
References: <20220413153340.326834-1-song@kernel.org>

Use __vmalloc_huge() in alloc_large_system_hash() so that large system
hashes (>= PMD_SIZE) can benefit from huge pages. Note that
__vmalloc_huge() only allocates huge pages on systems with
HAVE_ARCH_HUGE_VMALLOC.

Signed-off-by: Song Liu
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e5b4488a0c5..20d38b8482c4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8919,7 +8919,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 			table = memblock_alloc_raw(size,
 						   SMP_CACHE_BYTES);
 	} else if (get_order(size) >= MAX_ORDER || hashdist) {
-		table = __vmalloc(size, gfp_flags);
+		table = __vmalloc_huge(size, gfp_flags);
 		virt = true;
 		if (table)
 			huge = is_vm_area_hugepages(table);
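
Aside, not part of the patch: a caller can check whether the mapping
actually ended up huge-page backed with is_vm_area_hugepages(), as the
hunk above does. An illustrative sketch (the helper name is hypothetical):

/* Hypothetical helper: allocate a large table and report its backing. */
static void *example_alloc_table(unsigned long size, gfp_t gfp_flags)
{
	void *table = __vmalloc_huge(size, gfp_flags);

	if (table && is_vm_area_hugepages(table))
		pr_info("table is backed by huge pages\n");

	return table;
}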
From patchwork Wed Apr 13 15:33:39 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 12812190
From: Song Liu
Subject: [PATCH v3 bpf 3/4] module: introduce module_alloc_huge
Date: Wed, 13 Apr 2022 08:33:39 -0700
Message-ID: <20220413153340.326834-4-song@kernel.org>
In-Reply-To: <20220413153340.326834-1-song@kernel.org>
References: <20220413153340.326834-1-song@kernel.org>

Introduce module_alloc_huge, which allocates huge page backed memory in
module memory space. The primary user of this memory is bpf_prog_pack
(multiple BPF programs sharing a huge page).

Signed-off-by: Song Liu
---
 arch/x86/kernel/module.c     | 21 +++++++++++++++++++++
 include/linux/moduleloader.h |  5 +++++
 kernel/module.c              |  5 +++++
 3 files changed, 31 insertions(+)

diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b98ffcf4d250..63f6a16c70dc 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -86,6 +86,27 @@ void *module_alloc(unsigned long size)
 	return p;
 }
 
+void *module_alloc_huge(unsigned long size)
+{
+	gfp_t gfp_mask = GFP_KERNEL;
+	void *p;
+
+	if (PAGE_ALIGN(size) > MODULES_LEN)
+		return NULL;
+
+	p = __vmalloc_node_range(size, MODULE_ALIGN,
+				 MODULES_VADDR + get_module_load_offset(),
+				 MODULES_END, gfp_mask, PAGE_KERNEL,
+				 VM_DEFER_KMEMLEAK | VM_ALLOW_HUGE_VMAP,
+				 NUMA_NO_NODE, __builtin_return_address(0));
+	if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) {
+		vfree(p);
+		return NULL;
+	}
+
+	return p;
+}
+
 #ifdef CONFIG_X86_32
 int apply_relocate(Elf32_Shdr *sechdrs,
 		   const char *strtab,
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 9e09d11ffe5b..d34743a88938 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -26,6 +26,11 @@ unsigned int arch_mod_section_prepend(struct module *mod, unsigned int section);
    sections.  Returns NULL on failure. */
 void *module_alloc(unsigned long size);
 
+/* Allocator used for allocating memory in module memory space. If size is
+ * greater than PMD_SIZE, allow using huge pages. Returns NULL on failure.
+ */
+void *module_alloc_huge(unsigned long size);
+
 /* Free memory returned from module_alloc. */
 void module_memfree(void *module_region);
diff --git a/kernel/module.c b/kernel/module.c
index 6cea788fd965..b2c6cb682a7d 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2839,6 +2839,11 @@ void * __weak module_alloc(unsigned long size)
 			NUMA_NO_NODE, __builtin_return_address(0));
 }
 
+void * __weak module_alloc_huge(unsigned long size)
+{
+	return vmalloc_huge(size);
+}
+
 bool __weak module_init_section(const char *name)
 {
 	return strstarts(name, ".init");
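
Illustration, not part of the patch: a JIT or other module-space user
would pair the new allocator with the existing module_memfree(). A rough
sketch under that assumption (the function names and the permission step
are hypothetical placeholders):

/* Hypothetical user of module_alloc_huge(). */
static void *example_alloc_text(unsigned long image_size)
{
	void *image = module_alloc_huge(image_size);

	if (!image)
		return NULL;

	/* ... copy in code, then set permissions as the caller requires ... */
	return image;
}

static void example_free_text(void *image)
{
	/* Freed like any other module_alloc()-style allocation. */
	module_memfree(image);
}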
From patchwork Wed Apr 13 15:33:40 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 12812189
From: Song Liu
Subject: [PATCH v3 bpf 4/4] bpf: use module_alloc_huge for bpf_prog_pack
Date: Wed, 13 Apr 2022 08:33:40 -0700
Message-ID: <20220413153340.326834-5-song@kernel.org>
In-Reply-To: <20220413153340.326834-1-song@kernel.org>
References: <20220413153340.326834-1-song@kernel.org>

Use module_alloc_huge for bpf_prog_pack so that BPF programs sit on
PMD_SIZE pages. This benefits system performance by reducing iTLB miss
rate.

Also, remove set_vm_flush_reset_perms() from alloc_new_pack() and use
set_memory_[nx|rw] in bpf_prog_pack_free(). This is because
VM_FLUSH_RESET_PERMS does not work with huge pages yet.

Signed-off-by: Song Liu
---
 kernel/bpf/core.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 13e9dbeeedf3..b2a634d0f842 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -857,7 +857,7 @@ static size_t select_bpf_prog_pack_size(void)
 	void *ptr;
 
 	size = BPF_HPAGE_SIZE * num_online_nodes();
-	ptr = module_alloc(size);
+	ptr = module_alloc_huge(size);
 
 	/* Test whether we can get huge pages. If not just use PAGE_SIZE
 	 * packs.
@@ -881,7 +881,7 @@ static struct bpf_prog_pack *alloc_new_pack(void)
 		       GFP_KERNEL);
 	if (!pack)
 		return NULL;
-	pack->ptr = module_alloc(bpf_prog_pack_size);
+	pack->ptr = module_alloc_huge(bpf_prog_pack_size);
 	if (!pack->ptr) {
 		kfree(pack);
 		return NULL;
@@ -889,7 +889,6 @@ static struct bpf_prog_pack *alloc_new_pack(void)
 	bitmap_zero(pack->bitmap, bpf_prog_pack_size / BPF_PROG_CHUNK_SIZE);
 	list_add_tail(&pack->list, &pack_list);
 
-	set_vm_flush_reset_perms(pack->ptr);
 	set_memory_ro((unsigned long)pack->ptr, bpf_prog_pack_size / PAGE_SIZE);
 	set_memory_x((unsigned long)pack->ptr, bpf_prog_pack_size / PAGE_SIZE);
 	return pack;
@@ -970,6 +969,8 @@ static void bpf_prog_pack_free(struct bpf_binary_header *hdr)
 	if (bitmap_find_next_zero_area(pack->bitmap, bpf_prog_chunk_count(), 0,
 				       bpf_prog_chunk_count(), 0) == 0) {
 		list_del(&pack->list);
+		set_memory_nx((unsigned long)pack->ptr, bpf_prog_pack_size / PAGE_SIZE);
+		set_memory_rw((unsigned long)pack->ptr, bpf_prog_pack_size / PAGE_SIZE);
 		module_memfree(pack->ptr);
 		kfree(pack);
 	}
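
To summarize the resulting pack lifecycle (illustrative sketch only, not
part of the patch; error handling omitted, and pack_size stands in for
bpf_prog_pack_size):

/* Sketch of allocation: huge-page-backed pack, made RO+X by hand. */
static void *example_pack_alloc(size_t pack_size)
{
	void *ptr = module_alloc_huge(pack_size);

	/*
	 * No set_vm_flush_reset_perms() here: VM_FLUSH_RESET_PERMS does
	 * not work with huge pages yet, so permissions are managed
	 * explicitly.
	 */
	set_memory_ro((unsigned long)ptr, pack_size / PAGE_SIZE);
	set_memory_x((unsigned long)ptr, pack_size / PAGE_SIZE);
	return ptr;
}

/* Sketch of teardown: restore NX+RW before handing memory back. */
static void example_pack_free(void *ptr, size_t pack_size)
{
	set_memory_nx((unsigned long)ptr, pack_size / PAGE_SIZE);
	set_memory_rw((unsigned long)ptr, pack_size / PAGE_SIZE);
	module_memfree(ptr);
}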