From patchwork Fri Oct 7 23:43:12 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 13001542
From: Song Liu
Subject: [RFC v2 1/4] vmalloc: introduce vmalloc_exec and vfree_exec
Date: Fri, 7 Oct 2022 16:43:12 -0700
Message-ID: <20221007234315.2877365-2-song@kernel.org>
In-Reply-To: <20221007234315.2877365-1-song@kernel.org>
References: <20221007234315.2877365-1-song@kernel.org>
vmalloc_exec is used to allocate memory to host dynamic kernel text
(modules, BPF programs, etc.) with huge pages. This is similar to the
proposal by Peter in [1].

A new tree of vmap_area, the free_text_area_* tree, is introduced in
addition to free_vmap_area_* and vmap_area_*. vmalloc_exec allocates
pages from free_text_area_*. When there isn't enough space left in
free_text_area_*, new PMD_SIZE page(s) are allocated from
free_vmap_area_* and added to free_text_area_*. To be more accurate,
the vmap_area is first added to the vmap_area_* tree and then moved to
free_text_area_*. This extra move simplifies the logic of vmalloc_exec.

vmap_area in the free_text_area_* tree are backed with memory, but we
need subtree_max_size for tree operations. Therefore, the vm_struct for
these vmap_area are stored in a separate list, all_text_vm.

The new tree allows separate handling of allocations smaller than
PAGE_SIZE, as the current vmalloc code mostly assumes PAGE_SIZE-aligned
allocations. This version of vmalloc_exec can handle BPF programs,
which use 64-byte-aligned allocations, and modules, which use
PAGE_SIZE-aligned allocations.

In vfree_exec(), the memory is first added to free_text_area_*.
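The split that vmalloc_exec performs on a lowest-match hit can be pictured with a minimal user-space sketch. The `free_area` struct and `carve()` helper below are hypothetical illustration names (the real code operates on an augmented rb-tree of vmap_area via adjust_va_to_fit_type()), but the arithmetic is the same: the allocation starts at roundup(va_start, align) and the tail stays in the free pool.

```c
#include <assert.h>
#include <stdint.h>

#define SZ_2M (2UL * 1024 * 1024)                     /* PMD_SIZE on x86-64 */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

/* One free range [start, end), standing in for a free-tree vmap_area. */
struct free_area {
	uint64_t start;
	uint64_t end;
};

/*
 * Carve an aligned sub-allocation off the front of a free range: the
 * result starts at the aligned-up start address, and the remainder of
 * the range stays free. Returns 0 when the request does not fit, the
 * analogue of NOTHING_FIT, in which case the caller grows the pool by
 * another PMD_SIZE chunk.
 */
static uint64_t carve(struct free_area *fa, uint64_t size, uint64_t align)
{
	uint64_t addr = ALIGN_UP(fa->start, align);

	if (addr + size > fa->end)
		return 0;
	fa->start = addr + size;   /* shrink the free range in place */
	return addr;
}
```

A 64-byte BPF allocation and a too-large request against a single 2MB free range exercise both paths.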
If this free creates big enough free space (> PMD_SIZE), vfree_exec()
will try to free the backing vm_struct.

[1] https://lore.kernel.org/bpf/Ys6cWUMHO8XwyYgr@hirez.programming.kicks-ass.net/

Signed-off-by: Song Liu
---
 include/linux/vmalloc.h |   2 +
 mm/nommu.c              |   7 ++
 mm/vmalloc.c            | 269 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 278 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 096d48aa3437..16c0adc1daee 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -154,6 +154,8 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
 		int node, const void *caller) __alloc_size(1);
 void *vmalloc_huge(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
+void *vmalloc_exec(unsigned long size, unsigned long align) __alloc_size(1);
+void vfree_exec(const void *addr);
 extern void *__vmalloc_array(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
 extern void *vmalloc_array(size_t n, size_t size) __alloc_size(1, 2);
diff --git a/mm/nommu.c b/mm/nommu.c
index e819cbc21b39..c7dcd920ec26 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -372,6 +372,13 @@ int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
 }
 EXPORT_SYMBOL(vm_map_pages_zero);
 
+void *vmalloc_exec(unsigned long size, unsigned long align)
+{
+	return NULL;
+}
+
+void vfree_exec(const void *addr) { }
+
 /*
  * sys_brk() for the most part doesn't need the global kernel
  * lock, except when an application is doing something nasty
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 088b421601c4..9212ff96b871 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -72,6 +72,9 @@ early_param("nohugevmalloc", set_nohugevmalloc);
 static const bool vmap_allow_huge = false;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMALLOC */
 
+#define PMD_ALIGN(addr) ALIGN(addr, PMD_SIZE)
+#define PMD_ALIGN_DOWN(addr) ALIGN_DOWN(addr, PMD_SIZE)
+
 bool is_vmalloc_addr(const void *x)
 {
 	unsigned long addr = (unsigned long)kasan_reset_tag(x);
@@ -753,6 +756,39 @@ static LIST_HEAD(free_vmap_area_list);
  */
 static struct rb_root free_vmap_area_root = RB_ROOT;
 
+/*
+ * free_text_area for vmalloc_exec() and vfree_exec()
+ */
+static DEFINE_SPINLOCK(free_text_area_lock);
+/*
+ * This linked list is used in pair with free_text_area_root.
+ * It gives O(1) access to prev/next to perform fast coalescing.
+ */
+static LIST_HEAD(free_text_area_list);
+
+/*
+ * This augmented red-black tree represents the free text space.
+ * All vmap_area objects in this tree are sorted by va->va_start
+ * address. It is used for allocation and merging when a vmap
+ * object is released.
+ *
+ * Each vmap_area node contains a maximum available free block
+ * of its sub-tree, right or left. Therefore it is possible to
+ * find a lowest match of free area.
+ *
+ * vmap_area in this tree are backed by RO+X memory, but they do
+ * not have a valid vm pointer (because we need subtree_max_size).
+ * The vm for these vmap_area are stored in all_text_vm.
+ */
+static struct rb_root free_text_area_root = RB_ROOT;
+
+/*
+ * List of vm_struct for free_text_area_root. This list is rarely
+ * accessed, so the O(N) complexity is not likely a real issue.
+ */
+struct vm_struct *all_text_vm;
+
 /*
  * Preload a CPU with one object for "no edge" split case. The
  * aim is to get rid of allocations from the atomic context, thus
@@ -3297,6 +3333,239 @@ void *vmalloc(unsigned long size)
 }
 EXPORT_SYMBOL(vmalloc);
 
+#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
+#define VMALLOC_EXEC_START MODULES_VADDR
+#define VMALLOC_EXEC_END MODULES_END
+#else
+#define VMALLOC_EXEC_START VMALLOC_START
+#define VMALLOC_EXEC_END VMALLOC_END
+#endif
+
+static void move_vmap_to_free_text_tree(void *addr)
+{
+	struct vmap_area *va;
+
+	/* remove from vmap_area_root */
+	spin_lock(&vmap_area_lock);
+	va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
+	if (WARN_ON_ONCE(!va)) {
+		spin_unlock(&vmap_area_lock);
+		return;
+	}
+	unlink_va(va, &vmap_area_root);
+	spin_unlock(&vmap_area_lock);
+
+	/* make the memory RO+X */
+	memset(addr, 0, va->va_end - va->va_start);
+	set_memory_ro(va->va_start, (va->va_end - va->va_start) >> PAGE_SHIFT);
+	set_memory_x(va->va_start, (va->va_end - va->va_start) >> PAGE_SHIFT);
+
+	/* add to all_text_vm */
+	va->vm->next = all_text_vm;
+	all_text_vm = va->vm;
+
+	/* add to free_text_area_root */
+	spin_lock(&free_text_area_lock);
+	merge_or_add_vmap_area_augment(va, &free_text_area_root, &free_text_area_list);
+	spin_unlock(&free_text_area_lock);
+}
+
+/**
+ * vmalloc_exec - allocate virtually contiguous RO+X memory
+ * @size: allocation size
+ * @align: desired alignment of the allocation
+ *
+ * This is used to allocate dynamic kernel text, such as module text and
+ * BPF programs. The user needs to use text_poke to update the memory
+ * allocated by vmalloc_exec.
+ *
+ * Return: pointer to the allocated memory or %NULL on error
+ */
+void *vmalloc_exec(unsigned long size, unsigned long align)
+{
+	struct vmap_area *va, *tmp;
+	unsigned long addr;
+	enum fit_type type;
+	int ret;
+
+	va = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, NUMA_NO_NODE);
+	if (unlikely(!va))
+		return NULL;
+
+again:
+	preload_this_cpu_lock(&free_text_area_lock, GFP_KERNEL, NUMA_NO_NODE);
+	tmp = find_vmap_lowest_match(&free_text_area_root, size, align, 1, false);
+
+	if (!tmp) {
+		unsigned long alloc_size;
+		void *ptr;
+
+		spin_unlock(&free_text_area_lock);
+
+		/*
+		 * Not enough continuous space in free_text_area_root, try
+		 * to allocate more memory. The memory is first added to
+		 * vmap_area_root, and then moved to free_text_area_root.
+		 */
+		alloc_size = roundup(size, PMD_SIZE * num_online_nodes());
+		ptr = __vmalloc_node_range(alloc_size, PMD_SIZE, VMALLOC_EXEC_START,
+					   VMALLOC_EXEC_END, GFP_KERNEL, PAGE_KERNEL,
+					   VM_ALLOW_HUGE_VMAP | VM_NO_GUARD,
+					   NUMA_NO_NODE, __builtin_return_address(0));
+		if (unlikely(!ptr))
+			goto err_out;
+
+		move_vmap_to_free_text_tree(ptr);
+		goto again;
+	}
+
+	addr = roundup(tmp->va_start, align);
+	type = classify_va_fit_type(tmp, addr, size);
+	if (WARN_ON_ONCE(type == NOTHING_FIT))
+		goto err_out;
+
+	ret = adjust_va_to_fit_type(&free_text_area_root, &free_text_area_list,
+				    tmp, addr, size);
+	if (ret)
+		goto err_out;
+
+	spin_unlock(&free_text_area_lock);
+
+	va->va_start = addr;
+	va->va_end = addr + size;
+	va->vm = NULL;
+
+	spin_lock(&vmap_area_lock);
+	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
+	spin_unlock(&vmap_area_lock);
+
+	return (void *)addr;
+
+err_out:
+	spin_unlock(&free_text_area_lock);
+	kmem_cache_free(vmap_area_cachep, va);
+	return NULL;
+}
+
+static struct vm_struct *find_and_unlink_text_vm(unsigned long start, unsigned long end)
+{
+	struct vm_struct *vm, *prev_vm;
+
+	lockdep_assert_held(&free_text_area_lock);
+
+	vm = all_text_vm;
+	while (vm) {
+		unsigned long vm_addr = (unsigned long)vm->addr;
+
+		/* vm is within this free space, we can free it */
+		if ((vm_addr >= start) && ((vm_addr + vm->size) <= end))
+			goto unlink_vm;
+		vm = vm->next;
+	}
+	return NULL;
+
+unlink_vm:
+	if (all_text_vm == vm) {
+		all_text_vm = vm->next;
+	} else {
+		prev_vm = all_text_vm;
+		while (prev_vm->next != vm)
+			prev_vm = prev_vm->next;
+		prev_vm->next = vm->next;
+	}
+	return vm;
+}
+
+/**
+ * vfree_exec - Release memory allocated by vmalloc_exec()
+ * @addr: Memory base address
+ *
+ * If @addr is NULL, no operation is performed.
+ */
+void vfree_exec(const void *addr)
+{
+	unsigned long free_start, free_end, free_addr;
+	struct vm_struct *vm;
+	struct vmap_area *va;
+
+	might_sleep();
+
+	if (!addr)
+		return;
+
+	spin_lock(&vmap_area_lock);
+	va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
+	if (WARN_ON_ONCE(!va)) {
+		spin_unlock(&vmap_area_lock);
+		return;
+	}
+	WARN_ON_ONCE(va->vm);
+
+	unlink_va(va, &vmap_area_root);
+	spin_unlock(&vmap_area_lock);
+
+	spin_lock(&free_text_area_lock);
+	va = merge_or_add_vmap_area_augment(va,
+		&free_text_area_root, &free_text_area_list);
+
+	if (WARN_ON_ONCE(!va))
+		goto out;
+
+	free_start = PMD_ALIGN(va->va_start);
+	free_end = PMD_ALIGN_DOWN(va->va_end);
+
+	/*
+	 * Only try to free vm when there is at least PMD_SIZE of
+	 * continuous free memory.
+	 */
+	if (free_start >= free_end)
+		goto out;
+
+	/*
+	 * TODO: It is possible that multiple vm are ready to be freed
+	 * after one vfree_exec(). But we free at most one vm for now.
+	 */
+	vm = find_and_unlink_text_vm(free_start, free_end);
+	if (!vm)
+		goto out;
+
+	va = kmem_cache_alloc_node(vmap_area_cachep, GFP_ATOMIC, NUMA_NO_NODE);
+	if (unlikely(!va))
+		goto out_save_vm;
+
+	free_addr = __alloc_vmap_area(&free_text_area_root, &free_text_area_list,
+				      vm->size, 1, (unsigned long)vm->addr,
+				      (unsigned long)vm->addr + vm->size);
+
+	if (WARN_ON_ONCE(free_addr != (unsigned long)vm->addr))
+		goto out_save_vm;
+
+	va->va_start = (unsigned long)vm->addr;
+	va->va_end = va->va_start + vm->size;
+	va->vm = vm;
+	spin_unlock(&free_text_area_lock);
+
+	set_memory_nx(va->va_start, vm->size >> PAGE_SHIFT);
+	set_memory_rw(va->va_start, vm->size >> PAGE_SHIFT);
+
+	/* put the va to vmap_area_root, and then free it with vfree */
+	spin_lock(&vmap_area_lock);
+	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
+	spin_unlock(&vmap_area_lock);
+
+	vfree(vm->addr);
+	return;
+
+out_save_vm:
+	/*
+	 * vm is removed from all_text_vm, but not freed. Add it back,
+	 * so that we can use or free it later.
+	 */
+	vm->next = all_text_vm;
+	all_text_vm = vm;
+out:
+	spin_unlock(&free_text_area_lock);
+}
+
 /**
  * vmalloc_huge - allocate virtually contiguous memory, allow huge pages
  * @size: allocation size

From patchwork Fri Oct 7 23:43:13 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 13001543
From: Song Liu
Subject: [RFC v2 2/4] bpf: use vmalloc_exec
Date: Fri, 7 Oct 2022 16:43:13 -0700
Message-ID: <20221007234315.2877365-3-song@kernel.org>
In-Reply-To: <20221007234315.2877365-1-song@kernel.org>
References: <20221007234315.2877365-1-song@kernel.org>
Use vmalloc_exec and vfree_exec instead of bpf_prog_pack_[alloc|free].

Signed-off-by: Song Liu
---
 kernel/bpf/core.c | 155 +++------------------------------------------
 1 file changed, 10 insertions(+), 145 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3d9eb3ae334c..f96e8acaa34c 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -806,144 +806,6 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
 	return slot;
 }
 
-/*
- * BPF program pack allocator.
- *
- * Most BPF programs are pretty small. Allocating a hole page for each
- * program is sometime a waste. Many small bpf program also adds pressure
- * to instruction TLB. To solve this issue, we introduce a BPF program pack
- * allocator. The prog_pack allocator uses HPAGE_PMD_SIZE page (2MB on x86)
- * to host BPF programs.
- */
-#define BPF_PROG_CHUNK_SHIFT	6
-#define BPF_PROG_CHUNK_SIZE	(1 << BPF_PROG_CHUNK_SHIFT)
-#define BPF_PROG_CHUNK_MASK	(~(BPF_PROG_CHUNK_SIZE - 1))
-
-struct bpf_prog_pack {
-	struct list_head list;
-	void *ptr;
-	unsigned long bitmap[];
-};
-
-#define BPF_PROG_SIZE_TO_NBITS(size)	(round_up(size, BPF_PROG_CHUNK_SIZE) / BPF_PROG_CHUNK_SIZE)
-
-static DEFINE_MUTEX(pack_mutex);
-static LIST_HEAD(pack_list);
-
-/* PMD_SIZE is not available in some special config, e.g. ARCH=arm with
- * CONFIG_MMU=n. Use PAGE_SIZE in these cases.
- */
-#ifdef PMD_SIZE
-#define BPF_PROG_PACK_SIZE (PMD_SIZE * num_possible_nodes())
-#else
-#define BPF_PROG_PACK_SIZE PAGE_SIZE
-#endif
-
-#define BPF_PROG_CHUNK_COUNT (BPF_PROG_PACK_SIZE / BPF_PROG_CHUNK_SIZE)
-
-static struct bpf_prog_pack *alloc_new_pack(bpf_jit_fill_hole_t bpf_fill_ill_insns)
-{
-	struct bpf_prog_pack *pack;
-
-	pack = kzalloc(struct_size(pack, bitmap, BITS_TO_LONGS(BPF_PROG_CHUNK_COUNT)),
-		       GFP_KERNEL);
-	if (!pack)
-		return NULL;
-	pack->ptr = module_alloc(BPF_PROG_PACK_SIZE);
-	if (!pack->ptr) {
-		kfree(pack);
-		return NULL;
-	}
-	bpf_fill_ill_insns(pack->ptr, BPF_PROG_PACK_SIZE);
-	bitmap_zero(pack->bitmap, BPF_PROG_PACK_SIZE / BPF_PROG_CHUNK_SIZE);
-	list_add_tail(&pack->list, &pack_list);
-
-	set_vm_flush_reset_perms(pack->ptr);
-	set_memory_ro((unsigned long)pack->ptr, BPF_PROG_PACK_SIZE / PAGE_SIZE);
-	set_memory_x((unsigned long)pack->ptr, BPF_PROG_PACK_SIZE / PAGE_SIZE);
-	return pack;
-}
-
-static void *bpf_prog_pack_alloc(u32 size, bpf_jit_fill_hole_t bpf_fill_ill_insns)
-{
-	unsigned int nbits = BPF_PROG_SIZE_TO_NBITS(size);
-	struct bpf_prog_pack *pack;
-	unsigned long pos;
-	void *ptr = NULL;
-
-	mutex_lock(&pack_mutex);
-	if (size > BPF_PROG_PACK_SIZE) {
-		size = round_up(size, PAGE_SIZE);
-		ptr = module_alloc(size);
-		if (ptr) {
-			bpf_fill_ill_insns(ptr, size);
-			set_vm_flush_reset_perms(ptr);
-			set_memory_ro((unsigned long)ptr, size / PAGE_SIZE);
-			set_memory_x((unsigned long)ptr, size / PAGE_SIZE);
-		}
-		goto out;
-	}
-	list_for_each_entry(pack, &pack_list, list) {
-		pos = bitmap_find_next_zero_area(pack->bitmap, BPF_PROG_CHUNK_COUNT, 0,
-						 nbits, 0);
-		if (pos < BPF_PROG_CHUNK_COUNT)
-			goto found_free_area;
-	}
-
-	pack = alloc_new_pack(bpf_fill_ill_insns);
-	if (!pack)
-		goto out;
-
-	pos = 0;
-
-found_free_area:
-	bitmap_set(pack->bitmap, pos, nbits);
-	ptr = (void *)(pack->ptr) + (pos << BPF_PROG_CHUNK_SHIFT);
-
-out:
-	mutex_unlock(&pack_mutex);
-	return ptr;
-}
-
-static void bpf_prog_pack_free(struct bpf_binary_header *hdr)
-{
-	struct bpf_prog_pack *pack = NULL, *tmp;
-	unsigned int nbits;
-	unsigned long pos;
-
-	mutex_lock(&pack_mutex);
-	if (hdr->size > BPF_PROG_PACK_SIZE) {
-		module_memfree(hdr);
-		goto out;
-	}
-
-	list_for_each_entry(tmp, &pack_list, list) {
-		if ((void *)hdr >= tmp->ptr && (tmp->ptr + BPF_PROG_PACK_SIZE) > (void *)hdr) {
-			pack = tmp;
-			break;
-		}
-	}
-
-	if (WARN_ONCE(!pack, "bpf_prog_pack bug\n"))
-		goto out;
-
-	nbits = BPF_PROG_SIZE_TO_NBITS(hdr->size);
-	pos = ((unsigned long)hdr - (unsigned long)pack->ptr) >> BPF_PROG_CHUNK_SHIFT;
-
-	WARN_ONCE(bpf_arch_text_invalidate(hdr, hdr->size),
-		  "bpf_prog_pack bug: missing bpf_arch_text_invalidate?\n");
-
-	bitmap_clear(pack->bitmap, pos, nbits);
-	if (bitmap_find_next_zero_area(pack->bitmap, BPF_PROG_CHUNK_COUNT, 0,
-				       BPF_PROG_CHUNK_COUNT, 0) == 0) {
-		list_del(&pack->list);
-		module_memfree(pack->ptr);
-		kfree(pack);
-	}
-out:
-	mutex_unlock(&pack_mutex);
-}
-
 static atomic_long_t bpf_jit_current;
 
 /* Can be overridden by an arch's JIT compiler if it has a custom,
@@ -1043,6 +905,9 @@ void bpf_jit_binary_free(struct bpf_binary_header *hdr)
 	bpf_jit_uncharge_modmem(size);
 }
 
+#define BPF_PROG_EXEC_ALIGN	64
+#define BPF_PROG_EXEC_MASK	(~(BPF_PROG_EXEC_ALIGN - 1))
+
 /* Allocate jit binary from bpf_prog_pack allocator.
  * Since the allocated memory is RO+X, the JIT engine cannot write directly
  * to the memory.
 * To solve this problem, a RW buffer is also allocated at
@@ -1065,11 +930,11 @@ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr,
 		  alignment > BPF_IMAGE_ALIGNMENT);
 
 	/* add 16 bytes for a random section of illegal instructions */
-	size = round_up(proglen + sizeof(*ro_header) + 16, BPF_PROG_CHUNK_SIZE);
+	size = round_up(proglen + sizeof(*ro_header) + 16, BPF_PROG_EXEC_ALIGN);
 	if (bpf_jit_charge_modmem(size))
 		return NULL;
 
-	ro_header = bpf_prog_pack_alloc(size, bpf_fill_ill_insns);
+	ro_header = vmalloc_exec(size, BPF_PROG_EXEC_ALIGN);
 	if (!ro_header) {
 		bpf_jit_uncharge_modmem(size);
 		return NULL;
 	}
@@ -1078,7 +943,7 @@ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr,
 	*rw_header = kvmalloc(size, GFP_KERNEL);
 	if (!*rw_header) {
 		bpf_arch_text_copy(&ro_header->size, &size, sizeof(size));
-		bpf_prog_pack_free(ro_header);
+		vfree_exec(ro_header);
 		bpf_jit_uncharge_modmem(size);
 		return NULL;
 	}
@@ -1088,7 +953,7 @@ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr,
 	(*rw_header)->size = size;
 
 	hole = min_t(unsigned int, size - (proglen + sizeof(*ro_header)),
-		     BPF_PROG_CHUNK_SIZE - sizeof(*ro_header));
+		     BPF_PROG_EXEC_ALIGN - sizeof(*ro_header));
 	start = (get_random_int() % hole) & ~(alignment - 1);
 
 	*image_ptr = &ro_header->image[start];
@@ -1109,7 +974,7 @@ int bpf_jit_binary_pack_finalize(struct bpf_prog *prog,
 	kvfree(rw_header);
 
 	if (IS_ERR(ptr)) {
-		bpf_prog_pack_free(ro_header);
+		vfree_exec(ro_header);
 		return PTR_ERR(ptr);
 	}
 	return 0;
@@ -1130,7 +995,7 @@ void bpf_jit_binary_pack_free(struct bpf_binary_header *ro_header,
 {
 	u32 size = ro_header->size;
 
-	bpf_prog_pack_free(ro_header);
+	vfree_exec(ro_header);
 	kvfree(rw_header);
 	bpf_jit_uncharge_modmem(size);
 }
@@ -1141,7 +1006,7 @@ bpf_jit_binary_pack_hdr(const struct bpf_prog *fp)
 	unsigned long real_start = (unsigned long)fp->bpf_func;
 	unsigned long addr;
 
-	addr = real_start & BPF_PROG_CHUNK_MASK;
+	addr = real_start & BPF_PROG_EXEC_MASK;
 
 	return (void *)addr;
 }

From patchwork Fri Oct 7 23:43:14 2022
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 13001545
From: Song Liu
Subject: [RFC v2 3/4] modules, x86: use vmalloc_exec for module core
Date: Fri, 7 Oct 2022 16:43:14 -0700
Message-ID: <20221007234315.2877365-4-song@kernel.org>
In-Reply-To: <20221007234315.2877365-1-song@kernel.org>
References: <20221007234315.2877365-1-song@kernel.org>
This is a prototype that allows modules to share 2MB text pages with
other modules and BPF programs. Current version only covers core_layout.
---
 arch/x86/Kconfig              |  1 +
 arch/x86/kernel/alternative.c | 30 ++++++++++++++++++++++++------
 arch/x86/kernel/module.c      |  1 +
 kernel/module/main.c          | 23 +++++++++++++----------
 kernel/module/strict_rwx.c    |  3 ---
 kernel/trace/ftrace.c         |  3 ++-
 6 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f9920f1341c8..0b1ea05a1da6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
+	select ARCH_WANTS_MODULES_DATA_IN_VMALLOC	if X86_64
 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 4f3204364caa..0e47a558c5bc 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -332,7 +332,13 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 		DUMP_BYTES(insn_buff, insn_buff_sz, "%px: final_insn: ", instr);
-		text_poke_early(instr, insn_buff, insn_buff_sz);
+		if (system_state < SYSTEM_RUNNING) {
+			text_poke_early(instr, insn_buff, insn_buff_sz);
+		} else {
+			mutex_lock(&text_mutex);
+			text_poke(instr, insn_buff, insn_buff_sz);
+			mutex_unlock(&text_mutex);
+		}
 next:
 		optimize_nops(instr, a->instrlen);
@@ -503,7 +509,13 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
 		optimize_nops(bytes, len);
 		DUMP_BYTES(((u8*)addr), len, "%px: orig: ", addr);
 		DUMP_BYTES(((u8*)bytes), len, "%px: repl: ", addr);
-		text_poke_early(addr, bytes, len);
+		if (system_state == SYSTEM_BOOTING) {
+			text_poke_early(addr, bytes, len);
+		} else {
+			mutex_lock(&text_mutex);
+			text_poke(addr, bytes, len);
+			mutex_unlock(&text_mutex);
+		}
 		}
 	}
 }
@@ -568,7 +580,13 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 		if (len == insn.length) {
 			DUMP_BYTES(((u8*)addr), len, "%px: orig: ", addr);
 			DUMP_BYTES(((u8*)bytes), len, "%px: repl: ", addr);
-			text_poke_early(addr, bytes, len);
+			if (unlikely(system_state == SYSTEM_BOOTING)) {
+				text_poke_early(addr, bytes, len);
+			} else {
+				mutex_lock(&text_mutex);
+				text_poke(addr, bytes, len);
+				mutex_unlock(&text_mutex);
+			}
 		}
 	}
 }
@@ -609,7 +627,7 @@ void __init_or_module noinline apply_ibt_endbr(s32 *start, s32 *end)
 		 */
 		DUMP_BYTES(((u8*)addr), 4, "%px: orig: ", addr);
 		DUMP_BYTES(((u8*)&poison), 4, "%px: repl: ", addr);
-		text_poke_early(addr, &poison, 4);
+		text_poke(addr, &poison, 4);
 	}
 }
@@ -791,7 +809,7 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 		/* Pad the rest with nops */
 		add_nops(insn_buff + used, p->len - used);
-		text_poke_early(p->instr, insn_buff, p->len);
+		text_poke(p->instr, insn_buff, p->len);
 	}
 }
 extern struct paravirt_patch_site __start_parainstructions[],
@@ -1699,7 +1717,7 @@ void __ref text_poke_bp(void *addr, const void *opcode, size_t len, const void *
 	struct text_poke_loc tp;

 	if (unlikely(system_state == SYSTEM_BOOTING)) {
-		text_poke_early(addr, opcode, len);
+		text_poke(addr, opcode, len);
 		return;
 	}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b1abf663417c..577e31647dc4 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -229,6 +229,7 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 	bool early = me->state == MODULE_STATE_UNFORMED;
 	void *(*write)(void *, const void *, size_t) = memcpy;

+	early = false;
 	if (!early) {
 		write = text_poke;
 		mutex_lock(&text_mutex);
diff --git a/kernel/module/main.c b/kernel/module/main.c
index a4e4d84b6f4e..b44806e31a56 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -53,6 +53,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "internal.h"
@@ -1203,7 +1204,7 @@ static void free_module(struct module *mod)
 	lockdep_free_key_range(mod->data_layout.base, mod->data_layout.size);

 	/* Finally, free the core (containing the module structure) */
-	module_memfree(mod->core_layout.base);
+	vfree_exec(mod->core_layout.base);
 #ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
 	vfree(mod->data_layout.base);
 #endif
@@ -1321,7 +1322,8 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
 			ksym = resolve_symbol_wait(mod, info, name);
 			/* Ok if resolved. */
 			if (ksym && !IS_ERR(ksym)) {
-				sym[i].st_value = kernel_symbol_value(ksym);
+				unsigned long val = kernel_symbol_value(ksym);
+				bpf_arch_text_copy(&sym[i].st_value, &val, sizeof(val));
 				break;
 			}
@@ -1342,7 +1344,8 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
 				secbase = (unsigned long)mod_percpu(mod);
 			else
 				secbase = info->sechdrs[sym[i].st_shndx].sh_addr;
-			sym[i].st_value += secbase;
+			secbase += sym[i].st_value;
+			bpf_arch_text_copy(&sym[i].st_value, &secbase, sizeof(secbase));
 			break;
 		}
 	}
@@ -2123,7 +2126,7 @@ static int move_module(struct module *mod, struct load_info *info)
 	void *ptr;

 	/* Do the allocs. */
-	ptr = module_alloc(mod->core_layout.size);
+	ptr = vmalloc_exec(mod->core_layout.size, PAGE_SIZE);
 	/*
 	 * The pointer to this block is stored in the module structure
 	 * which is inside the block. Just mark it as not being a
@@ -2133,7 +2136,7 @@ static int move_module(struct module *mod, struct load_info *info)
 	if (!ptr)
 		return -ENOMEM;

-	memset(ptr, 0, mod->core_layout.size);
+/*	memset(ptr, 0, mod->core_layout.size); */
 	mod->core_layout.base = ptr;

 	if (mod->init_layout.size) {
@@ -2146,7 +2149,7 @@ static int move_module(struct module *mod, struct load_info *info)
 		 */
 		kmemleak_ignore(ptr);
 		if (!ptr) {
-			module_memfree(mod->core_layout.base);
+			vfree_exec(mod->core_layout.base);
 			return -ENOMEM;
 		}
 		memset(ptr, 0, mod->init_layout.size);
@@ -2156,7 +2159,7 @@ static int move_module(struct module *mod, struct load_info *info)
 #ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
 	/* Do the allocs. */
-	ptr = vzalloc(mod->data_layout.size);
+	ptr = module_alloc(mod->data_layout.size);
 	/*
 	 * The pointer to this block is stored in the module structure
 	 * which is inside the block. Just mark it as not being a
@@ -2164,7 +2167,7 @@ static int move_module(struct module *mod, struct load_info *info)
 	 */
 	kmemleak_not_leak(ptr);
 	if (!ptr) {
-		module_memfree(mod->core_layout.base);
+		vfree_exec(mod->core_layout.base);
 		module_memfree(mod->init_layout.base);
 		return -ENOMEM;
 	}
@@ -2189,7 +2192,7 @@ static int move_module(struct module *mod, struct load_info *info)
 		dest = mod->core_layout.base + shdr->sh_entsize;

 		if (shdr->sh_type != SHT_NOBITS)
-			memcpy(dest, (void *)shdr->sh_addr, shdr->sh_size);
+			bpf_arch_text_copy(dest, (void *)shdr->sh_addr, shdr->sh_size);
 		/* Update sh_addr to point to copy in image. */
 		shdr->sh_addr = (unsigned long)dest;
 		pr_debug("\t0x%lx %s\n",
@@ -2345,7 +2348,7 @@ static void module_deallocate(struct module *mod, struct load_info *info)
 	percpu_modfree(mod);
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
-	module_memfree(mod->core_layout.base);
+	vfree_exec(mod->core_layout.base);
 #ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
 	vfree(mod->data_layout.base);
 #endif
diff --git a/kernel/module/strict_rwx.c b/kernel/module/strict_rwx.c
index 14fbea66f12f..d392eb7bf574 100644
--- a/kernel/module/strict_rwx.c
+++ b/kernel/module/strict_rwx.c
@@ -85,7 +85,6 @@ void module_enable_x(const struct module *mod)
 	    !PAGE_ALIGNED(mod->init_layout.base))
 		return;

-	frob_text(&mod->core_layout, set_memory_x);
 	frob_text(&mod->init_layout, set_memory_x);
 }
@@ -98,9 +97,7 @@ void module_enable_ro(const struct module *mod, bool after_init)
 		return;
 #endif

-	set_vm_flush_reset_perms(mod->core_layout.base);
 	set_vm_flush_reset_perms(mod->init_layout.base);
-	frob_text(&mod->core_layout, set_memory_ro);
 	frob_rodata(&mod->data_layout, set_memory_ro);
 	frob_text(&mod->init_layout, set_memory_ro);
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 439e2ab6905e..818418d5b853 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3142,6 +3142,7 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
 	if (mod)
 		rec_flags |= FTRACE_FL_DISABLED;

+	ftrace_arch_code_modify_prepare();
 	for (pg = new_pgs; pg; pg = pg->next) {
 		for (i = 0; i < pg->index; i++) {
@@ -3163,7 +3164,7 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
 			update_cnt++;
 		}
 	}
-
+	ftrace_arch_code_modify_post_process();
 	stop = ftrace_now(raw_smp_processor_id());
 	ftrace_update_time = stop - start;
 	ftrace_update_tot_cnt += update_cnt;

From patchwork Fri Oct 7 23:43:15 2022
From: Song Liu
Subject: [RFC v2 4/4] vmalloc_exec: share a huge page with kernel text
Date: Fri, 7 Oct 2022 16:43:15 -0700
Message-ID: <20221007234315.2877365-5-song@kernel.org>
In-Reply-To: <20221007234315.2877365-1-song@kernel.org>
References: <20221007234315.2877365-1-song@kernel.org>
On x86, the kernel allocates 2MB pages for kernel text, but only up to
round_down(_etext, 2MB); the remainder of the kernel text still sits on
4kB pages. With vmalloc_exec, we can instead allocate 2MB pages up to
round_up(_etext, 2MB) and hand the unused tail of the last huge page to
modules and BPF programs.

Here is an example:

[root@eth50-1 ~]# grep _etext /proc/kallsyms
ffffffff82202a08 T _etext
[root@eth50-1 ~]# grep bpf_prog_ /proc/kallsyms | tail -n 3
ffffffff8220f920 t bpf_prog_cc61a5364ac11d93_handle__sched_wakeup       [bpf]
ffffffff8220fa28 t bpf_prog_cc61a5364ac11d93_handle__sched_wakeup_new   [bpf]
ffffffff8220fad4 t bpf_prog_3bf73fa16f5e3d92_handle__sched_switch       [bpf]
[root@eth50-1 ~]# grep 0xffffffff82200000 /sys/kernel/debug/page_tables/kernel
0xffffffff82200000-0xffffffff82400000     2M     ro     PSE     x  pmd
[root@eth50-1 ~]# grep xfs_flush_inodes /proc/kallsyms
ffffffff822ba910 t xfs_flush_inodes_worker      [xfs]
ffffffff822bc580 t xfs_flush_inodes     [xfs]

ffffffff82200000-ffffffff82400000 is a single 2MB page serving kernel
text, the xfs module, and BPF programs at the same time.
Signed-off-by: Song Liu
---
 arch/x86/mm/init_64.c |  3 ++-
 mm/vmalloc.c          | 24 ++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0fe690ebc269..d94f196c541a 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1367,12 +1367,13 @@ int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask)
 int kernel_set_to_readonly;

+#define PMD_ALIGN(x) (((unsigned long)(x) + (PMD_SIZE - 1)) & PMD_MASK)
 void mark_rodata_ro(void)
 {
 	unsigned long start = PFN_ALIGN(_text);
 	unsigned long rodata_start = PFN_ALIGN(__start_rodata);
 	unsigned long end = (unsigned long)__end_rodata_hpage_align;
-	unsigned long text_end = PFN_ALIGN(_etext);
+	unsigned long text_end = PMD_ALIGN(_etext);
 	unsigned long rodata_end = PFN_ALIGN(__end_rodata);
 	unsigned long all_end;

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 9212ff96b871..41509bbec583 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -75,6 +75,9 @@ static const bool vmap_allow_huge = false;
 #define PMD_ALIGN(addr)		ALIGN(addr, PMD_SIZE)
 #define PMD_ALIGN_DOWN(addr)	ALIGN_DOWN(addr, PMD_SIZE)

+static struct vm_struct text_tail_vm;
+static struct vmap_area text_tail_va;
+
 bool is_vmalloc_addr(const void *x)
 {
 	unsigned long addr = (unsigned long)kasan_reset_tag(x);
@@ -637,6 +640,8 @@ int is_vmalloc_or_module_addr(const void *x)
 	unsigned long addr = (unsigned long)kasan_reset_tag(x);
 	if (addr >= MODULES_VADDR && addr < MODULES_END)
 		return 1;
+	if (addr >= text_tail_va.va_start && addr < text_tail_va.va_end)
+		return 1;
 #endif
 	return is_vmalloc_addr(x);
 }
@@ -2422,6 +2427,24 @@ static void vmap_init_free_space(void)
 	}
 }

+static void register_text_tail_vm(void)
+{
+	unsigned long start = PFN_ALIGN((unsigned long)_etext);
+	unsigned long end = PMD_ALIGN((unsigned long)_etext);
+	struct vmap_area *va;
+
+	va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+	if (WARN_ON_ONCE(!va))
+		return;
+	text_tail_vm.addr = (void *)start;
+	text_tail_vm.size = end - start;
+	text_tail_va.va_start = start;
+	text_tail_va.va_end = end;
+	text_tail_va.vm = &text_tail_vm;
+	memcpy(va, &text_tail_va, sizeof(*va));
+	insert_vmap_area_augment(va, NULL, &free_text_area_root, &free_text_area_list);
+}
+
 void __init vmalloc_init(void)
 {
 	struct vmap_area *va;
@@ -2432,6 +2455,7 @@ void __init vmalloc_init(void)
 	 * Create the cache for vmap_area objects.
 	 */
 	vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
+	register_text_tail_vm();
 	for_each_possible_cpu(i) {
 		struct vmap_block_queue *vbq;