From patchwork Mon Aug 26 06:55:31 2024
X-Patchwork-Submitter: Mike Rapoport <rppt@kernel.org>
X-Patchwork-Id: 13777179
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Andreas Larsson, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Brian Cain, Catalin Marinas, Christoph Hellwig, Christophe Leroy,
	Dave Hansen, Dinh Nguyen, Geert Uytterhoeven, Guo Ren, Helge Deller,
	Huacai Chen, Ingo Molnar, Johannes Berg, John Paul Adrian Glaubitz,
	Kent Overstreet, "Liam R. Howlett", Luis Chamberlain, Mark Rutland,
	Masami Hiramatsu, Matt Turner, Max Filippov, Michael Ellerman,
	Michal Simek, Mike Rapoport, Oleg Nesterov, Palmer Dabbelt,
	Peter Zijlstra, Richard Weinberger, Russell King, Song Liu,
	Stafford Horne, Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner,
	Uladzislau Rezki, Vineet Gupta, Will Deacon, bpf@vger.kernel.org,
	linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linux-mm@kvack.org, linux-modules@vger.kernel.org,
	linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-sh@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-trace-kernel@vger.kernel.org,
	linux-um@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	loongarch@lists.linux.dev, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v2 7/8] execmem: add support for cache of large ROX pages
Date: Mon, 26 Aug 2024 09:55:31 +0300
Message-ID: <20240826065532.2618273-8-rppt@kernel.org>
In-Reply-To: <20240826065532.2618273-1-rppt@kernel.org>
References: <20240826065532.2618273-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

Using large pages
to map text areas reduces iTLB pressure and improves performance.

Extend execmem_alloc() with an ability to use PMD_SIZE'ed pages with ROX
permissions as a cache for smaller allocations.

To populate the cache, a writable large page is allocated from vmalloc
with VM_ALLOW_HUGE_VMAP, filled with invalid instructions and then
remapped as ROX.

Portions of that large page are handed out to execmem_alloc() callers
without any changes to the permissions.

When the memory is freed with execmem_free(), it is invalidated again so
that it won't contain stale instructions.

The cache is enabled when an architecture sets the EXECMEM_ROX_CACHE
flag in the definition of an execmem_range.

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 include/linux/execmem.h |   2 +
 mm/execmem.c            | 290 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 287 insertions(+), 5 deletions(-)

diff --git a/include/linux/execmem.h b/include/linux/execmem.h
index dfdf19f8a5e8..7436aa547818 100644
--- a/include/linux/execmem.h
+++ b/include/linux/execmem.h
@@ -77,12 +77,14 @@ struct execmem_range {
 
 /**
  * struct execmem_info - architecture parameters for code allocations
+ * @fill_trapping_insns: set memory to contain instructions that will trap
  * @ranges: array of parameter sets defining architecture specific
  * parameters for executable memory allocations. The ranges that are not
  * explicitly initialized by an architecture use parameters defined for
  * @EXECMEM_DEFAULT.
  */
 struct execmem_info {
+	void (*fill_trapping_insns)(void *ptr, size_t size, bool writable);
 	struct execmem_range	ranges[EXECMEM_TYPE_MAX];
 };
 
diff --git a/mm/execmem.c b/mm/execmem.c
index 0f6691e9ffe6..3bde0863c50a 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -7,28 +7,88 @@
  */
 
 #include <linux/mm.h>
+#include <linux/mutex.h>
 #include <linux/vmalloc.h>
+#include <linux/maple_tree.h>
 #include <linux/execmem.h>
 #include <linux/moduleloader.h>
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
 static struct execmem_info *execmem_info __ro_after_init;
 static struct execmem_info default_execmem_info __ro_after_init;
 
-static void *__execmem_alloc(struct execmem_range *range, size_t size)
+#ifdef CONFIG_MMU
+struct execmem_cache {
+	struct mutex mutex;
+	struct maple_tree busy_areas;
+	struct maple_tree free_areas;
+};
+
+static struct execmem_cache execmem_cache = {
+	.mutex = __MUTEX_INITIALIZER(execmem_cache.mutex),
+	.busy_areas = MTREE_INIT_EXT(busy_areas, MT_FLAGS_LOCK_EXTERN,
+				     execmem_cache.mutex),
+	.free_areas = MTREE_INIT_EXT(free_areas, MT_FLAGS_LOCK_EXTERN,
+				     execmem_cache.mutex),
+};
+
+static void execmem_cache_clean(struct work_struct *work)
+{
+	struct maple_tree *free_areas = &execmem_cache.free_areas;
+	struct mutex *mutex = &execmem_cache.mutex;
+	MA_STATE(mas, free_areas, 0, ULONG_MAX);
+	void *area;
+
+	mutex_lock(mutex);
+	mas_for_each(&mas, area, ULONG_MAX) {
+		size_t size;
+
+		if (!xa_is_value(area))
+			continue;
+
+		size = xa_to_value(area);
+
+		if (IS_ALIGNED(size, PMD_SIZE) &&
+		    IS_ALIGNED(mas.index, PMD_SIZE)) {
+			void *ptr = (void *)mas.index;
+
+			mas_erase(&mas);
+			vfree(ptr);
+		}
+	}
+	mutex_unlock(mutex);
+}
+
+static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
+
+static void execmem_fill_trapping_insns(void *ptr, size_t size, bool writable)
+{
+	if (execmem_info->fill_trapping_insns)
+		execmem_info->fill_trapping_insns(ptr, size, writable);
+	else
+		memset(ptr, 0, size);
+}
+
+static void *execmem_vmalloc(struct execmem_range *range, size_t size,
+			     pgprot_t pgprot, unsigned long vm_flags)
 {
 	bool kasan = range->flags & EXECMEM_KASAN_SHADOW;
-	unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
 	gfp_t gfp_flags = GFP_KERNEL | __GFP_NOWARN;
+	unsigned int align = range->alignment;
 	unsigned long start = range->start;
 	unsigned long end = range->end;
-	unsigned int align = range->alignment;
-	pgprot_t pgprot = range->pgprot;
 	void *p;
 
 	if (kasan)
 		vm_flags |= VM_DEFER_KMEMLEAK;
 
+	if (vm_flags & VM_ALLOW_HUGE_VMAP)
+		align = PMD_SIZE;
+
 	p = __vmalloc_node_range(size, align, start, end, gfp_flags, pgprot,
 				 vm_flags, NUMA_NO_NODE,
 				 __builtin_return_address(0));
@@ -50,8 +110,226 @@ static void *__execmem_alloc(struct execmem_range *range, size_t size)
 		return NULL;
 	}
 
+	return p;
+}
+
+static int execmem_cache_add(void *ptr, size_t size)
+{
+	struct maple_tree *free_areas = &execmem_cache.free_areas;
+	struct mutex *mutex = &execmem_cache.mutex;
+	unsigned long addr = (unsigned long)ptr;
+	MA_STATE(mas, free_areas, addr - 1, addr + 1);
+	unsigned long lower, lower_size = 0;
+	unsigned long upper, upper_size = 0;
+	unsigned long area_size;
+	void *area = NULL;
+	int err;
+
+	lower = addr;
+	upper = addr + size - 1;
+
+	mutex_lock(mutex);
+	area = mas_walk(&mas);
+	if (area && xa_is_value(area) && mas.last == addr - 1) {
+		lower = mas.index;
+		lower_size = xa_to_value(area);
+	}
+
+	area = mas_next(&mas, ULONG_MAX);
+	if (area && xa_is_value(area) && mas.index == addr + size) {
+		upper = mas.last;
+		upper_size = xa_to_value(area);
+	}
+
+	mas_set_range(&mas, lower, upper);
+	area_size = lower_size + upper_size + size;
+	err = mas_store_gfp(&mas, xa_mk_value(area_size), GFP_KERNEL);
+	mutex_unlock(mutex);
+	if (err)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static bool within_range(struct execmem_range *range, struct ma_state *mas,
+			 size_t size)
+{
+	unsigned long addr = mas->index;
+
+	if (addr >= range->start && addr + size < range->end)
+		return true;
+
+	if (range->fallback_start &&
+	    addr >= range->fallback_start && addr + size < range->fallback_end)
+		return true;
+
+	return false;
+}
+
+static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
+{
+	struct maple_tree *free_areas = &execmem_cache.free_areas;
+	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
+	MA_STATE(mas_free, free_areas, 0, ULONG_MAX);
+	MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX);
+	struct mutex *mutex = &execmem_cache.mutex;
+	unsigned long addr, last, area_size = 0;
+	void *area, *ptr = NULL;
+	int err;
+
+	mutex_lock(mutex);
+	mas_for_each(&mas_free, area, ULONG_MAX) {
+		area_size = xa_to_value(area);
+
+		if (area_size >= size && within_range(range, &mas_free, size))
+			break;
+	}
+
+	if (area_size < size)
+		goto out_unlock;
+
+	addr = mas_free.index;
+	last = mas_free.last;
+
+	/* insert allocated size to busy_areas at range [addr, addr + size) */
+	mas_set_range(&mas_busy, addr, addr + size - 1);
+	err = mas_store_gfp(&mas_busy, xa_mk_value(size), GFP_KERNEL);
+	if (err)
+		goto out_unlock;
+
+	mas_erase(&mas_free);
+	if (area_size > size) {
+		/*
+		 * re-insert remaining free size to free_areas at range
+		 * [addr + size, last]
+		 */
+		mas_set_range(&mas_free, addr + size, last);
+		size = area_size - size;
+		err = mas_store_gfp(&mas_free, xa_mk_value(size), GFP_KERNEL);
+		if (err) {
+			mas_erase(&mas_busy);
+			goto out_unlock;
+		}
+	}
+	ptr = (void *)addr;
+
+out_unlock:
+	mutex_unlock(mutex);
+	return ptr;
+}
+
+static int execmem_cache_populate(struct execmem_range *range, size_t size)
+{
+	unsigned long vm_flags = VM_FLUSH_RESET_PERMS | VM_ALLOW_HUGE_VMAP;
+	unsigned long start, end;
+	struct vm_struct *vm;
+	size_t alloc_size;
+	int err = -ENOMEM;
+	void *p;
+
+	alloc_size = round_up(size, PMD_SIZE);
+	p = execmem_vmalloc(range, alloc_size, PAGE_KERNEL, vm_flags);
+	if (!p)
+		return err;
+
+	vm = find_vm_area(p);
+	if (!vm)
+		goto err_free_mem;
+
+	/* fill memory with instructions that will trap */
+	execmem_fill_trapping_insns(p, alloc_size, /* writable = */ true);
+
+	start = (unsigned long)p;
+	end = start + alloc_size;
+
+	vunmap_range_noflush(start, end);
+	flush_tlb_kernel_range(start, end);
+
+	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
+				       PMD_SHIFT);
+	if (err)
+		goto err_free_mem;
+
+	err = execmem_cache_add(p, alloc_size);
+	if (err)
+		goto err_free_mem;
+
+	return 0;
+
+err_free_mem:
+	vfree(p);
+	return err;
+}
+
+static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
+{
+	void *p;
+	int err;
+
+	p = __execmem_cache_alloc(range, size);
+	if (p)
+		return p;
+
+	err = execmem_cache_populate(range, size);
+	if (err)
+		return NULL;
+
+	return __execmem_cache_alloc(range, size);
+}
+
+static bool execmem_cache_free(void *ptr)
+{
+	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
+	struct mutex *mutex = &execmem_cache.mutex;
+	unsigned long addr = (unsigned long)ptr;
+	MA_STATE(mas, busy_areas, addr, addr);
+	size_t size;
+	void *area;
+
+	mutex_lock(mutex);
+	area = mas_walk(&mas);
+	if (!area) {
+		mutex_unlock(mutex);
+		return false;
+	}
+	size = xa_to_value(area);
+	mas_erase(&mas);
+	mutex_unlock(mutex);
+
+	execmem_fill_trapping_insns(ptr, size, /* writable = */ false);
+
+	execmem_cache_add(ptr, size);
+
+	schedule_work(&execmem_cache_clean_work);
+
+	return true;
+}
+
+static void *__execmem_alloc(struct execmem_range *range, size_t size)
+{
+	bool use_cache = range->flags & EXECMEM_ROX_CACHE;
+	unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
+	pgprot_t pgprot = range->pgprot;
+	void *p;
+
+	if (use_cache)
+		p = execmem_cache_alloc(range, size);
+	else
+		p = execmem_vmalloc(range, size, pgprot, vm_flags);
+
 	return kasan_reset_tag(p);
 }
+#else
+static void *__execmem_alloc(struct execmem_range *range, size_t size)
+{
+	return vmalloc(size);
+}
+
+static bool execmem_cache_free(void *ptr)
+{
+	return false;
+}
+#endif
 
 void *execmem_alloc(enum execmem_type type, size_t size)
 {
@@ -67,7 +345,9 @@ void execmem_free(void *ptr)
 	 * supported by vmalloc.
 	 */
 	WARN_ON(in_interrupt());
-	vfree(ptr);
+
+	if (!execmem_cache_free(ptr))
+		vfree(ptr);
 }
 
 void *execmem_update_copy(void *dst, const void *src, size_t size)
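
A note on the bookkeeping above: free_areas maps each free range
[index, last] to its length, stored as an xarray-style value entry.
execmem_cache_add() looks up the neighbours that end at addr - 1 and
start at addr + size and merges them with the freed range into a single
entry. So when the chunks on both sides of a just-freed chunk are
already free, the three adjacent ranges collapse into one entry; once a
free entry again covers a whole PMD-aligned, PMD_SIZE'ed area,
execmem_cache_clean(), kicked from execmem_cache_free() via
schedule_work(), erases it and hands the huge page back to vmalloc with
vfree().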
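
To make the architecture-facing contract concrete, here is a rough
sketch of what opting in could look like. It is not part of this patch;
the constants and helpers are assumptions loosely modelled on x86, where
INT3_INSN_OPCODE is the 0xcc breakpoint byte and text_poke_set() writes
through a temporary mapping. Other architectures would substitute their
own trap instruction and poking primitive.

static void fill_trapping_insns(void *ptr, size_t size, bool writable)
{
	/* a still-writable page can be memset(); a ROX one must be poked */
	if (writable)
		memset(ptr, INT3_INSN_OPCODE, size);
	else
		text_poke_set(ptr, INT3_INSN_OPCODE, size);
}

static struct execmem_info execmem_info __ro_after_init;

struct execmem_info __init *execmem_arch_setup(void)
{
	execmem_info = (struct execmem_info){
		.fill_trapping_insns	= fill_trapping_insns,
		.ranges = {
			[EXECMEM_DEFAULT] = {
				.start		= MODULES_VADDR,
				.end		= MODULES_END,
				/* ROX pgprot lets the cache hand out chunks as-is */
				.pgprot		= PAGE_KERNEL_ROX,
				.alignment	= MODULE_ALIGN,
				.flags		= EXECMEM_KASAN_SHADOW |
						  EXECMEM_ROX_CACHE,
			},
		},
	};

	return &execmem_info;
}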
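
For callers the API is unchanged, but memory coming from a range with
EXECMEM_ROX_CACHE is already read-only-executable, so code has to be
installed with execmem_update_copy() rather than a plain memcpy(). A
hypothetical caller flow (insns and len are placeholders, and treating a
NULL return from execmem_update_copy() as failure is a defensive
assumption):

static void *install_code(const void *insns, size_t len)
{
	void *p = execmem_alloc(EXECMEM_MODULE_TEXT, len);

	if (!p)
		return NULL;

	/* the chunk is already ROX, so it must be written via text poking */
	if (!execmem_update_copy(p, insns, len)) {
		execmem_free(p);	/* re-fills the chunk with trapping insns */
		return NULL;
	}

	return p;
}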