From patchwork Wed Feb 23 05:21:47 2022
X-Patchwork-Submitter: Junaid Shahid
X-Patchwork-Id: 12756371
Date: Tue, 22 Feb 2022 21:21:47 -0800
In-Reply-To: <20220223052223.1202152-1-junaids@google.com>
Message-Id: <20220223052223.1202152-12-junaids@google.com>
References: <20220223052223.1202152-1-junaids@google.com>
Subject: [RFC PATCH 11/47] mm: asi: Global non-sensitive vmalloc/vmap support
From: Junaid Shahid
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, jmattson@google.com, pjt@google.com, oweisse@google.com, alexandre.chartre@oracle.com, rppt@linux.ibm.com, dave.hansen@linux.intel.com, peterz@infradead.org, tglx@linutronix.de, luto@kernel.org, linux-mm@kvack.org
A new flag, VM_GLOBAL_NONSENSITIVE, is added to designate globally non-sensitive vmalloc/vmap areas. When using the __vmalloc / __vmalloc_node APIs, if the corresponding GFP flag is specified, the VM flag is automatically added. When using the __vmalloc_node_range API, either flag can be specified independently. The VM flag will only map the vmalloc area as non-sensitive, while the GFP flag will only map the underlying direct map area as non-sensitive.

When using the __vmalloc_node_range API, VMALLOC_GLOBAL_NONSENSITIVE_START/END should be used instead of VMALLOC_START/END. This keeps these mappings separate from locally non-sensitive vmalloc areas, which will be added later. Areas outside of the standard vmalloc range can specify the range as before.
Signed-off-by: Junaid Shahid
---
 arch/x86/include/asm/pgtable_64_types.h |  5 +++
 arch/x86/mm/asi.c                       |  3 +-
 include/asm-generic/asi.h               |  3 ++
 include/linux/vmalloc.h                 |  6 +++
 mm/vmalloc.c                            | 53 ++++++++++++++++++++++---
 5 files changed, 64 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 91ac10654570..0fc380ba25b8 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -141,6 +141,11 @@ extern unsigned int ptrs_per_p4d;
 
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
 
+#ifdef CONFIG_ADDRESS_SPACE_ISOLATION
+#define VMALLOC_GLOBAL_NONSENSITIVE_START	VMALLOC_START
+#define VMALLOC_GLOBAL_NONSENSITIVE_END		VMALLOC_END
+#endif
+
 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */
 #ifndef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index d381ae573af9..71348399baf1 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -198,7 +198,8 @@ static int __init asi_global_init(void)
 				  "ASI Global Non-sensitive direct map");
 	preallocate_toplevel_pgtbls(asi_global_nonsensitive_pgd,
-				    VMALLOC_START, VMALLOC_END,
+				    VMALLOC_GLOBAL_NONSENSITIVE_START,
+				    VMALLOC_GLOBAL_NONSENSITIVE_END,
 				    "ASI Global Non-sensitive vmalloc");
 
 	return 0;
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index 012691e29895..f918cd052722 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -14,6 +14,9 @@
 
 #define ASI_GLOBAL_NONSENSITIVE	NULL
 
+#define VMALLOC_GLOBAL_NONSENSITIVE_START	VMALLOC_START
+#define VMALLOC_GLOBAL_NONSENSITIVE_END		VMALLOC_END
+
 #ifndef _ASSEMBLY_
 
 struct asi_hooks {};
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 6e022cc712e6..c7c66decda3e 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -39,6 +39,12 @@ struct notifier_block;		/* in notifier.h */
  * determine which allocations need the module shadow freed.
  */
 
+#ifdef CONFIG_ADDRESS_SPACE_ISOLATION
+#define VM_GLOBAL_NONSENSITIVE	0x00000800	/* Similar to __GFP_GLOBAL_NONSENSITIVE */
+#else
+#define VM_GLOBAL_NONSENSITIVE	0
+#endif
+
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f2ef719f1cba..ba588a37ee75 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2393,6 +2393,33 @@ void __init vmalloc_init(void)
 	vmap_initialized = true;
 }
 
+static int asi_map_vm_area(struct vm_struct *area)
+{
+	if (!static_asi_enabled())
+		return 0;
+
+	if (area->flags & VM_GLOBAL_NONSENSITIVE)
+		return asi_map(ASI_GLOBAL_NONSENSITIVE, area->addr,
+			       get_vm_area_size(area));
+
+	return 0;
+}
+
+static void asi_unmap_vm_area(struct vm_struct *area)
+{
+	if (!static_asi_enabled())
+		return;
+
+	/*
+	 * TODO: The TLB flush here could potentially be avoided in
+	 * the case when the existing flush from try_purge_vmap_area_lazy()
+	 * and/or vm_unmap_aliases() happens non-lazily.
+	 */
+	if (area->flags & VM_GLOBAL_NONSENSITIVE)
+		asi_unmap(ASI_GLOBAL_NONSENSITIVE, area->addr,
+			  get_vm_area_size(area), true);
+}
+
 static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 	struct vmap_area *va, unsigned long flags, const void *caller)
 {
@@ -2570,6 +2597,7 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 	int flush_dmap = 0;
 	int i;
 
+	asi_unmap_vm_area(area);
 	remove_vm_area(area->addr);
 
 	/* If this is not VM_FLUSH_RESET_PERMS memory, no need for the below. */
@@ -2787,16 +2815,20 @@ void *vmap(struct page **pages, unsigned int count,
 
 	addr = (unsigned long)area->addr;
 	if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
-				pages, PAGE_SHIFT) < 0) {
-		vunmap(area->addr);
-		return NULL;
-	}
+				pages, PAGE_SHIFT) < 0)
+		goto err;
+
+	if (asi_map_vm_area(area))
+		goto err;
 
 	if (flags & VM_MAP_PUT_PAGES) {
 		area->pages = pages;
 		area->nr_pages = count;
 	}
 	return area->addr;
+err:
+	vunmap(area->addr);
+	return NULL;
 }
 EXPORT_SYMBOL(vmap);
 
@@ -2991,6 +3023,9 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
+	if (asi_map_vm_area(area))
+		goto fail;
+
 	return area->addr;
 
 fail:
@@ -3038,6 +3073,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (WARN_ON_ONCE(!size))
 		return NULL;
 
+	if (static_asi_enabled() && (vm_flags & VM_GLOBAL_NONSENSITIVE))
+		gfp_mask |= __GFP_ZERO;
+
 	if ((size >> PAGE_SHIFT) > totalram_pages()) {
 		warn_alloc(gfp_mask, NULL,
 			   "vmalloc error: size %lu, exceeds total pages",
@@ -3127,8 +3165,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 void *__vmalloc_node(unsigned long size, unsigned long align,
 			    gfp_t gfp_mask, int node, const void *caller)
 {
+	ulong vm_flags = 0;
+
+	if (static_asi_enabled() && (gfp_mask & __GFP_GLOBAL_NONSENSITIVE))
+		vm_flags |= VM_GLOBAL_NONSENSITIVE;
+
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
-			gfp_mask, PAGE_KERNEL, 0, node, caller);
+			gfp_mask, PAGE_KERNEL, vm_flags, node, caller);
 }
 /*
  * This is only for performance analysis of vmalloc and stress purpose.