From patchwork Fri Jan 10 18:40:53 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935256
Date: Fri, 10 Jan 2025 18:40:53 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-27-8419288bc805@google.com>
Subject: [PATCH RFC v2 27/29] mm: asi: Add some mitigations on address space transitions
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson,
 Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon,
 Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven,
 Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn,
 Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley", Helge Deller,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao,
 Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
 Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
 "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov,
 Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
 Mel Gorman, Valentin Schneider, Uladzislau Rezki, Christoph Hellwig,
 Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport,
 Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin,
 Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou, Tejun Heo,
 Christoph Lameter, Sean Christopherson, Paolo Bonzini, Ard Biesheuvel,
 Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman

Here ASI actually starts becoming a real exploit mitigation. On CPUs with
L1TF, flush L1D when the ASI data taints say so. On all CPUs, do some
general branch predictor clearing whenever the control taints say so.

This policy is very much just a starting point for discussion.
Primarily, the policy is a vague gesture at the fact that there is leeway in
how ASI is used: it can target CPU-specific issues (as with L1TF here), or it
can serve as a fairly broad mitigation (asi_maybe_flush_control() mitigates
several known Spectre-style attacks, and very likely some unknown ones too).

Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/nospec-branch.h |  2 ++
 arch/x86/kvm/vmx/vmx.c               |  1 +
 arch/x86/lib/l1tf.c                  |  2 ++
 arch/x86/lib/retpoline.S             | 10 ++++++++++
 arch/x86/mm/asi.c                    | 29 +++++++++++++++++++++--------
 5 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 96b410b1d4e841eb02f53a4691ee794ceee4ad2c..4582fb1fb42f6fd226534012d969ed13085e943a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -614,6 +614,8 @@ static __always_inline void mds_idle_clear_cpu_buffers(void)
 		mds_clear_cpu_buffers();
 }
 
+extern void fill_return_buffer(void);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b1a02f27b3abce0ef6ac448b66bef2c653a52eef..a532783caaea97291cd92a2e2cac617f74f76c7e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6635,6 +6635,7 @@ int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	return ret;
 }
 
+/* Must be reentrant, for use by vmx_post_asi_enter. */
 static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 {
 	/*
diff --git a/arch/x86/lib/l1tf.c b/arch/x86/lib/l1tf.c
index c474f18ae331c8dfa7a029c457dd3cf75bebf808..ffe1c3d0ef43ff8f1781f2e446aed041f4ce3179 100644
--- a/arch/x86/lib/l1tf.c
+++ b/arch/x86/lib/l1tf.c
@@ -46,6 +46,8 @@ EXPORT_SYMBOL(l1tf_flush_setup);
  * - may or may not work on other CPUs.
  *
  * Don't call unless l1tf_flush_setup() has returned successfully.
+ *
+ * Must be reentrant, for use by ASI.
  */
 noinstr void l1tf_flush(void)
 {
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 391059b2c6fbc4a571f0582c7c4654147a930cef..6d126fff6bf839889086fe21464d8af07316d7e5 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -396,3 +396,13 @@ SYM_CODE_END(__x86_return_thunk)
 EXPORT_SYMBOL(__x86_return_thunk)
 
 #endif /* CONFIG_MITIGATION_RETHUNK */
+
+.pushsection .noinstr.text, "ax"
+SYM_CODE_START(fill_return_buffer)
+	UNWIND_HINT_FUNC
+	ENDBR
+	__FILL_RETURN_BUFFER(%_ASM_AX,RSB_CLEAR_LOOPS)
+	RET
+SYM_CODE_END(fill_return_buffer)
+__EXPORT_THUNK(fill_return_buffer)
+.popsection
diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index 1e9dc568e79e8686a4dbf47f765f2c2535d025ec..f10f6614b26148e5ba423d8a44f640674573ee40 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -10,6 +10,7 @@
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -38,6 +39,8 @@ struct asi __asi_global_nonsensitive = {
 	.mm = &init_mm,
 };
 
+static bool do_l1tf_flush __ro_after_init;
+
 static inline bool asi_class_id_valid(enum asi_class_id class_id)
 {
 	return class_id >= 0 && class_id < ASI_MAX_NUM_CLASSES;
@@ -361,6 +364,15 @@ static int __init asi_global_init(void)
 	asi_clone_pgd(asi_global_nonsensitive_pgd, init_mm.pgd,
 		      VMEMMAP_START + (1UL << PGDIR_SHIFT));
 
+	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
+		int err = l1tf_flush_setup();
+
+		if (err)
+			pr_warn("Failed to setup L1TF flushing for ASI (%pe)", ERR_PTR(err));
+		else
+			do_l1tf_flush = true;
+	}
+
 #ifdef CONFIG_PM_SLEEP
 	register_syscore_ops(&asi_syscore_ops);
 #endif
@@ -512,10 +524,12 @@ static __always_inline void maybe_flush_control(struct asi *next_asi)
 	if (!taints)
 		return;
 
-	/*
-	 * This is where we'll do the actual dirty work of clearing uarch state.
-	 * For now we just pretend, clear the taints.
-	 */
+	/* Clear normal indirect branch predictions, if we haven't */
+	if (cpu_feature_enabled(X86_FEATURE_IBPB))
+		__wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
+
+	fill_return_buffer();
+
 	this_cpu_and(asi_taints, ~ASI_TAINTS_CONTROL_MASK);
 }
 
@@ -536,10 +550,9 @@ static __always_inline void maybe_flush_data(struct asi *next_asi)
 	if (!taints)
 		return;
 
-	/*
-	 * This is where we'll do the actual dirty work of clearing uarch state.
-	 * For now we just pretend, clear the taints.
-	 */
+	if (do_l1tf_flush)
+		l1tf_flush();
+
 	this_cpu_and(asi_taints, ~ASI_TAINTS_DATA_MASK);
 }
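
As a rough, standalone model of the taint bookkeeping the two asi.c hunks
above rely on (not kernel code: the single global taint word and the stubbed
flush functions are hypothetical stand-ins for the per-CPU asi_taints and the
IBPB/RSB/L1D operations wired up in the patch), the flush-then-clear behaviour
looks like this:

#include <stdio.h>

#define TAINT_CONTROL (1u << 0)  /* untrusted code may have trained predictors */
#define TAINT_DATA    (1u << 1)  /* sensitive data may still sit in e.g. the L1D */

static unsigned int taints;      /* stand-in for the per-CPU asi_taints word */

static void flush_predictors(void) { puts("IBPB + RSB fill"); } /* stub */
static void flush_l1d(void)        { puts("L1D flush"); }       /* stub */

/* Mirrors maybe_flush_control(): act only if a control taint is pending, then clear it. */
static void maybe_flush_control(void)
{
	if (!(taints & TAINT_CONTROL))
		return;
	flush_predictors();
	taints &= ~TAINT_CONTROL;
}

/* Mirrors maybe_flush_data(): act only if a data taint is pending, then clear it. */
static void maybe_flush_data(void)
{
	if (!(taints & TAINT_DATA))
		return;
	flush_l1d();
	taints &= ~TAINT_DATA;
}

int main(void)
{
	taints |= TAINT_CONTROL | TAINT_DATA; /* e.g. set while running untrusted code */
	maybe_flush_control();                /* flushes once on the next transition... */
	maybe_flush_control();                /* ...then is a no-op until re-tainted */
	maybe_flush_data();
	return 0;
}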