From patchwork Fri Jul 12 17:00:44 2024
Date: Fri, 12 Jul 2024 17:00:44 +0000
In-Reply-To: <20240712-asi-rfc-24-v1-0-144b319a40d8@google.com>
References: <20240712-asi-rfc-24-v1-0-144b319a40d8@google.com>
X-Mailer: b4 0.14-dev
Message-ID: <20240712-asi-rfc-24-v1-26-144b319a40d8@google.com>
Subject: [PATCH 26/26] KVM: x86: asi: Add some mitigations on address space transitions
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Sean Christopherson,
 Paolo Bonzini, Alexandre Chartre, Liran Alon, Jan Setje-Eilers,
 Catalin Marinas, Will Deacon, Mark Rutland, Andrew Morton, Mel Gorman,
 Lorenzo Stoakes, David Hildenbrand, Vlastimil Babka, Michal Hocko,
 Khalid Aziz, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
 Steven Rostedt, Valentin Schneider, Paul Turner, Reiji Watanabe,
 Junaid Shahid, Ofir Weisse, Yosry Ahmed, Patrick Bellasi, KP Singh,
 Alexandra Sandulescu, Matteo Rizzo, Jann Horn
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kvm@vger.kernel.org, Brendan Jackman

Here we start actually turning ASI into a real exploit mitigation. On all
CPUs we attempt to obliterate any indirect branch predictor training
before mapping in any secrets. We can also flush side channels on the
inverse transition: in this iteration we flush L1D, but only on CPUs
affected by L1TF.

The rationale for this is: L1TF seems to have been a relative outlier in
terms of its impact, and the mitigation is obviously rather devastating.
On the other hand, Spectre-type attacks are continuously being found, and
it's quite reasonable to assume that existing systems are vulnerable to
variations that are not currently mitigated by bespoke techniques like
Safe RET.

This is clearly an incomplete policy: for example, it probably makes
sense to perform MDS mitigations in post_asi_enter, and there is a wide
range of alternative postures with regard to per-platform vs blanket
mitigation configurations.
This also ought to be integrated more intelligently with bugs.c; that
will probably require a fair bit of discussion, so it might warrant a
patchset all to itself. For now though, this ought to provide an example
of the kind of thing we might do with ASI.

The changes to the inline asm for L1D flushes are to avoid duplicate
jump labels breaking the build in the case that vmx_l1d_flush() gets
inlined at multiple locations (as it seems to do in my builds). A
minimal standalone illustration of the %= local-label trick is appended
after the diff.

Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/kvm_host.h      |  2 +
 arch/x86/include/asm/nospec-branch.h |  2 +
 arch/x86/kvm/vmx/vmx.c               | 88 ++++++++++++++++++++++++------------
 arch/x86/kvm/x86.c                   | 33 +++++++++++++-
 arch/x86/lib/retpoline.S             |  7 +++
 5 files changed, 101 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6c3326cb8273c..8b7226dd2e027 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1840,6 +1840,8 @@ struct kvm_x86_init_ops {
 
 	struct kvm_x86_ops *runtime_ops;
 	struct kvm_pmu_ops *pmu_ops;
+
+	void (*post_asi_enter)(void);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ff5f1ecc7d1e6..9502bdafc1edd 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -605,6 +605,8 @@ static __always_inline void mds_idle_clear_cpu_buffers(void)
 		mds_clear_cpu_buffers();
 }
 
+extern void fill_return_buffer(void);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1105d666a8ade..6efcbddf6ce27 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6629,37 +6629,18 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
  * is not exactly LRU. This could be sized at runtime via topology
  * information but as all relevant affected CPUs have 32KiB L1D cache size
  * there is no point in doing so.
+ *
+ * Must be reentrant, for use by vmx_post_asi_enter.
  */
-static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
+static inline_or_noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 {
 	int size = PAGE_SIZE << L1D_CACHE_ORDER;
 
 	/*
-	 * This code is only executed when the flush mode is 'cond' or
-	 * 'always'
+	 * In theory we lose some of these increments to reentrancy under ASI.
+	 * We just tolerate imprecise stats rather than deal with synchronizing.
+	 * Anyway in practice on 64 bit it's gonna be a single instruction.
 	 */
-	if (static_branch_likely(&vmx_l1d_flush_cond)) {
-		bool flush_l1d;
-
-		/*
-		 * Clear the per-vcpu flush bit, it gets set again
-		 * either from vcpu_run() or from one of the unsafe
-		 * VMEXIT handlers.
-		 */
-		flush_l1d = vcpu->arch.l1tf_flush_l1d;
-		vcpu->arch.l1tf_flush_l1d = false;
-
-		/*
-		 * Clear the per-cpu flush bit, it gets set again from
-		 * the interrupt handlers.
-		 */
-		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
-		kvm_clear_cpu_l1tf_flush_l1d();
-
-		if (!flush_l1d)
-			return;
-	}
-
 	vcpu->stat.l1d_flush++;
 
 	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
@@ -6670,26 +6651,57 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 	asm volatile(
 		/* First ensure the pages are in the TLB */
 		"xorl	%%eax, %%eax\n"
-		".Lpopulate_tlb:\n\t"
+		".Lpopulate_tlb_%=:\n\t"
 		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
 		"addl	$4096, %%eax\n\t"
 		"cmpl	%%eax, %[size]\n\t"
-		"jne	.Lpopulate_tlb\n\t"
+		"jne	.Lpopulate_tlb_%=\n\t"
 		"xorl	%%eax, %%eax\n\t"
 		"cpuid\n\t"
 		/* Now fill the cache */
 		"xorl	%%eax, %%eax\n"
-		".Lfill_cache:\n"
+		".Lfill_cache_%=:\n"
 		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
 		"addl	$64, %%eax\n\t"
 		"cmpl	%%eax, %[size]\n\t"
-		"jne	.Lfill_cache\n\t"
+		"jne	.Lfill_cache_%=\n\t"
 		"lfence\n"
 		:: [flush_pages] "r" (vmx_l1d_flush_pages),
 		    [size] "r" (size)
 		: "eax", "ebx", "ecx", "edx");
 }
 
+static noinstr void vmx_maybe_l1d_flush(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * This code is only executed when the flush mode is 'cond' or
+	 * 'always'
+	 */
+	if (static_branch_likely(&vmx_l1d_flush_cond)) {
+		bool flush_l1d;
+
+		/*
+		 * Clear the per-vcpu flush bit, it gets set again
+		 * either from vcpu_run() or from one of the unsafe
+		 * VMEXIT handlers.
+		 */
+		flush_l1d = vcpu->arch.l1tf_flush_l1d;
+		vcpu->arch.l1tf_flush_l1d = false;
+
+		/*
+		 * Clear the per-cpu flush bit, it gets set again from
+		 * the interrupt handlers.
+		 */
+		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
+		kvm_clear_cpu_l1tf_flush_l1d();
+
+		if (!flush_l1d)
+			return;
+	}
+
+	vmx_l1d_flush(vcpu);
+}
+
 static void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
@@ -7284,7 +7296,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	 * This is only after asi_enter() for performance reasons.
 	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
-		vmx_l1d_flush(vcpu);
+		vmx_maybe_l1d_flush(vcpu);
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();
@@ -8321,6 +8333,14 @@ gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags
 	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
 }
 
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+static noinstr void vmx_post_asi_enter(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_L1TF))
+		vmx_l1d_flush(kvm_get_running_vcpu());
+}
+#endif
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.name = KBUILD_MODNAME,
 
@@ -8727,6 +8747,14 @@ static struct kvm_x86_init_ops vmx_init_ops __initdata = {
 
 	.runtime_ops = &vmx_x86_ops,
 	.pmu_ops = &intel_pmu_ops,
+
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+	/*
+	 * Only Intel CPUs currently do anything in post-enter, so this is a
+	 * vendor hook for now.
+	 */
+	.post_asi_enter = vmx_post_asi_enter,
+#endif
 };
 
 static void vmx_cleanup_l1d_flush(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b9947e88d4ac6..b5e4df2aa1636 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9695,6 +9695,36 @@ static void kvm_x86_check_cpu_compat(void *ret)
 	*(int *)ret = kvm_x86_check_processor_compatibility();
 }
 
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+
+static noinstr void pre_asi_exit(void)
+{
+	/*
+	 * Flush out branch predictor training by the guest before we go on
+	 * to access secrets.
+	 */
+
+	/* Clear normal indirect branch predictions, if we haven't */
+	if (cpu_feature_enabled(X86_FEATURE_IBPB) &&
+	    !cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT))
+		__wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
+
+	/* Flush the RAS/RSB if we haven't already. */
+	if (!IS_ENABLED(CONFIG_RETPOLINE) ||
+	    !cpu_feature_enabled(X86_FEATURE_RSB_VMEXIT))
+		fill_return_buffer();
+}
+
+struct asi_hooks asi_hooks = {
+	.pre_asi_exit = pre_asi_exit,
+	/* post_asi_enter populated later. */
+};
+
+#else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
+struct asi_hooks asi_hooks = {};
+#endif /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
+
+
 int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 {
 	u64 host_pat;
@@ -9753,7 +9783,8 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	if (r)
 		goto out_free_percpu;
 
-	r = asi_register_class("KVM", NULL);
+	asi_hooks.post_asi_enter = ops->post_asi_enter;
+	r = asi_register_class("KVM", &asi_hooks);
 	if (r < 0)
 		goto out_mmu_exit;
 	kvm_asi_index = r;
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 391059b2c6fbc..db5b8ee01efeb 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -396,3 +396,10 @@ SYM_CODE_END(__x86_return_thunk)
 EXPORT_SYMBOL(__x86_return_thunk)
 
 #endif /* CONFIG_MITIGATION_RETHUNK */
+
+.pushsection .noinstr.text, "ax"
+SYM_CODE_START(fill_return_buffer)
+	__FILL_RETURN_BUFFER(%_ASM_AX,RSB_CLEAR_LOOPS)
+	RET
+SYM_CODE_END(fill_return_buffer)
+.popsection
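
A minimal standalone illustration of the %= local-label trick used in
vmx_l1d_flush() above (not part of the patch): GCC and Clang expand %=
to a number unique to each emitted instance of an asm statement, so the
labels stay distinct even when the containing function is inlined at
more than one call site.

  #include <stdio.h>

  /*
   * With a fixed label like ".Lloop:", inlining spin() into two call
   * sites would emit the same label twice and fail to assemble;
   * ".Lloop_%=" gets a unique suffix per asm instance.
   */
  static inline int spin(int n)
  {
  	asm volatile(
  		".Lloop_%=:\n\t"
  		"decl	%0\n\t"
  		"jnz	.Lloop_%=\n\t"
  		: "+r" (n) : : "cc");
  	return n;
  }

  int main(void)
  {
  	/* Both calls count down to zero; prints "0 0". */
  	printf("%d %d\n", spin(3), spin(5));
  	return 0;
  }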