
[v2,2/2] xen/nospec: Introduce CONFIG_SPECULATIVE_HARDEN_BRANCH and disable it

Message ID 20191001143207.15844-3-andrew.cooper3@citrix.com
State New, archived
Series xen/nospec: Add Kconfig options for speculative hardening

Commit Message

Andrew Cooper Oct. 1, 2019, 2:32 p.m. UTC
The code generation for barrier_nospec_true() is not correct; the lfence
instructions are generally too early in the instruction stream, resulting in a
performance hit but no additional speculative safety.

This is caused by inline assembly trying to fight the compiler optimiser, and
the optimiser winning.  There is no clear way to achieve safety, so turn the
perf hit off for now.
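
For context, here is a compilable sketch of the construct in question
(always_inline is spelt out as Xen defines it; the final caller is invented
purely for illustration):

#include <stdbool.h>

#define always_inline __inline__ __attribute__((__always_inline__))

static always_inline bool barrier_nospec_true(void)
{
    /*
     * Intent: an lfence at the head of the basic block entered via the
     * conditional jump, so a mispredicted branch cannot continue with
     * unconfirmed data.
     */
    asm volatile ( "lfence" ::: "memory" );
    return true;
}

/* Protect evaluation of a conditional with respect to speculation. */
static always_inline bool evaluate_nospec(bool condition)
{
    return condition ? barrier_nospec_true() : !barrier_nospec_true();
}

/*
 * Invented caller.  The two arms of the ?: contain identical volatile asm,
 * so the optimiser may merge them and schedule the single lfence without
 * reference to the branch - paying the serialisation cost while leaving
 * the mispredicted path unfenced.
 */
bool example_check(bool privileged)
{
    return evaluate_nospec(privileged);
}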

This also largely reverts 3860d5534df4.  The name 'l1tf-barrier', and making
barrier_nospec_true() depend on CONFIG_HVM, were constrained by what could be
discussed publicly at the time.  Now that MDS is public, neither aspect is
correct.

As l1tf-barrier hasn't been in a release of Xen, and
CONFIG_SPECULATIVE_HARDEN_BRANCH is disabled until we can find a safe way of
implementing the functionality, remove the l1tf-barrier command line option.
Fix a typo of 'conditionals' in an adjacent comment.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-acked-by: Juergen Gross <jgross@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>
CC: Norbert Manthey <nmanthey@amazon.de>

v2:
 * Expand the commit message to describe how the generated code is broken.
 * Rename to CONFIG_SPECULATIVE_HARDEN_BRANCH
 * Switch alternative() to asm()
 * Fix a comment typo
---
 docs/misc/xen-command-line.pandoc |  8 +-------
 xen/arch/x86/spec_ctrl.c          | 17 ++---------------
 xen/common/Kconfig                | 21 +++++++++++++++++++++
 xen/include/asm-x86/cpufeatures.h |  2 +-
 xen/include/asm-x86/nospec.h      |  6 +++---
 xen/include/asm-x86/spec_ctrl.h   |  1 -
 6 files changed, 28 insertions(+), 27 deletions(-)

Comments

Jan Beulich Oct. 2, 2019, 9:03 a.m. UTC | #1
On 01.10.2019 16:32, Andrew Cooper wrote:
> The code generation for barrier_nospec_true() is not correct; the lfence
> instructions are generally too early in the instruction stream, resulting in a
> performance hit but no additional speculative safety.
> 
> This is caused by inline assembly trying to fight the compiler optimiser, and
> the optimiser winning.  There is no clear way to achieve safety, so turn the
> perf hit off for now.

For one (following the v1 thread, which was still in progress when you
sent this), an important but not (explicitly) mentioned aspect here is
that in case affected inline functions do not get inlined, the LFENCE
would end up in the function body rather than in the caller. I think
this wants making explicit.

As to "no clear way" - is the "convert all involved inline functions
to always_inline" not a sufficiently promising approach, until aid by
compilers is available? (For gcc 9 the asm inline() approach could also
be chosen, and shouldn't be overly difficult to carry out.)
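
To make the gcc 9 route concrete, a sketch of what it could look like
(illustrative only: the version check and the barrier_nospec_asm() wrapper
are invented here, while always_inline and CONFIG_SPECULATIVE_HARDEN_BRANCH
are as used elsewhere in this patch).  "asm inline" tells gcc to treat the
asm statement as minimal-size for inlining decisions, so functions wrapping
it are not pushed over the inlining heuristics:

#if defined(__GNUC__) && (__GNUC__ >= 9)
# define barrier_nospec_asm() asm inline volatile ( "lfence" ::: "memory" )
#else
# define barrier_nospec_asm() asm volatile ( "lfence" ::: "memory" )
#endif

static always_inline bool barrier_nospec_true(void)
{
#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
    barrier_nospec_asm();
#endif
    return true;
}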

Finally ...

> --- a/xen/include/asm-x86/nospec.h
> +++ b/xen/include/asm-x86/nospec.h
> @@ -9,13 +9,13 @@
>  /* Allow to insert a read memory barrier into conditionals */
>  static always_inline bool barrier_nospec_true(void)
>  {
> -#ifdef CONFIG_HVM
> -    alternative("", "lfence", X86_FEATURE_SC_L1TF_VULN);
> +#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
> +    asm volatile ( "lfence" ::: "memory" );
>  #endif
>      return true;
>  }

... doesn't this change alone (assuming the config option could be set
to Y) already take care of the issue? By there no longer being the
(misleading to the compiler) complexity of alternative(), there should
be far fewer (if any) instances of this (and its inline users) not
getting inlined. In fact I wonder whether then the always_inline here
couldn't also be converted back to just inline (except perhaps for
clang, as per the other patch of yours).

Then again if the config option could be set to Y, we'd not want the
LFENCE unconditionally anyway aiui: Hardware affected by neither L1TF
nor the MDS variations (i.e. in particular all of AMD hardware)
shouldn't get penalized. So perhaps it was a bad request of mine to
switch from alternative() to asm(); instead I should have asked that
your use of X86_FEATURE_ALWAYS in v1 be replaced by something that
would actually trigger a build error (or work correctly) if the config
option became settable to Y.

Jan
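
For reference, a sketch of the direction outlined above (illustrative only:
X86_FEATURE_SC_BRANCH_HARDEN is a hypothetical synthetic feature bit, not
something this series introduces).  Keyed off a vulnerability-driven feature
flag, the alternative() remains a NOP on unaffected (e.g. AMD) hardware, and
the flag would only be set by init_speculation_mitigations() on hardware
affected by L1TF or MDS:

static always_inline bool barrier_nospec_true(void)
{
#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
    /* Patched from a NOP to an lfence only where the bit is set. */
    alternative("", "lfence", X86_FEATURE_SC_BRANCH_HARDEN);
#endif
    return true;
}

Whether the resulting lfence placement is correct is, of course, exactly the
code-generation question this patch sets aside.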

Patch

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index fc64429064..b9c5b822ca 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1932,7 +1932,7 @@  By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
 ### spec-ctrl (x86)
 > `= List of [ <bool>, xen=<bool>, {pv,hvm,msr-sc,rsb,md-clear}=<bool>,
 >              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd,eager-fpu,
->              l1d-flush,l1tf-barrier}=<bool> ]`
+>              l1d-flush}=<bool> ]`
 
 Controls for speculative execution sidechannel mitigations.  By default, Xen
 will pick the most appropriate mitigations based on compiled in support,
@@ -2004,12 +2004,6 @@  Irrespective of Xen's setting, the feature is virtualised for HVM guests to
 use.  By default, Xen will enable this mitigation on hardware believed to be
 vulnerable to L1TF.
 
-On hardware vulnerable to L1TF, the `l1tf-barrier=` option can be used to force
-or prevent Xen from protecting evaluations inside the hypervisor with a barrier
-instruction to not load potentially secret information into L1 cache.  By
-default, Xen will enable this mitigation on hardware believed to be vulnerable
-to L1TF.
-
 ### sync_console
 > `= <boolean>`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 4761be81bd..5ea8870981 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -21,7 +21,6 @@ 
 #include <xen/lib.h>
 #include <xen/warning.h>
 
-#include <asm/cpuid.h>
 #include <asm/microcode.h>
 #include <asm/msr.h>
 #include <asm/processor.h>
@@ -53,7 +52,6 @@  bool __read_mostly opt_ibpb = true;
 bool __read_mostly opt_ssbd = false;
 int8_t __read_mostly opt_eager_fpu = -1;
 int8_t __read_mostly opt_l1d_flush = -1;
-int8_t __read_mostly opt_l1tf_barrier = -1;
 
 bool __initdata bsp_delay_spec_ctrl;
 uint8_t __read_mostly default_xen_spec_ctrl;
@@ -98,8 +96,6 @@  static int __init parse_spec_ctrl(const char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
-            opt_l1tf_barrier = 0;
-
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -175,8 +171,6 @@  static int __init parse_spec_ctrl(const char *s)
             opt_eager_fpu = val;
         else if ( (val = parse_boolean("l1d-flush", s, ss)) >= 0 )
             opt_l1d_flush = val;
-        else if ( (val = parse_boolean("l1tf-barrier", s, ss)) >= 0 )
-            opt_l1tf_barrier = val;
         else
             rc = -EINVAL;
 
@@ -337,7 +331,7 @@  static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -348,8 +342,7 @@  static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
-           opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "",
-           opt_l1tf_barrier                          ? " L1TF_BARRIER" : "");
+           opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "");
 
     /* L1TF diagnostics, printed if vulnerable or PV shadowing is in use. */
     if ( cpu_has_bug_l1tf || opt_pv_l1tf_hwdom || opt_pv_l1tf_domu )
@@ -1034,12 +1027,6 @@  void __init init_speculation_mitigations(void)
     else if ( opt_l1d_flush == -1 )
         opt_l1d_flush = cpu_has_bug_l1tf && !(caps & ARCH_CAPS_SKIP_L1DFL);
 
-    /* By default, enable L1TF_VULN on L1TF-vulnerable hardware */
-    if ( opt_l1tf_barrier == -1 )
-        opt_l1tf_barrier = cpu_has_bug_l1tf && (opt_smt || !opt_l1d_flush);
-    if ( opt_l1tf_barrier > 0 )
-        setup_force_cpu_cap(X86_FEATURE_SC_L1TF_VULN);
-
     /*
      * We do not disable HT by default on affected hardware.
      *
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 911333357a..b0b8aadeb2 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -99,6 +99,27 @@  config SPECULATIVE_HARDEN_ARRAY
 
 	  If unsure, say Y.
 
+config SPECULATIVE_HARDEN_BRANCH
+	bool "Speculative Branch Hardening"
+	depends on BROKEN
+	---help---
+	  Contemporary processors may use speculative execution as a
+	  performance optimisation, but this can potentially be abused by an
+	  attacker to leak data via speculative sidechannels.
+
+	  One source of misbehaviour is by executing the wrong basic block
+	  following a conditional jump.
+
+	  When enabled, specific conditions which have been deemed liable to
+	  be speculatively abused will be hardened to avoid entering the wrong
+	  basic block.
+
+	  This is a best-effort mitigation.  There are no guarantees that all
+	  areas of code open to abuse have been hardened.
+
+	  !!! WARNING - This doesn't function as intended.  It does not
+              generate speculatively safe code !!!
+
 endmenu
 
 config KEXEC
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index 91eccf5161..ecb651c35d 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -27,7 +27,7 @@  XEN_CPUFEATURE(XEN_SMAP,          X86_SYNTH(11)) /* SMAP gets used by Xen itself
 XEN_CPUFEATURE(LFENCE_DISPATCH,   X86_SYNTH(12)) /* lfence set as Dispatch Serialising */
 XEN_CPUFEATURE(IND_THUNK_LFENCE,  X86_SYNTH(13)) /* Use IND_THUNK_LFENCE */
 XEN_CPUFEATURE(IND_THUNK_JMP,     X86_SYNTH(14)) /* Use IND_THUNK_JMP */
-XEN_CPUFEATURE(SC_L1TF_VULN,      X86_SYNTH(15)) /* L1TF protection required */
+/* 15 unused. */
 XEN_CPUFEATURE(SC_MSR_PV,         X86_SYNTH(16)) /* MSR_SPEC_CTRL used by Xen for PV */
 XEN_CPUFEATURE(SC_MSR_HVM,        X86_SYNTH(17)) /* MSR_SPEC_CTRL used by Xen for HVM */
 XEN_CPUFEATURE(SC_RSB_PV,         X86_SYNTH(18)) /* RSB overwrite needed for PV */
diff --git a/xen/include/asm-x86/nospec.h b/xen/include/asm-x86/nospec.h
index 2aa47b3455..03748abbd3 100644
--- a/xen/include/asm-x86/nospec.h
+++ b/xen/include/asm-x86/nospec.h
@@ -9,13 +9,13 @@ 
 /* Allow to insert a read memory barrier into conditionals */
 static always_inline bool barrier_nospec_true(void)
 {
-#ifdef CONFIG_HVM
-    alternative("", "lfence", X86_FEATURE_SC_L1TF_VULN);
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
+    asm volatile ( "lfence" ::: "memory" );
 #endif
     return true;
 }
 
-/* Allow to protect evaluation of conditionasl with respect to speculation */
+/* Allow to protect evaluation of conditionals with respect to speculation */
 static always_inline bool evaluate_nospec(bool condition)
 {
     return condition ? barrier_nospec_true() : !barrier_nospec_true();
diff --git a/xen/include/asm-x86/spec_ctrl.h b/xen/include/asm-x86/spec_ctrl.h
index 1339ddd7ef..ba03bb42e5 100644
--- a/xen/include/asm-x86/spec_ctrl.h
+++ b/xen/include/asm-x86/spec_ctrl.h
@@ -37,7 +37,6 @@  extern bool opt_ibpb;
 extern bool opt_ssbd;
 extern int8_t opt_eager_fpu;
 extern int8_t opt_l1d_flush;
-extern int8_t opt_l1tf_barrier;
 
 extern bool bsp_delay_spec_ctrl;
 extern uint8_t default_xen_spec_ctrl;