[8/8] x86/spec-ctrl: Mitigate the Zen1 DIV leakage

Message ID 20230913202758.508225-9-andrew.cooper3@citrix.com (mailing list archive)
State Superseded
Series x86/spec-ctrl: AMD DIV fix, and VERW prerequisite bugfixes

Commit Message

Andrew Cooper Sept. 13, 2023, 8:27 p.m. UTC
In the Zen1 microarchitecture, there is one divider in the pipeline which
services uops from both threads.  In the case of #DE, the latched result from
the previous DIV to execute will be forwarded speculatively.

This is an interesting covert channel that allows two threads to communicate
without any system calls.  It also allows userspace to obtain the result of
the most recent DIV instruction executed (even speculatively) in the core,
which can be from a higher privilege context.

Scrub the buffers in the divider unit by executing a non-faulting divide.
This needs performing on the exit-to-guest paths, and on the ist_exit-to-Xen path.

This is XSA-439 / CVE-2023-20588.
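
As an illustration of the scrub (the function name below is invented for this
note; the patch itself implements it directly in assembly as
DO_SPEC_CTRL_DIV), the whole mitigation amounts to one cheap, never-faulting
byte divide:

    /* Execute a non-faulting DIV so the divider's latched result is
     * overwritten.  Dividing AX=1 by AL=1 is the cheapest option and
     * cannot raise #DE. */
    static inline void scrub_div_state(void)
    {
        unsigned int tmp = 1;

        asm volatile ( "div %b0" : "+a" (tmp) : : "cc" );
    }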

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

No embargo - this is already public.  XSA paperwork to follow.
---
 docs/misc/xen-command-line.pandoc        |  6 +++-
 xen/arch/x86/hvm/svm/entry.S             |  1 +
 xen/arch/x86/include/asm/cpufeatures.h   |  2 +-
 xen/arch/x86/include/asm/spec_ctrl_asm.h | 16 +++++++++
 xen/arch/x86/spec_ctrl.c                 | 45 ++++++++++++++++++++++--
 5 files changed, 66 insertions(+), 4 deletions(-)

Comments

Andrew Cooper Sept. 13, 2023, 8:43 p.m. UTC | #1
On 13/09/2023 9:27 pm, Andrew Cooper wrote:
> @@ -955,6 +960,40 @@ static void __init srso_calculations(bool hw_smt_enabled)
>          setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
>  }
>  
> +/*
> + * Div leakage is specific to the AMD Zen1 microarchitecure.  Use STIBP as a
> + * heuristic to select between Zen1 and Zen2 uarches.
> + */
> +static bool __init has_div_vuln(void)
> +{
> +    if ( !(boot_cpu_data.x86_vendor &
> +           (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +        return false;
> +
> +    if ( (boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
> +         !boot_cpu_has(X86_FEATURE_AMD_STIBP) )
> +        return false;

Bah - this serves me right for posting before waiting for CI to check
that Naples picks the right default.  The STIBP check is backwards and
will mix up Zen1/2.

I'm going to create real is_zen{1,2}_uarch() helpers in amd.h to avoid
opencoding this heuristic yet again.  I highly doubt this will be the
final time we need it.
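
(Sketch only, not the eventual patch: given that Zen1 parts lack AMD_STIBP
while Zen2 parts advertise it, such helpers could be as simple as the below,
with the caller still responsible for the vendor/family checks.)

    static inline bool is_zen1_uarch(void)
    {
        return !boot_cpu_has(X86_FEATURE_AMD_STIBP);
    }

    static inline bool is_zen2_uarch(void)
    {
        return boot_cpu_has(X86_FEATURE_AMD_STIBP);
    }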

~Andrew
Marek Marczykowski-Górecki Sept. 13, 2023, 9:12 p.m. UTC | #2
On Wed, Sep 13, 2023 at 09:27:58PM +0100, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
> index da0593de8542..724de2e11db4 100644
> --- a/xen/arch/x86/include/asm/cpufeatures.h
> +++ b/xen/arch/x86/include/asm/cpufeatures.h
> @@ -35,7 +35,7 @@ XEN_CPUFEATURE(SC_RSB_HVM,        X86_SYNTH(19)) /* RSB overwrite needed for HVM
>  XEN_CPUFEATURE(XEN_SELFSNOOP,     X86_SYNTH(20)) /* SELFSNOOP gets used by Xen itself */
>  XEN_CPUFEATURE(SC_MSR_IDLE,       X86_SYNTH(21)) /* Clear MSR_SPEC_CTRL on idle */
>  XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
> -/* Bits 23 unused. */
> +XEN_CPUFEATURE(SC_DIV,            X86_SYNTH(25)) /* DIV scrub needed */

s/25/23/

>  XEN_CPUFEATURE(SC_RSB_IDLE,       X86_SYNTH(24)) /* RSB overwrite needed for idle. */
>  XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
>  XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
Jan Beulich Sept. 14, 2023, 10:52 a.m. UTC | #3
On 13.09.2023 22:27, Andrew Cooper wrote:
> @@ -378,6 +392,8 @@ UNLIKELY_DISPATCH_LABEL(\@_serialise):
>      verw STACK_CPUINFO_FIELD(verw_sel)(%r14)
>  .L\@_verw_skip:
>  
> +    ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
> +
>  .L\@_skip_ist_exit:
>  .endm

While we did talk about using alternatives patching here, I'm now in
doubt again, in particular because the rest of the macro uses
conditionals anyway, and the code here is bypassed for non-IST exits. If
you're sure you want to stick to this, then I think some justification
wants adding to the description.

> @@ -955,6 +960,40 @@ static void __init srso_calculations(bool hw_smt_enabled)
>          setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
>  }
>  
> +/*
> + * Div leakage is specific to the AMD Zen1 microarchitecure.  Use STIBP as a
> + * heuristic to select between Zen1 and Zen2 uarches.
> + */
> +static bool __init has_div_vuln(void)
> +{
> +    if ( !(boot_cpu_data.x86_vendor &
> +           (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +        return false;
> +
> +    if ( (boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
> +         !boot_cpu_has(X86_FEATURE_AMD_STIBP) )
> +        return false;
> +
> +    return true;
> +}
> +
> +static void __init div_calculations(bool hw_smt_enabled)
> +{
> +    bool cpu_bug_div = has_div_vuln();
> +
> +    if ( opt_div_scrub == -1 )
> +        opt_div_scrub = cpu_bug_div;
> +
> +    if ( opt_div_scrub )
> +        setup_force_cpu_cap(X86_FEATURE_SC_DIV);

Isn't this only lowering performance (even if just slightly) when SMT is
enabled, without gaining us very much?

> +    if ( opt_smt == -1 && cpu_bug_div && hw_smt_enabled )
> +        warning_add(
> +            "Booted on leaky-DIV hardware with SMT/Hyperthreading\n"
> +            "enabled.  Please assess your configuration and choose an\n"
> +            "explicit 'smt=<bool>' setting.  See XSA-439.\n");
> +}

What about us running virtualized? The topology we see may not be that
of the underlying hardware. Maybe check_smt_enabled() should return
true when cpu_has_hypervisor is true? (In-guest decisions would
similarly need to assume that they may be running on SMT-enabled
hardware, despite not themselves seeing this to be the case.)

Since we can't know for sure when running virtualized, that's a case
where I would consider it useful to enable the workaround nevertheless
(perhaps accompanied by a warning that whether this helps depends on
the next level hypervisor).

Jan
Jason Andryuk Sept. 14, 2023, 1:12 p.m. UTC | #4
On Wed, Sep 13, 2023 at 6:09 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> @@ -955,6 +960,40 @@ static void __init srso_calculations(bool hw_smt_enabled)
>          setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
>  }
>
> +/*
> + * Div leakage is specific to the AMD Zen1 microarchitecure.  Use STIBP as a
> + * heuristic to select between Zen1 and Zen2 uarches.
> + */
> +static bool __init has_div_vuln(void)
> +{
> +    if ( !(boot_cpu_data.x86_vendor &
> +           (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +        return false;
> +
> +    if ( (boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
> +         !boot_cpu_has(X86_FEATURE_AMD_STIBP) )
> +        return false;
> +
> +    return true;
> +}
> +
> +static void __init div_calculations(bool hw_smt_enabled)
> +{
> +    bool cpu_bug_div = has_div_vuln();
> +

Would it make sense to add
    if ( !cpu_bug_div )
        return;
...

> +    if ( opt_div_scrub == -1 )
> +        opt_div_scrub = cpu_bug_div;
> +
> +    if ( opt_div_scrub )
> +        setup_force_cpu_cap(X86_FEATURE_SC_DIV);

...so that div-scrub=1 isn't setting X86_FEATURE_SC_DIV on un-affected
hardware?  Or do you want to leave command line control in place in
case it might be needed as a future workaround on other hardware?
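
(For reference, and not part of the posted patch, that suggestion would make
the function look roughly like the below - with the side effect that an
explicit div-scrub=1 becomes a no-op on unaffected hardware:)

    static void __init div_calculations(bool hw_smt_enabled)
    {
        if ( !has_div_vuln() )
            return;

        if ( opt_div_scrub == -1 )
            opt_div_scrub = true;

        if ( opt_div_scrub )
            setup_force_cpu_cap(X86_FEATURE_SC_DIV);

        if ( opt_smt == -1 && hw_smt_enabled )
            warning_add(
                "Booted on leaky-DIV hardware with SMT/Hyperthreading\n"
                "enabled.  Please assess your configuration and choose an\n"
                "explicit 'smt=<bool>' setting.  See XSA-439.\n");
    }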

Thanks,
Jason
Andrew Cooper Sept. 14, 2023, 8:01 p.m. UTC | #5
On 14/09/2023 11:52 am, Jan Beulich wrote:
> On 13.09.2023 22:27, Andrew Cooper wrote:
>> @@ -378,6 +392,8 @@ UNLIKELY_DISPATCH_LABEL(\@_serialise):
>>      verw STACK_CPUINFO_FIELD(verw_sel)(%r14)
>>  .L\@_verw_skip:
>>  
>> +    ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
>> +
>>  .L\@_skip_ist_exit:
>>  .endm
> While we did talk about using alternatives patching here, I'm now in
> doubt again, in particular because the rest of the macro uses
> conditionals anyway, and the code here is bypassed for non-IST exits. If
> you're sure you want to stick to this, then I think some justification
> wants adding to the description.

Patching IST paths is safe - we drop into NMI context, and rewrite
before starting APs.

VERW needs to remain a conditional because of the FB_CLEAR/MMIO paths. 
MSR_SPEC_CTRL is going to turn back into being an alternative when I
make eIBRS work sensibly.

>> @@ -955,6 +960,40 @@ static void __init srso_calculations(bool hw_smt_enabled)
>>          setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
>>  }
>>  
>> +/*
>> + * Div leakage is specific to the AMD Zen1 microarchitecure.  Use STIBP as a
>> + * heuristic to select between Zen1 and Zen2 uarches.
>> + */
>> +static bool __init has_div_vuln(void)
>> +{
>> +    if ( !(boot_cpu_data.x86_vendor &
>> +           (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
>> +        return false;
>> +
>> +    if ( (boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
>> +         !boot_cpu_has(X86_FEATURE_AMD_STIBP) )
>> +        return false;
>> +
>> +    return true;
>> +}
>> +
>> +static void __init div_calculations(bool hw_smt_enabled)
>> +{
>> +    bool cpu_bug_div = has_div_vuln();
>> +
>> +    if ( opt_div_scrub == -1 )
>> +        opt_div_scrub = cpu_bug_div;
>> +
>> +    if ( opt_div_scrub )
>> +        setup_force_cpu_cap(X86_FEATURE_SC_DIV);
> Isn't this only lowering performance (even if just slightly) when SMT is
> enabled, without gaining us very much?

It is consistent with how MDS/L1TF/others work, which is important.

And it does protect against a single-threaded attacker, for what that's
worth in practice.

>
>> +    if ( opt_smt == -1 && cpu_bug_div && hw_smt_enabled )
>> +        warning_add(
>> +            "Booted on leaky-DIV hardware with SMT/Hyperthreading\n"
>> +            "enabled.  Please assess your configuration and choose an\n"
>> +            "explicit 'smt=<bool>' setting.  See XSA-439.\n");
>> +}
> What about us running virtualized? The topology we see may not be that
> of the underlying hardware. Maybe check_smt_enabled() should return
> true when cpu_has_hypervisor is true? (In-guest decisions would
> similarly need to assume that they may be running on SMT-enabled
> hardware, despite not themselves seeing this to be the case.)
>
> Since we can't know for sure when running virtualized, that's a case
> where I would consider it useful to enable the workaround nevertheless
> (perhaps accompanied by a warning that whether this helps depends on
> the next level hypervisor).

Honestly, SMT's not relevant.  If you're virtualised, you've lost
irrespective.

There is no FOO_NO bit, so there's no possible way a VM can figure out
if it is currently on, or may subsequently move to, an impacted processor.

~Andrew
Andrew Cooper Sept. 14, 2023, 8:05 p.m. UTC | #6
On 14/09/2023 2:12 pm, Jason Andryuk wrote:
> On Wed, Sep 13, 2023 at 6:09 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> @@ -955,6 +960,40 @@ static void __init srso_calculations(bool hw_smt_enabled)
>>          setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
>>  }
>>
>> +/*
>> + * Div leakage is specific to the AMD Zen1 microarchitecure.  Use STIBP as a
>> + * heuristic to select between Zen1 and Zen2 uarches.
>> + */
>> +static bool __init has_div_vuln(void)
>> +{
>> +    if ( !(boot_cpu_data.x86_vendor &
>> +           (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
>> +        return false;
>> +
>> +    if ( (boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
>> +         !boot_cpu_has(X86_FEATURE_AMD_STIBP) )
>> +        return false;
>> +
>> +    return true;
>> +}
>> +
>> +static void __init div_calculations(bool hw_smt_enabled)
>> +{
>> +    bool cpu_bug_div = has_div_vuln();
>> +
> Would it make sense to add
>     if ( !cpu_bug_div )
>         return
> ...
>
>> +    if ( opt_div_scrub == -1 )
>> +        opt_div_scrub = cpu_bug_div;
>> +
>> +    if ( opt_div_scrub )
>> +        setup_force_cpu_cap(X86_FEATURE_SC_DIV);
> ...so that div-scrub=1 isn't setting X86_FEATURE_SC_DIV on un-affected
> hardware?  Or do you want to leave command line control in place in
> case it might be needed as a future workaround on other hardware?

All options (where possible) allow for paths to be explicitly activated
on un-affected hardware so we can test this giant mess.

The only cases where we ignore a user choice are when the result would
crash, e.g. with #GP due to insufficient microcode.
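
For example, per the documentation added in this patch, the heuristic can be
overridden either way from the Xen command line:

    spec-ctrl=div-scrub=1    (force the DIV scrub on, e.g. for testing)
    spec-ctrl=div-scrub=0    (suppress it even on affected Zen1 hardware)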

~Andrew

Patch

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index f88e6a70aed6..7acd68885656 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2353,7 +2353,7 @@  By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
 >              {msr-sc,rsb,md-clear,ibpb-entry}=<bool>|{pv,hvm}=<bool>,
 >              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd,psfd,
 >              eager-fpu,l1d-flush,branch-harden,srb-lock,
->              unpriv-mmio,gds-mit}=<bool> ]`
+>              unpriv-mmio,gds-mit,div-scrub}=<bool> ]`
 
 Controls for speculative execution sidechannel mitigations.  By default, Xen
 will pick the most appropriate mitigations based on compiled in support,
@@ -2475,6 +2475,10 @@  has elected not to lock the configuration, Xen will use GDS_CTRL to mitigate
 GDS with.  Otherwise, Xen will mitigate by disabling AVX, which blocks the use
 of the AVX2 Gather instructions.
 
+On all hardware, the `div-scrub=` option can be used to force or prevent Xen
+from mitigating the DIV-leakage vulnerability.  By default, Xen will mitigate
+DIV-leakage on hardware believed to be vulnerable.
+
 ### sync_console
 > `= <boolean>`
 
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 9effd2199ba0..c52528fed4cf 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -72,6 +72,7 @@  __UNLIKELY_END(nsvm_hap)
 1:          /* No Spectre v1 concerns.  Execution will hit VMRUN imminently. */
         .endm
         ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
+        ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
 
         pop  %r15
         pop  %r14
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index da0593de8542..724de2e11db4 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -35,7 +35,7 @@  XEN_CPUFEATURE(SC_RSB_HVM,        X86_SYNTH(19)) /* RSB overwrite needed for HVM
 XEN_CPUFEATURE(XEN_SELFSNOOP,     X86_SYNTH(20)) /* SELFSNOOP gets used by Xen itself */
 XEN_CPUFEATURE(SC_MSR_IDLE,       X86_SYNTH(21)) /* Clear MSR_SPEC_CTRL on idle */
 XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
-/* Bits 23 unused. */
+XEN_CPUFEATURE(SC_DIV,            X86_SYNTH(25)) /* DIV scrub needed */
 XEN_CPUFEATURE(SC_RSB_IDLE,       X86_SYNTH(24)) /* RSB overwrite needed for idle. */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
diff --git a/xen/arch/x86/include/asm/spec_ctrl_asm.h b/xen/arch/x86/include/asm/spec_ctrl_asm.h
index 9740697114ad..10e57780f08b 100644
--- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
+++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
@@ -165,6 +165,18 @@ 
 .L\@_verw_skip:
 .endm
 
+.macro DO_SPEC_CTRL_DIV
+/*
+ * Requires nothing
+ * Clobbers %rax
+ *
+ * Issue a DIV for its flushing side effect (Zen1 uarch specific).  Any
+ * non-faulting DIV will do, and a byte DIV has least latency.
+ */
+    mov $1, %eax
+    div %al
+.endm
+
 .macro DO_SPEC_CTRL_ENTRY maybexen:req
 /*
  * Requires %rsp=regs (also cpuinfo if !maybexen)
@@ -267,6 +279,8 @@ 
     ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_PV
 
     DO_SPEC_CTRL_COND_VERW
+
+    ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
 .endm
 
 /*
@@ -378,6 +392,8 @@  UNLIKELY_DISPATCH_LABEL(\@_serialise):
     verw STACK_CPUINFO_FIELD(verw_sel)(%r14)
 .L\@_verw_skip:
 
+    ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
+
 .L\@_skip_ist_exit:
 .endm
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 9b8fdb5303ad..5332dba3f659 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -67,6 +67,7 @@  static int8_t __initdata opt_srb_lock = -1;
 static bool __initdata opt_unpriv_mmio;
 static bool __ro_after_init opt_fb_clear_mmio;
 static int8_t __initdata opt_gds_mit = -1;
+static int8_t __initdata opt_div_scrub = -1;
 
 static int __init cf_check parse_spec_ctrl(const char *s)
 {
@@ -121,6 +122,7 @@  static int __init cf_check parse_spec_ctrl(const char *s)
             opt_srb_lock = 0;
             opt_unpriv_mmio = false;
             opt_gds_mit = 0;
+            opt_div_scrub = 0;
         }
         else if ( val > 0 )
             rc = -EINVAL;
@@ -273,6 +275,8 @@  static int __init cf_check parse_spec_ctrl(const char *s)
             opt_unpriv_mmio = val;
         else if ( (val = parse_boolean("gds-mit", s, ss)) >= 0 )
             opt_gds_mit = val;
+        else if ( (val = parse_boolean("div-scrub", s, ss)) >= 0 )
+            opt_div_scrub = val;
         else
             rc = -EINVAL;
 
@@ -473,7 +477,7 @@  static void __init print_details(enum ind_thunk thunk)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s%s%s, Other:%s%s%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s%s%s, Other:%s%s%s%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -498,7 +502,8 @@  static void __init print_details(enum ind_thunk thunk)
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm ||
            opt_fb_clear_mmio                         ? " VERW"  : "",
-           opt_branch_harden                         ? " BRANCH_HARDEN" : "");
+           opt_branch_harden                         ? " BRANCH_HARDEN" : "",
+           opt_div_scrub                             ? " DIV" : "");
 
     /* L1TF diagnostics, printed if vulnerable or PV shadowing is in use. */
     if ( cpu_has_bug_l1tf || opt_pv_l1tf_hwdom || opt_pv_l1tf_domu )
@@ -955,6 +960,40 @@  static void __init srso_calculations(bool hw_smt_enabled)
         setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 }
 
+/*
+ * Div leakage is specific to the AMD Zen1 microarchitecure.  Use STIBP as a
+ * heuristic to select between Zen1 and Zen2 uarches.
+ */
+static bool __init has_div_vuln(void)
+{
+    if ( !(boot_cpu_data.x86_vendor &
+           (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+        return false;
+
+    if ( (boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18) ||
+         !boot_cpu_has(X86_FEATURE_AMD_STIBP) )
+        return false;
+
+    return true;
+}
+
+static void __init div_calculations(bool hw_smt_enabled)
+{
+    bool cpu_bug_div = has_div_vuln();
+
+    if ( opt_div_scrub == -1 )
+        opt_div_scrub = cpu_bug_div;
+
+    if ( opt_div_scrub )
+        setup_force_cpu_cap(X86_FEATURE_SC_DIV);
+
+    if ( opt_smt == -1 && cpu_bug_div && hw_smt_enabled )
+        warning_add(
+            "Booted on leaky-DIV hardware with SMT/Hyperthreading\n"
+            "enabled.  Please assess your configuration and choose an\n"
+            "explicit 'smt=<bool>' setting.  See XSA-439.\n");
+}
+
 static void __init ibpb_calculations(void)
 {
     bool def_ibpb_entry = false;
@@ -1714,6 +1753,8 @@  void __init init_speculation_mitigations(void)
 
     ibpb_calculations();
 
+    div_calculations(hw_smt_enabled);
+
     /* Check whether Eager FPU should be enabled by default. */
     if ( opt_eager_fpu == -1 )
         opt_eager_fpu = should_use_eager_fpu();