
[v2,04/14] x86/traps: Implement #CP handler and extend #PF for shadow stacks

Message ID 20200527191847.17207-5-andrew.cooper3@citrix.com (mailing list archive)
State New, archived
Series x86: Support for CET Supervisor Shadow Stacks

Commit Message

Andrew Cooper May 27, 2020, 7:18 p.m. UTC
For now, any #CP exception or shadow stack #PF indicates a bug in Xen, but
attempt to recover from #CP if taken in guest context.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Rebase over #PF[Rsvd] rework.
 * Alignment for PFEC_shstk.
 * Use more X86_EXC_* names.
---
 xen/arch/x86/traps.c            | 46 +++++++++++++++++++++++++++++++++++++++--
 xen/arch/x86/x86_64/entry.S     |  7 ++++++-
 xen/include/asm-x86/processor.h |  2 ++
 3 files changed, 52 insertions(+), 3 deletions(-)

Comments

Jan Beulich May 28, 2020, 12:03 p.m. UTC | #1
On 27.05.2020 21:18, Andrew Cooper wrote:
> For now, any #CP exception or shadow stack #PF indicates a bug in Xen, but
> attempt to recover from #CP if taken in guest context.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one more question and a suggestion:

> @@ -1445,8 +1447,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>       *
>       * Anything remaining is an error, constituting corruption of the
>       * pagetables and probably an L1TF vulnerable gadget.
> +     *
> +     * Any shadow stack access fault is a bug in Xen.
>       */
> -    if ( error_code & PFEC_reserved_bit )
> +    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
>          goto fatal;

Wouldn't your saying "any" imply putting this new check higher up, in
particular ahead of the call to fixup_page_fault()?

> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>          entrypoint 1b
>  
>          /* Reserved exceptions, heading towards do_reserved_trap(). */
> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)

Adding yet another || here adds to the fragility of the entire
construct. Wouldn't it be better to implement do_entry_VE on
this occasion, even if its handling continues to end up in
do_reserved_trap()? This would have the benefit of avoiding the
pointless checking of %spl first thing in its handling. Feel
free to keep the R-b if you decide to go this route.

Jan
Andrew Cooper May 28, 2020, 1:22 p.m. UTC | #2
On 28/05/2020 13:03, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> For now, any #CP exception or shadow stack #PF indicates a bug in Xen, but
>> attempt to recover from #CP if taken in guest context.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one more question and a suggestion:
>
>> @@ -1445,8 +1447,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>>       *
>>       * Anything remaining is an error, constituting corruption of the
>>       * pagetables and probably an L1TF vulnerable gadget.
>> +     *
>> +     * Any shadow stack access fault is a bug in Xen.
>>       */
>> -    if ( error_code & PFEC_reserved_bit )
>> +    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
>>          goto fatal;
> Wouldn't your saying "any" imply putting this new check higher up, in
> particular ahead of the call to fixup_page_fault()?

Can do.
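
Something like the following (a sketch only, not the eventual delta; it
assumes do_page_fault()'s existing "fatal" label, its local addr and
error_code variables, and the fixup_page_fault(addr, regs) call
mentioned above):

    /*
     * Any reserved bit, or shadow stack access fault, is a bug in Xen,
     * so bail out before attempting any fixup.
     */
    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
        goto fatal;

    if ( unlikely(fixup_page_fault(addr, regs) != 0) )
        return;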

>
>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>          entrypoint 1b
>>  
>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
>> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
> Adding yet another || here adds to the fragility of the entire
>> construct. Wouldn't it be better to implement do_entry_VE on
>> this occasion, even if its handling continues to end up in
> do_reserved_trap()? This would have the benefit of avoiding the
> pointless checking of %spl first thing in its handling. Feel
> free to keep the R-b if you decide to go this route.

I actually have a different plan, which deletes this entire clause, and
simplifies our autogen sanity checking somewhat.

For vectors which Xen has no implementation of (for whatever reason),
use DPL0, non-present descriptors, and redirect #NP[IDT] into
do_reserved_trap().

No need for any entry logic for the trivially fatal paths, or safety
against being uncertain about error codes.
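
As a rough illustration of the idea only (hypothetical; no such
dedicated handler exists today, and the function name below is made
up), the #NP side could look something like:

void do_segment_not_present(struct cpu_user_regs *regs)
{
    /*
     * Bit 1 of the #NP error code is the IDT flag: the referenced
     * descriptor is an IDT entry, i.e. one of the unimplemented
     * vectors installed as a DPL0, not-present gate.
     */
    if ( regs->error_code & (1u << 1) )
    {
        do_reserved_trap(regs);
        return;
    }

    /* Normal #NP handling otherwise. */
}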

However, it's a little too close to 4.14 to clean this up now.

~Andrew
Jan Beulich May 28, 2020, 1:31 p.m. UTC | #3
On 28.05.2020 15:22, Andrew Cooper wrote:
> On 28/05/2020 13:03, Jan Beulich wrote:
>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>>          entrypoint 1b
>>>  
>>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>>> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
>>> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
>> Adding yet another || here adds to the fragility of the entire
>>> construct. Wouldn't it be better to implement do_entry_VE on
>>> this occasion, even if its handling continues to end up in
>> do_reserved_trap()? This would have the benefit of avoiding the
>> pointless checking of %spl first thing in its handling. Feel
>> free to keep the R-b if you decide to go this route.
> 
> I actually have a different plan, which deletes this entire clause, and
> simplifies our autogen sanity checking somewhat.
> 
> For vectors which Xen has no implementation of (for whatever reason),
> use DPL0, non-present descriptors, and redirect #NP[IDT] into
> do_reserved_trap().

Except that, #NP itself being a contributory exception, if the
exception covered this way is also contributory (e.g. #CP) or of
page-fault class (e.g. #VE), we'd get #DF instead of #NP afaict.

Jan
Andrew Cooper May 29, 2020, 6:50 p.m. UTC | #4
On 28/05/2020 14:31, Jan Beulich wrote:
> On 28.05.2020 15:22, Andrew Cooper wrote:
>> On 28/05/2020 13:03, Jan Beulich wrote:
>>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>>>          entrypoint 1b
>>>>  
>>>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>>>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>>>> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
>>>> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
>>> Adding yet another || here adds to the fragility of the entire
>>> construct. Wouldn't it be better to implement do_entry_VE at
>>> this occasion, even its handling continues to end up in
>>> do_reserved_trap()? This would have the benefit of avoiding the
>>> pointless checking of %spl first thing in its handling. Feel
>>> free to keep the R-b if you decide to go this route.
>> I actually have a different plan, which deletes this entire clause, and
>> simplifies our autogen sanity checking somewhat.
>>
>> For vectors which Xen has no implementation of (for whatever reason),
>> use DPL0, non-present descriptors, and redirect #NP[IDT] into
>> do_reserved_trap().
> Except that, #NP itself being a contributory exception, if the
> exception covered this way is also contributory (e.g. #CP) or of
> page-fault class (e.g. #VE), we'd get #DF instead of #NP afaict.

Hmm.  Good point.

I also had some other cleanup plans.  (In due course,) I'll see what I
can do to make the status quo better.

~Andrew

Patch

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index eeb3e146ef..90da787ee2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -156,7 +156,9 @@  void (* const exception_table[TRAP_nr])(struct cpu_user_regs *regs) = {
     [TRAP_alignment_check]              = do_trap,
     [TRAP_machine_check]                = (void *)do_machine_check,
     [TRAP_simd_error]                   = do_trap,
-    [TRAP_virtualisation ...
+    [TRAP_virtualisation]               = do_reserved_trap,
+    [X86_EXC_CP]                        = do_entry_CP,
+    [X86_EXC_CP + 1 ...
      (ARRAY_SIZE(exception_table) - 1)] = do_reserved_trap,
 };
 
@@ -1445,8 +1447,10 @@  void do_page_fault(struct cpu_user_regs *regs)
      *
      * Anything remaining is an error, constituting corruption of the
      * pagetables and probably an L1TF vulnerable gadget.
+     *
+     * Any shadow stack access fault is a bug in Xen.
      */
-    if ( error_code & PFEC_reserved_bit )
+    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
         goto fatal;
 
     if ( unlikely(!guest_mode(regs)) )
@@ -1898,6 +1902,43 @@  void do_debug(struct cpu_user_regs *regs)
     pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
 }
 
+void do_entry_CP(struct cpu_user_regs *regs)
+{
+    static const char errors[][10] = {
+        [1] = "near ret",
+        [2] = "far/iret",
+        [3] = "endbranch",
+        [4] = "rstorssp",
+        [5] = "setssbsy",
+    };
+    const char *err = "??";
+    unsigned int ec = regs->error_code;
+
+    if ( debugger_trap_entry(TRAP_debug, regs) )
+        return;
+
+    /* Decode ec if possible */
+    if ( ec < ARRAY_SIZE(errors) && errors[ec][0] )
+        err = errors[ec];
+
+    /*
+     * For now, only supervisor shadow stacks should be active.  A #CP from
+     * guest context is probably a Xen bug, but kill the guest in an attempt
+     * to recover.
+     */
+    if ( guest_mode(regs) )
+    {
+        gprintk(XENLOG_ERR, "Hit #CP[%04x] in guest context %04x:%p\n",
+                ec, regs->cs, _p(regs->rip));
+        ASSERT_UNREACHABLE();
+        domain_crash(current->domain);
+        return;
+    }
+
+    show_execution_state(regs);
+    panic("CONTROL-FLOW PROTECTION FAULT: #CP[%04x] %s\n", ec, err);
+}
+
 static void __init noinline __set_intr_gate(unsigned int n,
                                             uint32_t dpl, void *addr)
 {
@@ -1987,6 +2028,7 @@  void __init init_idt_traps(void)
     set_intr_gate(TRAP_alignment_check,&alignment_check);
     set_intr_gate(TRAP_machine_check,&machine_check);
     set_intr_gate(TRAP_simd_error,&simd_coprocessor_error);
+    set_intr_gate(X86_EXC_CP, entry_CP);
 
     /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
     enable_each_ist(idt_table);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index d55453f3f3..f7ee3dce91 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -795,6 +795,10 @@  ENTRY(alignment_check)
         movl  $TRAP_alignment_check,4(%rsp)
         jmp   handle_exception
 
+ENTRY(entry_CP)
+        movl  $X86_EXC_CP, 4(%rsp)
+        jmp   handle_exception
+
 ENTRY(double_fault)
         movl  $TRAP_double_fault,4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
@@ -940,7 +944,8 @@  autogen_stubs: /* Automatically generated stubs. */
         entrypoint 1b
 
         /* Reserved exceptions, heading towards do_reserved_trap(). */
-        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
+        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
+                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
 
 1:      test  $8,%spl        /* 64bit exception frames are 16 byte aligned, but the word */
         jz    2f             /* size is 8 bytes.  Check whether the processor gave us an */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 96deac73ed..c2b9dc1ac0 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -68,6 +68,7 @@ 
 #define PFEC_reserved_bit   (_AC(1,U) << 3)
 #define PFEC_insn_fetch     (_AC(1,U) << 4)
 #define PFEC_prot_key       (_AC(1,U) << 5)
+#define PFEC_shstk          (_AC(1,U) << 6)
 #define PFEC_arch_mask      (_AC(0xffff,U)) /* Architectural PFEC values. */
 /* Internally used only flags. */
 #define PFEC_page_paged     (1U<<16)
@@ -530,6 +531,7 @@  DECLARE_TRAP_HANDLER(coprocessor_error);
 DECLARE_TRAP_HANDLER(simd_coprocessor_error);
 DECLARE_TRAP_HANDLER_CONST(machine_check);
 DECLARE_TRAP_HANDLER(alignment_check);
+DECLARE_TRAP_HANDLER(entry_CP);
 
 DECLARE_TRAP_HANDLER(entry_int82);