
[v3,2/6] x86/entry_64: Add VERW just before userspace transition

Message ID: 20231025-delay-verw-v3-2-52663677ee35@linux.intel.com
State: New, archived
Series: Delay VERW

Commit Message

Pawan Gupta Oct. 25, 2023, 8:52 p.m. UTC
The mitigation for MDS is to use the VERW instruction to clear any
secrets in CPU buffers. Data from memory accesses executed after VERW
can still remain in the CPU buffers, so it is safer to execute VERW
late in the return-to-user path to minimize the window in which kernel
data can end up in the CPU buffers. There are not many kernel secrets
to be had after SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after the user register
state is restored. This helps minimize the chances of kernel data
ending up in the CPU buffers after VERW has executed.

Note that the mitigation at the new location is not yet enabled.
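
For reference, the CLEAR_CPU_BUFFERS helper used below is introduced
earlier in this series. A minimal sketch of such a VERW helper, assuming
the feature-flag and selector names used elsewhere in the series (neither
is defined in this patch; mds_verw_sel would be a word in memory holding
a valid selector for VERW to operate on):

  .macro CLEAR_CPU_BUFFERS
  	/* Patched in only on affected CPUs; a no-op otherwise. */
  	ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
  .endm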

  Corner case not handled
  =======================
  Interrupts returning to kernel don't clear the CPU buffers, since the
  exit-to-user path is expected to do that anyway. But there could be a
  case where an NMI hits the kernel after the exit-to-user path has
  already cleared the buffers. This case is not handled, and NMIs
  returning to kernel don't clear the CPU buffers, because:

  1. It is rare to get an NMI after VERW, but before returning to
     userspace.
  2. For an unprivileged user, there is no known way to make that NMI
     less rare or target it.
  3. It would take a large number of these precisely-timed NMIs to mount
     an actual attack.  There's presumably not enough bandwidth.
  4. The NMI in question occurs after VERW, i.e. when user state is
     restored and most interesting data is already scrubbed. What's left
     is only the data that the NMI touches, and that may or may not be
     of any interest.

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
 arch/x86/entry/entry_64.S        | 11 +++++++++++
 arch/x86/entry/entry_64_compat.S |  1 +
 2 files changed, 12 insertions(+)

Comments

Nikolay Borisov Oct. 26, 2023, 4:25 p.m. UTC | #1
On 25.10.23 23:52, Pawan Gupta wrote:

<snip>

> @@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
>   	UNWIND_HINT_END_OF_STACK
>   	ENDBR
>   	mov	$-ENOSYS, %eax
> +	CLEAR_CPU_BUFFERS

nit: Just out of curiosity, is it really needed in this case, or is it
done for the sake of uniformity so that all ring-3 transitions are
covered?

>   	sysretl
>   SYM_CODE_END(ignore_sysret)
>   #endif
> diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
> index 70150298f8bd..245697eb8485 100644
> --- a/arch/x86/entry/entry_64_compat.S
> +++ b/arch/x86/entry/entry_64_compat.S
> @@ -271,6 +271,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
>   	xorl	%r9d, %r9d
>   	xorl	%r10d, %r10d
>   	swapgs
> +	CLEAR_CPU_BUFFERS
>   	sysretl
>   SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
>   	ANNOTATE_NOENDBR
>
Pawan Gupta Oct. 26, 2023, 7:29 p.m. UTC | #2
On Thu, Oct 26, 2023 at 07:25:27PM +0300, Nikolay Borisov wrote:
> 
> 
> On 25.10.23 23:52, Pawan Gupta wrote:
> 
> <snip>
> 
> > @@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
> >   	UNWIND_HINT_END_OF_STACK
> >   	ENDBR
> >   	mov	$-ENOSYS, %eax
> > +	CLEAR_CPU_BUFFERS
> 
> nit: Just out of curiosity, is it really needed in this case, or is it
> done for the sake of uniformity so that all ring-3 transitions are
> covered?

Interrupts returning to kernel don't clear the CPU buffers. I believe
interrupts will be enabled here, and getting an interrupt here could
leak the data that the interrupt touched.
Dave Hansen Oct. 26, 2023, 7:40 p.m. UTC | #3
On 10/26/23 12:29, Pawan Gupta wrote:
> On Thu, Oct 26, 2023 at 07:25:27PM +0300, Nikolay Borisov wrote:
>> On 25.10.23 23:52, Pawan Gupta wrote:
>>> @@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
>>>   	UNWIND_HINT_END_OF_STACK
>>>   	ENDBR
>>>   	mov	$-ENOSYS, %eax
>>> +	CLEAR_CPU_BUFFERS
>> nit: Just out of curiosity, is it really needed in this case, or is it
>> done for the sake of uniformity so that all ring-3 transitions are
>> covered?
> Interrupts returning to kernel don't clear the CPU buffers. I believe
> interrupts will be enabled here, and getting an interrupt here could
> leak the data that the interrupt touched.

Specifically NMIs, right?

X86_EFLAGS_IF should be clear here.
Pawan Gupta Oct. 26, 2023, 9:15 p.m. UTC | #4
On Thu, Oct 26, 2023 at 12:40:49PM -0700, Dave Hansen wrote:
> On 10/26/23 12:29, Pawan Gupta wrote:
> > On Thu, Oct 26, 2023 at 07:25:27PM +0300, Nikolay Borisov wrote:
> >> On 25.10.23 23:52, Pawan Gupta wrote:
> >>> @@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
> >>>   	UNWIND_HINT_END_OF_STACK
> >>>   	ENDBR
> >>>   	mov	$-ENOSYS, %eax
> >>> +	CLEAR_CPU_BUFFERS
> >> nit: Just out of curiosity, is it really needed in this case, or is it
> >> done for the sake of uniformity so that all ring-3 transitions are
> >> covered?
> > Interrupts returning to kernel don't clear the CPU buffers. I believe
> > interrupts will be enabled here, and getting an interrupt here could
> > leak the data that the interrupt touched.
> 
> Specifically NMIs, right?

Yes, and VERW can be omitted here for the same reasons as for NMIs
returning to kernel.

> X86_EFLAGS_IF should be clear here.

I see that SYSCALL has a configuration for IF, but I didn't see one for
SYSENTER in the code. Looking at the SDM, though, SYSENTER clears IF by
default.

syscall_init()
{
...
#else
	wrmsrl_cstar((unsigned long)ignore_sysret);
	wrmsrl_safe(MSR_IA32_SYSENTER_CS, (u64)GDT_ENTRY_INVALID_SEG);
	wrmsrl_safe(MSR_IA32_SYSENTER_ESP, 0ULL);
	wrmsrl_safe(MSR_IA32_SYSENTER_EIP, 0ULL);
#endif

	/*
	 * Flags to clear on syscall; clear as much as possible
	 * to minimize user space-kernel interference.
	 */
	wrmsrl(MSR_SYSCALL_MASK,
	       X86_EFLAGS_CF|X86_EFLAGS_PF|X86_EFLAGS_AF|
	       X86_EFLAGS_ZF|X86_EFLAGS_SF|X86_EFLAGS_TF|
	       X86_EFLAGS_IF|X86_EFLAGS_DF|X86_EFLAGS_OF|
	       X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_RF|
	       X86_EFLAGS_AC|X86_EFLAGS_ID);
Pawan Gupta Oct. 26, 2023, 10:13 p.m. UTC | #5
On Thu, Oct 26, 2023 at 02:15:11PM -0700, Pawan Gupta wrote:
> On Thu, Oct 26, 2023 at 12:40:49PM -0700, Dave Hansen wrote:
> > On 10/26/23 12:29, Pawan Gupta wrote:
> > > On Thu, Oct 26, 2023 at 07:25:27PM +0300, Nikolay Borisov wrote:
> > >> On 25.10.23 23:52, Pawan Gupta wrote:
> > >>> @@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
> > >>>   	UNWIND_HINT_END_OF_STACK
> > >>>   	ENDBR
> > >>>   	mov	$-ENOSYS, %eax
> > >>> +	CLEAR_CPU_BUFFERS
> > >> nit: Just out of curiosity, is it really needed in this case, or is it
> > >> done for the sake of uniformity so that all ring-3 transitions are
> > >> covered?
> > > Interrupts returning to kernel don't clear the CPU buffers. I believe
> > > interrupts will be enabled here, and getting an interrupt here could
> > > leak the data that the interrupt touched.
> > 
> > Specifically NMIs, right?
> 
> Yes, and VERW can be omitted here for the same reasons as for NMIs
> returning to kernel.

Thinking more about this, we should not omit VERW here, as this spot is
much easier to target with NMIs: a user executing SYSENTER in a loop has
a much higher chance of causing an NMI to return to kernel and skip
VERW. See the annotated sketch below.
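
A sketch of the window that remains on this path even with VERW in
place (the instructions are from the hunk below; the annotations are
illustrative, not part of the patch):

  SYM_CODE_START(ignore_sysret)
  	UNWIND_HINT_END_OF_STACK
  	ENDBR
  	mov	$-ENOSYS, %eax
  	CLEAR_CPU_BUFFERS	/* buffers scrubbed here */
  				/* an NMI landing here returns to
  				 * kernel without VERW, so only the
  				 * data that NMI touches is exposed */
  	sysretl
  SYM_CODE_END(ignore_sysret)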
Dave Hansen Oct. 26, 2023, 10:17 p.m. UTC | #6
On 10/26/23 15:13, Pawan Gupta wrote:
>>>> Interrupts returning to kernel don't clear the CPU buffers. I believe
>>>> interrupts will be enabled here, and getting an interrupt here could
>>>> leak the data that the interrupt touched.
>>> Specifically NMIs, right?
>> Yes, and VERW can be omitted here for the same reasons as for NMIs
>> returning to kernel.
> Thinking more about this, we should not omit VERW here, as this spot is
> much easier to target with NMIs: a user executing SYSENTER in a loop has
> a much higher chance of causing an NMI to return to kernel and skip
> VERW.

Right.

This is also a path where we care *ZERO* about performance.  It's
basically all upside to _add_ VERW and all downside (increased attack
surface) to skip it.

Patch

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 43606de22511..9f97a8bd11e8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -223,6 +223,7 @@ syscall_return_via_sysret:
 SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretq
 SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
@@ -663,6 +664,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 	/* Restore RDI. */
 	popq	%rdi
 	swapgs
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnative_iret
 
 
@@ -774,6 +776,8 @@ native_irq_return_ldt:
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1502,6 +1506,12 @@ nmi_restore:
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction.  We are returning to kernel mode, so this
@@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
 	UNWIND_HINT_END_OF_STACK
 	ENDBR
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(ignore_sysret)
 #endif
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 70150298f8bd..245697eb8485 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -271,6 +271,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR