
[1/3] x86/asm: Pin sensitive CR0 bits

Message ID 20190226233647.28547-2-keescook@chromium.org (mailing list archive)
State New, archived
Series x86/asm: More pinning

Commit Message

Kees Cook Feb. 26, 2019, 11:36 p.m. UTC
With the sensitive CR4 bits now pinned, the WP bit of CR0 may become a
target as well. Following the same reasoning as for the CR4 pinning, this
pins CR0's WP bit (though here it can be done with a static value).
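
For context, the CR4 pinning referenced above follows the same shape, but
with a mask established at boot rather than a compile-time constant. A
sketch of what that counterpart looks like; the mask name (cr4_pin) is
assumed here, not taken from this thread:

	/* Assumed name: a CR4 pin mask chosen at boot, not a constant. */
	extern unsigned long cr4_pin;

	static inline void native_write_cr4(unsigned long val)
	{
		bool warn = false;

	again:
		val |= cr4_pin;
		/* "+r" keeps the recheck below from being optimized away. */
		asm volatile("mov %0,%%cr4" : "+r" (val) : "m" (__force_order));
		if ((val & cr4_pin) != cr4_pin) {
			warn = true;
			goto again;
		}
		WARN_ONCE(warn, "Attempt to unpin cr4 bits, cr4 bypass attack?!\n");
	}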

As before, to convince the compiler not to optimize away the check for the
WP bit after the set, this marks "val" as an output of the asm() block.
This protects against jumping into the function past the point where the
masking happens: the mask must be checked after the set is performed. Due
to how the compiler can build this function (especially with frame
pointers removed), jumping into the middle of it frequently requires no
stack manipulation to construct a stack frame (there may be only a retq
without pops, which is sufficient for use with exploits like timer
overwrites).
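
To make the compiler behavior concrete, here is a minimal standalone
userspace sketch of the same "+r" trick; PIN_BIT, fake_reg, and
write_fake_reg are made-up stand-ins, not kernel code:

	#include <stdio.h>

	#define PIN_BIT	(1UL << 16)	/* stand-in for X86_CR0_WP */

	static unsigned long fake_reg;	/* stand-in for CR0 */

	static void write_fake_reg(unsigned long val)
	{
	again:
		val |= PIN_BIT;
		/*
		 * "+r" marks val as an output, so the compiler must assume
		 * the asm may have changed it and keep the test below. With
		 * val as a plain "r" input, (val & PIN_BIT) is provably set
		 * here and the recheck would be folded away.
		 */
		asm volatile("mov %0, %1" : "+r" (val), "=m" (fake_reg));
		if ((val & PIN_BIT) != PIN_BIT)
			goto again;
	}

	int main(void)
	{
		write_fake_reg(0x2);
		printf("fake_reg = %#lx\n", fake_reg);	/* prints 0x10002 */
		return 0;
	}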

Additionally, the WARN() is deferred until after the bit has been reset,
to minimize the window in which the bit could be left unset.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/special_insns.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

Comments

Solar Designer Feb. 27, 2019, 10:44 a.m. UTC | #1
On Tue, Feb 26, 2019 at 03:36:45PM -0800, Kees Cook wrote:
>  static inline void native_write_cr0(unsigned long val)
>  {
> -	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
> +	bool warn = false;
> +
> +again:
> +	val |= X86_CR0_WP;
> +	/*
> +	 * In order to have the compiler not optimize away the check
> +	 * in the WARN_ONCE(), mark "val" as being also an output ("+r")

This comment is now slightly out of date: the check is no longer "in the
WARN_ONCE()".  Ditto about the comment for CR4.

> +	 * by this asm() block so it will perform an explicit check, as
> +	 * if it were "volatile".
> +	 */
> +	asm volatile("mov %0,%%cr0": "+r" (val) : "m" (__force_order) : );
> +	/*
> +	 * If the MOV above was used directly as a ROP gadget we can
> +	 * notice the lack of pinned bits in "val" and start the function
> +	 * from the beginning to gain the WP bit for sure. And do it
> +	 * without first taking the exception for a WARN().
> +	 */
> +	if ((val & X86_CR0_WP) != X86_CR0_WP) {
> +		warn = true;
> +		goto again;
> +	}
> +	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
>  }

Alexander
Kees Cook Feb. 27, 2019, 7:45 p.m. UTC | #2
On Wed, Feb 27, 2019 at 2:44 AM Solar Designer <solar@openwall.com> wrote:
>
> On Tue, Feb 26, 2019 at 03:36:45PM -0800, Kees Cook wrote:
> >  static inline void native_write_cr0(unsigned long val)
> >  {
> > -     asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
> > +     bool warn = false;
> > +
> > +again:
> > +     val |= X86_CR0_WP;
> > +     /*
> > +      * In order to have the compiler not optimize away the check
> > +      * in the WARN_ONCE(), mark "val" as being also an output ("+r")
>
> This comment is now slightly out of date: the check is no longer "in the
> WARN_ONCE()".  Ditto about the comment for CR4.

Ah yes, good point. I will adjust and send a v2 series.

>
> > +      * by this asm() block so it will perform an explicit check, as
> > +      * if it were "volatile".
> > +      */
> > +     asm volatile("mov %0,%%cr0": "+r" (val) : "m" (__force_order) : );
> > +     /*
> > +      * If the MOV above was used directly as a ROP gadget we can
> > +      * notice the lack of pinned bits in "val" and start the function
> > +      * from the beginning to gain the WP bit for sure. And do it
> > +      * without first taking the exception for a WARN().
> > +      */
> > +     if ((val & X86_CR0_WP) != X86_CR0_WP) {
> > +             warn = true;
> > +             goto again;
> > +     }
> > +     WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
> >  }
>
> Alexander
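
For reference, one way the adjusted comment could read in v2, now that the
check lives in an explicit if () rather than inside the WARN_ONCE(); this
wording is an assumption, since the actual v2 text is not part of this
thread:

	/*
	 * In order to have the compiler not optimize away the recheck
	 * of "val" below, mark it as being also an output ("+r") of
	 * this asm() block, so an explicit test is performed on the
	 * post-MOV value, as if it were "volatile".
	 */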

Patch

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index fabda1400137..8416d6b31084 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -25,7 +25,28 @@  static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	bool warn = false;
+
+again:
+	val |= X86_CR0_WP;
+	/*
+	 * In order to have the compiler not optimize away the check
+	 * in the WARN_ONCE(), mark "val" as being also an output ("+r")
+	 * by this asm() block so it will perform an explicit check, as
+	 * if it were "volatile".
+	 */
+	asm volatile("mov %0,%%cr0": "+r" (val) : "m" (__force_order) : );
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the WP bit for sure. And do it
+	 * without first taking the exception for a WARN().
+	 */
+	if ((val & X86_CR0_WP) != X86_CR0_WP) {
+		warn = true;
+		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
 }
 
 static inline unsigned long native_read_cr2(void)
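
As a usage note on the applied patch: a caller entering native_write_cr0()
at the top can no longer clear WP, since the bit is ORed back in before the
MOV; the write silently keeps WP set and no WARN fires on that path. The
WARN_ONCE() triggers only when the masking was bypassed, e.g. by a ROP jump
directly to the MOV. A hypothetical illustration, not from the patch:

	unsigned long cr0 = native_read_cr0();

	native_write_cr0(cr0 & ~X86_CR0_WP);	/* WP stays set */
	/* (native_read_cr0() & X86_CR0_WP) is still non-zero here. */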