
[2/3] x86/entry/32: Check for VM86 mode in slow-path check

Message ID 1532103744-31902-3-git-send-email-joro@8bytes.org (mailing list archive)
State New, archived

Commit Message

Joerg Roedel July 20, 2018, 4:22 p.m. UTC
From: Joerg Roedel <jroedel@suse.de>

The SWITCH_TO_KERNEL_STACK macro only checks for CPL == 0 to
go down the slow and paranoid entry path. The problem is
that this check also returns true when coming from VM86
mode. This is not a problem by itself, as the paranoid path
handles VM86 stack-frames just fine, but it is not necessary
as the normal code path handles VM86 mode as well (and
faster).

Extend the check to include VM86 mode. This also makes an
optimization of the paranoid path possible.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/entry/entry_32.S | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
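
For illustration, a minimal C sketch of the condition the new check
computes, assuming the usual values of the kernel constants
(X86_EFLAGS_VM is bit 17, SEGMENT_RPL_MASK and USER_RPL are 3); the
helper name is made up for the sketch:

#include <stdbool.h>
#include <stdint.h>

#define X86_EFLAGS_VM		(1u << 17)
#define SEGMENT_RPL_MASK	0x3u
#define USER_RPL		0x3u

/* Mirrors the movl/movb/andl/cmpl sequence in SWITCH_TO_KERNEL_STACK. */
static bool entry_from_kernel(uint32_t eflags, uint32_t cs)
{
	/* Mix the VM flag with the CS RPL bits, as the asm does. */
	uint32_t mixed = (eflags & X86_EFLAGS_VM) | (cs & SEGMENT_RPL_MASK);

	/* Below USER_RPL with VM clear means the entry came from CPL 0. */
	return mixed < USER_RPL;
}

Only the CPL-0 case takes the .Lentry_from_kernel_\@ branch; plain user
mode (RPL 3) and VM86 mode (VM bit set) both stay on the normal, faster
copy path.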

Comments

Pavel Machek July 21, 2018, 4:06 p.m. UTC | #1
Hi!

> The SWITCH_TO_KERNEL_STACK macro only checks for CPL == 0 to
> go down the slow and paranoid entry path. The problem is
> that this check also returns true when coming from VM86
> mode. This is not a problem by itself, as the paranoid path
> handles VM86 stack-frames just fine, but it is not necessary
> as the normal code path handles VM86 mode as well (and
> faster).
> 
> Extend the check to include VM86 mode. This also makes an
> optimization of the paranoid path possible.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/entry/entry_32.S | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
> index 010cdb4..2767c62 100644
> --- a/arch/x86/entry/entry_32.S
> +++ b/arch/x86/entry/entry_32.S
> @@ -414,8 +414,16 @@
>  	andl	$(0x0000ffff), PT_CS(%esp)
>  
>  	/* Special case - entry from kernel mode via entry stack */
> -	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
> -	jz	.Lentry_from_kernel_\@
> +#ifdef CONFIG_VM86
> +	movl	PT_EFLAGS(%esp), %ecx		# mix EFLAGS and CS
> +	movb	PT_CS(%esp), %cl
> +	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
> +#else
> +	movl	PT_CS(%esp), %ecx
> +	andl	$SEGMENT_RPL_MASK, %ecx
> +#endif
> +	cmpl	$USER_RPL, %ecx
> +	jb	.Lentry_from_kernel_\@

Would it make sense to jump to the slow path as we did before, and then
jump back if VM86 is detected?

Because VM86 is not really used often these days, and while the
partial-register moves result in short code, IIRC they will be rather
slow.

								Pavel
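
A hypothetical C sketch (not from the thread) of the alternative Pavel
describes: keep the cheap RPL-only test on the fast path and look at
EFLAGS.VM only after branching to the rarely taken kernel-entry path,
jumping back to the common path when the entry turns out to be from
VM86 mode:

#include <stdint.h>

#define X86_EFLAGS_VM		(1u << 17)
#define SEGMENT_RPL_MASK	0x3u

enum entry_path { COMMON_PATH, PARANOID_PATH };

static enum entry_path pick_path(uint32_t eflags, uint32_t cs)
{
	if (cs & SEGMENT_RPL_MASK)
		return COMMON_PATH;	/* fast path: plain user-mode entry */

	/* Slow path: RPL says CPL 0, but it may still be a VM86 frame. */
	if (eflags & X86_EFLAGS_VM)
		return COMMON_PATH;	/* bounce back: VM86 entry */

	return PARANOID_PATH;		/* real entry from kernel mode */
}

The trade-off is one extra, usually not-taken branch on the rarely used
VM86/kernel slow path, in exchange for keeping the original single testl
on the common path and avoiding the partial-register write (movb into
%cl after a movl into %ecx) that the comment is concerned about.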

Patch

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 010cdb4..2767c62 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -414,8 +414,16 @@ 
 	andl	$(0x0000ffff), PT_CS(%esp)
 
 	/* Special case - entry from kernel mode via entry stack */
-	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
-	jz	.Lentry_from_kernel_\@
+#ifdef CONFIG_VM86
+	movl	PT_EFLAGS(%esp), %ecx		# mix EFLAGS and CS
+	movb	PT_CS(%esp), %cl
+	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
+#else
+	movl	PT_CS(%esp), %ecx
+	andl	$SEGMENT_RPL_MASK, %ecx
+#endif
+	cmpl	$USER_RPL, %ecx
+	jb	.Lentry_from_kernel_\@
 
 	/* Bytes to copy */
 	movl	$PTREGS_SIZE, %ecx