
KVM: x86: fix kvm_vcpu_is_preempted

Message ID: 1641986380-10199-1-git-send-email-lirongqing@baidu.com
State: New, archived
Series: KVM: x86: fix kvm_vcpu_is_preempted

Commit Message

Li RongQing Jan. 12, 2022, 11:19 a.m. UTC
After adding support for paravirtualized TLB shootdowns, steal_time.preempted
carries not only KVM_VCPU_PREEMPTED but also KVM_VCPU_FLUSH_TLB, so
kvm_vcpu_is_preempted() should test only the KVM_VCPU_PREEMPTED bit.

Fixes: 858a43aae2367 ("KVM: X86: use paravirtualized TLB Shootdown")
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 arch/x86/kernel/kvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
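
For reference, the two flag bits packed into steal_time.preempted are defined
in arch/x86/include/uapi/asm/kvm_para.h as:

#define KVM_VCPU_PREEMPTED          (1 << 0)
#define KVM_VCPU_FLUSH_TLB          (1 << 1)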

Comments

Sean Christopherson Jan. 12, 2022, 4:43 p.m. UTC | #1
On Wed, Jan 12, 2022, Li RongQing wrote:
> After adding support for paravirtualized TLB shootdowns, steal_time.preempted
> carries not only KVM_VCPU_PREEMPTED but also KVM_VCPU_FLUSH_TLB, so
> kvm_vcpu_is_preempted() should test only the KVM_VCPU_PREEMPTED bit.
> 
> Fixes: 858a43aae2367 ("KVM: X86: use paravirtualized TLB Shootdown")
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 59abbda..a9202d9 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1025,8 +1025,8 @@ asm(
>  ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
>  "__raw_callee_save___kvm_vcpu_is_preempted:"
>  "movq	__per_cpu_offset(,%rdi,8), %rax;"
> -"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> -"setne	%al;"
> +"movb	" __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
> +"andb	$" __stringify(KVM_VCPU_PREEMPTED) ", %al;"

Eww, the existing code is sketchy.  It relies on the compiler to store _Bool/bool
in a single byte since %rax may be non-zero from the __per_cpu_offset(), and
modifying %al doesn't zero %rax[63:8].  I doubt gcc or clang use anything but a
single byte on x86-64, but "andl" is just as cheap so I don't see any harm in
being paranoid.

>  "ret;"
>  ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
>  ".popsection");
> -- 
> 2.9.4
>
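
A minimal sketch of the more paranoid "andl" variant Sean suggests (illustrative
only, not necessarily the code that was eventually merged; it relies on the
32-bit andl both masking off the stale upper bytes of the per-CPU offset left
in %eax and zero-extending the result into %rax[63:32]):

"movq	__per_cpu_offset(,%rdi,8), %rax;"
"movb	" __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
"andl	$" __stringify(KVM_VCPU_PREEMPTED) ", %eax;"
"ret;"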
Li RongQing Jan. 13, 2022, 8:06 a.m. UTC | #2
> -----Original Message-----
> From: Sean Christopherson <seanjc@google.com>
> Sent: January 13, 2022 0:44
> To: Li,Rongqing <lirongqing@baidu.com>
> Cc: pbonzini@redhat.com; vkuznets@redhat.com; wanpengli@tencent.com;
> jmattson@google.com; tglx@linutronix.de; mingo@redhat.com; bp@alien8.de;
> x86@kernel.org; hpa@zytor.com; rkrcmar@redhat.com; kvm@vger.kernel.org;
> joro@8bytes.org
> Subject: Re: [PATCH] KVM: x86: fix kvm_vcpu_is_preempted
> 
> On Wed, Jan 12, 2022, Li RongQing wrote:
> > After adding support for paravirtualized TLB shootdowns, steal_time.preempted
> > carries not only KVM_VCPU_PREEMPTED but also KVM_VCPU_FLUSH_TLB, so
> > kvm_vcpu_is_preempted() should test only the KVM_VCPU_PREEMPTED bit.
> >
> > Fixes: 858a43aae2367 ("KVM: X86: use paravirtualized TLB Shootdown")
> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
> >  arch/x86/kernel/kvm.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index
> > 59abbda..a9202d9 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -1025,8 +1025,8 @@ asm(
> >  ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
> >  "__raw_callee_save___kvm_vcpu_is_preempted:"
> >  "movq	__per_cpu_offset(,%rdi,8), %rax;"
> > -"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> > -"setne	%al;"
> > +"movb	" __stringify(KVM_STEAL_TIME_preempted)
> "+steal_time(%rax), %al;"
> > +"andb	$" __stringify(KVM_VCPU_PREEMPTED) ", %al;"
> 
> Eww, the existing code is sketchy.  It relies on the compiler to store _Bool/bool
> in a single byte since %rax may be non-zero from the __per_cpu_offset(), and
> modifying %al doesn't zero %rax[63:8].  I doubt gcc or clang use anything but a
> single byte on x86-64, but "andl" is just as cheap so I don't see any harm in being
> paranoid.
> 
Built with gcc (GCC) 7.3.0, the function disassembles as below.

before:
ffffffff8105bdc0 <__raw_callee_save___kvm_vcpu_is_preempted>:
ffffffff8105bdc0:       48 8b 04 fd c0 16 1d    mov    -0x7de2e940(,%rdi,8),%rax
ffffffff8105bdc7:       82
ffffffff8105bdc8:       80 b8 90 c0 02 00 00    cmpb   $0x0,0x2c090(%rax)
ffffffff8105bdcf:       0f 95 c0                setne  %al
ffffffff8105bdd2:       c3                      retq
ffffffff8105bdd3:       0f 1f 00                nopl   (%rax)
ffffffff8105bdd6:       66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)
ffffffff8105bddd:       00 00 00


after:
ffffffff8105bdc0 <__raw_callee_save___kvm_vcpu_is_preempted>:
ffffffff8105bdc0:       48 8b 04 fd c0 16 1d    mov    -0x7de2e940(,%rdi,8),%rax
ffffffff8105bdc7:       82
ffffffff8105bdc8:       8a 80 90 c0 02 00       mov    0x2c090(%rax),%al
ffffffff8105bdce:       24 01                   and    $0x1,%al
ffffffff8105bdd0:       c3                      retq
ffffffff8105bdd1:       0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
ffffffff8105bdd6:       66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)
ffffffff8105bddd:       00 00 00

Thanks

-Li
Peter Zijlstra Jan. 13, 2022, 10 a.m. UTC | #3
On Wed, Jan 12, 2022 at 04:43:39PM +0000, Sean Christopherson wrote:
> On Wed, Jan 12, 2022, Li RongQing wrote:
> > After adding support for paravirtualized TLB shootdowns, steal_time.preempted
> > carries not only KVM_VCPU_PREEMPTED but also KVM_VCPU_FLUSH_TLB, so
> > kvm_vcpu_is_preempted() should test only the KVM_VCPU_PREEMPTED bit.
> > 
> > Fixes: 858a43aae2367 ("KVM: X86: use paravirtualized TLB Shootdown")
> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
> >  arch/x86/kernel/kvm.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 59abbda..a9202d9 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -1025,8 +1025,8 @@ asm(
> >  ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
> >  "__raw_callee_save___kvm_vcpu_is_preempted:"
> >  "movq	__per_cpu_offset(,%rdi,8), %rax;"
> > -"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> > -"setne	%al;"
> > +"movb	" __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
> > +"andb	$" __stringify(KVM_VCPU_PREEMPTED) ", %al;"
> 
> Eww, the existing code is sketchy.  It relies on the compiler to store _Bool/bool
> in a single byte since %rax may be non-zero from the __per_cpu_offset(), and
> modifying %al doesn't zero %rax[63:8].  I doubt gcc or clang use anything but a
> single byte on x86-64, but "andl" is just as cheap so I don't see any harm in
> being paranoid.

Agreed, better to clear the rest of rax just to be safe.
Paolo Bonzini Jan. 13, 2022, 11:03 a.m. UTC | #4
On 1/12/22 12:19, Li RongQing wrote:
> After adding support for paravirtualized TLB shootdowns, steal_time.preempted
> carries not only KVM_VCPU_PREEMPTED but also KVM_VCPU_FLUSH_TLB, so
> kvm_vcpu_is_preempted() should test only the KVM_VCPU_PREEMPTED bit.

The combination of PREEMPTED=0,FLUSH_TLB=1 is invalid and can only
happen if the guest malfunctions (which it doesn't: it uses cmpxchg to
set KVM_VCPU_PREEMPTED); the host only does an xchg with 0 as the new
value.  Since this is guest code, this patch does not fix an actual
error in the code, does it?

Paolo

> Fixes: 858a43aae2367 ("KVM: X86: use paravirtualized TLB Shootdown")
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>   arch/x86/kernel/kvm.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 59abbda..a9202d9 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1025,8 +1025,8 @@ asm(
>   ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
>   "__raw_callee_save___kvm_vcpu_is_preempted:"
>   "movq	__per_cpu_offset(,%rdi,8), %rax;"
> -"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> -"setne	%al;"
> +"movb	" __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
> +"andb	$" __stringify(KVM_VCPU_PREEMPTED) ", %al;"
>   "ret;"
>   ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
>   ".popsection");
Li RongQing Jan. 14, 2022, 9:49 a.m. UTC | #5
> -----Original Message-----
> The combination of PREEMPTED=0,FLUSH_TLB=1 is invalid and can only
> happen if the guest malfunctions (which it doesn't: it uses cmpxchg to set
> KVM_VCPU_PREEMPTED); the host only does an xchg with 0 as the new value.
> Since this is guest code, this patch does not fix an actual error in the code,
> does it?
> 

You are right, this is not an issue in practice.
Thanks

-Li

Patch

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 59abbda..a9202d9 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1025,8 +1025,8 @@ asm(
 ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
 "__raw_callee_save___kvm_vcpu_is_preempted:"
 "movq	__per_cpu_offset(,%rdi,8), %rax;"
-"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
-"setne	%al;"
+"movb	" __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
+"andb	$" __stringify(KVM_VCPU_PREEMPTED) ", %al;"
 "ret;"
 ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
 ".popsection");