
x86/kvm: virt_xxx memory barriers instead of mandatory barriers

Message ID 1491904161-4099-1-git-send-email-wanpeng.li@hotmail.com (mailing list archive)
State New, archived

Commit Message

Wanpeng Li April 11, 2017, 9:49 a.m. UTC
From: Wanpeng Li <wanpeng.li@hotmail.com>

The virt_xxx memory barriers are implemented trivially on top of the
low-level __smp_xxx macros, and on x86, whose TSO memory model is
strong, __smp_xxx reduces to a compiler barrier. Mandatory barriers,
by contrast, unconditionally emit memory barrier instructions. This
patch therefore replaces the rmb() calls in kvm_steal_clock() with
virt_rmb().

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
 arch/x86/kernel/kvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Paolo Bonzini April 11, 2017, 2:20 p.m. UTC | #1
----- Original Message -----
> From: "Wanpeng Li" <kernellwp@gmail.com>
> To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
> Cc: "Paolo Bonzini" <pbonzini@redhat.com>, "Radim Krčmář" <rkrcmar@redhat.com>, "Wanpeng Li" <wanpeng.li@hotmail.com>
> Sent: Tuesday, April 11, 2017 5:49:21 PM
> Subject: [PATCH] x86/kvm: virt_xxx memory barriers instead of mandatory barriers
> 
> From: Wanpeng Li <wanpeng.li@hotmail.com>
> 
> The virt_xxx memory barriers are implemented trivially on top of the
> low-level __smp_xxx macros, and on x86, whose TSO memory model is
> strong, __smp_xxx reduces to a compiler barrier. Mandatory barriers,
> by contrast, unconditionally emit memory barrier instructions. This
> patch therefore replaces the rmb() calls in kvm_steal_clock() with
> virt_rmb().
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 14f65a5..da5c097 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
>  	src = &per_cpu(steal_time, cpu);
>  	do {
>  		version = src->version;
> -		rmb();
> +		virt_rmb();
>  		steal = src->steal;
> -		rmb();
> +		virt_rmb();
>  	} while ((version & 1) || (version != src->version));
>  
>  	return steal;
> --
> 2.7.4

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Radim Krčmář April 12, 2017, 7:04 p.m. UTC | #2
2017-04-11 02:49-0700, Wanpeng Li:
> From: Wanpeng Li <wanpeng.li@hotmail.com>
> 
> The virt_xxx memory barriers are implemented trivially on top of the
> low-level __smp_xxx macros, and on x86, whose TSO memory model is
> strong, __smp_xxx reduces to a compiler barrier. Mandatory barriers,
> by contrast, unconditionally emit memory barrier instructions. This
> patch therefore replaces the rmb() calls in kvm_steal_clock() with
> virt_rmb().
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> ---

Applied to kvm/queue, thanks.

Patch

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 14f65a5..da5c097 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
 	src = &per_cpu(steal_time, cpu);
 	do {
 		version = src->version;
-		rmb();
+		virt_rmb();
 		steal = src->steal;
-		rmb();
+		virt_rmb();
 	} while ((version & 1) || (version != src->version));
 
 	return steal;