KVM: VMX: replace move_msr_up with swap macro

Message ID 20171103225819.GA3482@embeddedor.com (mailing list archive)
State New, archived

Commit Message

Gustavo A. R. Silva Nov. 3, 2017, 10:58 p.m. UTC
The function move_msr_up() is used to _manually_ swap MSR entries in the MSR array.
The function can be removed and replaced with the swap() macro instead.

This pattern was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
---
The new lines are over 80 characters, but I think in this case that is
preferable to splitting them.

 arch/x86/kvm/vmx.c | 24 ++++++------------------
 1 file changed, 6 insertions(+), 18 deletions(-)
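
For reference, the swap() macro the commit message refers to lives in
include/linux/kernel.h and, at the time of this patch, has the usual
do/typeof/temporary shape, roughly as follows (paraphrased here; check the
tree you are building against):

    /*
     * swap - exchange the values of @a and @b through a temporary of the
     * same type.  Note that each argument appears more than once in the
     * expansion, so side effects in the arguments run more than once.
     */
    #define swap(a, b) \
            do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)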

Comments

Paolo Bonzini Nov. 4, 2017, 5:29 p.m. UTC | #1
----- Original Message -----
> From: "Gustavo A. R. Silva" <garsilva@embeddedor.com>
> To: "Paolo Bonzini" <pbonzini@redhat.com>, "Radim Krčmář" <rkrcmar@redhat.com>, "Thomas Gleixner"
> <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>, x86@kernel.org
> Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "Gustavo A. R. Silva" <garsilva@embeddedor.com>
> Sent: Friday, November 3, 2017 11:58:19 PM
> Subject: [PATCH] KVM: VMX: replace move_msr_up with swap macro
> 
> Function move_msr_up is used to _manually_ swap MSR entries in MSR array.
> This function can be removed and replaced using the swap macro instead.
> 
> This code was detected with the help of Coccinelle.

I think move_msr_up should instead be changed into a function like

   void mark_msr_for_save(struct vcpu_vmx *vmx, int index)
   {
       swap(vmx->guest_msrs[index], vmx->guest_msrs[vmx->save_nmsrs]);
       vmx->save_nmsrs++;
   }

Using swap() is useful, but it also hides exactly what is going on
(in addition, using ++ inside a macro argument might be asking for
trouble).
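
To make the concern about ++ inside a macro argument concrete, here is a
minimal, self-contained sketch. It uses a toy SWAP macro with the same
do/typeof/temporary shape as the kernel's swap(); it is plain GNU C
(__typeof__ is a GNU extension), not kernel code:

    #include <stdio.h>

    /*
     * Toy swap with the usual do/typeof/temporary shape: each argument
     * appears twice in the expansion, so its side effects run twice.
     */
    #define SWAP(a, b) \
            do { __typeof__(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)

    int main(void)
    {
            int arr[4] = { 10, 20, 30, 40 };
            int n = 1;

            /*
             * Intent: swap arr[3] with arr[1] and leave n == 2.
             * Actual: arr[n++] is expanded twice, so n ends up at 3 and
             * the two expansions index different elements.
             */
            SWAP(arr[3], arr[n++]);

            printf("n = %d, arr = { %d, %d, %d, %d }\n",
                   n, arr[0], arr[1], arr[2], arr[3]);
            return 0;
    }

With a swap() of that shape, the save_nmsrs++ in the proposed call sites
would likewise be evaluated twice, which is why wrapping the swap in a small
helper, as suggested above, keeps the increment out of the macro arguments.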

Paolo

> 
> Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
> ---
> The new lines are over 80 characters, but I think in this case that is
> preferable over splitting them.
> 
>  arch/x86/kvm/vmx.c | 24 ++++++------------------
>  1 file changed, 6 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e6c8ffa..210e491 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2544,18 +2544,6 @@ static bool vmx_invpcid_supported(void)
>  	return cpu_has_vmx_invpcid() && enable_ept;
>  }
>  
> -/*
> - * Swap MSR entry in host/guest MSR entry array.
> - */
> -static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
> -{
> -	struct shared_msr_entry tmp;
> -
> -	tmp = vmx->guest_msrs[to];
> -	vmx->guest_msrs[to] = vmx->guest_msrs[from];
> -	vmx->guest_msrs[from] = tmp;
> -}
> -
>  static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long *msr_bitmap;
> @@ -2600,28 +2588,28 @@ static void setup_msrs(struct vcpu_vmx *vmx)
>  	if (is_long_mode(&vmx->vcpu)) {
>  		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
>  		if (index >= 0)
> -			move_msr_up(vmx, index, save_nmsrs++);
> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>  		index = __find_msr_index(vmx, MSR_LSTAR);
>  		if (index >= 0)
> -			move_msr_up(vmx, index, save_nmsrs++);
> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>  		index = __find_msr_index(vmx, MSR_CSTAR);
>  		if (index >= 0)
> -			move_msr_up(vmx, index, save_nmsrs++);
> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>  		index = __find_msr_index(vmx, MSR_TSC_AUX);
>  		if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
> -			move_msr_up(vmx, index, save_nmsrs++);
> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>  		/*
>  		 * MSR_STAR is only needed on long mode guests, and only
>  		 * if efer.sce is enabled.
>  		 */
>  		index = __find_msr_index(vmx, MSR_STAR);
>  		if ((index >= 0) && (vmx->vcpu.arch.efer & EFER_SCE))
> -			move_msr_up(vmx, index, save_nmsrs++);
> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>  	}
>  #endif
>  	index = __find_msr_index(vmx, MSR_EFER);
>  	if (index >= 0 && update_transition_efer(vmx, index))
> -		move_msr_up(vmx, index, save_nmsrs++);
> +		swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>  
>  	vmx->save_nmsrs = save_nmsrs;


>  
> --
> 2.7.4
> 
>
Gustavo A. R. Silva Nov. 6, 2017, 1:14 p.m. UTC | #2
Hi Paolo,

Quoting Paolo Bonzini <pbonzini@redhat.com>:

> ----- Original Message -----
>> From: "Gustavo A. R. Silva" <garsilva@embeddedor.com>
>> To: "Paolo Bonzini" <pbonzini@redhat.com>, "Radim Krčmář"  
>> <rkrcmar@redhat.com>, "Thomas Gleixner"
>> <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>, "H. Peter  
>> Anvin" <hpa@zytor.com>, x86@kernel.org
>> Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "Gustavo A.  
>> R. Silva" <garsilva@embeddedor.com>
>> Sent: Friday, November 3, 2017 11:58:19 PM
>> Subject: [PATCH] KVM: VMX: replace move_msr_up with swap macro
>>
>> Function move_msr_up is used to _manually_ swap MSR entries in MSR array.
>> This function can be removed and replaced using the swap macro instead.
>>
>> This code was detected with the help of Coccinelle.
>
> I think move_msr_up should instead change into a function like
>
>    void mark_msr_for_save(struct vcpu_vmx *vmx, int index)
>    {
>        swap(vmx->guest_msrs[index], vmx->guest_msrs[vmx->save_nmsrs]);
>        vmx->save_nmsrs++;
>    }
>
> Using swap is useful, but it is also hiding what's going on exactly
> (in addition, using ++ inside a macro argument might be calling for
> trouble).
>

Thanks for your comments.

I'll work on v2 based on your feedback.

--
Gustavo A. R. Silva

> Paolo
>
>>
>> Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
>> ---
>> The new lines are over 80 characters, but I think in this case that is
>> preferable over splitting them.
>>
>>  arch/x86/kvm/vmx.c | 24 ++++++------------------
>>  1 file changed, 6 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index e6c8ffa..210e491 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -2544,18 +2544,6 @@ static bool vmx_invpcid_supported(void)
>>  	return cpu_has_vmx_invpcid() && enable_ept;
>>  }
>>
>> -/*
>> - * Swap MSR entry in host/guest MSR entry array.
>> - */
>> -static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
>> -{
>> -	struct shared_msr_entry tmp;
>> -
>> -	tmp = vmx->guest_msrs[to];
>> -	vmx->guest_msrs[to] = vmx->guest_msrs[from];
>> -	vmx->guest_msrs[from] = tmp;
>> -}
>> -
>>  static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
>>  {
>>  	unsigned long *msr_bitmap;
>> @@ -2600,28 +2588,28 @@ static void setup_msrs(struct vcpu_vmx *vmx)
>>  	if (is_long_mode(&vmx->vcpu)) {
>>  		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
>>  		if (index >= 0)
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		index = __find_msr_index(vmx, MSR_LSTAR);
>>  		if (index >= 0)
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		index = __find_msr_index(vmx, MSR_CSTAR);
>>  		if (index >= 0)
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		index = __find_msr_index(vmx, MSR_TSC_AUX);
>>  		if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		/*
>>  		 * MSR_STAR is only needed on long mode guests, and only
>>  		 * if efer.sce is enabled.
>>  		 */
>>  		index = __find_msr_index(vmx, MSR_STAR);
>>  		if ((index >= 0) && (vmx->vcpu.arch.efer & EFER_SCE))
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  	}
>>  #endif
>>  	index = __find_msr_index(vmx, MSR_EFER);
>>  	if (index >= 0 && update_transition_efer(vmx, index))
>> -		move_msr_up(vmx, index, save_nmsrs++);
>> +		swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>
>>  	vmx->save_nmsrs = save_nmsrs;
>
>
>>
>> --
>> 2.7.4
>>
>>

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e6c8ffa..210e491 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2544,18 +2544,6 @@  static bool vmx_invpcid_supported(void)
 	return cpu_has_vmx_invpcid() && enable_ept;
 }
 
-/*
- * Swap MSR entry in host/guest MSR entry array.
- */
-static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
-{
-	struct shared_msr_entry tmp;
-
-	tmp = vmx->guest_msrs[to];
-	vmx->guest_msrs[to] = vmx->guest_msrs[from];
-	vmx->guest_msrs[from] = tmp;
-}
-
 static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
 {
 	unsigned long *msr_bitmap;
@@ -2600,28 +2588,28 @@  static void setup_msrs(struct vcpu_vmx *vmx)
 	if (is_long_mode(&vmx->vcpu)) {
 		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		index = __find_msr_index(vmx, MSR_LSTAR);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		index = __find_msr_index(vmx, MSR_CSTAR);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		index = __find_msr_index(vmx, MSR_TSC_AUX);
 		if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		/*
 		 * MSR_STAR is only needed on long mode guests, and only
 		 * if efer.sce is enabled.
 		 */
 		index = __find_msr_index(vmx, MSR_STAR);
 		if ((index >= 0) && (vmx->vcpu.arch.efer & EFER_SCE))
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 	}
 #endif
 	index = __find_msr_index(vmx, MSR_EFER);
 	if (index >= 0 && update_transition_efer(vmx, index))
-		move_msr_up(vmx, index, save_nmsrs++);
+		swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 
 	vmx->save_nmsrs = save_nmsrs;
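
For context, below is a hypothetical sketch of the direction v2 could take if
it adopts the helper suggested in the review. The name mark_msr_for_save()
and its shape come from Paolo's snippet above; the call site shown is
abbreviated, and this is not the actual v2 patch:

    /*
     * Hypothetical v2 direction: hide the swap behind a helper that also
     * advances the saved-MSR count, so no ++ ends up inside the macro
     * arguments.
     */
    static void mark_msr_for_save(struct vcpu_vmx *vmx, int index)
    {
            swap(vmx->guest_msrs[index], vmx->guest_msrs[vmx->save_nmsrs]);
            vmx->save_nmsrs++;
    }

    /* A call site in setup_msrs() would then read, for example: */
            index = __find_msr_index(vmx, MSR_LSTAR);
            if (index >= 0)
                    mark_msr_for_save(vmx, index);

Note that setup_msrs() currently accumulates the count in a local save_nmsrs
and only assigns vmx->save_nmsrs at the end; a helper that counts directly in
the vmx struct would need vmx->save_nmsrs to be reset to 0 at the top of
setup_msrs().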