
KVM: x86: Fix print format and coding style

Message ID 1581734662-970-1-git-send-email-linmiaohe@huawei.com (mailing list archive)
State New, archived
Series: KVM: x86: Fix print format and coding style

Commit Message

Miaohe Lin Feb. 15, 2020, 2:44 a.m. UTC
From: Miaohe Lin <linmiaohe@huawei.com>

Use %u to print a u32 variable and correct some coding style issues.
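
For reference: on builds where int is 32 bits wide, printing a u32 with
%d reinterprets values above INT_MAX as negative numbers. A minimal
user-space sketch (plain printf instead of pr_debug, with an arbitrary
value chosen only for illustration) shows the difference:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* standalone user-space illustration; the kernel code uses pr_debug() */
		uint32_t val = 4294967295U;	/* 0xffffffff, the largest u32 value */

		printf("%%d prints: %d\n", val);	/* misinterprets the bits: -1 */
		printf("%%u prints: %u\n", val);	/* prints 4294967295 */

		return 0;
	}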

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 arch/x86/kvm/i8254.c      | 2 +-
 arch/x86/kvm/mmu/mmu.c    | 3 +--
 arch/x86/kvm/vmx/nested.c | 2 +-
 3 files changed, 3 insertions(+), 4 deletions(-)

Comments

Vitaly Kuznetsov Feb. 17, 2020, 8:57 a.m. UTC | #1
linmiaohe <linmiaohe@huawei.com> writes:

> From: Miaohe Lin <linmiaohe@huawei.com>
>
> Use %u to print a u32 variable and correct some coding style issues.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  arch/x86/kvm/i8254.c      | 2 +-
>  arch/x86/kvm/mmu/mmu.c    | 3 +--
>  arch/x86/kvm/vmx/nested.c | 2 +-
>  3 files changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
> index b24c606ac04b..febca334c320 100644
> --- a/arch/x86/kvm/i8254.c
> +++ b/arch/x86/kvm/i8254.c
> @@ -367,7 +367,7 @@ static void pit_load_count(struct kvm_pit *pit, int channel, u32 val)
>  {
>  	struct kvm_kpit_state *ps = &pit->pit_state;
>  
> -	pr_debug("load_count val is %d, channel is %d\n", val, channel);
> +	pr_debug("load_count val is %u, channel is %d\n", val, channel);
>  
>  	/*
>  	 * The largest possible initial count is 0; this is equivalent
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7011a4e54866..9c228b9910b1 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3568,8 +3568,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		 * write-protected for dirty-logging or access tracking.
>  		 */
>  		if ((error_code & PFERR_WRITE_MASK) &&
> -		    spte_can_locklessly_be_made_writable(spte))
> -		{
> +		    spte_can_locklessly_be_made_writable(spte)) {
>  			new_spte |= PT_WRITABLE_MASK;
>  
>  			/*
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index f2d8cb68dce8..6f3e515f28fd 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4367,7 +4367,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
>  	if (base_is_valid)
>  		off += kvm_register_read(vcpu, base_reg);
>  	if (index_is_valid)
> -		off += kvm_register_read(vcpu, index_reg)<<scaling;
> +		off += kvm_register_read(vcpu, index_reg) << scaling;
>  	vmx_get_segment(vcpu, &s, seg_reg);
>  
>  	/*

I would suggest splitting such unrelated changes by source file in the
future, to simplify (possible) stable backporting. The changes themselves
look good,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Paolo Bonzini Feb. 17, 2020, 5:07 p.m. UTC | #2
On 17/02/20 09:57, Vitaly Kuznetsov wrote:
> I would suggest splitting such unrelated changes by source file in the
> future, to simplify (possible) stable backporting. The changes themselves
> look good,

In this case I think it's trivial enough that we shouldn't have any
problem backporting to stable, but in general I agree.

Queued, thanks.

Paolo

Patch

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index b24c606ac04b..febca334c320 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -367,7 +367,7 @@ static void pit_load_count(struct kvm_pit *pit, int channel, u32 val)
 {
 	struct kvm_kpit_state *ps = &pit->pit_state;
 
-	pr_debug("load_count val is %d, channel is %d\n", val, channel);
+	pr_debug("load_count val is %u, channel is %d\n", val, channel);
 
 	/*
 	 * The largest possible initial count is 0; this is equivalent
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7011a4e54866..9c228b9910b1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3568,8 +3568,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		 * write-protected for dirty-logging or access tracking.
 		 */
 		if ((error_code & PFERR_WRITE_MASK) &&
-		    spte_can_locklessly_be_made_writable(spte))
-		{
+		    spte_can_locklessly_be_made_writable(spte)) {
 			new_spte |= PT_WRITABLE_MASK;
 
 			/*
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f2d8cb68dce8..6f3e515f28fd 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4367,7 +4367,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 	if (base_is_valid)
 		off += kvm_register_read(vcpu, base_reg);
 	if (index_is_valid)
-		off += kvm_register_read(vcpu, index_reg)<<scaling;
+		off += kvm_register_read(vcpu, index_reg) << scaling;
 	vmx_get_segment(vcpu, &s, seg_reg);
 
 	/*