KVM: x86: refactor handling of MMIO page fault (cosmetic only)

Message ID 20180329224145.2495-1-sean.j.christopherson@intel.com (mailing list archive)
State New, archived

Commit Message

Sean Christopherson March 29, 2018, 10:41 p.m. UTC
Redo kvm_mmu_page_fault()'s interaction with handle_mmio_page_fault()
so that the behavior of falling through to mmu.page_fault() when
handle_mmio_page_fault() returns RET_PF_INVALID is more obvious.
The current approach of setting and checking RET_PF_INVALID outside
of the MMIO flow can lead readers to believe that RET_PF_INVALID
may be used for something other than signifying that the MMIO generation
has changed.

This is a purely cosmetic change: kvm.ko's kvm_mmu_page_fault() is binary
identical on my system before and after this patch.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

Comments

David Hildenbrand April 3, 2018, 8:23 a.m. UTC | #1
On 30.03.2018 00:41, Sean Christopherson wrote:
> Redo kvm_mmu_page_fault()'s interaction with handle_mmio_page_fault()
> so that the behavior of falling through to mmu.page_fault() when
> handle_mmio_page_fault() returns RET_PF_INVALID is more obvious.
> [...]

Reviewed-by: David Hildenbrand <david@redhat.com>

Paolo Bonzini April 4, 2018, 1:05 p.m. UTC | #2
On 30/03/2018 00:41, Sean Christopherson wrote:
> Redo kvm_mmu_page_fault()'s interaction with handle_mmio_page_fault()
> so that the behavior of falling through to mmu.page_fault() when
> handle_mmio_page_fault() returns RET_PF_INVALID is more obvious.
> [...]
> +		if (r != RET_PF_INVALID)
> +			goto pf_done;
>  	}
>  
> -	if (r == RET_PF_INVALID) {
> -		r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
> -					      false);
> -		WARN_ON(r == RET_PF_INVALID);
> -	}
> +	r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
> +				      false);
> +	WARN_ON(r == RET_PF_INVALID);
>  
> +pf_done:
> [...]

I don't know... The extra goto makes things a bit harder to read.  Maybe
moving the code under the "emulate" label into a separate function would
make things more bearable, but I'm not sure about that either.  For now
I'm not applying the patch.

Paolo
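
Paolo's idea is not spelled out in code anywhere in the thread, so the
following is only a hypothetical sketch of what moving the emulate path
into a separate function might look like.  The helper name, its argument
list, and the body (reconstructed from the 4.16-era tail of
kvm_mmu_page_fault()) are assumptions, not part of the patch.

/*
 * Hypothetical sketch only: pull the code under the "emulate" label into
 * a helper so that callers can simply return its result instead of
 * jumping to a label.
 */
static int kvm_mmu_do_emulate(struct kvm_vcpu *vcpu, gva_t cr2, void *insn,
                              int insn_len, int emulation_type)
{
        /* Roughly the code that currently sits under the "emulate" label. */
        switch (x86_emulate_instruction(vcpu, cr2, emulation_type,
                                        insn, insn_len)) {
        case EMULATE_DONE:
                return 1;
        case EMULATE_USER_EXIT:
                ++vcpu->stat.mmio_exits;
                /* fall through */
        case EMULATE_FAIL:
                return 0;
        default:
                BUG();
        }
}

The call sites in kvm_mmu_page_fault() would then become plain returns,
e.g. "return kvm_mmu_do_emulate(vcpu, cr2, insn, insn_len, 0);" in the
MMIO branch, which removes the "emulate" label; whether that actually
reads better is exactly the question left open in the thread.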
diff mbox

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f551962ac294..662bb448c7fc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4927,21 +4927,21 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 		vcpu->arch.gpa_val = cr2;
 	}
 
-	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2, direct);
 		if (r == RET_PF_EMULATE) {
 			emulation_type = 0;
 			goto emulate;
 		}
+		if (r != RET_PF_INVALID)
+			goto pf_done;
 	}
 
-	if (r == RET_PF_INVALID) {
-		r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
-					      false);
-		WARN_ON(r == RET_PF_INVALID);
-	}
+	r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
+				      false);
+	WARN_ON(r == RET_PF_INVALID);
 
+pf_done:
 	if (r == RET_PF_RETRY)
 		return 1;
 	if (r < 0)
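
For reference, the tail of kvm_mmu_page_fault() with this patch applied
reads straight through as follows.  This is only a sketch assembled from
the hunk above: context outside the hunk is elided, and the comments are
explanatory additions, not part of the patch.

        if (unlikely(error_code & PFERR_RSVD_MASK)) {
                /* Possible MMIO fault: reserved bits were set in the SPTE. */
                r = handle_mmio_page_fault(vcpu, cr2, direct);
                if (r == RET_PF_EMULATE) {
                        emulation_type = 0;
                        goto emulate;
                }
                /*
                 * Every other result is final; only RET_PF_INVALID, which
                 * signals that the MMIO generation has changed, falls
                 * through to the regular fault handler below.
                 */
                if (r != RET_PF_INVALID)
                        goto pf_done;
        }

        r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
                                      false);
        WARN_ON(r == RET_PF_INVALID);

pf_done:
        if (r == RET_PF_RETRY)
                return 1;
        /* ... remaining return-value and emulation handling is unchanged ... */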