
kvm/x86: simplify kvm_mmu_do_page_fault() a little bit

Message ID 20241022100812.4955-1-jgross@suse.com
State New, archived
Series kvm/x86: simplify kvm_mmu_do_page_fault() a little bit

Commit Message

Jürgen Groß Oct. 22, 2024, 10:08 a.m. UTC
Testing whether to call kvm_tdp_page_fault() or
vcpu->arch.mmu->page_fault() doesn't make sense, as kvm_tdp_page_fault()
is selected only if vcpu->arch.mmu->page_fault == kvm_tdp_page_fault.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kvm/mmu/mmu_internal.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
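
For context on the "is selected only if" claim: it refers to how fault.is_tdp
is initialized at the top of kvm_mmu_do_page_fault(). A sketch of the relevant
initializer, quoted from memory rather than from this patch (other fields
omitted):

	struct kvm_page_fault fault = {
		.addr	= cr2_or_gpa,
		/*
		 * is_tdp is true exactly when the installed handler is
		 * kvm_tdp_page_fault, so the special case removed below can
		 * only ever pick the function that the indirect call would
		 * reach anyway.
		 */
		.is_tdp	= likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
	};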

Comments

Sean Christopherson Oct. 22, 2024, 5:01 p.m. UTC | #1
On Tue, Oct 22, 2024, Juergen Gross wrote:
> Testing whether to call kvm_tdp_page_fault() or
> vcpu->arch.mmu->page_fault() doesn't make sense, as kvm_tdp_page_fault()
> is selected only if vcpu->arch.mmu->page_fault == kvm_tdp_page_fault.

It does when retpolines are enabled and significantly inflate the cost of the
indirect call.  This is a hot path in various scenarios, but KVM can't use
static_call() to avoid the retpoline due to mmu->page_fault being a property of
the current vCPU.  Only kvm_tdp_page_fault() is special cased because all other
mmu->page_fault targets are slow-ish and/or we don't care terribly about their
performance.
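
To make the static_call() point concrete: a static call is patched to a single
system-wide target, which works when the target is fixed globally but not when
it hangs off the current vCPU. A rough sketch under that assumption (the name
"pf_handler" is a placeholder, not an actual KVM symbol):

	#include <linux/static_call.h>

	/* A static call has one global target, patched into the call sites: */
	DEFINE_STATIC_CALL(pf_handler, kvm_tdp_page_fault);

	/* ...so each call site becomes a direct call to that target: */
	r = static_call(pf_handler)(vcpu, &fault);

	/*
	 * But mmu->page_fault differs per vCPU (TDP vs. shadow vs. nested),
	 * so there is no single target to patch in.  The fault.is_tdp check
	 * instead devirtualizes just the common hot target by hand:
	 */
	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
		r = kvm_tdp_page_fault(vcpu, &fault);		/* direct call */
	else
		r = vcpu->arch.mmu->page_fault(vcpu, &fault);	/* retpoline */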

> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/kvm/mmu/mmu_internal.h | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index c98827840e07..6eae54aa1160 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -322,10 +322,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
>  	}
>  
> -	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
> -		r = kvm_tdp_page_fault(vcpu, &fault);
> -	else
> -		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
> +	r = vcpu->arch.mmu->page_fault(vcpu, &fault);
>  
>  	/*
>  	 * Not sure what's happening, but punt to userspace and hope that
> -- 
> 2.43.0
>
Jürgen Groß Oct. 24, 2024, 10:37 a.m. UTC | #2
On 22.10.24 19:01, Sean Christopherson wrote:
> On Tue, Oct 22, 2024, Juergen Gross wrote:
>> Testing whether to call kvm_tdp_page_fault() or
>> vcpu->arch.mmu->page_fault() doesn't make sense, as kvm_tdp_page_fault()
>> is selected only if vcpu->arch.mmu->page_fault == kvm_tdp_page_fault.
> 
> It does when retpolines are enabled and significantly inflate the cost of the
> indirect call.  This is a hot path in various scenarios, but KVM can't use
> static_call() to avoid the retpoline due to mmu->page_fault being a property of
> the current vCPU.  Only kvm_tdp_page_fault() is special cased because all other
> mmu->page_fault targets are slow-ish and/or we don't care terribly about their
> performance.

Fair enough. :-)

I'll modify the patch to add a comment to that effect, to avoid similar
simplification attempts in the future.


Juergen
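
For reference, a comment along the lines Jürgen describes might read something
like this (a sketch of a possible v2, not the actual follow-up patch):

	/*
	 * With retpolines enabled, the indirect call via mmu->page_fault is
	 * expensive, and this is a hot path.  Devirtualize the common TDP
	 * case into a direct call; don't "simplify" this into an
	 * unconditional indirect call.
	 */
	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
		r = kvm_tdp_page_fault(vcpu, &fault);
	else
		r = vcpu->arch.mmu->page_fault(vcpu, &fault);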

Patch

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c98827840e07..6eae54aa1160 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -322,10 +322,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
 	}
 
-	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
-		r = kvm_tdp_page_fault(vcpu, &fault);
-	else
-		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
+	r = vcpu->arch.mmu->page_fault(vcpu, &fault);
 
 	/*
 	 * Not sure what's happening, but punt to userspace and hope that