Message ID | 20200326093516.24215-3-pbonzini@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86: sync SPTEs on page/EPT fault injection
Paolo Bonzini <pbonzini@redhat.com> writes:

> To reconstruct the kvm_mmu to be used for page fault injection, we
> can simply use fault->nested_page_fault.  This matches how
> fault->nested_page_fault is assigned in the first place by
> FNAME(walk_addr_generic).
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 6 ------
>  arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
>  arch/x86/kvm/x86.c             | 7 +++----
>  3 files changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index e26c9a583e75..6250e31ac617 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4353,12 +4353,6 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
>  	return kvm_read_cr3(vcpu);
>  }
>
> -static void inject_page_fault(struct kvm_vcpu *vcpu,
> -			      struct x86_exception *fault)
> -{
> -	vcpu->arch.mmu->inject_page_fault(vcpu, fault);
> -}
> -

This is already gone with Sean's "KVM: x86: Consolidate logic for
injecting page faults to L1".

It would probably make sense to have a combined series (or a branch on
kvm.git) to simplify testing efforts.

[... rest of the quoted patch snipped; the full diff is below ...]
On 26/03/20 14:41, Vitaly Kuznetsov wrote:
> Paolo Bonzini <pbonzini@redhat.com> writes:
>
>> [... quoted patch snipped ...]
>>> -static void inject_page_fault(struct kvm_vcpu *vcpu,
>>> -			      struct x86_exception *fault)
>>> -{
>>> -	vcpu->arch.mmu->inject_page_fault(vcpu, fault);
>>> -}
>>> -
>
> This is already gone with Sean's "KVM: x86: Consolidate logic for
> injecting page faults to L1".
>
> It would probably make sense to have a combined series (or a branch on
> kvm.git) to simplify testing efforts.

Yes, these three patches replace part of Sean's (the patch you mention
and the next one, "KVM: x86: Sync SPTEs when injecting page/EPT fault
into L1").

I pushed the result to a branch named kvm-tlb-cleanup on kvm.git.

Paolo
Paolo Bonzini <pbonzini@redhat.com> writes:

> On 26/03/20 14:41, Vitaly Kuznetsov wrote:
>> [... snipped ...]
>> It would probably make sense to have a combined series (or a branch on
>> kvm.git) to simplify testing efforts.
>
> Yes, these three patches replace part of Sean's (the patch you mention
> and the next one, "KVM: x86: Sync SPTEs when injecting page/EPT fault
> into L1").
>
> I pushed the result to a branch named kvm-tlb-cleanup on kvm.git.

Thank you, I've tested it with Hyper-V on both VMX and SVM, with and
without PV TLB flush, and nothing immediately blew up. I'm also
observing a very nice 19000 -> 14000 cycle improvement on a tight cpuid
loop test (with eVMCS enabled).
On Thu, Mar 26, 2020 at 05:35:15AM -0400, Paolo Bonzini wrote:
> To reconstruct the kvm_mmu to be used for page fault injection, we
> can simply use fault->nested_page_fault.  This matches how
> fault->nested_page_fault is assigned in the first place by
> FNAME(walk_addr_generic).
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> [... mmu.c and paging_tmpl.h hunks snipped; the full diff is below ...]
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 64ed6e6e2b56..522905523bf0 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -614,12 +614,11 @@ EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
>  bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
>  				    struct x86_exception *fault)
>  {
> +	struct kvm_mmu *fault_mmu;
>  	WARN_ON_ONCE(fault->vector != PF_VECTOR);
>
> -	if (mmu_is_nested(vcpu) && !fault->nested_page_fault)
> -		vcpu->arch.nested_mmu.inject_page_fault(vcpu, fault);
> -	else
> -		vcpu->arch.mmu->inject_page_fault(vcpu, fault);
> +	fault_mmu = fault->nested_page_fault ? vcpu->arch.mmu : vcpu->arch.walk_mmu;

Apparently I'm in a nitpicky mood.  IMO, a newline after the colon is
easier to parse:

	fault_mmu = fault->nested_page_fault ? vcpu->arch.mmu :
					       vcpu->arch.walk_mmu;

FWIW, I really like that the "inject into the nested_mmu if it's not a
nested page fault" logic goes away.  That trips me up every time I look
at it.

> +	fault_mmu->inject_page_fault(vcpu, fault);
>
>  	return fault->nested_page_fault;
>  }
> --
> 2.18.2
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e26c9a583e75..6250e31ac617 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4353,12 +4353,6 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 	return kvm_read_cr3(vcpu);
 }
 
-static void inject_page_fault(struct kvm_vcpu *vcpu,
-			      struct x86_exception *fault)
-{
-	vcpu->arch.mmu->inject_page_fault(vcpu, fault);
-}
-
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access, int *nr_present)
 {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 1ddbfff64ccc..ae646acf6703 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -812,7 +812,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	if (!r) {
 		pgprintk("%s: guest page fault\n", __func__);
 		if (!prefault)
-			inject_page_fault(vcpu, &walker.fault);
+			kvm_inject_emulated_page_fault(vcpu, &walker.fault);
 
 		return RET_PF_RETRY;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 64ed6e6e2b56..522905523bf0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -614,12 +614,11 @@ EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
 bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
 				    struct x86_exception *fault)
 {
+	struct kvm_mmu *fault_mmu;
 	WARN_ON_ONCE(fault->vector != PF_VECTOR);
 
-	if (mmu_is_nested(vcpu) && !fault->nested_page_fault)
-		vcpu->arch.nested_mmu.inject_page_fault(vcpu, fault);
-	else
-		vcpu->arch.mmu->inject_page_fault(vcpu, fault);
+	fault_mmu = fault->nested_page_fault ? vcpu->arch.mmu : vcpu->arch.walk_mmu;
+	fault_mmu->inject_page_fault(vcpu, fault);
 
 	return fault->nested_page_fault;
 }
To reconstruct the kvm_mmu to be used for page fault injection, we
can simply use fault->nested_page_fault.  This matches how
fault->nested_page_fault is assigned in the first place by
FNAME(walk_addr_generic).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c         | 6 ------
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 arch/x86/kvm/x86.c             | 7 +++----
 3 files changed, 4 insertions(+), 11 deletions(-)