| Message ID | 20180927034829.2230-3-Tianyu.Lan@microsoft.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | x86/KVM/Hyper-v: Add HV ept tlb range flush hypercall support in KVM |
On 27/09/2018 05:48, Tianyu Lan wrote:
> +
> +	if (range && kvm_x86_ops->tlb_remote_flush_with_range) {
> +		/*
> +		 * Read tlbs_dirty before flushing tlbs in order
> +		 * to track dirty tlbs during flushing.
> +		 */
> +		long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
> +
> +		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
> +		cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);

This is wrong, because it's not the entire TLB that is flushed.  So you
cannot do the cmpxchg here.

Paolo

> +
> +	if (ret)
> +		kvm_flush_remote_tlbs(kvm);
> +}
> +
Hi Paolo:
	Thanks for your review. Sorry for the late response due to the holiday.

On Mon, Oct 1, 2018 at 11:26 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 27/09/2018 05:48, Tianyu Lan wrote:
> > +
> > +	if (range && kvm_x86_ops->tlb_remote_flush_with_range) {
> > +		/*
> > +		 * Read tlbs_dirty before flushing tlbs in order
> > +		 * to track dirty tlbs during flushing.
> > +		 */
> > +		long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
> > +
> > +		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
> > +		cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
>
> This is wrong, because it's not the entire TLB that is flushed.  So you
> cannot do the cmpxchg here.

Yes, nice catch. Will update in the next version.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c67f09086378..18cac661a41a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -253,6 +253,54 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
+
+static inline bool kvm_available_flush_tlb_with_range(void)
+{
+	return kvm_x86_ops->tlb_remote_flush_with_range;
+}
+
+static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+		struct kvm_tlb_range *range)
+{
+	int ret = -ENOTSUPP;
+
+	if (range && kvm_x86_ops->tlb_remote_flush_with_range) {
+		/*
+		 * Read tlbs_dirty before flushing tlbs in order
+		 * to track dirty tlbs during flushing.
+		 */
+		long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
+
+		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+		cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
+	}
+
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
+}
+
+static void kvm_flush_remote_tlbs_with_list(struct kvm *kvm,
+		struct list_head *flush_list)
+{
+	struct kvm_tlb_range range;
+
+	range.flush_list = flush_list;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
+static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
+		u64 start_gfn, u64 pages)
+{
+	struct kvm_tlb_range range;
+
+	range.start_gfn = start_gfn;
+	range.pages = pages;
+	range.flush_list = NULL;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
 {
 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
This patch is to add wrapper functions for tlb_remote_flush_with_range
callback.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
Change since V2:
       Fix comment in the kvm_flush_remote_tlbs_with_range()
---
 arch/x86/kvm/mmu.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)