Message ID | 20231002095740.1472907-3-paul@xen.org (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: xen: update shared_info and vcpu_info handling |
On Mon, Oct 02, 2023, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> At the moment pages are marked dirty by open-coded calls to
> mark_page_dirty_in_slot(), directly dereferencing the gpa and memslot
> from the cache. After a subsequent patch these may not always be set,
> so add a helper now so that callers will be protected from the need
> to know about this detail.
>
> NOTE: Pages are now marked dirty while the cache lock is held. This is
> to ensure that gpa and memslot are mutually consistent.

This absolutely belongs in a separate patch. It sounds like a bug fix (haven't
spent the time to figure out if it actually is), and even if it doesn't fix
anything, burying something like this in a "add a helper" patch is just mean.

> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index 0f36acdf577f..b68ed7fa56a2 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -386,6 +386,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
>  }
>  EXPORT_SYMBOL_GPL(kvm_gpc_activate);
>
> +void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
> +{

If there's actually a reason to call mark_page_dirty_in_slot() while holding
@gpc's lock, then this should have a lockdep. If there's no good reason, then
don't move the invocation.

> +	mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
> +}
> +EXPORT_SYMBOL_GPL(kvm_gpc_mark_dirty);

This doesn't need to be exported. Hrm, none of the exports in this file are
necessary, they likely all got added when we were thinking this stuff would be
used for nVMX. I think we should remove them, not because I'm worried about
sub-modules doing bad things, but just because we should avoid polluting
exported symbols as much as possible.
On 31/10/2023 23:28, Sean Christopherson wrote:
> On Mon, Oct 02, 2023, Paul Durrant wrote:
>> From: Paul Durrant <pdurrant@amazon.com>
>>
>> At the moment pages are marked dirty by open-coded calls to
>> mark_page_dirty_in_slot(), directly dereferencing the gpa and memslot
>> from the cache. After a subsequent patch these may not always be set,
>> so add a helper now so that callers will be protected from the need
>> to know about this detail.
>>
>> NOTE: Pages are now marked dirty while the cache lock is held. This is
>> to ensure that gpa and memslot are mutually consistent.
>
> This absolutely belongs in a separate patch. It sounds like a bug fix (haven't
> spent the time to figure out if it actually is), and even if it doesn't fix
> anything, burying something like this in a "add a helper" patch is just mean.

Ok, I can split it out. It's a pretty minor fix so didn't seem worth it.

>> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
>> index 0f36acdf577f..b68ed7fa56a2 100644
>> --- a/virt/kvm/pfncache.c
>> +++ b/virt/kvm/pfncache.c
>> @@ -386,6 +386,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
>>  }
>>  EXPORT_SYMBOL_GPL(kvm_gpc_activate);
>>
>> +void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
>> +{
>
> If there's actually a reason to call mark_page_dirty_in_slot() while holding
> @gpc's lock, then this should have a lockdep. If there's no good reason, then
> don't move the invocation.
>
>> +	mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
>> +}
>> +EXPORT_SYMBOL_GPL(kvm_gpc_mark_dirty);
>
> This doesn't need to be exported. Hrm, none of the exports in this file are
> necessary, they likely all got added when we were thinking this stuff would be
> used for nVMX. I think we should remove them, not because I'm worried about
> sub-modules doing bad things, but just because we should avoid polluting
> exported symbols as much as possible.

That in a separate clean-up patch too, I assume?

  Paul
On Thu, Nov 02, 2023, Paul Durrant wrote:
> On 31/10/2023 23:28, Sean Christopherson wrote:
> > On Mon, Oct 02, 2023, Paul Durrant wrote:
> > > diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> > > index 0f36acdf577f..b68ed7fa56a2 100644
> > > --- a/virt/kvm/pfncache.c
> > > +++ b/virt/kvm/pfncache.c
> > > @@ -386,6 +386,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
> > >  }
> > >  EXPORT_SYMBOL_GPL(kvm_gpc_activate);
> > > +void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
> > > +{
> >
> > If there's actually a reason to call mark_page_dirty_in_slot() while holding
> > @gpc's lock, then this should have a lockdep. If there's no good reason, then
> > don't move the invocation.
> >
> > > +	mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
> > > +}
> > > +EXPORT_SYMBOL_GPL(kvm_gpc_mark_dirty);
> >
> > This doesn't need to be exported. Hrm, none of the exports in this file are
> > necessary, they likely all got added when we were thinking this stuff would be
> > used for nVMX. I think we should remove them, not because I'm worried about
> > sub-modules doing bad things, but just because we should avoid polluting
> > exported symbols as much as possible.
>
> That in a separate clean-up patch too, I assume?

Yes, but feel free to punt that one or post it as a standalone patch. For this
series, please just don't add more exports unless they're actually used in the
series.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9f18b06bbda6..eee252a0afef 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3137,7 +3137,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
 
 	guest_hv_clock->version = ++vcpu->hv_clock.version;
 
-	mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
+	kvm_gpc_mark_dirty(gpc);
 	read_unlock_irqrestore(&gpc->lock, flags);
 
 	trace_kvm_pvclock_update(v->vcpu_id, &vcpu->hv_clock);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 40edf4d1974c..33fddd29824b 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -430,14 +430,13 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 		smp_wmb();
 	}
 
-	if (user_len2)
+	if (user_len2) {
+		kvm_gpc_mark_dirty(gpc2);
 		read_unlock(&gpc2->lock);
+	}
 
+	kvm_gpc_mark_dirty(gpc1);
 	read_unlock_irqrestore(&gpc1->lock, flags);
-
-	mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT);
-	if (user_len2)
-		mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT);
 }
 
 void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)
@@ -543,13 +542,13 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 			     : "0" (evtchn_pending_sel32));
 		WRITE_ONCE(vi->evtchn_upcall_pending, 1);
 	}
+
+	kvm_gpc_mark_dirty(gpc);
 	read_unlock_irqrestore(&gpc->lock, flags);
 
 	/* For the per-vCPU lapic vector, deliver it as MSI. */
 	if (v->arch.xen.upcall_vector)
 		kvm_xen_inject_vcpu_vector(v);
-
-	mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
 }
 
 int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index fb6c6109fdca..c71e8fbccaaf 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1367,6 +1367,13 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, unsigned long len);
  */
 void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc);
 
+/**
+ * kvm_gpc_mark_dirty - mark a cached page as dirty.
+ *
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ */
+void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
 
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 0f36acdf577f..b68ed7fa56a2 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -386,6 +386,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 
+void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
+{
+	mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
+}
+EXPORT_SYMBOL_GPL(kvm_gpc_mark_dirty);
+
 void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc)
 {
 	struct kvm *kvm = gpc->kvm;