| Message ID | 20240115125707.1183-4-paul@xen.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: xen: update shared_info and vcpu_info handling |
KVM: x86/xen: for the scope please. A few commits have "KVM: xen:", but "x86/xen" is the overwhelming favorite.
On Tue, 2024-02-06 at 19:17 -0800, Sean Christopherson wrote:
> KVM: x86/xen: for the scope please. A few commits have "KVM: xen:", but "x86/xen"
> is the overwhelming favorite.

Paul's been using "KVM: xen:" in this patch series since first posting it in September of last year. If there aren't more substantial changes you need, would you perhaps be able to make that minor fixup as you apply the series? I'm not currently in the country to buy him a beer and talk him down off the ceiling when he wakes up and reads your message.
On 07/02/2024 03:17, Sean Christopherson wrote:
> KVM: x86/xen: for the scope please. A few commits have "KVM: xen:", but "x86/xen"
> is the overwhelming favorite.

If I have to re-post anyway then I can do that.
On Tue, Feb 06, 2024, David Woodhouse wrote:
> On Tue, 2024-02-06 at 19:17 -0800, Sean Christopherson wrote:
> > KVM: x86/xen: for the scope please. A few commits have "KVM: xen:", but "x86/xen"
> > is the overwhelming favorite.
>
> Paul's been using "KVM: xen:" in this patch series since first posting
> it in September of last year. If there aren't more substantial changes
> you need, would you perhaps be able to make that minor fixup as you
> apply the series?

Yes, I can fixup scopes when applying, though I think in this case there's just enough small changes that another version would be helpful. Tweaks to the scope are rarely grounds for needing a new version. I'd say "never", but then someone would inevitably prove me wrong :-)
```diff
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index e43948b87f94..b63bf54bb376 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -452,14 +452,13 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 		smp_wmb();
 	}
 
-	if (user_len2)
+	if (user_len2) {
+		mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT);
 		read_unlock(&gpc2->lock);
-
-	read_unlock_irqrestore(&gpc1->lock, flags);
+	}
 
 	mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT);
-	if (user_len2)
-		mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT);
+	read_unlock_irqrestore(&gpc1->lock, flags);
 }
 
 void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)
@@ -565,13 +564,13 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 			     : "0" (evtchn_pending_sel32));
 		WRITE_ONCE(vi->evtchn_upcall_pending, 1);
 	}
+
+	mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
 	read_unlock_irqrestore(&gpc->lock, flags);
 
 	/* For the per-vCPU lapic vector, deliver it as MSI. */
 	if (v->arch.xen.upcall_vector)
 		kvm_xen_inject_vcpu_vector(v);
-
-	mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
 }
 
 int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
```