Message ID | 20211114145721.209219-5-shivam.kumar1@nutanix.com (mailing list archive)
---|---
State | New, archived
Series | KVM: Dirty Quota-Based VM Live Migration Auto-Converge
On Sun, Nov 14, 2021, Shivam Kumar wrote:
> For a page write fault or "page dirty", the dirty counter of the
> corresponding vCPU is incremented.
>
> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
> Signed-off-by: Shaju Abraham <shaju.abraham@nutanix.com>
> Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
> ---
>  virt/kvm/kvm_main.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 1564d3a3f608..55bf92cf9f4f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3091,8 +3091,15 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
>  		if (kvm->dirty_ring_size)
>  			kvm_dirty_ring_push(kvm_dirty_ring_get(kvm),
>  					    slot, rel_gfn);
> -		else
> +		else {
> +			struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
> +
> +			if (vcpu && vcpu->kvm->dirty_quota_migration_enabled &&
> +					vcpu->vCPUdqctx)
> +				vcpu->vCPUdqctx->dirty_counter++;

Checking dirty_quota_migration_enabled can race, and it'd be far faster to
unconditionally update a counter, e.g. a per-vCPU stat.

> +
>  			set_bit_le(rel_gfn, memslot->dirty_bitmap);
> +		}
>  	}
>  }
>  EXPORT_SYMBOL_GPL(mark_page_dirty_in_slot);
> --
> 2.22.3
>
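To make the suggestion concrete, here is a minimal sketch of the unconditional
update written against the same mark_page_dirty_in_slot() hunk. The
dirty_counter field on struct kvm_vcpu is hypothetical and only illustrates
the idea of a plain per-vCPU counter/stat; it is not necessarily the field the
series ends up using.

	/*
	 * Sketch only: "dirty_counter" is a hypothetical per-vCPU field.
	 * There is no enable/disable flag to race against; the only check
	 * left is for NULL, because kvm_get_running_vcpu() returns NULL
	 * when a page is dirtied outside of vCPU context.
	 */
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

	if (vcpu)
		vcpu->dirty_counter++;

	if (kvm->dirty_ring_size)
		kvm_dirty_ring_push(kvm_dirty_ring_get(kvm), slot, rel_gfn);
	else
		set_bit_le(rel_gfn, memslot->dirty_bitmap);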
On 18/11/21 11:18 pm, Sean Christopherson wrote:
> On Sun, Nov 14, 2021, Shivam Kumar wrote:
>> For a page write fault or "page dirty", the dirty counter of the
>> corresponding vCPU is incremented.
>>
>> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
>> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
>> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
>> Signed-off-by: Shaju Abraham <shaju.abraham@nutanix.com>
>> Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
>> ---
>>  virt/kvm/kvm_main.c | 9 ++++++++-
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 1564d3a3f608..55bf92cf9f4f 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -3091,8 +3091,15 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
>>  		if (kvm->dirty_ring_size)
>>  			kvm_dirty_ring_push(kvm_dirty_ring_get(kvm),
>>  					    slot, rel_gfn);
>> -		else
>> +		else {
>> +			struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
>> +
>> +			if (vcpu && vcpu->kvm->dirty_quota_migration_enabled &&
>> +					vcpu->vCPUdqctx)
>> +				vcpu->vCPUdqctx->dirty_counter++;
> Checking dirty_quota_migration_enabled can race, and it'd be far faster to
> unconditionally update a counter, e.g. a per-vCPU stat.

Yes, unconditional update seems fine as it is not required to reset the
dirty counter every time a new migration starts (as per our discussion on
PATCH 0 in this patchset). Thanks.

>
>> +
>>  			set_bit_le(rel_gfn, memslot->dirty_bitmap);
>> +		}
>>  	}
>>  }
>>  EXPORT_SYMBOL_GPL(mark_page_dirty_in_slot);
>> --
>> 2.22.3
>>
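The "no reset" point can be shown with a small self-contained sketch: if the
per-vCPU counter only ever increases, userspace arms each migration round by
moving the quota forward relative to the current counter value instead of
zeroing the counter. The structure and function names below are illustrative
only, not the series' actual ABI.

#include <stdint.h>
#include <stdio.h>

/* Illustrative state; not the actual KVM/QEMU structures or ABI. */
struct vcpu_dirty_quota {
	uint64_t dirty_counter;	/* only ever increments, never reset */
	uint64_t dirty_quota;	/* absolute value the counter may reach */
};

/* Userspace arms a new round by moving the quota forward. */
static void grant_quota(struct vcpu_dirty_quota *dq, uint64_t pages)
{
	dq->dirty_quota = dq->dirty_counter + pages;
}

/* KVM-side check: exit to userspace once the grant is consumed. */
static int quota_exhausted(const struct vcpu_dirty_quota *dq)
{
	return dq->dirty_counter >= dq->dirty_quota;
}

int main(void)
{
	struct vcpu_dirty_quota dq = { 0 };

	grant_quota(&dq, 4);			/* round 1: allow 4 dirty pages */
	for (int i = 0; i < 10; i++) {
		if (quota_exhausted(&dq)) {
			printf("exit to userspace at page %d\n", i);
			grant_quota(&dq, 4);	/* next round, no counter reset */
		}
		dq.dirty_counter++;		/* page dirtied */
	}
	printf("counter=%llu quota=%llu\n",
	       (unsigned long long)dq.dirty_counter,
	       (unsigned long long)dq.dirty_quota);
	return 0;
}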
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1564d3a3f608..55bf92cf9f4f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3091,8 +3091,15 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
 		if (kvm->dirty_ring_size)
 			kvm_dirty_ring_push(kvm_dirty_ring_get(kvm),
 					    slot, rel_gfn);
-		else
+		else {
+			struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+			if (vcpu && vcpu->kvm->dirty_quota_migration_enabled &&
+					vcpu->vCPUdqctx)
+				vcpu->vCPUdqctx->dirty_counter++;
+
 			set_bit_le(rel_gfn, memslot->dirty_bitmap);
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(mark_page_dirty_in_slot);