
[v4,2/4] KVM: arm64: Dirty quota-based throttling of vcpus

Message ID 20220521202937.184189-3-shivam.kumar1@nutanix.com (mailing list archive)
State New, archived
Series KVM: Dirty quota-based throttling

Commit Message

Shivam Kumar May 21, 2022, 8:29 p.m. UTC
Exit to userspace whenever the dirty quota is exhausted (i.e. dirty count
equals/exceeds dirty quota) to request more dirty quota.

Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
---
 arch/arm64/kvm/arm.c | 3 +++
 1 file changed, 3 insertions(+)
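
(For context: kvm_vcpu_check_dirty_quota() is introduced by an earlier patch in this series. A minimal sketch of its intended behaviour follows; the field and exit-reason names are illustrative assumptions, not necessarily the exact definitions used in the series.)

	/*
	 * Sketch only: return 1 (keep running) while the vcpu is within its
	 * quota, 0 once the dirty count has reached/exceeded the quota, in
	 * which case a dirty-quota exit to userspace is prepared so that
	 * userspace can grant more quota.
	 */
	static inline int kvm_vcpu_check_dirty_quota(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;
		u64 dirty_quota = READ_ONCE(run->dirty_quota);
		u64 pages_dirtied = vcpu->stat.generic.pages_dirtied;

		if (!dirty_quota || pages_dirtied < dirty_quota)
			return 1;

		run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
		run->dirty_quota_exit.count = pages_dirtied;
		run->dirty_quota_exit.quota = dirty_quota;
		return 0;
	}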

Comments

Marc Zyngier May 24, 2022, 7:14 a.m. UTC | #1
On Sat, 21 May 2022 21:29:38 +0100,
Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> 
> Exit to userspace whenever the dirty quota is exhausted (i.e. dirty count
> equals/exceeds dirty quota) to request more dirty quota.
> 
> Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
> Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
> ---
>  arch/arm64/kvm/arm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index ecc5958e27fe..5b6a239b83a5 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -848,6 +848,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  	ret = 1;
>  	run->exit_reason = KVM_EXIT_UNKNOWN;
>  	while (ret > 0) {
> +		ret = kvm_vcpu_check_dirty_quota(vcpu);
> +		if (!ret)
> +			break;
>  		/*
>  		 * Check conditions before entering the guest
>  		 */

Why do we need yet another check on the fast path? It seems to me that
this is what requests are for, so I'm definitely not keen on this
approach. I certainly do not want any extra overhead for something
that is only used on migration. If anything, it is the migration path
that should incur the overhead.

	M.
Shivam Kumar May 26, 2022, 9:16 a.m. UTC | #2
On 24/05/22 12:44 pm, Marc Zyngier wrote:
> On Sat, 21 May 2022 21:29:38 +0100,
> Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>> Exit to userspace whenever the dirty quota is exhausted (i.e. dirty count
>> equals/exceeds dirty quota) to request more dirty quota.
>>
>> Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
>> Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
>> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
>> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
>> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
>> ---
>>   arch/arm64/kvm/arm.c | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index ecc5958e27fe..5b6a239b83a5 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -848,6 +848,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>>   	ret = 1;
>>   	run->exit_reason = KVM_EXIT_UNKNOWN;
>>   	while (ret > 0) {
>> +		ret = kvm_vcpu_check_dirty_quota(vcpu);
>> +		if (!ret)
>> +			break;
>>   		/*
>>   		 * Check conditions before entering the guest
>>   		 */
> Why do we need yet another check on the fast path? It seems to me that
> this is what requests are for, so I'm definitely not keen on this
> approach. I certainly do not want any extra overhead for something
> that is only used on migration. If anything, it is the migration path
> that should incur the overhead.
>
> 	M.
I'll try implementing this with requests. Thanks.
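
A request-based variant, roughly as suggested above, could look like the sketch below. KVM_REQ_DIRTY_QUOTA_EXIT and dirty_quota_exhausted() are hypothetical names used only for illustration; kvm_make_request()/kvm_check_request() are the existing KVM request helpers. The point is that the cost moves to the page-dirtying path, and the vcpu run loop only reacts when a request is actually pending:

	/* On the dirtying path (e.g. where a page is marked dirty): raise a
	 * request only once the quota is exhausted, so the common case adds
	 * nothing to the run loop. */
	if (dirty_quota_exhausted(vcpu))
		kvm_make_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu);

	/* In the arch request-processing path, before entering the guest: */
	if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
		return 0;	/* exit to userspace to ask for more quota */
	}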
Sean Christopherson May 26, 2022, 5:58 p.m. UTC | #3
On Thu, May 26, 2022, Shivam Kumar wrote:
> 
> On 24/05/22 12:44 pm, Marc Zyngier wrote:
> > On Sat, 21 May 2022 21:29:38 +0100,
> > Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> > > Exit to userspace whenever the dirty quota is exhausted (i.e. dirty count
> > > equals/exceeds dirty quota) to request more dirty quota.
> > > 
> > > Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
> > > Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
> > > Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> > > Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> > > Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
> > > ---
> > >   arch/arm64/kvm/arm.c | 3 +++
> > >   1 file changed, 3 insertions(+)
> > > 
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index ecc5958e27fe..5b6a239b83a5 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -848,6 +848,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> > >   	ret = 1;
> > >   	run->exit_reason = KVM_EXIT_UNKNOWN;
> > >   	while (ret > 0) {
> > > +		ret = kvm_vcpu_check_dirty_quota(vcpu);
> > > +		if (!ret)
> > > +			break;
> > >   		/*
> > >   		 * Check conditions before entering the guest
> > >   		 */
> > Why do we need yet another check on the fast path? It seems to me that
> > this is what requests are for, so I'm definitely not keen on this
> > approach. I certainly do not want any extra overhead for something
> > that is only used on migration. If anything, it is the migration path
> > that should incur the overhead.
> > 
> > 	M.
> I'll try implementing this with requests. Thanks.

I've no objection to using a request, avoiding the extra checks is quite nice,
and x86's fastpath loop would Just Work (though VMX's handle_invalid_guest_state()
still needs manually checking).

The only gotchas are that it'll likely require a small amount of #ifdeffery, and
the quota will need to be "temporarily" captured in 'struct kvm_vcpu' because
there's no guarantee that the next exit to userspace will be due to the dirty
quota, i.e. something else might overwrite the exit union in vcpu->run.
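
A rough illustration of the "temporarily captured" idea (all names hypothetical): snapshot the numbers in struct kvm_vcpu at the moment the quota trips, and only fill the run->dirty_quota_exit union on the exit path that actually delivers the dirty-quota exit, so an unrelated exit in between cannot clobber them:

	/* Hypothetical per-vcpu snapshot taken when the quota trips. */
	struct kvm_dirty_quota_snapshot {
		u64 count;
		u64 quota;
	};

	/* Called only on the exit path that delivers the dirty-quota exit. */
	static void kvm_fill_dirty_quota_exit(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;

		run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
		run->dirty_quota_exit.count = vcpu->dirty_quota_snapshot.count;
		run->dirty_quota_exit.quota = vcpu->dirty_quota_snapshot.quota;
	}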

Patch

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..5b6a239b83a5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -848,6 +848,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	ret = 1;
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	while (ret > 0) {
+		ret = kvm_vcpu_check_dirty_quota(vcpu);
+		if (!ret)
+			break;
 		/*
 		 * Check conditions before entering the guest
 		 */