[v8,3/3] KVM: arm64: Dirty quota-based throttling of vcpus

Message ID 20230225204758.17726-4-shivam.kumar1@nutanix.com (mailing list archive)
State New, archived
Series KVM: Dirty quota-based throttling

Commit Message

Shivam Kumar Feb. 25, 2023, 8:48 p.m. UTC
Call update_dirty_quota whenever a page is marked dirty, passing the
appropriate arch-specific page size. Process the KVM request
KVM_REQ_DIRTY_QUOTA_EXIT (raised by update_dirty_quota) to exit to
userspace with exit reason KVM_EXIT_DIRTY_QUOTA_EXHAUSTED.

Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
---
 arch/arm64/kvm/Kconfig | 1 +
 arch/arm64/kvm/arm.c   | 7 +++++++
 arch/arm64/kvm/mmu.c   | 3 +++
 3 files changed, 11 insertions(+)
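
For reference, a rough sketch of the generic update_dirty_quota() helper
this patch wires up (the helper itself is added earlier in the series).
The run->dirty_quota and stat.generic.pages_dirtied names are assumptions
for illustration, not the literal code from that patch:

/*
 * Illustrative sketch only -- the real helper lives in the generic
 * dirty-quota patch of this series and may differ in detail.
 */
void update_dirty_quota(struct kvm *kvm, unsigned long page_size_bytes)
{
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

	if (!vcpu || vcpu->kvm != kvm)
		return;

	/* Account the dirtied bytes as pages of the base page size. */
	vcpu->stat.generic.pages_dirtied += page_size_bytes >> PAGE_SHIFT;

	/* A quota of 0 is assumed to mean throttling is disabled. */
	if (vcpu->run->dirty_quota &&
	    vcpu->stat.generic.pages_dirtied >= vcpu->run->dirty_quota)
		kvm_make_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu);
}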

Comments

Marc Zyngier Feb. 27, 2023, 1:49 a.m. UTC | #1
On Sat, 25 Feb 2023 20:48:01 +0000,
Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> 
> Call update_dirty_quota whenever a page is marked dirty with
> appropriate arch-specific page size. Process the KVM request
> KVM_REQ_DIRTY_QUOTA_EXIT (raised by update_dirty_quota) to exit to
> userspace with exit reason KVM_EXIT_DIRTY_QUOTA_EXHAUSTED.
> 
> Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
> Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
> ---
>  arch/arm64/kvm/Kconfig | 1 +
>  arch/arm64/kvm/arm.c   | 7 +++++++
>  arch/arm64/kvm/mmu.c   | 3 +++
>  3 files changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index ca6eadeb7d1a..8e7dea2c3a9f 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -44,6 +44,7 @@ menuconfig KVM
>  	select SCHED_INFO
>  	select GUEST_PERF_EVENTS if PERF_EVENTS
>  	select INTERVAL_TREE
> +	select HAVE_KVM_DIRTY_QUOTA

So this is selected unconditionally...

>  	help
>  	  Support hosting virtualized guest machines.
>  
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 3bd732eaf087..5162b2fc46a1 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -757,6 +757,13 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>  
>  		if (kvm_dirty_ring_check_request(vcpu))
>  			return 0;
> +
> +#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA

... and yet you litter the arch code with #ifdefs...

> +		if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
> +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
> +			return 0;

What rechecks the quota on entry?

> +		}
> +#endif
>  	}
>  
>  	return 1;
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 7113587222ff..baf416046f46 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1390,6 +1390,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	/* Mark the page dirty only if the fault is handled successfully */
>  	if (writable && !ret) {
>  		kvm_set_pfn_dirty(pfn);
> +#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA
> +		update_dirty_quota(kvm, fault_granule);

fault_granule isn't necessarily the amount that gets dirtied.

	M.
Shivam Kumar March 4, 2023, 11:37 a.m. UTC | #2
On 27/02/23 7:19 am, Marc Zyngier wrote:
> On Sat, 25 Feb 2023 20:48:01 +0000,
> Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>>
>> Call update_dirty_quota whenever a page is marked dirty with
>> appropriate arch-specific page size. Process the KVM request
>> KVM_REQ_DIRTY_QUOTA_EXIT (raised by update_dirty_quota) to exit to
>> userspace with exit reason KVM_EXIT_DIRTY_QUOTA_EXHAUSTED.
>>
>> Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
>> Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
>> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
>> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
>> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
>> ---
>>   arch/arm64/kvm/Kconfig | 1 +
>>   arch/arm64/kvm/arm.c   | 7 +++++++
>>   arch/arm64/kvm/mmu.c   | 3 +++
>>   3 files changed, 11 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>> index ca6eadeb7d1a..8e7dea2c3a9f 100644
>> --- a/arch/arm64/kvm/Kconfig
>> +++ b/arch/arm64/kvm/Kconfig
>> @@ -44,6 +44,7 @@ menuconfig KVM
>>   	select SCHED_INFO
>>   	select GUEST_PERF_EVENTS if PERF_EVENTS
>>   	select INTERVAL_TREE
>> +	select HAVE_KVM_DIRTY_QUOTA
> 
> So this is selected unconditionally...
> 
>>   	help
>>   	  Support hosting virtualized guest machines.
>>   
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 3bd732eaf087..5162b2fc46a1 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -757,6 +757,13 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>>   
>>   		if (kvm_dirty_ring_check_request(vcpu))
>>   			return 0;
>> +
>> +#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA
> 
> ... and yet you litter the arch code with #ifdefs...

Sorry about that. #ifdefs are not required here.

> 
>> +		if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
>> +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
>> +			return 0;
> 
> What rechecks the quota on entry?

Right now, we do not re-check the quota on re-entry. So, if userspace
doesn't update the quota, the vcpu is allowed to run until it tries to
dirty memory again.

I think it's a good idea to check the quota on entry and keep exiting to
userspace until the quota is a positive value. I can add this in the
next patchset.
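
A minimal sketch of what such a re-check could look like, purely for
illustration (hypothetical helper name, same assumed run->dirty_quota and
pages_dirtied fields as the generic patch of this series):

/* Hypothetical helper, called from check_vcpu_requests() before entry. */
static int kvm_vcpu_check_dirty_quota(struct kvm_vcpu *vcpu)
{
	u64 quota = READ_ONCE(vcpu->run->dirty_quota);

	/* Quota of 0: throttling disabled, nothing to re-check. */
	if (!quota || vcpu->stat.generic.pages_dirtied < quota)
		return 1;	/* quota available, enter the guest */

	/* Still exhausted: bounce back to userspace until it is topped up. */
	vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
	return 0;
}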

Thanks.

> 
>> +		}
>> +#endif
>>   	}
>>   
>>   	return 1;
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 7113587222ff..baf416046f46 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1390,6 +1390,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   	/* Mark the page dirty only if the fault is handled successfully */
>>   	if (writable && !ret) {
>>   		kvm_set_pfn_dirty(pfn);
>> +#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA
>> +		update_dirty_quota(kvm, fault_granule);
> 
> fault_granule isn't necessarily the amount that gets dirtied.
> 
> 	M.
> 

For most of the paths where we update the quota, we cannot track (or
precisely account for) dirtying at a granularity finer than the minimum
page size. Looking forward to your thoughts on what we can do better
here. Thanks.


Thanks,
Shivam

Patch

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ca6eadeb7d1a..8e7dea2c3a9f 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -44,6 +44,7 @@ menuconfig KVM
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
 	select INTERVAL_TREE
+	select HAVE_KVM_DIRTY_QUOTA
 	help
 	  Support hosting virtualized guest machines.
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3bd732eaf087..5162b2fc46a1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -757,6 +757,13 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 
 		if (kvm_dirty_ring_check_request(vcpu))
 			return 0;
+
+#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA
+		if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
+			vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
+			return 0;
+		}
+#endif
 	}
 
 	return 1;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7113587222ff..baf416046f46 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1390,6 +1390,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret) {
 		kvm_set_pfn_dirty(pfn);
+#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA
+		update_dirty_quota(kvm, fault_granule);
+#endif
 		mark_page_dirty_in_slot(kvm, memslot, gfn);
 	}
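
For completeness, a rough sketch of how a VMM run loop might consume the
new exit reason. The dirty_quota field of struct kvm_run comes from the
generic patch of this series, and next_dirty_quota() is a hypothetical
VMM-side policy function, shown only as an example:

	/* Inside the vcpu thread, after ioctl(vcpu_fd, KVM_RUN, 0). */
	switch (run->exit_reason) {
	case KVM_EXIT_DIRTY_QUOTA_EXHAUSTED:
		/*
		 * The vcpu has dirtied its allotted pages. Throttle it by
		 * waiting for the migration thread to grant more quota,
		 * then re-enter the guest with the new value.
		 */
		run->dirty_quota = next_dirty_quota(vcpu_id);
		break;
	default:
		/* handle other exit reasons as usual */
		break;
	}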