
[v10,0/3] Per-vCPU dirty quota-based throttling

Message ID 20240221195125.102479-1-shivam.kumar1@nutanix.com (mailing list archive)

Message

Shivam Kumar Feb. 21, 2024, 7:51 p.m. UTC
This patchset introduces a new mechanism (dirty-quota-based
throttling) to throttle the rate at which memory pages can be dirtied.
This is done by setting a limit on the number of bytes  that each vCPU
is allowed to dirty at a time, until it is allocated additional quota.

This new throttling mechanism is exposed to userspace through a new
KVM capability, KVM_CAP_DIRTY_QUOTA. If this capability is enabled by
userspace, each vCPU will exit to userspace (with exit reason
KVM_EXIT_DIRTY_QUOTA_EXHAUSTED) as soon as its dirty quota is
exhausted (in other words, a given vCPU will exit to userspace as soon
as it has dirtied as many bytes as the limit set for it). When the
vCPU exits to userspace, userspace may increase the dirty quota of the
vCPU (after optionally sleeping for an appropriate period of time) so
that it can continue dirtying more memory.
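
For illustration, the userspace flow might look roughly like the
following sketch. This is not part of the series: the exact placement
of the dirty_quota field in struct kvm_run, the quota size and the
refill policy below are assumptions made for the example, so please
refer to the patches and the api.rst update for the actual interface.

#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define QUOTA_BYTES     (64UL << 20)    /* illustrative: 64 MB per grant */

static void vcpu_run_loop(int vm_fd, int vcpu_fd, struct kvm_run *run)
{
        struct kvm_enable_cap cap = { .cap = KVM_CAP_DIRTY_QUOTA };

        /* Opt in to dirty-quota throttling and grant an initial quota. */
        ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
        run->dirty_quota = QUOTA_BYTES;

        for (;;) {
                ioctl(vcpu_fd, KVM_RUN, NULL);

                if (run->exit_reason == KVM_EXIT_DIRTY_QUOTA_EXHAUSTED) {
                        /* Optionally sleep to throttle, then refill. */
                        usleep(1000);
                        run->dirty_quota += QUOTA_BYTES;
                        continue;
                }
                /* ... handle the remaining exit reasons as usual ... */
        }
}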

Dirty-quota-based throttling is a very effective choice for live
migration, for the following reasons:

1. With dirty-quota-based throttling, we can precisely set the amount
of memory we can afford to dirty for the migration to converge within
a reasonable time. This is much more effective than the current
state-of-the-art auto-converge mechanism, which implements time-based
throttling (making vCPUs sleep for some time to throttle dirtying):
some workloads can dirty a huge amount of memory even if their vCPUs
are given only a very small interval to run, causing migrations to
take longer and possibly fail to converge.

2. While the current auto-converge mechanism makes the whole VM sleep
to throttle memory dirtying, dirty-quota-based throttling can
selectively throttle vCPUs (i.e. only the vCPUs dirtying more than a
threshold are made to sleep). Furthermore, if we choose very small
intervals for computing and enforcing the dirty quota, we can achieve
micro-stunning (i.e. stunning the vCPUs precisely when they are
dirtying the memory). Both of these behaviours help the
dirty-quota-based scheme to throttle only those vCPUs that are
dirtying memory, and only while they are dirtying it. Hence, while the
current auto-converge scheme is prone to throttling reads and writes
equally, dirty-quota-based throttling has minimal impact on read
performance.

3. Dirty-quota-based throttling can adapt quickly to changes in
network bandwidth if it is enforced over very small intervals. In
other words, we can take the currently available network bandwidth
into account when computing an appropriate dirty quota for the next
interval (see the sketch below).
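
As an example, the per-vCPU quota for the next interval could be
derived from the measured bandwidth with a policy along these lines
(illustrative only, not part of this series; the even split across
vCPUs and the 5% headroom are arbitrary choices):

#include <stdint.h>

/* Split the bytes the network can absorb in the next interval across
 * vCPUs, keeping a little headroom for protocol overhead.
 */
static uint64_t quota_for_next_interval(uint64_t bw_bytes_per_sec,
                                        uint64_t interval_us,
                                        unsigned int nr_vcpus)
{
        uint64_t budget = bw_bytes_per_sec * interval_us / 1000000;

        return budget * 95 / 100 / nr_vcpus;    /* keep 5% headroom */
}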

The benefits of dirty-quota-based throttling are not limited to live
migration.  The dirty-quota mechanism can also be leveraged to
support other use cases that would benefit from effective throttling
of memory writes.  The update_dirty_quota hook in the implementation
can be used outside the context of live migration, but note that such
alternative uses must also write-protect the memory.

We have evaluated dirty-quota-based throttling using two key metrics:
A. Live migration performance (time to migrate)
B. Guest performance during live migration

We have used a synthetic workload that dirties memory sequentially in
a loop. It is characterised by three variables: m, n and l. A given
instance of this workload (m=x, n=y, l=z) dirties x GB of memory with
y threads at a rate of z GBps. In the following table, b is the
network bandwidth configured for the live migration, t_curr is the
total time to migrate with the current throttling logic and t_dq is
the total time to migrate with dirty-quota-based throttling.
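
For reference, each thread of this workload behaves roughly like the
sketch below (simplified: 4 KB pages, fixed 10 ms ticks, and each
thread gets an x/y GB region to dirty at z/y GBps; the real workload
differs in details):

#include <stddef.h>
#include <unistd.h>

struct wl {
        volatile char *buf;     /* this thread's x/y GB region */
        size_t size;            /* bytes in the region */
        size_t bytes_per_tick;  /* (z/y) GBps * 10 ms, in bytes */
};

static void *dirty_thread(void *arg)
{
        struct wl *w = arg;
        size_t off = 0, n;

        for (;;) {
                /* Touch one byte per page to dirty pages sequentially. */
                for (n = 0; n < w->bytes_per_tick; n += 4096) {
                        w->buf[off] = 1;
                        off = (off + 4096) % w->size;
                }
                usleep(10000);  /* pace the writes to the target rate */
        }
        return NULL;            /* pthread-style entry; never reached */
}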

    A. Live migration performance

+--------+----+----------+----------+---------------+----------+----------+
| m (GB) |  n | l (GBps) | b (MBps) |    t_curr (s) | t_dq (s) | Diff (%) |
+--------+----+----------+----------+---------------+----------+----------+
|      8 |  2 |     8.00 |      640 |         60.38 |    15.22 |     74.8 |
|     16 |  4 |     1.26 |      640 |         75.99 |    32.22 |     57.6 |
|     32 |  6 |     0.10 |      640 |         49.81 |    49.80 |      0.0 |
|     48 |  8 |     2.20 |      640 |        287.78 |   115.65 |     59.8 |
|     32 |  6 |    32.00 |      640 |        364.30 |    84.26 |     76.9 |
|      8 |  2 |     8.00 |      128 |        452.91 |    94.99 |     79.0 |
|    512 | 32 |     0.10 |      640 |        868.94 |   841.92 |      3.1 |
|     16 |  4 |     1.26 |       64 |       1538.94 |   426.21 |     72.3 |
|     32 |  6 |     1.80 |     1024 |       1406.80 |   452.82 |     67.8 |
|    512 | 32 |     7.20 |      640 |       4561.30 |   906.60 |     80.1 |
|    128 | 16 |     3.50 |      128 |       7009.98 |  1689.61 |     75.9 |
|     16 |  4 |    16.00 |       64 | "Unconverged" |   461.47 |      N/A |
|     32 |  6 |    32.00 |      128 | "Unconverged" |   454.27 |      N/A |
|    512 | 32 |   512.00 |      640 | "Unconverged" |   917.37 |      N/A |
|    128 | 16 |   128.00 |      128 | "Unconverged" |  1946.00 |      N/A |
+--------+----+----------+----------+---------------+----------+----------+

    B. Guest performance

+---------------------+-------------------+-------------------+----------+
|        Case         | Guest Runtime (%) | Guest Runtime (%) | Diff (%) |
|                     |     (Current)     |   (Dirty Quota)   |          |
+=====================+===================+===================+==========+
| Write-intensive     | 26.4              | 35.3              |     33.7 |
+---------------------+-------------------+-------------------+----------+
| Read-write-balanced | 40.6              | 70.8              |     74.4 |
+---------------------+-------------------+-------------------+----------+
| Read-intensive      | 63.1              | 81.8              |     29.6 |
+---------------------+-------------------+-------------------+----------+

Guest Runtime (%) in the above table is the percentage of time a
guest vCPU is actually running, averaged across all vCPUs of the
guest; Diff is the relative improvement over the current scheme, e.g.
(70.8 - 40.6) / 40.6 = 74.4% for the read-write-balanced case. For B,
we ran variants of the aforementioned synthetic workload, dirtying
memory sequentially in a loop on some threads and just reading memory
sequentially on the others. We have also conducted similar experiments
with more realistic workloads, e.g. Redis, and obtained similar
results.

Dirty-quota-based throttling was presented in KVM Forum 2021. Please
find the details here:
https://kvmforum2021.sched.com/event/ke4A/dirty-quota-based-vm-live-migration-auto-converge-manish-mishra-shivam-kumar-nutanix-india

The current v10 patchset includes the following changes over v9:

1. Use vma_pagesize as the dirty granularity for updating dirty quota
on arm64.
2. Do not update dirty quota for instances where the hypervisor is
writing into guest memory. Accounting for these instances in vCPUs'
dirty quota is unfair to the vCPUs. Also, some of these instances,
such as record_steal_time, repeatedly mark the same set of pages
dirty. To avoid these distortions, we had previously relied on
checking the dirty bitmap to avoid redundantly updating quotas. Since
we have now decoupled dirty-quota-based throttling from the
live-migration dirty-tracking path, we have resolved this issue by
simply not accounting these hypervisor-induced writes to guest memory.
Through extensive experiments, we have verified that this new approach
is approximately as effective as the prior approach that relied on
checking the dirty bitmap.

v1:
https://lore.kernel.org/kvm/20211114145721.209219-1-shivam.kumar1@xxxxxxxxxxx/
v2: https://lore.kernel.org/kvm/Ydx2EW6U3fpJoJF0@xxxxxxxxxx/T/
v3: https://lore.kernel.org/kvm/YkT1kzWidaRFdQQh@xxxxxxxxxx/T/
v4:
https://lore.kernel.org/all/20220521202937.184189-1-shivam.kumar1@xxxxxxxxxxx/
v5: https://lore.kernel.org/all/202209130532.2BJwW65L-lkp@xxxxxxxxx/T/
v6:
https://lore.kernel.org/all/20220915101049.187325-1-shivam.kumar1@xxxxxxxxxxx/
v7:
https://lore.kernel.org/all/a64d9818-c68d-1e33-5783-414e9a9bdbd1@xxxxxxxxxxx/t/
v8:
https://lore.kernel.org/all/20230225204758.17726-1-shivam.kumar1@nutanix.com/
v9:
https://lore.kernel.org/kvm/20230504144328.139462-1-shivam.kumar1@nutanix.com/

Thanks,
Shivam

Shivam Kumar (3):
  KVM: Implement dirty quota-based throttling of vcpus
  KVM: x86: Dirty quota-based throttling of vcpus
  KVM: arm64: Dirty quota-based throttling of vcpus

 Documentation/virt/kvm/api.rst | 17 +++++++++++++++++
 arch/arm64/kvm/Kconfig         |  1 +
 arch/arm64/kvm/arm.c           |  5 +++++
 arch/arm64/kvm/mmu.c           |  1 +
 arch/x86/kvm/Kconfig           |  1 +
 arch/x86/kvm/mmu/mmu.c         |  6 +++++-
 arch/x86/kvm/mmu/spte.c        |  1 +
 arch/x86/kvm/vmx/vmx.c         |  3 +++
 arch/x86/kvm/x86.c             |  6 +++++-
 include/linux/kvm_host.h       |  9 +++++++++
 include/uapi/linux/kvm.h       |  8 ++++++++
 tools/include/uapi/linux/kvm.h |  1 +
 virt/kvm/Kconfig               |  3 +++
 virt/kvm/kvm_main.c            | 27 +++++++++++++++++++++++++++
 14 files changed, 87 insertions(+), 2 deletions(-)

Comments

Shivam Kumar March 21, 2024, 5:48 a.m. UTC | #1
> On 22-Feb-2024, at 1:22 AM, Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> 
> The current v10 patchset includes the following changes over v9:
> 
> 1. Use vma_pagesize as the dirty granularity for updating dirty quota
> on arm64.
> 2. Do not update dirty quota for instances where the hypervisor is
> writing into guest memory. Accounting for these instances in vCPUs'
> dirty quota is unfair to the vCPUs. Also, some of these instances,
> such as record_steal_time, repeatedly mark the same set of pages
> dirty. To avoid these distortions, we had previously relied on
> checking the dirty bitmap to avoid redundantly updating quotas. Since
> we have now decoupled dirty-quota-based throttling from the
> live-migration dirty-tracking path, we have resolved this issue by
> simply not accounting these hypervisor-induced writes to guest memory.
> Through extensive experiments, we have verified that this new approach
> is approximately as effective as the prior approach that relied on
> checking the dirty bitmap.
> 

Hi Marc,

I’ve tried my best to address all the concerns raised in the previous patchset. I’d really appreciate it if you could share your thoughts and any feedback you might have on this one.

Thanks,
Shivam
Marc Zyngier April 4, 2024, 9:19 a.m. UTC | #2
On Thu, 21 Mar 2024 05:48:01 +0000,
Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> 
> 
> > On 22-Feb-2024, at 1:22 AM, Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> > 
> > The current v10 patchset includes the following changes over v9:
> > 
> > 1. Use vma_pagesize as the dirty granularity for updating dirty quota
> > on arm64.
> > 2. Do not update dirty quota for instances where the hypervisor is
> > writing into guest memory. Accounting for these instances in vCPUs'
> > dirty quota is unfair to the vCPUs. Also, some of these instances,
> > such as record_steal_time, repeatedly mark the same set of pages
> > dirty. To avoid these distortions, we had previously relied on
> > checking the dirty bitmap to avoid redundantly updating quotas. Since
> > we have now decoupled dirty-quota-based throttling from the
> > live-migration dirty-tracking path, we have resolved this issue by
> > simply not accounting these hypervisor-induced writes to guest memory.
> > Through extensive experiments, we have verified that this new approach
> > is approximately as effective as the prior approach that relied on
> > checking the dirty bitmap.
> > 
> 
> Hi Marc,
> 
> I’ve tried my best to address all the concerns raised in the
> previous patchset. I’d really appreciate it if you could share your
> thoughts and any feedback you might have on this one.

I'll get to it at some point. However, given that it has taken you the
best part of a year to respin this, I need to page it all back in,
which is going to take a bit of time as well.

Thanks,

	M.
Sean Christopherson April 16, 2024, 5:44 p.m. UTC | #3
On Wed, Feb 21, 2024, Shivam Kumar wrote:
> v1:
> https://lore.kernel.org/kvm/20211114145721.209219-1-shivam.kumar1@xxxxxxxxxxx/
> v2: https://lore.kernel.org/kvm/Ydx2EW6U3fpJoJF0@xxxxxxxxxx/T/
> v3: https://lore.kernel.org/kvm/YkT1kzWidaRFdQQh@xxxxxxxxxx/T/
> v4:
> https://lore.kernel.org/all/20220521202937.184189-1-shivam.kumar1@xxxxxxxxxxx/
> v5: https://lore.kernel.org/all/202209130532.2BJwW65L-lkp@xxxxxxxxx/T/
> v6:
> https://lore.kernel.org/all/20220915101049.187325-1-shivam.kumar1@xxxxxxxxxxx/
> v7:
> https://lore.kernel.org/all/a64d9818-c68d-1e33-5783-414e9a9bdbd1@xxxxxxxxxxx/t/

These links are all busted, which was actually quite annoying because I wanted to
go back and look at Marc's input.

> v8:
> https://lore.kernel.org/all/20230225204758.17726-1-shivam.kumar1@nutanix.com/
> v9:
> https://lore.kernel.org/kvm/20230504144328.139462-1-shivam.kumar1@nutanix.com/
Shivam Kumar April 18, 2024, 10:42 a.m. UTC | #4
> On 16-Apr-2024, at 11:14 PM, Sean Christopherson <seanjc@google.com> wrote:
> On Wed, Feb 21, 2024, Shivam Kumar wrote:
>> v1:
>> https://lore.kernel.org/kvm/20211114145721.209219-1-shivam.kumar1@xxxxxxxxxxx/
>> v2: https://lore.kernel.org/kvm/Ydx2EW6U3fpJoJF0@xxxxxxxxxx/T/
>> v3: https://lore.kernel.org/kvm/YkT1kzWidaRFdQQh@xxxxxxxxxx/T/
>> v4:
>> https://lore.kernel.org/all/20220521202937.184189-1-shivam.kumar1@xxxxxxxxxxx/
>> v5: https://lore.kernel.org/all/202209130532.2BJwW65L-lkp@xxxxxxxxx/T/
>> v6:
>> https://lore.kernel.org/all/20220915101049.187325-1-shivam.kumar1@xxxxxxxxxxx/
>> v7:
>> https://lore.kernel.org/all/a64d9818-c68d-1e33-5783-414e9a9bdbd1@xxxxxxxxxxx/t/
> 
> These links are all busted, which was actually quite annoying because I wanted to
> go back and look at Marc's input.
Extremely sorry about that. Will fix them. I didn’t realise this when I copied the links from the previous patch.

Thanks,
Shivam
Shivam Kumar April 18, 2024, 10:46 a.m. UTC | #5
> On 04-Apr-2024, at 2:49 PM, Marc Zyngier <maz@kernel.org> wrote:
> On Thu, 21 Mar 2024 05:48:01 +0000,
> Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>> 
>> 
>>> On 22-Feb-2024, at 1:22 AM, Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>>> 
>>> The current v10 patchset includes the following changes over v9:
>>> 
>>> 1. Use vma_pagesize as the dirty granularity for updating dirty quota
>>> on arm64.
>>> 2. Do not update dirty quota for instances where the hypervisor is
>>> writing into guest memory. Accounting for these instances in vCPUs'
>>> dirty quota is unfair to the vCPUs. Also, some of these instances,
>>> such as record_steal_time, repeatedly mark the same set of pages
>>> dirty. To avoid these distortions, we had previously relied on
>>> checking the dirty bitmap to avoid redundantly updating quotas. Since
>>> we have now decoupled dirty-quota-based throttling from the
>>> live-migration dirty-tracking path, we have resolved this issue by
>>> simply not accounting these hypervisor-induced writes to guest memory.
>>> Through extensive experiments, we have verified that this new approach
>>> is approximately as effective as the prior approach that relied on
>>> checking the dirty bitmap.
>>> 
>> 
>> Hi Marc,
>> 
>> I’ve tried my best to address all the concerns raised in the
>> previous patchset. I’d really appreciate it if you could share your
>> thoughts and any feedback you might have on this one.
> 
> I'll get to it at some point. However, given that it has taken you the
> best part of a year to respin this, I need to page it all back in,
> which is going to take a bit of time as well.
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.
> 
No problem. Thank you for acknowledging.

Thanks,
Shivam