
[v6,2/4] KVM: mmu: add a helper to account memory used by KVM MMU.

Message ID 20220628220938.3657876-3-yosryahmed@google.com (mailing list archive)
State New
Series KVM: mm: count KVM mmu usage in memory stats

Commit Message

Yosry Ahmed June 28, 2022, 10:09 p.m. UTC
Add a helper to account pages used by KVM for page tables in the
secondary pagetable memory stats. This function will be used by
subsequent patches in different archs.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 include/linux/kvm_host.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
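
A minimal usage sketch of how an architecture might pair the helper with
allocating and freeing a page-table page (illustrative only; the example_*
names and the allocation flags are assumptions, not code from the later
patches in this series):

/* Hypothetical caller: account a single page-table page on alloc and free. */
static void *example_alloc_pgtable_page(void)
{
	void *virt = (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);

	if (virt)
		kvm_account_pgtable_pages(virt, +1);	/* account one page */
	return virt;
}

static void example_free_pgtable_page(void *virt)
{
	if (!virt)
		return;
	kvm_account_pgtable_pages(virt, -1);	/* unaccount before freeing */
	free_page((unsigned long)virt);
}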

Comments

Marc Zyngier June 29, 2022, 10:30 a.m. UTC | #1
On Tue, 28 Jun 2022 23:09:36 +0100,
Yosry Ahmed <yosryahmed@google.com> wrote:
> 
> Add a helper to account pages used by KVM for page tables in the
> secondary pagetable memory stats. This function will be used by
> subsequent patches in different archs.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>

Acked-by: Marc Zyngier <maz@kernel.org>

	M.
Sean Christopherson July 7, 2022, 9:08 p.m. UTC | #2
On Tue, Jun 28, 2022, Yosry Ahmed wrote:
> Add a helper to account pages used by KVM for page tables in the
> secondary pagetable memory stats. This function will be used by
> subsequent patches in different archs.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> ---
>  include/linux/kvm_host.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 3b40f8d68fbb1..032821d77e920 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2241,6 +2241,16 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
>  }
>  #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
>  
> +/*
> + * If more than one page is being (un)accounted, @virt must be the address of
> + * the first page of a block of pages that were allocated together (i.e.
> + * accounted together).

Sorry for the belated thoughts...

If you spin a v7, can you add a note to call out that mod_lruvec_page_state() is
itself thread-safe?  Caught my eye because the TDP MMU usage happens while holding
mmu_lock for read.

> + */
> +static inline void kvm_account_pgtable_pages(void *virt, int nr)
> +{
> +	mod_lruvec_page_state(virt_to_page(virt), NR_SECONDARY_PAGETABLE, nr);
> +}
> +
>  /*
>   * This defines how many reserved entries we want to keep before we
>   * kick the vcpu to the userspace to avoid dirty ring full.  This
> -- 
> 2.37.0.rc0.161.g10f37bed90-goog
>
Yosry Ahmed July 12, 2022, 11:03 p.m. UTC | #3
On Thu, Jul 7, 2022 at 2:08 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Jun 28, 2022, Yosry Ahmed wrote:
> > Add a helper to account pages used by KVM for page tables in the
> > secondary pagetable memory stats. This function will be used by
> > subsequent patches in different archs.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > ---
> >  include/linux/kvm_host.h | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 3b40f8d68fbb1..032821d77e920 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -2241,6 +2241,16 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
> >  }
> >  #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
> >
> > +/*
> > + * If more than one page is being (un)accounted, @virt must be the address of
> > + * the first page of a block of pages that were allocated together (i.e.
> > + * accounted together).
>
> Sorry for the belated thoughts...
>
> If you spin a v7, can you add a note to call out that mod_lruvec_page_state() is
> itself thread-safe?  Caught my eye because the TDP MMU usage happens while holding
> mmu_lock for read.
>

Sure! I will send a v7 anyway to address the comments on patch 1. Thanks!

> > + */
> > +static inline void kvm_account_pgtable_pages(void *virt, int nr)
> > +{
> > +     mod_lruvec_page_state(virt_to_page(virt), NR_SECONDARY_PAGETABLE, nr);
> > +}
> > +
> >  /*
> >   * This defines how many reserved entries we want to keep before we
> >   * kick the vcpu to the userspace to avoid dirty ring full.  This
> > --
> > 2.37.0.rc0.161.g10f37bed90-goog
> >
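
For reference, a sketch of how the thread-safety note requested above might be
folded into the comment in a later respin (illustrative wording, not the actual
v7 text):

/*
 * If more than one page is being (un)accounted, @virt must be the address of
 * the first page of a block of pages that were allocated together (i.e.
 * accounted together).
 *
 * kvm_account_pgtable_pages() is thread-safe because mod_lruvec_page_state()
 * is itself thread-safe, so it is safe to call while e.g. the TDP MMU holds
 * mmu_lock for read.
 */
static inline void kvm_account_pgtable_pages(void *virt, int nr)
{
	mod_lruvec_page_state(virt_to_page(virt), NR_SECONDARY_PAGETABLE, nr);
}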

Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3b40f8d68fbb1..032821d77e920 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2241,6 +2241,16 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 }
 #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
 
+/*
+ * If more than one page is being (un)accounted, @virt must be the address of
+ * the first page of a block of pages that were allocated together (i.e.
+ * accounted together).
+ */
+static inline void kvm_account_pgtable_pages(void *virt, int nr)
+{
+	mod_lruvec_page_state(virt_to_page(virt), NR_SECONDARY_PAGETABLE, nr);
+}
+
 /*
  * This defines how many reserved entries we want to keep before we
  * kick the vcpu to the userspace to avoid dirty ring full.  This
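
The "(un)accounted together" rule in the comment matters once a page-table
structure spans more than one page. A hypothetical sketch, assuming a 4-page
(order-2) block; the allocation flags and order are illustrative:

	/* Pages allocated together are (un)accounted through the first page. */
	void *pgd = (void *)__get_free_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 2);

	if (!pgd)
		return -ENOMEM;
	kvm_account_pgtable_pages(pgd, 4);	/* 2^2 = 4 pages */

	/* ... and later, before freeing the same block ... */
	kvm_account_pgtable_pages(pgd, -4);
	free_pages((unsigned long)pgd, 2);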