
[17/21] KVM: arm64: Use common code's approach for __GFP_ZERO with memory caches

Message ID 20200605213853.14959-18-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: Cleanup and unify kvm_mmu_memory_cache usage

Commit Message

Sean Christopherson June 5, 2020, 9:38 p.m. UTC
Add a "gfp_zero" member to arm64's 'struct kvm_mmu_memory_cache' to make
the struct and its usage compatible with the common 'struct
kvm_mmu_memory_cache' in linux/kvm_host.h.  This will minimize code
churn when arm64 moves to the common implementation in a future patch, at
the cost of temporarily having somewhat silly code.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/arm.c              | 2 ++
 arch/arm64/kvm/mmu.c              | 5 +++--
 3 files changed, 6 insertions(+), 2 deletions(-)
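
For context, the common structure this patch is converging toward looks roughly like the sketch below; the exact field names and the array-size macro are assumptions based on the arm64 definition in this patch and the rest of the series, not a verbatim copy of linux/kvm_host.h.

  /*
   * Rough sketch of the common cache structure this series introduces in
   * include/linux/kvm_host.h; field names and the array-size macro are
   * approximations based on the arm64 definition shown in this patch.
   */
  struct kvm_mmu_memory_cache {
          int nobjs;
          gfp_t gfp_zero;                 /* __GFP_ZERO if the arch wants zeroed pages */
          struct kmem_cache *kmem_cache;  /* optional dedicated slab, NULL for page allocs */
          void *objects[KVM_NR_MEM_OBJS];
  };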

Comments

Marc Zyngier June 11, 2020, 7:59 a.m. UTC | #1
Hi Sean,

On 2020-06-05 22:38, Sean Christopherson wrote:
> Add a "gfp_zero" member to arm64's 'struct kvm_mmu_memory_cache' to 
> make
> the struct and its usage compatible with the common 'struct
> kvm_mmu_memory_cache' in linux/kvm_host.h.  This will minimize code
> churn when arm64 moves to the common implementation in a future patch, 
> at
> the cost of temporarily having somewhat silly code.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 +
>  arch/arm64/kvm/arm.c              | 2 ++
>  arch/arm64/kvm/mmu.c              | 5 +++--
>  3 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h
> b/arch/arm64/include/asm/kvm_host.h
> index abbdf9703e20..2385dede96e0 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -105,6 +105,7 @@ struct kvm_arch {
>   */
>  struct kvm_mmu_memory_cache {
>  	int nobjs;
> +	gfp_t gfp_zero;
>  	void *objects[KVM_NR_MEM_OBJS];
>  };
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 45276ed50dd6..4c98c6b4d850 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -270,6 +270,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.target = -1;
>  	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> 
> +	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> +
>  	/* Set up the timer */
>  	kvm_timer_vcpu_init(vcpu);
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 9398b66f8a87..688213ef34f0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -131,7 +131,8 @@ static int mmu_topup_memory_cache(struct
> kvm_mmu_memory_cache *cache, int min)
>  	if (cache->nobjs >= min)
>  		return 0;
>  	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
> -		page = (void *)__get_free_page(GFP_PGTABLE_USER);
> +		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT |

This is definitely a change in the way we account for guest
page tables allocation, although I find it bizarre that not
all architectures account for it the same way.

It seems logical to me that nested page tables would be accounted
against userspace, but I'm willing to be educated on the matter.

Another possibility is that, depending on the context, some allocations
should be accounted to either the kernel or userspace (NV on arm64
could definitely do something like that). If that were the case,
maybe most of the GFP_* flags could move into the per-cache flags,
along with the renaming that Ben suggested earlier.

Thanks,

         M.
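
A minimal sketch of what Marc's per-cache flags idea might look like, assuming the cache carries a full 'gfp_t gfp' field rather than just the zeroing flag; the names mirror the existing arm64 code, and nothing here is part of this series:

  /*
   * Hypothetical sketch of the suggestion above: carry the full set of GFP
   * flags in the cache so that each cache decides how (and to whom) its
   * allocations are accounted.  Not part of this series.
   */
  struct kvm_mmu_memory_cache {
          int nobjs;
          gfp_t gfp;      /* e.g. GFP_KERNEL_ACCOUNT | __GFP_ZERO */
          void *objects[KVM_NR_MEM_OBJS];
  };

  static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache, int min)
  {
          void *page;

          if (cache->nobjs >= min)
                  return 0;
          while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
                  /* All accounting/zeroing behavior comes from the per-cache flags. */
                  page = (void *)__get_free_page(cache->gfp);
                  if (!page)
                          return -ENOMEM;
                  cache->objects[cache->nobjs++] = page;
          }
          return 0;
  }
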
Sean Christopherson June 11, 2020, 3:43 p.m. UTC | #2
On Thu, Jun 11, 2020 at 08:59:05AM +0100, Marc Zyngier wrote:
> >diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> >index 9398b66f8a87..688213ef34f0 100644
> >--- a/arch/arm64/kvm/mmu.c
> >+++ b/arch/arm64/kvm/mmu.c
> >@@ -131,7 +131,8 @@ static int mmu_topup_memory_cache(struct
> >kvm_mmu_memory_cache *cache, int min)
> > 	if (cache->nobjs >= min)
> > 		return 0;
> > 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
> >-		page = (void *)__get_free_page(GFP_PGTABLE_USER);
> >+		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT |
> 
> This is definitely a change in the way we account for guest
> page tables allocation, although I find it bizarre that not
> all architectures account for it the same way.

It's not intended to be a functional change, i.e. the allocations should
still be accounted:

  #define GFP_PGTABLE_USER  (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
  |
  -> #define GFP_PGTABLE_KERNEL        (GFP_KERNEL | __GFP_ZERO)

  == GFP_KERNEL | __GFP_ACCOUNT | __GFP_ZERO

versus 

  #define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)

    with __GFP_ZERO explicitly OR'd in

  == GFP_KERNEL | __GFP_ACCOUNT | __GFP_ZERO

I can put the above in the changelog, unless of course it's wrong and I've
missed something.

> It seems logical to me that nested page tables would be accounted
> against userspace, but I'm willing to be educated on the matter.
> 
> Another possibility is that, depending on the context, some allocations
> should be accounted to either the kernel or userspace (NV on arm64
> could definitely do something like that). If that were the case,
> maybe most of the GFP_* flags could move into the per-cache flags,
> along with the renaming that Ben suggested earlier.
> 
> Thanks,
> 
>         M.
> -- 
> Jazz is not dead. It just smells funny...
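
A quick way to double-check the equivalence Sean spells out above is a local compile-time assertion; this is only a sanity check under the macro definitions quoted in the thread, not something the patch adds, and the exact header providing GFP_PGTABLE_USER is assumed:

  #include <linux/build_bug.h>
  #include <linux/gfp.h>
  #include <asm/pgalloc.h>        /* assumed location of GFP_PGTABLE_USER */

  /*
   * GFP_PGTABLE_USER   = (GFP_KERNEL | __GFP_ZERO) | __GFP_ACCOUNT
   * GFP_KERNEL_ACCOUNT =  GFP_KERNEL | __GFP_ACCOUNT
   * so the old and new flag combinations are bit-for-bit identical.
   */
  static inline void check_mmu_cache_gfp_equivalence(void)
  {
          BUILD_BUG_ON(GFP_PGTABLE_USER != (GFP_KERNEL_ACCOUNT | __GFP_ZERO));
  }
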
Marc Zyngier June 11, 2020, 3:51 p.m. UTC | #3
On 2020-06-11 16:43, Sean Christopherson wrote:
> On Thu, Jun 11, 2020 at 08:59:05AM +0100, Marc Zyngier wrote:
>> >diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> >index 9398b66f8a87..688213ef34f0 100644
>> >--- a/arch/arm64/kvm/mmu.c
>> >+++ b/arch/arm64/kvm/mmu.c
>> >@@ -131,7 +131,8 @@ static int mmu_topup_memory_cache(struct
>> >kvm_mmu_memory_cache *cache, int min)
>> > 	if (cache->nobjs >= min)
>> > 		return 0;
>> > 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
>> >-		page = (void *)__get_free_page(GFP_PGTABLE_USER);
>> >+		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT |
>> 
>> This is definitely a change in the way we account for guest
>> page tables allocation, although I find it bizarre that not
>> all architectures account for it the same way.
> 
> It's not intended to be a functional change, i.e. the allocations should
> still be accounted:
> 
>   #define GFP_PGTABLE_USER  (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
>   |
>   -> #define GFP_PGTABLE_KERNEL        (GFP_KERNEL | __GFP_ZERO)
> 
>   == GFP_KERNEL | __GFP_ACCOUNT | __GFP_ZERO
> 
> versus
> 
>   #define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
> 
>     with __GFP_ZERO explicitly OR'd in
> 
>   == GFP_KERNEL | __GFP_ACCOUNT | __GFP_ZERO
> 
> I can put the above in the changelog, unless of course it's wrong and I've
> missed something.

Ah, good point. Serves me right for judging the symbol at face value! ;-)
I guess a quick mention in the changelog wouldn't hurt.

Thanks,

         M.

Patch

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index abbdf9703e20..2385dede96e0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -105,6 +105,7 @@  struct kvm_arch {
  */
 struct kvm_mmu_memory_cache {
 	int nobjs;
+	gfp_t gfp_zero;
 	void *objects[KVM_NR_MEM_OBJS];
 };
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 45276ed50dd6..4c98c6b4d850 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -270,6 +270,8 @@  int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.target = -1;
 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
 
+	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9398b66f8a87..688213ef34f0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -131,7 +131,8 @@  static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache, int min)
 	if (cache->nobjs >= min)
 		return 0;
 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		page = (void *)__get_free_page(GFP_PGTABLE_USER);
+		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT |
+					       cache->gfp_zero);
 		if (!page)
 			return -ENOMEM;
 		cache->objects[cache->nobjs++] = page;
@@ -1342,7 +1343,7 @@  int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	phys_addr_t addr, end;
 	int ret = 0;
 	unsigned long pfn;
-	struct kvm_mmu_memory_cache cache = { 0, };
+	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, };
 
 	end = (guest_ipa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(pa);
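
One possible follow-up for the last hunk: now that the struct has more than one leading field, a designated initializer would express the stack-allocated cache's intent without depending on field order. A sketch of that alternative (not what this patch does):

  /*
   * Equivalent to { 0, __GFP_ZERO, }: unnamed fields are zero-initialized,
   * and only the zeroing flag is set explicitly.
   */
  struct kvm_mmu_memory_cache cache = {
          .gfp_zero = __GFP_ZERO,
  };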