[RFC,03/28] kvm: mmu: Zero page cache memory at allocation time

Message ID 20190926231824.149014-4-bgardon@google.com (mailing list archive)
State New, archived
Series kvm: mmu: Rework the x86 TDP direct mapped case

Commit Message

Ben Gardon Sept. 26, 2019, 11:17 p.m. UTC
Simplify use of the MMU page cache by allocating pages pre-zeroed. This
ensures that future code does not accidentally add non-zeroed memory to
the paging structure and moves the work of zeroing pages out from
under the MMU lock.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
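
To make the intent concrete, here is a minimal standalone sketch, written as a
userspace analogue with hypothetical names (page_cache, cache_topup, cache_pop)
rather than the kernel code itself: the zeroing cost moves from the point of
use, where the MMU lock is held, to the cache refill path, which runs before
the lock is taken.

#include <stdlib.h>

#define PAGE_SIZE	4096
#define CACHE_CAPACITY	4

/* Hypothetical stand-in for struct kvm_mmu_memory_cache. */
struct page_cache {
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/*
 * Refill runs in a sleepable context before the MMU lock is taken.
 * Asking the allocator for zeroed memory here (calloc() standing in for
 * GFP_KERNEL_ACCOUNT | __GFP_ZERO) keeps the zeroing work off the
 * lock-holding path.
 */
static int cache_topup(struct page_cache *cache, int min)
{
	while (cache->nobjs < CACHE_CAPACITY) {
		void *page = calloc(1, PAGE_SIZE);

		if (!page)
			return cache->nobjs >= min ? 0 : -1;
		cache->objects[cache->nobjs++] = page;
	}
	return 0;
}

/*
 * The consumer runs with the lock held; every cached page is already
 * zeroed, so the clear_page()-style step it used to do is redundant.
 */
static void *cache_pop(struct page_cache *cache)
{
	return cache->objects[--cache->nobjs];
}

int main(void)
{
	struct page_cache cache = { 0 };
	unsigned char *page;

	if (cache_topup(&cache, 1))
		return 1;
	page = cache_pop(&cache);
	return page[0];	/* always 0: no explicit memset() was needed */
}

The two hunks below are the kernel-side version of the same tradeoff:
__GFP_ZERO at topup time, and no clear_page() at use time.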

Comments

Sean Christopherson Nov. 27, 2019, 6:32 p.m. UTC | #1
On Thu, Sep 26, 2019 at 04:17:59PM -0700, Ben Gardon wrote:
> Simplify use of the MMU page cache by allocating pages pre-zeroed. This
> ensures that future code does not accidentally add non-zeroed memory to
> the paging structure and moves the work of zeroing pages out from
> under the MMU lock.

Ha, this *just* came up in a different series[*].  Unless there is a hard
dependency on the rest of this series, it'd be nice to tackle this
separately so that we can fully understand the tradeoffs.  And it could be
merged early/independently as well.

[*] https://patchwork.kernel.org/patch/11228487/#23025353

> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/mmu.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 7e5ab9c6e2b09..1ecd6d51c0ee0 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1037,7 +1037,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
>  	if (cache->nobjs >= min)
>  		return 0;
>  	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
> -		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
> +		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
>  		if (!page)
>  			return cache->nobjs >= min ? 0 : -ENOMEM;
>  		cache->objects[cache->nobjs++] = page;
> @@ -2548,7 +2548,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  		if (level > PT_PAGE_TABLE_LEVEL && need_sync)
>  			flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
>  	}
> -	clear_page(sp->spt);
>  	trace_kvm_mmu_get_page(sp, true);
>  
>  	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
> -- 
> 2.23.0.444.g18eeb5a265-goog
>

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7e5ab9c6e2b09..1ecd6d51c0ee0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1037,7 +1037,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
 	if (cache->nobjs >= min)
 		return 0;
 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
+		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 		if (!page)
 			return cache->nobjs >= min ? 0 : -ENOMEM;
 		cache->objects[cache->nobjs++] = page;
@@ -2548,7 +2548,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		if (level > PT_PAGE_TABLE_LEVEL && need_sync)
 			flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
 	}
-	clear_page(sp->spt);
 	trace_kvm_mmu_get_page(sp, true);
 
 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
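
For context on why dropping clear_page() is safe: sp->spt is popped off the
per-vCPU page cache (the one topped up by mmu_topup_memory_cache_page() above)
while the MMU lock is held. The fragment below is a rough paraphrase of that
allocation path, not a verbatim excerpt from mmu.c, and the helper name is
illustrative:

/* Illustrative paraphrase of the shadow-page allocation path in mmu.c. */
static struct kvm_mmu_page *alloc_shadow_page_sketch(struct kvm_vcpu *vcpu)
{
	struct kvm_mmu_page *sp;

	sp = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	/* This page comes from the cache filled with __GFP_ZERO above... */
	sp->spt = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
	/* ...so it arrives already zeroed and the clear_page(sp->spt) removed
	 * by this patch was redundant work done under the lock. */
	return sp;
}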