[1/3] kvm: don't hold page count reference for pages mapped by sptes

Message ID 1253731638-24575-2-git-send-email-ieidus@redhat.com (mailing list archive)
State New, archived

Commit Message

Izik Eidus Sept. 23, 2009, 6:47 p.m. UTC
When using mmu notifiers, we are allowed to drop the page count
reference taken by get_user_pages on a page that is mapped inside
the shadow page tables.

This is needed so that checks which balance the page count against
the map count give a meaningful result.

(Right now kvm increases the page count but does not increase the
map count when mapping a page into a shadow page table entry, so
comparing the page count against the map count yields no reliable
result.)

Signed-off-by: Izik Eidus <ieidus@redhat.com>
---
 arch/x86/kvm/mmu.c |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)
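
To make the motivation concrete, here is a minimal sketch (not part of
the patch) of the kind of page count vs. map count check this change
enables, modeled loosely on the test KSM performs before merging a
page. The helper name is hypothetical:

	#include <linux/mm.h>

	/*
	 * True if an anonymous page's only users are its page table
	 * mappings plus the single reference the caller itself took
	 * with get_user_pages().
	 */
	static int page_has_no_extra_refs(struct page *page)
	{
		/* the +1 accounts for the caller's own reference */
		return page_count(page) == page_mapcount(page) + 1;
	}

Before this patch, a page mapped by a shadow pte carried an extra
page count reference but no map count, so such a comparison failed
for every page kvm had mapped.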

Comments

Marcelo Tosatti Sept. 24, 2009, 2:18 p.m. UTC | #1
This needs compat code for the !MMU_NOTIFIERS case in kvm-kmod (Jan CC'ed).

Otherwise looks good.

On Wed, Sep 23, 2009 at 09:47:16PM +0300, Izik Eidus wrote:
> When using mmu notifiers, we are allowed to drop the page count
> reference taken by get_user_pages on a page that is mapped inside
> the shadow page tables.
> 
> This is needed so that checks which balance the page count against
> the map count give a meaningful result.
> 
> (Right now kvm increases the page count but does not increase the
> map count when mapping a page into a shadow page table entry, so
> comparing the page count against the map count yields no reliable
> result.)
> 
> Signed-off-by: Izik Eidus <ieidus@redhat.com>
> ---
>  arch/x86/kvm/mmu.c |    7 ++-----
>  1 files changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index eca41ae..6c67b23 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -634,9 +634,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
>  	if (*spte & shadow_accessed_mask)
>  		kvm_set_pfn_accessed(pfn);
>  	if (is_writeble_pte(*spte))
> -		kvm_release_pfn_dirty(pfn);
> -	else
> -		kvm_release_pfn_clean(pfn);
> +		kvm_set_pfn_dirty(pfn);
>  	rmapp = gfn_to_rmap(kvm, sp->gfns[spte - sp->spt], sp->role.level);
>  	if (!*rmapp) {
>  		printk(KERN_ERR "rmap_remove: %p %llx 0->BUG\n", spte, *spte);
> @@ -1877,8 +1875,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>  	page_header_update_slot(vcpu->kvm, sptep, gfn);
>  	if (!was_rmapped) {
>  		rmap_count = rmap_add(vcpu, sptep, gfn);
> -		if (!is_rmap_spte(*sptep))
> -			kvm_release_pfn_clean(pfn);
> +		kvm_release_pfn_clean(pfn);
>  		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
>  			rmap_recycle(vcpu, sptep, gfn);
>  	} else {
> -- 
> 1.5.6.5
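
A hypothetical sketch of the fallback Marcelo asks for (not the actual
kvm-kmod change): without mmu notifiers there is no callback to zap
sptes before a page goes away, so the get_user_pages() reference must
be held for the lifetime of the spte and released in rmap_remove(), as
it was before this patch. The hunk touched above would then become:

	#ifdef CONFIG_MMU_NOTIFIER
		if (is_writeble_pte(*spte))
			kvm_set_pfn_dirty(pfn);
	#else
		/* no notifier will zap the spte: drop our reference here */
		if (is_writeble_pte(*spte))
			kvm_release_pfn_dirty(pfn);
		else
			kvm_release_pfn_clean(pfn);
	#endif
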
Andrea Arcangeli Sept. 24, 2009, 4:56 p.m. UTC | #2
On Wed, Sep 23, 2009 at 09:47:16PM +0300, Izik Eidus wrote:
> When using mmu notifiers, we are allowed to drop the page count
> reference taken by get_user_pages on a page that is mapped inside
> the shadow page tables.
> 
> This is needed so that checks which balance the page count against
> the map count give a meaningful result.
> 
> (Right now kvm increases the page count but does not increase the
> map count when mapping a page into a shadow page table entry, so
> comparing the page count against the map count yields no reliable
> result.)
> 
> Signed-off-by: Izik Eidus <ieidus@redhat.com>

Acked-by: Andrea Arcangeli <aarcange@redhat.com>

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index eca41ae..6c67b23 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -634,9 +634,7 @@  static void rmap_remove(struct kvm *kvm, u64 *spte)
 	if (*spte & shadow_accessed_mask)
 		kvm_set_pfn_accessed(pfn);
 	if (is_writeble_pte(*spte))
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
+		kvm_set_pfn_dirty(pfn);
 	rmapp = gfn_to_rmap(kvm, sp->gfns[spte - sp->spt], sp->role.level);
 	if (!*rmapp) {
 		printk(KERN_ERR "rmap_remove: %p %llx 0->BUG\n", spte, *spte);
@@ -1877,8 +1875,7 @@  static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	page_header_update_slot(vcpu->kvm, sptep, gfn);
 	if (!was_rmapped) {
 		rmap_count = rmap_add(vcpu, sptep, gfn);
-		if (!is_rmap_spte(*sptep))
-			kvm_release_pfn_clean(pfn);
+		kvm_release_pfn_clean(pfn);
 		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
 			rmap_recycle(vcpu, sptep, gfn);
 	} else {
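
With the patch applied, the reference taken when the pfn is looked up
lives only until the spte is installed. A condensed view of the new
lifecycle (a sketch assembled from the surrounding code, not a literal
excerpt):

	pfn = gfn_to_pfn(vcpu->kvm, gfn); /* get_user_pages takes a reference */
	...
	mmu_set_spte(...);                /* installs the spte; for a newly
	                                     rmapped page it now drops the
	                                     reference right away with
	                                     kvm_release_pfn_clean(pfn) */

From then on the spte holds no page reference. Safety relies on the
mmu notifiers: before the kernel frees or migrates the page it calls
back into kvm (kvm_unmap_hva()), which zaps the spte, and rmap_remove()
only propagates the hardware accessed/dirty bits back to the page via
kvm_set_pfn_accessed() and kvm_set_pfn_dirty().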