From patchwork Thu Sep 10 16:38:56 2009
X-Patchwork-Submitter: Izik Eidus
X-Patchwork-Id: 46624
From: Izik Eidus
To: avi@redhat.com
Cc: kvm@vger.kernel.org, aarcange@redhat.com, Izik Eidus
Subject: [PATCH 1/3] kvm: don't hold pagecount reference for mapped sptes pages
Date: Thu, 10 Sep 2009 19:38:56 +0300
Message-Id: <1252600738-9456-2-git-send-email-ieidus@redhat.com>
In-Reply-To: <1252600738-9456-1-git-send-email-ieidus@redhat.com>
References: <1252600738-9456-1-git-send-email-ieidus@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

When using mmu notifiers, we are allowed to drop the page count reference
taken by get_user_pages on a page that is mapped inside the shadow page
tables.

This is needed so the pagecount can be balanced against mapcount checking.
(Right now kvm increases the pagecount but does not increase the mapcount
when mapping a page into a shadow page table entry, so comparing pagecount
against mapcount gives no reliable result.)

Signed-off-by: Izik Eidus
---
 arch/x86/kvm/mmu.c |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f76d086..62d2f86 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -634,9 +634,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 	if (*spte & shadow_accessed_mask)
 		kvm_set_pfn_accessed(pfn);
 	if (is_writeble_pte(*spte))
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
+		kvm_set_pfn_dirty(pfn);
 	rmapp = gfn_to_rmap(kvm, sp->gfns[spte - sp->spt], sp->role.level);
 	if (!*rmapp) {
 		printk(KERN_ERR "rmap_remove: %p %llx 0->BUG\n", spte, *spte);
@@ -1877,8 +1875,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	page_header_update_slot(vcpu->kvm, sptep, gfn);
 	if (!was_rmapped) {
 		rmap_count = rmap_add(vcpu, sptep, gfn);
-		if (!is_rmap_spte(*sptep))
-			kvm_release_pfn_clean(pfn);
+		kvm_release_pfn_clean(pfn);
 		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
 			rmap_recycle(vcpu, sptep, gfn);
 	} else {
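
For context (not part of the patch): a minimal sketch of the kind of
pagecount-vs-mapcount comparison this change makes reliable. The helper
name, the "swapped" parameter, and the "+1" for the caller's own reference
are assumptions modeled on how KSM-style code balances the two counters,
not something taken from this series; page_count() and page_mapcount()
are the real mm helpers.

#include <linux/mm.h>

/*
 * Sketch only -- hypothetical helper.  Assumes kvm no longer holds an
 * extra get_user_pages() reference on pages mapped by sptes.  "swapped"
 * is 1 when the page also sits in the swap cache, and the +1 accounts
 * for the caller's own reference.  If the counts balance, nobody else
 * holds a stray reference to the page.
 */
static bool page_refs_balanced(struct page *page, int swapped)
{
	return page_mapcount(page) + 1 + swapped == page_count(page);
}

Before this patch, the extra reference kvm kept on a page mapped by an
spte would make page_count() exceed page_mapcount() even when the spte
was the only user, so a check like the above could never balance.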