From patchwork Wed May  5 01:03:49 2010
X-Patchwork-Submitter: Gui Jianfeng
X-Patchwork-Id: 96945
Message-ID: <4BE0C3F5.1010907@cn.fujitsu.com>
Date: Wed, 05 May 2010 09:03:49 +0800
From: Gui Jianfeng
To: Avi Kivity, Marcelo Tosatti
CC: kvm@vger.kernel.org
Subject: [PATCH] KVM: make kvm_mmu_zap_page() return the number of pages it actually freed.

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 51eb6d6..8ab6820 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1503,6 +1503,8 @@ static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (sp->unsync)
 		kvm_unlink_unsync_page(kvm, sp);
 	if (!sp->root_count) {
+		/* Count self */
+		ret++;
 		hlist_del(&sp->hash_link);
 		kvm_mmu_free_page(kvm, sp);
 	} else {
@@ -1539,7 +1541,6 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages)
 			page = container_of(kvm->arch.active_mmu_pages.prev,
 					    struct kvm_mmu_page, link);
 			used_pages -= kvm_mmu_zap_page(kvm, page);
-			used_pages--;
 		}
 		kvm_nr_mmu_pages = used_pages;
 		kvm->arch.n_free_mmu_pages = 0;
@@ -2908,7 +2909,7 @@ static int kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm)

 	page = container_of(kvm->arch.active_mmu_pages.prev,
 			    struct kvm_mmu_page, link);
-	return kvm_mmu_zap_page(kvm, page) + 1;
+	return kvm_mmu_zap_page(kvm, page);
 }

 static int mmu_shrink(int nr_to_scan, gfp_t gfp_mask)
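
For reference, a minimal user-space sketch of the accounting change (plain C with hypothetical names, not kernel code): zap_page() stands in for kvm_mmu_zap_page(). Before the patch it returned only the number of zapped children, so callers subtracted one more for the page itself; after the patch the page is included in the return value and that extra adjustment goes away.

/*
 * Toy model of the accounting this patch changes.  All names here are
 * hypothetical; only the counting behaviour mirrors the diff above.
 */
#include <stdio.h>

static int zap_page(int zapped_children, int root_count)
{
	int freed = zapped_children;	/* children are always freed */

	if (root_count == 0)
		freed++;		/* count the page itself (this patch) */

	return freed;
}

int main(void)
{
	int used_pages = 10;

	/* Caller side: no trailing "used_pages--" any more. */
	used_pages -= zap_page(3, 0);
	printf("used_pages = %d\n", used_pages);	/* prints 6 */

	return 0;
}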