From patchwork Tue Aug 24 02:31:07 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiaotian Feng
X-Patchwork-Id: 125141
From: Xiaotian Feng
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Xiaotian Feng, Marcelo Tosatti,
    Dave Hansen, Tim Pepper
Subject: [PATCH -kvm] kvm: fix regression from rework KVM mmu_shrink() code
Date: Tue, 24 Aug 2010 10:31:07 +0800
Message-Id: <1282617067-7686-1-git-send-email-dfeng@redhat.com>
In-Reply-To: <20100824020721.GA14726@amt.cnet>
References: <20100824020721.GA14726@amt.cnet>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f52a965..0991de3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1724,10 +1724,9 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 
 			page = container_of(kvm->arch.active_mmu_pages.prev,
 					    struct kvm_mmu_page, link);
-			kvm_mmu_prepare_zap_page(kvm, page,
-						 &invalid_list);
+			kvm_mmu_prepare_zap_page(kvm, page, &invalid_list);
+			kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		}
-		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
 	}
 
@@ -2976,9 +2975,9 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
 				  struct kvm_mmu_page, link);
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
+		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 		++vcpu->kvm->stat.mmu_recycled;
 	}
-	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }
 
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)
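
Why the hunks above move kvm_mmu_commit_zap_page() inside the loops: in
the arch/x86/kvm/mmu.c of this period, kvm_mmu_prepare_zap_page() only
unlinks a shadow page onto invalid_list; kvm->arch.n_used_mmu_pages is
not decremented until kvm_mmu_commit_zap_page() actually frees the list
through kvm_mmu_free_page(). With the commit hoisted out of the while
loop, the loop condition on n_used_mmu_pages never makes progress. The
standalone sketch below is a minimal model of that accounting, not the
kernel code itself; the model_*() helpers and the page counts are
illustrative stand-ins, not kernel API.

#include <stdio.h>

static int n_used_mmu_pages = 8;        /* pages on the active list */
static int invalid_list_len;            /* prepared but not yet freed */

/* Like kvm_mmu_prepare_zap_page(): only move a page to invalid_list. */
static void model_prepare_zap_page(void)
{
        invalid_list_len++;
}

/* Like kvm_mmu_commit_zap_page(): free the list; the counter drops here. */
static void model_commit_zap_page(void)
{
        n_used_mmu_pages -= invalid_list_len;
        invalid_list_len = 0;
}

int main(void)
{
        int goal_nr_mmu_pages = 4;

        /*
         * Mirrors the fixed kvm_mmu_change_mmu_pages() loop: committing
         * inside the loop lets the condition observe the decrement. With
         * the commit placed after the loop, as in the regressed code,
         * n_used_mmu_pages would stay at 8 and the loop would only stop
         * once the active list ran dry.
         */
        while (n_used_mmu_pages > goal_nr_mmu_pages) {
                model_prepare_zap_page();
                model_commit_zap_page();
                printf("used=%d goal=%d\n", n_used_mmu_pages,
                       goal_nr_mmu_pages);
        }
        return 0;
}

The same reasoning covers the __kvm_mmu_free_some_pages() hunk: its
loop is likewise bounded through n_used_mmu_pages (via the available
page count), which only moves at commit time.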