From patchwork Thu May 6 09:31:23 2010
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 97318
Message-ID: <4BE28C6B.8010505@cn.fujitsu.com>
Date: Thu, 06 May 2010 17:31:23 +0800
From: Xiao Guangrong
To: Avi Kivity
CC: Marcelo Tosatti, KVM list, LKML
Subject: [PATCH v4 6/9] KVM MMU: support keeping sp live while it's out of protection
References: <4BE2818A.5000301@cn.fujitsu.com>
In-Reply-To: <4BE2818A.5000301@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 58cf0f1..8ab1a49 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -894,6 +894,7 @@ static int is_empty_shadow_page(u64 *spt)
 static void kvm_mmu_free_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
+	hlist_del(&sp->hash_link);
 	list_del(&sp->link);
 	__free_page(virt_to_page(sp->spt));
 	__free_page(virt_to_page(sp->gfns));
@@ -1539,13 +1540,14 @@ static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 		unaccount_shadowed(kvm, sp->gfn);
 	if (sp->unsync)
 		kvm_unlink_unsync_page(kvm, sp);
-	if (!sp->active_count) {
-		hlist_del(&sp->hash_link);
+	if (!sp->active_count)
 		kvm_mmu_free_page(kvm, sp);
-	} else {
+	else {
 		sp->role.invalid = 1;
 		list_move(&sp->link, &kvm->arch.active_mmu_pages);
-		kvm_reload_remote_mmus(kvm);
+		/* No need reload mmu if it's unsync page zapped */
+		if (sp->role.level != PT_PAGE_TABLE_LEVEL)
+			kvm_reload_remote_mmus(kvm);
 	}
 	kvm_mmu_reset_last_pte_updated(kvm);
 	return ret;
@@ -1781,7 +1783,8 @@ static void kvm_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
 
 	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
 	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
-		if (s->gfn != gfn || s->role.direct || s->unsync)
+		if (s->gfn != gfn || s->role.direct || s->unsync ||
+		      s->role.invalid)
 			continue;
 		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
 		__kvm_unsync_page(vcpu, s);
@@ -1806,7 +1809,7 @@ static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 		if (s->role.level != PT_PAGE_TABLE_LEVEL)
 			return 1;
 
-		if (!need_unsync && !s->unsync) {
+		if (!need_unsync && !s->unsync && !s->role.invalid) {
 			if (!can_unsync || !oos_shadow)
 				return 1;
 			need_unsync = true;