From patchwork Fri Apr  3 21:45:48 2009
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 16242
Date: Fri, 3 Apr 2009 18:45:48 -0300
From: Marcelo Tosatti
To: Avi Kivity
Cc: Aurelien Jarno, kvm@vger.kernel.org
Subject: Re: cr3 OOS optimisation breaks 32-bit GNU/kFreeBSD guest
Message-ID: <20090403214548.GA5394@amt.cnet>
References: <20090223003305.GW12976@hall.aurel32.net>
 <20090320231405.GA26415@amt.cnet>
 <49C60644.2090904@redhat.com>
 <20090323172725.GA28775@amt.cnet>
 <49C8AC35.3030803@redhat.com>
In-Reply-To: <49C8AC35.3030803@redhat.com>

On Tue, Mar 24, 2009 at 11:47:33AM +0200, Avi Kivity wrote:
>> index 2ea8262..48169d7 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -3109,6 +3109,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>>  		kvm_write_guest_time(vcpu);
>>  	if (test_and_clear_bit(KVM_REQ_MMU_SYNC, &vcpu->requests))
>>  		kvm_mmu_sync_roots(vcpu);
>> +	if (test_and_clear_bit(KVM_REQ_MMU_GLOBAL_SYNC, &vcpu->requests))
>> +		kvm_mmu_sync_global(vcpu);
>>  	if (test_and_clear_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
>>  		kvm_x86_ops->tlb_flush(vcpu);
>>  	if (test_and_clear_bit(KVM_REQ_REPORT_TPR_ACCESS
>
> Windows will (I think) write a PDE on every context switch, so this
> effectively disables global unsync for that guest.
>
> What about recursively syncing the newly linked page in FNAME(fetch)()?
> If the page isn't global, this becomes a no-op, so no new overhead. The
> only question is the expense when linking a populated top-level page,
> especially in long mode.

How about this?

KVM: MMU: sync global pages on fetch()

If an unsync global page becomes unreachable via the shadow tree, which
can happen if one of its parent pages is zapped, invlpg will fail to
invalidate translations for gvas contained in such unreachable pages.

So sync global pages in fetch().
Signed-off-by: Marcelo Tosatti

---

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 09782a9..728be72 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -308,8 +308,14 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			break;
 		}
 
-		if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
+		if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
+			if (level-1 == PT_PAGE_TABLE_LEVEL) {
+				shadow_page = page_header(__pa(sptep));
+				if (shadow_page->unsync && shadow_page->global)
+					kvm_sync_page(vcpu, shadow_page);
+			}
 			continue;
+		}
 
 		if (is_large_pte(*sptep)) {
 			rmap_remove(vcpu->kvm, sptep);