From patchwork Mon Sep 17 02:07:43 2012
X-Patchwork-Submitter: "Hao, Xudong"
X-Patchwork-Id: 1464781
From: "Hao, Xudong"
To: Avi Kivity , Marcelo Tosatti
CC: "kvm@vger.kernel.org" , "Zhang, Xiantao"
Subject: RE: [PATCH v3] kvm/fpu: Enable fully eager restore kvm FPU
Date: Mon, 17 Sep 2012 02:07:43 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FEC2B2A@SHSMSX102.ccr.corp.intel.com>
References:
 <1347437424-3006-1-git-send-email-xudong.hao@intel.com>
 <20120913162636.GA10191@amt.cnet>
 <20120913162929.GB10191@amt.cnet>
 <50520C67.8020609@redhat.com>
In-Reply-To: <50520C67.8020609@redhat.com>

> -----Original Message-----
> From: Avi Kivity [mailto:avi@redhat.com]
> Sent: Friday, September 14, 2012 12:40 AM
> To: Marcelo Tosatti
> Cc: Hao, Xudong; kvm@vger.kernel.org; Zhang, Xiantao
> Subject: Re: [PATCH v3] kvm/fpu: Enable fully eager restore kvm FPU
>
> On 09/13/2012 07:29 PM, Marcelo Tosatti wrote:
> > On Thu, Sep 13, 2012 at 01:26:36PM -0300, Marcelo Tosatti wrote:
> >> On Wed, Sep 12, 2012 at 04:10:24PM +0800, Xudong Hao wrote:
> >> > Enable KVM FPU fully eager restore, if there is other FPU state which isn't
> >> > tracked by CR0.TS bit.
> >> >
> >> > v3 changes from v2:
> >> > - Make fpu active explicitly while guest xsave is enabling and non-lazy
> >> >   xstate bit exist.
> >>
> >> How about a "guest_xcr0_can_lazy_saverestore" bool to control this?
> >> It only needs to be updated when guest xcr0 is updated.
> >>
> >> That seems cleaner. Avi?
> >
> > Reasoning below.
> >
> >> > v2 changes from v1:
> >> > - Expand KVM_XSTATE_LAZY to 64 bits before negating it.
> >> >
> >> > Signed-off-by: Xudong Hao
> >> > ---
> >> >  arch/x86/include/asm/kvm.h |    4 ++++
> >> >  arch/x86/kvm/vmx.c         |    2 ++
> >> >  arch/x86/kvm/x86.c         |   15 ++++++++++++++-
> >> >  3 files changed, 20 insertions(+), 1 deletions(-)
> >> >
> >> > diff --git a/arch/x86/include/asm/kvm.h b/arch/x86/include/asm/kvm.h
> >> > index 521bf25..4c27056 100644
> >> > --- a/arch/x86/include/asm/kvm.h
> >> > +++ b/arch/x86/include/asm/kvm.h
> >> > @@ -8,6 +8,8 @@
> >> >
> >> >  #include
> >> >  #include
> >> > +#include
> >> > +#include
> >> >
> >> >  /* Select x86 specific features in */
> >> >  #define __KVM_HAVE_PIT
> >> > @@ -30,6 +32,8 @@
> >> >  /* Architectural interrupt line count. */
> >> >  #define KVM_NR_INTERRUPTS 256
> >> >
> >> > +#define KVM_XSTATE_LAZY (XSTATE_FP | XSTATE_SSE | XSTATE_YMM)
> >> > +
> >> >  struct kvm_memory_alias {
> >> >  	__u32 slot;  /* this has a different namespace than memory slots */
> >> >  	__u32 flags;
> >> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >> > index 248c2b4..853e875 100644
> >> > --- a/arch/x86/kvm/vmx.c
> >> > +++ b/arch/x86/kvm/vmx.c
> >> > @@ -3028,6 +3028,8 @@ static void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
> >> >
> >> >  	if (!vcpu->fpu_active)
> >> >  		hw_cr0 |= X86_CR0_TS | X86_CR0_MP;
> >> > +	else
> >> > +		hw_cr0 &= ~(X86_CR0_TS | X86_CR0_MP);
> >> >
> >> >  	vmcs_writel(CR0_READ_SHADOW, cr0);
> >> >  	vmcs_writel(GUEST_CR0, hw_cr0);
> >> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >> > index 20f2266..183cf60 100644
> >> > --- a/arch/x86/kvm/x86.c
> >> > +++ b/arch/x86/kvm/x86.c
> >> > @@ -560,6 +560,8 @@ int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
> >> >  		return 1;
> >> >  	if (xcr0 & ~host_xcr0)
> >> >  		return 1;
> >> > +	if (xcr0 & ~((u64)KVM_XSTATE_LAZY))
> >> > +		vcpu->fpu_active = 1;
> >
> > This is confusing. The variable allows to decrease the number of places
> > the decision is made.
>
> Better to have a helper function (lazy_fpu_allowed(), for example).
> Variables raise the question of whether they are maintained correctly.

I realized that modifying the fpu_active variable directly is incorrect: the
exception bitmap must be updated along with it. To avoid depending on the
order in which CR0 and the XCRs are set in the live migration case, how about
calling fpu_activate() in kvm_set_xcr() instead? I can add a code comment at
that call site.

Thanks,
-Xudong

---
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index be6d549..e4646d9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -574,6 +574,9 @@ int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 		kvm_inject_gp(vcpu, 0);
 		return 1;
 	}
+	if (xcr & ~((u64)KVM_XSTATE_LAZY))
+		/* Allow fpu eager restore */
+		kvm_x86_ops->fpu_activate(vcpu);
 	return 0;
 }