From patchwork Tue Aug 21 11:27:33 2012
X-Patchwork-Submitter: "Nikunj A. Dadhania"
X-Patchwork-Id: 1354351
Subject: [PATCH v4 6/8] KVM-HV: Add flush_on_enter before guest enter
To: mtosatti@redhat.com, avi@redhat.com
From: "Nikunj A. Dadhania"
Cc: raghukt@linux.vnet.ibm.com, alex.shi@intel.com, kvm@vger.kernel.org,
    stefano.stabellini@eu.citrix.com, peterz@infradead.org, hpa@zytor.com,
    vsrivatsa@gmail.com, mingo@elte.hu
Date: Tue, 21 Aug 2012 16:57:33 +0530
Message-ID: <20120821112725.3512.11171.stgit@abhimanyu>
In-Reply-To: <20120821112346.3512.99814.stgit@abhimanyu.in.ibm.com>
References: <20120821112346.3512.99814.stgit@abhimanyu.in.ibm.com>
User-Agent: StGit/0.16-2-g0d85

A PV-flush guest can mark a TLB flush as pending in its shared vcpu state.
Honour that request by flushing the TLB before entering the guest, and by
queueing a flush request when the flag is found set while exiting the guest.

Signed-off-by: Nikunj A. Dadhania
---
 arch/x86/kvm/x86.c |   28 ++++++++++++----------------
 1 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 43f2c19..07fdb0f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1557,20 +1557,9 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
 		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
 }
 
-static void kvm_set_atomic(u64 *addr, u64 old, u64 new)
-{
-	int loop = 1000000;
-	while (1) {
-		if (cmpxchg(addr, old, new) == old)
-			break;
-		loop--;
-		if (!loop) {
-			pr_info("atomic cur: %lx old: %lx new: %lx\n",
-				*addr, old, new);
-			break;
-		}
-	}
-}
+#define VS_NOT_IN_GUEST (0)
+#define VS_IN_GUEST (1 << KVM_VCPU_STATE_IN_GUEST_MODE)
+#define VS_SHOULD_FLUSH (1 << KVM_VCPU_STATE_SHOULD_FLUSH)
 
 static void kvm_set_vcpu_state(struct kvm_vcpu *vcpu)
 {
@@ -1584,7 +1573,13 @@ static void kvm_set_vcpu_state(struct kvm_vcpu *vcpu)
 	kaddr = kmap_atomic(vcpu->arch.v_state.vs_page);
 	kaddr += vcpu->arch.v_state.vs_offset;
 	vs = kaddr;
-	kvm_set_atomic(&vs->state, 0, 1 << KVM_VCPU_STATE_IN_GUEST_MODE);
+	if (xchg(&vs->state, VS_IN_GUEST) == VS_SHOULD_FLUSH) {
+		/*
+		 * Do TLB_FLUSH before entering the guest, it has passed
+		 * the stage of request checking
+		 */
+		kvm_x86_ops->tlb_flush(vcpu);
+	}
 	kunmap_atomic(kaddr);
 }
 
@@ -1600,7 +1595,8 @@ static void kvm_clear_vcpu_state(struct kvm_vcpu *vcpu)
 	kaddr = kmap_atomic(vcpu->arch.v_state.vs_page);
 	kaddr += vcpu->arch.v_state.vs_offset;
 	vs = kaddr;
-	kvm_set_atomic(&vs->state, 1 << KVM_VCPU_STATE_IN_GUEST_MODE, 0);
+	if (xchg(&vs->state, VS_NOT_IN_GUEST) == VS_SHOULD_FLUSH)
+		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
 	kunmap_atomic(kaddr);
 }
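
As an aside for readers of the series, the handshake that the two xchg() calls
implement can be modelled in ordinary user space. The sketch below is
illustration only, not part of the patch: the VS_* values mirror the hunks
above, but the KVM_VCPU_STATE_* bit numbers, the request_flush() helper, and
the use of C11 atomics in place of the kernel's xchg(),
kvm_x86_ops->tlb_flush() and kvm_make_request() are assumptions made for the
example.

/*
 * Minimal user-space model of the flush-on-enter handshake (illustration
 * only, not part of the patch).  The VS_* values mirror the patch; the
 * kernel's xchg()/tlb_flush()/kvm_make_request() are stood in for by C11
 * atomics and printf().  The KVM_VCPU_STATE_* bit numbers are assumed.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define KVM_VCPU_STATE_IN_GUEST_MODE	0	/* assumed bit numbers */
#define KVM_VCPU_STATE_SHOULD_FLUSH	1

#define VS_NOT_IN_GUEST	(0)
#define VS_IN_GUEST	(1 << KVM_VCPU_STATE_IN_GUEST_MODE)
#define VS_SHOULD_FLUSH	(1 << KVM_VCPU_STATE_SHOULD_FLUSH)

static _Atomic uint64_t vs_state = VS_NOT_IN_GUEST;

/* Host side, before entering the guest (models kvm_set_vcpu_state()). */
static void enter_guest(void)
{
	/* Publish "in guest mode" and pick up any pending flush request. */
	if (atomic_exchange(&vs_state, VS_IN_GUEST) == VS_SHOULD_FLUSH)
		printf("flush TLB before entry\n");	/* kvm_x86_ops->tlb_flush() */
}

/* Host side, after exiting the guest (models kvm_clear_vcpu_state()). */
static void exit_guest(void)
{
	/* Clear "in guest mode"; a flush that raced with the exit is deferred. */
	if (atomic_exchange(&vs_state, VS_NOT_IN_GUEST) == VS_SHOULD_FLUSH)
		printf("queue KVM_REQ_TLB_FLUSH\n");	/* kvm_make_request() */
}

/*
 * Remote side: some other vCPU asks for a flush.  This patch only consumes
 * the flag; setting it this way is an assumption about the rest of the
 * series, shown here only to make the model self-contained.
 */
static void request_flush(void)
{
	uint64_t expected = VS_NOT_IN_GUEST;

	atomic_compare_exchange_strong(&vs_state, &expected, VS_SHOULD_FLUSH);
}

int main(void)
{
	request_flush();	/* flush requested while the vCPU is outside the guest */
	enter_guest();		/* -> "flush TLB before entry" */
	exit_guest();		/* no pending flush, nothing queued */
	return 0;
}

Note that both sites test the old value with == VS_SHOULD_FLUSH rather than a
bit test, which presumably relies on the requester only setting the flag while
the vCPU is outside guest mode, so the two bits never end up set at the same
time.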