From patchwork Fri Nov 30 18:49:13 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1826581
Subject: Re: [PATCH v4 08/14] KVM: ARM: World-switch implementation
From: Christoffer Dall
To: Will Deacon
Cc: Rusty Russell, "kvm@vger.kernel.org", Marc Zyngier, Marcelo Tosatti,
 Antonios Motakis, "kvmarm@lists.cs.columbia.edu",
 "linux-arm-kernel@lists.infradead.org"
Date: Fri, 30 Nov 2012 13:49:13 -0500
In-Reply-To: <20121130171420.GC619@mudshark.cambridge.arm.com>
References: <20121110154203.2836.46686.stgit@chazy-air>
 <20121110154306.2836.93473.stgit@chazy-air>
 <20121119145758.GA3205@mudshark.cambridge.arm.com>
 <20121130151500.GC26289@mudshark.cambridge.arm.com>
 <20121130171420.GC619@mudshark.cambridge.arm.com>

On Fri, Nov 30, 2012 at 12:14 PM, Will Deacon wrote:
> On Fri, Nov 30, 2012 at 04:47:40PM +0000, Christoffer Dall wrote:
>> On Fri, Nov 30, 2012 at 10:15 AM, Will Deacon wrote:
>> > At this point, VM1 is running and VM0:VCPU1 is running. VM0:VCPU0 is not
>> > running because physical CPU0 is handling an interrupt. The problem is that
>> > when VCPU0 *is* resumed, it will update the VMID of VM0 and could be
>> > scheduled in parallel with VCPU1 but with a different VMID.
>> >
>> > How do you avoid this in the current code?
>> >
>> I don't. Nice catch. Please apply your interesting brain to the following fix :)
>
> I'm far too sober to look at your patch right now, but I'll think about it
> over the weekend [I can't break it at a quick glance] :)
>
> In the meantime, can you think about whether the TLB operations need to run
> on every CPU please?
>
They don't. We can invalidate the TLB and the icache using the inner
shareability domain. Here's a patch:

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ad1390f..df1b753 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -146,6 +146,7 @@ struct kvm_one_reg;
 int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 u64 kvm_call_hyp(void *hypfn, ...);
+void force_vm_exit(const cpumask_t *mask);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 struct kvm;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c4f631e..674592e 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -405,9 +405,14 @@ int kvm_arch_vcpu_in_guest_mode(struct kvm_vcpu *v)
 	return v->mode == IN_GUEST_MODE;
 }
 
-static void reset_vm_context(void *info)
+/* Just ensure a guest exit from a particular CPU */
+static void exit_vm_noop(void *info)
 {
-	kvm_call_hyp(__kvm_flush_vm_context);
+}
+
+void force_vm_exit(const cpumask_t *mask)
+{
+	smp_call_function_many(mask, exit_vm_noop, NULL, true);
 }
 
 /**
@@ -445,17 +450,33 @@ static void update_vttbr(struct kvm *kvm)
 
 	spin_lock(&kvm_vmid_lock);
 
+	/*
+	 * We need to re-check the vmid_gen here to ensure that if another vcpu
+	 * already allocated a valid vmid for this vm, then this vcpu should
+	 * use the same vmid.
+	 */
+	if (!need_new_vmid_gen(kvm)) {
+		spin_unlock(&kvm_vmid_lock);
+		return;
+	}
+
 	/* First user of a new VMID generation? */
 	if (unlikely(kvm_next_vmid == 0)) {
 		atomic64_inc(&kvm_vmid_gen);
 		kvm_next_vmid = 1;
 
 		/*
-		 * On SMP we know no other CPUs can use this CPU's or
-		 * each other's VMID since the kvm_vmid_lock blocks
-		 * them from reentry to the guest.
+		 * On SMP we know no other CPUs can use this CPU's or each
+		 * other's VMID after force_vm_exit returns since the
+		 * kvm_vmid_lock blocks them from reentry to the guest.
+		 */
+		force_vm_exit(cpu_all_mask);
+
+		/*
+		 * Now broadcast TLB + ICACHE invalidation over the inner
+		 * shareable domain to make sure all data structures are
+		 * clean.
 		 */
-		on_each_cpu(reset_vm_context, NULL, 1);
+		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
 	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 8a1f2df..91bb9c5 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -63,14 +63,18 @@ ENTRY(__kvm_tlb_flush_vmid)
 ENDPROC(__kvm_tlb_flush_vmid)
 
 /********************************************************************
- * Flush TLBs and instruction caches of current CPU for all VMIDs
+ * Flush TLBs and instruction caches of all CPUs inside the inner-shareable
+ * domain, for all VMIDs
  *
  * void __kvm_flush_vm_context(void);
  */
 ENTRY(__kvm_flush_vm_context)
 	mov	r0, #0			@ rn parameter for c15 flushes is SBZ
-	mcr	p15, 4, r0, c8, c7, 4	@ Invalidate Non-secure Non-Hyp TLB
-	mcr	p15, 0, r0, c7, c5, 0	@ Invalidate instruction caches
+
+	/* Invalidate NS Non-Hyp TLB Inner Shareable (TLBIALLNSNHIS) */
+	mcr	p15, 4, r0, c8, c3, 4
+	/* Invalidate instruction caches Inner Shareable (ICIALLUIS) */
+	mcr	p15, 0, r0, c7, c1, 0
 	dsb
 	isb	@ Not necessary if followed by eret
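
(Aside, for anyone reading along: the new early-return path in update_vttbr()
relies on need_new_vmid_gen(), which the diff above doesn't show. A minimal
sketch of that helper, assuming it only compares the VM's cached generation
against the global kvm_vmid_gen counter:

/*
 * Sketch, not part of the patch above: assumes need_new_vmid_gen() just
 * compares the per-VM generation snapshot with the global kvm_vmid_gen
 * counter. update_vttbr() checks it once outside kvm_vmid_lock and then
 * re-checks it under the lock, so a vcpu that lost the race simply reuses
 * the VMID a sibling vcpu of the same VM has already set up.
 */
static bool need_new_vmid_gen(struct kvm *kvm)
{
	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
}

With that double check in place, only the first vcpu of a VM to take the lock
after a generation roll-over allocates a fresh VMID; the others see the
updated vmid_gen and return early, so all vcpus of one VM run with the same
VMID.)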