From patchwork Tue May 22 16:54:20 2018
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 10419189
Date: Tue, 22 May 2018 09:54:20 -0700
Message-Id: <20180522165420.22196-1-jmattson@google.com>
X-Mailer: git-send-email 2.17.0.441.gb46fe60e1d-goog
Subject: [PATCH] kvm: svm: Ensure an IBPB on all affected CPUs when freeing a vmcb
From: Jim Mattson
To: kvm@vger.kernel.org, ashok.raj@intel.com, peterz@infradead.org,
    dwmw@amazon.co.uk, karahmed@amazon.de,
    tglx@linutronix.de, konrad.wilk@oracle.com, jmattson@google.com
Cc: Jim Mattson
X-Mailing-List: kvm@vger.kernel.org

Previously, we only called indirect_branch_prediction_barrier on the
logical CPU that freed a vmcb. This function should be called on all
logical CPUs that last loaded the vmcb in question.

Fixes: 15d45071523d ("KVM/x86: Add IBPB support")
Reported-by: Neel Natu
Signed-off-by: Jim Mattson
Reviewed-by: Konrad Rzeszutek Wilk
---
 arch/x86/kvm/svm.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 220e5a89465a..ffa27f75e323 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2172,21 +2172,31 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	return ERR_PTR(err);
 }
 
+static void svm_clear_current_vmcb(struct vmcb *vmcb)
+{
+	int i;
+
+	for_each_online_cpu(i)
+		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	/*
+	 * The vmcb page can be recycled, causing a false negative in
+	 * svm_vcpu_load(). So, ensure that no logical CPU has this
+	 * vmcb page recorded as its current vmcb.
+	 */
+	svm_clear_current_vmcb(svm->vmcb);
+
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
-	/*
-	 * The vmcb page can be recycled, causing a false negative in
-	 * svm_vcpu_load(). So do a full IBPB now.
-	 */
-	indirect_branch_prediction_barrier();
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
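
A note on why clearing the per-CPU current_vmcb pointers is sufficient:
svm_vcpu_load() only issues the barrier when the vmcb being loaded differs
from the CPU's recorded current_vmcb, so NULLing any stale entries forces
the next load of a recycled vmcb page onto the IBPB path on that CPU. A
simplified sketch of that existing check (paraphrased from the IBPB support
code; not part of this patch, and trimmed to the relevant lines) looks like:

/* Sketch of the relevant fragment of svm_vcpu_load(); simplified. */
static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);
	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);

	/*
	 * If this CPU last ran a different vmcb, or its entry was cleared
	 * by svm_clear_current_vmcb(), flush indirect branch predictions
	 * before a recycled vmcb page can be attributed to the old guest.
	 */
	if (sd->current_vmcb != svm->vmcb) {
		sd->current_vmcb = svm->vmcb;
		indirect_branch_prediction_barrier();
	}
}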