From patchwork Thu Dec 31 00:26:54 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993795
Reply-To: Sean Christopherson
Date: Wed, 30 Dec 2020 16:26:54 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-2-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 1/9] x86/virt: Eat faults on VMXOFF in reboot flows
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

Silently ignore all faults on VMXOFF in the reboot flows, as such faults are
all but guaranteed to be due to the CPU not being in VMX root. Because
(a) VMXOFF may be executed in NMI context, e.g. after VMXOFF but before
CR4.VMXE is cleared, (b) there's no way to query the CPU's VMX state without
faulting, and (c) the whole point is to get out of VMX root, eating faults is
the simplest way to achieve the desired behavior.

Technically, VMXOFF can fault (or fail) for other reasons, but all other
fault and failure scenarios are mode related, i.e. the kernel would have to
magically end up in RM, V86, compat mode, at CPL>0, or running with the SMI
Transfer Monitor active. The kernel is beyond hosed if any of those
scenarios are encountered; trying to do something fancy in the error path to
handle them cleanly is pointless.

Fixes: 1e9931146c74 ("x86: asm/virtext.h: add cpu_vmxoff() inline function")
Reported-by: David P. Reed
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/virtext.h | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 9aad0e0876fb..fda3e7747c22 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -30,15 +30,22 @@ static inline int cpu_has_vmx(void)
 }
 
-/** Disable VMX on the current CPU
+/**
+ * cpu_vmxoff() - Disable VMX on the current CPU
  *
- * vmxoff causes a undefined-opcode exception if vmxon was not run
- * on the CPU previously. Only call this function if you know VMX
- * is enabled.
+ * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
+ *
+ * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
+ * atomically track post-VMXON state, e.g. this may be called in NMI context.
+ * Eat all faults as all other faults on VMXOFF faults are mode related, i.e.
+ * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
+ * magically in RM, VM86, compat mode, or at CPL>0.
  */
 static inline void cpu_vmxoff(void)
 {
-	asm volatile ("vmxoff");
+	asm_volatile_goto("1: vmxoff\n\t"
+			  _ASM_EXTABLE(1b, %l[fault]) :::: fault);
+fault:
 	cr4_clear_bits(X86_CR4_VMXE);
 }

From patchwork Thu Dec 31 00:26:55 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993811
Reply-To: Sean Christopherson
Date: Wed, 30 Dec 2020 16:26:55 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-3-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 2/9] x86/reboot: Force all cpus to exit VMX root if VMX is supported
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

Force all CPUs to do VMXOFF (via NMI shootdown) during an emergency reboot if
VMX is _supported_, as VMX being off on the current CPU does not prevent
other CPUs from being in VMX root (post-VMXON). This fixes a bug where a
crash/panic reboot could leave other CPUs in VMX root and prevent them from
being woken via INIT-SIPI-SIPI in the new kernel.

Fixes: d176720d34c7 ("x86: disable VMX on all CPUs on reboot")
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson
Signed-off-by: David P. Reed
[sean: reworked changelog and further tweaked comment]
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/reboot.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index db115943e8bd..efbaef8b4de9 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -538,31 +538,21 @@ static void emergency_vmx_disable_all(void)
 	local_irq_disable();
 
 	/*
-	 * We need to disable VMX on all CPUs before rebooting, otherwise
-	 * we risk hanging up the machine, because the CPU ignores INIT
-	 * signals when VMX is enabled.
+	 * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
+	 * the machine, because the CPU blocks INIT when it's in VMX root.
 	 *
-	 * We can't take any locks and we may be on an inconsistent
-	 * state, so we use NMIs as IPIs to tell the other CPUs to disable
-	 * VMX and halt.
+	 * We can't take any locks and we may be on an inconsistent state, so
+	 * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
 	 *
-	 * For safety, we will avoid running the nmi_shootdown_cpus()
-	 * stuff unnecessarily, but we don't have a way to check
-	 * if other CPUs have VMX enabled. So we will call it only if the
-	 * CPU we are running on has VMX enabled.
-	 *
-	 * We will miss cases where VMX is not enabled on all CPUs. This
-	 * shouldn't do much harm because KVM always enable VMX on all
-	 * CPUs anyway. But we can miss it on the small window where KVM
-	 * is still enabling VMX.
+	 * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
+	 * doesn't prevent a different CPU from being in VMX root operation.
 	 */
-	if (cpu_has_vmx() && cpu_vmx_enabled()) {
-		/* Disable VMX on this CPU. */
-		cpu_vmxoff();
+	if (cpu_has_vmx()) {
+		/* Safely force _this_ CPU out of VMX root operation. */
+		__cpu_emergency_vmxoff();
 
-		/* Halt and disable VMX on the other CPUs */
+		/* Halt and exit VMX root operation on the other CPUs. */
 		nmi_shootdown_cpus(vmxoff_nmi);
-	}
 }

From patchwork Thu Dec 31 00:26:56 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993801
Reply-To: Sean Christopherson
Date: Wed, 30 Dec 2020 16:26:56 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-4-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 3/9] x86/virt: Mark flags and memory as clobbered by VMXOFF
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

From: David P. Reed

Explicitly tell the compiler that VMXOFF modifies flags (like all VMX
instructions), and mark memory as clobbered since VMXOFF must not be
reordered and also may have memory side effects (though the kernel really
shouldn't be accessing the root VMCS anyways).

Practically speaking, adding the clobbers is most likely a nop; the primary
motivation is to properly document VMXOFF's behavior.

For the flags clobber, both Clang and GCC automatically mark flags as
clobbered; this is noted in commit 4b1e54786e48 ("KVM/x86: Use assembly
instruction mnemonics instead of .byte streams"), which intentionally removed
the previous clobber. But, neither Clang nor GCC documents this behavior,
and there's no downside to including the clobber.

For the memory clobber, the RFLAGS.IF and CR4.VMXE manipulations that
immediately follow VMXOFF have compiler barriers of their own, i.e. VMXOFF
can't get reordered after clearing CR4.VMXE, which is really what's of
interest.

Cc: Randy Dunlap
Signed-off-by: David P. Reed
[sean: rewrote changelog, dropped comment adjustments]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/virtext.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index fda3e7747c22..2cc585467667 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -44,7 +44,8 @@ static inline int cpu_has_vmx(void)
 static inline void cpu_vmxoff(void)
 {
 	asm_volatile_goto("1: vmxoff\n\t"
-			  _ASM_EXTABLE(1b, %l[fault]) :::: fault);
+			  _ASM_EXTABLE(1b, %l[fault])
+			  ::: "cc", "memory" : fault);
 fault:
 	cr4_clear_bits(X86_CR4_VMXE);
 }

From patchwork Thu Dec 31 00:26:57 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993797
Reply-To: Sean Christopherson
Date: Wed, 30 Dec 2020 16:26:57 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-5-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 4/9] KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

From: Uros Bizjak

Replace inline assembly in nested_vmx_check_vmentry_hw with a call to
__vmx_vcpu_run. The function is not performance critical, so (double) GPR
save/restore in __vmx_vcpu_run can be tolerated, as far as performance
effects are concerned.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Reviewed-and-tested-by: Sean Christopherson
Signed-off-by: Uros Bizjak
[sean: dropped versioning info from changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c  | 32 +++-----------------------------
 arch/x86/kvm/vmx/vmenter.S |  2 +-
 arch/x86/kvm/vmx/vmx.c     |  2 --
 arch/x86/kvm/vmx/vmx.h     |  1 +
 4 files changed, 5 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e2f26564a12d..5bbb4d667370 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -12,6 +12,7 @@
 #include "nested.h"
 #include "pmu.h"
 #include "trace.h"
+#include "vmx.h"
 #include "x86.h"
 
 static bool __read_mostly enable_shadow_vmcs = 1;
@@ -3057,35 +3058,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		vmx->loaded_vmcs->host_state.cr4 = cr4;
 	}
 
-	asm(
-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t"
-		"je 1f \n\t"
-		__ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t"
-		"mov %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t"
-		"1: \n\t"
-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
-
-		/* Check if vmlaunch or vmresume is needed */
-		"cmpb $0, %c[launched](%[loaded_vmcs])\n\t"
-
-		/*
-		 * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set
-		 * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail
-		 * Valid. vmx_vmenter() directly "returns" RFLAGS, and so the
-		 * results of VM-Enter is captured via CC_{SET,OUT} to vm_fail.
-		 */
-		"call vmx_vmenter\n\t"
-
-		CC_SET(be)
-	      : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail)
-	      : [HOST_RSP]"r"((unsigned long)HOST_RSP),
-		[loaded_vmcs]"r"(vmx->loaded_vmcs),
-		[launched]"i"(offsetof(struct loaded_vmcs, launched)),
-		[host_state_rsp]"i"(offsetof(struct loaded_vmcs, host_state.rsp)),
-		[wordsize]"i"(sizeof(ulong))
-	      : "memory"
-	);
+	vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
+				 vmx->loaded_vmcs->launched);
 
 	if (vmx->msr_autoload.host.nr)
 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index e85aa5faa22d..3a6461694fc2 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -44,7 +44,7 @@
  * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump
  * to vmx_vmexit.
  */
-SYM_FUNC_START(vmx_vmenter)
+SYM_FUNC_START_LOCAL(vmx_vmenter)
 	/* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */
 	je 2f
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 75c9c6a0a3a4..65b5f02b199f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6577,8 +6577,6 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	}
 }
 
-bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
-
 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					struct vcpu_vmx *vmx)
 {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 9d3a557949ac..03fc90569ae1 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -339,6 +339,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
+bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);

From patchwork Thu Dec 31 00:26:58 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993805
Reply-To: Sean Christopherson
Date: Wed, 30 Dec 2020 16:26:58 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-6-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 5/9] KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

Move the Intel PT tracking outside of the VMXON/VMXOFF helpers so that a
future patch can drop KVM's kvm_cpu_vmxoff() in favor of the kernel's
cpu_vmxoff() without an associated PT functional change, and without losing
symmetry between the VMXON and VMXOFF flows.

Barring undocumented behavior, this should have no meaningful effects as
Intel PT behavior does not interact with CR4.VMXE.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 65b5f02b199f..131f390ade24 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2265,7 +2265,6 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 	u64 msr;
 
 	cr4_set_bits(X86_CR4_VMXE);
-	intel_pt_handle_vmx(1);
 
 	asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
 			  _ASM_EXTABLE(1b, %l[fault])
@@ -2276,7 +2275,6 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 fault:
 	WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
 		  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
-	intel_pt_handle_vmx(0);
 	cr4_clear_bits(X86_CR4_VMXE);
 
 	return -EFAULT;
@@ -2299,9 +2297,13 @@ static int hardware_enable(void)
 	    !hv_get_vp_assist_page(cpu))
 		return -EFAULT;
 
+	intel_pt_handle_vmx(1);
+
 	r = kvm_cpu_vmxon(phys_addr);
-	if (r)
+	if (r) {
+		intel_pt_handle_vmx(0);
 		return r;
+	}
 
 	if (enable_ept)
 		ept_sync_global();
@@ -2327,7 +2329,6 @@ static void kvm_cpu_vmxoff(void)
 {
 	asm volatile (__ex("vmxoff"));
 
-	intel_pt_handle_vmx(0);
 	cr4_clear_bits(X86_CR4_VMXE);
 }
 
@@ -2335,6 +2336,8 @@ static void hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
 	kvm_cpu_vmxoff();
+
+	intel_pt_handle_vmx(0);
 }

From patchwork Thu Dec 31 00:26:59 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993809
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3B96C4332B for ; Thu, 31 Dec 2020 00:29:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C15CF20758 for ; Thu, 31 Dec 2020 00:29:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726901AbgLaA3R (ORCPT ); Wed, 30 Dec 2020 19:29:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726687AbgLaA2u (ORCPT ); Wed, 30 Dec 2020 19:28:50 -0500 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D991DC0617A6 for ; Wed, 30 Dec 2020 16:27:39 -0800 (PST) Received: by mail-yb1-xb4a.google.com with SMTP id q11so31336444ybm.21 for ; Wed, 30 Dec 2020 16:27:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=vWz/7XjNeryyp630nr1DJj55bwLnjvhE7YDu01lE+Dg=; b=I4LgBOH8+yxg22Qg6fzB2fWAyVTEhg/xk4ZHl5/5gqdjzc+1roduWn+H8pQLUs3uJF i6G1w/taqQhZOjZJ0GBxfX+boaGsH+xG2acge1dCsJbnWeQyn7tERYX4D9tqXueKCpMM Vet+vU/u/7FHAJ4p4Ngk5NPXw/ZJvRUVdtIMjynLZBgI4J2FeT63XSmZ8M0oKy5pg2pd XslgNxpGbRIGNNwCJMCPOimZReXRRU8fV0lcV9p9d59V/+veP70rRmpS3oUEEMoeCnVw kCn7AnAbJ6KZKk7ehf43OPQdElAhpxHOmC5Aw8HhXf6f/1LX2xFCWVlgv0qWbyTVqSFj +dtw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=vWz/7XjNeryyp630nr1DJj55bwLnjvhE7YDu01lE+Dg=; b=pXrlEdX6LSV1Tb4YAHLPx45rURUBjjnFsOmCSlJiSwuY+YuwxCEMWAVUzrPmqFJYDv qJZzFiGf/OPMuZOT9W90hh/VA+cDK21NLv6OKGs2TXhTyxtGzzj7bqI/lOpKnhBMveNd pMp2+uGl9MsiVMNBh5Aqmh40cE99TKZCgzX5d7foFc5Ct4B6VuQrNXFWNgNPG7VltBJO 
p1gEwc/mr4oPxUtuf3UWif5ukGk6UTS8HSs7HaXqGCyCCJXZCmKkz2uwAW9Xbe9KmPO2 +9llijuvcSj0t9d3fNrhRdyFhg1yhQLiVrz+Z4kQDtKcZfjGmfefSWuJFJREW6YgCYo8 KxFg== X-Gm-Message-State: AOAM5306EwfMe5FfunVyFTFQHJYvkSWPrNz3hRucD5rwZp3PGaHJ6tRG 5dIDcxbjysVXOJT+FoaOWIS27AZQC30= X-Google-Smtp-Source: ABdhPJzhfpK3QbxW1/mF/Hqsh5JXYQcPMDRZHSVzCzGdFpXQbPRupniByhw6/ygyZ+UDwkXp9rQ2BivbqsI= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:1ea0:b8ff:fe73:50f5]) (user=seanjc job=sendgmr) by 2002:a5b:147:: with SMTP id c7mr17467602ybp.500.1609374459123; Wed, 30 Dec 2020 16:27:39 -0800 (PST) Reply-To: Sean Christopherson Date: Wed, 30 Dec 2020 16:26:59 -0800 In-Reply-To: <20201231002702.2223707-1-seanjc@google.com> Message-Id: <20201231002702.2223707-7-seanjc@google.com> Mime-Version: 1.0 References: <20201231002702.2223707-1-seanjc@google.com> X-Mailer: git-send-email 2.29.2.729.g45daf8777d-goog Subject: [PATCH 6/9] KVM: VMX: Use the kernel's version of VMXOFF From: Sean Christopherson To: Paolo Bonzini , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , "H. Peter Anvin" , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P . Reed" , Randy Dunlap , Uros Bizjak Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Drop kvm_cpu_vmxoff() in favor of the kernel's cpu_vmxoff(). Modify the latter to return -EIO on fault so that KVM can invoke kvm_spurious_fault() when appropriate. In addition to the obvious code reuse, dropping kvm_cpu_vmxoff() also eliminates VMX's last usage of the __ex()/__kvm_handle_fault_on_reboot() macros, thus helping pave the way toward dropping them entirely. 
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/virtext.h |  7 ++++++-
 arch/x86/kvm/vmx/vmx.c         | 15 +++------------
 2 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 2cc585467667..8757078d4442 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -41,13 +41,18 @@ static inline int cpu_has_vmx(void)
  * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
  * magically in RM, VM86, compat mode, or at CPL>0.
  */
-static inline void cpu_vmxoff(void)
+static inline int cpu_vmxoff(void)
 {
 	asm_volatile_goto("1: vmxoff\n\t"
 			  _ASM_EXTABLE(1b, %l[fault])
 			  ::: "cc", "memory" : fault);
+
+	cr4_clear_bits(X86_CR4_VMXE);
+	return 0;
+
 fault:
 	cr4_clear_bits(X86_CR4_VMXE);
+	return -EIO;
 }
 
 static inline int cpu_vmx_enabled(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 131f390ade24..1a3b508ba8c1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2321,21 +2321,12 @@ static void vmclear_local_loaded_vmcss(void)
 		__loaded_vmcs_clear(v);
 }
 
-/* Just like cpu_vmxoff(), but with the __kvm_handle_fault_on_reboot()
- * tricks.
- */
-static void kvm_cpu_vmxoff(void)
-{
-	asm volatile (__ex("vmxoff"));
-
-	cr4_clear_bits(X86_CR4_VMXE);
-}
-
 static void hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
-	kvm_cpu_vmxoff();
+
+	if (cpu_vmxoff())
+		kvm_spurious_fault();
 
 	intel_pt_handle_vmx(0);
 }

From patchwork Thu Dec 31 00:27:00 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993799
Date: Wed, 30 Dec 2020 16:27:00 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-8-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 7/9] KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

Add svm_asm*() macros, a la the existing vmx_asm*() macros, to handle
faults on SVM instructions instead of using the generic __ex(), a.k.a.
__kvm_handle_fault_on_reboot().  Using asm goto generates slightly
better code as it eliminates the in-line JMP+CALL sequences that are
needed by __kvm_handle_fault_on_reboot() to avoid triggering BUG() from
fixup (which generates bad stack traces).  Using SVM-specific macros
also drops the last user of __ex() and the last asm linkage to
kvm_spurious_fault(), and adds a helper for VMSAVE, which may gain an
additional call site in the future (as part of optimizing the SVM
context switching).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c     |  3 +-
 arch/x86/kvm/svm/svm.c     | 16 +----------
 arch/x86/kvm/svm/svm_ops.h | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+), 16 deletions(-)
 create mode 100644 arch/x86/kvm/svm/svm_ops.h

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 9858d5ae9ddd..4511d7ccdb19 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -22,6 +22,7 @@
 #include "x86.h"
 #include "svm.h"
+#include "svm_ops.h"
 #include "cpuid.h"
 #include "trace.h"
@@ -2001,7 +2002,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 * of which one step is to perform a VMLOAD.  Since hardware does not
 	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
 	 */
-	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
+	vmsave(__sme_page_pa(sd->save_area));
 
 	/*
 	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cce0143a6f80..4308ab5ca27e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -41,6 +41,7 @@
 #include "trace.h"
 
 #include "svm.h"
+#include "svm_ops.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
@@ -246,21 +247,6 @@ u32 svm_msrpm_offset(u32 msr)
 
 #define MAX_INST_SIZE	15
 
-static inline void clgi(void)
-{
-	asm volatile (__ex("clgi"));
-}
-
-static inline void stgi(void)
-{
-	asm volatile (__ex("stgi"));
-}
-
-static inline void invlpga(unsigned long addr, u32 asid)
-{
-	asm volatile (__ex("invlpga %1, %0") : : "c"(asid), "a"(addr));
-}
-
 static int get_max_npt_level(void)
 {
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
new file mode 100644
index 000000000000..0c8377aee52c
--- /dev/null
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_X86_SVM_OPS_H
+#define __KVM_X86_SVM_OPS_H
+
+#include
+
+#include
+
+#define svm_asm(insn, clobber...)				\
+do {								\
+	asm_volatile_goto("1: " __stringify(insn) "\n\t"	\
+			  _ASM_EXTABLE(1b, %l[fault])		\
+			  ::: clobber : fault);			\
+	return;							\
+fault:								\
+	kvm_spurious_fault();					\
+} while (0)
+
+#define svm_asm1(insn, op1, clobber...)				\
+do {								\
+	asm_volatile_goto("1: " __stringify(insn) " %0\n\t"	\
+			  _ASM_EXTABLE(1b, %l[fault])		\
+			  :: op1 : clobber : fault);		\
+	return;							\
+fault:								\
+	kvm_spurious_fault();					\
+} while (0)
+
+#define svm_asm2(insn, op1, op2, clobber...)			\
+do {								\
+	asm_volatile_goto("1: " __stringify(insn) " %1, %0\n\t"	\
+			  _ASM_EXTABLE(1b, %l[fault])		\
+			  :: op1, op2 : clobber : fault);	\
+	return;							\
+fault:								\
+	kvm_spurious_fault();					\
+} while (0)
+
+static inline void clgi(void)
+{
+	svm_asm(clgi);
+}
+
+static inline void stgi(void)
+{
+	svm_asm(stgi);
+}
+
+static inline void invlpga(unsigned long addr, u32 asid)
+{
+	svm_asm2(invlpga, "c"(asid), "a"(addr));
+}
+
+static inline void vmsave(hpa_t pa)
+{
+	svm_asm1(vmsave, "a" (pa), "memory");
+}
+
+#endif /* __KVM_X86_SVM_OPS_H */

From patchwork Thu Dec 31 00:27:01 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993803
Date: Wed, 30 Dec 2020 16:27:01 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-9-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 8/9] KVM: x86: Kill off __ex() and __kvm_handle_fault_on_reboot()
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

Remove the __kvm_handle_fault_on_reboot() and __ex() macros now that
all VMX and SVM instructions use asm goto to handle the fault (or in
the case of VMREAD, completely custom logic).

Drop kvm_spurious_fault()'s asmlinkage annotation as
__kvm_handle_fault_on_reboot() was the only flow that invoked it from
assembly code.

Cc: Uros Bizjak
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 25 +------------------------
 arch/x86/kvm/svm/sev.c          |  2 --
 arch/x86/kvm/svm/svm.c          |  2 --
 arch/x86/kvm/vmx/vmx_ops.h      |  2 --
 arch/x86/kvm/x86.c              |  9 ++++++++-
 5 files changed, 9 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3ab7b46087b7..51ba20ffaedb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1634,30 +1634,7 @@ enum {
 #define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 #define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
 
-asmlinkage void kvm_spurious_fault(void);
-
-/*
- * Hardware virtualization extension instructions may fault if a
- * reboot turns off virtualization while processes are running.
- * Usually after catching the fault we just panic; during reboot
- * instead the instruction is ignored.
- */
-#define __kvm_handle_fault_on_reboot(insn)			\
-	"666: \n\t"						\
-	insn "\n\t"						\
-	"jmp 668f \n\t"						\
-	"667: \n\t"						\
-	"1: \n\t"						\
-	".pushsection .discard.instr_begin \n\t"		\
-	".long 1b - . \n\t"					\
-	".popsection \n\t"					\
-	"call kvm_spurious_fault \n\t"				\
-	"1: \n\t"						\
-	".pushsection .discard.instr_end \n\t"			\
-	".long 1b - . \n\t"					\
-	".popsection \n\t"					\
-	"668: \n\t"						\
-	_ASM_EXTABLE(666b, 667b)
+void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4511d7ccdb19..e7080e5056a4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -26,8 +26,6 @@
 #include "cpuid.h"
 #include "trace.h"
 
-#define __ex(x) __kvm_handle_fault_on_reboot(x)
-
 static u8 sev_enc_bit;
 static int sev_flush_asids(void);
 static DECLARE_RWSEM(sev_deactivate_lock);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4308ab5ca27e..e4907e490c24 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -43,8 +43,6 @@
 #include "svm.h"
 #include "svm_ops.h"
 
-#define __ex(x) __kvm_handle_fault_on_reboot(x)
-
 MODULE_AUTHOR("Qumranet");
 MODULE_LICENSE("GPL");
diff --git a/arch/x86/kvm/vmx/vmx_ops.h b/arch/x86/kvm/vmx/vmx_ops.h
index 692b0c31c9c8..7b6fbe103c61 100644
--- a/arch/x86/kvm/vmx/vmx_ops.h
+++ b/arch/x86/kvm/vmx/vmx_ops.h
@@ -10,8 +10,6 @@
 #include "evmcs.h"
 #include "vmcs.h"
 
-#define __ex(x) __kvm_handle_fault_on_reboot(x)
-
 asmlinkage void vmread_error(unsigned long field, bool fault);
 __attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
							 bool fault);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3f7c1fc7a3ce..836912b42030 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -412,7 +412,14 @@ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_set_apic_base);
 
-asmlinkage __visible noinstr void kvm_spurious_fault(void)
+/*
+ * Handle a fault on a hardware virtualization (VMX or SVM) instruction.
+ *
+ * Hardware virtualization extension instructions may fault if a reboot turns
+ * off virtualization while processes are running.  Usually after catching the
+ * fault we just panic; during reboot instead the instruction is ignored.
+ */
+noinstr void kvm_spurious_fault(void)
 {
 	/* Fault while not rebooting.  We want the trace. */
 	BUG_ON(!kvm_rebooting);

From patchwork Thu Dec 31 00:27:02 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11993807
Date: Wed, 30 Dec 2020 16:27:02 -0800
In-Reply-To: <20201231002702.2223707-1-seanjc@google.com>
Message-Id: <20201231002702.2223707-10-seanjc@google.com>
References: <20201231002702.2223707-1-seanjc@google.com>
Subject: [PATCH 9/9] KVM: x86: Move declaration of kvm_spurious_fault() to x86.h
From: Sean Christopherson
To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, "David P. Reed", Randy Dunlap, Uros Bizjak
X-Mailing-List: kvm@vger.kernel.org

From: Uros Bizjak

Move the declaration of kvm_spurious_fault() to KVM's "private" x86.h;
it should never be called by anything other than low level KVM code.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Signed-off-by: Uros Bizjak
[sean: rebased to a series without __ex()/__kvm_handle_fault_on_reboot()]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 2 --
 arch/x86/kvm/svm/svm_ops.h      | 2 +-
 arch/x86/kvm/vmx/vmx_ops.h      | 2 +-
 arch/x86/kvm/x86.h              | 2 ++
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 51ba20ffaedb..feba0ec5474b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1634,8 +1634,6 @@ enum {
 #define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 #define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
 
-void kvm_spurious_fault(void);
-
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
			unsigned flags);
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 0c8377aee52c..aa028ef5b1e9 100644
--- a/arch/x86/kvm/svm/svm_ops.h
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -4,7 +4,7 @@
 
 #include
 
-#include
+#include "x86.h"
 
 #define svm_asm(insn, clobber...)				\
 do {								\
diff --git a/arch/x86/kvm/vmx/vmx_ops.h b/arch/x86/kvm/vmx/vmx_ops.h
index 7b6fbe103c61..7e3cb53c413f 100644
--- a/arch/x86/kvm/vmx/vmx_ops.h
+++ b/arch/x86/kvm/vmx/vmx_ops.h
@@ -4,11 +4,11 @@
 
 #include
 
-#include
 #include
 
 #include "evmcs.h"
 #include "vmcs.h"
+#include "x86.h"
 
 asmlinkage void vmread_error(unsigned long field, bool fault);
 __attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index c5ee0f5ce0f1..0d830945ae38 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -8,6 +8,8 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 
+void kvm_spurious_fault(void);
+
 #define KVM_DEFAULT_PLE_GAP		128
 #define KVM_VMX_DEFAULT_PLE_WINDOW	4096
 #define KVM_DEFAULT_PLE_WINDOW_GROW	2