From patchwork Mon Jul 8 15:12:48 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 11035273
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Alejandro Jimenez,
 Thomas Gleixner, Liam Merwick, Mark Kanda, Paolo Bonzini,
 bp@alien8.de, rkrcmar@redhat.com, kvm@vger.kernel.org
Subject: [PATCH 4.4 38/73] x86/speculation: Allow guests to use SSBD even if host does not
Date: Mon, 8 Jul 2019 17:12:48 +0200
Message-Id: <20190708150523.343123789@linuxfoundation.org>
In-Reply-To: <20190708150513.136580595@linuxfoundation.org>
References: <20190708150513.136580595@linuxfoundation.org>
X-Mailer: git-send-email 2.22.0
User-Agent: quilt/0.66
X-Mailing-List: kvm@vger.kernel.org

From: Alejandro Jimenez

commit c1f7fec1eb6a2c86d01bc22afce772c743451d88 upstream.

The bits set in x86_spec_ctrl_mask are used to calculate the guest's
value of SPEC_CTRL that is written to the MSR before VMENTRY, and
control which mitigations the guest can enable.  In the case of SSBD,
unless the host has enabled SSBD always-on mode (by passing
"spec_store_bypass_disable=on" in the kernel parameters), the SSBD bit
is not set in the mask and the guest cannot properly enable the SSBD
always-on mitigation mode.
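
[Editorial note: the following is a simplified, standalone sketch of how
the mask gates which SPEC_CTRL bits a guest may set when the MSR is
reloaded on guest entry. It is not the kernel code itself; the helper
name guest_msr_value() and the policy variables reused here are
illustrative only.]

/* Illustrative sketch only (not kernel code). */
#include <stdint.h>

#define SPEC_CTRL_IBRS	(1ULL << 0)	/* bit 0 of MSR_IA32_SPEC_CTRL */
#define SPEC_CTRL_SSBD	(1ULL << 2)	/* bit 2 of MSR_IA32_SPEC_CTRL */

/* Host policy: base MSR value, and the bits a guest is allowed to change. */
static uint64_t x86_spec_ctrl_base;
static uint64_t x86_spec_ctrl_mask;

/* Value written to MSR_IA32_SPEC_CTRL when entering the guest. */
static uint64_t guest_msr_value(uint64_t guest_spec_ctrl)
{
	uint64_t val;

	val  = x86_spec_ctrl_base & ~x86_spec_ctrl_mask;	/* host-mandated bits */
	val |= guest_spec_ctrl & x86_spec_ctrl_mask;		/* guest-chosen bits  */
	return val;
}

If SSBD is missing from x86_spec_ctrl_mask, a guest request for SSBD is
masked out of the value above, which is the failure described in the
commit message.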
This has been confirmed by running the SSBD PoC on a guest using the
SSBD always-on mitigation mode (booted with the kernel parameter
"spec_store_bypass_disable=on"), and verifying that the guest is
vulnerable unless the host is also using SSBD always-on mode.  In
addition, the guest OS incorrectly reports the SSB vulnerability as
mitigated.

Always set the SSBD bit in x86_spec_ctrl_mask when the host CPU
supports it, allowing the guest to use SSBD whether or not the host has
chosen to enable the mitigation in any of its modes.

Fixes: be6fcb5478e9 ("x86/bugs: Rework spec_ctrl base and mask logic")
Signed-off-by: Alejandro Jimenez
Signed-off-by: Thomas Gleixner
Reviewed-by: Liam Merwick
Reviewed-by: Mark Kanda
Reviewed-by: Paolo Bonzini
Cc: bp@alien8.de
Cc: rkrcmar@redhat.com
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1560187210-11054-1-git-send-email-alejandro.j.jimenez@oracle.com
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/cpu/bugs.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -807,6 +807,16 @@ static enum ssb_mitigation __init __ssb_
 	}
 
 	/*
+	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
+	 * bit in the mask to allow guests to use the mitigation even in the
+	 * case where the host does not enable it.
+	 */
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
+	}
+
+	/*
 	 * We have three CPU feature flags that are in play here:
 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
 	 *  - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
@@ -823,7 +833,6 @@ static enum ssb_mitigation __init __ssb_
 			x86_amd_ssb_disable();
 		} else {
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
-			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 		}
 	}
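
[Editorial note: a small hypothetical worked example of the effect of the
change, with values chosen purely for illustration; this is not kernel
code.]

#include <stdint.h>
#include <stdio.h>

#define SPEC_CTRL_SSBD	(1ULL << 2)	/* SSBD is bit 2 of MSR_IA32_SPEC_CTRL */

int main(void)
{
	uint64_t host_base  = 0;		/* host not running with SSBD */
	uint64_t guest_want = SPEC_CTRL_SSBD;	/* guest booted with spec_store_bypass_disable=on */

	/* Before the fix: SSBD is only in the mask if the host enabled it. */
	uint64_t mask_before = 0;
	uint64_t val_before  = (host_base & ~mask_before) | (guest_want & mask_before);

	/* After the fix: SSBD is in the mask whenever the CPU supports it. */
	uint64_t mask_after = SPEC_CTRL_SSBD;
	uint64_t val_after  = (host_base & ~mask_after) | (guest_want & mask_after);

	printf("guest SPEC_CTRL before the fix: %#llx\n",
	       (unsigned long long)val_before);	/* 0   - SSBD silently dropped */
	printf("guest SPEC_CTRL after the fix:  %#llx\n",
	       (unsigned long long)val_after);	/* 0x4 - SSBD honoured */
	return 0;
}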