From patchwork Tue Jun 1 08:47:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290753 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_NONE,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 300F8C4708F for ; Tue, 1 Jun 2021 08:48:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0762561364 for ; Tue, 1 Jun 2021 08:48:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233771AbhFAItt (ORCPT ); Tue, 1 Jun 2021 04:49:49 -0400 Received: from mga07.intel.com ([134.134.136.100]:45142 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233773AbhFAItm (ORCPT ); Tue, 1 Jun 2021 04:49:42 -0400 IronPort-SDR: 31+FrmYbTJBIL4YUs0STaZNqG1zZLmGZObYRn/5VMehe58Y+TxbzmDO1th8b8EncfG5LWXjqtO OU7DqLmgwwNg== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381287" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381287" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:00 -0700 IronPort-SDR: eMj/DQuW6Kh5rgYb2frTv4AumtMZRkwm3a+lNo3hyRsEX0HKk9NUScYkg7NZxm9MYwbYDANEQ2 rcA/M+K2jQCw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967736" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:47:58 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 01/15] x86/keylocker: Move KEYSRC_{SW,HW}RAND to keylocker.h Date: Tue, 1 Jun 2021 16:47:40 +0800 Message-Id: <1622537274-146420-2-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org KVM needs the KEYSRC_SWRAND and KEYSRC_HWRAND macro definitions. 
Move them to Signed-off-by: Robert Hoo Reviewed-by: Tony Luck Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- arch/x86/include/asm/keylocker.h | 3 +++ arch/x86/kernel/keylocker.c | 2 -- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h index 74b8063..9836e68 100644 --- a/arch/x86/include/asm/keylocker.h +++ b/arch/x86/include/asm/keylocker.h @@ -9,6 +9,9 @@ #include #include +#define KEYSRC_SWRAND 0 +#define KEYSRC_HWRAND BIT(1) + #define KEYLOCKER_CPUID 0x019 #define KEYLOCKER_CPUID_EAX_SUPERVISOR BIT(0) #define KEYLOCKER_CPUID_EBX_AESKLE BIT(0) diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c index 5a784492..17bb2e8 100644 --- a/arch/x86/kernel/keylocker.c +++ b/arch/x86/kernel/keylocker.c @@ -66,8 +66,6 @@ void flush_keylocker_data(void) keydata.valid = false; } -#define KEYSRC_SWRAND 0 -#define KEYSRC_HWRAND BIT(1) #define KEYSRC_HWRAND_RETRY 10 /** From patchwork Tue Jun 1 08:47:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290771 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CB85C47093 for ; Tue, 1 Jun 2021 08:49:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6F9E161364 for ; Tue, 1 Jun 2021 08:49:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233790AbhFAIuv (ORCPT ); Tue, 1 Jun 2021 04:50:51 -0400 Received: from mga07.intel.com ([134.134.136.100]:45129 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233722AbhFAItq (ORCPT ); Tue, 1 Jun 2021 04:49:46 -0400 IronPort-SDR: CuAJ249u78JZPGcT5ku5h+jdZKxVAr7tPQfF5LspHq4hXGOj8cJA4+NPsKcIK3f6obLI0Zp9TQ g2j2zbQ4LQOg== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381299" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381299" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:04 -0700 IronPort-SDR: SLmp6cTjwsUa+6fPYO7Rpn2tj6hYCW5AzKdpBqR2GUDRzFhnZ40jozGSgA+PF4JbNp7tGrvNYc lg8IFzhMXuFA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967751" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:00 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 02/15] x86/cpufeatures: Define Key Locker sub feature flags Date: Tue, 1 Jun 2021 16:47:41 +0800 Message-Id: <1622537274-146420-3-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: 
<1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Though KeyLocker is generally enumerated by CPUID.(07H,0):ECX.KL[bit23], CPUID.19H:{EBX,ECX} enumerate more details of KeyLocker supporting status. Define them in scattered cpuid bits. CPUID.19H:EBX bit0 enumerates if OS (CR4.KeyLocker) and BIOS have enabled KeyLocker. bit2 enumerates if wide Key Locker instructions are supported. bit4 enumerates if IWKey backup is supported. CPUID.19H:ECX bit0 enumerates if the NoBackup parameter to LOADIWKEY is supported. bit1 enumerates if IWKey randomization is supported. Most of above features don't necessarily appear in /proc/cpuinfo, except "iwkey_rand", which we think might be interesting to indicate that the system supports randomized IWKey. Signed-off-by: Robert Hoo Reviewed-by: Tony Luck Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- arch/x86/include/asm/cpufeatures.h | 5 +++++ arch/x86/kernel/cpu/scattered.c | 5 +++++ 2 files changed, 10 insertions(+) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 578cf3f..8dd7271 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -294,6 +294,11 @@ #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ #define X86_FEATURE_SGX1 (11*32+ 8) /* "" Basic SGX */ #define X86_FEATURE_SGX2 (11*32+ 9) /* "" SGX Enclave Dynamic Memory Management (EDMM) */ +#define X86_FEATURE_KL_INS_ENABLED (11*32 + 10) /* "" Key Locker instructions */ +#define X86_FEATURE_KL_WIDE (11*32 + 11) /* "" Wide Key Locker instructions */ +#define X86_FEATURE_IWKEY_BACKUP (11*32 + 12) /* "" IWKey backup */ +#define X86_FEATURE_IWKEY_NOBACKUP (11*32 + 13) /* "" NoBackup parameter to LOADIWKEY */ +#define X86_FEATURE_IWKEY_RAND (11*32 + 14) /* IWKey Randomization */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c index 21d1f06..de8677c 100644 --- a/arch/x86/kernel/cpu/scattered.c +++ b/arch/x86/kernel/cpu/scattered.c @@ -38,6 +38,11 @@ struct cpuid_bit { { X86_FEATURE_PER_THREAD_MBA, CPUID_ECX, 0, 0x00000010, 3 }, { X86_FEATURE_SGX1, CPUID_EAX, 0, 0x00000012, 0 }, { X86_FEATURE_SGX2, CPUID_EAX, 1, 0x00000012, 0 }, + { X86_FEATURE_KL_INS_ENABLED, CPUID_EBX, 0, 0x00000019, 0 }, + { X86_FEATURE_KL_WIDE, CPUID_EBX, 2, 0x00000019, 0 }, + { X86_FEATURE_IWKEY_BACKUP, CPUID_EBX, 4, 0x00000019, 0 }, + { X86_FEATURE_IWKEY_NOBACKUP, CPUID_ECX, 0, 0x00000019, 0 }, + { X86_FEATURE_IWKEY_RAND, CPUID_ECX, 1, 0x00000019, 0 }, { X86_FEATURE_HW_PSTATE, CPUID_EDX, 7, 0x80000007, 0 }, { X86_FEATURE_CPB, CPUID_EDX, 9, 0x80000007, 0 }, { X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 }, From patchwork Tue Jun 1 08:47:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290755 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_NONE,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
8A2BCC4708F for ; Tue, 1 Jun 2021 08:48:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6DAC761364 for ; Tue, 1 Jun 2021 08:48:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233645AbhFAIt7 (ORCPT ); Tue, 1 Jun 2021 04:49:59 -0400 Received: from mga07.intel.com ([134.134.136.100]:45164 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233803AbhFAItr (ORCPT ); Tue, 1 Jun 2021 04:49:47 -0400 IronPort-SDR: U0QLAR54HIrT+qM5d/F/dVbE+B/bkChOXf4iqi+YG/mmCCbQ4qqmE1iPRXaUf8iQmg85fQ9VUj Jxhxr+AO6mcQ== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381315" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381315" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:06 -0700 IronPort-SDR: wtZFw+9ZFYGi9KXP1aqOz0IQ7/xnh+uD9uVdP9BQPptp/tewJDplL53oPV7sPEnXEoriltzMxi n2tbmrpVf9rQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967768" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:03 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 03/15] x86/feat_ctl: Add new VMX feature, Tertiary VM-Execution control and LOADIWKEY Exiting Date: Tue, 1 Jun 2021 16:47:42 +0800 Message-Id: <1622537274-146420-4-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org There is a new VMX capability MSR IA32_VMX_PROCBASED_CTLS3. All 64 bits of this MSR define capability bits for the new tertiary VM-Exec control so two new 32-bit vmx_feature leaves are needed to record all the capabilities. The 2 new VMX features: Tertiary VM-Execution control is enumerated by bit 17 of existing Primary VM-Execution control. LOADIWKEY Exiting is enumerated by bit 0 of this new tertiary VM-Exec control, which designates if guest running 'loadiwkey' instruction will cause VM-Exit. 
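For illustration, enumeration boils down to two checks; a rough sketch (not part of the diff below, MSR/bit names as used in this patch):

	static bool tertiary_loadiwkey_exiting_supported(void)
	{
		u32 low, high;
		u64 ctls3;

		/*
		 * Tertiary controls exist iff bit 17 of the primary
		 * processor-based controls is settable, i.e. set in the
		 * allowed-1 (high) half of IA32_VMX_PROCBASED_CTLS.
		 */
		rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, low, high);
		if (!(high & (1U << 17)))
			return false;

		/* All 64 bits of IA32_VMX_PROCBASED_CTLS3 are allowed-1. */
		rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, ctls3);
		return ctls3 & BIT_ULL(0);	/* LOADIWKEY exiting */
	}
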
Signed-off-by: Robert Hoo Reviewed-by: Tony Luck Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/include/asm/vmxfeatures.h | 6 +++++- arch/x86/kernel/cpu/feat_ctl.c | 9 +++++++++ 3 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index f8e7878..dd103c2 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -915,6 +915,7 @@ #define MSR_IA32_VMX_TRUE_EXIT_CTLS 0x0000048f #define MSR_IA32_VMX_TRUE_ENTRY_CTLS 0x00000490 #define MSR_IA32_VMX_VMFUNC 0x00000491 +#define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492 /* VMX_BASIC bits and bitmasks */ #define VMX_BASIC_VMCS_SIZE_SHIFT 32 diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h index d9a7468..c0b2f63 100644 --- a/arch/x86/include/asm/vmxfeatures.h +++ b/arch/x86/include/asm/vmxfeatures.h @@ -5,7 +5,7 @@ /* * Defines VMX CPU feature bits */ -#define NVMXINTS 3 /* N 32-bit words worth of info */ +#define NVMXINTS 5 /* N 32-bit words worth of info */ /* * Note: If the comment begins with a quoted string, that string is used @@ -43,6 +43,7 @@ #define VMX_FEATURE_RDTSC_EXITING ( 1*32+ 12) /* "" VM-Exit on RDTSC */ #define VMX_FEATURE_CR3_LOAD_EXITING ( 1*32+ 15) /* "" VM-Exit on writes to CR3 */ #define VMX_FEATURE_CR3_STORE_EXITING ( 1*32+ 16) /* "" VM-Exit on reads from CR3 */ +#define VMX_FEATURE_TER_CONTROLS (1*32 + 17) /* "" Enable Tertiary VM-Execution Controls */ #define VMX_FEATURE_CR8_LOAD_EXITING ( 1*32+ 19) /* "" VM-Exit on writes to CR8 */ #define VMX_FEATURE_CR8_STORE_EXITING ( 1*32+ 20) /* "" VM-Exit on reads from CR8 */ #define VMX_FEATURE_VIRTUAL_TPR ( 1*32+ 21) /* "vtpr" TPR virtualization, a.k.a. TPR shadow */ @@ -85,4 +86,7 @@ #define VMX_FEATURE_ENCLV_EXITING ( 2*32+ 28) /* "" VM-Exit on ENCLV (leaf dependent) */ #define VMX_FEATURE_BUS_LOCK_DETECTION ( 2*32+ 30) /* "" VM-Exit when bus lock caused */ +/* Tertiary Processor-Based VM-Execution Controls, word 3 */ +#define VMX_FEATURE_LOADIWKEY_EXITING (3*32 + 0) /* "" VM-Exit on LOADIWKey */ + #endif /* _ASM_X86_VMXFEATURES_H */ diff --git a/arch/x86/kernel/cpu/feat_ctl.c b/arch/x86/kernel/cpu/feat_ctl.c index da696eb..2e0272d 100644 --- a/arch/x86/kernel/cpu/feat_ctl.c +++ b/arch/x86/kernel/cpu/feat_ctl.c @@ -15,6 +15,8 @@ enum vmx_feature_leafs { MISC_FEATURES = 0, PRIMARY_CTLS, SECONDARY_CTLS, + TERTIARY_CTLS_LOW, + TERTIARY_CTLS_HIGH, NR_VMX_FEATURE_WORDS, }; @@ -42,6 +44,13 @@ static void init_vmx_capabilities(struct cpuinfo_x86 *c) rdmsr_safe(MSR_IA32_VMX_PROCBASED_CTLS2, &ign, &supported); c->vmx_capability[SECONDARY_CTLS] = supported; + /* + * For tertiary execution controls MSR, it's actually a 64bit allowed-1. 
+ */ + rdmsr_safe(MSR_IA32_VMX_PROCBASED_CTLS3, &ign, &supported); + c->vmx_capability[TERTIARY_CTLS_LOW] = ign; + c->vmx_capability[TERTIARY_CTLS_HIGH] = supported; + rdmsr(MSR_IA32_VMX_PINBASED_CTLS, ign, supported); rdmsr_safe(MSR_IA32_VMX_VMFUNC, &ign, &funcs); From patchwork Tue Jun 1 08:47:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_NONE,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2697BC47080 for ; Tue, 1 Jun 2021 08:48:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 09D0A613B1 for ; Tue, 1 Jun 2021 08:48:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233422AbhFAIuB (ORCPT ); Tue, 1 Jun 2021 04:50:01 -0400 Received: from mga07.intel.com ([134.134.136.100]:45142 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233815AbhFAItw (ORCPT ); Tue, 1 Jun 2021 04:49:52 -0400 IronPort-SDR: MiXq/JtI+GYXubowU9VdcMavTnDuBud929BqRM8Dm5i0OpuD9EHQHpDEPKxvJI1uW5BU+qCKnk sx8NUfRPrCwg== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381326" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381326" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:09 -0700 IronPort-SDR: T5LQ++tYZETNnGLjYaG+/+XzXSCDeaG02gBwux8Z5KF5kHzVNbKBcWNjxjG0jrzaIeBMy1hOfe bUxCwUsprbCA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967776" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:06 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 04/15] kvm/vmx: Detect Tertiary VM-Execution control when setup VMCS config Date: Tue, 1 Jun 2021 16:47:43 +0800 Message-Id: <1622537274-146420-5-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: "Hu, Robert" Add new Tertiary VM-Exec Control field for vmcs_config and related functions. And when eVMCS in use, filter it out. 
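For illustration only, this is roughly how the new field is consumed by the following patches of this series (helper and field names as introduced there):

	/* Same pattern as the secondary controls: */
	if (cpu_has_tertiary_exec_ctrls())
		tertiary_exec_controls_set(vmx, vmcs_config.cpu_based_3rd_exec_ctrl);

	/*
	 * When eVMCS is in use, evmcs_sanitize_exec_ctrls() masks off
	 * CPU_BASED_ACTIVATE_TERTIARY_CONTROLS and zeroes
	 * cpu_based_3rd_exec_ctrl, so the branch above is never taken.
	 */
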
Signed-off-by: Hu, Robert --- arch/x86/include/asm/vmx.h | 1 + arch/x86/kvm/vmx/capabilities.h | 7 +++++++ arch/x86/kvm/vmx/evmcs.c | 2 ++ arch/x86/kvm/vmx/evmcs.h | 1 + arch/x86/kvm/vmx/vmx.c | 5 ++++- 5 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 0ffaa315..c035649 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -31,6 +31,7 @@ #define CPU_BASED_RDTSC_EXITING VMCS_CONTROL_BIT(RDTSC_EXITING) #define CPU_BASED_CR3_LOAD_EXITING VMCS_CONTROL_BIT(CR3_LOAD_EXITING) #define CPU_BASED_CR3_STORE_EXITING VMCS_CONTROL_BIT(CR3_STORE_EXITING) +#define CPU_BASED_ACTIVATE_TERTIARY_CONTROLS VMCS_CONTROL_BIT(TER_CONTROLS) #define CPU_BASED_CR8_LOAD_EXITING VMCS_CONTROL_BIT(CR8_LOAD_EXITING) #define CPU_BASED_CR8_STORE_EXITING VMCS_CONTROL_BIT(CR8_STORE_EXITING) #define CPU_BASED_TPR_SHADOW VMCS_CONTROL_BIT(VIRTUAL_TPR) diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index d1d7798..df7550c 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -60,6 +60,7 @@ struct vmcs_config { u32 pin_based_exec_ctrl; u32 cpu_based_exec_ctrl; u32 cpu_based_2nd_exec_ctrl; + u64 cpu_based_3rd_exec_ctrl; u32 vmexit_ctrl; u32 vmentry_ctrl; struct nested_vmx_msrs nested; @@ -133,6 +134,12 @@ static inline bool cpu_has_secondary_exec_ctrls(void) CPU_BASED_ACTIVATE_SECONDARY_CONTROLS; } +static inline bool cpu_has_tertiary_exec_ctrls(void) +{ + return vmcs_config.cpu_based_exec_ctrl & + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; +} + static inline bool cpu_has_vmx_virtualize_apic_accesses(void) { return vmcs_config.cpu_based_2nd_exec_ctrl & diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c index 41f2466..1e883ff 100644 --- a/arch/x86/kvm/vmx/evmcs.c +++ b/arch/x86/kvm/vmx/evmcs.c @@ -299,8 +299,10 @@ __init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf) { + vmcs_conf->cpu_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_EXEC_CTRL; vmcs_conf->pin_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_PINCTRL; vmcs_conf->cpu_based_2nd_exec_ctrl &= ~EVMCS1_UNSUPPORTED_2NDEXEC; + vmcs_conf->cpu_based_3rd_exec_ctrl = 0; vmcs_conf->vmexit_ctrl &= ~EVMCS1_UNSUPPORTED_VMEXIT_CTRL; vmcs_conf->vmentry_ctrl &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL; diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h index bd41d94..bf2c5e7 100644 --- a/arch/x86/kvm/vmx/evmcs.h +++ b/arch/x86/kvm/vmx/evmcs.h @@ -50,6 +50,7 @@ */ #define EVMCS1_UNSUPPORTED_PINCTRL (PIN_BASED_POSTED_INTR | \ PIN_BASED_VMX_PREEMPTION_TIMER) +#define EVMCS1_UNSUPPORTED_EXEC_CTRL (CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) #define EVMCS1_UNSUPPORTED_2NDEXEC \ (SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY | \ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES | \ diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index d000cdd..554e572 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2506,6 +2506,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, u32 _pin_based_exec_control = 0; u32 _cpu_based_exec_control = 0; u32 _cpu_based_2nd_exec_control = 0; + u64 _cpu_based_3rd_exec_control = 0; u32 _vmexit_control = 0; u32 _vmentry_control = 0; @@ -2527,7 +2528,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, opt = CPU_BASED_TPR_SHADOW | CPU_BASED_USE_MSR_BITMAPS | - CPU_BASED_ACTIVATE_SECONDARY_CONTROLS; + CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PROCBASED_CTLS, 
&_cpu_based_exec_control) < 0) return -EIO; @@ -2688,6 +2690,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control; vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control; vmcs_conf->cpu_based_2nd_exec_ctrl = _cpu_based_2nd_exec_control; + vmcs_conf->cpu_based_3rd_exec_ctrl = _cpu_based_3rd_exec_control; vmcs_conf->vmexit_ctrl = _vmexit_control; vmcs_conf->vmentry_ctrl = _vmentry_control; From patchwork Tue Jun 1 08:47:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6438BC47092 for ; Tue, 1 Jun 2021 08:48:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 48C7F613A9 for ; Tue, 1 Jun 2021 08:48:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233877AbhFAIuK (ORCPT ); Tue, 1 Jun 2021 04:50:10 -0400 Received: from mga07.intel.com ([134.134.136.100]:45129 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233848AbhFAIt5 (ORCPT ); Tue, 1 Jun 2021 04:49:57 -0400 IronPort-SDR: G6AyyxSgF3qS/qwSjf1aBSVX+SwnzWfNV7nizSajaoRwrmtFIFU8teUdLd0+cOzB12YT0HbrJp vOUJZIRSndWg== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381344" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381344" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:12 -0700 IronPort-SDR: Jn5COYwlkes7pkXWBE9cgprqyUGeO6ucrN5Hxp6vjaTalHC/Ge5cjiucrJf23XXOBYCkpeGbGo +qYcsFIrZ2iA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967783" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:09 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 05/15] kvm/vmx: Extend BUILD_CONTROLS_SHADOW macro to support 64-bit variation Date: Tue, 1 Jun 2021 16:47:44 +0800 Message-Id: <1622537274-146420-6-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The Tertiary VM-Exec Control, different from previous control fields, is 64 bit. So extend BUILD_CONTROLS_SHADOW() by adding a 'bit' parameter, to support both 32 bit and 64 bit fields' auxiliary functions building. Also, define the auxiliary functions for Tertiary control field here, using the new BUILD_CONTROLS_SHADOW(). 
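For reference, with the extra 'bits' argument the new 64-bit instantiation expands (by hand, illustrative only; the shadow field and VMCS encoding are added in the next patch) to:

	static inline void tertiary_exec_controls_set(struct vcpu_vmx *vmx, u64 val)
	{
		if (vmx->loaded_vmcs->controls_shadow.tertiary_exec != val) {
			vmcs_write64(TERTIARY_VM_EXEC_CONTROL, val);
			vmx->loaded_vmcs->controls_shadow.tertiary_exec = val;
		}
	}

	/*
	 * tertiary_exec_controls_get()/_setbit()/_clearbit() are generated the
	 * same way, taking/returning u64 instead of u32.
	 */
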
Suggested-by: Sean Christopherson Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmx.h | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 008cb87..e0ade10 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -412,31 +412,32 @@ static inline u8 vmx_get_rvi(void) return vmcs_read16(GUEST_INTR_STATUS) & 0xff; } -#define BUILD_CONTROLS_SHADOW(lname, uname) \ -static inline void lname##_controls_set(struct vcpu_vmx *vmx, u32 val) \ +#define BUILD_CONTROLS_SHADOW(lname, uname, bits) \ +static inline void lname##_controls_set(struct vcpu_vmx *vmx, u##bits val) \ { \ if (vmx->loaded_vmcs->controls_shadow.lname != val) { \ - vmcs_write32(uname, val); \ + vmcs_write##bits(uname, val); \ vmx->loaded_vmcs->controls_shadow.lname = val; \ } \ } \ -static inline u32 lname##_controls_get(struct vcpu_vmx *vmx) \ +static inline u##bits lname##_controls_get(struct vcpu_vmx *vmx) \ { \ return vmx->loaded_vmcs->controls_shadow.lname; \ } \ -static inline void lname##_controls_setbit(struct vcpu_vmx *vmx, u32 val) \ +static inline void lname##_controls_setbit(struct vcpu_vmx *vmx, u##bits val) \ { \ lname##_controls_set(vmx, lname##_controls_get(vmx) | val); \ } \ -static inline void lname##_controls_clearbit(struct vcpu_vmx *vmx, u32 val) \ +static inline void lname##_controls_clearbit(struct vcpu_vmx *vmx, u##bits val) \ { \ lname##_controls_set(vmx, lname##_controls_get(vmx) & ~val); \ } -BUILD_CONTROLS_SHADOW(vm_entry, VM_ENTRY_CONTROLS) -BUILD_CONTROLS_SHADOW(vm_exit, VM_EXIT_CONTROLS) -BUILD_CONTROLS_SHADOW(pin, PIN_BASED_VM_EXEC_CONTROL) -BUILD_CONTROLS_SHADOW(exec, CPU_BASED_VM_EXEC_CONTROL) -BUILD_CONTROLS_SHADOW(secondary_exec, SECONDARY_VM_EXEC_CONTROL) +BUILD_CONTROLS_SHADOW(vm_entry, VM_ENTRY_CONTROLS, 32) +BUILD_CONTROLS_SHADOW(vm_exit, VM_EXIT_CONTROLS, 32) +BUILD_CONTROLS_SHADOW(pin, PIN_BASED_VM_EXEC_CONTROL, 32) +BUILD_CONTROLS_SHADOW(exec, CPU_BASED_VM_EXEC_CONTROL, 32) +BUILD_CONTROLS_SHADOW(secondary_exec, SECONDARY_VM_EXEC_CONTROL, 32) +BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64) static inline void vmx_register_cache_reset(struct kvm_vcpu *vcpu) { From patchwork Tue Jun 1 08:47:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290763 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EC955C47092 for ; Tue, 1 Jun 2021 08:48:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D19846138C for ; Tue, 1 Jun 2021 08:48:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233921AbhFAIuS (ORCPT ); Tue, 1 Jun 2021 04:50:18 -0400 Received: from mga07.intel.com ([134.134.136.100]:45164 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233182AbhFAIt6 (ORCPT ); Tue, 1 Jun 2021 04:49:58 -0400 IronPort-SDR: wBaon7NPHhrrvu6F3UZ8GvxttPkWt1sCRlYvpu/yCfvBBeZZkfWJU5T5ZWfXKfSxDsMX5WXJRO qBlhCf2WFZrA== X-IronPort-AV: 
E=McAfee;i="6200,9189,10001"; a="267381353" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381353" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:15 -0700 IronPort-SDR: CP44KpMcMxtiTFQ+YGit0IDawMBPvayVTbQ78XltNVn8mmIPhxZTUvxUi5I2dsTgAmheGyTfS6 g2xGdt8aRxng== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967791" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:12 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 06/15] kvm/vmx: Set Tertiary VM-Execution control field When init vCPU's VMCS Date: Tue, 1 Jun 2021 16:47:45 +0800 Message-Id: <1622537274-146420-7-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Define the new 64bit VMCS field, as well as its auxiliary set function. In init_vmcs(), set this field to the value previously configured. But in vmx_exec_control(), at this moment, returning vmcs_config.primary_exec_control with tertiary-exec artifially disabled, as till now no real feature on it is enabled. This will be removed later in Key Locker enablement patch. Signed-off-by: Robert Hoo --- arch/x86/include/asm/vmx.h | 2 ++ arch/x86/kvm/vmx/vmcs.h | 1 + arch/x86/kvm/vmx/vmx.c | 6 ++++++ 3 files changed, 9 insertions(+) diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index c035649..dc549e3 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -222,6 +222,8 @@ enum vmcs_field { ENCLS_EXITING_BITMAP_HIGH = 0x0000202F, TSC_MULTIPLIER = 0x00002032, TSC_MULTIPLIER_HIGH = 0x00002033, + TERTIARY_VM_EXEC_CONTROL = 0x00002034, + TERTIARY_VM_EXEC_CONTROL_HIGH = 0x00002035, GUEST_PHYSICAL_ADDRESS = 0x00002400, GUEST_PHYSICAL_ADDRESS_HIGH = 0x00002401, VMCS_LINK_POINTER = 0x00002800, diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h index 1472c6c..343c329 100644 --- a/arch/x86/kvm/vmx/vmcs.h +++ b/arch/x86/kvm/vmx/vmcs.h @@ -48,6 +48,7 @@ struct vmcs_controls_shadow { u32 pin; u32 exec; u32 secondary_exec; + u64 tertiary_exec; }; /* diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 554e572..56a56f5 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4255,6 +4255,9 @@ u32 vmx_exec_control(struct vcpu_vmx *vmx) CPU_BASED_MONITOR_EXITING); if (kvm_hlt_in_guest(vmx->vcpu.kvm)) exec_control &= ~CPU_BASED_HLT_EXITING; + + /* Disable Tertiary-Exec Control at this moment, as no feature used yet */ + exec_control &= ~CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; return exec_control; } @@ -4413,6 +4416,9 @@ static void init_vmcs(struct vcpu_vmx *vmx) secondary_exec_controls_set(vmx, vmx->secondary_exec_control); } + if (cpu_has_tertiary_exec_ctrls()) + tertiary_exec_controls_set(vmx, vmcs_config.cpu_based_3rd_exec_ctrl); + if (kvm_vcpu_apicv_active(&vmx->vcpu)) { vmcs_write64(EOI_EXIT_BITMAP0, 0); vmcs_write64(EOI_EXIT_BITMAP1, 0); From patchwork Tue Jun 1 08:47:46 2021 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290761 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 415DBC4708F for ; Tue, 1 Jun 2021 08:48:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 26734613A9 for ; Tue, 1 Jun 2021 08:48:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233784AbhFAIuR (ORCPT ); Tue, 1 Jun 2021 04:50:17 -0400 Received: from mga07.intel.com ([134.134.136.100]:45142 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233628AbhFAIuA (ORCPT ); Tue, 1 Jun 2021 04:50:00 -0400 IronPort-SDR: F3Iod4SisXO2+g+PoOhzcztIDIOyMlgjZz19OUZ14GXgk+Zf3IIPhhZ4tMn/srghnIMqE1uPud HdN5B5yNhP7A== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381375" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381375" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:18 -0700 IronPort-SDR: eb4AjpMNOguevhBgoborz8xZ4/APOoo15DSixNyUBXcXZOmjAWWOLsb/Go5KNqGI065msxl7tH xBoMPPKZoWKQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967799" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:15 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 07/15] kvm/vmx: dump_vmcs() reports tertiary_exec_control field as well Date: Tue, 1 Jun 2021 16:47:46 +0800 Message-Id: <1622537274-146420-8-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: "Hu, Robert" Signed-off-by: Hu, Robert --- arch/x86/kvm/vmx/vmx.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 56a56f5..afcf1e0 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5808,6 +5808,7 @@ void dump_vmcs(struct kvm_vcpu *vcpu) struct vcpu_vmx *vmx = to_vmx(vcpu); u32 vmentry_ctl, vmexit_ctl; u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control; + u64 tertiary_exec_control = 0; unsigned long cr4; int efer_slot; @@ -5825,6 +5826,9 @@ void dump_vmcs(struct kvm_vcpu *vcpu) if (cpu_has_secondary_exec_ctrls()) secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL); + if (cpu_has_tertiary_exec_ctrls()) + tertiary_exec_control = vmcs_read64(TERTIARY_VM_EXEC_CONTROL); + pr_err("*** Guest State ***\n"); pr_err("CR0: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n", vmcs_readl(GUEST_CR0), 
vmcs_readl(CR0_READ_SHADOW), @@ -5921,8 +5925,9 @@ void dump_vmcs(struct kvm_vcpu *vcpu) vmx_dump_msrs("host autoload", &vmx->msr_autoload.host); pr_err("*** Control State ***\n"); - pr_err("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n", - pin_based_exec_ctrl, cpu_based_exec_ctrl, secondary_exec_control); + pr_err("PinBased=0x%08x CPUBased=0x%08x SecondaryExec=0x%08x TertiaryExec=0x%016llx\n", + pin_based_exec_ctrl, cpu_based_exec_ctrl, secondary_exec_control, + tertiary_exec_control); pr_err("EntryControls=%08x ExitControls=%08x\n", vmentry_ctl, vmexit_ctl); pr_err("ExceptionBitmap=%08x PFECmask=%08x PFECmatch=%08x\n", vmcs_read32(EXCEPTION_BITMAP), From patchwork Tue Jun 1 08:47:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290765 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.0 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B3BDC47092 for ; Tue, 1 Jun 2021 08:48:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 137E961375 for ; Tue, 1 Jun 2021 08:48:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234017AbhFAIuf (ORCPT ); Tue, 1 Jun 2021 04:50:35 -0400 Received: from mga07.intel.com ([134.134.136.100]:45201 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233922AbhFAIuT (ORCPT ); Tue, 1 Jun 2021 04:50:19 -0400 IronPort-SDR: 0BmBQhEOOB1ACZqI7ysr7E889h6gMJBXN4ow3cRa/IfZOE5Y1YBTfliOrQ6Kk4LqrLDQ9HA9wF iHa/q2yM0UuA== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381405" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381405" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:21 -0700 IronPort-SDR: gZGos9obolz22FwzcztTsS14KM/KJODpjvKtpIppsMoovPuaK3iYWNEinlQthuMyargxwdKBvJ R/t6w+pa4rvw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967812" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:18 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 08/15] kvm/vmx: Add KVM support on guest Key Locker operations Date: Tue, 1 Jun 2021 16:47:47 +0800 Message-Id: <1622537274-146420-9-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Don't clear CPU_BASED_ACTIVATE_TERTIARY_CONTROLS in vmx_exec_control(), as we really need it now. Enable and implement handle_loadiwkey() VM-Exit handler, which fetches guest IWKey and do it onbehalf. 
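In rough terms, the handler does the following (a simplified sketch of the code added further below in this patch):

	/* On EXIT_REASON_LOADIWKEY (69), decode the guest's XMM operands. */
	vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
	reg1 = (vmx_instruction_info & 0x78) >> 3;	/* bits 6:3 */
	reg2 = (vmx_instruction_info >> 28) & 0xf;	/* bits 31:28 */

	/*
	 * xmm0 carries the integrity key, reg1/reg2 the 256-bit encryption
	 * key.  Stash them in vcpu->arch.iwkey, execute LOADIWKEY on the host
	 * with the guest's key material (vmx_load_guest_iwkey()), then skip
	 * the emulated instruction.
	 */
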
Other Key Locker instructions can execute in non-root mode. (Note: till this patch, we haven't expose Key Locker feature to guest yet, guest Kernel won't set CR4.KL, if guest deliberately execute Key Locker instructions, it will get #UD, instead of VM-Exit.) We load guest's IWKey when load vcpu, even if it is NULL (guest doesn't support/enable Key Locker), to flush last vcpu's IWKey, which is possibly another VM's. We flush guest's IWKey (loadiwkey with all 0) when put vcpu. Trap guest write on MSRs of IA32_COPY_LOCAL_TO_PLATFORM and IA32_COPY_PLATFORM_TO_LOCAL_TO_PLATFORM, emulate IWKey save and restore operations. Trap guest read on MSRs of IA32_COPY_STATUS and IA32_IWKEYBACKUP_STATUS, return their shadow values. Analogous to adjust_vmx_controls(), we define the adjust_vmx_controls_64() auxiliary function, for MSR_IA32_VMX_PROCBASED_CTLS3 is 64bit allow-1 semantics, different from previous VMX capability MSRs, which were 32bit allow-0 and 32bit allow-1. Also, define a helper get_xmm(), which per input index fetches an xmm value. VM-Exit of LOADIWKEY saves IWKey.encryption_key in some 2 xmm regs, and LOADIWKEY itself implicitly uses xmm0~2 as input. This helper facilitates xmm value's save/restore. Signed-off-by: Robert Hoo --- arch/x86/include/asm/kvm_host.h | 22 ++++ arch/x86/include/asm/vmx.h | 6 + arch/x86/include/uapi/asm/vmx.h | 2 + arch/x86/kvm/vmx/vmx.c | 249 +++++++++++++++++++++++++++++++++++++++- 4 files changed, 276 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index cbbcee0..4b929dc 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -558,6 +558,19 @@ struct kvm_vcpu_xen { u64 runstate_times[4]; }; +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(CONFIG_CC_HAS_INT128) +typedef unsigned __int128 u128; +#else +typedef struct { + u64 reg64[2]; +} u128; +#endif + +struct iwkey { + u128 encryption_key[2]; /* 256bit encryption key */ + u128 integrity_key; /* 128bit integration key */ +}; + struct kvm_vcpu_arch { /* * rip and regs accesses must go through @@ -849,6 +862,11 @@ struct kvm_vcpu_arch { /* Protected Guests */ bool guest_state_protected; + + /* Intel KeyLocker */ + bool iwkey_loaded; + struct iwkey iwkey; + u32 msr_ia32_copy_status; }; struct kvm_lpage_info { @@ -1003,6 +1021,10 @@ struct kvm_arch { bool apic_access_page_done; unsigned long apicv_inhibit_reasons; + bool iwkey_backup_valid; + u32 msr_ia32_iwkey_backup_status; + struct iwkey iwkey_backup; + gpa_t wall_clock; bool mwait_in_guest; diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index dc549e3..71ac797 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -76,6 +76,12 @@ #define SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE VMCS_CONTROL_BIT(USR_WAIT_PAUSE) #define SECONDARY_EXEC_BUS_LOCK_DETECTION VMCS_CONTROL_BIT(BUS_LOCK_DETECTION) +/* + * Definitions of Tertiary Processor-Based VM-Execution Controls. 
+ */ +#define TERTIARY_EXEC_LOADIWKEY_EXITING VMCS_CONTROL_BIT(LOADIWKEY_EXITING) + + #define PIN_BASED_EXT_INTR_MASK VMCS_CONTROL_BIT(INTR_EXITING) #define PIN_BASED_NMI_EXITING VMCS_CONTROL_BIT(NMI_EXITING) #define PIN_BASED_VIRTUAL_NMIS VMCS_CONTROL_BIT(VIRTUAL_NMIS) diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h index 946d761..25ab849 100644 --- a/arch/x86/include/uapi/asm/vmx.h +++ b/arch/x86/include/uapi/asm/vmx.h @@ -90,6 +90,7 @@ #define EXIT_REASON_XRSTORS 64 #define EXIT_REASON_UMWAIT 67 #define EXIT_REASON_TPAUSE 68 +#define EXIT_REASON_LOADIWKEY 69 #define EXIT_REASON_BUS_LOCK 74 #define VMX_EXIT_REASONS \ @@ -153,6 +154,7 @@ { EXIT_REASON_XRSTORS, "XRSTORS" }, \ { EXIT_REASON_UMWAIT, "UMWAIT" }, \ { EXIT_REASON_TPAUSE, "TPAUSE" }, \ + { EXIT_REASON_LOADIWKEY, "LOADIWKEY" }, \ { EXIT_REASON_BUS_LOCK, "BUS_LOCK" } #define VMX_EXIT_REASON_FLAGS \ diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index afcf1e0..752b1e4 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -47,6 +47,7 @@ #include #include #include +#include #include "capabilities.h" #include "cpuid.h" @@ -1220,6 +1221,140 @@ void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel, } } +static int get_xmm(int index, u128 *mem_ptr) +{ + int ret = 0; + + switch (index) { + case 0: + asm ("movdqu %%xmm0, %0" : : "m"(*mem_ptr)); + break; + case 1: + asm ("movdqu %%xmm1, %0" : : "m"(*mem_ptr)); + break; + case 2: + asm ("movdqu %%xmm2, %0" : : "m"(*mem_ptr)); + break; + case 3: + asm ("movdqu %%xmm3, %0" : : "m"(*mem_ptr)); + break; + case 4: + asm ("movdqu %%xmm4, %0" : : "m"(*mem_ptr)); + break; + case 5: + asm ("movdqu %%xmm5, %0" : : "m"(*mem_ptr)); + break; + case 6: + asm ("movdqu %%xmm6, %0" : : "m"(*mem_ptr)); + break; + case 7: + asm ("movdqu %%xmm7, %0" : : "m"(*mem_ptr)); + break; +#ifdef CONFIG_X86_64 + case 8: + asm ("movdqu %%xmm8, %0" : : "m"(*mem_ptr)); + break; + case 9: + asm ("movdqu %%xmm9, %0" : : "m"(*mem_ptr)); + break; + case 10: + asm ("movdqu %%xmm10, %0" : : "m"(*mem_ptr)); + break; + case 11: + asm ("movdqu %%xmm11, %0" : : "m"(*mem_ptr)); + break; + case 12: + asm ("movdqu %%xmm12, %0" : : "m"(*mem_ptr)); + break; + case 13: + asm ("movdqu %%xmm13, %0" : : "m"(*mem_ptr)); + break; + case 14: + asm ("movdqu %%xmm14, %0" : : "m"(*mem_ptr)); + break; + case 15: + asm ("movdqu %%xmm15, %0" : : "m"(*mem_ptr)); + break; +#endif + default: + WARN(1, "xmm index exceeds"); + ret = -1; + break; + } + + return ret; +} + +static void vmx_load_guest_iwkey(struct kvm_vcpu *vcpu) +{ + u128 xmm[3] = {0}; + int ret; + + /* + * By current design, Guest and Host can only exclusively + * use Key Locker. We can assert that CR4.KL is 0 here, + * otherwise, it's abnormal and worth a warn. 
+ */ + if (cr4_read_shadow() & X86_CR4_KEYLOCKER) { + WARN(1, "Host is using Key Locker, " + "guest should not use it"); + return; + } + + cr4_set_bits(X86_CR4_KEYLOCKER); + + /* Save origin %xmm */ + get_xmm(0, &xmm[0]); + get_xmm(1, &xmm[1]); + get_xmm(2, &xmm[2]); + + asm ("movdqu %0, %%xmm0;" + "movdqu %1, %%xmm1;" + "movdqu %2, %%xmm2;" + : : "m"(vcpu->arch.iwkey.integrity_key), + "m"(vcpu->arch.iwkey.encryption_key[0]), + "m"(vcpu->arch.iwkey.encryption_key[1])); + + ret = loadiwkey(KEYSRC_SWRAND); + /* restore %xmm */ + asm ("movdqu %0, %%xmm0;" + "movdqu %1, %%xmm1;" + "movdqu %2, %%xmm2;" + : : "m"(xmm[0]), + "m"(xmm[1]), + "m"(xmm[2])); + + cr4_clear_bits(X86_CR4_KEYLOCKER); +} + +static void vmx_clear_guest_iwkey(void) +{ + u128 xmm[3] = {0}; + u128 zero = 0; + int ret; + + cr4_set_bits(X86_CR4_KEYLOCKER); + /* Save origin %xmm */ + get_xmm(0, &xmm[0]); + get_xmm(1, &xmm[1]); + get_xmm(2, &xmm[2]); + + asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;" + :: "m"(zero)); + + ret = loadiwkey(KEYSRC_SWRAND); + + /* restore %xmm */ + asm ("movdqu %0, %%xmm0;" + "movdqu %1, %%xmm1;" + "movdqu %2, %%xmm2;" + : : "m"(xmm[0]), + "m"(xmm[1]), + "m"(xmm[2])); + + cr4_clear_bits(X86_CR4_KEYLOCKER); +} + void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); @@ -1430,6 +1565,8 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) vmx_vcpu_pi_load(vcpu, cpu); + vmx_load_guest_iwkey(vcpu); + vmx->host_debugctlmsr = get_debugctlmsr(); } @@ -1437,6 +1574,8 @@ static void vmx_vcpu_put(struct kvm_vcpu *vcpu) { vmx_vcpu_pi_put(vcpu); + vmx_clear_guest_iwkey(); + vmx_prepare_switch_to_host(to_vmx(vcpu)); } @@ -2001,6 +2140,19 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_IA32_DEBUGCTLMSR: msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL); break; + case MSR_IA32_COPY_STATUS: + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + + msr_info->data = vcpu->arch.msr_ia32_copy_status; + break; + + case MSR_IA32_IWKEYBACKUP_STATUS: + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + + msr_info->data = vcpu->kvm->arch.msr_ia32_iwkey_backup_status; + break; default: find_uret_msr: msr = vmx_find_uret_msr(vmx, msr_info->index); @@ -2313,6 +2465,36 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) else vmx->pt_desc.guest.addr_a[index / 2] = data; break; + case MSR_IA32_COPY_LOCAL_TO_PLATFORM: + if (msr_info->data != 1) + return 1; + + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + + if (!vcpu->arch.iwkey_loaded) + return 1; + + if (!vcpu->kvm->arch.iwkey_backup_valid) { + vcpu->kvm->arch.iwkey_backup = vcpu->arch.iwkey; + vcpu->kvm->arch.iwkey_backup_valid = true; + vcpu->kvm->arch.msr_ia32_iwkey_backup_status = 0x9; + } + vcpu->arch.msr_ia32_copy_status = 1; + break; + + case MSR_IA32_COPY_PLATFORM_TO_LOCAL: + if (msr_info->data != 1) + return 1; + + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + if (!vcpu->kvm->arch.iwkey_backup_valid) + return 1; + vcpu->arch.iwkey = vcpu->kvm->arch.iwkey_backup; + vcpu->arch.msr_ia32_copy_status = 1; + break; + case MSR_TSC_AUX: if (!msr_info->host_initiated && !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) @@ -2498,6 +2680,23 @@ static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, return 0; } +static __init int adjust_vmx_controls_64(u64 ctl_min, u64 ctl_opt, + u32 msr, u64 *result) +{ + u64 vmx_msr; + u64 ctl = ctl_min | ctl_opt; + + rdmsrl(msr, vmx_msr); + ctl &= 
vmx_msr; /* bit == 1 means it can be set */ + + /* Ensure minimum (required) set of control bits are supported. */ + if (ctl_min & ~ctl) + return -EIO; + + *result = ctl; + return 0; +} + static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, struct vmx_capability *vmx_cap) { @@ -2603,6 +2802,16 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, "1-setting enable VPID VM-execution control\n"); } + if (_cpu_based_exec_control & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) { + u64 opt3 = TERTIARY_EXEC_LOADIWKEY_EXITING; + u64 min3 = 0; + + if (adjust_vmx_controls_64(min3, opt3, + MSR_IA32_VMX_PROCBASED_CTLS3, + &_cpu_based_3rd_exec_control)) + return -EIO; + } + min = VM_EXIT_SAVE_DEBUG_CONTROLS | VM_EXIT_ACK_INTR_ON_EXIT; #ifdef CONFIG_X86_64 min |= VM_EXIT_HOST_ADDR_SPACE_SIZE; @@ -4255,9 +4464,6 @@ u32 vmx_exec_control(struct vcpu_vmx *vmx) CPU_BASED_MONITOR_EXITING); if (kvm_hlt_in_guest(vmx->vcpu.kvm)) exec_control &= ~CPU_BASED_HLT_EXITING; - - /* Disable Tertiary-Exec Control at this moment, as no feature used yet */ - exec_control &= ~CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; return exec_control; } @@ -5656,6 +5862,42 @@ static int handle_bus_lock_vmexit(struct kvm_vcpu *vcpu) return 0; } +static int handle_loadiwkey(struct kvm_vcpu *vcpu) +{ + u128 xmm[3] = {0}; + u32 vmx_instruction_info; + int reg1, reg2; + int r; + + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) { + kvm_queue_exception(vcpu, UD_VECTOR); + return 1; + } + + vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO); + reg1 = (vmx_instruction_info & 0x78) >> 3; + reg2 = (vmx_instruction_info >> 28) & 0xf; + + r = get_xmm(0, &xmm[0]); + if (r) + return 0; + r = get_xmm(reg1, &xmm[1]); + if (r) + return 0; + r = get_xmm(reg2, &xmm[2]); + if (r) + return 0; + + vcpu->arch.iwkey.integrity_key = xmm[0]; + vcpu->arch.iwkey.encryption_key[0] = xmm[1]; + vcpu->arch.iwkey.encryption_key[1] = xmm[2]; + vcpu->arch.iwkey_loaded = true; + + vmx_load_guest_iwkey(vcpu); + + return kvm_skip_emulated_instruction(vcpu); +} + /* * The exit handlers return 1 if the exit was handled fully and guest execution * may resume. 
Otherwise they set the kvm_run parameter to indicate what needs @@ -5713,6 +5955,7 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = { [EXIT_REASON_PREEMPTION_TIMER] = handle_preemption_timer, [EXIT_REASON_ENCLS] = handle_encls, [EXIT_REASON_BUS_LOCK] = handle_bus_lock_vmexit, + [EXIT_REASON_LOADIWKEY] = handle_loadiwkey, }; static const int kvm_vmx_max_exit_handlers = From patchwork Tue Jun 1 08:47:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290767 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.0 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FD95C47092 for ; Tue, 1 Jun 2021 08:49:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 508D5613AB for ; Tue, 1 Jun 2021 08:49:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234041AbhFAIus (ORCPT ); Tue, 1 Jun 2021 04:50:48 -0400 Received: from mga07.intel.com ([134.134.136.100]:45208 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233813AbhFAIuW (ORCPT ); Tue, 1 Jun 2021 04:50:22 -0400 IronPort-SDR: nulyBmdU0wcg3FjdYAUGmbbpy/Pf1F/hi5s02WznrO/oI5tObkX0XjGsHIJP+Dw0I28PSgDnAC LYltx+dde82Q== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381416" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381416" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:24 -0700 IronPort-SDR: LckntoZwfGAbor9II+7dgzhSuNyZ5pUXvjsNlFgA53Y40Qsu1oIgscdPH2w+9Tm3onCwLob7kf oQcfnSZRdvgA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967817" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:21 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 09/15] kvm/cpuid: Enumerate Key Locker feature in KVM Date: Tue, 1 Jun 2021 16:47:48 +0800 Message-Id: <1622537274-146420-10-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In kvm_set_cpu_caps(), add Key Locker feature enumeration, under the condition that 1) HW supports this feature 2) host Kernel isn't enabled with this feature. Among its sub-features, filter out randomization support bit (CPUID.0x19.ECX[1]), as by design it cannot be supported at this moment. (Refer to Intel Key Locker Spec.) Also, CPUID.0x19.EBX[0] (Key Locker instructions is enabled) is dynamic, based on CR4.KL status, thus reserve CR4.KL, and update this CPUID bit dynamically. 
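For illustration, the dynamic part boils down to the following (a sketch of the kvm_update_cpuid_runtime() hunk in the diff below):

	/*
	 * CPUID.0x19.EBX[0] ("Key Locker instructions enabled") tracks the
	 * guest's CR4.KL, so refresh it whenever the guest toggles that bit.
	 */
	best = kvm_find_cpuid_entry(vcpu, 0x19, 0);
	if (best)
		cpuid_entry_change(best, X86_FEATURE_KL_INS_ENABLED,
				   kvm_read_cr4_bits(vcpu, X86_CR4_KEYLOCKER));
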
Signed-off-by: Robert Hoo --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/cpuid.c | 26 ++++++++++++++++++++++++-- arch/x86/kvm/reverse_cpuid.h | 32 ++++++++++++++++++++++++++++---- arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/kvm/x86.h | 2 ++ 5 files changed, 56 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4b929dc..aec9ccc 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -103,7 +103,7 @@ | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \ | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \ | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \ - | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP)) + | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP | X86_CR4_KEYLOCKER)) #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 19606a3..f3b00ae 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -135,6 +135,12 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu) cpuid_entry_has(best, X86_FEATURE_XSAVEC))) best->ebx = xstate_required_size(vcpu->arch.xcr0, true); + /* update CPUID.0x19.EBX[0], depends on CR4.KL */ + best = kvm_find_cpuid_entry(vcpu, 0x19, 0); + if (best) + cpuid_entry_change(best, X86_FEATURE_KL_INS_ENABLED, + kvm_read_cr4_bits(vcpu, X86_CR4_KEYLOCKER)); + best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0); if (kvm_hlt_in_guest(vcpu->kvm) && best && (best->eax & (1 << KVM_FEATURE_PV_UNHALT))) @@ -456,14 +462,20 @@ void kvm_set_cpu_caps(void) kvm_cpu_cap_mask(CPUID_7_ECX, F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) | F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) | - F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) | - F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/ | + F(VAES) | 0 /*KEYLOCKER*/ | F(VPCLMULQDQ) | F(AVX512_VNNI) | + F(AVX512_BITALG) | F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/ | F(SGX_LC) ); + /* Set LA57 based on hardware capability. */ if (cpuid_ecx(7) & F(LA57)) kvm_cpu_cap_set(X86_FEATURE_LA57); + /* At present, host and guest can only exclusively use KeyLocker */ + if (!boot_cpu_has(X86_FEATURE_KEYLOCKER) && (cpuid_ecx(0x7) & + feature_bit(KEYLOCKER))) + kvm_cpu_cap_set(X86_FEATURE_KEYLOCKER); + /* * PKU not yet implemented for shadow paging and requires OSPKE * to be set on the host. 
Clear it if that is not the case @@ -500,6 +512,10 @@ void kvm_set_cpu_caps(void) kvm_cpu_cap_init_scattered(CPUID_12_EAX, SF(SGX1) | SF(SGX2) ); + kvm_cpu_cap_init_scattered(CPUID_19_EBX, SF(KL_INS_ENABLED) | SF(KL_WIDE) | + SF(IWKEY_BACKUP)); + /* No randomize exposed to guest */ + kvm_cpu_cap_init_scattered(CPUID_19_ECX, SF(IWKEY_NOBACKUP)); kvm_cpu_cap_mask(CPUID_8000_0001_ECX, F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ | @@ -870,6 +886,12 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) goto out; } break; + /* KeyLocker */ + case 0x19: + cpuid_entry_override(entry, CPUID_19_EBX); + cpuid_entry_override(entry, CPUID_19_ECX); + break; + case KVM_CPUID_SIGNATURE: { static const char signature[12] = "KVMKVMKVM\0\0"; const u32 *sigptr = (const u32 *)signature; diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h index a19d473..3ea8875 100644 --- a/arch/x86/kvm/reverse_cpuid.h +++ b/arch/x86/kvm/reverse_cpuid.h @@ -13,6 +13,9 @@ */ enum kvm_only_cpuid_leafs { CPUID_12_EAX = NCAPINTS, + CPUID_19_EBX, + CPUID_19_ECX, + NR_KVM_CPU_CAPS, NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS, @@ -24,6 +27,13 @@ enum kvm_only_cpuid_leafs { #define KVM_X86_FEATURE_SGX1 KVM_X86_FEATURE(CPUID_12_EAX, 0) #define KVM_X86_FEATURE_SGX2 KVM_X86_FEATURE(CPUID_12_EAX, 1) +/* Intel-defined Key Locker sub-features, CPUID level 0x19 (EBX). */ +#define KVM_X86_FEATURE_KL_INS_ENABLED KVM_X86_FEATURE(CPUID_19_EBX, 0) +#define KVM_X86_FEATURE_KL_WIDE KVM_X86_FEATURE(CPUID_19_EBX, 2) +#define KVM_X86_FEATURE_IWKEY_BACKUP KVM_X86_FEATURE(CPUID_19_EBX, 4) +#define KVM_X86_FEATURE_IWKEY_NOBACKUP KVM_X86_FEATURE(CPUID_19_ECX, 0) +#define KVM_X86_FEATURE_IWKEY_RAND KVM_X86_FEATURE(CPUID_19_ECX, 1) + struct cpuid_reg { u32 function; u32 index; @@ -48,6 +58,8 @@ struct cpuid_reg { [CPUID_7_1_EAX] = { 7, 1, CPUID_EAX}, [CPUID_12_EAX] = {0x00000012, 0, CPUID_EAX}, [CPUID_8000_001F_EAX] = {0x8000001f, 0, CPUID_EAX}, + [CPUID_19_EBX] = { 0x19, 0, CPUID_EBX}, + [CPUID_19_ECX] = { 0x19, 0, CPUID_ECX}, }; /* @@ -74,12 +86,24 @@ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf) */ static __always_inline u32 __feature_translate(int x86_feature) { - if (x86_feature == X86_FEATURE_SGX1) + switch (x86_feature) { + case X86_FEATURE_SGX1: return KVM_X86_FEATURE_SGX1; - else if (x86_feature == X86_FEATURE_SGX2) + case X86_FEATURE_SGX2: return KVM_X86_FEATURE_SGX2; - - return x86_feature; + case X86_FEATURE_KL_INS_ENABLED: + return KVM_X86_FEATURE_KL_INS_ENABLED; + case X86_FEATURE_KL_WIDE: + return KVM_X86_FEATURE_KL_WIDE; + case X86_FEATURE_IWKEY_BACKUP: + return KVM_X86_FEATURE_IWKEY_BACKUP; + case X86_FEATURE_IWKEY_NOBACKUP: + return KVM_X86_FEATURE_IWKEY_NOBACKUP; + case X86_FEATURE_IWKEY_RAND: + return KVM_X86_FEATURE_IWKEY_RAND; + default: + return x86_feature; + } } static __always_inline u32 __feature_leaf(int x86_feature) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 752b1e4..da4c123 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -3507,7 +3507,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) vmcs_writel(CR4_READ_SHADOW, cr4); vmcs_writel(GUEST_CR4, hw_cr4); - if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE)) + if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE | X86_CR4_KEYLOCKER)) kvm_update_cpuid_runtime(vcpu); } diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 521f74e..b0cf5f7 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -485,6 +485,8 @@ int 
kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r, __reserved_bits |= X86_CR4_VMXE; \ if (!__cpu_has(__c, X86_FEATURE_PCID)) \ __reserved_bits |= X86_CR4_PCIDE; \ + if (!__cpu_has(__c, X86_FEATURE_KEYLOCKER)) \ + __reserved_bits |= X86_CR4_KEYLOCKER; \ __reserved_bits; \ }) From patchwork Tue Jun 1 08:47:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFA5DC4708F for ; Tue, 1 Jun 2021 08:49:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B4C7D61364 for ; Tue, 1 Jun 2021 08:49:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233832AbhFAIuu (ORCPT ); Tue, 1 Jun 2021 04:50:50 -0400 Received: from mga07.intel.com ([134.134.136.100]:45213 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233818AbhFAIuW (ORCPT ); Tue, 1 Jun 2021 04:50:22 -0400 IronPort-SDR: yKVa4fsE+CY758/R3/1tXTyFV9uKKEsa/STmFlcLQBrhF5YHL5ZNNNmC7GgRI8dsQPDIYEsp4T 3U5KyKaC+lTw== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381439" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381439" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:27 -0700 IronPort-SDR: j6+pw+2suYFV1Y2iWYvoJjKwf6F8XIkYNO0Pqgtt9LxZMUu2Y0p9wiB3HbeYfTCrnkNTaIAK/2 SEYyGd2ZNX5A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967826" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:24 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 10/15] kvm/vmx/nested: Support new IA32_VMX_PROCBASED_CTLS3 vmx capability MSR Date: Tue, 1 Jun 2021 16:47:49 +0800 Message-Id: <1622537274-146420-11-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add this new VMX capability MSR in nested_vmx_msrs, and related functions for its nested support. Don't set its LOADIWKEY VM-Exit bit at present. It will be enabled in last patch when everything's prepared. 
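The restore path for such an allow-1 capability MSR reduces to a bitwise-subset test: userspace may clear allowed bits, but it must never set a bit that the capability value reports as must-be-0. Below is a minimal stand-alone sketch of that test in plain C; it is illustrative only, the helper name and example bit are made up, and the kernel itself uses its own is_bitwise_subset() helper as in the hunk below.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a tertiary control bit such as LOADIWKEY exiting. */
#define EXAMPLE_LOADIWKEY_EXITING (1ULL << 0)

static bool is_allowed_subset(uint64_t capability, uint64_t value)
{
	/* Every bit set in 'value' must also be set in 'capability'. */
	return (value & ~capability) == 0;
}

int main(void)
{
	uint64_t cap = EXAMPLE_LOADIWKEY_EXITING;	/* bits the capability MSR allows */

	printf("%d\n", is_allowed_subset(cap, EXAMPLE_LOADIWKEY_EXITING));	/* 1: accepted */
	printf("%d\n", is_allowed_subset(cap, 1ULL << 5));			/* 0: rejected, sets a must-be-0 bit */
	return 0;
}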
Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/capabilities.h | 2 ++ arch/x86/kvm/vmx/nested.c | 22 +++++++++++++++++++++- arch/x86/kvm/vmx/vmx.c | 6 +++--- arch/x86/kvm/x86.c | 2 ++ 4 files changed, 28 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index df7550c..0a0bea9 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -33,6 +33,8 @@ struct nested_vmx_msrs { u32 procbased_ctls_high; u32 secondary_ctls_low; u32 secondary_ctls_high; + /* Tertiary Controls is 64bit allow-1 semantics */ + u64 tertiary_ctls; u32 pinbased_ctls_low; u32 pinbased_ctls_high; u32 exit_ctls_low; diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index bced766..b04184b 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -1272,6 +1272,18 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data) lowp = &vmx->nested.msrs.secondary_ctls_low; highp = &vmx->nested.msrs.secondary_ctls_high; break; + /* + * MSR_IA32_VMX_PROCBASED_CTLS3 is 64bit, all allow-1 (must-be-0) + * semantics. + */ + case MSR_IA32_VMX_PROCBASED_CTLS3: + /* Check must-be-0 bits are still 0. */ + if (!is_bitwise_subset(vmx->nested.msrs.tertiary_ctls, + data, GENMASK_ULL(63, 0))) + return -EINVAL; + + vmx->nested.msrs.tertiary_ctls = data; + return 0; default: BUG(); } @@ -1408,6 +1420,7 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data) case MSR_IA32_VMX_TRUE_EXIT_CTLS: case MSR_IA32_VMX_TRUE_ENTRY_CTLS: case MSR_IA32_VMX_PROCBASED_CTLS2: + case MSR_IA32_VMX_PROCBASED_CTLS3: return vmx_restore_control_msr(vmx, msr_index, data); case MSR_IA32_VMX_MISC: return vmx_restore_vmx_misc(vmx, data); @@ -1503,6 +1516,9 @@ int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata) msrs->secondary_ctls_low, msrs->secondary_ctls_high); break; + case MSR_IA32_VMX_PROCBASED_CTLS3: + *pdata = msrs->tertiary_ctls; + break; case MSR_IA32_VMX_EPT_VPID_CAP: *pdata = msrs->ept_caps | ((u64)msrs->vpid_caps << 32); @@ -6429,7 +6445,8 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps) CPU_BASED_USE_IO_BITMAPS | CPU_BASED_MONITOR_TRAP_FLAG | CPU_BASED_MONITOR_EXITING | CPU_BASED_RDPMC_EXITING | CPU_BASED_RDTSC_EXITING | CPU_BASED_PAUSE_EXITING | - CPU_BASED_TPR_SHADOW | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS; + CPU_BASED_TPR_SHADOW | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; /* * We can allow some features even when not supported by the * hardware. For example, L1 can specify an MSR bitmap - and we @@ -6467,6 +6484,9 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps) SECONDARY_EXEC_RDSEED_EXITING | SECONDARY_EXEC_XSAVES; + if (msrs->procbased_ctls_high & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) + rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, msrs->tertiary_ctls); + msrs->tertiary_ctls &= 0; /* * We can emulate "VMCS shadowing," even if the hardware * doesn't support it. diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index da4c123..44c9f16 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1981,7 +1981,7 @@ static inline bool vmx_feature_control_msr_valid(struct kvm_vcpu *vcpu, static int vmx_get_msr_feature(struct kvm_msr_entry *msr) { switch (msr->index) { - case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: + case MSR_IA32_VMX_BASIC ... 
MSR_IA32_VMX_PROCBASED_CTLS3: if (!nested) return 1; return vmx_get_vmx_msr(&vmcs_config.nested, msr->index, &msr->data); @@ -2069,7 +2069,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) msr_info->data = to_vmx(vcpu)->msr_ia32_sgxlepubkeyhash [msr_info->index - MSR_IA32_SGXLEPUBKEYHASH0]; break; - case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: + case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_PROCBASED_CTLS3: if (!nested_vmx_allowed(vcpu)) return 1; if (vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index, @@ -2399,7 +2399,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) vmx->msr_ia32_sgxlepubkeyhash [msr_index - MSR_IA32_SGXLEPUBKEYHASH0] = data; break; - case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: + case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_PROCBASED_CTLS3: if (!msr_info->host_initiated) return 1; /* they are read-only */ if (!nested_vmx_allowed(vcpu)) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 6eda283..9b87e64 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1332,6 +1332,7 @@ int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu) MSR_IA32_VMX_PROCBASED_CTLS2, MSR_IA32_VMX_EPT_VPID_CAP, MSR_IA32_VMX_VMFUNC, + MSR_IA32_VMX_PROCBASED_CTLS3, MSR_K7_HWCR, MSR_KVM_POLL_CONTROL, @@ -1363,6 +1364,7 @@ int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu) MSR_IA32_VMX_PROCBASED_CTLS2, MSR_IA32_VMX_EPT_VPID_CAP, MSR_IA32_VMX_VMFUNC, + MSR_IA32_VMX_PROCBASED_CTLS3, MSR_F10H_DECFG, MSR_IA32_UCODE_REV, From patchwork Tue Jun 1 08:47:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290773 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D4B8C47080 for ; Tue, 1 Jun 2021 08:49:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 128EA61375 for ; Tue, 1 Jun 2021 08:49:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234008AbhFAIvA (ORCPT ); Tue, 1 Jun 2021 04:51:00 -0400 Received: from mga07.intel.com ([134.134.136.100]:45201 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233906AbhFAIue (ORCPT ); Tue, 1 Jun 2021 04:50:34 -0400 IronPort-SDR: NYtbMDpcOyvDufvxG7c8oMU9aQzJG2GctxYKaVnzJAjJjeQzmNwj/VtDY56bUX1mzT8R5W0w8z TsLmjVbzFQWQ== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381454" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381454" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:30 -0700 IronPort-SDR: W+lflgdUTzZQ1CrjmgkWekOe5ht+j3gWgQhJLKHZsdhcdgXJkzZwg6LwI2wNzLfCED2smKBz5D d6AyXJ+mURdQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967829" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:27 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, 
joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 11/15] kvm/vmx: Implement vmx_compute_tertiary_exec_control() Date: Tue, 1 Jun 2021 16:47:50 +0800 Message-Id: <1622537274-146420-12-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Like vmx_compute_secondary_exec_control(), compute the vCPU's tertiary exec controls and adjust the nested tertiary control capability MSR according to the guest CPUID setting, before L1 sets up its VMCS. Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++++++++-- arch/x86/kvm/vmx/vmx.h | 1 + 2 files changed, 21 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 44c9f16..5b46d7b 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4523,6 +4523,22 @@ u32 vmx_exec_control(struct vcpu_vmx *vmx) #define vmx_adjust_sec_exec_exiting(vmx, exec_control, lname, uname) \ vmx_adjust_sec_exec_control(vmx, exec_control, lname, uname, uname##_EXITING, true) +static void vmx_compute_tertiary_exec_control(struct vcpu_vmx *vmx) +{ + struct kvm_vcpu *vcpu = &vmx->vcpu; + u64 exec_control = vmcs_config.cpu_based_3rd_exec_ctrl; + + if (nested) { + if (guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + vmx->nested.msrs.tertiary_ctls |= + TERTIARY_EXEC_LOADIWKEY_EXITING; + else + vmx->nested.msrs.tertiary_ctls &= + ~TERTIARY_EXEC_LOADIWKEY_EXITING; + } + vmx->tertiary_exec_control = exec_control; +} + static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) { struct kvm_vcpu *vcpu = &vmx->vcpu; @@ -4622,8 +4638,10 @@ static void init_vmcs(struct vcpu_vmx *vmx) secondary_exec_controls_set(vmx, vmx->secondary_exec_control); } - if (cpu_has_tertiary_exec_ctrls()) - tertiary_exec_controls_set(vmx, vmcs_config.cpu_based_3rd_exec_ctrl); + if (cpu_has_tertiary_exec_ctrls()) { + vmx_compute_tertiary_exec_control(vmx); + tertiary_exec_controls_set(vmx, vmx->tertiary_exec_control); + } if (kvm_vcpu_apicv_active(&vmx->vcpu)) { vmcs_write64(EOI_EXIT_BITMAP0, 0); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index e0ade10..d2eecb8 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -258,6 +258,7 @@ struct vcpu_vmx { u32 msr_ia32_umwait_control; u32 secondary_exec_control; + u64 tertiary_exec_control; /* * loaded_vmcs points to the VMCS currently used in this vcpu.
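As an aside, the intent of the computation can be restated in a small stand-alone sketch: the per-vCPU control value starts from what the platform supports, while the nested capability advertised to L1 gains or loses the LOADIWKEY-exiting bit depending on whether the guest's CPUID reports Key Locker. Plain C, not kernel code; every name below is a placeholder.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_LOADIWKEY_EXITING (1ULL << 0)	/* stand-in control bit */

struct sketch_vcpu {
	bool guest_has_keylocker;	/* does guest CPUID advertise Key Locker? */
	uint64_t platform_tertiary;	/* tertiary controls the host supports */
	uint64_t tertiary_exec;		/* value to program for this vCPU */
	uint64_t nested_tertiary_cap;	/* capability advertised to L1 */
};

static void compute_tertiary(struct sketch_vcpu *v)
{
	/* Start from what the platform supports. */
	v->tertiary_exec = v->platform_tertiary;

	/* Expose LOADIWKEY exiting to L1 only if the guest has Key Locker. */
	if (v->guest_has_keylocker)
		v->nested_tertiary_cap |= SKETCH_LOADIWKEY_EXITING;
	else
		v->nested_tertiary_cap &= ~SKETCH_LOADIWKEY_EXITING;
}

int main(void)
{
	struct sketch_vcpu v = {
		.guest_has_keylocker = true,
		.platform_tertiary = SKETCH_LOADIWKEY_EXITING,
	};

	compute_tertiary(&v);
	printf("exec=%#llx nested_cap=%#llx\n",
	       (unsigned long long)v.tertiary_exec,
	       (unsigned long long)v.nested_tertiary_cap);
	return 0;
}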
For a From patchwork Tue Jun 1 08:47:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290775 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8318AC47080 for ; Tue, 1 Jun 2021 08:49:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 60BB16138C for ; Tue, 1 Jun 2021 08:49:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234037AbhFAIvN (ORCPT ); Tue, 1 Jun 2021 04:51:13 -0400 Received: from mga07.intel.com ([134.134.136.100]:45208 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233819AbhFAIur (ORCPT ); Tue, 1 Jun 2021 04:50:47 -0400 IronPort-SDR: qWiThMF1Ui27T+8VRVn/9wxtMElVWdL9ZaFWJ8HGEPCp38gesvVtnpdxjlusbYp6P9dPsEC5XH Vq/RG9OIpnbQ== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381467" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381467" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:33 -0700 IronPort-SDR: pcY6ZVRYCG3AIXdG+E7mH60EuWfF95mcW3CLBxPGNBbu0wDKDpKok1w38XReVN4oTNQKHzw7Zv sz2c0c8l9FOQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967839" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:30 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 12/15] kvm/vmx/vmcs12: Add Tertiary VM-Exec control field in vmcs12 Date: Tue, 1 Jun 2021 16:47:51 +0800 Message-Id: <1622537274-146420-13-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmcs12.c | 1 + arch/x86/kvm/vmx/vmcs12.h | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c index 034adb6..717e63a 100644 --- a/arch/x86/kvm/vmx/vmcs12.c +++ b/arch/x86/kvm/vmx/vmcs12.c @@ -51,6 +51,7 @@ FIELD64(VMWRITE_BITMAP, vmwrite_bitmap), FIELD64(XSS_EXIT_BITMAP, xss_exit_bitmap), FIELD64(ENCLS_EXITING_BITMAP, encls_exiting_bitmap), + FIELD64(TERTIARY_VM_EXEC_CONTROL, tertiary_vm_exec_control), FIELD64(GUEST_PHYSICAL_ADDRESS, guest_physical_address), FIELD64(VMCS_LINK_POINTER, vmcs_link_pointer), FIELD64(GUEST_IA32_DEBUGCTL, guest_ia32_debugctl), diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h index 1349495..ec901b6 100644 --- a/arch/x86/kvm/vmx/vmcs12.h +++ b/arch/x86/kvm/vmx/vmcs12.h @@ -70,7 +70,8 @@ struct __packed vmcs12 { 
u64 eptp_list_address; u64 pml_address; u64 encls_exiting_bitmap; - u64 padding64[2]; /* room for future expansion */ + u64 tertiary_vm_exec_control; + u64 padding64[1]; /* room for future expansion */ /* * To allow migration of L1 (complete with its L2 guests) between * machines of different natural widths (32 or 64 bit), we cannot have @@ -258,6 +259,7 @@ static inline void vmx_check_vmcs12_offsets(void) CHECK_OFFSET(eptp_list_address, 304); CHECK_OFFSET(pml_address, 312); CHECK_OFFSET(encls_exiting_bitmap, 320); + CHECK_OFFSET(tertiary_vm_exec_control, 328); CHECK_OFFSET(cr0_guest_host_mask, 344); CHECK_OFFSET(cr4_guest_host_mask, 352); CHECK_OFFSET(cr0_read_shadow, 360); From patchwork Tue Jun 1 08:47:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290777 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E790C47092 for ; Tue, 1 Jun 2021 08:49:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 05F976138C for ; Tue, 1 Jun 2021 08:49:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234120AbhFAIvP (ORCPT ); Tue, 1 Jun 2021 04:51:15 -0400 Received: from mga07.intel.com ([134.134.136.100]:45213 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233808AbhFAIur (ORCPT ); Tue, 1 Jun 2021 04:50:47 -0400 IronPort-SDR: JnW+IM80NHSYz4bwzIUuky+mrke9svHpaIT/N6XfeCLb+nqT9+fLPDTDRPTT6aEw2U2ZWECdmA 25YYAnjrVYcg== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381474" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381474" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:36 -0700 IronPort-SDR: 3+wwGZUYS9nCMhfMTiSsaiUlxW+DwLjhmFE/Wi97D/GXrBpq3RyO1SteaZUshAaqI/FXU3XUPI w4oB3bcw5upg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967849" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:33 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 13/15] kvm/vmx/nested: Support Tertiary VM-Exec control in vmcs02 Date: Tue, 1 Jun 2021 16:47:52 +0800 Message-Id: <1622537274-146420-14-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/nested.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index b04184b..f5ec215 100644 
--- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2222,6 +2222,7 @@ static void prepare_vmcs02_early_rare(struct vcpu_vmx *vmx, static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12) { u32 exec_control; + u64 vmcs02_ter_exec_ctrl; u64 guest_efer = nested_vmx_calc_efer(vmx, vmcs12); if (vmx->nested.dirty_vmcs12 || vmx->nested.hv_evmcs) @@ -2344,6 +2345,18 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12) vm_entry_controls_set(vmx, exec_control); /* + * Tertiary EXEC CONTROLS + */ + if (cpu_has_tertiary_exec_ctrls()) { + vmcs02_ter_exec_ctrl = vmx->tertiary_exec_control; + if (nested_cpu_has(vmcs12, + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS)) + vmcs02_ter_exec_ctrl |= vmcs12->tertiary_vm_exec_control; + + tertiary_exec_controls_set(vmx, vmcs02_ter_exec_ctrl); + } + + /* * EXIT CONTROLS * * L2->L1 exit controls are emulated - the hardware exit is to L0 so From patchwork Tue Jun 1 08:47:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290779 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7D4EC47080 for ; Tue, 1 Jun 2021 08:49:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9C17A61396 for ; Tue, 1 Jun 2021 08:49:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234082AbhFAIv0 (ORCPT ); Tue, 1 Jun 2021 04:51:26 -0400 Received: from mga07.intel.com ([134.134.136.100]:45201 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234084AbhFAIvA (ORCPT ); Tue, 1 Jun 2021 04:51:00 -0400 IronPort-SDR: HgYWY3P5/3LQVlgOT2khHS4HFkAMkHuJiEvfJ8nmywHG6GbOF20IFRXB79AzLanD5MUs4IrmyA 7PeIMw/HOD7Q== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381514" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381514" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:39 -0700 IronPort-SDR: JGnnY3k+FKSOY0ZiyMIU5Mx88us40VVwLQXda7mFKL+sHvZUaKE7uUqtmW5TWoe2N8EAzUcRaU CTks3GR+77Ug== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967856" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:36 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 14/15] kvm/vmx/nested: Support CR4.KL in nested Date: Tue, 1 Jun 2021 16:47:53 +0800 Message-Id: <1622537274-146420-15-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: 
X-Mailing-List: kvm@vger.kernel.org Add CR4.KL in nested.msr.cr4_fixed1 when guest CPUID supports KeyLocker. So that it can pass check when preparing vmcs02. Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmx.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 5b46d7b..070ba81 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7440,6 +7440,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu) cr4_fixed1_update(X86_CR4_PKE, ecx, feature_bit(PKU)); cr4_fixed1_update(X86_CR4_UMIP, ecx, feature_bit(UMIP)); cr4_fixed1_update(X86_CR4_LA57, ecx, feature_bit(LA57)); + cr4_fixed1_update(X86_CR4_KEYLOCKER, ecx, feature_bit(KEYLOCKER)); #undef cr4_fixed1_update } From patchwork Tue Jun 1 08:47:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12290781 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7F38C4708F for ; Tue, 1 Jun 2021 08:49:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B16026138C for ; Tue, 1 Jun 2021 08:49:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234035AbhFAIvg (ORCPT ); Tue, 1 Jun 2021 04:51:36 -0400 Received: from mga07.intel.com ([134.134.136.100]:45208 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233958AbhFAIvL (ORCPT ); Tue, 1 Jun 2021 04:51:11 -0400 IronPort-SDR: Bt+Sjy6Izis6MJfuHuKSh2Omk0wg6E57TojLzEB0zr43BMQy0yeVHckTRoqvpsrHJy6EEXU4rr T4FurxyAT8xg== X-IronPort-AV: E=McAfee;i="6200,9189,10001"; a="267381541" X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="267381541" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jun 2021 01:48:42 -0700 IronPort-SDR: KF3INlzmY+dy1eXTMBimHFb9/Jd4n/Vq4XJxMyxlgYoGEZLDyf1IywD3JbnXtL4F/RA0DuKsWI OpiqEcsDj14g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,239,1616482800"; d="scan'208";a="437967879" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by orsmga007.jf.intel.com with ESMTP; 01 Jun 2021 01:48:39 -0700 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, kvm@vger.kernel.org Cc: x86@kernel.org, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com, robert.hu@intel.com, robert.hu@linux.intel.com Subject: [PATCH 15/15] kvm/vmx/nested: Enable nested LOADIWKEY VM-exit Date: Tue, 1 Jun 2021 16:47:54 +0800 Message-Id: <1622537274-146420-16-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> References: <1622537274-146420-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Set the LOADIWKEY VM-exit bit in nested vmx ctrl MSR, and let L1 intercept L2's LOADIWKEY VM-Exit. 
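The intercept decision reduces to two bits of L1's VMCS for L2: the "activate tertiary controls" bit in the primary exec controls and the LOADIWKEY-exiting bit in the tertiary exec controls. A stand-alone sketch of that check follows (plain C with placeholder names and bit positions; the kernel-side helper is the nested_cpu_has3() added in the hunk below).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_ACTIVATE_TERTIARY (1u << 17)	/* stand-in: "activate tertiary controls" */
#define SKETCH_LOADIWKEY_EXITING (1ULL << 0)	/* stand-in: LOADIWKEY-exiting bit */

struct sketch_vmcs12 {
	uint32_t cpu_based_exec_control;	/* L1's primary exec controls for L2 */
	uint64_t tertiary_exec_control;		/* L1's tertiary exec controls for L2 */
};

static bool l1_wants_loadiwkey_exit(const struct sketch_vmcs12 *vmcs12)
{
	return (vmcs12->cpu_based_exec_control & SKETCH_ACTIVATE_TERTIARY) &&
	       (vmcs12->tertiary_exec_control & SKETCH_LOADIWKEY_EXITING);
}

int main(void)
{
	struct sketch_vmcs12 v = {
		.cpu_based_exec_control = SKETCH_ACTIVATE_TERTIARY,
		.tertiary_exec_control = SKETCH_LOADIWKEY_EXITING,
	};

	printf("reflect LOADIWKEY exit to L1: %d\n", l1_wants_loadiwkey_exit(&v));
	return 0;
}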
Add helper nested_cpu_has3(), which returns whether a given feature bit in the Tertiary VM-Execution Controls is set. Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/nested.c | 5 ++++- arch/x86/kvm/vmx/nested.h | 7 +++++++ 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index f5ec215..514df3f 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -5983,6 +5983,9 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu, SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE); case EXIT_REASON_ENCLS: return nested_vmx_exit_handled_encls(vcpu, vmcs12); + case EXIT_REASON_LOADIWKEY: + return nested_cpu_has3(vmcs12, + TERTIARY_EXEC_LOADIWKEY_EXITING); default: return true; } @@ -6499,7 +6502,7 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps) if (msrs->procbased_ctls_high & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, msrs->tertiary_ctls); - msrs->tertiary_ctls &= 0; + msrs->tertiary_ctls &= TERTIARY_EXEC_LOADIWKEY_EXITING; /* * We can emulate "VMCS shadowing," even if the hardware * doesn't support it. diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h index 184418b..f1e43e2 100644 --- a/arch/x86/kvm/vmx/nested.h +++ b/arch/x86/kvm/vmx/nested.h @@ -145,6 +145,13 @@ static inline bool nested_cpu_has2(struct vmcs12 *vmcs12, u32 bit) (vmcs12->secondary_vm_exec_control & bit); } +static inline bool nested_cpu_has3(struct vmcs12 *vmcs12, u32 bit) +{ + return (vmcs12->cpu_based_vm_exec_control & + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) && + (vmcs12->tertiary_vm_exec_control & bit); +} + static inline bool nested_cpu_has_preemption_timer(struct vmcs12 *vmcs12) { return vmcs12->pin_based_vm_exec_control &