From patchwork Mon Jan 25 09:06:09 2021
X-Patchwork-Submitter: Robert Hoo
X-Patchwork-Id: 12045179
From: Robert Hoo
To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo
Subject: [RFC PATCH 01/12] x86/keylocker: Move LOADIWKEY opcode definition from keylocker.c to keylocker.h
Date: Mon, 25 Jan 2021 17:06:09 +0800
Message-Id: <1611565580-47718-2-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com>
References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

Move the LOADIWKEY opcode definition from keylocker.c into keylocker.h so that the KVM code added later in this series can easily reference it.
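For context, a minimal sketch of how such a raw-opcode macro is typically consumed once it is visible from the header; the wrapper name and the zero EAX key-source value are illustrative only and not part of this patch (later in the series the KVM code issues the same asm directly):

#include <asm/keylocker.h>

/*
 * Illustrative only: issue LOADIWKEY with the IWKey already staged in
 * %xmm0 (integrity key) and %xmm1/%xmm2 (encryption key).
 * keysource == 0 means no hardware randomization and backup permitted.
 * The opcode bytes come from the LOADIWKEY macro this patch exports
 * via keylocker.h.
 */
static inline void loadiwkey(unsigned int keysource)
{
	asm volatile(LOADIWKEY : : "a" (keysource));
}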
Signed-off-by: Robert Hoo --- arch/x86/include/asm/keylocker.h | 2 ++ arch/x86/kernel/keylocker.c | 1 - 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h index a6774ce..8c0792f 100644 --- a/arch/x86/include/asm/keylocker.h +++ b/arch/x86/include/asm/keylocker.h @@ -8,6 +8,8 @@ #include #include +#define LOADIWKEY ".byte 0xf3,0x0f,0x38,0xdc,0xd1" + #define KEYLOCKER_CPUID 0x019 #define KEYLOCKER_CPUID_EAX_SUPERVISOR BIT(0) #define KEYLOCKER_CPUID_EBX_AESKLE BIT(0) diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c index e77e4c3..06a30f0 100644 --- a/arch/x86/kernel/keylocker.c +++ b/arch/x86/kernel/keylocker.c @@ -40,7 +40,6 @@ bool check_keylocker_readiness(void) } /* Load Internal (Wrapping) Key */ -#define LOADIWKEY ".byte 0xf3,0x0f,0x38,0xdc,0xd1" #define LOADIWKEY_NUM_OPERANDS 3 #define LOADIWKEY_HWRAND_RETRY 10 From patchwork Mon Jan 25 09:06:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045181 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23CE8C433DB for ; Tue, 26 Jan 2021 03:16:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E328D22D58 for ; Tue, 26 Jan 2021 03:16:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726506AbhAYJZn (ORCPT ); Mon, 25 Jan 2021 04:25:43 -0500 Received: from mga14.intel.com ([192.55.52.115]:22889 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726503AbhAYJYv (ORCPT ); Mon, 25 Jan 2021 04:24:51 -0500 IronPort-SDR: LUjGTxxN9jvUlvFEMi2cdE0Ho5itiYDw/fKtu5a+EF+hOUSw47jMGYi2Xn2oBzT6dGlxFUAKjg yXn1oXzwtJDg== X-IronPort-AV: E=McAfee;i="6000,8403,9874"; a="178915776" X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="178915776" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 01:07:04 -0800 IronPort-SDR: ywGu0M7hzIWe5wSix6f8QBri7jH40rme7PA4ItRNPT69YDov3gf3B55I5KAVE4alYFWAulMXO9 QjGWfGVZNhaw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="402223823" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by fmsmga004.fm.intel.com with ESMTP; 25 Jan 2021 01:07:02 -0800 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 02/12] x86/cpufeature: Add CPUID.19H:{EBX,ECX} cpuid leaves Date: Mon, 25 Jan 2021 17:06:10 +0800 Message-Id: <1611565580-47718-3-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Though 
KeyLocker is generally enumerated by CPUID.(07H,0):ECX.KL[bit23], CPUID.19H:{EBX,ECX} enumerate more details of KeyLocker supporting status. CPUID.19H:EBX bit0 enumerates if OS (CR4.KeyLocker) and BIOS have enabled KeyLocker. bit2 enumerates if wide Key Locker instructions are supported. bit4 enumerates if IWKey backup is supported. CPUID.19H:ECX bit0 enumerates if the NoBackup parameter to LOADIWKEY is supported. bit1 enumerates if IWKey randomization is supported. Define these 2 cpuid_leafs so that get_cpu_cap() will have these capabilities included, which will be the knowledge source of KVM on host KeyLocker capabilities. Most of above features don't have the necessity to appear in /proc/cpuinfo, except "iwkey_rand", which we think might be interesting for user to easily know if his system is using randomized IWKey. Signed-off-by: Robert Hoo --- arch/x86/include/asm/cpufeature.h | 6 ++++-- arch/x86/include/asm/cpufeatures.h | 11 ++++++++++- arch/x86/include/asm/disabled-features.h | 2 +- arch/x86/include/asm/required-features.h | 2 +- arch/x86/kernel/cpu/common.c | 7 +++++++ 5 files changed, 23 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index 59bf91c..f9fea5f 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h @@ -30,6 +30,8 @@ enum cpuid_leafs CPUID_7_ECX, CPUID_8000_0007_EBX, CPUID_7_EDX, + CPUID_19_EBX, + CPUID_19_ECX, }; #ifdef CONFIG_X86_FEATURE_NAMES @@ -89,7 +91,7 @@ enum cpuid_leafs CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \ REQUIRED_MASK_CHECK || \ - BUILD_BUG_ON_ZERO(NCAPINTS != 19)) + BUILD_BUG_ON_ZERO(NCAPINTS != 21)) #define DISABLED_MASK_BIT_SET(feature_bit) \ ( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \ @@ -112,7 +114,7 @@ enum cpuid_leafs CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \ DISABLED_MASK_CHECK || \ - BUILD_BUG_ON_ZERO(NCAPINTS != 19)) + BUILD_BUG_ON_ZERO(NCAPINTS != 21)) #define cpu_has(c, bit) \ (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 
1 : \ diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 8f2f050..d4a883a 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -13,7 +13,7 @@ /* * Defines x86 CPU feature bits */ -#define NCAPINTS 19 /* N 32-bit words worth of info */ +#define NCAPINTS 21 /* N 32-bit words worth of info */ #define NBUGINTS 1 /* N 32-bit bug flags */ /* @@ -382,6 +382,15 @@ #define X86_FEATURE_CORE_CAPABILITIES (18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */ #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */ +/* Intel-defined KeyLocker feature CPUID level 0x00000019 (EBX), word 20*/ +#define X86_FEATURE_KL_INS_ENABLED (19*32 + 0) /* "" Key Locker instructions */ +#define X86_FEATURE_KL_WIDE (19*32 + 2) /* "" Wide Key Locker instructions */ +#define X86_FEATURE_IWKEY_BACKUP (19*32 + 4) /* "" IWKey backup */ + +/* Intel-defined KeyLocker feature CPUID level 0x00000019 (ECX), word 21*/ +#define X86_FEATURE_IWKEY_NOBACKUP (20*32 + 0) /* "" NoBackup parameter to LOADIWKEY */ +#define X86_FEATURE_IWKEY_RAND (20*32 + 1) /* IWKey Randomization */ + /* * BUG word(s) */ diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index 0ac9414..904baf8 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -91,6 +91,6 @@ DISABLE_ENQCMD|DISABLE_KEYLOCKER) #define DISABLED_MASK17 0 #define DISABLED_MASK18 0 -#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19) +#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21) #endif /* _ASM_X86_DISABLED_FEATURES_H */ diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h index 3ff0d48..a165a16 100644 --- a/arch/x86/include/asm/required-features.h +++ b/arch/x86/include/asm/required-features.h @@ -101,6 +101,6 @@ #define REQUIRED_MASK16 0 #define REQUIRED_MASK17 0 #define REQUIRED_MASK18 0 -#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19) +#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21) #endif /* _ASM_X86_REQUIRED_FEATURES_H */ diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 48881d8..ea46956 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -995,6 +995,13 @@ void get_cpu_cap(struct cpuinfo_x86 *c) c->x86_capability[CPUID_D_1_EAX] = eax; } + /* Additional Intel-defined KeyLocker flags: level 0x00000019 */ + if (c->cpuid_level >= 0x00000019) { + cpuid(0x00000019, &eax, &ebx, &ecx, &edx); + c->x86_capability[CPUID_19_EBX] = ebx; + c->x86_capability[CPUID_19_ECX] = ecx; + } + /* AMD-defined flags: level 0x80000001 */ eax = cpuid_eax(0x80000000); c->extended_cpuid_level = eax; From patchwork Mon Jan 25 09:06:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045315 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BD5CCC433E9 for ; Tue, 26 Jan 2021 05:15:22 +0000 (UTC) Received: from 
vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A33252053B for ; Tue, 26 Jan 2021 05:15:22 +0000 (UTC)
From: Robert Hoo
To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo
Subject: [RFC PATCH 03/12] kvm/vmx: Introduce the new tertiary processor-based VM-execution controls
Date: Mon, 25 Jan 2021 17:06:11 +0800
Message-Id: <1611565580-47718-4-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com>
References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

The tertiary processor-based VM-execution controls are activated by bit 17 of the primary processor-based VM-execution controls, and bit 0 of the tertiary controls in turn enables the LOADIWKEY VM-exit. The companion VMX capability MSR is IA32_VMX_PROCBASED_CTLS3 (0x492). Note that, unlike the earlier 32-bit control fields, both the tertiary control field and its capability MSR are 64 bits wide.
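As a rough sketch of the 64-bit allowed-1 semantics (a tertiary control bit may be set only if the corresponding MSR bit reads as 1; there is no allowed-0 half as with the 32-bit controls), the helper below is illustrative only; the adjust_vmx_controls_64() added later in this series performs the equivalent clamping:

static u64 tertiary_controls_supported(u64 requested)
{
	u64 allowed1;

	/* IA32_VMX_PROCBASED_CTLS3 is a plain 64-bit allowed-1 mask. */
	rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, allowed1);

	/* Keep only the requested bits the CPU actually supports. */
	return requested & allowed1;
}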
Signed-off-by: Robert Hoo signed-off-by: Vitaly Kuznetsov --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/include/asm/vmx.h | 3 +++ arch/x86/include/asm/vmxfeatures.h | 3 ++- arch/x86/kernel/cpu/feat_ctl.c | 9 +++++++++ arch/x86/kvm/vmx/capabilities.h | 7 +++++++ arch/x86/kvm/vmx/vmcs.h | 1 + arch/x86/kvm/vmx/vmx.c | 23 ++++++++++++++++++++--- arch/x86/kvm/vmx/vmx.h | 8 ++++++++ 8 files changed, 51 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index c0b9157..3bca658 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -902,6 +902,7 @@ #define MSR_IA32_VMX_TRUE_EXIT_CTLS 0x0000048f #define MSR_IA32_VMX_TRUE_ENTRY_CTLS 0x00000490 #define MSR_IA32_VMX_VMFUNC 0x00000491 +#define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492 /* VMX_BASIC bits and bitmasks */ #define VMX_BASIC_VMCS_SIZE_SHIFT 32 diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index f8ba528..1517692 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -31,6 +31,7 @@ #define CPU_BASED_RDTSC_EXITING VMCS_CONTROL_BIT(RDTSC_EXITING) #define CPU_BASED_CR3_LOAD_EXITING VMCS_CONTROL_BIT(CR3_LOAD_EXITING) #define CPU_BASED_CR3_STORE_EXITING VMCS_CONTROL_BIT(CR3_STORE_EXITING) +#define CPU_BASED_ACTIVATE_TERTIARY_CONTROLS VMCS_CONTROL_BIT(TER_CONTROLS) #define CPU_BASED_CR8_LOAD_EXITING VMCS_CONTROL_BIT(CR8_LOAD_EXITING) #define CPU_BASED_CR8_STORE_EXITING VMCS_CONTROL_BIT(CR8_STORE_EXITING) #define CPU_BASED_TPR_SHADOW VMCS_CONTROL_BIT(VIRTUAL_TPR) @@ -219,6 +220,8 @@ enum vmcs_field { ENCLS_EXITING_BITMAP_HIGH = 0x0000202F, TSC_MULTIPLIER = 0x00002032, TSC_MULTIPLIER_HIGH = 0x00002033, + TERTIARY_VM_EXEC_CONTROL = 0x00002034, + TERTIARY_VM_EXEC_CONTROL_HIGH = 0x00002035, GUEST_PHYSICAL_ADDRESS = 0x00002400, GUEST_PHYSICAL_ADDRESS_HIGH = 0x00002401, VMCS_LINK_POINTER = 0x00002800, diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h index 9915990..75a15c2 100644 --- a/arch/x86/include/asm/vmxfeatures.h +++ b/arch/x86/include/asm/vmxfeatures.h @@ -5,7 +5,7 @@ /* * Defines VMX CPU feature bits */ -#define NVMXINTS 3 /* N 32-bit words worth of info */ +#define NVMXINTS 5 /* N 32-bit words worth of info */ /* * Note: If the comment begins with a quoted string, that string is used @@ -43,6 +43,7 @@ #define VMX_FEATURE_RDTSC_EXITING ( 1*32+ 12) /* "" VM-Exit on RDTSC */ #define VMX_FEATURE_CR3_LOAD_EXITING ( 1*32+ 15) /* "" VM-Exit on writes to CR3 */ #define VMX_FEATURE_CR3_STORE_EXITING ( 1*32+ 16) /* "" VM-Exit on reads from CR3 */ +#define VMX_FEATURE_TER_CONTROLS (1*32 + 17) /* "" Enable Tertiary VM-Execution Controls */ #define VMX_FEATURE_CR8_LOAD_EXITING ( 1*32+ 19) /* "" VM-Exit on writes to CR8 */ #define VMX_FEATURE_CR8_STORE_EXITING ( 1*32+ 20) /* "" VM-Exit on reads from CR8 */ #define VMX_FEATURE_VIRTUAL_TPR ( 1*32+ 21) /* "vtpr" TPR virtualization, a.k.a. 
TPR shadow */ diff --git a/arch/x86/kernel/cpu/feat_ctl.c b/arch/x86/kernel/cpu/feat_ctl.c index 29a3bed..e7bf637 100644 --- a/arch/x86/kernel/cpu/feat_ctl.c +++ b/arch/x86/kernel/cpu/feat_ctl.c @@ -15,6 +15,8 @@ enum vmx_feature_leafs { MISC_FEATURES = 0, PRIMARY_CTLS, SECONDARY_CTLS, + TERTIARY_CTLS_LOW, + TERTIARY_CTLS_HIGH, NR_VMX_FEATURE_WORDS, }; @@ -42,6 +44,13 @@ static void init_vmx_capabilities(struct cpuinfo_x86 *c) rdmsr_safe(MSR_IA32_VMX_PROCBASED_CTLS2, &ign, &supported); c->vmx_capability[SECONDARY_CTLS] = supported; + /* + * For tertiary execution controls MSR, it's actually a 64bit allowed-1. + */ + rdmsr_safe(MSR_IA32_VMX_PROCBASED_CTLS3, &ign, &supported); + c->vmx_capability[TERTIARY_CTLS_LOW] = ign; + c->vmx_capability[TERTIARY_CTLS_HIGH] = supported; + rdmsr(MSR_IA32_VMX_PINBASED_CTLS, ign, supported); rdmsr_safe(MSR_IA32_VMX_VMFUNC, &ign, &funcs); diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index 3a18614..d8bbde4 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -57,6 +57,7 @@ struct vmcs_config { u32 pin_based_exec_ctrl; u32 cpu_based_exec_ctrl; u32 cpu_based_2nd_exec_ctrl; + u64 cpu_based_3rd_exec_ctrl; u32 vmexit_ctrl; u32 vmentry_ctrl; struct nested_vmx_msrs nested; @@ -130,6 +131,12 @@ static inline bool cpu_has_secondary_exec_ctrls(void) CPU_BASED_ACTIVATE_SECONDARY_CONTROLS; } +static inline bool cpu_has_tertiary_exec_ctrls(void) +{ + return vmcs_config.cpu_based_exec_ctrl & + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; +} + static inline bool cpu_has_vmx_virtualize_apic_accesses(void) { return vmcs_config.cpu_based_2nd_exec_ctrl & diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h index 1472c6c..343c329 100644 --- a/arch/x86/kvm/vmx/vmcs.h +++ b/arch/x86/kvm/vmx/vmcs.h @@ -48,6 +48,7 @@ struct vmcs_controls_shadow { u32 pin; u32 exec; u32 secondary_exec; + u64 tertiary_exec; }; /* diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 47b8357..12a926e 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2376,6 +2376,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, u32 _pin_based_exec_control = 0; u32 _cpu_based_exec_control = 0; u32 _cpu_based_2nd_exec_control = 0; + u64 _cpu_based_3rd_exec_control = 0; u32 _vmexit_control = 0; u32 _vmentry_control = 0; @@ -2397,7 +2398,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, opt = CPU_BASED_TPR_SHADOW | CPU_BASED_USE_MSR_BITMAPS | - CPU_BASED_ACTIVATE_SECONDARY_CONTROLS; + CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PROCBASED_CTLS, &_cpu_based_exec_control) < 0) return -EIO; @@ -2557,6 +2559,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control; vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control; vmcs_conf->cpu_based_2nd_exec_ctrl = _cpu_based_2nd_exec_control; + vmcs_conf->cpu_based_3rd_exec_ctrl = _cpu_based_3rd_exec_control; vmcs_conf->vmexit_ctrl = _vmexit_control; vmcs_conf->vmentry_ctrl = _vmentry_control; @@ -4200,6 +4203,12 @@ u32 vmx_exec_control(struct vcpu_vmx *vmx) #define vmx_adjust_sec_exec_exiting(vmx, exec_control, lname, uname) \ vmx_adjust_sec_exec_control(vmx, exec_control, lname, uname, uname##_EXITING, true) +static u64 vmx_tertiary_exec_control(struct vcpu_vmx *vmx) +{ + /* Though currently, no special adjustment. 
There might be in the future*/ + return vmcs_config.cpu_based_3rd_exec_ctrl; +} + static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) { struct kvm_vcpu *vcpu = &vmx->vcpu; @@ -4310,6 +4319,9 @@ static void init_vmcs(struct vcpu_vmx *vmx) secondary_exec_controls_set(vmx, vmx->secondary_exec_control); } + if (cpu_has_tertiary_exec_ctrls()) + tertiary_exec_controls_set(vmx, vmx_tertiary_exec_control(vmx)); + if (kvm_vcpu_apicv_active(&vmx->vcpu)) { vmcs_write64(EOI_EXIT_BITMAP0, 0); vmcs_write64(EOI_EXIT_BITMAP1, 0); @@ -5778,6 +5790,7 @@ void dump_vmcs(void) { u32 vmentry_ctl, vmexit_ctl; u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control; + u64 tertiary_exec_control = 0; unsigned long cr4; u64 efer; @@ -5796,6 +5809,9 @@ void dump_vmcs(void) if (cpu_has_secondary_exec_ctrls()) secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL); + if (cpu_has_tertiary_exec_ctrls()) + tertiary_exec_control = vmcs_read64(TERTIARY_VM_EXEC_CONTROL); + pr_err("*** Guest State ***\n"); pr_err("CR0: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n", vmcs_readl(GUEST_CR0), vmcs_readl(CR0_READ_SHADOW), @@ -5878,8 +5894,9 @@ void dump_vmcs(void) vmcs_read64(HOST_IA32_PERF_GLOBAL_CTRL)); pr_err("*** Control State ***\n"); - pr_err("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n", - pin_based_exec_ctrl, cpu_based_exec_ctrl, secondary_exec_control); + pr_err("PinBased=0x%08x CPUBased=0x%08x SecondaryExec=0x%08x TertiaryExec=0x%016llx\n", + pin_based_exec_ctrl, cpu_based_exec_ctrl, secondary_exec_control, + tertiary_exec_control); pr_err("EntryControls=%08x ExitControls=%08x\n", vmentry_ctl, vmexit_ctl); pr_err("ExceptionBitmap=%08x PFECmask=%08x PFECmatch=%08x\n", vmcs_read32(EXCEPTION_BITMAP), diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index f6f66e5..94f1c27 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -373,6 +373,14 @@ static inline u8 vmx_get_rvi(void) BUILD_CONTROLS_SHADOW(exec, CPU_BASED_VM_EXEC_CONTROL) BUILD_CONTROLS_SHADOW(secondary_exec, SECONDARY_VM_EXEC_CONTROL) +static inline void tertiary_exec_controls_set(struct vcpu_vmx *vmx, u64 val) +{ + if (vmx->loaded_vmcs->controls_shadow.tertiary_exec != val) { + vmcs_write64(TERTIARY_VM_EXEC_CONTROL, val); + vmx->loaded_vmcs->controls_shadow.tertiary_exec = val; + } +} + static inline void vmx_register_cache_reset(struct kvm_vcpu *vcpu) { vcpu->arch.regs_avail = ~((1 << VCPU_REGS_RIP) | (1 << VCPU_REGS_RSP) From patchwork Mon Jan 25 09:06:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DFE18C433DB for ; Tue, 26 Jan 2021 05:12:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AE86620732 for ; Tue, 26 Jan 2021 05:12:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729017AbhAZFMa (ORCPT ); Tue, 26 Jan 2021 00:12:30 -0500 Received: from mga14.intel.com ([192.55.52.115]:22891 
"EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725935AbhAYJPX (ORCPT ); Mon, 25 Jan 2021 04:15:23 -0500 IronPort-SDR: UyNWly9re0az8Ul8RXe62eCmlYfhF547aehk9sSh7c384t/HHA9OvwHD3Txn08IqxGIoYB3XJb /s7NLZp1VDVg== X-IronPort-AV: E=McAfee;i="6000,8403,9874"; a="178915785" X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="178915785" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 01:07:09 -0800 IronPort-SDR: vYnEoaErLNtYQ4Yk5C0Ok6pLMsNBzs2vZ5iD/LkQiJaeO3AXDiiTnH5V5vrFsEkX3vedX54oxu ZQ6f0deVQFNg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="402223856" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by fmsmga004.fm.intel.com with ESMTP; 25 Jan 2021 01:07:07 -0800 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 04/12] kvm/vmx: enable LOADIWKEY vm-exit support in tertiary processor-based VM-execution controls Date: Mon, 25 Jan 2021 17:06:12 +0800 Message-Id: <1611565580-47718-5-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org But don't implement 'loadiwkey' vm-exit handler in this patch, it will be done in next one. Signed-off-by: Robert Hoo --- arch/x86/include/asm/vmx.h | 6 ++++++ arch/x86/include/asm/vmxfeatures.h | 3 +++ arch/x86/include/uapi/asm/vmx.h | 5 ++++- arch/x86/kvm/vmx/vmx.c | 27 +++++++++++++++++++++++++++ 4 files changed, 40 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 1517692..2fe69e9 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -75,6 +75,12 @@ #define SECONDARY_EXEC_TSC_SCALING VMCS_CONTROL_BIT(TSC_SCALING) #define SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE VMCS_CONTROL_BIT(USR_WAIT_PAUSE) +/* + * Definitions of Tertiary Processor-Based VM-Execution Controls. 
+ */ +#define TERTIARY_EXEC_LOADIWKEY_EXITING VMCS_CONTROL_BIT(LOADIWKEY_EXITING) + + #define PIN_BASED_EXT_INTR_MASK VMCS_CONTROL_BIT(INTR_EXITING) #define PIN_BASED_NMI_EXITING VMCS_CONTROL_BIT(NMI_EXITING) #define PIN_BASED_VIRTUAL_NMIS VMCS_CONTROL_BIT(VIRTUAL_NMIS) diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h index 75a15c2..1d11575 100644 --- a/arch/x86/include/asm/vmxfeatures.h +++ b/arch/x86/include/asm/vmxfeatures.h @@ -85,4 +85,7 @@ #define VMX_FEATURE_USR_WAIT_PAUSE ( 2*32+ 26) /* Enable TPAUSE, UMONITOR, UMWAIT in guest */ #define VMX_FEATURE_ENCLV_EXITING ( 2*32+ 28) /* "" VM-Exit on ENCLV (leaf dependent) */ +/* Tertiary Processor-Based VM-Execution Controls, word 3 */ +#define VMX_FEATURE_LOADIWKEY_EXITING (3*32 + 0) /* "" VM-Exit on LOADIWKey */ + #endif /* _ASM_X86_VMXFEATURES_H */ diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h index b8ff9e8..9c04550 100644 --- a/arch/x86/include/uapi/asm/vmx.h +++ b/arch/x86/include/uapi/asm/vmx.h @@ -88,6 +88,7 @@ #define EXIT_REASON_XRSTORS 64 #define EXIT_REASON_UMWAIT 67 #define EXIT_REASON_TPAUSE 68 +#define EXIT_REASON_LOADIWKEY 69 #define VMX_EXIT_REASONS \ { EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \ @@ -148,7 +149,9 @@ { EXIT_REASON_XSAVES, "XSAVES" }, \ { EXIT_REASON_XRSTORS, "XRSTORS" }, \ { EXIT_REASON_UMWAIT, "UMWAIT" }, \ - { EXIT_REASON_TPAUSE, "TPAUSE" } + { EXIT_REASON_TPAUSE, "TPAUSE" }, \ + { EXIT_REASON_LOADIWKEY, "LOADIWKEY" } + #define VMX_EXIT_REASON_FLAGS \ { VMX_EXIT_REASONS_FAILED_VMENTRY, "FAILED_VMENTRY" } diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 12a926e..d01bbb4 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2368,6 +2368,23 @@ static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, return 0; } +static __init int adjust_vmx_controls_64(u64 ctl_min, u64 ctl_opt, + u32 msr, u64 *result) +{ + u64 vmx_msr; + u64 ctl = ctl_min | ctl_opt; + + rdmsrl(msr, vmx_msr); + ctl &= vmx_msr; /* bit == 1 means it can be set */ + + /* Ensure minimum (required) set of control bits are supported. 
*/ if (ctl_min & ~ctl) + return -EIO; + + *result = ctl; + return 0; +} + static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, struct vmx_capability *vmx_cap) { @@ -2472,6 +2489,16 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, "1-setting enable VPID VM-execution control\n"); } + if (_cpu_based_exec_control & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) { + u64 opt3 = TERTIARY_EXEC_LOADIWKEY_EXITING; + u64 min3 = 0; + + if (adjust_vmx_controls_64(min3, opt3, + MSR_IA32_VMX_PROCBASED_CTLS3, + &_cpu_based_3rd_exec_control)) + return -EIO; + } + min = VM_EXIT_SAVE_DEBUG_CONTROLS | VM_EXIT_ACK_INTR_ON_EXIT; #ifdef CONFIG_X86_64 min |= VM_EXIT_HOST_ADDR_SPACE_SIZE;
From patchwork Mon Jan 25 09:06:13 2021
X-Patchwork-Submitter: Robert Hoo
X-Patchwork-Id: 12045295
From: Robert Hoo
To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo
Subject: [RFC PATCH 05/12] kvm/vmx: Add KVM support on KeyLocker operations
Date: Mon, 25 Jan 2021 17:06:13 +0800
Message-Id: <1611565580-47718-6-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com>
References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

Define the handle_loadiwkey() VM-Exit handler, which fetches the IWKey the guest is loading and performs the load on its behalf. Note that LOADIWKEY requires CR4.KeyLocker to be set; a guest-side sketch of the instruction's register convention is shown below.
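To make that register convention concrete (integrity key in %xmm0, the 256-bit encryption key in the two explicit XMM operands, EAX selecting NoBackup and key-source randomization), here is a purely illustrative guest-side wrapper; the struct and function names are invented for this sketch and it assumes CR4.KL is already set:

struct iwkey_regs {
	u8 integrity_key[16];		/* staged into %xmm0 */
	u8 encryption_key[2][16];	/* staged into %xmm1 and %xmm2 */
};

/* Illustrative only; EAX bit 0 = NoBackup, bit 1 = hardware-randomized key source. */
static void guest_load_iwkey(const struct iwkey_regs *key, unsigned int eax_flags)
{
	asm volatile("movdqu %0, %%xmm0" : : "m" (key->integrity_key));
	asm volatile("movdqu %0, %%xmm1" : : "m" (key->encryption_key[0]));
	asm volatile("movdqu %0, %%xmm2" : : "m" (key->encryption_key[1]));
	asm volatile(LOADIWKEY : : "a" (eax_flags));
}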
Trap guest write on MSRs of IA32_COPY_LOCAL_TO_PLATFORM and IA32_COPY_PLATFORM_TO_LOCAL_TO_PLATFORM, emulate IWKey save and restore operations. Trap guest read on MSRs of IA32_COPY_STATUS and IA32_IWKEYBACKUP_STATUS, return their shadow values stored in kvm_vcpu_arch and kvm_arch. Also, guest CPUID.0x19:EBX[0] is dynamic, its status changes as CR4.KL changes. On each VM-Entry, we need to resotre vCPU's IWKey, stored in kvm_vcpu_arch. Signed-off-by: Robert Hoo --- arch/x86/include/asm/kvm_host.h | 24 ++++- arch/x86/kvm/cpuid.c | 11 +++ arch/x86/kvm/cpuid.h | 2 + arch/x86/kvm/vmx/vmx.c | 189 ++++++++++++++++++++++++++++++++++++++++ arch/x86/kvm/x86.c | 4 +- arch/x86/kvm/x86.h | 2 + 6 files changed, 229 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 7e5f33a..dc09142 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -100,7 +100,7 @@ | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \ | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \ | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \ - | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP)) + | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP | X86_CR4_KEYLOCKER)) #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR) @@ -520,6 +520,19 @@ struct kvm_vcpu_hv { cpumask_t tlb_flush; }; +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(CONFIG_CC_HAS_INT128) +typedef unsigned __int128 u128; +#else +typedef struct { + u64 reg64[2]; +} u128; +#endif + +struct iwkey { + u128 encryption_key[2]; /* 256bit encryption key */ + u128 integrity_key; /* 128bit integration key */ +}; + struct kvm_vcpu_arch { /* * rip and regs accesses must go through @@ -805,6 +818,11 @@ struct kvm_vcpu_arch { */ bool enforce; } pv_cpuid; + + /* Intel KeyLocker */ + bool iwkey_loaded; + struct iwkey iwkey; + u32 msr_ia32_copy_status; }; struct kvm_lpage_info { @@ -931,6 +949,10 @@ struct kvm_arch { bool apic_access_page_done; unsigned long apicv_inhibit_reasons; + bool iwkey_backup_valid; + u32 msr_ia32_iwkey_backup_status; + struct iwkey iwkey_backup; + gpa_t wall_clock; bool mwait_in_guest; diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 83637a2..2fbf4af 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -133,6 +133,12 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu) cpuid_entry_has(best, X86_FEATURE_XSAVEC))) best->ebx = xstate_required_size(vcpu->arch.xcr0, true); + /* update CPUID.0x19.EBX[0], depends on CR4.KL */ + best = kvm_find_cpuid_entry(vcpu, 0x19, 0); + if (best) + cpuid_entry_change(best, X86_FEATURE_KL_INS_ENABLED, + kvm_read_cr4_bits(vcpu, X86_CR4_KEYLOCKER)); + best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0); if (kvm_hlt_in_guest(vcpu->kvm) && best && (best->eax & (1 << KVM_FEATURE_PV_UNHALT))) @@ -407,6 +413,11 @@ void kvm_set_cpu_caps(void) if (cpuid_ecx(7) & F(LA57)) kvm_cpu_cap_set(X86_FEATURE_LA57); + /* At present, host and guest can only exclusively use KeyLocker */ + if (!boot_cpu_has(X86_FEATURE_KEYLOCKER) && (cpuid_ecx(0x7) & + feature_bit(KEYLOCKER))) + kvm_cpu_cap_set(X86_FEATURE_KEYLOCKER); + /* * PKU not yet implemented for shadow paging and requires OSPKE * to be set on the host. 
Clear it if that is not the case diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index f7a6e8f..639c647 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -63,6 +63,8 @@ struct cpuid_reg { [CPUID_8000_0007_EBX] = {0x80000007, 0, CPUID_EBX}, [CPUID_7_EDX] = { 7, 0, CPUID_EDX}, [CPUID_7_1_EAX] = { 7, 1, CPUID_EAX}, + [CPUID_19_EBX] = { 0x19, 0, CPUID_EBX}, + [CPUID_19_ECX] = { 0x19, 0, CPUID_ECX}, }; /* diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index d01bbb4..6be6d87 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -47,6 +47,7 @@ #include #include #include +#include #include "capabilities.h" #include "cpuid.h" @@ -1192,6 +1193,106 @@ void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel, } } +static int get_xmm(int index, u128 *mem_ptr) +{ + int ret = 0; + + asm ("cli"); + switch (index) { + case 0: + asm ("movdqu %%xmm0, %0" : : "m"(*mem_ptr)); + break; + case 1: + asm ("movdqu %%xmm1, %0" : : "m"(*mem_ptr)); + break; + case 2: + asm ("movdqu %%xmm2, %0" : : "m"(*mem_ptr)); + break; + case 3: + asm ("movdqu %%xmm3, %0" : : "m"(*mem_ptr)); + break; + case 4: + asm ("movdqu %%xmm4, %0" : : "m"(*mem_ptr)); + break; + case 5: + asm ("movdqu %%xmm5, %0" : : "m"(*mem_ptr)); + break; + case 6: + asm ("movdqu %%xmm6, %0" : : "m"(*mem_ptr)); + break; + case 7: + asm ("movdqu %%xmm7, %0" : : "m"(*mem_ptr)); + break; +#ifdef CONFIG_X86_64 + case 8: + asm ("movdqu %%xmm8, %0" : : "m"(*mem_ptr)); + break; + case 9: + asm ("movdqu %%xmm9, %0" : : "m"(*mem_ptr)); + break; + case 10: + asm ("movdqu %%xmm10, %0" : : "m"(*mem_ptr)); + break; + case 11: + asm ("movdqu %%xmm11, %0" : : "m"(*mem_ptr)); + break; + case 12: + asm ("movdqu %%xmm12, %0" : : "m"(*mem_ptr)); + break; + case 13: + asm ("movdqu %%xmm13, %0" : : "m"(*mem_ptr)); + break; + case 14: + asm ("movdqu %%xmm14, %0" : : "m"(*mem_ptr)); + break; + case 15: + asm ("movdqu %%xmm15, %0" : : "m"(*mem_ptr)); + break; +#endif + default: + pr_err_once("xmm index exceeds"); + ret = -1; + break; + } + asm ("sti"); + + return ret; +} + +static void vmx_load_guest_iwkey(struct kvm_vcpu *vcpu) +{ + u128 xmm[3] = {0}; + + if (vcpu->arch.iwkey_loaded) { + bool clear_cr4 = false; + /* Save origin %xmm */ + get_xmm(0, &xmm[0]); + get_xmm(1, &xmm[1]); + get_xmm(2, &xmm[2]); + + asm ("movdqu %0, %%xmm0;" + "movdqu %1, %%xmm1;" + "movdqu %2, %%xmm2;" + : : "m"(vcpu->arch.iwkey.integrity_key), + "m"(vcpu->arch.iwkey.encryption_key[0]), + "m"(vcpu->arch.iwkey.encryption_key[1])); + if (!(cr4_read_shadow() & X86_CR4_KEYLOCKER)) { + cr4_set_bits(X86_CR4_KEYLOCKER); + clear_cr4 = true; + } + asm volatile(LOADIWKEY : : "a" (0x0)); + if (clear_cr4) + cr4_clear_bits(X86_CR4_KEYLOCKER); + /* restore %xmm */ + asm ("movdqu %0, %%xmm0;" + "movdqu %1, %%xmm1;" + "movdqu %2, %%xmm2;" + : : "m"(xmm[0]), + "m"(xmm[1]), + "m"(xmm[2])); + } +} + void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); @@ -1260,6 +1361,9 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) #endif vmx_set_host_fs_gs(host_state, fs_sel, gs_sel, fs_base, gs_base); + + vmx_load_guest_iwkey(vcpu); + vmx->guest_state_loaded = true; } @@ -1925,6 +2029,19 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) return 1; goto find_uret_msr; + case MSR_IA32_COPY_STATUS: + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + + msr_info->data = vcpu->arch.msr_ia32_copy_status; + break; + + case 
MSR_IA32_IWKEYBACKUP_STATUS: + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + + msr_info->data = vcpu->kvm->arch.msr_ia32_iwkey_backup_status; + break; default: find_uret_msr: msr = vmx_find_uret_msr(vmx, msr_info->index); @@ -2189,6 +2306,36 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) else vmx->pt_desc.guest.addr_a[index / 2] = data; break; + case MSR_IA32_COPY_LOCAL_TO_PLATFORM: + if (msr_info->data != 1) + return 1; + + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + + if (!vcpu->arch.iwkey_loaded) + return 1; + + if (!vcpu->kvm->arch.iwkey_backup_valid) { + vcpu->kvm->arch.iwkey_backup = vcpu->arch.iwkey; + vcpu->kvm->arch.iwkey_backup_valid = true; + vcpu->kvm->arch.msr_ia32_iwkey_backup_status = 0x9; + } + vcpu->arch.msr_ia32_copy_status = 1; + break; + + case MSR_IA32_COPY_PLATFORM_TO_LOCAL: + if (msr_info->data != 1) + return 1; + + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + return 1; + if (!vcpu->kvm->arch.iwkey_backup_valid) + return 1; + vcpu->arch.iwkey = vcpu->kvm->arch.iwkey_backup; + vcpu->arch.msr_ia32_copy_status = 1; + break; + case MSR_TSC_AUX: if (!msr_info->host_initiated && !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) @@ -5659,6 +5806,47 @@ static int handle_encls(struct kvm_vcpu *vcpu) return 1; } +static int handle_loadiwkey(struct kvm_vcpu *vcpu) +{ + u128 xmm[3] = {0}; + u32 vmx_instruction_info; + int reg1, reg2; + int r; + + if (!guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) { + kvm_queue_exception(vcpu, UD_VECTOR); + return 1; + } + + vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO); + reg1 = (vmx_instruction_info & 0x78) >> 3; + reg2 = (vmx_instruction_info >> 28) & 0xf; + + /* + * vmx instruction info on Current TGL is broken. + * Before the microcode fix, we hardcode XMM1 & XMM2 holding + * IWKey.encryption_key. + */ + reg1 = 1; + reg2 = 2; + r = get_xmm(0, &xmm[0]); + if (r) + return 0; + r = get_xmm(reg1, &xmm[1]); + if (r) + return 0; + r = get_xmm(reg2, &xmm[2]); + if (r) + return 0; + + vcpu->arch.iwkey.integrity_key = xmm[0]; + vcpu->arch.iwkey.encryption_key[0] = xmm[1]; + vcpu->arch.iwkey.encryption_key[1] = xmm[2]; + vcpu->arch.iwkey_loaded = true; + + return kvm_skip_emulated_instruction(vcpu); +} + /* * The exit handlers return 1 if the exit was handled fully and guest execution * may resume. 
Otherwise they set the kvm_run parameter to indicate what needs @@ -5715,6 +5903,7 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = { [EXIT_REASON_VMFUNC] = handle_vmx_instruction, [EXIT_REASON_PREEMPTION_TIMER] = handle_preemption_timer, [EXIT_REASON_ENCLS] = handle_encls, + [EXIT_REASON_LOADIWKEY] = handle_loadiwkey, }; static const int kvm_vmx_max_exit_handlers = diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index e545a8a..fbc839a 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1013,7 +1013,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))) kvm_mmu_reset_context(vcpu); - if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE)) + if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE | X86_CR4_KEYLOCKER)) kvm_update_cpuid_runtime(vcpu); return 0; @@ -9598,7 +9598,7 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) mmu_reset_needed |= kvm_read_cr4(vcpu) != sregs->cr4; cpuid_update_needed |= ((kvm_read_cr4(vcpu) ^ sregs->cr4) & - (X86_CR4_OSXSAVE | X86_CR4_PKE)); + (X86_CR4_OSXSAVE | X86_CR4_PKE | X86_CR4_KEYLOCKER)); kvm_x86_ops.set_cr4(vcpu, sregs->cr4); if (cpuid_update_needed) kvm_update_cpuid_runtime(vcpu); diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index e7ca622..0e6b826 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -404,6 +404,8 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r, __reserved_bits |= X86_CR4_UMIP; \ if (!__cpu_has(__c, X86_FEATURE_VMX)) \ __reserved_bits |= X86_CR4_VMXE; \ + if (!__cpu_has(__c, X86_FEATURE_KEYLOCKER)) \ + __reserved_bits |= X86_CR4_KEYLOCKER; \ __reserved_bits; \ }) From patchwork Mon Jan 25 09:06:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A7C7C433E0 for ; Tue, 26 Jan 2021 05:12:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1CC7520732 for ; Tue, 26 Jan 2021 05:12:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729109AbhAZFMi (ORCPT ); Tue, 26 Jan 2021 00:12:38 -0500 Received: from mga14.intel.com ([192.55.52.115]:22889 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726167AbhAYJRV (ORCPT ); Mon, 25 Jan 2021 04:17:21 -0500 IronPort-SDR: wslqyyTAphjGvfXDAbrHffxEoYMsKTt5He+1JqTnUqqb/w97lWtdMJpGwsbxAkclxia4Y7NuWn n6STyzAielvg== X-IronPort-AV: E=McAfee;i="6000,8403,9874"; a="178915797" X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="178915797" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 01:07:13 -0800 IronPort-SDR: lVeFJzCtaKsi5mZ78pAEn6lTIoe/roBqEzX/gmjaSZHp0TYOyb+KSsEaI9gSMRT8NMg70e5pwX yPYQkv5Fbl/A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="402223880" Received: from sqa-gate.sh.intel.com (HELO 
robert-ivt.tsp.org) ([10.239.48.212]) by fmsmga004.fm.intel.com with ESMTP; 25 Jan 2021 01:07:11 -0800 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 06/12] kvm/cpuid: Enumerate KeyLocker feature in KVM Date: Mon, 25 Jan 2021 17:06:14 +0800 Message-Id: <1611565580-47718-7-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In kvm_set_cpu_caps(), add KeyLocker feature enumeration, under the condition that 1) HW supports this feature 2) host Kernel isn't enabled with this feature. Filter out randomization support bit (CPUID.0x19.ECX[1]), as by design it cannot be supported at this moment. Signed-off-by: Robert Hoo --- arch/x86/kvm/cpuid.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 2fbf4af..5fc6b2c 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -406,9 +406,10 @@ void kvm_set_cpu_caps(void) kvm_cpu_cap_mask(CPUID_7_ECX, F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) | F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) | - F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) | - F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/ + F(VAES) | 0 /*KEYLOCKER*/ | F(VPCLMULQDQ) | F(AVX512_VNNI) | + F(AVX512_BITALG) | F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/ ); + /* Set LA57 based on hardware capability. 
*/ if (cpuid_ecx(7) & F(LA57)) kvm_cpu_cap_set(X86_FEATURE_LA57); @@ -451,6 +452,11 @@ void kvm_set_cpu_caps(void) F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES) ); + kvm_cpu_cap_mask(CPUID_19_EBX, F(KL_INS_ENABLED) | F(KL_WIDE) | + F(IWKEY_BACKUP)); + /* No randomize exposed to guest */ + kvm_cpu_cap_mask(CPUID_19_ECX, F(IWKEY_NOBACKUP)); + kvm_cpu_cap_mask(CPUID_8000_0001_ECX, F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ | F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) | @@ -784,6 +790,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) goto out; } break; + /* KeyLocker */ + case 0x19: + cpuid_entry_override(entry, CPUID_19_ECX); + break; + case KVM_CPUID_SIGNATURE: { static const char signature[12] = "KVMKVMKVM\0\0"; const u32 *sigptr = (const u32 *)signature; From patchwork Mon Jan 25 09:06:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045301 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7120DC433E0 for ; Tue, 26 Jan 2021 05:12:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4236A20732 for ; Tue, 26 Jan 2021 05:12:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729206AbhAZFMs (ORCPT ); Tue, 26 Jan 2021 00:12:48 -0500 Received: from mga14.intel.com ([192.55.52.115]:22891 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726205AbhAYJRV (ORCPT ); Mon, 25 Jan 2021 04:17:21 -0500 IronPort-SDR: C3/TwMfYo9FRN5ds5iEwhCYyEigez+d5Fp247BTIgXHXjaDGl7H0JLdupN8tOt2zMzdM+Z24Lx UVp5e67f4pCw== X-IronPort-AV: E=McAfee;i="6000,8403,9874"; a="178915803" X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="178915803" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 01:07:15 -0800 IronPort-SDR: auUtUaqroYPsrUBB/18m0dISPt3GX+kzly+G3Hx9di3PUWL5ffaduo8X4XzrHnvnuNwOYVJhEH NZJAkFFWB10Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="402223891" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by fmsmga004.fm.intel.com with ESMTP; 25 Jan 2021 01:07:13 -0800 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 07/12] kvm/vmx/nested: Support new IA32_VMX_PROCBASED_CTLS3 vmx feature control MSR Date: Mon, 25 Jan 2021 17:06:15 +0800 Message-Id: <1611565580-47718-8-git-send-email-robert.hu@linux.intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add this new VMX feature MSR in nested_vmx_msrs, for the Tertiary Proc-based 
Exec-Control nested support. Don't set its LOADIWKEY VM-Exit bit at present. It will be enabled in last patch when everything's prepared. Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/capabilities.h | 2 ++ arch/x86/kvm/vmx/nested.c | 18 +++++++++++++++++- arch/x86/kvm/vmx/vmx.c | 6 +++--- arch/x86/kvm/x86.c | 2 ++ 4 files changed, 24 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index d8bbde4..2a694c9 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -30,6 +30,8 @@ struct nested_vmx_msrs { u32 procbased_ctls_high; u32 secondary_ctls_low; u32 secondary_ctls_high; + /* Tertiary Controls is 64bit allow-1 semantics */ + u64 tertiary_ctls; u32 pinbased_ctls_low; u32 pinbased_ctls_high; u32 exit_ctls_low; diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 89af692..9eb1c0b 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -1285,6 +1285,13 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data) lowp = &vmx->nested.msrs.secondary_ctls_low; highp = &vmx->nested.msrs.secondary_ctls_high; break; + /* + * MSR_IA32_VMX_PROCBASED_CTLS3 is 64bit, all allow-1. + * No need to check. Just return. + */ + case MSR_IA32_VMX_PROCBASED_CTLS3: + vmx->nested.msrs.tertiary_ctls = data; + return 0; default: BUG(); } @@ -1421,6 +1428,7 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data) case MSR_IA32_VMX_TRUE_EXIT_CTLS: case MSR_IA32_VMX_TRUE_ENTRY_CTLS: case MSR_IA32_VMX_PROCBASED_CTLS2: + case MSR_IA32_VMX_PROCBASED_CTLS3: return vmx_restore_control_msr(vmx, msr_index, data); case MSR_IA32_VMX_MISC: return vmx_restore_vmx_misc(vmx, data); @@ -1516,6 +1524,9 @@ int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata) msrs->secondary_ctls_low, msrs->secondary_ctls_high); break; + case MSR_IA32_VMX_PROCBASED_CTLS3: + *pdata = msrs->tertiary_ctls; + break; case MSR_IA32_VMX_EPT_VPID_CAP: *pdata = msrs->ept_caps | ((u64)msrs->vpid_caps << 32); @@ -6375,7 +6386,8 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps) CPU_BASED_USE_IO_BITMAPS | CPU_BASED_MONITOR_TRAP_FLAG | CPU_BASED_MONITOR_EXITING | CPU_BASED_RDPMC_EXITING | CPU_BASED_RDTSC_EXITING | CPU_BASED_PAUSE_EXITING | - CPU_BASED_TPR_SHADOW | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS; + CPU_BASED_TPR_SHADOW | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS; /* * We can allow some features even when not supported by the * hardware. For example, L1 can specify an MSR bitmap - and we @@ -6413,6 +6425,10 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps) SECONDARY_EXEC_RDSEED_EXITING | SECONDARY_EXEC_XSAVES; + if (msrs->procbased_ctls_high & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) + rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, + msrs->tertiary_ctls); + msrs->tertiary_ctls &= ~TERTIARY_EXEC_LOADIWKEY_EXITING; /* * We can emulate "VMCS shadowing," even if the hardware * doesn't support it. diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 6be6d87..f29a91c 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1880,7 +1880,7 @@ static inline bool vmx_feature_control_msr_valid(struct kvm_vcpu *vcpu, static int vmx_get_msr_feature(struct kvm_msr_entry *msr) { switch (msr->index) { - case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: + case MSR_IA32_VMX_BASIC ... 
MSR_IA32_VMX_PROCBASED_CTLS3: if (!nested) return 1; return vmx_get_vmx_msr(&vmcs_config.nested, msr->index, &msr->data); @@ -1961,7 +1961,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_IA32_FEAT_CTL: msr_info->data = vmx->msr_ia32_feature_control; break; - case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: + case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_PROCBASED_CTLS3: if (!nested_vmx_allowed(vcpu)) return 1; if (vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index, @@ -2240,7 +2240,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) if (msr_info->host_initiated && data == 0) vmx_leave_nested(vcpu); break; - case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC: + case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_PROCBASED_CTLS3: if (!msr_info->host_initiated) return 1; /* they are read-only */ if (!nested_vmx_allowed(vcpu)) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index fbc839a..d428022 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1300,6 +1300,7 @@ bool kvm_rdpmc(struct kvm_vcpu *vcpu) MSR_IA32_VMX_PROCBASED_CTLS2, MSR_IA32_VMX_EPT_VPID_CAP, MSR_IA32_VMX_VMFUNC, + MSR_IA32_VMX_PROCBASED_CTLS3, MSR_K7_HWCR, MSR_KVM_POLL_CONTROL, @@ -1331,6 +1332,7 @@ bool kvm_rdpmc(struct kvm_vcpu *vcpu) MSR_IA32_VMX_PROCBASED_CTLS2, MSR_IA32_VMX_EPT_VPID_CAP, MSR_IA32_VMX_VMFUNC, + MSR_IA32_VMX_PROCBASED_CTLS3, MSR_F10H_DECFG, MSR_IA32_UCODE_REV, From patchwork Mon Jan 25 09:06:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045303 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7022BC433DB for ; Tue, 26 Jan 2021 05:13:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 381A320732 for ; Tue, 26 Jan 2021 05:13:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729300AbhAZFM6 (ORCPT ); Tue, 26 Jan 2021 00:12:58 -0500 Received: from mga14.intel.com ([192.55.52.115]:22894 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726213AbhAYJRV (ORCPT ); Mon, 25 Jan 2021 04:17:21 -0500 IronPort-SDR: YTCaMAUxY4k+oL3ZbIr4by0PIN4KBz9eoXJpCv/SOL6t3pCaMaJT8x+N7WeUjdR1T0NGJ1U/Mq Xifw6UG5UdRA== X-IronPort-AV: E=McAfee;i="6000,8403,9874"; a="178915811" X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="178915811" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 01:07:18 -0800 IronPort-SDR: Ch7Z9tUe5CjN5UUCUdZ44gYHUa+DzmsLXDUPGZAVk7EH7soqotHzydoggakX5HWAU7Iin/V9dd 8pQzompdyGgA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,373,1602572400"; d="scan'208";a="402223907" Received: from sqa-gate.sh.intel.com (HELO robert-ivt.tsp.org) ([10.239.48.212]) by fmsmga004.fm.intel.com with ESMTP; 25 Jan 2021 01:07:16 -0800 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, 
kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 08/12] kvm/vmx: Refactor vmx_compute_tertiary_exec_control() Date: Mon, 25 Jan 2021 17:06:16 +0800 Message-Id: <1611565580-47718-9-git-send-email-robert.hu@linux.intel.com> In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> X-Mailing-List: kvm@vger.kernel.org Like vmx_compute_secondary_exec_control(), compute the nested VMX feature control MSR's value according to the guest CPUID settings before L1's VMCS is set up. Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmx.c | 22 +++++++++++++++++----- arch/x86/kvm/vmx/vmx.h | 1 + 2 files changed, 18 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index f29a91c..cf8ab95 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4377,10 +4377,20 @@ u32 vmx_exec_control(struct vcpu_vmx *vmx) #define vmx_adjust_sec_exec_exiting(vmx, exec_control, lname, uname) \ vmx_adjust_sec_exec_control(vmx, exec_control, lname, uname, uname##_EXITING, true) -static u64 vmx_tertiary_exec_control(struct vcpu_vmx *vmx) +static void vmx_compute_tertiary_exec_control(struct vcpu_vmx *vmx) { - /* Though currently, no special adjustment. There might be in the future*/ - return vmcs_config.cpu_based_3rd_exec_ctrl; + struct kvm_vcpu *vcpu = &vmx->vcpu; + u64 exec_control = vmcs_config.cpu_based_3rd_exec_ctrl; + + if (nested) { + if (guest_cpuid_has(vcpu, X86_FEATURE_KEYLOCKER)) + vmx->nested.msrs.tertiary_ctls |= + TERTIARY_EXEC_LOADIWKEY_EXITING; + else + vmx->nested.msrs.tertiary_ctls &= + ~TERTIARY_EXEC_LOADIWKEY_EXITING; + } + vmx->tertiary_exec_control = exec_control; } static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) @@ -4493,8 +4503,10 @@ static void init_vmcs(struct vcpu_vmx *vmx) secondary_exec_controls_set(vmx, vmx->secondary_exec_control); } - if (cpu_has_tertiary_exec_ctrls()) - tertiary_exec_controls_set(vmx, vmx_tertiary_exec_control(vmx)); + if (cpu_has_tertiary_exec_ctrls()) { + vmx_compute_tertiary_exec_control(vmx); + tertiary_exec_controls_set(vmx, vmx->tertiary_exec_control); + } if (kvm_vcpu_apicv_active(&vmx->vcpu)) { vmcs_write64(EOI_EXIT_BITMAP0, 0); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 94f1c27..0915fad 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -209,6 +209,7 @@ struct vcpu_vmx { u32 msr_ia32_umwait_control; u32 secondary_exec_control; + u64 tertiary_exec_control; /* * loaded_vmcs points to the VMCS currently used in this vcpu.
For a From patchwork Mon Jan 25 09:06:17 2021 X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045305 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 09/12] kvm/vmx/vmcs12: Add Tertiary Exec-Control field in vmcs12 Date: Mon, 25 Jan 2021 17:06:17 +0800 Message-Id: <1611565580-47718-10-git-send-email-robert.hu@linux.intel.com> In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> X-Mailing-List: kvm@vger.kernel.org Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmcs12.c | 1 + arch/x86/kvm/vmx/vmcs12.h | 3 ++- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c index c8e51c0..603c785 100644 --- a/arch/x86/kvm/vmx/vmcs12.c +++ b/arch/x86/kvm/vmx/vmcs12.c @@ -50,6 +50,7 @@ FIELD64(VMREAD_BITMAP, vmread_bitmap), FIELD64(VMWRITE_BITMAP, vmwrite_bitmap), FIELD64(XSS_EXIT_BITMAP, xss_exit_bitmap), + FIELD64(TERTIARY_VM_EXEC_CONTROL, tertiary_vm_exec_control), FIELD64(GUEST_PHYSICAL_ADDRESS, guest_physical_address), FIELD64(VMCS_LINK_POINTER, vmcs_link_pointer), FIELD64(GUEST_IA32_DEBUGCTL, guest_ia32_debugctl), diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h index 80232da..489c29d 100644 --- a/arch/x86/kvm/vmx/vmcs12.h +++ b/arch/x86/kvm/vmx/vmcs12.h @@ -69,7 +69,8 @@ struct __packed vmcs12 { u64 vm_function_control; u64 eptp_list_address; u64
pml_address; - u64 padding64[3]; /* room for future expansion */ + u64 tertiary_vm_exec_control; + u64 padding64[2]; /* room for future expansion */ /* * To allow migration of L1 (complete with its L2 guests) between * machines of different natural widths (32 or 64 bit), we cannot have From patchwork Mon Jan 25 09:06:18 2021 X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045309 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 10/12] kvm/vmx/nested: Support tertiary VM-Exec control in vmcs02 Date: Mon, 25 Jan 2021 17:06:18 +0800 Message-Id: <1611565580-47718-11-git-send-email-robert.hu@linux.intel.com> In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> X-Mailing-List: kvm@vger.kernel.org Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/nested.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 9eb1c0b..de36129 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2232,6 +2232,7 @@ static void prepare_vmcs02_early_rare(struct vcpu_vmx *vmx, static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12) { u32 exec_control, vmcs12_exec_ctrl; + u64 vmcs02_ter_exec_ctrl; u64 guest_efer = nested_vmx_calc_efer(vmx, vmcs12); if (vmx->nested.dirty_vmcs12 || vmx->nested.hv_evmcs) @@ -2351,6 +2352,18 @@ static void
prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12) vm_entry_controls_set(vmx, exec_control); /* + * Tertiary EXEC CONTROLS + */ + if (cpu_has_tertiary_exec_ctrls()) { + vmcs02_ter_exec_ctrl = vmx->tertiary_exec_control; + if (nested_cpu_has(vmcs12, + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS)) + vmcs02_ter_exec_ctrl |= vmcs12->tertiary_vm_exec_control; + + tertiary_exec_controls_set(vmx, vmcs02_ter_exec_ctrl); + } + + /* * EXIT CONTROLS * * L2->L1 exit controls are emulated - the hardware exit is to L0 so From patchwork Mon Jan 25 09:06:19 2021 X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045307 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 11/12] kvm/vmx/nested: Support CR4.KL in nested Date: Mon, 25 Jan 2021 17:06:19 +0800 Message-Id: <1611565580-47718-12-git-send-email-robert.hu@linux.intel.com> In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> X-Mailing-List: kvm@vger.kernel.org Add CR4.KL to nested.msrs.cr4_fixed1 when the guest CPUID supports Key Locker, so that a guest CR4 value with CR4.KL set can pass the CR4 check when preparing vmcs02.
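For reference, the kind of fixed-bit check this feeds is sketched below. This is an illustrative sketch only, not the exact KVM code; cr4_fixed_bits_valid() here is a simplified stand-in for KVM's nested CR4 validation.

/*
 * Sketch: how an allow-1 (fixed1) / must-be-1 (fixed0) pair gates a
 * guest CR4 value. Without X86_CR4_KEYLOCKER in cr4_fixed1, a vmcs12
 * CR4 with the KL bit set would fail this kind of check while
 * building vmcs02.
 */
static bool cr4_fixed_bits_valid(u64 val, u64 fixed0, u64 fixed1)
{
	/* no bit set outside the allow-1 mask, every must-be-1 bit set */
	return ((val & fixed1) == val) && ((val & fixed0) == fixed0);
}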
Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/vmx.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index cf8ab95..f66887d 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7374,6 +7374,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu) cr4_fixed1_update(X86_CR4_PKE, ecx, feature_bit(PKU)); cr4_fixed1_update(X86_CR4_UMIP, ecx, feature_bit(UMIP)); cr4_fixed1_update(X86_CR4_LA57, ecx, feature_bit(LA57)); + cr4_fixed1_update(X86_CR4_KEYLOCKER, ecx, feature_bit(KEYLOCKER)); #undef cr4_fixed1_update } From patchwork Mon Jan 25 09:06:20 2021 X-Patchwork-Submitter: Robert Hoo X-Patchwork-Id: 12045311 From: Robert Hoo To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org Cc: chang.seok.bae@intel.com, kvm@vger.kernel.org, robert.hu@intel.com, Robert Hoo Subject: [RFC PATCH 12/12] kvm/vmx/nested: Enable nested LOADIWKEY VM-exit Date: Mon, 25 Jan 2021 17:06:20 +0800 Message-Id: <1611565580-47718-13-git-send-email-robert.hu@linux.intel.com> In-Reply-To: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> References: <1611565580-47718-1-git-send-email-robert.hu@linux.intel.com> X-Mailing-List: kvm@vger.kernel.org Set the LOADIWKEY VM-exit bit in the nested VMX control MSR, and let L1 intercept L2's LOADIWKEY VM-Exits.
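As a rough illustration of what this exposes to L1 (a sketch only, using the control names this series introduces; the VMCS accessors stand in for whatever VMWRITE wrappers an L1 hypervisor actually uses):

/*
 * Sketch: an L1 hypervisor opting in to LOADIWKEY interception for its
 * L2 guest. It checks the allow-1 mask reported in
 * IA32_VMX_PROCBASED_CTLS3, then enables tertiary controls and the
 * LOADIWKEY-exiting bit in its VMCS.
 */
u64 allowed1;

rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, allowed1);
if (allowed1 & TERTIARY_EXEC_LOADIWKEY_EXITING) {
	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL,
		     vmcs_read32(CPU_BASED_VM_EXEC_CONTROL) |
		     CPU_BASED_ACTIVATE_TERTIARY_CONTROLS);
	vmcs_write64(TERTIARY_VM_EXEC_CONTROL,
		     vmcs_read64(TERTIARY_VM_EXEC_CONTROL) |
		     TERTIARY_EXEC_LOADIWKEY_EXITING);
}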
Signed-off-by: Robert Hoo --- arch/x86/kvm/vmx/nested.c | 5 ++++- arch/x86/kvm/vmx/nested.h | 7 +++++++ 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index de36129..5a6b04d 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -5927,6 +5927,9 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason) case EXIT_REASON_TPAUSE: return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE); + case EXIT_REASON_LOADIWKEY: + return nested_cpu_has3(vmcs12, + TERTIARY_EXEC_LOADIWKEY_EXITING); default: return true; } @@ -6441,7 +6444,7 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps) if (msrs->procbased_ctls_high & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS3, msrs->tertiary_ctls); - msrs->tertiary_ctls &= ~TERTIARY_EXEC_LOADIWKEY_EXITING; + msrs->tertiary_ctls &= TERTIARY_EXEC_LOADIWKEY_EXITING; /* * We can emulate "VMCS shadowing," even if the hardware * doesn't support it. diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h index 197148d..3dda114 100644 --- a/arch/x86/kvm/vmx/nested.h +++ b/arch/x86/kvm/vmx/nested.h @@ -145,6 +145,13 @@ static inline bool nested_cpu_has2(struct vmcs12 *vmcs12, u32 bit) (vmcs12->secondary_vm_exec_control & bit); } +static inline bool nested_cpu_has3(struct vmcs12 *vmcs12, u32 bit) +{ + return (vmcs12->cpu_based_vm_exec_control & + CPU_BASED_ACTIVATE_TERTIARY_CONTROLS) && + (vmcs12->tertiary_vm_exec_control & bit); +} + static inline bool nested_cpu_has_preemption_timer(struct vmcs12 *vmcs12) { return vmcs12->pin_based_vm_exec_control &