From patchwork Wed Jul 22 16:00:48 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678825
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Marian Rotariu, Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 01/34] KVM: x86: export .get_vmfunc_status()
Date: Wed, 22 Jul 2020 19:00:48 +0300
Message-Id: <20200722160121.9601-2-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Marian Rotariu

The introspection tool uses this function to check the hardware support
for VMFUNC, which can be used either to singlestep vCPUs on an
unprotected EPT view or to use #VE in order to filter out VM-exits
caused by EPT violations.
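For orientation (not part of the patch): the new hook lives in the
kvm_x86_ops table and stays NULL on vendors that do not implement it,
so callers have to probe before use. A minimal sketch of a caller under
that assumption — the kvmi_arch_has_vmfunc() wrapper name is
hypothetical, only the .get_vmfunc_status member comes from this patch:

	/* Query VMFUNC support through the optional callback. */
	static bool kvmi_arch_has_vmfunc(void)
	{
		if (!kvm_x86_ops.get_vmfunc_status)
			return false;

		return kvm_x86_ops.get_vmfunc_status();
	}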
Signed-off-by: Marian Rotariu
Co-developed-by: Ștefan Șicleru
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/vmx/vmx.c          | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d96bf0e15ea2..ab6989745f9c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1300,6 +1300,7 @@ struct kvm_x86_ops {
 	bool (*spt_fault)(struct kvm_vcpu *vcpu);
 	bool (*gpt_translation_fault)(struct kvm_vcpu *vcpu);
 	void (*control_singlestep)(struct kvm_vcpu *vcpu, bool enable);
+	bool (*get_vmfunc_status)(void);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8c9ccd1ba0f0..ec4396d5f36f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7992,6 +7992,11 @@ static void vmx_control_singlestep(struct kvm_vcpu *vcpu, bool enable)
 			  CPU_BASED_MONITOR_TRAP_FLAG);
 }
 
+static bool vmx_get_vmfunc_status(void)
+{
+	return cpu_has_vmx_vmfunc();
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.hardware_unsetup = hardware_unsetup,
 
@@ -8133,6 +8138,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.spt_fault = vmx_spt_fault,
 	.gpt_translation_fault = vmx_gpt_translation_fault,
 	.control_singlestep = vmx_control_singlestep,
+	.get_vmfunc_status = vmx_get_vmfunc_status,
 };
 
 static __init int hardware_setup(void)

From patchwork Wed Jul 22 16:00:49 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678889
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Marian Rotariu, Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 02/34] KVM: x86: export .get_eptp_switching_status()
Date: Wed, 22 Jul 2020 19:00:49 +0300
Message-Id: <20200722160121.9601-3-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Marian Rotariu
The introspection tool uses this function to check the hardware support
for EPT switching, which can be used either to singlestep vCPUs on an
unprotected EPT view or to use #VE in order to filter out VM-exits
caused by EPT violations.

Signed-off-by: Marian Rotariu
Co-developed-by: Ștefan Șicleru
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/vmx/capabilities.h | 8 ++++++++
 arch/x86/kvm/vmx/vmx.c          | 8 ++++++++
 arch/x86/kvm/x86.c              | 3 +++
 4 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ab6989745f9c..5eb26135e81b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1301,6 +1301,7 @@ struct kvm_x86_ops {
 	bool (*gpt_translation_fault)(struct kvm_vcpu *vcpu);
 	void (*control_singlestep)(struct kvm_vcpu *vcpu, bool enable);
 	bool (*get_vmfunc_status)(void);
+	bool (*get_eptp_switching_status)(void);
 };
 
 struct kvm_x86_nested_ops {
@@ -1422,6 +1423,7 @@ extern u64 kvm_max_tsc_scaling_ratio;
 extern u64 kvm_default_tsc_scaling_ratio;
 extern u64 kvm_mce_cap_supported;
+extern bool kvm_eptp_switching_supported;
 
 /*
  * EMULTYPE_NO_DECODE - Set when re-emulating an instruction (after completing
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index e7d7fcb7e17f..92781e2c523e 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -219,6 +219,14 @@ static inline bool cpu_has_vmx_vmfunc(void)
 		SECONDARY_EXEC_ENABLE_VMFUNC;
 }
 
+static inline bool cpu_has_vmx_eptp_switching(void)
+{
+	u64 vmx_msr;
+
+	rdmsrl(MSR_IA32_VMX_VMFUNC, vmx_msr);
+	return vmx_msr & VMX_VMFUNC_EPTP_SWITCHING;
+}
+
 static inline bool cpu_has_vmx_shadow_vmcs(void)
 {
 	u64 vmx_msr;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ec4396d5f36f..ccbf561b0fc4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7997,6 +7997,11 @@ static bool vmx_get_vmfunc_status(void)
 	return cpu_has_vmx_vmfunc();
 }
 
+static bool vmx_get_eptp_switching_status(void)
+{
+	return kvm_eptp_switching_supported;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.hardware_unsetup = hardware_unsetup,
 
@@ -8139,6 +8144,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.gpt_translation_fault = vmx_gpt_translation_fault,
 	.control_singlestep = vmx_control_singlestep,
 	.get_vmfunc_status = vmx_get_vmfunc_status,
+	.get_eptp_switching_status = vmx_get_eptp_switching_status,
 };
 
 static __init int hardware_setup(void)
@@ -8178,6 +8184,8 @@ static __init int hardware_setup(void)
 	    !cpu_has_vmx_invept_global())
 		enable_ept = 0;
 
+	kvm_eptp_switching_supported = cpu_has_vmx_eptp_switching();
+
 	if (!cpu_has_vmx_ept_ad_bits() || !enable_ept)
 		enable_ept_ad_bits = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index feb20b29bb92..b16b018c74cc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -161,6 +161,9 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
 int __read_mostly pi_inject_timer = -1;
 module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
 
+bool __read_mostly kvm_eptp_switching_supported;
+EXPORT_SYMBOL_GPL(kvm_eptp_switching_supported);
+
 #define KVM_NR_SHARED_MSRS 16
 
 struct kvm_shared_msrs_global {
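For orientation (not part of the patch): the two status hooks from
patches 01 and 02 are meant to be consumed together, since EPTP
switching is a VM function and therefore useless without VMFUNC itself.
A sketch of how an introspection-side caller might gate EPT-view
features — the helper name is hypothetical, only the two kvm_x86_ops
members come from the series:

	/* True only when the CPU offers both VMFUNC and EPTP switching. */
	static bool kvmi_arch_can_switch_eptp(void)
	{
		return kvm_x86_ops.get_vmfunc_status &&
		       kvm_x86_ops.get_vmfunc_status() &&
		       kvm_x86_ops.get_eptp_switching_status &&
		       kvm_x86_ops.get_eptp_switching_status();
	}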
From patchwork Wed Jul 22 16:00:50 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678823
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 03/34] KVM: x86: add kvm_get_ept_view()
Date: Wed, 22 Jul 2020 19:00:50 +0300
Message-Id: <20200722160121.9601-4-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Ștefan Șicleru

This function returns the EPT view of the current vCPU or 0 if the
hardware support is missing.

Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/vmx/vmx.c          |  8 ++++++++
 arch/x86/kvm/vmx/vmx.h          |  3 +++
 arch/x86/kvm/x86.c              | 10 ++++++++++
 4 files changed, 24 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5eb26135e81b..0acc21087caf 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1302,6 +1302,7 @@ struct kvm_x86_ops {
 	void (*control_singlestep)(struct kvm_vcpu *vcpu, bool enable);
 	bool (*get_vmfunc_status)(void);
 	bool (*get_eptp_switching_status)(void);
+	u16 (*get_ept_view)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_nested_ops {
@@ -1773,4 +1774,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 #define GET_SMSTATE(type, buf, offset)		\
 	(*(type *)((buf) + (offset) - 0x7e00))
 
+u16 kvm_get_ept_view(struct kvm_vcpu *vcpu);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ccbf561b0fc4..0256c3a93c87 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8002,6 +8002,13 @@ static bool vmx_get_eptp_switching_status(void)
 	return kvm_eptp_switching_supported;
 }
 
+static u16 vmx_get_ept_view(struct kvm_vcpu *vcpu)
+{
+	const struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	return vmx->view;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.hardware_unsetup = hardware_unsetup,
 
@@ -8145,6 +8152,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.control_singlestep = vmx_control_singlestep,
 	.get_vmfunc_status = vmx_get_vmfunc_status,
 	.get_eptp_switching_status = vmx_get_eptp_switching_status,
+	.get_ept_view = vmx_get_ept_view,
 };
 
 static __init int hardware_setup(void)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index aa0c7ffd588b..14f0b9102d58 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -296,6 +296,9 @@ struct vcpu_vmx {
 	u64 ept_pointer;
 
 	struct pt_desc pt_desc;
+
+	/* The view this vcpu operates on. */
+	u16 view;
 };
 
 enum ept_pointers_status {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b16b018c74cc..2e2c56a37bdb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10869,6 +10869,16 @@ u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);
 
+u16 kvm_get_ept_view(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_x86_ops.get_ept_view)
+		return 0;
+
+	return kvm_x86_ops.get_ept_view(vcpu);
+}
+EXPORT_SYMBOL_GPL(kvm_get_ept_view);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);

From patchwork Wed Jul 22 16:00:51 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678891
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Adalbert Lazăr
Subject: [RFC PATCH v1 04/34] KVM: x86: mmu: reindent to avoid lines longer than 80 chars
Date: Wed, 22 Jul 2020 19:00:51 +0300
Message-Id: <20200722160121.9601-5-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

Signed-off-by: Adalbert Lazăr
---
 arch/x86/kvm/mmu/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 97766f34910d..f3ba4d0452c9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2573,6 +2573,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	bool flush = false;
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
+	unsigned int pg_hash;
 
 	role = vcpu->arch.mmu->mmu_role.base;
 	role.level = level;
@@ -2623,8 +2624,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	sp->gfn = gfn;
 	sp->role = role;
+	pg_hash = kvm_page_table_hashfn(gfn);
 	hlist_add_head(&sp->hash_link,
-		&vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]);
+		       &vcpu->kvm->arch.mmu_page_hash[pg_hash]);
 	if (!direct) {
 		/*
 		 * we should do write protection before syncing pages
From patchwork Wed Jul 22 16:00:52 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678869
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 05/34] KVM: x86: mmu: add EPT view parameter to kvm_mmu_get_page()
Date: Wed, 22 Jul 2020 19:00:52 +0300
Message-Id: <20200722160121.9601-6-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Ștefan Șicleru

This will be used to create root_hpa for all the EPT views.
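In other words, the shadow-page machinery gains a view dimension:
lookups and insertions first select a per-view hash table, so the same
gfn can be shadowed differently in each view. A sketch of what the
indexing in the diff below amounts to (illustrative caller, not code
from the series; view 0 is the default view):

	/* One shadow-page hash table per EPT view. */
	struct hlist_head *bucket =
		&kvm->arch.mmu_page_hash[view][kvm_page_table_hashfn(gfn)];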
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h |  7 +++++-
 arch/x86/kvm/mmu/mmu.c          | 43 ++++++++++++++++++-------------
 arch/x86/kvm/mmu/paging_tmpl.h  |  6 +++--
 3 files changed, 36 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0acc21087caf..bd45778e0904 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -243,6 +243,8 @@ enum x86_intercept_stage;
 			     PFERR_WRITE_MASK |		\
 			     PFERR_PRESENT_MASK)
 
+#define KVM_MAX_EPT_VIEWS	3
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC	0
 /*
@@ -349,6 +351,9 @@ struct kvm_mmu_page {
 	union kvm_mmu_page_role role;
 	gfn_t gfn;
 
+	/* The view this shadow page belongs to */
+	u16 view;
+
 	u64 *spt;
 	/* hold the gfn of each spte inside spt */
 	gfn_t *gfns;
@@ -936,7 +941,7 @@ struct kvm_arch {
 	unsigned long n_max_mmu_pages;
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
-	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
+	struct hlist_head mmu_page_hash[KVM_MAX_EPT_VIEWS][KVM_NUM_MMU_PAGES];
 	/*
 	 * Hash table of struct kvm_mmu_page.
 	 */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f3ba4d0452c9..0b6527a1ebe6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2349,14 +2349,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list);
 
-#define for_each_valid_sp(_kvm, _sp, _gfn)				\
+#define for_each_valid_sp(_kvm, _sp, _gfn, view)			\
 	hlist_for_each_entry(_sp,					\
-	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
+	  &(_kvm)->arch.mmu_page_hash[view][kvm_page_table_hashfn(_gfn)], hash_link) \
 		if (is_obsolete_sp((_kvm), (_sp))) {			\
 		} else
 
 #define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn)			\
-	for_each_valid_sp(_kvm, _sp, _gfn)				\
+	for_each_valid_sp(_kvm, _sp, _gfn, 0)				\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
 static inline bool is_ept_sp(struct kvm_mmu_page *sp)
@@ -2564,7 +2564,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     gva_t gaddr,
 					     unsigned level,
 					     int direct,
-					     unsigned int access)
+					     unsigned int access,
+					     u16 view)
 {
 	union kvm_mmu_page_role role;
 	unsigned quadrant;
@@ -2587,7 +2588,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
-	for_each_valid_sp(vcpu->kvm, sp, gfn) {
+	for_each_valid_sp(vcpu->kvm, sp, gfn, view) {
 		if (sp->gfn != gfn) {
 			collisions++;
 			continue;
@@ -2624,9 +2625,10 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	sp->gfn = gfn;
 	sp->role = role;
+	sp->view = view;
 	pg_hash = kvm_page_table_hashfn(gfn);
 	hlist_add_head(&sp->hash_link,
-		       &vcpu->kvm->arch.mmu_page_hash[pg_hash]);
+		       &vcpu->kvm->arch.mmu_page_hash[view][pg_hash]);
 	if (!direct) {
 		/*
 		 * we should do write protection before syncing pages
@@ -3463,7 +3465,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
 		drop_large_spte(vcpu, it.sptep);
 		if (!is_shadow_present_pte(*it.sptep)) {
 			sp = kvm_mmu_get_page(vcpu, base_gfn, it.addr,
-					      it.level - 1, true, ACC_ALL);
+					      it.level - 1, true, ACC_ALL,
+					      kvm_get_ept_view(vcpu));
 
 			link_shadow_page(vcpu, it.sptep, sp);
 			if (account_disallowed_nx_lpage)
@@ -3788,7 +3791,7 @@ static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
 }
 
 static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
-			    u8 level, bool direct)
+			    u8 level, bool direct, u16 view)
 {
 	struct kvm_mmu_page *sp;
 
@@ -3798,7 +3801,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 		spin_unlock(&vcpu->kvm->mmu_lock);
 		return INVALID_PAGE;
 	}
-	sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL);
+	sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL, view);
 	++sp->root_count;
 
 	spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3809,19 +3812,24 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 {
 	u8 shadow_root_level = vcpu->arch.mmu->shadow_root_level;
 	hpa_t root;
-	unsigned i;
+
+	u16 i;
 
 	if (shadow_root_level >= PT64_ROOT_4LEVEL) {
-		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
-		if (!VALID_PAGE(root))
-			return -ENOSPC;
-		vcpu->arch.mmu->root_hpa = root;
+		for (i = 0; i < KVM_MAX_EPT_VIEWS; i++) {
+			root = mmu_alloc_root(vcpu, 0, PAGE_SIZE * i,
+					      shadow_root_level, true, i);
+			if (!VALID_PAGE(root))
+				return -ENOSPC;
+			if (i == 0)
+				vcpu->arch.mmu->root_hpa = root;
+		}
 	} else if (shadow_root_level == PT32E_ROOT_LEVEL) {
 		for (i = 0; i < 4; ++i) {
 			MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
 
 			root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
-					      i << 30, PT32_ROOT_LEVEL, true);
+					      i << 30, PT32_ROOT_LEVEL, true, 0);
 			if (!VALID_PAGE(root))
 				return -ENOSPC;
 			vcpu->arch.mmu->pae_root[i] = root | PT_PRESENT_MASK;
@@ -3857,7 +3865,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
 
 		root = mmu_alloc_root(vcpu, root_gfn, 0,
-				      vcpu->arch.mmu->shadow_root_level, false);
+				      vcpu->arch.mmu->shadow_root_level, false,
+				      0);
 		if (!VALID_PAGE(root))
 			return -ENOSPC;
 		vcpu->arch.mmu->root_hpa = root;
@@ -3887,7 +3896,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		}
 
 		root = mmu_alloc_root(vcpu, root_gfn, i << 30,
-				      PT32_ROOT_LEVEL, false);
+				      PT32_ROOT_LEVEL, false, 0);
 		if (!VALID_PAGE(root))
 			return -ENOSPC;
 		vcpu->arch.mmu->pae_root[i] = root | pm_mask;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index bd70ece1ef8b..244e339dee52 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -665,7 +665,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		if (!is_shadow_present_pte(*it.sptep)) {
 			table_gfn = gw->table_gfn[it.level - 2];
 			sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
-					      false, access);
+					      false, access,
+					      kvm_get_ept_view(vcpu));
 		}
 
 		/*
@@ -702,7 +703,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 
 		if (!is_shadow_present_pte(*it.sptep)) {
 			sp = kvm_mmu_get_page(vcpu, base_gfn, addr,
-					      it.level - 1, true, direct_access,
+					      it.level - 1, true, direct_access,
+					      kvm_get_ept_view(vcpu));
 			link_shadow_page(vcpu, it.sptep, sp);
 			if (lpage_disallowed)
 				account_huge_nx_page(vcpu->kvm, sp);

From patchwork Wed Jul 22 16:00:53 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678877
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Marian Rotariu, Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 06/34] KVM: x86: mmu: add support for EPT switching
Date: Wed, 22 Jul 2020 19:00:53 +0300
Message-Id: <20200722160121.9601-7-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Marian Rotariu

The introspection tool uses this function to check the hardware support
for EPT switching, which can be used either to singlestep vCPUs on an
unprotected EPT view or to use #VE in order to filter out VM-exits
caused by EPT violations.
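For reference, once the per-vCPU EPTP list built below is in place,
switching views from inside the guest needs no VM-exit: per the Intel
SDM, VM function 0 is EPTP switching, taking the leaf number in EAX and
the EPTP-list index in ECX. A sketch of the guest-side invocation
(assumed from the SDM, not code from this patch):

	/* Switch the current vCPU to the EPT view at eptp_list[view]. */
	static inline void vmfunc_switch_ept_view(unsigned long view)
	{
		asm volatile("vmfunc"
			     : /* no outputs */
			     : "a" (0),	  /* EAX = 0: EPTP-switching leaf */
			       "c" (view) /* ECX = index into the EPTP list */
			     : "memory");
	}

This is also why vmx_handle_exit() below re-reads EPT_POINTER on every
exit: the guest may have switched views behind KVM's back, so vmx->view
and mmu->root_hpa have to be resynchronized.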
Signed-off-by: Marian Rotariu
Co-developed-by: Ștefan Șicleru
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu/mmu.c          | 12 ++--
 arch/x86/kvm/vmx/vmx.c          | 98 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h          |  1 +
 4 files changed, 108 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index bd45778e0904..1035308940fe 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -421,6 +421,7 @@ struct kvm_mmu {
 	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   u64 *spte, const void *pte);
 	hpa_t root_hpa;
+	hpa_t root_hpa_altviews[KVM_MAX_EPT_VIEWS];
 	gpa_t root_pgd;
 	union kvm_mmu_role mmu_role;
 	u8 root_level;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0b6527a1ebe6..553425ab3518 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3760,8 +3760,11 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	if (free_active_root) {
 		if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
 		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
-			mmu_free_root_page(vcpu->kvm, &mmu->root_hpa,
-					   &invalid_list);
+			for (i = 0; i < KVM_MAX_EPT_VIEWS; i++)
+				mmu_free_root_page(vcpu->kvm,
+						   mmu->root_hpa_altviews + i,
+						   &invalid_list);
+			mmu->root_hpa = INVALID_PAGE;
 		} else {
 			for (i = 0; i < 4; ++i)
 				if (mmu->pae_root[i] != 0)
@@ -3821,9 +3824,10 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 					      shadow_root_level, true, i);
 			if (!VALID_PAGE(root))
 				return -ENOSPC;
-			if (i == 0)
-				vcpu->arch.mmu->root_hpa = root;
+			vcpu->arch.mmu->root_hpa_altviews[i] = root;
 		}
+		vcpu->arch.mmu->root_hpa =
+		  vcpu->arch.mmu->root_hpa_altviews[kvm_get_ept_view(vcpu)];
 	} else if (shadow_root_level == PT32E_ROOT_LEVEL) {
 		for (i = 0; i < 4; ++i) {
 			MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0256c3a93c87..2024ef4d9a74 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3124,6 +3124,32 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
 	return eptp;
 }
 
+static void vmx_construct_eptp_with_index(struct kvm_vcpu *vcpu,
+					  unsigned short view)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	u64 *eptp_list = NULL;
+
+	if (!vmx->eptp_list_pg)
+		return;
+
+	eptp_list = phys_to_virt(page_to_phys(vmx->eptp_list_pg));
+
+	if (!eptp_list)
+		return;
+
+	eptp_list[view] = construct_eptp(vcpu,
+				vcpu->arch.mmu->root_hpa_altviews[view]);
+}
+
+static void vmx_construct_eptp_list(struct kvm_vcpu *vcpu)
+{
+	unsigned short view;
+
+	for (view = 0; view < KVM_MAX_EPT_VIEWS; view++)
+		vmx_construct_eptp_with_index(vcpu, view);
+}
+
 void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -3135,6 +3161,8 @@ void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd)
 		eptp = construct_eptp(vcpu, pgd);
 		vmcs_write64(EPT_POINTER, eptp);
 
+		vmx_construct_eptp_list(vcpu);
+
 		if (kvm_x86_ops.tlb_remote_flush) {
 			spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);
 			to_vmx(vcpu)->ept_pointer = eptp;
@@ -4336,6 +4364,15 @@ static void ept_set_mmio_spte_mask(void)
 	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, 0);
 }
 
+static int vmx_alloc_eptp_list_page(struct vcpu_vmx *vmx)
+{
+	vmx->eptp_list_pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!vmx->eptp_list_pg)
+		return -ENOMEM;
+
+	return 0;
+}
+
 #define VMX_XSS_EXIT_BITMAP 0
 
 /*
@@ -4426,6 +4463,10 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	if (cpu_has_vmx_encls_vmexit())
 		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
 
+	if (vmx->eptp_list_pg)
+		vmcs_write64(EPTP_LIST_ADDRESS,
+			     page_to_phys(vmx->eptp_list_pg));
+
 	if (vmx_pt_mode_is_host_guest()) {
 		memset(&vmx->pt_desc, 0, sizeof(vmx->pt_desc));
 		/* Bit[6~0] are forced to 1, writes are ignored. */
@@ -5913,6 +5954,24 @@ static void vmx_dump_dtsel(char *name, uint32_t limit)
 	       vmcs_readl(limit + GUEST_GDTR_BASE - GUEST_GDTR_LIMIT));
 }
 
+static void dump_eptp_list(void)
+{
+	phys_addr_t eptp_list_phys, *eptp_list = NULL;
+	int i;
+
+	eptp_list_phys = (phys_addr_t)vmcs_read64(EPTP_LIST_ADDRESS);
+	if (!eptp_list_phys)
+		return;
+
+	eptp_list = phys_to_virt(eptp_list_phys);
+
+	pr_err("*** EPTP Switching ***\n");
+	pr_err("EPTP List Address: %p (phys %p)\n",
+	       eptp_list, (void *)eptp_list_phys);
+	for (i = 0; i < KVM_MAX_EPT_VIEWS; i++)
+		pr_err("%d: %016llx\n", i, *(eptp_list + i));
+}
+
 void dump_vmcs(void)
 {
 	u32 vmentry_ctl, vmexit_ctl;
@@ -6061,6 +6120,23 @@ void dump_vmcs(void)
 	if (secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 		pr_err("Virtual processor ID = 0x%04x\n",
 		       vmcs_read16(VIRTUAL_PROCESSOR_ID));
+
+	dump_eptp_list();
+}
+
+static unsigned int update_ept_view(struct vcpu_vmx *vmx)
+{
+	u64 *eptp_list = phys_to_virt(page_to_phys(vmx->eptp_list_pg));
+	u64 eptp = vmcs_read64(EPT_POINTER);
+	unsigned int view;
+
+	for (view = 0; view < KVM_MAX_EPT_VIEWS; view++)
+		if (eptp_list[view] == eptp) {
+			vmx->view = view;
+			break;
+		}
+
+	return vmx->view;
 }
 
 /*
@@ -6073,6 +6149,13 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	u32 exit_reason = vmx->exit_reason;
 	u32 vectoring_info = vmx->idt_vectoring_info;
 
+	if (vmx->eptp_list_pg) {
+		unsigned int view = update_ept_view(vmx);
+		struct kvm_mmu *mmu = vcpu->arch.mmu;
+
+		mmu->root_hpa = mmu->root_hpa_altviews[view];
+	}
+
 	/*
 	 * Flush logged GPAs PML buffer, this will make dirty_bitmap more
 	 * updated. Another good is, in kvm_vm_ioctl_get_dirty_log, before
@@ -6951,12 +7034,21 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_fastpath;
 }
 
+static void vmx_destroy_eptp_list_page(struct vcpu_vmx *vmx)
+{
+	if (vmx->eptp_list_pg) {
+		__free_page(vmx->eptp_list_pg);
+		vmx->eptp_list_pg = NULL;
+	}
+}
+
 static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
{
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
 	if (enable_pml)
 		vmx_destroy_pml_buffer(vmx);
+	vmx_destroy_eptp_list_page(vmx);
 	free_vpid(vmx->vpid);
 	nested_vmx_free_vcpu(vcpu);
 	free_loaded_vmcs(vmx->loaded_vmcs);
@@ -7021,6 +7113,12 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 	if (err < 0)
 		goto free_pml;
 
+	if (kvm_eptp_switching_supported) {
+		err = vmx_alloc_eptp_list_page(vmx);
+		if (err)
+			goto free_pml;
+	}
+
 	msr_bitmap = vmx->vmcs01.msr_bitmap;
 	vmx_disable_intercept_for_msr(NULL, msr_bitmap, MSR_IA32_TSC, MSR_TYPE_R);
 	vmx_disable_intercept_for_msr(NULL, msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 14f0b9102d58..4e2f86458ca2 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -297,6 +297,7 @@ struct vcpu_vmx {
 
 	struct pt_desc pt_desc;
 
+	struct page *eptp_list_pg;
 	/* The view this vcpu operates on. */
 	u16 view;
 };
From patchwork Wed Jul 22 16:00:54 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678829
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 07/34] KVM: x86: mmu: increase mmu_memory_cache size
Date: Wed, 22 Jul 2020 19:00:54 +0300
Message-Id: <20200722160121.9601-8-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Ștefan Șicleru

We use/allocate more root_hpa's every time mmu_alloc_roots() is called.
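For concreteness: with KVM_MAX_EPT_VIEWS == 3 (patch 05/34),
mmu_alloc_direct_roots() now builds one root per view, so the minimum
top-ups below grow from 8 to 8 * 3 = 24 pages in mmu_page_cache and
from 4 to 4 * 3 = 12 entries in mmu_page_header_cache — enough to
allocate all the roots without a refill partway through.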
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/kvm/mmu/mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 553425ab3518..70461c7ef58c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1119,11 +1119,13 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 				   pte_list_desc_cache, 8 + PTE_PREFETCH_NUM);
 	if (r)
 		goto out;
-	r = mmu_topup_memory_cache_page(&vcpu->arch.mmu_page_cache, 8);
+	r = mmu_topup_memory_cache_page(&vcpu->arch.mmu_page_cache,
+					8 * KVM_MAX_EPT_VIEWS);
 	if (r)
 		goto out;
 	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
-				   mmu_page_header_cache, 4);
+				   mmu_page_header_cache,
+				   4 * KVM_MAX_EPT_VIEWS);
 out:
 	return r;
 }

From patchwork Wed Jul 22 16:00:55 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678885
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Marian Rotariu, Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 08/34] KVM: x86: add .set_ept_view()
Date: Wed, 22 Jul 2020 19:00:55 +0300
Message-Id: <20200722160121.9601-9-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Marian Rotariu

The introspection tool uses this function to set the EPT view of a
vCPU, which can be used to singlestep it on an unprotected EPT view.
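A sketch of a caller — the command-handler name is hypothetical, only
the .set_ept_view member comes from this patch; note the callback
itself validates the view index and reloads the MMU so that
vmx_load_mmu_pgd() rewrites VMCS::EPT_POINTER:

	/* Hypothetical introspection command: move a vCPU to 'view'. */
	static int kvmi_arch_cmd_set_ept_view(struct kvm_vcpu *vcpu, u16 view)
	{
		if (!kvm_x86_ops.set_ept_view)
			return -EOPNOTSUPP;

		return kvm_x86_ops.set_ept_view(vcpu, view);
	}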
Signed-off-by: Marian Rotariu
Co-developed-by: Ștefan Șicleru
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/vmx.c          | 35 ++++++++++++++++++++++++++++++++-
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1035308940fe..300f7fc43987 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1309,6 +1309,7 @@ struct kvm_x86_ops {
 	bool (*get_vmfunc_status)(void);
 	bool (*get_eptp_switching_status)(void);
 	u16 (*get_ept_view)(struct kvm_vcpu *vcpu);
+	int (*set_ept_view)(struct kvm_vcpu *vcpu, u16 view);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2024ef4d9a74..0d39487ce5c6 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4373,6 +4373,28 @@ static int vmx_alloc_eptp_list_page(struct vcpu_vmx *vmx)
 	return 0;
 }
 
+static int vmx_set_ept_view(struct kvm_vcpu *vcpu, u16 view)
+{
+	if (view >= KVM_MAX_EPT_VIEWS)
+		return -EINVAL;
+
+	if (to_vmx(vcpu)->eptp_list_pg) {
+		int r;
+
+		to_vmx(vcpu)->view = view;
+
+		/*
+		 * Reload mmu and make sure vmx_load_mmu_pgd() is called so
+		 * that VMCS::EPT_POINTER is updated accordingly
+		 */
+		kvm_mmu_unload(vcpu);
+		r = kvm_mmu_reload(vcpu);
+		WARN_ON_ONCE(r);
+	}
+
+	return 0;
+}
+
 #define VMX_XSS_EXIT_BITMAP 0
 
 /*
@@ -4463,9 +4485,15 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	if (cpu_has_vmx_encls_vmexit())
 		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
 
-	if (vmx->eptp_list_pg)
+	if (vmx->eptp_list_pg) {
+		u64 vm_function_control;
+
 		vmcs_write64(EPTP_LIST_ADDRESS,
 			     page_to_phys(vmx->eptp_list_pg));
+		vm_function_control = vmcs_read64(VM_FUNCTION_CONTROL);
+		vm_function_control |= VMX_VMFUNC_EPTP_SWITCHING;
+		vmcs_write64(VM_FUNCTION_CONTROL, vm_function_control);
+	}
 
 	if (vmx_pt_mode_is_host_guest()) {
 		memset(&vmx->pt_desc, 0, sizeof(vmx->pt_desc));
@@ -5965,6 +5993,10 @@ static void dump_eptp_list(void)
 
 	eptp_list = phys_to_virt(eptp_list_phys);
 
+	pr_err("VMFunctionControl=%08x VMFunctionControlHigh=%08x\n",
+	       vmcs_read32(VM_FUNCTION_CONTROL),
+	       vmcs_read32(VM_FUNCTION_CONTROL_HIGH));
+
 	pr_err("*** EPTP Switching ***\n");
 	pr_err("EPTP List Address: %p (phys %p)\n",
 	       eptp_list, (void *)eptp_list_phys);
@@ -8251,6 +8283,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.get_vmfunc_status = vmx_get_vmfunc_status,
 	.get_eptp_switching_status = vmx_get_eptp_switching_status,
 	.get_ept_view = vmx_get_ept_view,
+	.set_ept_view = vmx_set_ept_view,
 };
 
 static __init int hardware_setup(void)
From patchwork Wed Jul 22 16:00:56 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678827
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 09/34] KVM: x86: add .control_ept_view()
Date: Wed, 22 Jul 2020 19:00:56 +0300
Message-Id: <20200722160121.9601-10-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Ștefan Șicleru

This will be used by the introspection tool to control the EPT views to
which the guest is allowed to switch.

Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/vmx.c          | 18 +++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h          |  2 ++
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 300f7fc43987..5e241863153f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1310,6 +1310,7 @@ struct kvm_x86_ops {
 	bool (*get_eptp_switching_status)(void);
 	u16 (*get_ept_view)(struct kvm_vcpu *vcpu);
 	int (*set_ept_view)(struct kvm_vcpu *vcpu, u16 view);
+	int (*control_ept_view)(struct kvm_vcpu *vcpu, u16 view, u8 visible);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0d39487ce5c6..cbc943d217e3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3138,8 +3138,11 @@ static void vmx_construct_eptp_with_index(struct kvm_vcpu *vcpu,
 	if (!eptp_list)
 		return;
 
-	eptp_list[view] = construct_eptp(vcpu,
+	if (test_bit(view, &vmx->allowed_views))
+		eptp_list[view] = construct_eptp(vcpu,
 				vcpu->arch.mmu->root_hpa_altviews[view]);
+	else
+		eptp_list[view] = (~0ULL);
 }
 
 static void vmx_construct_eptp_list(struct kvm_vcpu *vcpu)
@@ -4395,6 +4398,18 @@ static int vmx_set_ept_view(struct kvm_vcpu *vcpu, u16 view)
 	return 0;
 }
 
+static int vmx_control_ept_view(struct kvm_vcpu *vcpu, u16 view, u8 visible)
+{
+	if (visible)
+		set_bit(view, &to_vmx(vcpu)->allowed_views);
+	else
+		clear_bit(view, &to_vmx(vcpu)->allowed_views);
+
+	vmx_construct_eptp_with_index(vcpu, view);
+
+	return 0;
+}
+
 #define VMX_XSS_EXIT_BITMAP 0
 
 /*
@@ -8284,6 +8299,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.get_eptp_switching_status = vmx_get_eptp_switching_status,
 	.get_ept_view = vmx_get_ept_view,
 	.set_ept_view = vmx_set_ept_view,
+	.control_ept_view = vmx_control_ept_view,
 };
 
 static __init int hardware_setup(void)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 4e2f86458ca2..38d50fc7357b 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -300,6 +300,8 @@ struct vcpu_vmx {
 	struct page *eptp_list_pg;
 	/* The view this vcpu operates on. */
 	u16 view;
+	/* Visible EPT views bitmap for in-guest VMFUNC. */
+	unsigned long allowed_views;
 };
 
 enum ept_pointers_status {
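Usage sketch (hypothetical caller, only the callback is from this
patch): hiding a view writes ~0ULL into its EPTP-list slot, so an
in-guest VMFUNC to that index can no longer land on a valid EPT root,
while re-exposing it rebuilds the entry from root_hpa_altviews:

	kvm_x86_ops.control_ept_view(vcpu, 2, 0);	/* hide view 2 */
	kvm_x86_ops.control_ept_view(vcpu, 2, 1);	/* expose view 2 again */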
From patchwork Wed Jul 22 16:00:57 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11678833
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini,
    Ștefan Șicleru, Adalbert Lazăr
Subject: [RFC PATCH v1 10/34] KVM: x86: page track: allow page tracking for different EPT views
Date: Wed, 22 Jul 2020 19:00:57 +0300
Message-Id: <20200722160121.9601-11-alazar@bitdefender.com>
In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com>
References: <20200722160121.9601-1-alazar@bitdefender.com>

From: Ștefan Șicleru

The introspection tool uses this to set distinct access rights on
different EPT views.
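For example, with the extra view parameter the introspection side can
write-track a page in one view while leaving another view's mappings
untouched (illustrative calls; all pre-existing callers pass view 0):

	/* Track writes to 'gfn' only through EPT view 1. */
	kvm_slot_page_track_add_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE, 1);
	...
	kvm_slot_page_track_remove_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE, 1);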
Signed-off-by: Ștefan Șicleru
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/kvm_host.h       |  2 +-
 arch/x86/include/asm/kvm_page_track.h |  4 +-
 arch/x86/kvm/kvmi.c                   |  6 ++-
 arch/x86/kvm/mmu.h                    |  9 ++--
 arch/x86/kvm/mmu/mmu.c                | 60 +++++++++++++++++----------
 arch/x86/kvm/mmu/page_track.c         | 56 +++++++++++++------------
 drivers/gpu/drm/i915/gvt/kvmgt.c      |  8 ++--
 7 files changed, 86 insertions(+), 59 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5e241863153f..2fbb26b54cf1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -860,7 +860,7 @@ struct kvm_lpage_info {
 struct kvm_arch_memory_slot {
 	struct kvm_rmap_head *rmap[KVM_NR_PAGE_SIZES];
 	struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
-	unsigned short *gfn_track[KVM_PAGE_TRACK_MAX];
+	unsigned short *gfn_track[KVM_MAX_EPT_VIEWS][KVM_PAGE_TRACK_MAX];
 };
 
diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index c10f0f65c77a..96d2ab7da4a7 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -109,10 +109,10 @@ int kvm_page_track_create_memslot(struct kvm *kvm,
 				  struct kvm_memory_slot *slot,
 void kvm_slot_page_track_add_page(struct kvm *kvm,
 				  struct kvm_memory_slot *slot, gfn_t gfn,
-				  enum kvm_page_track_mode mode);
+				  enum kvm_page_track_mode mode, u16 view);
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
 				     struct kvm_memory_slot *slot, gfn_t gfn,
-				     enum kvm_page_track_mode mode);
+				     enum kvm_page_track_mode mode, u16 view);
 bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
 			      enum kvm_page_track_mode mode);
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 4e75858c03b4..7b3b64d27d18 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -1215,11 +1215,13 @@ void kvmi_arch_update_page_tracking(struct kvm *kvm,
 		if (m->access & allow_bit) {
 			if (slot_tracked) {
 				kvm_slot_page_track_remove_page(kvm, slot,
-								m->gfn, mode);
+								m->gfn, mode,
+								0);
 				clear_bit(slot->id, arch->active[mode]);
 			}
 		} else if (!slot_tracked) {
-			kvm_slot_page_track_add_page(kvm, slot, m->gfn, mode);
+			kvm_slot_page_track_add_page(kvm, slot, m->gfn, mode,
+						     0);
 			set_bit(slot->id, arch->active[mode]);
 		}
 	}
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index e2c0518af750..2692b14fb605 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -221,11 +221,14 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
 void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-				    struct kvm_memory_slot *slot, u64 gfn);
+				    struct kvm_memory_slot *slot, u64 gfn,
+				    u16 view);
 bool kvm_mmu_slot_gfn_read_protect(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, u64 gfn);
+				   struct kvm_memory_slot *slot, u64 gfn,
+				   u16 view);
 bool kvm_mmu_slot_gfn_exec_protect(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, u64 gfn);
+				   struct kvm_memory_slot *slot, u64 gfn,
+				   u16 view);
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu, gpa_t l2_gpa);
 
 int kvm_mmu_post_init_vm(struct kvm *kvm);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 70461c7ef58c..cca12982b795 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1231,9 +1231,9 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	/* the non-leaf shadow pages are keeping readonly. */
 	if (sp->role.level > PG_LEVEL_4K) {
 		kvm_slot_page_track_add_page(kvm, slot, gfn,
-					     KVM_PAGE_TRACK_PREWRITE);
+					     KVM_PAGE_TRACK_PREWRITE, 0);
 		kvm_slot_page_track_add_page(kvm, slot, gfn,
-					     KVM_PAGE_TRACK_WRITE);
+					     KVM_PAGE_TRACK_WRITE, 0);
 		return;
 	}
 
@@ -1263,9 +1263,9 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	slot = __gfn_to_memslot(slots, gfn);
 	if (sp->role.level > PG_LEVEL_4K) {
 		kvm_slot_page_track_remove_page(kvm, slot, gfn,
-						KVM_PAGE_TRACK_PREWRITE);
+						KVM_PAGE_TRACK_PREWRITE, 0);
 		kvm_slot_page_track_remove_page(kvm, slot, gfn,
-						KVM_PAGE_TRACK_WRITE);
+						KVM_PAGE_TRACK_WRITE, 0);
 		return;
 	}
 
@@ -1617,40 +1617,52 @@ static bool spte_exec_protect(u64 *sptep)
 
 static bool __rmap_write_protect(struct kvm *kvm,
 				 struct kvm_rmap_head *rmap_head,
-				 bool pt_protect)
+				 bool pt_protect, u16 view)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
 	bool flush = false;
+	struct kvm_mmu_page *sp;
 
-	for_each_rmap_spte(rmap_head, &iter, sptep)
-		flush |= spte_write_protect(sptep, pt_protect);
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		sp = page_header(__pa(sptep));
+		if (view == 0 || (view > 0 && sp->view == view))
+			flush |= spte_write_protect(sptep, pt_protect);
+	}
 
 	return flush;
 }
 
 static bool __rmap_read_protect(struct kvm *kvm,
-				struct kvm_rmap_head *rmap_head)
+				struct kvm_rmap_head *rmap_head, u16 view)
 {
 	struct rmap_iterator iter;
+	struct kvm_mmu_page *sp;
 	bool flush = false;
 	u64 *sptep;
 
-	for_each_rmap_spte(rmap_head, &iter, sptep)
-		flush |= spte_read_protect(sptep);
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		sp = page_header(__pa(sptep));
+		if (view == 0 || (view > 0 && sp->view == view))
+			flush |= spte_read_protect(sptep);
+	}
 
 	return flush;
 }
 
 static bool __rmap_exec_protect(struct kvm *kvm,
-				struct kvm_rmap_head *rmap_head)
+				struct kvm_rmap_head *rmap_head, u16 view)
 {
 	struct rmap_iterator iter;
+	struct kvm_mmu_page *sp;
 	bool flush = false;
 	u64 *sptep;
 
-	for_each_rmap_spte(rmap_head, &iter, sptep)
-		flush |= spte_exec_protect(sptep);
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		sp = page_header(__pa(sptep));
+		if (view == 0 || (view > 0 && sp->view == view))
+			flush |= spte_exec_protect(sptep);
+	}
 
 	return flush;
 }
@@ -1745,7 +1757,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
-		__rmap_write_protect(kvm, rmap_head, false);
+		__rmap_write_protect(kvm, rmap_head, false, 0);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1816,7 +1828,8 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu, gpa_t l2_gpa)
 }
 
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-				    struct kvm_memory_slot *slot, u64 gfn)
+				    struct kvm_memory_slot *slot, u64 gfn,
+				    u16 view)
 {
 	struct kvm_rmap_head *rmap_head;
 	int i;
@@ -1824,14 +1837,16 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 
 	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+		write_protected |= __rmap_write_protect(kvm, rmap_head, true,
+							view);
 	}
 
 	return write_protected;
 }
 
 bool kvm_mmu_slot_gfn_read_protect(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, u64 gfn)
+				   struct kvm_memory_slot *slot, u64 gfn,
+				   u16 view)
 {
 	struct kvm_rmap_head *rmap_head;
 	bool read_protected = false;
@@ -1839,14 +1854,15 @@ bool kvm_mmu_slot_gfn_read_protect(struct kvm *kvm,
 
 	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		read_protected |= __rmap_read_protect(kvm, rmap_head);
+		read_protected |= __rmap_read_protect(kvm, rmap_head, view);
 	}
 
 	return read_protected;
 }
 
 bool kvm_mmu_slot_gfn_exec_protect(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, u64 gfn)
+				   struct kvm_memory_slot *slot, u64 gfn,
+				   u16 view)
 {
 	struct kvm_rmap_head *rmap_head;
 	bool exec_protected = false;
@@ -1854,7 +1870,7 @@ bool kvm_mmu_slot_gfn_exec_protect(struct kvm *kvm,
 
 	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		exec_protected |= __rmap_exec_protect(kvm, rmap_head);
+		exec_protected |= __rmap_exec_protect(kvm, rmap_head, view);
 	}
 
 	return exec_protected;
@@ -1865,7 +1881,7 @@ static bool rmap_write_protect(struct kvm_vcpu *vcpu, u64 gfn)
 	struct kvm_memory_slot *slot;
 
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn);
+	return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, 0);
 }
 
 static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
@@ -6008,7 +6024,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 static bool slot_rmap_write_protect(struct kvm *kvm,
 				    struct kvm_rmap_head *rmap_head)
 {
-	return __rmap_write_protect(kvm, rmap_head, false);
+	return __rmap_write_protect(kvm, rmap_head, false, 0);
 }
 
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index b593bcf80be0..bf26b21cfeb8 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -20,12 +20,13 @@
 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot)
 {
-	int i;
+	int i, view;
 
-	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
-		kvfree(slot->arch.gfn_track[i]);
-		slot->arch.gfn_track[i] = NULL;
-	}
+	for (view = 0; view < KVM_MAX_EPT_VIEWS; view++)
+		for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
+			kvfree(slot->arch.gfn_track[view][i]);
+			slot->arch.gfn_track[view][i] = NULL;
+		}
 }
 
 int kvm_page_track_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
@@ -33,16 +34,17 @@ int kvm_page_track_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 {
 	struct kvm_page_track_notifier_head *head;
 	struct kvm_page_track_notifier_node *n;
-	int idx;
-	int i;
+	int view, idx, i;
 
-	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
-		slot->arch.gfn_track[i] =
-			kvcalloc(npages, sizeof(*slot->arch.gfn_track[i]),
-				 GFP_KERNEL_ACCOUNT);
-		if (!slot->arch.gfn_track[i])
-			goto track_free;
-	}
+	for (view = 0; view < KVM_MAX_EPT_VIEWS; view++)
+		for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
+			slot->arch.gfn_track[view][i] =
+				kvcalloc(npages,
+					 sizeof(*slot->arch.gfn_track[view][i]),
+					 GFP_KERNEL_ACCOUNT);
+			if (!slot->arch.gfn_track[view][i])
+				goto track_free;
+		}
 
 	head = &kvm->arch.track_notifier_head;
 
@@ -71,18 +73,19 @@ static inline bool page_track_mode_is_valid(enum kvm_page_track_mode mode)
 }
 
 static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
-			     enum kvm_page_track_mode mode, short count)
+			     enum kvm_page_track_mode mode, short count,
+			     u16 view)
 {
 	int index, val;
 
 	index = gfn_to_index(gfn, slot->base_gfn, PG_LEVEL_4K);
 
-	val = slot->arch.gfn_track[mode][index];
+	val = slot->arch.gfn_track[view][mode][index];
 
 	if (WARN_ON(val + count < 0 || val + count > USHRT_MAX))
 		return;
 
-	slot->arch.gfn_track[mode][index] += count;
+	slot->arch.gfn_track[view][mode][index] += count;
 }
 
 /*
@@ -99,13 +102,13 @@ static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
 */
kvm_memory_slot *slot, gfn_t gfn, - enum kvm_page_track_mode mode) + enum kvm_page_track_mode mode, u16 view) { if (WARN_ON(!page_track_mode_is_valid(mode))) return; - update_gfn_track(slot, gfn, mode, 1); + update_gfn_track(slot, gfn, mode, 1, view); /* * new track stops large page mapping for the @@ -114,13 +117,13 @@ void kvm_slot_page_track_add_page(struct kvm *kvm, kvm_mmu_gfn_disallow_lpage(slot, gfn); if (mode == KVM_PAGE_TRACK_PREWRITE || mode == KVM_PAGE_TRACK_WRITE) { - if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn)) + if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, view)) kvm_flush_remote_tlbs(kvm); } else if (mode == KVM_PAGE_TRACK_PREREAD) { - if (kvm_mmu_slot_gfn_read_protect(kvm, slot, gfn)) + if (kvm_mmu_slot_gfn_read_protect(kvm, slot, gfn, view)) kvm_flush_remote_tlbs(kvm); } else if (mode == KVM_PAGE_TRACK_PREEXEC) { - if (kvm_mmu_slot_gfn_exec_protect(kvm, slot, gfn)) + if (kvm_mmu_slot_gfn_exec_protect(kvm, slot, gfn, view)) kvm_flush_remote_tlbs(kvm); } } @@ -141,12 +144,12 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_add_page); */ void kvm_slot_page_track_remove_page(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, - enum kvm_page_track_mode mode) + enum kvm_page_track_mode mode, u16 view) { if (WARN_ON(!page_track_mode_is_valid(mode))) return; - update_gfn_track(slot, gfn, mode, -1); + update_gfn_track(slot, gfn, mode, -1, view); /* * allow large page mapping for the tracked page @@ -163,7 +166,7 @@ bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn, enum kvm_page_track_mode mode) { struct kvm_memory_slot *slot; - int index; + int index, view; if (WARN_ON(!page_track_mode_is_valid(mode))) return false; @@ -173,7 +176,8 @@ bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn, return false; index = gfn_to_index(gfn, slot->base_gfn, PG_LEVEL_4K); - return !!READ_ONCE(slot->arch.gfn_track[mode][index]); + view = kvm_get_ept_view(vcpu); + return !!READ_ONCE(slot->arch.gfn_track[view][mode][index]); } void kvm_page_track_cleanup(struct kvm *kvm) diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c index 4e370b216365..98e2e75c0d22 100644 --- a/drivers/gpu/drm/i915/gvt/kvmgt.c +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c @@ -1706,7 +1706,8 @@ static int kvmgt_page_track_add(unsigned long handle, u64 gfn) if (kvmgt_gfn_is_write_protected(info, gfn)) goto out; - kvm_slot_page_track_add_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE); + kvm_slot_page_track_add_page(kvm, slot, gfn, + KVM_PAGE_TRACK_WRITE, 0); kvmgt_protect_table_add(info, gfn); out: @@ -1740,7 +1741,8 @@ static int kvmgt_page_track_remove(unsigned long handle, u64 gfn) if (!kvmgt_gfn_is_write_protected(info, gfn)) goto out; - kvm_slot_page_track_remove_page(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE); + kvm_slot_page_track_remove_page(kvm, slot, gfn, + KVM_PAGE_TRACK_WRITE, 0); kvmgt_protect_table_del(info, gfn); out: @@ -1775,7 +1777,7 @@ static void kvmgt_page_track_flush_slot(struct kvm *kvm, gfn = slot->base_gfn + i; if (kvmgt_gfn_is_write_protected(info, gfn)) { kvm_slot_page_track_remove_page(kvm, slot, gfn, - KVM_PAGE_TRACK_WRITE); + KVM_PAGE_TRACK_WRITE, 0); kvmgt_protect_table_del(info, gfn); } } From patchwork Wed Jul 22 16:00:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678883 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with 
ESMTP id E51DE618 for ; Wed, 22 Jul 2020 16:02:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CD650206F5 for ; Wed, 22 Jul 2020 16:02:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730494AbgGVQCW (ORCPT ); Wed, 22 Jul 2020 12:02:22 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38024 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729066AbgGVQBi (ORCPT ); Wed, 22 Jul 2020 12:01:38 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 3A0B5305D7F7; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 325E43011F3C; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 11/34] KVM: x86: mmu: allow zapping shadow pages for specific EPT views Date: Wed, 22 Jul 2020 19:00:58 +0300 Message-Id: <20200722160121.9601-12-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru Add a view mask for kvm_mmu_zap_all() in order to allow zapping shadow pages for specific EPT views. This is required when an introspected VM is unhooked. In that case, shadow pages that belong to non-default views will be zapped. 
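For illustration, the mask has one bit per EPT view, and a caller that wants to spare only the default view negates that view's bit (a minimal sketch; the helper name is hypothetical, but the unhook path later in this series builds its mask the same way):

/* Sketch: bit N set in the mask means "zap the shadow pages of view N". */
static inline u16 zap_mask_keep_view(u16 view_to_keep)
{
	return ~(1 << view_to_keep);
}

/* e.g. kvm_mmu_zap_all(kvm, zap_mask_keep_view(0)) zaps every view but the default one */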
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/mmu/mmu.c | 4 +++- arch/x86/kvm/x86.c | 4 +++- 3 files changed, 7 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2fbb26b54cf1..519b8210b8ef 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1392,7 +1392,7 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm, void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn_offset, unsigned long mask); -void kvm_mmu_zap_all(struct kvm *kvm); +void kvm_mmu_zap_all(struct kvm *kvm, u16 view_mask); void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen); unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm); void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index cca12982b795..22c83192bba1 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6166,7 +6166,7 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm, } EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty); -void kvm_mmu_zap_all(struct kvm *kvm) +void kvm_mmu_zap_all(struct kvm *kvm, u16 view_mask) { struct kvm_mmu_page *sp, *node; LIST_HEAD(invalid_list); @@ -6175,6 +6175,8 @@ void kvm_mmu_zap_all(struct kvm *kvm) spin_lock(&kvm->mmu_lock); restart: list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) { + if (!test_bit(sp->view, (unsigned long *)&view_mask)) + continue; if (sp->role.invalid && sp->root_count) continue; if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2e2c56a37bdb..78aacac839bb 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10406,7 +10406,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, void kvm_arch_flush_shadow_all(struct kvm *kvm) { - kvm_mmu_zap_all(kvm); + u16 ept_views_to_keep = 0; + + kvm_mmu_zap_all(kvm, ~ept_views_to_keep); } void kvm_arch_flush_shadow_memslot(struct kvm *kvm, From patchwork Wed Jul 22 16:00:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678875 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 83F78618 for ; Wed, 22 Jul 2020 16:02:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 771A420771 for ; Wed, 22 Jul 2020 16:02:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728956AbgGVQCQ (ORCPT ); Wed, 22 Jul 2020 12:02:16 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38026 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729191AbgGVQBj (ORCPT ); Wed, 22 Jul 2020 12:01:39 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 4FF5D305D7FE; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 3F05D30003E9; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , 
=?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 12/34] KVM: introspection: extend struct kvmi_features with the EPT views status support Date: Wed, 22 Jul 2020 19:00:59 +0300 Message-Id: <20200722160121.9601-13-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru The introspection tool will use these new fields to check the hardware support before using the related introspection commands. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 6 ++++-- arch/x86/include/uapi/asm/kvmi.h | 4 +++- arch/x86/kvm/kvmi.c | 4 ++++ tools/testing/selftests/kvm/x86_64/kvmi_test.c | 2 ++ 4 files changed, 13 insertions(+), 3 deletions(-) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index 62138fa4b65c..234eacec4db1 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -263,11 +263,13 @@ For x86 struct kvmi_features { __u8 singlestep; - __u8 padding[7]; + __u8 vmfunc; + __u8 eptp; + __u8 padding[5]; }; Returns the introspection API version and some of the features supported -by the hardware. +by the hardware (eg. alternate EPT views). This command is always allowed and successful. diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index 32af803f1d70..51b399d50a2a 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -147,7 +147,9 @@ struct kvmi_event_msr_reply { struct kvmi_features { __u8 singlestep; - __u8 padding[7]; + __u8 vmfunc; + __u8 eptp; + __u8 padding[5]; }; #endif /* _UAPI_ASM_X86_KVMI_H */ diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 7b3b64d27d18..25c1f8f2e221 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1356,6 +1356,10 @@ static void kvmi_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot, void kvmi_arch_features(struct kvmi_features *feat) { feat->singlestep = !!kvm_x86_ops.control_singlestep; + feat->vmfunc = kvm_x86_ops.get_vmfunc_status && + kvm_x86_ops.get_vmfunc_status(); + feat->eptp = kvm_x86_ops.get_eptp_switching_status && + kvm_x86_ops.get_eptp_switching_status(); } bool kvmi_arch_start_singlestep(struct kvm_vcpu *vcpu) diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index e968b1a6f969..33fffcb3a171 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -443,6 +443,8 @@ static void test_cmd_get_version(void) pr_info("KVMI version: %u\n", rpl.version); pr_info("\tsinglestep: %u\n", features.singlestep); + pr_info("\tvmfunc: %u\n", features.vmfunc); + pr_info("\teptp: %u\n", features.eptp); } static void cmd_vm_check_command(__u16 id, __u16 padding, int expected_err) From patchwork Wed Jul 22 16:01:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678887 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A8B7A14E3 for ; Wed, 22 Jul 2020 16:02:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) 
with ESMTP id 9AC5120717 for ; Wed, 22 Jul 2020 16:02:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728612AbgGVQCV (ORCPT ); Wed, 22 Jul 2020 12:02:21 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37956 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728980AbgGVQBj (ORCPT ); Wed, 22 Jul 2020 12:01:39 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 540B3305D761; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 44E27305FFA0; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 13/34] KVM: introspection: add KVMI_VCPU_GET_EPT_VIEW Date: Wed, 22 Jul 2020 19:01:00 +0300 Message-Id: <20200722160121.9601-14-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru The introspection tool uses this command to find out the EPT view a vCPU operates on. EPT switching can be used either to singlestep vCPUs on an unprotected EPT view or to use #VE in order to filter out VM-exits caused by EPT violations. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 34 +++++++++++++++++++ arch/x86/include/uapi/asm/kvmi.h | 6 ++++ arch/x86/kvm/kvmi.c | 5 +++ include/uapi/linux/kvmi.h | 1 + .../testing/selftests/kvm/x86_64/kvmi_test.c | 28 +++++++++++++++ virt/kvm/introspection/kvmi_int.h | 1 + virt/kvm/introspection/kvmi_msg.c | 14 ++++++++ 7 files changed, 89 insertions(+) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index 234eacec4db1..76a2d0125f78 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -1120,6 +1120,40 @@ the address cannot be translated. * -KVM_EINVAL - the padding is not zero * -KVM_EAGAIN - the selected vCPU can't be introspected yet +26. KVMI_VCPU_GET_EPT_VIEW +-------------------------- + +:Architecture: x86 +:Versions: >= 1 +:Parameters: + +:: + + struct kvmi_vcpu_hdr; + +:Returns: + +:: + + struct kvmi_error_code; + struct kvmi_vcpu_get_ept_view_reply { + __u16 view; + __u16 padding1; + __u32 padding2; + }; + +Returns the EPT ``view`` the provided vCPU operates on. + +Before getting EPT views, the introspection tool should use +*KVMI_GET_VERSION* to check if the hardware supports VMFUNC and the +EPTP switching mechanism. If the hardware does not provide support for +these features, the returned EPT view will be zero.
+ +* -KVM_EINVAL - the selected vCPU is invalid +* -KVM_EINVAL - the padding is not zero +* -KVM_EAGAIN - the selected vCPU can't be introspected yet + Events ====== diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index 51b399d50a2a..3087c685c232 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -152,4 +152,10 @@ struct kvmi_features { __u8 padding[5]; }; +struct kvmi_vcpu_get_ept_view_reply { + __u16 view; + __u16 padding1; + __u32 padding2; +}; + #endif /* _UAPI_ASM_X86_KVMI_H */ diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 25c1f8f2e221..bd31809ff812 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1417,3 +1417,8 @@ bool kvmi_update_ad_flags(struct kvm_vcpu *vcpu) return ret; } + +u16 kvmi_arch_cmd_get_ept_view(struct kvm_vcpu *vcpu) +{ + return kvm_get_ept_view(vcpu); +} diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index 3c15c17d28e3..cf3422ec60a8 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -49,6 +49,7 @@ enum { KVMI_VCPU_CONTROL_SINGLESTEP = 24, KVMI_VCPU_TRANSLATE_GVA = 25, + KVMI_VCPU_GET_EPT_VIEW = 26, KVMI_NUM_MESSAGES }; diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index 33fffcb3a171..74eafbcae14a 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -2071,6 +2071,33 @@ static void test_cmd_translate_gva(struct kvm_vm *vm) (vm_vaddr_t)-1, (vm_paddr_t)-1); } +static __u16 get_ept_view(struct kvm_vm *vm) +{ + struct { + struct kvmi_msg_hdr hdr; + struct kvmi_vcpu_hdr vcpu_hdr; + } req = {}; + struct kvmi_vcpu_get_ept_view_reply rpl; + + test_vcpu0_command(vm, KVMI_VCPU_GET_EPT_VIEW, + &req.hdr, sizeof(req), &rpl, sizeof(rpl)); + + return rpl.view; +} + +static void test_cmd_vcpu_get_ept_view(struct kvm_vm *vm) +{ + __u16 view; + + if (!features.eptp) { + print_skip("EPT views not supported"); + return; + } + + view = get_ept_view(vm); + pr_info("EPT view %u\n", view); +} + static void test_introspection(struct kvm_vm *vm) { srandom(time(0)); @@ -2107,6 +2134,7 @@ static void test_introspection(struct kvm_vm *vm) test_event_pf(vm); test_cmd_vcpu_control_singlestep(vm); test_cmd_translate_gva(vm); + test_cmd_vcpu_get_ept_view(vm); unhook_introspection(vm); } diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h index cb8453f0fb87..f88999bf59e8 100644 --- a/virt/kvm/introspection/kvmi_int.h +++ b/virt/kvm/introspection/kvmi_int.h @@ -142,5 +142,6 @@ void kvmi_arch_features(struct kvmi_features *feat); bool kvmi_arch_start_singlestep(struct kvm_vcpu *vcpu); bool kvmi_arch_stop_singlestep(struct kvm_vcpu *vcpu); gpa_t kvmi_arch_cmd_translate_gva(struct kvm_vcpu *vcpu, gva_t gva); +u16 kvmi_arch_cmd_get_ept_view(struct kvm_vcpu *vcpu); #endif diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c index d8874bd7a8b7..6cb3473190db 100644 --- a/virt/kvm/introspection/kvmi_msg.c +++ b/virt/kvm/introspection/kvmi_msg.c @@ -661,6 +661,19 @@ static int handle_vcpu_translate_gva(const struct kvmi_vcpu_msg_job *job, return kvmi_msg_vcpu_reply(job, msg, 0, &rpl, sizeof(rpl)); } +static int handle_vcpu_get_ept_view(const struct kvmi_vcpu_msg_job *job, + const struct kvmi_msg_hdr *msg, + const void *req) +{ + struct kvmi_vcpu_get_ept_view_reply rpl; + + memset(&rpl, 0, sizeof(rpl)); + + rpl.view = kvmi_arch_cmd_get_ept_view(job->vcpu); + + return kvmi_msg_vcpu_reply(job, 
msg, 0, &rpl, sizeof(rpl)); +} + /* * These functions are executed from the vCPU thread. The receiving thread * passes the messages using a newly allocated 'struct kvmi_vcpu_msg_job' @@ -675,6 +688,7 @@ static int(*const msg_vcpu[])(const struct kvmi_vcpu_msg_job *, [KVMI_VCPU_CONTROL_MSR] = handle_vcpu_control_msr, [KVMI_VCPU_CONTROL_SINGLESTEP] = handle_vcpu_control_singlestep, [KVMI_VCPU_GET_CPUID] = handle_vcpu_get_cpuid, + [KVMI_VCPU_GET_EPT_VIEW] = handle_vcpu_get_ept_view, [KVMI_VCPU_GET_INFO] = handle_vcpu_get_info, [KVMI_VCPU_GET_MTRR_TYPE] = handle_vcpu_get_mtrr_type, [KVMI_VCPU_GET_REGISTERS] = handle_vcpu_get_registers, From patchwork Wed Jul 22 16:01:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678881 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C06CB14E3 for ; Wed, 22 Jul 2020 16:02:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B2E3A208E4 for ; Wed, 22 Jul 2020 16:02:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730106AbgGVQBj (ORCPT ); Wed, 22 Jul 2020 12:01:39 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37952 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728956AbgGVQBi (ORCPT ); Wed, 22 Jul 2020 12:01:38 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 62B3C305D763; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 4C8A3305FFA1; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 14/34] KVM: introspection: add 'view' field to struct kvmi_event_arch Date: Wed, 22 Jul 2020 19:01:01 +0300 Message-Id: <20200722160121.9601-15-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru Report the view a vCPU operates on when sending events to the introspection tool. 
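As a rough tool-side sketch (assuming the struct kvmi_event layout used by this series, with the ``view`` field added by the hunk below; the handler itself is hypothetical):

#include <stdio.h>
#include <linux/kvmi.h>	/* struct kvmi_event, struct kvmi_event_arch */

static void handle_vcpu_event(const struct kvmi_event *ev)
{
	/* ev->arch.view is the EPT view the vCPU was operating on
	 * at the moment the event was sent. */
	printf("event %u from vCPU %u, EPT view %u\n",
	       ev->event, ev->vcpu, ev->arch.view);
}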
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/uapi/asm/kvmi.h | 4 +++- arch/x86/kvm/kvmi.c | 1 + 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index 3087c685c232..a13a98fa863f 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -12,7 +12,9 @@ struct kvmi_event_arch { __u8 mode; /* 2, 4 or 8 */ - __u8 padding[7]; + __u8 padding1; + __u16 view; + __u32 padding2; struct kvm_regs regs; struct kvm_sregs sregs; struct { diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index bd31809ff812..292606902338 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -102,6 +102,7 @@ void kvmi_arch_setup_event(struct kvm_vcpu *vcpu, struct kvmi_event *ev) kvm_arch_vcpu_get_sregs(vcpu, &event->sregs); ev->arch.mode = kvmi_vcpu_mode(vcpu, &event->sregs); kvmi_get_msrs(vcpu, event); + event->view = kvm_get_ept_view(vcpu); } int kvmi_arch_cmd_vcpu_get_info(struct kvm_vcpu *vcpu, From patchwork Wed Jul 22 16:01:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678873 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 692201667 for ; Wed, 22 Jul 2020 16:02:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 56810206F5 for ; Wed, 22 Jul 2020 16:02:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728236AbgGVQCP (ORCPT ); Wed, 22 Jul 2020 12:02:15 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38030 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729114AbgGVQBk (ORCPT ); Wed, 22 Jul 2020 12:01:40 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 7876E305D766; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 6079E305FFA2; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 15/34] KVM: introspection: add KVMI_VCPU_SET_EPT_VIEW Date: Wed, 22 Jul 2020 19:01:02 +0300 Message-Id: <20200722160121.9601-16-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru The introspection tool uses this command to switch a vCPU to a different EPT view, which can be used, for example, to singlestep vCPUs on an unprotected EPT view.
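For reference, a minimal sketch of how the tool side fills the request (mirroring the selftest added below; the helper name is illustrative, and the kvmi_msg_hdr/kvmi_vcpu_hdr framing is omitted):

#include <string.h>

static void fill_set_ept_view(struct kvmi_vcpu_set_ept_view *cmd, __u16 view)
{
	memset(cmd, 0, sizeof(*cmd));	/* padding1/padding2 must be zero */
	cmd->view = view;	/* rejected with -KVM_EINVAL unless view < KVM_MAX_EPT_VIEWS */
}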
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 36 ++++++++++++++++ arch/x86/include/uapi/asm/kvmi.h | 6 +++ arch/x86/kvm/kvmi.c | 9 ++++ include/uapi/linux/kvmi.h | 1 + .../testing/selftests/kvm/x86_64/kvmi_test.c | 43 +++++++++++++++++++ virt/kvm/introspection/kvmi_int.h | 6 +++ virt/kvm/introspection/kvmi_msg.c | 20 +++++++++ 7 files changed, 121 insertions(+) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index 76a2d0125f78..02f03c62adef 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -1154,6 +1154,42 @@ be zero. * -KVM_EINVAL - the padding is not zero * -KVM_EAGAIN - the selected vCPU can't be introspected yet +27. KVMI_VCPU_SET_EPT_VIEW +-------------------------- + +:Architecture: x86 +:Versions: >= 1 +:Parameters: + +:: + + struct kvmi_vcpu_hdr; + struct kvmi_vcpu_set_ept_view { + __u16 view; + __u16 padding1; + __u32 padding2; + }; + +:Returns: + +:: + + struct kvmi_error_code; + +Configures the vCPU to use the provided ``view``. + +Before switching EPT views, the introspection tool should use +*KVMI_GET_VERSION* to check if the hardware has support for VMFUNC and +EPTP switching mechanism (see **KVMI_GET_VERSION**). + +:Errors: + +* -KVM_EINVAL - the selected vCPU is invalid +* -KVM_EINVAL - padding is not zero +* -KVM_EINVAL - the selected EPT view is invalid +* -KVM_EAGAIN - the selected vCPU can't be introspected yet +* -KVM_EOPNOTSUPP - an EPT view was selected but the hardware doesn't support it + Events ====== diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index a13a98fa863f..f7a080d5e227 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -160,4 +160,10 @@ struct kvmi_vcpu_get_ept_view_reply { __u32 padding2; }; +struct kvmi_vcpu_set_ept_view { + __u16 view; + __u16 padding1; + __u32 padding2; +}; + #endif /* _UAPI_ASM_X86_KVMI_H */ diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 292606902338..99ea8ef70be2 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1423,3 +1423,12 @@ u16 kvmi_arch_cmd_get_ept_view(struct kvm_vcpu *vcpu) { return kvm_get_ept_view(vcpu); } + +int kvmi_arch_cmd_set_ept_view(struct kvm_vcpu *vcpu, u16 view) +{ + + if (!kvm_x86_ops.set_ept_view) + return -KVM_EINVAL; + + return kvm_x86_ops.set_ept_view(vcpu, view); +} diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index cf3422ec60a8..8204661d944d 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -50,6 +50,7 @@ enum { KVMI_VCPU_CONTROL_SINGLESTEP = 24, KVMI_VCPU_TRANSLATE_GVA = 25, KVMI_VCPU_GET_EPT_VIEW = 26, + KVMI_VCPU_SET_EPT_VIEW = 27, KVMI_NUM_MESSAGES }; diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index 74eafbcae14a..c6f7d10563db 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -20,6 +20,8 @@ #include "linux/kvm_para.h" #include "linux/kvmi.h" +#define KVM_MAX_EPT_VIEWS 3 + #define VCPU_ID 5 #define X86_FEATURE_XSAVE (1<<26) @@ -2098,6 +2100,46 @@ static void test_cmd_vcpu_get_ept_view(struct kvm_vm *vm) pr_info("EPT view %u\n", view); } +static void set_ept_view(struct kvm_vm *vm, __u16 view) +{ + struct { + struct kvmi_msg_hdr hdr; + struct kvmi_vcpu_hdr vcpu_hdr; + struct kvmi_vcpu_set_ept_view cmd; + } req = {}; + + req.cmd.view = view; + + test_vcpu0_command(vm, KVMI_VCPU_SET_EPT_VIEW, + &req.hdr, 
sizeof(req), NULL, 0); +} + +static void test_cmd_vcpu_set_ept_view(struct kvm_vm *vm) +{ + __u16 old_view; + __u16 new_view; + __u16 check_view; + + if (!features.eptp) { + print_skip("EPT views not supported"); + return; + } + + old_view = get_ept_view(vm); + + new_view = (old_view + 1) % KVM_MAX_EPT_VIEWS; + pr_info("Change EPT view from %u to %u\n", old_view, new_view); + set_ept_view(vm, new_view); + + check_view = get_ept_view(vm); + TEST_ASSERT(check_view == new_view, + "Switching EPT view failed, found ept view (%u), expected view (%u)\n", + check_view, new_view); + + pr_info("Change EPT view from %u to %u\n", check_view, old_view); + set_ept_view(vm, old_view); +} + static void test_introspection(struct kvm_vm *vm) { srandom(time(0)); @@ -2135,6 +2177,7 @@ static void test_introspection(struct kvm_vm *vm) test_cmd_vcpu_control_singlestep(vm); test_cmd_translate_gva(vm); test_cmd_vcpu_get_ept_view(vm); + test_cmd_vcpu_set_ept_view(vm); unhook_introspection(vm); } diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h index f88999bf59e8..f093aad2f804 100644 --- a/virt/kvm/introspection/kvmi_int.h +++ b/virt/kvm/introspection/kvmi_int.h @@ -32,6 +32,11 @@ static inline bool is_event_enabled(struct kvm_vcpu *vcpu, int event) return test_bit(event, VCPUI(vcpu)->ev_enable_mask); } +static inline bool is_valid_view(unsigned short view) +{ + return (view < KVM_MAX_EPT_VIEWS); +} + /* kvmi_msg.c */ bool kvmi_sock_get(struct kvm_introspection *kvmi, int fd); void kvmi_sock_shutdown(struct kvm_introspection *kvmi); @@ -143,5 +148,6 @@ bool kvmi_arch_start_singlestep(struct kvm_vcpu *vcpu); bool kvmi_arch_stop_singlestep(struct kvm_vcpu *vcpu); gpa_t kvmi_arch_cmd_translate_gva(struct kvm_vcpu *vcpu, gva_t gva); u16 kvmi_arch_cmd_get_ept_view(struct kvm_vcpu *vcpu); +int kvmi_arch_cmd_set_ept_view(struct kvm_vcpu *vcpu, u16 view); #endif diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c index 6cb3473190db..73a7179f7031 100644 --- a/virt/kvm/introspection/kvmi_msg.c +++ b/virt/kvm/introspection/kvmi_msg.c @@ -674,6 +674,25 @@ static int handle_vcpu_get_ept_view(const struct kvmi_vcpu_msg_job *job, return kvmi_msg_vcpu_reply(job, msg, 0, &rpl, sizeof(rpl)); } +static int handle_vcpu_set_ept_view(const struct kvmi_vcpu_msg_job *job, + const struct kvmi_msg_hdr *msg, + const void *_req) +{ + const struct kvmi_vcpu_set_ept_view *req = _req; + int ec; + + if (req->padding1 || req->padding2) + ec = -KVM_EINVAL; + else if (!is_valid_view(req->view)) + ec = -KVM_EINVAL; + else if (!kvm_eptp_switching_supported) + ec = -KVM_EOPNOTSUPP; + else + ec = kvmi_arch_cmd_set_ept_view(job->vcpu, req->view); + + return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0); +} + /* * These functions are executed from the vCPU thread. 
The receiving thread * passes the messages using a newly allocated 'struct kvmi_vcpu_msg_job' @@ -695,6 +714,7 @@ static int(*const msg_vcpu[])(const struct kvmi_vcpu_msg_job *, [KVMI_VCPU_GET_XCR] = handle_vcpu_get_xcr, [KVMI_VCPU_GET_XSAVE] = handle_vcpu_get_xsave, [KVMI_VCPU_INJECT_EXCEPTION] = handle_vcpu_inject_exception, + [KVMI_VCPU_SET_EPT_VIEW] = handle_vcpu_set_ept_view, [KVMI_VCPU_SET_REGISTERS] = handle_vcpu_set_registers, [KVMI_VCPU_SET_XSAVE] = handle_vcpu_set_xsave, [KVMI_VCPU_TRANSLATE_GVA] = handle_vcpu_translate_gva, From patchwork Wed Jul 22 16:01:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678847 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AC69314E3 for ; Wed, 22 Jul 2020 16:01:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 95515207CD for ; Wed, 22 Jul 2020 16:01:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730694AbgGVQBr (ORCPT ); Wed, 22 Jul 2020 12:01:47 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38028 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729195AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 80B85305D767; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 75D9B305FFA3; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 16/34] KVM: introspection: add KVMI_VCPU_CONTROL_EPT_VIEW Date: Wed, 22 Jul 2020 19:01:03 +0300 Message-Id: <20200722160121.9601-17-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru This will be used by the introspection tool to control the EPT views to which the guest is allowed to switch. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 37 ++++++ arch/x86/include/uapi/asm/kvmi.h | 7 ++ arch/x86/kvm/kvmi.c | 9 ++ include/uapi/linux/kvmi.h | 1 + .../testing/selftests/kvm/x86_64/kvmi_test.c | 118 ++++++++++++++++++ virt/kvm/introspection/kvmi_int.h | 2 + virt/kvm/introspection/kvmi_msg.c | 19 +++ 7 files changed, 193 insertions(+) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index 02f03c62adef..f4c60aba9b53 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -1190,6 +1190,43 @@ EPTP switching mechanism (see **KVMI_GET_VERSION**). * -KVM_EAGAIN - the selected vCPU can't be introspected yet * -KVM_EOPNOTSUPP - an EPT view was selected but the hardware doesn't support it +28. 
KVMI_VCPU_CONTROL_EPT_VIEW +------------------------------ + +:Architecture: x86 +:Versions: >= 1 +:Parameters: + +:: + + struct kvmi_vcpu_hdr; + struct kvmi_vcpu_control_ept_view { + __u16 view; + __u8 visible; + __u8 padding1; + __u32 padding2; + }; + +:Returns: + +:: + + struct kvmi_error_code; + +Controls whether the guest can successfully switch EPT views through +the VMFUNC instruction, without triggering a VM-exit. If ``visible`` +is true, the guest will be able to switch EPT views through VMFUNC(0, +``view``). If ``visible`` is false, VMFUNC(0, ``view``) triggers a +VM-exit, a #UD exception is injected into the guest and the guest +application is terminated. + +:Errors: + +* -KVM_EINVAL - the selected vCPU is invalid +* -KVM_EINVAL - padding is not zero +* -KVM_EINVAL - the selected EPT view is invalid +* -KVM_EAGAIN - the selected vCPU can't be introspected yet + Events ====== diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index f7a080d5e227..fc35da900778 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -166,4 +166,11 @@ struct kvmi_vcpu_set_ept_view { __u32 padding2; }; +struct kvmi_vcpu_control_ept_view { + __u16 view; + __u8 visible; + __u8 padding1; + __u32 padding2; +}; + #endif /* _UAPI_ASM_X86_KVMI_H */ diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 99ea8ef70be2..06357b8ab54a 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1432,3 +1432,12 @@ int kvmi_arch_cmd_set_ept_view(struct kvm_vcpu *vcpu, u16 view) return kvm_x86_ops.set_ept_view(vcpu, view); } + +int kvmi_arch_cmd_control_ept_view(struct kvm_vcpu *vcpu, u16 view, + bool visible) +{ + if (!kvm_x86_ops.control_ept_view) + return -KVM_EINVAL; + + return kvm_x86_ops.control_ept_view(vcpu, view, visible); +} diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index 8204661d944d..a72c536a2c80 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -51,6 +51,7 @@ enum { KVMI_VCPU_TRANSLATE_GVA = 25, KVMI_VCPU_GET_EPT_VIEW = 26, KVMI_VCPU_SET_EPT_VIEW = 27, + KVMI_VCPU_CONTROL_EPT_VIEW = 28, KVMI_NUM_MESSAGES }; diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index c6f7d10563db..d808cb61463d 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -56,6 +56,7 @@ struct vcpu_worker_data { bool stop; bool shutdown; bool restart_on_shutdown; + bool run_guest_once; }; static struct kvmi_features features; @@ -72,6 +73,7 @@ enum { GUEST_TEST_HYPERCALL, GUEST_TEST_MSR, GUEST_TEST_PF, + GUEST_TEST_VMFUNC, GUEST_TEST_XSETBV, }; @@ -130,6 +132,13 @@ static void guest_pf_test(void) *((uint8_t *)test_gva) = READ_ONCE(test_write_pattern); } +static void guest_vmfunc_test(void) +{ + asm volatile("mov $0, %rax"); /* VMFUNC leaf 0: EPTP switching */ + asm volatile("mov $1, %rcx"); /* switch to EPT view 1 */ + asm volatile(".byte 0x0f,0x01,0xd4"); /* VMFUNC opcode */ +} + /* from fpu/internal.h */ static u64 xgetbv(u32 index) { @@ -193,6 +202,9 @@ static void guest_code(void) case GUEST_TEST_PF: guest_pf_test(); break; + case GUEST_TEST_VMFUNC: + guest_vmfunc_test(); + break; case GUEST_TEST_XSETBV: guest_xsetbv_test(); break; @@ -777,6 +789,7 @@ static void test_memory_access(struct kvm_vm *vm) static void *vcpu_worker(void *data) { struct vcpu_worker_data *ctx = data; + bool first_run = false; struct kvm_run *run; run = vcpu_state(ctx->vm, ctx->vcpu_id); @@ -805,6 +818,13 @@ static void *vcpu_worker(void *data) if (HOST_SEND_TEST(uc)) { test_id =
READ_ONCE(ctx->test_id); sync_global_to_guest(ctx->vm, test_id); + if (run->exit_reason == KVM_EXIT_IO && + ctx->run_guest_once) { + if (!first_run) + first_run = true; + else + break; + } } } @@ -2140,6 +2160,103 @@ static void test_cmd_vcpu_set_ept_view(struct kvm_vm *vm) set_ept_view(vm, old_view); } +static void check_expected_view(struct kvm_vm *vm, + __u16 check_view) +{ + __u16 found_view = get_ept_view(vm); + + TEST_ASSERT(check_view == found_view, + "Unexpected EPT view, found ept view (%u), expected view (%u)\n", + found_view, check_view); +} + +static void test_guest_switch_to_invisible_view(struct kvm_vm *vm) +{ + struct vcpu_worker_data data = { + .vm = vm, + .vcpu_id = VCPU_ID, + .shutdown = true, + .test_id = GUEST_TEST_VMFUNC, + }; + pthread_t vcpu_thread; + struct kvm_regs regs; + __u16 view = 0; + + check_expected_view(vm, view); + + vcpu_thread = start_vcpu_worker(&data); + wait_vcpu_worker(vcpu_thread); + + /* + * Move to the next instruction, so the guest would not + * re-execute VMFUNC again when vcpu_run() is called. + */ + vcpu_regs_get(vm, VCPU_ID, ®s); + regs.rip += 3; + vcpu_regs_set(vm, VCPU_ID, ®s); + + check_expected_view(vm, view); +} + +static void test_control_ept_view(struct kvm_vm *vm, __u16 view, bool visible) +{ + struct { + struct kvmi_msg_hdr hdr; + struct kvmi_vcpu_hdr vcpu_hdr; + struct kvmi_vcpu_control_ept_view cmd; + } req = {}; + + req.cmd.view = view; + req.cmd.visible = visible; + + test_vcpu0_command(vm, KVMI_VCPU_CONTROL_EPT_VIEW, + &req.hdr, sizeof(req), NULL, 0); +} + +static void enable_ept_view_visibility(struct kvm_vm *vm, __u16 view) +{ + test_control_ept_view(vm, view, true); +} + +static void disable_ept_view_visibility(struct kvm_vm *vm, __u16 view) +{ + test_control_ept_view(vm, view, false); +} + +static void test_guest_switch_to_visible_view(struct kvm_vm *vm) +{ + struct vcpu_worker_data data = { + .vm = vm, + .vcpu_id = VCPU_ID, + .run_guest_once = true, + .test_id = GUEST_TEST_VMFUNC, + }; + pthread_t vcpu_thread; + __u16 old_view = 0, new_view = 1; + + enable_ept_view_visibility(vm, new_view); + + vcpu_thread = start_vcpu_worker(&data); + wait_vcpu_worker(vcpu_thread); + + check_expected_view(vm, new_view); + + disable_ept_view_visibility(vm, new_view); + + set_ept_view(vm, old_view); +} + +static void test_cmd_vcpu_vmfunc(struct kvm_vm *vm) +{ + if (!features.vmfunc) { + print_skip("EPT views not supported"); + return; + } + + test_guest_switch_to_invisible_view(vm); + test_guest_switch_to_visible_view(vm); +} + static void test_introspection(struct kvm_vm *vm) { srandom(time(0)); @@ -2178,6 +2295,7 @@ static void test_introspection(struct kvm_vm *vm) test_cmd_translate_gva(vm); test_cmd_vcpu_get_ept_view(vm); test_cmd_vcpu_set_ept_view(vm); + test_cmd_vcpu_vmfunc(vm); unhook_introspection(vm); } diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h index f093aad2f804..d78116442ddd 100644 --- a/virt/kvm/introspection/kvmi_int.h +++ b/virt/kvm/introspection/kvmi_int.h @@ -149,5 +149,7 @@ bool kvmi_arch_stop_singlestep(struct kvm_vcpu *vcpu); gpa_t kvmi_arch_cmd_translate_gva(struct kvm_vcpu *vcpu, gva_t gva); u16 kvmi_arch_cmd_get_ept_view(struct kvm_vcpu *vcpu); int kvmi_arch_cmd_set_ept_view(struct kvm_vcpu *vcpu, u16 view); +int kvmi_arch_cmd_control_ept_view(struct kvm_vcpu *vcpu, u16 view, + bool visible); #endif diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c index 73a7179f7031..696857f6d008 100644 --- a/virt/kvm/introspection/kvmi_msg.c +++ 
b/virt/kvm/introspection/kvmi_msg.c @@ -693,6 +693,24 @@ static int handle_vcpu_set_ept_view(const struct kvmi_vcpu_msg_job *job, return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0); } +static int handle_vcpu_control_ept_view(const struct kvmi_vcpu_msg_job *job, + const struct kvmi_msg_hdr *msg, + const void *_req) +{ + const struct kvmi_vcpu_control_ept_view *req = _req; + int ec; + + if (req->padding1 || req->padding2) + ec = -KVM_EINVAL; + else if (!is_valid_view(req->view)) + ec = -KVM_EINVAL; + else + ec = kvmi_arch_cmd_control_ept_view(job->vcpu, req->view, + req->visible); + + return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0); +} + /* * These functions are executed from the vCPU thread. The receiving thread * passes the messages using a newly allocated 'struct kvmi_vcpu_msg_job' @@ -703,6 +721,7 @@ static int(*const msg_vcpu[])(const struct kvmi_vcpu_msg_job *, const struct kvmi_msg_hdr *, const void *) = { [KVMI_EVENT] = handle_vcpu_event_reply, [KVMI_VCPU_CONTROL_CR] = handle_vcpu_control_cr, + [KVMI_VCPU_CONTROL_EPT_VIEW] = handle_vcpu_control_ept_view, [KVMI_VCPU_CONTROL_EVENTS] = handle_vcpu_control_events, [KVMI_VCPU_CONTROL_MSR] = handle_vcpu_control_msr, [KVMI_VCPU_CONTROL_SINGLESTEP] = handle_vcpu_control_singlestep, From patchwork Wed Jul 22 16:01:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678861 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BA51A14E3 for ; Wed, 22 Jul 2020 16:01:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A213C20771 for ; Wed, 22 Jul 2020 16:01:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731580AbgGVQB7 (ORCPT ); Wed, 22 Jul 2020 12:01:59 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38032 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729178AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 8EA6E305D768; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 807BE305FFA4; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 17/34] KVM: introspection: extend the access rights database with EPT view info Date: Wed, 22 Jul 2020 19:01:04 +0300 Message-Id: <20200722160121.9601-18-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru On EPT violations, when we check if the introspection tool has shown interest in the current guest page, we will take into consideration the EPT view of the current vCPU too. 
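Conceptually, the access-rights database goes from a single radix tree to one tree per EPT view. A simplified model of the per-view lookup (kernel-style types assumed; plain lists stand in for the radix trees, for illustration only):

#define KVMI_MAX_ACCESS_TREES 3	/* mirrors KVM_MAX_EPT_VIEWS, as below */

struct mem_access { u64 gfn; u8 access; struct mem_access *next; };

/* One container per view, modelling kvmi->access_tree[view]. */
static struct mem_access *access_db[KVMI_MAX_ACCESS_TREES];

static bool is_restricted(u64 gfn, u8 wanted_access, u16 view)
{
	struct mem_access *m;

	for (m = access_db[view]; m; m = m->next)
		if (m->gfn == gfn)
			/* restricted if some wanted bit is not allowed */
			return (m->access & wanted_access) != wanted_access;

	return false;	/* untracked gfn: full access */
}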
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvmi_host.h | 1 + arch/x86/kvm/kvmi.c | 9 +-- include/linux/kvmi_host.h | 2 +- virt/kvm/introspection/kvmi.c | 107 +++++++++++++++++------------- virt/kvm/introspection/kvmi_int.h | 4 +- 5 files changed, 71 insertions(+), 52 deletions(-) diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h index 509fa3fff5e7..8af03ba38316 100644 --- a/arch/x86/include/asm/kvmi_host.h +++ b/arch/x86/include/asm/kvmi_host.h @@ -9,6 +9,7 @@ struct msr_data; #define KVMI_NUM_CR 5 #define KVMI_NUM_MSR 0x2000 +#define KVMI_MAX_ACCESS_TREES KVM_MAX_EPT_VIEWS struct kvmi_monitor_interception { bool kvmi_intercepted; diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 06357b8ab54a..52885b9e5b6e 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1197,7 +1197,7 @@ static const struct { void kvmi_arch_update_page_tracking(struct kvm *kvm, struct kvm_memory_slot *slot, - struct kvmi_mem_access *m) + struct kvmi_mem_access *m, u16 view) { struct kvmi_arch_mem_access *arch = &m->arch; int i; @@ -1217,12 +1217,12 @@ void kvmi_arch_update_page_tracking(struct kvm *kvm, if (slot_tracked) { kvm_slot_page_track_remove_page(kvm, slot, m->gfn, mode, - 0); + view); clear_bit(slot->id, arch->active[mode]); } } else if (!slot_tracked) { kvm_slot_page_track_add_page(kvm, slot, m->gfn, mode, - 0); + view); set_bit(slot->id, arch->active[mode]); } } @@ -1256,7 +1256,8 @@ static bool is_pf_of_interest(struct kvm_vcpu *vcpu, gpa_t gpa, u8 access) if (kvm_x86_ops.gpt_translation_fault(vcpu)) return false; - return kvmi_restricted_page_access(KVMI(vcpu->kvm), gpa, access); + return kvmi_restricted_page_access(KVMI(vcpu->kvm), gpa, access, + kvm_get_ept_view(vcpu)); } static bool handle_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h index 5baef68d8cbe..c38c7f16d5d0 100644 --- a/include/linux/kvmi_host.h +++ b/include/linux/kvmi_host.h @@ -69,7 +69,7 @@ struct kvm_introspection { bool cleanup_on_unhook; - struct radix_tree_root access_tree; + struct radix_tree_root access_tree[KVMI_MAX_ACCESS_TREES]; rwlock_t access_tree_lock; }; diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c index 2a96b80bddb2..737fe3c7a956 100644 --- a/virt/kvm/introspection/kvmi.c +++ b/virt/kvm/introspection/kvmi.c @@ -258,20 +258,23 @@ static void kvmi_clear_mem_access(struct kvm *kvm) struct kvm_introspection *kvmi = KVMI(kvm); struct radix_tree_iter iter; void **slot; - int idx; + int idx, view; idx = srcu_read_lock(&kvm->srcu); spin_lock(&kvm->mmu_lock); - radix_tree_for_each_slot(slot, &kvmi->access_tree, &iter, 0) { - struct kvmi_mem_access *m = *slot; + for (view = 0; view < KVMI_MAX_ACCESS_TREES; view++) + radix_tree_for_each_slot(slot, &kvmi->access_tree[view], + &iter, 0) { + struct kvmi_mem_access *m = *slot; - m->access = full_access; - kvmi_arch_update_page_tracking(kvm, NULL, m); + m->access = full_access; + kvmi_arch_update_page_tracking(kvm, NULL, m, view); - radix_tree_iter_delete(&kvmi->access_tree, &iter, slot); - kmem_cache_free(radix_cache, m); - } + radix_tree_iter_delete(&kvmi->access_tree[view], + &iter, slot); + kmem_cache_free(radix_cache, m); + } spin_unlock(&kvm->mmu_lock); srcu_read_unlock(&kvm->srcu, idx); @@ -335,8 +338,9 @@ alloc_kvmi(struct kvm *kvm, const struct kvm_introspection_hook *hook) atomic_set(&kvmi->ev_seq, 0); - INIT_RADIX_TREE(&kvmi->access_tree, - GFP_KERNEL & ~__GFP_DIRECT_RECLAIM); + for 
(i = 0; i < ARRAY_SIZE(kvmi->access_tree); i++) + INIT_RADIX_TREE(&kvmi->access_tree[i], + GFP_KERNEL & ~__GFP_DIRECT_RECLAIM); rwlock_init(&kvmi->access_tree_lock); kvm_for_each_vcpu(i, vcpu, kvm) { @@ -1065,33 +1069,35 @@ bool kvmi_enter_guest(struct kvm_vcpu *vcpu) } static struct kvmi_mem_access * -__kvmi_get_gfn_access(struct kvm_introspection *kvmi, const gfn_t gfn) +__kvmi_get_gfn_access(struct kvm_introspection *kvmi, const gfn_t gfn, u16 view) { - return radix_tree_lookup(&kvmi->access_tree, gfn); + return radix_tree_lookup(&kvmi->access_tree[view], gfn); } -static void kvmi_update_mem_access(struct kvm *kvm, struct kvmi_mem_access *m) +static void kvmi_update_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, + u16 view) { struct kvm_introspection *kvmi = KVMI(kvm); - kvmi_arch_update_page_tracking(kvm, NULL, m); + kvmi_arch_update_page_tracking(kvm, NULL, m, view); if (m->access == full_access) { - radix_tree_delete(&kvmi->access_tree, m->gfn); + radix_tree_delete(&kvmi->access_tree[view], m->gfn); kmem_cache_free(radix_cache, m); } } -static void kvmi_insert_mem_access(struct kvm *kvm, struct kvmi_mem_access *m) +static void kvmi_insert_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, + u16 view) { struct kvm_introspection *kvmi = KVMI(kvm); - radix_tree_insert(&kvmi->access_tree, m->gfn, m); - kvmi_arch_update_page_tracking(kvm, NULL, m); + radix_tree_insert(&kvmi->access_tree[view], m->gfn, m); + kvmi_arch_update_page_tracking(kvm, NULL, m, view); } static void kvmi_set_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, - bool *used) + u16 view, bool *used) { struct kvm_introspection *kvmi = KVMI(kvm); struct kvmi_mem_access *found; @@ -1101,12 +1107,12 @@ static void kvmi_set_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, spin_lock(&kvm->mmu_lock); write_lock(&kvmi->access_tree_lock); - found = __kvmi_get_gfn_access(kvmi, m->gfn); + found = __kvmi_get_gfn_access(kvmi, m->gfn, view); if (found) { found->access = m->access; - kvmi_update_mem_access(kvm, found); + kvmi_update_mem_access(kvm, found, view); } else if (m->access != full_access) { - kvmi_insert_mem_access(kvm, m); + kvmi_insert_mem_access(kvm, m, view); *used = true; } @@ -1115,7 +1121,8 @@ static void kvmi_set_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, srcu_read_unlock(&kvm->srcu, idx); } -static int kvmi_set_gfn_access(struct kvm *kvm, gfn_t gfn, u8 access) +static int kvmi_set_gfn_access(struct kvm *kvm, gfn_t gfn, u8 access, + u16 view) { struct kvmi_mem_access *m; bool used = false; @@ -1131,7 +1138,7 @@ static int kvmi_set_gfn_access(struct kvm *kvm, gfn_t gfn, u8 access) if (radix_tree_preload(GFP_KERNEL)) err = -KVM_ENOMEM; else - kvmi_set_mem_access(kvm, m, &used); + kvmi_set_mem_access(kvm, m, view, &used); radix_tree_preload_end(); @@ -1153,7 +1160,7 @@ static bool kvmi_is_visible_gfn(struct kvm *kvm, gfn_t gfn) return visible; } -static int set_page_access_entry(struct kvm_introspection *kvmi, +static int set_page_access_entry(struct kvm_introspection *kvmi, u16 view, const struct kvmi_page_access_entry *entry) { u8 unknown_bits = ~(KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_W @@ -1169,7 +1176,7 @@ static int set_page_access_entry(struct kvm_introspection *kvmi, if (!kvmi_is_visible_gfn(kvm, gfn)) return entry->visible ? 
-KVM_EINVAL : 0; - return kvmi_set_gfn_access(kvm, gfn, entry->access); + return kvmi_set_gfn_access(kvm, gfn, entry->access, view); } int kvmi_cmd_set_page_access(struct kvm_introspection *kvmi, @@ -1187,7 +1194,7 @@ int kvmi_cmd_set_page_access(struct kvm_introspection *kvmi, return -KVM_EINVAL; for (; entry < end; entry++) { - int r = set_page_access_entry(kvmi, entry); + int r = set_page_access_entry(kvmi, 0, entry); if (r && !ec) ec = r; @@ -1197,12 +1204,12 @@ int kvmi_cmd_set_page_access(struct kvm_introspection *kvmi, } static int kvmi_get_gfn_access(struct kvm_introspection *kvmi, const gfn_t gfn, - u8 *access) + u8 *access, u16 view) { struct kvmi_mem_access *m; read_lock(&kvmi->access_tree_lock); - m = __kvmi_get_gfn_access(kvmi, gfn); + m = __kvmi_get_gfn_access(kvmi, gfn, view); if (m) *access = m->access; read_unlock(&kvmi->access_tree_lock); @@ -1211,12 +1218,13 @@ static int kvmi_get_gfn_access(struct kvm_introspection *kvmi, const gfn_t gfn, } bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, gpa_t gpa, - u8 access) + u8 access, u16 view) { u8 allowed_access; int err; - err = kvmi_get_gfn_access(kvmi, gpa_to_gfn(gpa), &allowed_access); + err = kvmi_get_gfn_access(kvmi, gpa_to_gfn(gpa), &allowed_access, view); + if (err) return false; @@ -1264,10 +1272,14 @@ void kvmi_add_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, while (start < end) { struct kvmi_mem_access *m; + u16 view; - m = __kvmi_get_gfn_access(kvmi, start); - if (m) - kvmi_arch_update_page_tracking(kvm, slot, m); + for (view = 0; view < KVMI_MAX_ACCESS_TREES; view++) { + m = __kvmi_get_gfn_access(kvmi, start, view); + if (m) + kvmi_arch_update_page_tracking(kvm, slot, m, + view); + } start++; } @@ -1289,14 +1301,18 @@ void kvmi_remove_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) while (start < end) { struct kvmi_mem_access *m; + u16 view; - m = __kvmi_get_gfn_access(kvmi, start); - if (m) { - u8 prev_access = m->access; + for (view = 0; view < KVMI_MAX_ACCESS_TREES; view++) { + m = __kvmi_get_gfn_access(kvmi, start, view); + if (m) { + u8 prev_access = m->access; - m->access = full_access; - kvmi_arch_update_page_tracking(kvm, slot, m); - m->access = prev_access; + m->access = full_access; + kvmi_arch_update_page_tracking(kvm, slot, m, + view); + m->access = prev_access; + } } start++; } @@ -1382,14 +1398,15 @@ void kvmi_singlestep_failed(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL(kvmi_singlestep_failed); -static bool __kvmi_tracked_gfn(struct kvm_introspection *kvmi, gfn_t gfn) +static bool __kvmi_tracked_gfn(struct kvm_introspection *kvmi, gfn_t gfn, + u16 view) { u8 ignored_access; + int err; - if (kvmi_get_gfn_access(kvmi, gfn, &ignored_access)) - return false; + err = kvmi_get_gfn_access(kvmi, gfn, &ignored_access, view); - return true; + return !err; } bool kvmi_tracked_gfn(struct kvm_vcpu *vcpu, gfn_t gfn) @@ -1401,7 +1418,7 @@ bool kvmi_tracked_gfn(struct kvm_vcpu *vcpu, gfn_t gfn) if (!kvmi) return false; - ret = __kvmi_tracked_gfn(kvmi, gfn); + ret = __kvmi_tracked_gfn(kvmi, gfn, kvm_get_ept_view(vcpu)); kvmi_put(vcpu->kvm); diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h index d78116442ddd..fc6dbd3a6472 100644 --- a/virt/kvm/introspection/kvmi_int.h +++ b/virt/kvm/introspection/kvmi_int.h @@ -89,7 +89,7 @@ int kvmi_cmd_set_page_access(struct kvm_introspection *kvmi, const struct kvmi_msg_hdr *msg, const struct kvmi_vm_set_page_access *req); bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, gpa_t gpa, - u8 access); + u8 
access, u16 view); bool kvmi_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, u8 access); void kvmi_add_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, unsigned long npages); @@ -140,7 +140,7 @@ int kvmi_arch_cmd_vcpu_control_msr(struct kvm_vcpu *vcpu, const struct kvmi_vcpu_control_msr *req); void kvmi_arch_update_page_tracking(struct kvm *kvm, struct kvm_memory_slot *slot, - struct kvmi_mem_access *m); + struct kvmi_mem_access *m, u16 view); void kvmi_arch_hook(struct kvm *kvm); void kvmi_arch_unhook(struct kvm *kvm); void kvmi_arch_features(struct kvmi_features *feat); From patchwork Wed Jul 22 16:01:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678879 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8199E159A for ; Wed, 22 Jul 2020 16:02:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 742B720771 for ; Wed, 22 Jul 2020 16:02:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729064AbgGVQCS (ORCPT ); Wed, 22 Jul 2020 12:02:18 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38038 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729217AbgGVQBj (ORCPT ); Wed, 22 Jul 2020 12:01:39 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 935A3305D769; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 894D8305FFA5; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 18/34] KVM: introspection: extend KVMI_VM_SET_PAGE_ACCESS with EPT view info Date: Wed, 22 Jul 2020 19:01:05 +0300 Message-Id: <20200722160121.9601-19-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru The introspection tool uses this command to set distinct access rights on different EPT views. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 8 +++++--- include/uapi/linux/kvmi.h | 4 ++-- virt/kvm/introspection/kvmi.c | 10 ++++++++-- 3 files changed, 15 insertions(+), 7 deletions(-) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index f4c60aba9b53..658c9df01469 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -1003,8 +1003,8 @@ to control events for any other register will fail with -KVM_EINVAL:: struct kvmi_vm_set_page_access { __u16 count; - __u16 padding1; - __u32 padding2; + __u16 view; + __u32 padding; struct kvmi_page_access_entry entries[0]; }; @@ -1026,7 +1026,7 @@ where:: struct kvmi_error_code Sets the access bits (rwx) for an array of ``count`` guest physical -addresses. +addresses, for the selected view. 
The valid access bits are:: @@ -1048,7 +1048,9 @@ In order to 'forget' an address, all three bits ('rwx') must be set. * -KVM_EINVAL - the specified access bits combination is invalid * -KVM_EINVAL - the padding is not zero +* -KVM_EINVAL - the selected EPT view is invalid * -KVM_EINVAL - the message size is invalid +* -KVM_EOPNOTSUPP - an EPT view was selected but the hardware doesn't support it * -KVM_EAGAIN - the selected vCPU can't be introspected yet * -KVM_ENOMEM - there is not enough memory to add the page tracking structures diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index a72c536a2c80..505a865cd115 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -191,8 +191,8 @@ struct kvmi_page_access_entry { struct kvmi_vm_set_page_access { __u16 count; - __u16 padding1; - __u32 padding2; + __u16 view; + __u32 padding; struct kvmi_page_access_entry entries[0]; }; diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c index 737fe3c7a956..44b0092e304f 100644 --- a/virt/kvm/introspection/kvmi.c +++ b/virt/kvm/introspection/kvmi.c @@ -1187,14 +1187,20 @@ int kvmi_cmd_set_page_access(struct kvm_introspection *kvmi, const struct kvmi_page_access_entry *end = req->entries + req->count; int ec = 0; - if (req->padding1 || req->padding2) + if (req->padding) return -KVM_EINVAL; if (msg->size < struct_size(req, entries, req->count)) return -KVM_EINVAL; + if (!is_valid_view(req->view)) + return -KVM_EINVAL; + + if (req->view != 0 && !kvm_eptp_switching_supported) + return -KVM_EOPNOTSUPP; + for (; entry < end; entry++) { - int r = set_page_access_entry(kvmi, 0, entry); + int r = set_page_access_entry(kvmi, req->view, entry); if (r && !ec) ec = r; From patchwork Wed Jul 22 16:01:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678871 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BF9E3618 for ; Wed, 22 Jul 2020 16:02:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B0DB820771 for ; Wed, 22 Jul 2020 16:02:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730415AbgGVQBk (ORCPT ); Wed, 22 Jul 2020 12:01:40 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37948 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729015AbgGVQBj (ORCPT ); Wed, 22 Jul 2020 12:01:39 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 9928D305D76A; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 8E9A4305FFA6; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 19/34] KVM: introspection: clean non-default EPTs on unhook Date: Wed, 22 Jul 2020 19:01:06 +0300 Message-Id: <20200722160121.9601-20-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 
Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru When a guest is unhooked, the VM is brought to default state and uses default EPT view. Delete all shadow pages that belong to non-default EPT views in order to free unused shadow pages. They are not used because the guest cannot VMFUNC to any EPT view. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/kvmi.c | 23 ++++++++++++++++++++++- virt/kvm/introspection/kvmi.c | 3 +++ 3 files changed, 27 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 519b8210b8ef..086b6e2a2314 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1026,6 +1026,8 @@ struct kvm_arch { struct kvm_pmu_event_filter *pmu_event_filter; struct task_struct *nx_lpage_recovery_thread; + + refcount_t kvmi_refcount; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 52885b9e5b6e..27fd732cff29 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -640,6 +640,25 @@ static void kvmi_arch_restore_interception(struct kvm_vcpu *vcpu) kvmi_arch_disable_msrw_intercept(vcpu, arch_vcpui->msrw.kvmi_mask.high); } +void kvmi_arch_restore_ept_view(struct kvm_vcpu *vcpu) +{ + struct kvm *kvm = vcpu->kvm; + u16 view, default_view = 0; + bool visible = false; + + if (kvm_get_ept_view(vcpu) != default_view) + kvmi_arch_cmd_set_ept_view(vcpu, default_view); + + for (view = 0; view < KVM_MAX_EPT_VIEWS; view++) + kvmi_arch_cmd_control_ept_view(vcpu, view, visible); + + if (refcount_dec_and_test(&kvm->arch.kvmi_refcount)) { + u16 zap_mask = ~(1 << default_view); + + kvm_mmu_zap_all(vcpu->kvm, zap_mask); + } +} + bool kvmi_arch_clean_up_interception(struct kvm_vcpu *vcpu) { struct kvmi_interception *arch_vcpui = vcpu->arch.kvmi; @@ -647,8 +666,10 @@ bool kvmi_arch_clean_up_interception(struct kvm_vcpu *vcpu) if (!arch_vcpui || !arch_vcpui->cleanup) return false; - if (arch_vcpui->restore_interception) + if (arch_vcpui->restore_interception) { kvmi_arch_restore_interception(vcpu); + kvmi_arch_restore_ept_view(vcpu); + } return true; } diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c index 44b0092e304f..f3bdef3c54e6 100644 --- a/virt/kvm/introspection/kvmi.c +++ b/virt/kvm/introspection/kvmi.c @@ -288,6 +288,9 @@ static void free_kvmi(struct kvm *kvm) kvmi_clear_mem_access(kvm); + refcount_set(&kvm->arch.kvmi_refcount, + atomic_read(&kvm->online_vcpus)); + kvm_for_each_vcpu(i, vcpu, kvm) free_vcpui(vcpu, restore_interception); From patchwork Wed Jul 22 16:01:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678865 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5D6FF618 for ; Wed, 22 Jul 2020 16:02:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4CE05207CD for ; Wed, 22 Jul 2020 16:02:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728780AbgGVQCE (ORCPT ); Wed, 22 Jul 2020 12:02:04 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38040 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S1728692AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 9FCF5305D76B; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 95FE8305FFA7; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , Marian Rotariu , =?utf-8?q?=C8=98tefan_=C8=98ic?= =?utf-8?q?leru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 20/34] KVM: x86: vmx: add support for virtualization exceptions Date: Wed, 22 Jul 2020 19:01:07 +0300 Message-Id: <20200722160121.9601-21-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Marian Rotariu Only the hardware support check function and the #VE info page management are introduced. Signed-off-by: Marian Rotariu Co-developed-by: Ștefan Șicleru Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/include/asm/vmx.h | 3 +++ arch/x86/kvm/vmx/capabilities.h | 5 +++++ arch/x86/kvm/vmx/vmx.c | 31 +++++++++++++++++++++++++++++++ arch/x86/kvm/vmx/vmx.h | 12 ++++++++++++ arch/x86/kvm/x86.c | 3 +++ 6 files changed, 55 insertions(+) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 086b6e2a2314..a9f225f9dd12 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1435,6 +1435,7 @@ extern u64 kvm_default_tsc_scaling_ratio; extern u64 kvm_mce_cap_supported; extern bool kvm_eptp_switching_supported; +extern bool kvm_ve_supported; /* * EMULTYPE_NO_DECODE - Set when re-emulating an instruction (after completing diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 04487eb38b5c..177500e9e68c 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -67,6 +67,7 @@ #define SECONDARY_EXEC_ENCLS_EXITING VMCS_CONTROL_BIT(ENCLS_EXITING) #define SECONDARY_EXEC_RDSEED_EXITING VMCS_CONTROL_BIT(RDSEED_EXITING) #define SECONDARY_EXEC_ENABLE_PML VMCS_CONTROL_BIT(PAGE_MOD_LOGGING) +#define SECONDARY_EXEC_EPT_VE VMCS_CONTROL_BIT(EPT_VIOLATION_VE) #define SECONDARY_EXEC_PT_CONCEAL_VMX VMCS_CONTROL_BIT(PT_CONCEAL_VMX) #define SECONDARY_EXEC_XSAVES VMCS_CONTROL_BIT(XSAVES) #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC) @@ -213,6 +214,8 @@ enum vmcs_field { VMREAD_BITMAP_HIGH = 0x00002027, VMWRITE_BITMAP = 0x00002028, VMWRITE_BITMAP_HIGH = 0x00002029, + VE_INFO_ADDRESS = 0x0000202A, + VE_INFO_ADDRESS_HIGH = 0x0000202B, XSS_EXIT_BITMAP = 0x0000202C, XSS_EXIT_BITMAP_HIGH = 0x0000202D, ENCLS_EXITING_BITMAP = 0x0000202E, diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index 92781e2c523e..bc5bbc41ca92 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -257,6 +257,11 @@ static inline bool cpu_has_vmx_pml(void) return vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_ENABLE_PML; } +static inline bool cpu_has_vmx_ve(void) +{ + return vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_EPT_VE; +} + static inline bool vmx_xsaves_supported(void) { return vmcs_config.cpu_based_2nd_exec_ctrl & diff --git 
a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index cbc943d217e3..1c1dda14d18d 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2463,6 +2463,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf, SECONDARY_EXEC_RDSEED_EXITING | SECONDARY_EXEC_RDRAND_EXITING | SECONDARY_EXEC_ENABLE_PML | + SECONDARY_EXEC_EPT_VE | SECONDARY_EXEC_TSC_SCALING | SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE | SECONDARY_EXEC_PT_USE_GPA | @@ -4247,6 +4248,12 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) */ exec_control &= ~SECONDARY_EXEC_SHADOW_VMCS; + /* #VE must be disabled by default. + * Once enabled, all EPT violations on pages missing the SVE bit + * will be delivered to the guest. + */ + exec_control &= ~SECONDARY_EXEC_EPT_VE; + if (!enable_pml) exec_control &= ~SECONDARY_EXEC_ENABLE_PML; @@ -6019,6 +6026,28 @@ static void dump_eptp_list(void) pr_err("%d: %016llx\n", i, *(eptp_list + i)); } +static void dump_ve_info(void) +{ + phys_addr_t ve_info_phys; + struct vcpu_ve_info *ve_info = NULL; + + if (!cpu_has_vmx_ve()) + return; + + ve_info_phys = (phys_addr_t)vmcs_read64(VE_INFO_ADDRESS); + if (!ve_info_phys) + return; + + ve_info = (struct vcpu_ve_info *)phys_to_virt(ve_info_phys); + + pr_err("*** Virtualization Exception Info ***\n"); + pr_err("ExitReason: %x\n", ve_info->exit_reason); + pr_err("ExitQualification: %llx\n", ve_info->exit_qualification); + pr_err("GVA: %llx\n", ve_info->gva); + pr_err("GPA: %llx\n", ve_info->gpa); + pr_err("EPTPIndex: %x\n", ve_info->eptp_index); +} + void dump_vmcs(void) { u32 vmentry_ctl, vmexit_ctl; @@ -6169,6 +6198,7 @@ void dump_vmcs(void) vmcs_read16(VIRTUAL_PROCESSOR_ID)); dump_eptp_list(); + dump_ve_info(); } static unsigned int update_ept_view(struct vcpu_vmx *vmx) @@ -8340,6 +8370,7 @@ static __init int hardware_setup(void) enable_ept = 0; kvm_eptp_switching_supported = cpu_has_vmx_eptp_switching(); + kvm_ve_supported = cpu_has_vmx_ve(); if (!cpu_has_vmx_ept_ad_bits() || !enable_ept) enable_ept_ad_bits = 0; diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 38d50fc7357b..49f64be4bbef 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -24,6 +24,18 @@ extern const u32 vmx_msr_index[]; #define NR_LOADSTORE_MSRS 8 +struct vcpu_ve_info { + u32 exit_reason; + u32 unused; + u64 exit_qualification; + u64 gva; + u64 gpa; + u16 eptp_index; + + u16 offset1; + u32 offset2; +}; + struct vmx_msrs { unsigned int nr; struct vmx_msr_entry val[NR_LOADSTORE_MSRS]; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 78aacac839bb..9aa646a65967 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -164,6 +164,9 @@ module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR); bool __read_mostly kvm_eptp_switching_supported; EXPORT_SYMBOL_GPL(kvm_eptp_switching_supported); +bool __read_mostly kvm_ve_supported; +EXPORT_SYMBOL_GPL(kvm_ve_supported); + #define KVM_NR_SHARED_MSRS 16 struct kvm_shared_msrs_global { From patchwork Wed Jul 22 16:01:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678835 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BBD54618 for ; Wed, 22 Jul 2020 16:01:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9CE76207CD for ; Wed, 22 Jul 2020 16:01:42 +0000 (UTC) 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730617AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38080 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730092AbgGVQBk (ORCPT ); Wed, 22 Jul 2020 12:01:40 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id A7F69305D76C; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 9DD4D305FFA8; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , Sean Christopherson , =?utf-8?q?Adalbert_L?= =?utf-8?q?az=C4=83r?= Subject: [RFC PATCH v1 21/34] KVM: VMX: Define EPT suppress #VE bit (bit 63 in EPT leaf entries) Date: Wed, 22 Jul 2020 19:01:08 +0300 Message-Id: <20200722160121.9601-22-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson VMX provides a capability that allows EPT violations to be reflected into the guest as Virtualization Exceptions (#VE). The primary use case of EPT violation #VEs is to improve the performance of virtualization-based security solutions, e.g. eliminate a VM-Exit -> VM-Exit roundtrip when utilizing EPT to protect privileged data structures or code. The "Suppress #VE" bit allows a VMM to opt out of EPT violation #VEs on a per-page basis, e.g. when a page is marked not-present due to lazy installation or is write-protected for dirty page logging.
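For reference, the resulting delivery rule can be sketched as follows. This is only an illustrative summary of the SDM behavior, not code from this series; the helper name is invented, and the additional "convertible violation" conditions the SDM lists are ignored for brevity:

	/*
	 * Illustrative only: decide whether an EPT violation on the leaf
	 * entry 'epte' is reflected into the guest as a #VE instead of
	 * causing a VM-exit.
	 */
	static bool ept_violation_becomes_ve(u64 epte, bool ve_ctrl_enabled)
	{
		if (!ve_ctrl_enabled)		/* #VE control off: VM-exit */
			return false;
		if (epte & (1ull << 63))	/* suppress #VE bit set */
			return false;
		return true;			/* convertible: deliver #VE */
	}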
The "Suppress #VE" bit is ignored: - By hardware that does not support EPT violation #VEs - When the EPT violation #VE VMCS control is disabled - On non-leaf EPT entries Signed-off-by: Sean Christopherson Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/vmx.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 177500e9e68c..8082158e3e96 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -498,6 +498,7 @@ enum vmcs_field { #define VMX_EPT_IPAT_BIT (1ull << 6) #define VMX_EPT_ACCESS_BIT (1ull << 8) #define VMX_EPT_DIRTY_BIT (1ull << 9) +#define VMX_EPT_SUPPRESS_VE_BIT (1ull << 63) #define VMX_EPT_RWX_MASK (VMX_EPT_READABLE_MASK | \ VMX_EPT_WRITABLE_MASK | \ VMX_EPT_EXECUTABLE_MASK) From patchwork Wed Jul 22 16:01:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678857 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4127B14E3 for ; Wed, 22 Jul 2020 16:01:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2E0AB206F5 for ; Wed, 22 Jul 2020 16:01:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728122AbgGVQB5 (ORCPT ); Wed, 22 Jul 2020 12:01:57 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37952 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729642AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id AF338305D76D; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id A48B7305FFA9; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , Sean Christopherson , =?utf-8?q?Adalbert_L?= =?utf-8?q?az=C4=83r?= Subject: [RFC PATCH v1 22/34] KVM: VMX: Suppress EPT violation #VE by default (when enabled) Date: Wed, 22 Jul 2020 19:01:09 +0300 Message-Id: <20200722160121.9601-23-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Unfortunately (for software), EPT violation #VEs are opt-out on a per page basis, e.g. a not-present EPT violation on a zeroed EPT entry will be morphed to a #VE due to the "suppress #VE" bit not being set. When EPT violation #VEs are enabled, use a variation of clear_page() that sets bit 63 (suppress #VE) in all 8-byte entries. To wire up the new behavior in the x86 MMU, add a new kvm_x86_ops hook and a new mask to define a "shadow init value", which is needed to express the concept that a cleared spte has a non-zero value when EPT violation #VEs are in use. 
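For illustration, a C equivalent of that clear_page() variation could look like the sketch below; the patch itself implements it in assembly with rep stosq, so this is only a model of the intended behavior:

	/*
	 * Fill a 4KiB EPT page so that every 8-byte entry carries only the
	 * suppress-#VE bit (bit 63), i.e. the new "shadow init value".
	 */
	static void suppress_ve_clear_page(void *page)
	{
		u64 *entry = page;
		int i;

		for (i = 0; i < 4096 / sizeof(u64); i++)
			entry[i] = 1ull << 63;	/* VMX_EPT_SUPPRESS_VE_BIT */
	}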
Signed-off-by: Sean Christopherson Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/Makefile | 2 +- arch/x86/kvm/mmu.h | 1 + arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++++------- arch/x86/kvm/vmx/clear_page.S | 17 +++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 18 +++++++++++++++--- 6 files changed, 50 insertions(+), 11 deletions(-) create mode 100644 arch/x86/kvm/vmx/clear_page.S diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index a9f225f9dd12..e89cea041ec9 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1168,6 +1168,7 @@ struct kvm_x86_ops { * the implementation may choose to ignore if appropriate. */ void (*tlb_flush_gva)(struct kvm_vcpu *vcpu, gva_t addr); + void (*clear_page)(void *page); /* * Flush any TLB entries created by the guest. Like tlb_flush_gva(), diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile index 3cfe76299dee..b5972a3fdfee 100644 --- a/arch/x86/kvm/Makefile +++ b/arch/x86/kvm/Makefile @@ -19,7 +19,7 @@ kvm-y += x86.o emulate.o i8259.o irq.o lapic.o \ i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \ hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o -kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o +kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o vmx/clear_page.o kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 2692b14fb605..02fa0d30407f 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -52,6 +52,7 @@ static inline u64 rsvd_bits(int s, int e) } void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask); +void kvm_mmu_set_spte_init_value(u64 init_value); void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 22c83192bba1..810e22f41306 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -253,6 +253,7 @@ static u64 __read_mostly shadow_mmio_value; static u64 __read_mostly shadow_mmio_access_mask; static u64 __read_mostly shadow_present_mask; static u64 __read_mostly shadow_me_mask; +static u64 __read_mostly shadow_init_value; /* * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK; @@ -542,6 +543,12 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask, } EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes); +void kvm_mmu_set_spte_init_value(u64 init_value) +{ + shadow_init_value = init_value; +} +EXPORT_SYMBOL_GPL(kvm_mmu_set_spte_init_value); + static u8 kvm_get_shadow_phys_bits(void) { /* @@ -572,6 +579,7 @@ static void kvm_mmu_reset_all_pte_masks(void) shadow_x_mask = 0; shadow_present_mask = 0; shadow_acc_track_mask = 0; + shadow_init_value = 0; shadow_phys_bits = kvm_get_shadow_phys_bits(); @@ -612,7 +620,7 @@ static int is_nx(struct kvm_vcpu *vcpu) static int is_shadow_present_pte(u64 pte) { - return (pte != 0) && !is_mmio_spte(pte); + return (pte != 0) && pte != shadow_init_value && !is_mmio_spte(pte); } static int is_large_pte(u64 pte) @@ -923,9 +931,9 @@ static int mmu_spte_clear_track_bits(u64 *sptep) u64 old_spte = *sptep; if (!spte_has_volatile_bits(old_spte)) - __update_clear_spte_fast(sptep, 0ull); + __update_clear_spte_fast(sptep, shadow_init_value); else - old_spte = __update_clear_spte_slow(sptep, 0ull); + old_spte = __update_clear_spte_slow(sptep, shadow_init_value); if 
(!is_shadow_present_pte(old_spte)) return 0; @@ -955,7 +963,7 @@ static int mmu_spte_clear_track_bits(u64 *sptep) */ static void mmu_spte_clear_no_track(u64 *sptep) { - __update_clear_spte_fast(sptep, 0ull); + __update_clear_spte_fast(sptep, shadow_init_value); } static u64 mmu_spte_get_lockless(u64 *sptep) @@ -2660,7 +2668,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, if (level > PG_LEVEL_4K && need_sync) flush |= kvm_sync_pages(vcpu, gfn, &invalid_list); } - clear_page(sp->spt); + kvm_x86_ops.clear_page(sp->spt); trace_kvm_mmu_get_page(sp, true); kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush); @@ -3637,7 +3645,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, struct kvm_shadow_walk_iterator iterator; struct kvm_mmu_page *sp; bool fault_handled = false; - u64 spte = 0ull; + u64 spte = shadow_init_value; uint retry_count = 0; if (!page_fault_can_be_fast(error_code)) @@ -4073,7 +4081,7 @@ static bool walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep) { struct kvm_shadow_walk_iterator iterator; - u64 sptes[PT64_ROOT_MAX_LEVEL], spte = 0ull; + u64 sptes[PT64_ROOT_MAX_LEVEL], spte = shadow_init_value; struct rsvd_bits_validate *rsvd_check; int root, leaf; bool reserved = false; diff --git a/arch/x86/kvm/vmx/clear_page.S b/arch/x86/kvm/vmx/clear_page.S new file mode 100644 index 000000000000..89fcf5697391 --- /dev/null +++ b/arch/x86/kvm/vmx/clear_page.S @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#include + +/* + * "Clear" an EPT page when EPT violation #VEs are enabled, in which case the + * suppress #VE bit needs to be set for all unused entries. + * + * %rdi - page + */ +#define VMX_EPT_SUPPRESS_VE_BIT (1ull << 63) + +SYM_FUNC_START(vmx_suppress_ve_clear_page) + movl $4096/8,%ecx + movabsq $0x8000000000000000,%rax + rep stosq + ret +SYM_FUNC_END(vmx_suppress_ve_clear_page) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 1c1dda14d18d..3428857c6157 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5639,14 +5639,24 @@ static void wakeup_handler(void) spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu)); } +void vmx_suppress_ve_clear_page(void *page); + static void vmx_enable_tdp(void) { + u64 p_mask = 0; + + if (!cpu_has_vmx_ept_execute_only()) + p_mask |= VMX_EPT_READABLE_MASK; + if (kvm_ve_supported) { + p_mask |= VMX_EPT_SUPPRESS_VE_BIT; + kvm_mmu_set_spte_init_value(VMX_EPT_SUPPRESS_VE_BIT); + kvm_x86_ops.clear_page = vmx_suppress_ve_clear_page; + } + kvm_mmu_set_mask_ptes(VMX_EPT_READABLE_MASK, enable_ept_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull, enable_ept_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull, - 0ull, VMX_EPT_EXECUTABLE_MASK, - cpu_has_vmx_ept_execute_only() ? 
0ull : VMX_EPT_READABLE_MASK, - VMX_EPT_RWX_MASK, 0ull); + 0ull, VMX_EPT_EXECUTABLE_MASK, p_mask, VMX_EPT_RWX_MASK, 0ull); ept_set_mmio_spte_mask(); } @@ -8238,6 +8248,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .tlb_flush_gva = vmx_flush_tlb_gva, .tlb_flush_guest = vmx_flush_tlb_guest, + .clear_page = clear_page, + .run = vmx_vcpu_run, .handle_exit = vmx_handle_exit, .skip_emulated_instruction = vmx_skip_emulated_instruction, From patchwork Wed Jul 22 16:01:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678863 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 445E8618 for ; Wed, 22 Jul 2020 16:02:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3733E20771 for ; Wed, 22 Jul 2020 16:02:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729015AbgGVQB5 (ORCPT ); Wed, 22 Jul 2020 12:01:57 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38086 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729493AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id B4A99305D76E; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id AB635305FFAA; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 23/34] KVM: x86: mmu: fix: update present_mask in spte_read_protect() Date: Wed, 22 Jul 2020 19:01:10 +0300 Message-Id: <20200722160121.9601-24-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru shadow_present_mask is no longer 0ull when #VE support is enabled, because vmx_enable_tdp() updates it with VMX_EPT_SUPPRESS_VE_BIT; spte_read_protect() must account for this when testing for execute-only support.
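In other words, with #VE enabled the present mask may carry the suppress-#VE bit even though execute-only mappings are still supported. A helper capturing that invariant could look like this (an illustrative sketch only; the hunk below open-codes the equivalent check):

	/*
	 * Illustrative: mask out the suppress-#VE bit before testing
	 * whether the present mask is empty, i.e. whether execute-only
	 * EPT mappings are usable.
	 */
	static bool ept_exec_only_supported(u64 present_mask)
	{
		return (present_mask & ~VMX_EPT_SUPPRESS_VE_BIT) == 0ull;
	}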
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/mmu/mmu.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 810e22f41306..28ab4a1ba25a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1601,7 +1601,13 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect) static bool spte_read_protect(u64 *sptep) { u64 spte = *sptep; - bool exec_only_supported = (shadow_present_mask == 0ull); + bool exec_only_supported; + + if (kvm_ve_supported) + exec_only_supported = + (shadow_present_mask == VMX_EPT_SUPPRESS_VE_BIT); + else + exec_only_supported = (shadow_present_mask == 0ull); rmap_printk("rmap_read_protect: spte %p %llx\n", sptep, *sptep); From patchwork Wed Jul 22 16:01:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678845 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CEE9B14E3 for ; Wed, 22 Jul 2020 16:01:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B474F20771 for ; Wed, 22 Jul 2020 16:01:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730108AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37948 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730292AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id B9D1E305D770; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id B0358305FFAB; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 24/34] KVM: vmx: trigger vm-exits for mmio sptes by default when #VE is enabled Date: Wed, 22 Jul 2020 19:01:11 +0300 Message-Id: <20200722160121.9601-25-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru All sptes, including mmio sptes, must have the SVE bit set by default in order to trigger vm-exits instead of #VEs (in case of an EPT violation). MMIO sptes were overlooked in commit 28b8bc704111 ("KVM: VMX: Suppress EPT violation #VE by default (when enabled)"), which provided a new mask only for non-mmio sptes.
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/vmx/vmx.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 3428857c6157..b65bd0d144e5 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4367,11 +4367,19 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) static void ept_set_mmio_spte_mask(void) { + u64 mmio_value = VMX_EPT_MISCONFIG_WX_VALUE; + + /* All sptes, including mmio sptes should trigger vm-exits by + * default, instead of #VE (when supported) + */ + if (kvm_ve_supported) + mmio_value |= VMX_EPT_SUPPRESS_VE_BIT; + /* * EPT Misconfigurations can be generated if the value of bits 2:0 * of an EPT paging-structure entry is 110b (write/execute). */ - kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, 0); + kvm_mmu_set_mmio_spte_mask(mmio_value, 0); } static int vmx_alloc_eptp_list_page(struct vcpu_vmx *vmx) From patchwork Wed Jul 22 16:01:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678867 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8F3D314E3 for ; Wed, 22 Jul 2020 16:02:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 80EB9207CD for ; Wed, 22 Jul 2020 16:02:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729868AbgGVQCF (ORCPT ); Wed, 22 Jul 2020 12:02:05 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38024 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730293AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id C91BD305D6AC; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id B7CC2305FFB2; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 25/34] KVM: x86: svm: set .clear_page() Date: Wed, 22 Jul 2020 19:01:12 +0300 Message-Id: <20200722160121.9601-26-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/svm/svm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 5c2d4a0c3d31..1c78b913eb5d 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4324,6 +4324,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .fault_gla = svm_fault_gla, .spt_fault = svm_spt_fault, .gpt_translation_fault = svm_gpt_translation_fault, + .clear_page = clear_page, }; static struct kvm_x86_init_ops svm_init_ops __initdata = { From patchwork Wed Jul 22 16:01:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678837 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C665A618 for ; Wed, 22 Jul 2020 16:01:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B53D5207CD for ; Wed, 22 Jul 2020 16:01:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730478AbgGVQBn (ORCPT ); Wed, 22 Jul 2020 12:01:43 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38088 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730346AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id DDB0F305D6B2; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id C8AE4305FFB3; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 26/34] KVM: x86: add .set_ve_info() Date: Wed, 22 Jul 2020 19:01:13 +0300 Message-Id: <20200722160121.9601-27-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru This function is needed for the KVMI_VCPU_SET_VE_INFO command. 
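At a high level, enabling #VE for a vCPU amounts to the following VMCS programming; this is only a condensed outline of the vmx_set_ve_info() implementation added below:

	/* 1. point the processor at the guest-supplied #VE info page */
	vmcs_write64(VE_INFO_ADDRESS, page_to_phys(ve_info_pg));

	/* 2. publish the current EPT view; saved on every #VE delivery */
	vmcs_write16(EPTP_INDEX, vmx->view);

	/* 3. turn on the "EPT-violation #VE" secondary execution control */
	secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_EPT_VE);

	/* 4. optionally intercept vector 20 (#VE) in the exception bitmap
	 *    so that a virtualization exception causes a VM-exit instead
	 */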
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/include/asm/vmx.h | 1 + arch/x86/kvm/vmx/vmx.c | 40 +++++++++++++++++++++++++++++++++ 3 files changed, 43 insertions(+) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index e89cea041ec9..4cee641af48e 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1314,6 +1314,8 @@ struct kvm_x86_ops { u16 (*get_ept_view)(struct kvm_vcpu *vcpu); int (*set_ept_view)(struct kvm_vcpu *vcpu, u16 view); int (*control_ept_view)(struct kvm_vcpu *vcpu, u16 view, u8 visible); + int (*set_ve_info)(struct kvm_vcpu *vcpu, unsigned long ve_info, + bool trigger_vmexit); }; struct kvm_x86_nested_ops { diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 8082158e3e96..222fe9c7f463 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -157,6 +157,7 @@ static inline int vmx_misc_mseg_revid(u64 vmx_misc) enum vmcs_field { VIRTUAL_PROCESSOR_ID = 0x00000000, POSTED_INTR_NV = 0x00000002, + EPTP_INDEX = 0x00000004, GUEST_ES_SELECTOR = 0x00000800, GUEST_CS_SELECTOR = 0x00000802, GUEST_SS_SELECTOR = 0x00000804, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index b65bd0d144e5..871cc49063d8 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4425,6 +4425,45 @@ static int vmx_control_ept_view(struct kvm_vcpu *vcpu, u16 view, u8 visible) return 0; } +static int vmx_set_ve_info(struct kvm_vcpu *vcpu, unsigned long ve_info, + bool trigger_vmexit) +{ + struct page *ve_info_pg; + struct vcpu_vmx *vmx = to_vmx(vcpu); + int idx; + u32 eb; + + if (!kvm_ve_supported) + return -KVM_EOPNOTSUPP; + + idx = srcu_read_lock(&vcpu->kvm->srcu); + ve_info_pg = kvm_vcpu_gpa_to_page(vcpu, ve_info); + srcu_read_unlock(&vcpu->kvm->srcu, idx); + + if (is_error_page(ve_info_pg)) + return -KVM_EINVAL; + + vmcs_write64(VE_INFO_ADDRESS, page_to_phys(ve_info_pg)); + + /* Make sure EPTP_INDEX is up-to-date before enabling #VE */ + vmcs_write16(EPTP_INDEX, vmx->view); + + /* Enable #VE mechanism */ + secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_EPT_VE); + + /* Decide if #VE exception should trigger a VM exit */ + eb = vmcs_read32(EXCEPTION_BITMAP); + + if (trigger_vmexit) + eb |= (1u << VE_VECTOR); + else + eb &= ~(1u << VE_VECTOR); + + vmcs_write32(EXCEPTION_BITMAP, eb); + + return 0; +} + #define VMX_XSS_EXIT_BITMAP 0 /* @@ -8350,6 +8389,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .get_ept_view = vmx_get_ept_view, .set_ept_view = vmx_set_ept_view, .control_ept_view = vmx_control_ept_view, + .set_ve_info = vmx_set_ve_info, }; static __init int hardware_setup(void) From patchwork Wed Jul 22 16:01:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678859 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 22213618 for ; Wed, 22 Jul 2020 16:01:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 087B420771 for ; Wed, 22 Jul 2020 16:01:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731036AbgGVQB6 (ORCPT ); Wed, 22 Jul 2020 12:01:58 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37950 "EHLO mx01.bbu.dsd.mx.bitdefender.com" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730325AbgGVQBl (ORCPT ); Wed, 22 Jul 2020 12:01:41 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id EF4DE305D6CF; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id DC45D305FFB4; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 27/34] KVM: x86: add .disable_ve() Date: Wed, 22 Jul 2020 19:01:14 +0300 Message-Id: <20200722160121.9601-28-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru This function is needed for the KVMI_VCPU_DISABLE_VE command. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/vmx/vmx.c | 10 ++++++++++ 2 files changed, 11 insertions(+) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4cee641af48e..54969c2e804e 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1316,6 +1316,7 @@ struct kvm_x86_ops { int (*control_ept_view)(struct kvm_vcpu *vcpu, u16 view, u8 visible); int (*set_ve_info)(struct kvm_vcpu *vcpu, unsigned long ve_info, bool trigger_vmexit); + int (*disable_ve)(struct kvm_vcpu *vcpu); }; struct kvm_x86_nested_ops { diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 871cc49063d8..96aa4b7e2857 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4464,6 +4464,15 @@ static int vmx_set_ve_info(struct kvm_vcpu *vcpu, unsigned long ve_info, return 0; } +static int vmx_disable_ve(struct kvm_vcpu *vcpu) +{ + if (kvm_ve_supported) + secondary_exec_controls_clearbit(to_vmx(vcpu), + SECONDARY_EXEC_EPT_VE); + + return 0; +} + #define VMX_XSS_EXIT_BITMAP 0 /* @@ -8390,6 +8399,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .set_ept_view = vmx_set_ept_view, .control_ept_view = vmx_control_ept_view, .set_ve_info = vmx_set_ve_info, + .disable_ve = vmx_disable_ve, }; static __init int hardware_setup(void) From patchwork Wed Jul 22 16:01:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678839 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3C5DF14E3 for ; Wed, 22 Jul 2020 16:01:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2ADF420717 for ; Wed, 22 Jul 2020 16:01:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730640AbgGVQBo (ORCPT ); Wed, 22 Jul 2020 12:01:44 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37956 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730308AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by 
mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 0EF0A305D64F; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id ED4A3305FFB5; Wed, 22 Jul 2020 19:01:32 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 28/34] KVM: x86: page_track: add support for suppress #VE bit Date: Wed, 22 Jul 2020 19:01:15 +0300 Message-Id: <20200722160121.9601-29-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru Setting SPTEs from rmaps is not enough because rmaps contain only present SPTEs. If there is no mapping created for the GFN, SPTEs must be configured when they are created. Use the page tracking mechanism in order to configure the SVE bit when a PF occurs. This is similar to how access rights are configured using the page tracking mechanism. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/include/asm/kvm_page_track.h | 1 + arch/x86/kvm/mmu.h | 2 ++ arch/x86/kvm/mmu/mmu.c | 38 +++++++++++++++++++++++++++ arch/x86/kvm/mmu/page_track.c | 7 +++++ 4 files changed, 48 insertions(+) diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h index 96d2ab7da4a7..108161f63a44 100644 --- a/arch/x86/include/asm/kvm_page_track.h +++ b/arch/x86/include/asm/kvm_page_track.h @@ -7,6 +7,7 @@ enum kvm_page_track_mode { KVM_PAGE_TRACK_PREWRITE, KVM_PAGE_TRACK_WRITE, KVM_PAGE_TRACK_PREEXEC, + KVM_PAGE_TRACK_SVE, KVM_PAGE_TRACK_MAX, }; diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 02fa0d30407f..160e66ae9852 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -234,5 +234,7 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu, gpa_t l2_gpa); int kvm_mmu_post_init_vm(struct kvm *kvm); void kvm_mmu_pre_destroy_vm(struct kvm *kvm); +bool kvm_mmu_set_ept_page_sve(struct kvm *kvm, struct kvm_memory_slot *slot, + gfn_t gfn, u16 index, bool suppress); #endif diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 28ab4a1ba25a..7254f5679828 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1890,6 +1890,41 @@ bool kvm_mmu_slot_gfn_exec_protect(struct kvm *kvm, return exec_protected; } +static bool spte_suppress_ve(u64 *sptep, bool suppress) +{ + u64 spte = *sptep; + + if (suppress) + spte |= VMX_EPT_SUPPRESS_VE_BIT; + else + spte &= ~VMX_EPT_SUPPRESS_VE_BIT; + + return mmu_spte_update(sptep, spte); +} + +bool kvm_mmu_set_ept_page_sve(struct kvm *kvm, struct kvm_memory_slot *slot, + gfn_t gfn, u16 index, bool suppress) +{ + struct kvm_rmap_head *rmap_head; + struct rmap_iterator iter; + struct kvm_mmu_page *sp; + bool flush = false; + u64 *sptep; + int i; + + for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; i++) { + rmap_head = __gfn_to_rmap(gfn, i, slot); + for_each_rmap_spte(rmap_head, &iter, sptep) { + sp = page_header(__pa(sptep)); + if (index == 0 || (index > 0 && index == sp->view)) + flush |= spte_suppress_ve(sptep, suppress); + } + } + + return flush; +} +EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_page_sve); + static bool rmap_write_protect(struct kvm_vcpu *vcpu, u64 gfn) { struct kvm_memory_slot *slot; @@ -3171,6 +3206,9 
@@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep, spte |= (u64)pfn << PAGE_SHIFT; + if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_SVE)) + spte &= ~VMX_EPT_SUPPRESS_VE_BIT; + if (pte_access & ACC_WRITE_MASK) { spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE; diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c index bf26b21cfeb8..153c5285361f 100644 --- a/arch/x86/kvm/mmu/page_track.c +++ b/arch/x86/kvm/mmu/page_track.c @@ -125,6 +125,9 @@ void kvm_slot_page_track_add_page(struct kvm *kvm, } else if (mode == KVM_PAGE_TRACK_PREEXEC) { if (kvm_mmu_slot_gfn_exec_protect(kvm, slot, gfn, view)) kvm_flush_remote_tlbs(kvm); + } else if (mode == KVM_PAGE_TRACK_SVE) { + if (kvm_mmu_set_ept_page_sve(kvm, slot, gfn, view, false)) + kvm_flush_remote_tlbs(kvm); } } EXPORT_SYMBOL_GPL(kvm_slot_page_track_add_page); @@ -151,6 +154,10 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm, update_gfn_track(slot, gfn, mode, -1, view); + if (mode == KVM_PAGE_TRACK_SVE) + if (kvm_mmu_set_ept_page_sve(kvm, slot, gfn, view, true)) + kvm_flush_remote_tlbs(kvm); + /* * allow large page mapping for the tracked page * after the tracker is gone. From patchwork Wed Jul 22 16:01:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678853 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 34E61618 for ; Wed, 22 Jul 2020 16:01:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2682320717 for ; Wed, 22 Jul 2020 16:01:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726784AbgGVQBy (ORCPT ); Wed, 22 Jul 2020 12:01:54 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38094 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730402AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 2225A305D654; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 0D129305FFB6; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 29/34] KVM: vmx: make use of EPTP_INDEX in vmx_handle_exit() Date: Wed, 22 Jul 2020 19:01:16 +0300 Message-Id: <20200722160121.9601-30-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru If the guest has EPTP switching capabilities with VMFUNC, read the current view from VMCS instead of walking through the EPTP list when #VE support is active. 
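The two lookup paths can be summarized as follows; ve_enabled() and find_eptp_in_list() are stand-ins for the open-coded checks in the hunk below:

	if (kvm_ve_supported && ve_enabled(vmx))
		/* hardware keeps EPTP_INDEX in sync with the active view */
		vmx->view = vmcs_read16(EPTP_INDEX);
	else
		/* match EPT_POINTER against the EPTP list, as before */
		vmx->view = find_eptp_in_list(vmx, vmcs_read64(EPT_POINTER));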
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 96aa4b7e2857..035f6c43a2a4 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6269,15 +6269,21 @@ void dump_vmcs(void) static unsigned int update_ept_view(struct vcpu_vmx *vmx) { - u64 *eptp_list = phys_to_virt(page_to_phys(vmx->eptp_list_pg)); - u64 eptp = vmcs_read64(EPT_POINTER); - unsigned int view; + /* if #VE support is active, read the EPT index from VMCS */ + if (kvm_ve_supported && + secondary_exec_controls_get(vmx) & SECONDARY_EXEC_EPT_VE) { + vmx->view = vmcs_read16(EPTP_INDEX); + } else { + u64 *eptp_list = phys_to_virt(page_to_phys(vmx->eptp_list_pg)); + u64 eptp = vmcs_read64(EPT_POINTER); + unsigned int view; - for (view = 0; view < KVM_MAX_EPT_VIEWS; view++) - if (eptp_list[view] == eptp) { - vmx->view = view; - break; - } + for (view = 0; view < KVM_MAX_EPT_VIEWS; view++) + if (eptp_list[view] == eptp) { + vmx->view = view; + break; + } + } return vmx->view; } From patchwork Wed Jul 22 16:01:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678855 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 503F414E3 for ; Wed, 22 Jul 2020 16:01:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4092A206F5 for ; Wed, 22 Jul 2020 16:01:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730917AbgGVQBz (ORCPT ); Wed, 22 Jul 2020 12:01:55 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38096 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729000AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 2DC05305D65A; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 222FD305FFA0; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 30/34] KVM: vmx: make use of EPTP_INDEX in vmx_set_ept_view() Date: Wed, 22 Jul 2020 19:01:17 +0300 Message-Id: <20200722160121.9601-31-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- arch/x86/kvm/vmx/vmx.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 035f6c43a2a4..736b6cc6ca8f 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4408,6 +4408,13 @@ static int vmx_set_ept_view(struct kvm_vcpu *vcpu, u16 view) kvm_mmu_unload(vcpu); r = kvm_mmu_reload(vcpu); WARN_ON_ONCE(r); + + /* When #VE happens, current 
EPT index will be saved + * by the logical processor into VE information area, + * see chapter 24.6.18 and 25.5.6.2 from Intel SDM. + */ + if (kvm_ve_supported) + vmcs_write16(EPTP_INDEX, view); } return 0; From patchwork Wed Jul 22 16:01:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678841 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DAD57159A for ; Wed, 22 Jul 2020 16:01:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CD3C9207DD for ; Wed, 22 Jul 2020 16:01:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730625AbgGVQBn (ORCPT ); Wed, 22 Jul 2020 12:01:43 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37958 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730381AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 360CB305D678; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 2AB37305FFA1; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 31/34] KVM: introspection: add #VE host capability checker Date: Wed, 22 Jul 2020 19:01:18 +0300 Message-Id: <20200722160121.9601-32-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru Add one more field to struct kvmi_features in order to publish #VE capabilities on the host as indicated by kvm_ve_supported flag. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 5 +++-- arch/x86/include/uapi/asm/kvmi.h | 3 ++- arch/x86/kvm/kvmi.c | 1 + tools/testing/selftests/kvm/x86_64/kvmi_test.c | 1 + 4 files changed, 7 insertions(+), 3 deletions(-) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index 658c9df01469..caa51fccc463 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -265,11 +265,12 @@ For x86 __u8 singlestep; __u8 vmfunc; __u8 eptp; - __u8 padding[5]; + __u8 ve; + __u8 padding[4]; }; Returns the introspection API version and some of the features supported -by the hardware (eg. alternate EPT views). +by the hardware (eg. alternate EPT views, virtualization exception). This command is always allowed and successful. 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index fc35da900778..56992dacfb69 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -151,7 +151,8 @@ struct kvmi_features { __u8 singlestep; __u8 vmfunc; __u8 eptp; - __u8 padding[5]; + __u8 ve; + __u8 padding[4]; }; struct kvmi_vcpu_get_ept_view_reply { diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 27fd732cff29..3e8c83623703 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1383,6 +1383,7 @@ void kvmi_arch_features(struct kvmi_features *feat) kvm_x86_ops.get_vmfunc_status(); feat->eptp = kvm_x86_ops.get_eptp_switching_status && kvm_x86_ops.get_eptp_switching_status(); + feat->ve = kvm_ve_supported; } bool kvmi_arch_start_singlestep(struct kvm_vcpu *vcpu) diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index d808cb61463d..4e099cbfcf4e 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -459,6 +459,7 @@ static void test_cmd_get_version(void) pr_info("\tsinglestep: %u\n", features.singlestep); pr_info("\tvmfunc: %u\n", features.vmfunc); pr_info("\teptp: %u\n", features.eptp); + pr_info("\tve: %u\n", features.ve); } static void cmd_vm_check_command(__u16 id, __u16 padding, int expected_err) From patchwork Wed Jul 22 16:01:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678843 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8BD74618 for ; Wed, 22 Jul 2020 16:01:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7A85E207CD for ; Wed, 22 Jul 2020 16:01:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730308AbgGVQBp (ORCPT ); Wed, 22 Jul 2020 12:01:45 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38098 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730435AbgGVQBo (ORCPT ); Wed, 22 Jul 2020 12:01:44 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 4366C305D679; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 3371C305FFA2; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 32/34] KVM: introspection: add KVMI_VCPU_SET_VE_INFO/KVMI_VCPU_DISABLE_VE Date: Wed, 22 Jul 2020 19:01:19 +0300 Message-Id: <20200722160121.9601-33-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru The introspection tool can use #VE to reduce the number of VM-exits caused by SPT violations for some guests. 
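For context, the commands added below can be driven with an ordinary KVMI message. The following sketch is illustrative only: it mirrors the message layout used by the selftest in this patch (kvmi_msg_hdr + kvmi_vcpu_hdr + command payload), the header field assignments follow the KVMI wire format documented earlier in the series, and send_to_kvmi() is a hypothetical stand-in for the introspection tool's actual transport.

	/* Illustrative sketch, not part of the patch: enable #VE for vcpu 0,
	 * with the #VE information area backed by the guest page at info_gpa.
	 */
	struct {
		struct kvmi_msg_hdr hdr;
		struct kvmi_vcpu_hdr vcpu_hdr;
		struct kvmi_vcpu_set_ve_info cmd;
	} req = {};

	req.hdr.id = KVMI_VCPU_SET_VE_INFO;
	req.hdr.size = sizeof(req) - sizeof(req.hdr);
	req.vcpu_hdr.vcpu = 0;
	req.cmd.gpa = info_gpa;      /* guest page reserved for the #VE info area */
	req.cmd.trigger_vmexit = 0;  /* deliver #VE via IDT vector 20, no VM-exit */

	send_to_kvmi(fd, &req, sizeof(req));  /* hypothetical transport helper */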
Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 63 +++++++++++++++++++ arch/x86/include/uapi/asm/kvmi.h | 8 +++ arch/x86/kvm/kvmi.c | 19 ++++++ include/uapi/linux/kvmi.h | 2 + .../testing/selftests/kvm/x86_64/kvmi_test.c | 52 +++++++++++++++ virt/kvm/introspection/kvmi_int.h | 3 + virt/kvm/introspection/kvmi_msg.c | 30 +++++++++ 7 files changed, 177 insertions(+) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index caa51fccc463..c50c40638d46 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -1230,6 +1230,69 @@ is terminated. * -KVM_EINVAL - padding is not zero * -KVM_EINVAL - the selected EPT view is not valid +29. KVMI_VCPU_SET_VE_INFO +------------------------- + +:Architecture: x86 +:Versions: >= 1 +:Parameters: + +:: + + struct kvmi_vcpu_hdr; + struct kvmi_vcpu_set_ve_info { + __u64 gpa; + __u8 trigger_vmexit; + __u8 padding1; + __u16 padding2; + __u32 padding3; + }; + +:Returns: + +:: + + struct kvmi_error_code; + +Configures the guest physical address for the #VE info page and enables +the #VE mechanism. If ``trigger_vmexit`` is true, any virtualization +exception will trigger a VM-exit. Otherwise, the exception is delivered +using gate descriptor 20 from the Interrupt Descriptor Table (IDT). + +:Errors: + +* -KVM_EINVAL - the selected vCPU is invalid +* -KVM_EINVAL - the specified GPA is invalid +* -KVM_EOPNOTSUPP - the hardware does not support #VE +* -KVM_EINVAL - padding is not zero +* -KVM_EAGAIN - the selected vCPU can't be introspected yet + +30. KVMI_VCPU_DISABLE_VE +------------------------ + +:Architecture: x86 +:Versions: >= 1 +:Parameters: + +:: + + struct kvmi_vcpu_hdr; + +:Returns: + +:: + + struct kvmi_error_code; + +Disables the #VE mechanism. All EPT violations will trigger a VM-exit, +regardless of the value of bit 63 (suppress #VE, SVE) in the SPTE for the GPA that +triggered the EPT violation within a specific EPT view.
+ +:Errors: + +* -KVM_EINVAL - the selected vCPU is invalid +* -KVM_EAGAIN - the selected vCPU can't be introspected yet + Events ====== diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index 56992dacfb69..d925e6d49f50 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -174,4 +174,12 @@ struct kvmi_vcpu_control_ept_view { __u32 padding2; }; +struct kvmi_vcpu_set_ve_info { + __u64 gpa; + __u8 trigger_vmexit; + __u8 padding1; + __u16 padding2; + __u32 padding3; +}; + #endif /* _UAPI_ASM_X86_KVMI_H */ diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index 3e8c83623703..e101ac390809 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1464,3 +1464,22 @@ int kvmi_arch_cmd_control_ept_view(struct kvm_vcpu *vcpu, u16 view, return kvm_x86_ops.control_ept_view(vcpu, view, visible); } + +int kvmi_arch_cmd_set_ve_info(struct kvm_vcpu *vcpu, u64 gpa, + bool trigger_vmexit) +{ + unsigned long ve_info = (unsigned long) gpa; + + if (!kvm_x86_ops.set_ve_info) + return -KVM_EINVAL; + + return kvm_x86_ops.set_ve_info(vcpu, ve_info, trigger_vmexit); +} + +int kvmi_arch_cmd_disable_ve(struct kvm_vcpu *vcpu) +{ + if (!kvm_x86_ops.disable_ve) + return 0; + + return kvm_x86_ops.disable_ve(vcpu); +} diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index 505a865cd115..a17cd1fa16d0 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -52,6 +52,8 @@ enum { KVMI_VCPU_GET_EPT_VIEW = 26, KVMI_VCPU_SET_EPT_VIEW = 27, KVMI_VCPU_CONTROL_EPT_VIEW = 28, + KVMI_VCPU_SET_VE_INFO = 29, + KVMI_VCPU_DISABLE_VE = 30, KVMI_NUM_MESSAGES }; diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index 4e099cbfcf4e..a3ea22f546ec 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -35,6 +35,10 @@ static vm_vaddr_t test_gva; static void *test_hva; static vm_paddr_t test_gpa; +static vm_vaddr_t test_ve_info_gva; +static void *test_ve_info_hva; +static vm_paddr_t test_ve_info_gpa; + static uint8_t test_write_pattern; static int page_size; @@ -2258,6 +2262,43 @@ static void test_cmd_vcpu_vmfunc(struct kvm_vm *vm) test_guest_switch_to_visible_view(vm); } +static void enable_ve(struct kvm_vm *vm) +{ + struct { + struct kvmi_msg_hdr hdr; + struct kvmi_vcpu_hdr vcpu_hdr; + struct kvmi_vcpu_set_ve_info cmd; + } req = {}; + + req.cmd.gpa = test_ve_info_gpa; + req.cmd.trigger_vmexit = 1; + + test_vcpu0_command(vm, KVMI_VCPU_SET_VE_INFO, &req.hdr, + sizeof(req), NULL, 0); +} + +static void disable_ve(struct kvm_vm *vm) +{ + struct { + struct kvmi_msg_hdr hdr; + struct kvmi_vcpu_hdr vcpu_hdr; + } req = {}; + + test_vcpu0_command(vm, KVMI_VCPU_DISABLE_VE, &req.hdr, + sizeof(req), NULL, 0); +} + +static void test_virtualization_exceptions(struct kvm_vm *vm) +{ + if (!features.ve) { + print_skip("#VE not supported"); + return; + } + + enable_ve(vm); + disable_ve(vm); +} + static void test_introspection(struct kvm_vm *vm) { srandom(time(0)); @@ -2297,6 +2338,7 @@ static void test_introspection(struct kvm_vm *vm) test_cmd_vcpu_get_ept_view(vm); test_cmd_vcpu_set_ept_view(vm); test_cmd_vcpu_vmfunc(vm); + test_virtualization_exceptions(vm); unhook_introspection(vm); } @@ -2311,6 +2353,16 @@ static void setup_test_pages(struct kvm_vm *vm) memset(test_hva, 0, page_size); test_gpa = addr_gva2gpa(vm, test_gva); + + /* Allocate #VE info page */ + test_ve_info_gva = vm_vaddr_alloc(vm, page_size, KVM_UTIL_MIN_VADDR, + 0, 0); 
+ sync_global_to_guest(vm, test_ve_info_gva); + + test_ve_info_hva = addr_gva2hva(vm, test_ve_info_gva); + memset(test_ve_info_hva, 0, page_size); + + test_ve_info_gpa = addr_gva2gpa(vm, test_ve_info_gva); } int main(int argc, char *argv[]) diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h index fc6dbd3a6472..a0062fbde49e 100644 --- a/virt/kvm/introspection/kvmi_int.h +++ b/virt/kvm/introspection/kvmi_int.h @@ -151,5 +151,8 @@ u16 kvmi_arch_cmd_get_ept_view(struct kvm_vcpu *vcpu); int kvmi_arch_cmd_set_ept_view(struct kvm_vcpu *vcpu, u16 view); int kvmi_arch_cmd_control_ept_view(struct kvm_vcpu *vcpu, u16 view, bool visible); +int kvmi_arch_cmd_set_ve_info(struct kvm_vcpu *vcpu, u64 gpa, + bool trigger_vmexit); +int kvmi_arch_cmd_disable_ve(struct kvm_vcpu *vcpu); #endif diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c index 696857f6d008..664b78d545c3 100644 --- a/virt/kvm/introspection/kvmi_msg.c +++ b/virt/kvm/introspection/kvmi_msg.c @@ -711,6 +711,34 @@ static int handle_vcpu_control_ept_view(const struct kvmi_vcpu_msg_job *job, return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0); } +static int handle_vcpu_set_ve_info(const struct kvmi_vcpu_msg_job *job, + const struct kvmi_msg_hdr *msg, + const void *_req) +{ + const struct kvmi_vcpu_set_ve_info *req = _req; + bool trigger_vmexit = !!req->trigger_vmexit; + int ec; + + if (req->padding1 || req->padding2 || req->padding3) + ec = -KVM_EINVAL; + else + ec = kvmi_arch_cmd_set_ve_info(job->vcpu, req->gpa, + trigger_vmexit); + + return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0); +} + +static int handle_vcpu_disable_ve(const struct kvmi_vcpu_msg_job *job, + const struct kvmi_msg_hdr *msg, + const void *req) +{ + int ec; + + ec = kvmi_arch_cmd_disable_ve(job->vcpu); + + return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0); +} + /* * These functions are executed from the vCPU thread. 
The receiving thread * passes the messages using a newly allocated 'struct kvmi_vcpu_msg_job' @@ -725,6 +753,7 @@ static int(*const msg_vcpu[])(const struct kvmi_vcpu_msg_job *, [KVMI_VCPU_CONTROL_EVENTS] = handle_vcpu_control_events, [KVMI_VCPU_CONTROL_MSR] = handle_vcpu_control_msr, [KVMI_VCPU_CONTROL_SINGLESTEP] = handle_vcpu_control_singlestep, + [KVMI_VCPU_DISABLE_VE] = handle_vcpu_disable_ve, [KVMI_VCPU_GET_CPUID] = handle_vcpu_get_cpuid, [KVMI_VCPU_GET_EPT_VIEW] = handle_vcpu_get_ept_view, [KVMI_VCPU_GET_INFO] = handle_vcpu_get_info, @@ -736,6 +765,7 @@ static int(*const msg_vcpu[])(const struct kvmi_vcpu_msg_job *, [KVMI_VCPU_SET_EPT_VIEW] = handle_vcpu_set_ept_view, [KVMI_VCPU_SET_REGISTERS] = handle_vcpu_set_registers, [KVMI_VCPU_SET_XSAVE] = handle_vcpu_set_xsave, + [KVMI_VCPU_SET_VE_INFO] = handle_vcpu_set_ve_info, [KVMI_VCPU_TRANSLATE_GVA] = handle_vcpu_translate_gva, }; From patchwork Wed Jul 22 16:01:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678849 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 78696618 for ; Wed, 22 Jul 2020 16:01:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6A69820717 for ; Wed, 22 Jul 2020 16:01:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731151AbgGVQBt (ORCPT ); Wed, 22 Jul 2020 12:01:49 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:37954 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728670AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 47D28305D67A; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 3C8C2305FFA3; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 33/34] KVM: introspection: mask out non-rwx flags when reading/writing from/to the internal database Date: Wed, 22 Jul 2020 19:01:20 +0300 Message-Id: <20200722160121.9601-34-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This is needed because the KVMI_VM_SET_PAGE_SVE command will use the same database to keep the suppress-#VE bit requested by the introspection tool.
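The effect of the mask can be seen in a small worked example (illustrative only; the flag values match those in include/uapi/linux/kvmi.h, with KVMI_PAGE_SVE as added by the last patch of this series):

	#include <stdio.h>

	#define KVMI_PAGE_ACCESS_R (1 << 0)
	#define KVMI_PAGE_ACCESS_W (1 << 1)
	#define KVMI_PAGE_ACCESS_X (1 << 2)
	#define KVMI_PAGE_SVE      (1 << 3)

	int main(void)
	{
		/* rwx_access, as defined by this patch */
		unsigned char mask = KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_W |
				     KVMI_PAGE_ACCESS_X;
		/* entry already in the database: read-only + suppress #VE */
		unsigned char stored = KVMI_PAGE_ACCESS_R | KVMI_PAGE_SVE;
		/* new rwx request from the introspection tool */
		unsigned char req = KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_W;

		/* same update as kvmi_set_mem_access(): the rwx bits are
		 * replaced, bits outside the mask (here, SVE) survive
		 */
		unsigned char updated = (req & mask) | (stored & ~mask);

		printf("updated = 0x%x\n", updated);	/* 0xb: R|W|SVE */
		return 0;
	}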
Signed-off-by: Adalbert Lazăr --- virt/kvm/introspection/kvmi.c | 36 ++++++++++++++++++++++++----------- 1 file changed, 25 insertions(+), 11 deletions(-) diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c index f3bdef3c54e6..6bae2981cda7 100644 --- a/virt/kvm/introspection/kvmi.c +++ b/virt/kvm/introspection/kvmi.c @@ -23,9 +23,12 @@ static struct kmem_cache *msg_cache; static struct kmem_cache *job_cache; static struct kmem_cache *radix_cache; -static const u8 full_access = KVMI_PAGE_ACCESS_R | - KVMI_PAGE_ACCESS_W | - KVMI_PAGE_ACCESS_X; +static const u8 rwx_access = KVMI_PAGE_ACCESS_R | + KVMI_PAGE_ACCESS_W | + KVMI_PAGE_ACCESS_X; +static const u8 full_access = KVMI_PAGE_ACCESS_R | + KVMI_PAGE_ACCESS_W | + KVMI_PAGE_ACCESS_X; void *kvmi_msg_alloc(void) { @@ -1100,7 +1103,7 @@ static void kvmi_insert_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, } static void kvmi_set_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, - u16 view, bool *used) + u8 mask, u16 view, bool *used) { struct kvm_introspection *kvmi = KVMI(kvm); struct kvmi_mem_access *found; @@ -1112,11 +1115,14 @@ static void kvmi_set_mem_access(struct kvm *kvm, struct kvmi_mem_access *m, found = __kvmi_get_gfn_access(kvmi, m->gfn, view); if (found) { - found->access = m->access; + found->access = (m->access & mask) | (found->access & ~mask); kvmi_update_mem_access(kvm, found, view); - } else if (m->access != full_access) { - kvmi_insert_mem_access(kvm, m, view); - *used = true; + } else { + m->access |= full_access & ~mask; + if (m->access != full_access) { + kvmi_insert_mem_access(kvm, m, view); + *used = true; + } } write_unlock(&kvmi->access_tree_lock); @@ -1141,7 +1147,7 @@ static int kvmi_set_gfn_access(struct kvm *kvm, gfn_t gfn, u8 access, if (radix_tree_preload(GFP_KERNEL)) err = -KVM_ENOMEM; else - kvmi_set_mem_access(kvm, m, view, &used); + kvmi_set_mem_access(kvm, m, rwx_access, view, &used); radix_tree_preload_end(); @@ -1216,14 +1222,22 @@ static int kvmi_get_gfn_access(struct kvm_introspection *kvmi, const gfn_t gfn, u8 *access, u16 view) { struct kvmi_mem_access *m; + u8 allowed = rwx_access; + bool restricted; read_lock(&kvmi->access_tree_lock); m = __kvmi_get_gfn_access(kvmi, gfn, view); if (m) - *access = m->access; + allowed = m->access; read_unlock(&kvmi->access_tree_lock); - return m ? 
0 : -1; + restricted = (allowed & rwx_access) != rwx_access; + + if (!restricted) + return -1; + + *access = allowed; + return 0; } bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, gpa_t gpa, From patchwork Wed Jul 22 16:01:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adalbert_Laz=C4=83r?= X-Patchwork-Id: 11678851 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4AA46618 for ; Wed, 22 Jul 2020 16:01:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 35AE020717 for ; Wed, 22 Jul 2020 16:01:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730914AbgGVQBs (ORCPT ); Wed, 22 Jul 2020 12:01:48 -0400 Received: from mx01.bbu.dsd.mx.bitdefender.com ([91.199.104.161]:38080 "EHLO mx01.bbu.dsd.mx.bitdefender.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729015AbgGVQBm (ORCPT ); Wed, 22 Jul 2020 12:01:42 -0400 Received: from smtp.bitdefender.com (smtp01.buh.bitdefender.com [10.17.80.75]) by mx01.bbu.dsd.mx.bitdefender.com (Postfix) with ESMTPS id 50792305D618; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) Received: from localhost.localdomain (unknown [91.199.104.6]) by smtp.bitdefender.com (Postfix) with ESMTPSA id 44296305FFB7; Wed, 22 Jul 2020 19:01:33 +0300 (EEST) From: =?utf-8?q?Adalbert_Laz=C4=83r?= To: kvm@vger.kernel.org Cc: virtualization@lists.linux-foundation.org, Paolo Bonzini , =?utf-8?q?=C8=98tefan_=C8=98icleru?= , =?utf-8?q?Adalbert_Laz=C4=83r?= Subject: [RFC PATCH v1 34/34] KVM: introspection: add KVMI_VM_SET_PAGE_SVE Date: Wed, 22 Jul 2020 19:01:21 +0300 Message-Id: <20200722160121.9601-35-alazar@bitdefender.com> In-Reply-To: <20200722160121.9601-1-alazar@bitdefender.com> References: <20200722160121.9601-1-alazar@bitdefender.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ștefan Șicleru This command is used by the introspection tool to set/clear the suppress-#VE bit for specific guest memory pages. Signed-off-by: Ștefan Șicleru Signed-off-by: Adalbert Lazăr --- Documentation/virt/kvm/kvmi.rst | 42 +++++++++ arch/x86/include/uapi/asm/kvmi.h | 8 ++ arch/x86/kvm/kvmi.c | 1 + include/uapi/linux/kvmi.h | 3 + .../testing/selftests/kvm/x86_64/kvmi_test.c | 91 ++++++++++++++++++- virt/kvm/introspection/kvmi.c | 29 +++++- virt/kvm/introspection/kvmi_int.h | 1 + virt/kvm/introspection/kvmi_msg.c | 23 +++++ 8 files changed, 196 insertions(+), 2 deletions(-) diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst index c50c40638d46..0f87442f6881 100644 --- a/Documentation/virt/kvm/kvmi.rst +++ b/Documentation/virt/kvm/kvmi.rst @@ -1293,6 +1293,48 @@ triggered the EPT violation within a specific EPT view. * -KVM_EINVAL - the selected vCPU is invalid * -KVM_EAGAIN - the selected vCPU can't be introspected yet +31. KVMI_VM_SET_PAGE_SVE +------------------------ + +:Architecture: x86 +:Versions: >= 1 +:Parameters: + +:: + + struct kvmi_vm_set_page_sve { + __u16 view; + __u8 suppress; + __u8 padding1; + __u32 padding2; + __u64 gpa; + }; + +:Returns: + +:: + + struct kvmi_error_code; + +Configures bit 63 of the SPTE (suppress #VE, SVE) for ``gpa`` on the +provided EPT ``view``. If the ``suppress`` field is 1, the SVE bit will be set. +If it is 0, the SVE bit will be cleared.
+ +If the SVE bit is cleared, EPT violations generated by the provided +guest physical address will trigger a #VE instead of a VM-exit; the +#VE is delivered using gate descriptor 20 in the IDT. + +Before configuring the SVE bit, the introspection tool should use +*KVMI_GET_VERSION* to check whether the hardware supports the #VE +mechanism. + +:Errors: + +* -KVM_EINVAL - padding is not zero +* -KVM_ENOMEM - not enough memory to add the page tracking structures +* -KVM_EOPNOTSUPP - an EPT view was selected but the hardware doesn't support it +* -KVM_EINVAL - the selected EPT view is not valid + Events ====== diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h index d925e6d49f50..17b02624cb4d 100644 --- a/arch/x86/include/uapi/asm/kvmi.h +++ b/arch/x86/include/uapi/asm/kvmi.h @@ -182,4 +182,12 @@ struct kvmi_vcpu_set_ve_info { __u32 padding3; }; +struct kvmi_vm_set_page_sve { + __u16 view; + __u8 suppress; + __u8 padding1; + __u32 padding2; + __u64 gpa; +}; + #endif /* _UAPI_ASM_X86_KVMI_H */ diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c index e101ac390809..f3c488f703ec 100644 --- a/arch/x86/kvm/kvmi.c +++ b/arch/x86/kvm/kvmi.c @@ -1214,6 +1214,7 @@ static const struct { { KVMI_PAGE_ACCESS_R, KVM_PAGE_TRACK_PREREAD }, { KVMI_PAGE_ACCESS_W, KVM_PAGE_TRACK_PREWRITE }, { KVMI_PAGE_ACCESS_X, KVM_PAGE_TRACK_PREEXEC }, + { KVMI_PAGE_SVE, KVM_PAGE_TRACK_SVE }, }; void kvmi_arch_update_page_tracking(struct kvm *kvm, diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h index a17cd1fa16d0..110fb011260b 100644 --- a/include/uapi/linux/kvmi.h +++ b/include/uapi/linux/kvmi.h @@ -55,6 +55,8 @@ enum { KVMI_VCPU_SET_VE_INFO = 29, KVMI_VCPU_DISABLE_VE = 30, + KVMI_VM_SET_PAGE_SVE = 31, + KVMI_NUM_MESSAGES }; @@ -84,6 +86,7 @@ enum { KVMI_PAGE_ACCESS_R = 1 << 0, KVMI_PAGE_ACCESS_W = 1 << 1, KVMI_PAGE_ACCESS_X = 1 << 2, + KVMI_PAGE_SVE = 1 << 3, }; struct kvmi_msg_hdr { diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c index a3ea22f546ec..0dc5b150a739 100644 --- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c +++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c @@ -19,6 +19,7 @@ #include "linux/kvm_para.h" #include "linux/kvmi.h" +#include "asm/kvmi.h" #define KVM_MAX_EPT_VIEWS 3 @@ -39,6 +40,15 @@ static vm_vaddr_t test_ve_info_gva; static void *test_ve_info_hva; static vm_paddr_t test_ve_info_gpa; +struct vcpu_ve_info { + u32 exit_reason; + u32 unused; + u64 exit_qualification; + u64 gva; + u64 gpa; + u16 eptp_index; +}; + static uint8_t test_write_pattern; static int page_size; @@ -53,6 +63,11 @@ struct pf_ev { struct kvmi_event_pf pf; }; +struct exception { + uint32_t exception; + uint32_t error_code; +}; + struct vcpu_worker_data { struct kvm_vm *vm; int vcpu_id; @@ -61,6 +76,8 @@ struct vcpu_worker_data { bool shutdown; bool restart_on_shutdown; bool run_guest_once; + bool expect_exception; + struct exception ex; }; static struct kvmi_features features; @@ -806,7 +823,9 @@ static void *vcpu_worker(void *data) TEST_ASSERT(run->exit_reason == KVM_EXIT_IO || (run->exit_reason == KVM_EXIT_SHUTDOWN - && ctx->shutdown), + && ctx->shutdown) + || (run->exit_reason == KVM_EXIT_EXCEPTION + && ctx->expect_exception), "vcpu_run() failed, test_id %d, exit reason %u (%s)\n", ctx->test_id, run->exit_reason, exit_reason_str(run->exit_reason)); @@ -817,6 +836,12 @@ static void *vcpu_worker(void *data) break; } + if (run->exit_reason == KVM_EXIT_EXCEPTION) { + ctx->ex.exception =
run->ex.exception; + ctx->ex.error_code = run->ex.error_code; + break; + } + TEST_ASSERT(get_ucall(ctx->vm, ctx->vcpu_id, &uc), "No guest request\n"); @@ -2288,15 +2313,79 @@ static void disable_ve(struct kvm_vm *vm) sizeof(req), NULL, 0); } +static void set_page_sve(__u64 gpa, bool sve) +{ + struct { + struct kvmi_msg_hdr hdr; + struct kvmi_vm_set_page_sve cmd; + } req = {}; + + req.cmd.gpa = gpa; + req.cmd.suppress = sve; + + test_vm_command(KVMI_VM_SET_PAGE_SVE, &req.hdr, sizeof(req), + NULL, 0); +} + static void test_virtualization_exceptions(struct kvm_vm *vm) { + struct vcpu_worker_data data = { + .vm = vm, + .vcpu_id = VCPU_ID, + .test_id = GUEST_TEST_PF, + .expect_exception = true, + }; + pthread_t vcpu_thread; + struct vcpu_ve_info *ve_info; + if (!features.ve) { print_skip("#VE not supported"); return; } + set_page_access(test_gpa, KVMI_PAGE_ACCESS_R); + set_page_sve(test_gpa, false); + + new_test_write_pattern(vm); + enable_ve(vm); + + vcpu_thread = start_vcpu_worker(&data); + + wait_vcpu_worker(vcpu_thread); + + TEST_ASSERT(data.ex.exception == VE_VECTOR && + data.ex.error_code == 0, + "Unexpected exception, vector %u (expected %u), error code %u (expected 0)\n", + data.ex.exception, VE_VECTOR, data.ex.error_code); + + ve_info = (struct vcpu_ve_info *)test_ve_info_hva; + + TEST_ASSERT(ve_info->exit_reason == 48 && /* EPT violation */ + (ve_info->exit_qualification & 0x18a) && + ve_info->gva == test_gva && + ve_info->gpa == test_gpa && + ve_info->eptp_index == 0, + "#VE exit_reason %u (expected 48), exit qualification 0x%lx (expected mask 0x18a), gva %lx (expected %lx), gpa %lx (expected %lx), ept index %u (expected 0)\n", + ve_info->exit_reason, + ve_info->exit_qualification, + ve_info->gva, test_gva, + ve_info->gpa, test_gpa, + ve_info->eptp_index); + + /* When vcpu_run() is called next, the guest will re-execute the + * last instruction that triggered a #VE, so the guest + * remains in a clean state before executing other tests. + * But not before adding write access to test_gpa. + */ + set_page_access(test_gpa, KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_W); + + /* Disable #VE and check that a #PF event is triggered + * instead of a #VE, even though test_gpa is convertible; + * here vcpu_run() is called as well. + */ disable_ve(vm); + test_event_pf(vm); } static void test_introspection(struct kvm_vm *vm) diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c index 6bae2981cda7..665ff223ce84 100644 --- a/virt/kvm/introspection/kvmi.c +++ b/virt/kvm/introspection/kvmi.c @@ -28,7 +28,7 @@ static const u8 rwx_access = KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_W | KVMI_PAGE_ACCESS_X; static const u8 full_access = KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_W | - KVMI_PAGE_ACCESS_X; + KVMI_PAGE_ACCESS_X | KVMI_PAGE_SVE; void *kvmi_msg_alloc(void) { @@ -1447,3 +1447,30 @@ bool kvmi_tracked_gfn(struct kvm_vcpu *vcpu, gfn_t gfn) return ret; } + +int kvmi_cmd_set_page_sve(struct kvm *kvm, gpa_t gpa, u16 view, bool suppress) +{ + struct kvmi_mem_access *m; + u8 mask = KVMI_PAGE_SVE; + bool used = false; + int err = 0; + + m = kmem_cache_zalloc(radix_cache, GFP_KERNEL); + if (!m) + return -KVM_ENOMEM; + + m->gfn = gpa_to_gfn(gpa); + m->access = suppress ?
KVMI_PAGE_SVE : 0; + + if (radix_tree_preload(GFP_KERNEL)) + err = -KVM_ENOMEM; + else + kvmi_set_mem_access(kvm, m, mask, view, &used); + + radix_tree_preload_end(); + + if (!used) + kmem_cache_free(radix_cache, m); + + return err; +} diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h index a0062fbde49e..915ac75321da 100644 --- a/virt/kvm/introspection/kvmi_int.h +++ b/virt/kvm/introspection/kvmi_int.h @@ -88,6 +88,7 @@ int kvmi_cmd_vcpu_set_registers(struct kvm_vcpu *vcpu, int kvmi_cmd_set_page_access(struct kvm_introspection *kvmi, const struct kvmi_msg_hdr *msg, const struct kvmi_vm_set_page_access *req); +int kvmi_cmd_set_page_sve(struct kvm *kvm, gpa_t gpa, u16 view, bool suppress); bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, gpa_t gpa, u8 access, u16 view); bool kvmi_pf_event(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva, u8 access); diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c index 664b78d545c3..2b31b117401b 100644 --- a/virt/kvm/introspection/kvmi_msg.c +++ b/virt/kvm/introspection/kvmi_msg.c @@ -347,6 +347,28 @@ static int handle_vm_set_page_access(struct kvm_introspection *kvmi, return kvmi_msg_vm_reply(kvmi, msg, ec, NULL, 0); } +static int handle_vm_set_page_sve(struct kvm_introspection *kvmi, + const struct kvmi_msg_hdr *msg, + const void *_req) +{ + const struct kvmi_vm_set_page_sve *req = _req; + int ec; + + if (!is_valid_view(req->view)) + ec = -KVM_EINVAL; + else if (req->suppress > 1) + ec = -KVM_EINVAL; + else if (req->padding1 || req->padding2) + ec = -KVM_EINVAL; + else if (req->view != 0 && !kvm_eptp_switching_supported) + ec = -KVM_EOPNOTSUPP; + else + ec = kvmi_cmd_set_page_sve(kvmi->kvm, req->gpa, req->view, + req->suppress == 1); + + return kvmi_msg_vm_reply(kvmi, msg, ec, NULL, 0); +} + /* * These commands are executed by the receiving thread. */ @@ -362,6 +384,7 @@ static int(*const msg_vm[])(struct kvm_introspection *, [KVMI_VM_GET_MAX_GFN] = handle_vm_get_max_gfn, [KVMI_VM_READ_PHYSICAL] = handle_vm_read_physical, [KVMI_VM_SET_PAGE_ACCESS] = handle_vm_set_page_access, + [KVMI_VM_SET_PAGE_SVE] = handle_vm_set_page_sve, [KVMI_VM_WRITE_PHYSICAL] = handle_vm_write_physical, };
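As a closing illustration, setting or clearing the suppress-#VE bit from an introspection tool boils down to one VM-scoped message. This sketch mirrors set_page_sve() from the selftest above; the header field assignments follow the KVMI wire format, and kvmi_send() is a hypothetical stand-in for the tool's transport:

	/* Illustrative sketch: clear the SVE bit for one guest page on the
	 * default EPT view, making EPT violations on it deliverable as #VE
	 * (the page would typically be restricted first with
	 * KVMI_VM_SET_PAGE_ACCESS).
	 */
	struct {
		struct kvmi_msg_hdr hdr;
		struct kvmi_vm_set_page_sve cmd;
	} req = {};

	req.hdr.id = KVMI_VM_SET_PAGE_SVE;
	req.hdr.size = sizeof(req.cmd);
	req.cmd.view = 0;	/* default EPT view */
	req.cmd.suppress = 0;	/* 0 clears SVE, 1 sets it */
	req.cmd.gpa = page_gpa;	/* hypothetical target page */

	kvmi_send(fd, &req, sizeof(req));	/* hypothetical transport helper */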