[v19,093/130] KVM: TDX: Implements vcpu request_immediate_exit

Message ID 3fd2824a8f77412476b58155776e88dfe84a8c73.1708933498.git.isaku.yamahata@intel.com (mailing list archive)
State New, archived
Series [v19,001/130] x86/virt/tdx: Rename _offset to _member for TD_SYSINFO_MAP() macro

Commit Message

Isaku Yamahata Feb. 26, 2024, 8:26 a.m. UTC
From: Isaku Yamahata <isaku.yamahata@intel.com>

Now we are able to inject interrupts into a TDX vcpu, so it is ready to
be blocked.  Wire up the kvm x86 methods for blocking/unblocking a vcpu
for TDX.  To unblock on pending events, the request_immediate_exit
method is also needed.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/main.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

Comments

Chao Gao March 29, 2024, 1:54 a.m. UTC | #1
On Mon, Feb 26, 2024 at 12:26:35AM -0800, isaku.yamahata@intel.com wrote:
>From: Isaku Yamahata <isaku.yamahata@intel.com>
>
>Now we are able to inject interrupts into a TDX vcpu, so it is ready to
>be blocked.  Wire up the kvm x86 methods for blocking/unblocking a vcpu
>for TDX.  To unblock on pending events, the request_immediate_exit
>method is also needed.

TDX doesn't support this kind of immediate exit; it is considered a
potential attack on TDs, and the TDX module deploys 0/1-step mitigations
to prevent it.  Even if KVM issues a self-IPI before TD-entry, the
TD-exit will only happen after the guest has run a random number of
instructions.

KVM shouldn't request immediate exits in the first place. Just emit a
warning if KVM tries to do this.
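
A minimal sketch of that suggestion (hypothetical, not code from the
posted patch; whether WARN_ON_ONCE() or a ratelimited warning is wanted
is an open choice):

static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
{
	/*
	 * An immediate exit cannot be guaranteed for a TD vcpu: the TDX
	 * module's 0/1-step mitigations delay the exit by a random number
	 * of guest instructions.  Warn instead of pretending it works.
	 */
	if (is_td_vcpu(vcpu)) {
		WARN_ON_ONCE(1);
		return;
	}

	vmx_request_immediate_exit(vcpu);
}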

>
>Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
>Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>---
> arch/x86/kvm/vmx/main.c | 12 +++++++++++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
>diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
>index f2c9d6358f9e..ee6c04959d4c 100644
>--- a/arch/x86/kvm/vmx/main.c
>+++ b/arch/x86/kvm/vmx/main.c
>@@ -372,6 +372,16 @@ static void vt_enable_irq_window(struct kvm_vcpu *vcpu)
> 	vmx_enable_irq_window(vcpu);
> }
> 
>+static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
>+{
>+	if (is_td_vcpu(vcpu)) {
>+		__kvm_request_immediate_exit(vcpu);
>+		return;
>+	}
>+
>+	vmx_request_immediate_exit(vcpu);
>+}
>+
> static u8 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> {
> 	if (is_td_vcpu(vcpu))
>@@ -549,7 +559,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
> 	.check_intercept = vmx_check_intercept,
> 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
> 
>-	.request_immediate_exit = vmx_request_immediate_exit,
>+	.request_immediate_exit = vt_request_immediate_exit,
> 
> 	.sched_in = vt_sched_in,
> 
>-- 
>2.25.1
>
>
Isaku Yamahata April 2, 2024, 6:52 a.m. UTC | #2
On Fri, Mar 29, 2024 at 09:54:04AM +0800,
Chao Gao <chao.gao@intel.com> wrote:

> On Mon, Feb 26, 2024 at 12:26:35AM -0800, isaku.yamahata@intel.com wrote:
> >From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> >Now we are able to inject interrupts into a TDX vcpu, so it is ready to
> >be blocked.  Wire up the kvm x86 methods for blocking/unblocking a vcpu
> >for TDX.  To unblock on pending events, the request_immediate_exit
> >method is also needed.
> 
> TDX doesn't support this kind of immediate exit; it is considered a
> potential attack on TDs, and the TDX module deploys 0/1-step mitigations
> to prevent it.  Even if KVM issues a self-IPI before TD-entry, the
> TD-exit will only happen after the guest has run a random number of
> instructions.
> 
> KVM shouldn't request immediate exits in the first place. Just emit a
> warning if KVM tries to do this.

Commit 0ec3d6d1f169
("KVM: x86: Fully defer to vendor code to decide how to force immediate exit")
removed the hook.  This patch will be dropped, and tdx_vcpu_run() will
ignore force_immediate_exit.
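
For reference, a rough sketch of that direction (the vcpu_run() signature
follows commit 0ec3d6d1f169; the TDX wiring shown here is an assumption
about the reworked series, not code from it):

static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
{
	/*
	 * tdx_vcpu_run() is expected to simply ignore force_immediate_exit:
	 * the TDX module cannot force an exit on the next instruction
	 * boundary anyway.
	 */
	if (is_td_vcpu(vcpu))
		return tdx_vcpu_run(vcpu, force_immediate_exit);

	return vmx_vcpu_run(vcpu, force_immediate_exit);
}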

Patch

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index f2c9d6358f9e..ee6c04959d4c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -372,6 +372,16 @@ static void vt_enable_irq_window(struct kvm_vcpu *vcpu)
 	vmx_enable_irq_window(vcpu);
 }
 
+static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		__kvm_request_immediate_exit(vcpu);
+		return;
+	}
+
+	vmx_request_immediate_exit(vcpu);
+}
+
 static u8 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
 	if (is_td_vcpu(vcpu))
@@ -549,7 +559,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.request_immediate_exit = vmx_request_immediate_exit,
+	.request_immediate_exit = vt_request_immediate_exit,
 
 	.sched_in = vt_sched_in,