
Stable bugfix backport request of "KVM: x86: smm: preserve interrupt shadow in SMRAM"?

Message ID 20240127002016.95369-1-dongli.zhang@oracle.com (mailing list archive)
State New, archived
Series Stable bugfix backport request of "KVM: x86: smm: preserve interrupt shadow in SMRAM"?

Commit Message

Dongli Zhang Jan. 27, 2024, 12:20 a.m. UTC
Hi Maxim and Paolo, 

This is the linux-stable backport request regarding the below patch.

KVM: x86: smm: preserve interrupt shadow in SMRAM
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fb28875fd7da184079150295da7ee8d80a70917e

According to the link below, there may already be a backport to the stable
kernels, but I do not see it in any of the stable trees.

https://gitlab.com/qemu-project/qemu/-/issues/1198

Would you mind sharing whether there is already an existing backport?
Otherwise, please let me know if I can send the backport to linux-stable.

There are many conflicts unless we backport the entire patchset. For
example, I chose 0x7f1a/0x7ecb as the 32-bit/64-bit int_shadow offsets in
the SMRAM buffer.

--------------------------------

From 90f492c865a4b7ca6187a4fc9eebe451f3d6c17e Mon Sep 17 00:00:00 2001
From: Maxim Levitsky <mlevitsk@redhat.com>
Date: Fri, 26 Jan 2024 14:03:59 -0800
Subject: [PATCH linux-5.15.y 1/1] KVM: x86: smm: preserve interrupt shadow in SMRAM

[ Upstream commit fb28875fd7da184079150295da7ee8d80a70917e ]

When #SMI is asserted, the CPU can be in an interrupt shadow due to sti or
mov ss.

It is not mandatory in the Intel/AMD PRM to have the #SMI blocked during
the shadow, and on top of that, since neither SVM nor VMX has true support
for the SMI window, waiting for one instruction would mean single stepping
the guest.

Instead, allow #SMI in this case, but both reset the interrupt window and
stash its value in SMRAM to restore it on exit from SMM.

This fixes rare failures seen mostly on Windows guests on VMX, when #SMI
falls on the sti instruction, which manifests as a VM-entry failure due
to EFLAGS.IF not being set, but the STI interrupt window still being set
in the VMCS.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20221025124741.228045-24-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Backport fb28875fd7da184079150295da7ee8d80a70917e from a big patchset
merge:

[PATCH RESEND v4 00/23] SMM emulation and interrupt shadow fixes
https://lore.kernel.org/all/20221025124741.228045-1-mlevitsk@redhat.com/

Since only the last patch is backported, there are many conflicts.

The core idea of the patch:

- Save the interruptibility before entering SMM.
- Load the interruptibility after leaving SMM.

Although the real offsets in the SMRAM buffer are the same, the bugfix and
UEK6 use different offsets in the function calls. Here are some examples.

32-bit:
              bugfix      UEK6
smbase     -> 0xFEF8  -> 0x7ef8
cr4        -> 0xFF14  -> 0x7f14
int_shadow -> 0xFF1A  ->  n/a
eip        -> 0xFFF0  -> 0x7ff0
cr0        -> 0xFFFC  -> 0x7ffc

64-bit:
              bugfix      UEK6
int_shadow -> 0xFECB  ->  n/a
efer       -> 0xFED0  -> 0x7ed0
smbase     -> 0xFF00  -> 0x7f00
cr4        -> 0xFF48  -> 0x7f48
cr0        -> 0xFF58  -> 0x7f58
rip        -> 0xFF78  -> 0x7f78

Therefore, we choose the below offsets for int_shadow:

32-bit: int_shadow = 0x7f1a
64-bit: int_shadow = 0x7ecb
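
For reference, the constant 0x8000 delta between the two columns comes from
the addressing convention in 5.15.y: enter_smm() writes the 512-byte
state-save buffer to smbase + 0xfe00, and the accessor macros subtract
0x7e00 from the offset argument. Sketch quoted from memory of the 5.15.y
tree (please double-check against the sources):

#define put_smstate(type, buf, offset, val) \
	*(type *)((buf) + (offset) - 0x7e00) = val

A call offset of 0x7ecb therefore lands at SMRAM offset
0xfe00 + (0x7ecb - 0x7e00) = 0xfecb, matching the bugfix column above.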

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 arch/x86/kvm/emulate.c | 15 +++++++++++++--
 arch/x86/kvm/x86.c     |  6 ++++++
 2 files changed, 19 insertions(+), 2 deletions(-)

--
1.8.3.1

--------------------------------

Thank you very much!

Dongli Zhang

Comments

Greg KH Jan. 27, 2024, 1:08 a.m. UTC | #1
On Fri, Jan 26, 2024 at 04:20:16PM -0800, Dongli Zhang wrote:
> Hi Maxim and Paolo, 
> 
> This is the linux-stable backport request regarding the below patch.

For what tree(s)?

And you forgot to sign off on the patch :(

thanks,

greg k-h
Dongli Zhang Jan. 27, 2024, 1:33 a.m. UTC | #2
Hi Greg,

On 1/26/24 17:08, Greg KH wrote:
> On Fri, Jan 26, 2024 at 04:20:16PM -0800, Dongli Zhang wrote:
>> Hi Maxim and Paolo, 
>>
>> This is the linux-stable backport request regarding the below patch.
> 
> For what tree(s)?

It is linux-5.15.y, as in the Subject line of the patch.

However, more trees need this bugfix, e.g., 6.1 or 5.4.
I have a backport for 5.4 as well.

I am sending just the 5.15 version for now, in case there are suggestions,
or in case a backport is already available.

> 
> And you forgot to sign off on the patch :(

There is a Signed-off-by after the commit message. There are some
conflicts, e.g., the SMRAM buffer offsets used in the function calls.

I have added notes to the commit message, between Paolo's Signed-off-by
and my own, to explain the conflicts.


BTW, I have created a KVM selftest program to reproduce this issue.
Although I cannot reproduce it on bare metal (perhaps it is too fast), I
can always reproduce it with KVM running on top of a VM.


$ ./smm_interrupt_window
Create thread for vcpu=0
Create thread for vcpu=1
Waiting for 2-second for test to start ...
vcpu=0: stage = 1
vcpu=1: stage = 2
Start the test!
==== Test Assertion Failure ====
  x86_64/mytest.c:96: exit_reason == (2)
  pid=5541 tid=5544 errno=0 - Success
     1	0x0000000000401dd3: vcpu_worker at mytest.c:96
     2	0x0000000000417cc9: start_thread at libpthread.o:?
     3	0x0000000000470d32: __clone at ??:?
  Wanted KVM exit reason: 2 (IO), got: 9 (FAIL_ENTRY)



The following appears in the dmesg.

[  165.292990] VMCS 0000000088f567e4, last attempted VM-entry on CPU 14
... ...
[  165.304272] RFLAGS=0x00000002         DR7 = 0x0000000000000400
... ...
[  165.329264] Interruptibility = 00000009  ActivityState = 00000000
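
This is exactly the invalid guest state described in the commit message:
RFLAGS.IF is clear (RFLAGS=0x2), while interruptibility bit 0 (blocking by
STI) is still set in the VMCS.

The reproducer source follows (adapted from smm_test.c):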



// SPDX-License-Identifier: GPL-2.0
/*
 * Reproduce the issue fixed by commit fb28875fd7da ("KVM: x86: smm:
 * preserve interrupt shadow in SMRAM").
 *
 * vCPU#1 keeps sending SMIs to vCPU#0, which loops over cli/sti so that
 * an SMI occasionally lands in the sti interrupt shadow.
 *
 * Adapted from smm_test.c
 */
#include <pthread.h>

#include "kvm_util.h"
#include "processor.h"
#include "vmx.h"

#define SMRAM_SIZE 65536
#define SMRAM_MEMSLOT ((1 << 16) | 1)
#define SMRAM_PAGES (SMRAM_SIZE / PAGE_SIZE)
#define SMRAM_GPA 0x1000000
#define SMRAM_STAGE 0xfe

#define STR(x) #x
#define XSTR(s) STR(s)

#define SYNC_PORT 0xe

#define NR_VCPUS		2

uint8_t smi_handler[] = {
	0xb0, SMRAM_STAGE,    /* mov $SMRAM_STAGE, %al */
	0x0f, 0xaa,           /* rsm */
};

/*
 * The in instruction triggers a KVM_EXIT_IO that the host side
 * (vcpu_worker) uses as a synchronization point; the stage value is
 * exchanged through %al.
 */
static inline void sync_with_host(uint64_t phase)
{
	asm volatile("in $" XSTR(SYNC_PORT)", %%al \n"
		     : "+a" (phase));
}

static void guest_code(int cpu)
{
	uint64_t apicbase = rdmsr(MSR_IA32_APICBASE);
	int i;

	wrmsr(MSR_IA32_APICBASE, apicbase | X2APIC_ENABLE);

	if (cpu == 0) {
		sync_with_host(1);
		/*
		 * vCPU#0 loops over cli/nop/sti so that an incoming SMI
		 * occasionally lands in the one-instruction shadow after sti.
		 */
		while (1) {
			asm volatile("cli");
			asm volatile("nop");
			asm volatile("nop");
			asm volatile("nop");
			asm volatile("sti");
			asm volatile("nop");
			asm volatile("nop");
			asm volatile("nop");
		}
	}

	if (cpu == 1) {
		sync_with_host(2);
		/*
		 * vCPU#1 keeps sending SMIs to vCPU#0: an x2APIC ICR write
		 * with a zero destination targets APIC ID 0. The nop loop
		 * throttles the SMI rate so that vCPU#0 makes progress.
		 */
		while (1) {
			x2apic_write_reg(APIC_ICR, APIC_INT_ASSERT | APIC_DM_SMI);
			for (i = 0; i < 1000000; i++)
				asm volatile("nop");
		}
	}
}

static void *vcpu_worker(void *data)
{
	struct kvm_vcpu *vcpu = data;
	int stage_reported;
	struct kvm_regs regs;

	pr_info("Create thread for vcpu=%u\n", vcpu->id);

	if (vcpu->id == 0) {
		vcpu_run(vcpu);
		TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
		memset(&regs, 0, sizeof(regs));
		vcpu_regs_get(vcpu, &regs);
		stage_reported = regs.rax & 0xff;
		pr_info("vcpu=%u: stage = %d\n", vcpu->id, stage_reported);

		vcpu_run(vcpu);
		TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
		memset(&regs, 0, sizeof(regs));
		vcpu_regs_get(vcpu, &regs);
		stage_reported = regs.rax & 0xff;
		pr_info("vcpu=%u: stage = %d\n", vcpu->id, stage_reported);
	}

	if (vcpu->id == 1) {
		pr_info("Waiting for 2-second for test to start ...\n");
		sleep(2);

		vcpu_run(vcpu);
		TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
		memset(&regs, 0, sizeof(regs));
		vcpu_regs_get(vcpu, &regs);
		stage_reported = regs.rax & 0xff;
		pr_info("vcpu=%u: stage = %d\n", vcpu->id, stage_reported);

		pr_info("Start the test!\n");
		vcpu_run(vcpu);
	}

	return NULL;
}

int main(int argc, char **argv)
{
	struct kvm_vcpu *vcpus[NR_VCPUS];
	struct kvm_vm *vm;
	pthread_t tids[NR_VCPUS];

	vm = vm_create_with_vcpus(NR_VCPUS, guest_code, vcpus);

	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, SMRAM_GPA,
				    SMRAM_MEMSLOT, SMRAM_PAGES, 0);

	TEST_ASSERT(vm_phy_pages_alloc(vm, SMRAM_PAGES, SMRAM_GPA, SMRAM_MEMSLOT)
		    == SMRAM_GPA, "could not allocate guest physical addresses?");

	memset(addr_gpa2hva(vm, SMRAM_GPA), 0x0, SMRAM_SIZE);

	/* The SMI handler entry point is at SMBASE + 0x8000. */
	memcpy(addr_gpa2hva(vm, SMRAM_GPA) + 0x8000, smi_handler, sizeof(smi_handler));

	vcpu_set_msr(vcpus[0], MSR_IA32_SMBASE, SMRAM_GPA);
	vcpu_set_msr(vcpus[1], MSR_IA32_SMBASE, SMRAM_GPA);

	vcpu_args_set(vcpus[0], 1, 0);
	vcpu_args_set(vcpus[1], 1, 1);

	pthread_create(&tids[0], NULL, vcpu_worker, vcpus[0]);
	pthread_create(&tids[1], NULL, vcpu_worker, vcpus[1]);

	pthread_join(tids[0], NULL);
	pthread_join(tids[1], NULL);

	return 0;
}
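
For reference, I build it by dropping the file into
tools/testing/selftests/kvm/x86_64/ and adding it to the kvm selftests
Makefile; the program name below is just what I used locally:

TEST_GEN_PROGS_x86_64 += x86_64/smm_interrupt_window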

Dongli Zhang

> 
> thanks,
> 
> greg k-h
Greg KH Jan. 27, 2024, 1:38 a.m. UTC | #3
On Fri, Jan 26, 2024 at 05:33:28PM -0800, Dongli Zhang wrote:
> Hi Greg,
> 
> On 1/26/24 17:08, Greg KH wrote:
> > On Fri, Jan 26, 2024 at 04:20:16PM -0800, Dongli Zhang wrote:
> >> Hi Maxim and Paolo, 
> >>
> >> This is the linux-stable backport request regarding the below patch.
> > 
> > For what tree(s)?
> 
> It is linux-5.15.y as in the Subject of the patch.

Am I blind? I don't see that in the subject line anywhere :(
Dongli Zhang Jan. 27, 2024, 1:44 a.m. UTC | #4
Hi Greg,

On 1/26/24 17:38, Greg KH wrote:
> On Fri, Jan 26, 2024 at 05:33:28PM -0800, Dongli Zhang wrote:
>> Hi Greg,
>>
>> On 1/26/24 17:08, Greg KH wrote:
>>> On Fri, Jan 26, 2024 at 04:20:16PM -0800, Dongli Zhang wrote:
>>>> Hi Maxim and Paolo, 
>>>>
>>>> This is the linux-stable backport request regarding the below patch.
>>>
>>> For what tree(s)?
>>
>> It is linux-5.15.y as in the Subject of the patch.
> 
> Am I blind, but I don't see that in the subject line anywhere :(
> 

I did not send the patch directly, but copied it into the body of the email.

That's why it is not in the Subject of this email, but in the Subject of
the quoted patch.

From 90f492c865a4b7ca6187a4fc9eebe451f3d6c17e Mon Sep 17 00:00:00 2001
From: Maxim Levitsky <mlevitsk@redhat.com>
Date: Fri, 26 Jan 2024 14:03:59 -0800
Subject: [PATCH linux-5.15.y 1/1] KVM: x86: smm: preserve interrupt shadow in SMRAM

[ Upstream commit fb28875fd7da184079150295da7ee8d80a70917e ]
... ...

Thank you very much!

Dongli Zhang
Dongli Zhang Jan. 27, 2024, 2:08 a.m. UTC | #5
On 1/26/24 16:20, Dongli Zhang wrote:
> Hi Maxim and Paolo, 
> 
> This is the linux-stable backport request regarding the below patch.
> 
> KVM: x86: smm: preserve interrupt shadow in SMRAM
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fb28875fd7da184079150295da7ee8d80a70917e
> 
> According to the link below, there may already be a backport to the stable
> kernels, but I do not see it in any of the stable trees.
> 
> https://gitlab.com/qemu-project/qemu/-/issues/1198
> 
> Would you mind sharing whether there is already an existing backport?
> Otherwise, please let me know if I can send the backport to linux-stable.
> 
> There are many conflicts unless we backport the entire patchset. For
> example, I chose 0x7f1a/0x7ecb as the 32-bit/64-bit int_shadow offsets in
> the SMRAM buffer.
> 
> --------------------------------
> 
> From 90f492c865a4b7ca6187a4fc9eebe451f3d6c17e Mon Sep 17 00:00:00 2001
> From: Maxim Levitsky <mlevitsk@redhat.com>
> Date: Fri, 26 Jan 2024 14:03:59 -0800
> Subject: [PATCH linux-5.15.y 1/1] KVM: x86: smm: preserve interrupt shadow in SMRAM
> 
> [ Upstream commit fb28875fd7da184079150295da7ee8d80a70917e ]
> 
> When #SMI is asserted, the CPU can be in an interrupt shadow due to sti or
> mov ss.
> 
> It is not mandatory in the Intel/AMD PRM to have the #SMI blocked during
> the shadow, and on top of that, since neither SVM nor VMX has true support
> for the SMI window, waiting for one instruction would mean single stepping
> the guest.
> 
> Instead, allow #SMI in this case, but both reset the interrupt window and
> stash its value in SMRAM to restore it on exit from SMM.
> 
> This fixes rare failures seen mostly on Windows guests on VMX, when #SMI
> falls on the sti instruction, which manifests as a VM-entry failure due
> to EFLAGS.IF not being set, but the STI interrupt window still being set
> in the VMCS.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> Message-Id: <20221025124741.228045-24-mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> Backport fb28875fd7da184079150295da7ee8d80a70917e from a big patchset
> merge:
> 
> [PATCH RESEND v4 00/23] SMM emulation and interrupt shadow fixes
> https://lore.kernel.org/all/20221025124741.228045-1-mlevitsk@redhat.com/
> 
> Since only the last patch is backported, there are many conflicts.
> 
> The core idea of the patch:
> 
> - Save the interruptibility before entering SMM.
> - Load the interruptibility after leaving SMM.
> 
> Although the real offsets in the SMRAM buffer are the same, the bugfix and
> UEK6 use different offsets in the function calls. Here are some examples.
> 
> 32-bit:
>               bugfix      UEK6

Apologies for my mistake: I should have used "bugfix" and "5.15.y" as the
column headers in the table.

I will correct them if I need to send this backport to the stable kernels.

So far I just want to confirm whether there is already a backport from
Maxim or Paolo that never got the chance to be sent out.

Thank you very much, and apologies again for the mistake!

Dongli Zhang

> smbase     -> 0xFEF8  -> 0x7ef8
> cr4        -> 0xFF14  -> 0x7f14
> int_shadow -> 0xFF1A  ->  n/a
> eip        -> 0xFFF0  -> 0x7ff0
> cr0        -> 0xFFFC  -> 0x7ffc
> 
> 64-bit:
>               bugfix      UEK6
> int_shadow -> 0xFECB  ->  n/a
> efer       -> 0xFED0  -> 0x7ed0
> smbase     -> 0xFF00  -> 0x7f00
> cr4        -> 0xFF48  -> 0x7f48
> cr0        -> 0xFF58  -> 0x7f58
> rip        -> 0xFF78  -> 0x7f78
> 
> Therefore, we choose the below offsets for int_shadow:
> 
> 32-bit: int_shadow = 0x7f1a
> 64-bit: int_shadow = 0x7ecb
> 
> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
> ---
>  arch/x86/kvm/emulate.c | 15 +++++++++++++--
>  arch/x86/kvm/x86.c     |  6 ++++++
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 98b25a7..00df781b 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -2438,7 +2438,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
>  	struct desc_ptr dt;
>  	u16 selector;
>  	u32 val, cr0, cr3, cr4;
> -	int i;
> +	int i, r;
> 
>  	cr0 =                      GET_SMSTATE(u32, smstate, 0x7ffc);
>  	cr3 =                      GET_SMSTATE(u32, smstate, 0x7ff8);
> @@ -2488,7 +2488,15 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
> 
>  	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8));
> 
> -	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
> +	r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
> +
> +	if (r != X86EMUL_CONTINUE)
> +		return r;
> +
> +	static_call(kvm_x86_set_interrupt_shadow)(ctxt->vcpu, 0);
> +	ctxt->interruptibility = GET_SMSTATE(u8, smstate, 0x7f1a);
> +
> +	return r;
>  }
> 
>  #ifdef CONFIG_X86_64
> @@ -2559,6 +2567,9 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
>  			return r;
>  	}
> 
> +	static_call(kvm_x86_set_interrupt_shadow)(ctxt->vcpu, 0);
> +	ctxt->interruptibility = GET_SMSTATE(u8, smstate, 0x7ecb);
> +
>  	return X86EMUL_CONTINUE;
>  }
>  #endif
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index aa6f700..6b30d40 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9400,6 +9400,8 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
>  	/* revision id */
>  	put_smstate(u32, buf, 0x7efc, 0x00020000);
>  	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
> +
> +	put_smstate(u8, buf, 0x7f1a, static_call(kvm_x86_get_interrupt_shadow)(vcpu));
>  }
> 
>  #ifdef CONFIG_X86_64
> @@ -9454,6 +9456,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
> 
>  	for (i = 0; i < 6; i++)
>  		enter_smm_save_seg_64(vcpu, buf, i);
> +
> +	put_smstate(u8, buf, 0x7ecb, static_call(kvm_x86_get_interrupt_shadow)(vcpu));
>  }
>  #endif
> 
> @@ -9490,6 +9494,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
>  	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
>  	kvm_rip_write(vcpu, 0x8000);
> 
> +	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
> +
>  	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
>  	static_call(kvm_x86_set_cr0)(vcpu, cr0);
>  	vcpu->arch.cr0 = cr0;
> --
> 1.8.3.1
> 
> --------------------------------
> 
> Thank you very much!
> 
> Dongli Zhang

Patch

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 98b25a7..00df781b 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2438,7 +2438,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 	struct desc_ptr dt;
 	u16 selector;
 	u32 val, cr0, cr3, cr4;
-	int i;
+	int i, r;

 	cr0 =                      GET_SMSTATE(u32, smstate, 0x7ffc);
 	cr3 =                      GET_SMSTATE(u32, smstate, 0x7ff8);
@@ -2488,7 +2488,15 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,

 	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8));

-	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+	r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+
+	if (r != X86EMUL_CONTINUE)
+		return r;
+
+	static_call(kvm_x86_set_interrupt_shadow)(ctxt->vcpu, 0);
+	ctxt->interruptibility = GET_SMSTATE(u8, smstate, 0x7f1a);
+
+	return r;
 }

 #ifdef CONFIG_X86_64
@@ -2559,6 +2567,9 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 			return r;
 	}

+	static_call(kvm_x86_set_interrupt_shadow)(ctxt->vcpu, 0);
+	ctxt->interruptibility = GET_SMSTATE(u8, smstate, 0x7ecb);
+
 	return X86EMUL_CONTINUE;
 }
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index aa6f700..6b30d40 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9400,6 +9400,8 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	/* revision id */
 	put_smstate(u32, buf, 0x7efc, 0x00020000);
 	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
+
+	put_smstate(u8, buf, 0x7f1a, static_call(kvm_x86_get_interrupt_shadow)(vcpu));
 }

 #ifdef CONFIG_X86_64
@@ -9454,6 +9456,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)

 	for (i = 0; i < 6; i++)
 		enter_smm_save_seg_64(vcpu, buf, i);
+
+	put_smstate(u8, buf, 0x7ecb, static_call(kvm_x86_get_interrupt_shadow)(vcpu));
 }
 #endif

@@ -9490,6 +9494,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);

+	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
+
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
 	static_call(kvm_x86_set_cr0)(vcpu, cr0);
 	vcpu->arch.cr0 = cr0;