diff mbox series

KVM: SVM: Remove TSS reloading code after VMEXIT

Message ID 20230518170653.704562-1-mizhang@google.com (mailing list archive)
State New, archived

Commit Message

Mingwei Zhang May 18, 2023, 5:06 p.m. UTC
Remove the TSS reloading code after VMEXIT, since upstream KVM has been
using VMLOAD to restore host segment state (including TSS) since [1].
reload_tss() is therefore redundant. Accordingly, also remove the
tss_desc field from svm_cpu_data along with the struct kvm_ldttss_desc
definition.

[1] commit e79b91bb3c91 ("KVM: SVM: use vmsave/vmload for saving/restoring additional host state")

Reported-by: Venkatesh Srinivas <venkateshs@google.com>
Suggested-by: Jim Mattson <jmattson@google.com>
Tested-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
---
 arch/x86/kvm/svm/svm.c | 24 ------------------------
 arch/x86/kvm/svm/svm.h |  1 -
 2 files changed, 25 deletions(-)

Comments

Sean Christopherson May 18, 2023, 5:45 p.m. UTC | #1
On Thu, May 18, 2023, Mingwei Zhang wrote:
> Remove the TSS reloading code after VMEXIT, since upstream KVM has been
> using VMLOAD to restore host segment state (including TSS) since [1].
> reload_tss() is therefore redundant. Accordingly, also remove the
> tss_desc field from svm_cpu_data along with the struct kvm_ldttss_desc
> definition.
> 
> [1] commit e79b91bb3c91 ("KVM: SVM: use vmsave/vmload for saving/restoring additional host state")

This should be

Fixes: e79b91bb3c91 ("KVM: SVM: use vmsave/vmload for saving/restoring additional host state")

to make it clear that the code could have, and should have, been removed by that
commit.

Can you also explain what happens with the TSS busy bit?  I'm staring at a comically
long internal discussion about this patch, and I would like to capture the important
bits in the changelog.  It doesn't have to be super verbose, e.g. just an explanation
that makes it abundantly clear that reload_tss() is fully redundant.

> Reported-by: Venkatesh Srinivas <venkateshs@google.com>
> Suggested-by: Jim Mattson <jmattson@google.com>
> Tested-by: Mingwei Zhang <mizhang@google.com>

Heh, you wrote the code and sent the patch, so it darn well better be tested :-)
There are scenarios where a Tested-by for the _original_ author is warranted, e.g.
if someone else tweaked and reposted the patch.  But in this case, there's no need.
Mingwei Zhang May 18, 2023, 6:24 p.m. UTC | #2
On Thu, May 18, 2023, Sean Christopherson wrote:
> On Thu, May 18, 2023, Mingwei Zhang wrote:
> > Remove the TSS reloading code after VMEXIT, since upstream KVM has been
> > using VMLOAD to restore host segment state (including TSS) since [1].
> > reload_tss() is therefore redundant. Accordingly, also remove the
> > tss_desc field from svm_cpu_data along with the struct kvm_ldttss_desc
> > definition.
> > 
> > [1] commit e79b91bb3c91 ("KVM: SVM: use vmsave/vmload for saving/restoring additional host state")
> 
> This should be
> 
> Fixes: e79b91bb3c91 ("KVM: SVM: use vmsave/vmload for saving/restoring additional host state")
> 
> to make it clear that the code could have, and should have, been removed by that
> commit.

Sure, will do in next version.
> 
> Can you also explain what happens with the TSS busy bit?  I'm staring at a comically
> long internal discussion about this patch, and I would like to capture the important
> bits in the changelog.  It doesn't have to be super verbose, e.g. just an explanation
> that makes it abundantly clear that reload_tss() is fully redundant.
> 

Oh, the busy bit is not related to the removal. I was confused about
the busy bit being 0 when the descriptor is loaded by LTR on the SVM
side. I thought this was an inconsistency, since on the VMX side,
immediately after VMEXIT, TR.type == 11 (1011b), which means the busy
bit (bit 1 of the type field) is 1 (SDM Vol. 3, 28.5.2).

It turns out it was just my confusion: the busy bit exists to prevent
reloading a 'busy' segment, i.e., if LTR loads a 'busy' segment, it
triggers a #GP at the host level. To avoid that, KVM clears the bit in
reload_tss() and makes the descriptor 'available' (that's why the value
is 9). Immediately after being loaded by LTR, the busy bit is set again.

> > Reported-by: Venkatesh Srinivas <venkateshs@google.com>
> > Suggested-by: Jim Mattson <jmattson@google.com>
> > Tested-by: Mingwei Zhang <mizhang@google.com>
> 
> Heh, you wrote the code and sent the patch, so it darn well better be tested :-)
> There are scenarios where a Tested-by for the _original_ author is warranted, e.g.
> if someone else tweaked and reposted the patch.  But in this case, there's no need.

I see. I can remove the Tested-by.

Patch

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index eb308c9994f9..cfbe00360908 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -240,15 +240,6 @@  static u8 rsm_ins_bytes[] = "\x0f\xaa";
 
 static unsigned long iopm_base;
 
-struct kvm_ldttss_desc {
-	u16 limit0;
-	u16 base0;
-	unsigned base1:8, type:5, dpl:2, p:1;
-	unsigned limit1:4, zero0:3, g:1, base2:8;
-	u32 base3;
-	u32 zero1;
-} __attribute__((packed));
-
 DEFINE_PER_CPU(struct svm_cpu_data, svm_data);
 
 /*
@@ -584,7 +575,6 @@  static int svm_hardware_enable(void)
 
 	struct svm_cpu_data *sd;
 	uint64_t efer;
-	struct desc_struct *gdt;
 	int me = raw_smp_processor_id();
 
 	rdmsrl(MSR_EFER, efer);
@@ -597,9 +587,6 @@  static int svm_hardware_enable(void)
 	sd->next_asid = sd->max_asid + 1;
 	sd->min_asid = max_sev_asid + 1;
 
-	gdt = get_current_gdt_rw();
-	sd->tss_desc = (struct kvm_ldttss_desc *)(gdt + GDT_ENTRY_TSS);
-
 	wrmsrl(MSR_EFER, efer | EFER_SVME);
 
 	wrmsrl(MSR_VM_HSAVE_PA, sd->save_area_pa);
@@ -3453,14 +3440,6 @@  static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	return svm_invoke_exit_handler(vcpu, exit_code);
 }
 
-static void reload_tss(struct kvm_vcpu *vcpu)
-{
-	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu);
-
-	sd->tss_desc->type = 9; /* available 32/64-bit TSS */
-	load_TR_desc();
-}
-
 static void pre_svm_run(struct kvm_vcpu *vcpu)
 {
 	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu);
@@ -4064,9 +4043,6 @@  static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	svm_vcpu_enter_exit(vcpu, spec_ctrl_intercepted);
 
-	if (!sev_es_guest(vcpu->kvm))
-		reload_tss(vcpu);
-
 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
 		x86_spec_ctrl_restore_host(svm->virt_spec_ctrl);
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index f44751dd8d5d..18af7e712a5a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -303,7 +303,6 @@  struct svm_cpu_data {
 	u32 max_asid;
 	u32 next_asid;
 	u32 min_asid;
-	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
 	unsigned long save_area_pa;