diff mbox series

[2/2] KVM: x86: Prevent setting vCPU into INIT/SIPI_RECEIVED state when INITs are latched

Message ID 20191111091640.92660-3-liran.alon@oracle.com (mailing list archive)
State New, archived
Headers show
Series KVM: x86: Fix userspace API regarding latched init | expand

Commit Message

Liran Alon Nov. 11, 2019, 9:16 a.m. UTC
Commit 4b9852f4f389 ("KVM: x86: Fix INIT signal handling in various CPU states")
fixed KVM to also latch a pending LAPIC INIT event when the vCPU is in VMX
operation.

However, the current KVM_SET_MP_STATE API still allows userspace to put the
vCPU into KVM_MP_STATE_SIPI_RECEIVED or KVM_MP_STATE_INIT_RECEIVED even when
the vCPU is in VMX operation.

Fix this by introducing a utility method that checks whether the vCPU state
latches INIT signals, and use it in the KVM_SET_MP_STATE handler.

Fixes: 4b9852f4f389 ("KVM: x86: Fix INIT signal handling in various CPU states")
Reported-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
---
 arch/x86/kvm/lapic.c | 5 ++---
 arch/x86/kvm/x86.c   | 4 ++--
 arch/x86/kvm/x86.h   | 5 +++++
 3 files changed, 9 insertions(+), 5 deletions(-)

Comments

Paolo Bonzini Nov. 11, 2019, 1:40 p.m. UTC | #1
On 11/11/19 10:16, Liran Alon wrote:
> -	/* INITs are latched while in SMM */
> -	if ((is_smm(vcpu) || vcpu->arch.smi_pending) &&
> +	/* INITs are latched while CPU is in specific states */
> +	if ((kvm_vcpu_latch_init(vcpu) || vcpu->arch.smi_pending) &&
>  	    (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED ||
>  	     mp_state->mp_state == KVM_MP_STATE_INIT_RECEIVED))
>  		goto out;

Just a small doc clarification:

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 318046647fda..cacfe14717d6 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2707,7 +2707,8 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 		return;
 
 	/*
-	 * INITs are latched while CPU is in specific states.
+	 * INITs are latched while CPU is in specific states
+	 * (SMM, VMX non-root mode, SVM with GIF=0).
 	 * Because a CPU cannot be in these states immediately
 	 * after it has processed an INIT signal (and thus in
 	 * KVM_MP_STATE_INIT_RECEIVED state), just eat SIPIs
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 681544f8db31..11746534e209 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8706,7 +8706,11 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 	    mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
 		goto out;
 
-	/* INITs are latched while CPU is in specific states */
+	/*
+	 * KVM_MP_STATE_INIT_RECEIVED means the processor is in
+	 * INIT state; latched init should be reported using
+	 * KVM_SET_VCPU_EVENTS, so reject it here.
+	 */
 	if ((kvm_vcpu_latch_init(vcpu) || vcpu->arch.smi_pending) &&
 	    (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED ||
 	     mp_state->mp_state == KVM_MP_STATE_INIT_RECEIVED))


I'm not sure why you're removing the first hunk, it's just meant to
explain why it needs to be a kvm_x86_ops in case the reader is not
thinking about nested virtualization.

Paolo
Liran Alon Nov. 11, 2019, 1:46 p.m. UTC | #2
> On 11 Nov 2019, at 15:40, Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> On 11/11/19 10:16, Liran Alon wrote:
>> -	/* INITs are latched while in SMM */
>> -	if ((is_smm(vcpu) || vcpu->arch.smi_pending) &&
>> +	/* INITs are latched while CPU is in specific states */
>> +	if ((kvm_vcpu_latch_init(vcpu) || vcpu->arch.smi_pending) &&
>> 	    (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED ||
>> 	     mp_state->mp_state == KVM_MP_STATE_INIT_RECEIVED))
>> 		goto out;
> 
> Just a small doc clarification:
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 318046647fda..cacfe14717d6 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -2707,7 +2707,8 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
> 		return;
> 
> 	/*
> -	 * INITs are latched while CPU is in specific states.
> +	 * INITs are latched while CPU is in specific states
> +	 * (SMM, VMX non-root mode, SVM with GIF=0).

I didn’t want this line of comment as it may diverge from the implementation of kvm_vcpu_latch_init().
That’s why I removed it.

> 	 * Because a CPU cannot be in these states immediately
> 	 * after it has processed an INIT signal (and thus in
> 	 * KVM_MP_STATE_INIT_RECEIVED state), just eat SIPIs
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 681544f8db31..11746534e209 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8706,7 +8706,11 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> 	    mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
> 		goto out;
> 
> -	/* INITs are latched while CPU is in specific states */
> +	/*
> +	 * KVM_MP_STATE_INIT_RECEIVED means the processor is in
> +	 * INIT state; latched init should be reported using
> +	 * KVM_SET_VCPU_EVENTS, so reject it here.
> +	 */

Yes this is a good comment. Thanks for adding it.

> 	if ((kvm_vcpu_latch_init(vcpu) || vcpu->arch.smi_pending) &&
> 	    (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED ||
> 	     mp_state->mp_state == KVM_MP_STATE_INIT_RECEIVED))
> 
> 
> I'm not sure why you're removing the first hunk, it's just meant to
> explain why it needs to be a kvm_x86_ops in case the reader is not
> thinking about nested virtualization.
> 
> Paolo
>
Paolo Bonzini Nov. 11, 2019, 2:02 p.m. UTC | #3
On 11/11/19 14:46, Liran Alon wrote:
>> +	 * INITs are latched while CPU is in specific states
>> +	 * (SMM, VMX non-root mode, SVM with GIF=0).
> I didn’t want this line of comment as it may diverge from the implementation of kvm_vcpu_latch_init().
> That’s why I removed it.
> 
>> 	 * Because a CPU cannot be in these states immediately
>> 	 * after it has processed an INIT signal (and thus in
>> 	 * KVM_MP_STATE_INIT_RECEIVED state), just eat SIPIs

Got it... on the other hand knowing the specific states clarifies why
they cannot be in that state immediately after processing INIT.  It's a
bit of a catch-22 indeed.

Paolo

Patch

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index b29d00b661ff..0df265248cae 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2702,14 +2702,13 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 		return;
 
 	/*
-	 * INITs are latched while CPU is in specific states
-	 * (SMM, VMX non-root mode, SVM with GIF=0).
+	 * INITs are latched while CPU is in specific states.
 	 * Because a CPU cannot be in these states immediately
 	 * after it has processed an INIT signal (and thus in
 	 * KVM_MP_STATE_INIT_RECEIVED state), just eat SIPIs
 	 * and leave the INIT pending.
 	 */
-	if (is_smm(vcpu) || kvm_x86_ops->apic_init_signal_blocked(vcpu)) {
+	if (kvm_vcpu_latch_init(vcpu)) {
 		WARN_ON_ONCE(vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED);
 		if (test_bit(KVM_APIC_SIPI, &apic->pending_events))
 			clear_bit(KVM_APIC_SIPI, &apic->pending_events);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f41d5d05e9f2..eb992f5d299f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8675,8 +8675,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 	    mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
 		goto out;
 
-	/* INITs are latched while in SMM */
-	if ((is_smm(vcpu) || vcpu->arch.smi_pending) &&
+	/* INITs are latched while CPU is in specific states */
+	if ((kvm_vcpu_latch_init(vcpu) || vcpu->arch.smi_pending) &&
 	    (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED ||
 	     mp_state->mp_state == KVM_MP_STATE_INIT_RECEIVED))
 		goto out;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf7442a822b..d40da892f889 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -260,6 +260,11 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 	return !(kvm->arch.disabled_quirks & quirk);
 }
 
+static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
+{
+	return is_smm(vcpu) || kvm_x86_ops->apic_init_signal_blocked(vcpu);
+}
+
 void kvm_set_pending_timer(struct kvm_vcpu *vcpu);
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);