
[v2,37/43] KVM: SVM: Unconditionally mark AVIC as running on vCPU load (with APICv)

Message ID 20211009021236.4122790-38-seanjc@google.com
State New, archived
Series KVM: Halt-polling and x86 APICv overhaul

Commit Message

Sean Christopherson Oct. 9, 2021, 2:12 a.m. UTC
Always mark the AVIC as "running" on vCPU load when the AVIC is enabled and
drop the vcpu_blocking/unblocking hooks that toggle "running".  There is no
harm in keeping the flag set for a wee bit longer when a vCPU is blocking,
i.e. between the start of blocking and being scheduled out.  At worst, an
agent in the host will unnecessarily signal the doorbell, but that's
already the status quo in KVM as the "running" flag is set the entire time
a vCPU is loaded, not just when it's actively running the guest.

In addition to simplifying the code, keeping the "running" flag set longer
can reduce the number of VM-Exits due to incomplete IPI delivery.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/avic.c | 53 +++++++++++++----------------------------
 arch/x86/kvm/svm/svm.c  |  8 -------
 arch/x86/kvm/svm/svm.h  |  3 ---
 3 files changed, 17 insertions(+), 47 deletions(-)
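
For context on the changelog's "at worst, an unnecessary doorbell" claim: when a
sender posts an interrupt, it checks the target's IS_RUNNING bit and either rings
the AVIC doorbell or falls back to a software wakeup.  A minimal sketch of that
delivery path, paraphrasing the logic in svm_deliver_avic_intr() rather than
quoting it (the function name below is made up for illustration):

	static void avic_kick_vcpu_sketch(struct kvm_vcpu *vcpu)
	{
		if (avic_vcpu_is_running(vcpu)) {
			/*
			 * Target is believed to be in guest mode: ring the
			 * doorbell so hardware injects the IRQ without a
			 * VM-Exit.  If the "running" flag is stale, the
			 * write is merely wasted, not harmful.
			 */
			wrmsrl(SVM_AVIC_DOORBELL,
			       kvm_cpu_get_apicid(vcpu->cpu));
		} else {
			/* Target is scheduled out: wake it up in software. */
			kvm_vcpu_wake_up(vcpu);
		}
	}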

Comments

Paolo Bonzini Oct. 25, 2021, 2:22 p.m. UTC | #1
On 09/10/21 04:12, Sean Christopherson wrote:
> +	/* TODO: Document why the unblocking path checks for updates. */

Is that a riddle or what? :)

Paolo

> +	if (kvm_vcpu_is_blocking(vcpu) &&
> +	    kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu)) {
> +		kvm_vcpu_update_apicv(vcpu);
> +
> +		if (!kvm_vcpu_apicv_active(vcpu))
> +			return;
> +	}
> +
Sean Christopherson Oct. 25, 2021, 3:48 p.m. UTC | #2
On Mon, Oct 25, 2021, Paolo Bonzini wrote:
> On 09/10/21 04:12, Sean Christopherson wrote:
> > +	/* TODO: Document why the unblocking path checks for updates. */
> 
> Is that a riddle or what? :)

Yes?  I haven't been able to figure out why the unblocking path explicitly
checks and handles an APICv update.
Paolo Bonzini Oct. 25, 2021, 3:57 p.m. UTC | #3
On 25/10/21 17:48, Sean Christopherson wrote:
> On Mon, Oct 25, 2021, Paolo Bonzini wrote:
>> On 09/10/21 04:12, Sean Christopherson wrote:
>>> +	/* TODO: Document why the unblocking path checks for updates. */
>>
>> Is that a riddle or what? :)
> 
> Yes?  I haven't been able to figure out why the unblocking path explicitly
> checks and handles an APICv update.
> 

Challenge accepted.  In the original code, the check was there because 
without it avic_vcpu_load() would do nothing, and nothing would update 
the IS_RUNNING flag.

It shouldn't be necessary anymore since commit df7e4827c549 ("KVM: SVM: 
call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC", 
2021-08-20), where svm_refresh_apicv_exec_ctrl takes care of the 
avic_vcpu_load on the next VMRUN.

Paolo
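
For reference, the shape of svm_refresh_apicv_exec_ctrl() after commit
df7e4827c549, trimmed and paraphrased rather than quoted verbatim:

	void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);
		struct vmcb *vmcb = svm->vmcb01.ptr;
		bool activated = kvm_vcpu_apicv_active(vcpu);

		if (!enable_apicv)
			return;

		if (activated) {
			/* Re-sync state the guest may have changed while inhibited. */
			avic_post_state_restore(vcpu);
			vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
		} else {
			vmcb->control.int_ctl &= ~AVIC_ENABLE_MASK;
		}
		vmcb_mark_dirty(vmcb, VMCB_AVIC);

		/*
		 * This is the part Paolo points at: toggling AVIC now goes
		 * through avic_vcpu_load()/avic_vcpu_put() directly, so the
		 * unblocking path no longer needs its own APICv update check.
		 */
		if (activated)
			avic_vcpu_load(vcpu, vcpu->cpu);
		else
			avic_vcpu_put(vcpu);
	}
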
Sean Christopherson Oct. 25, 2021, 4 p.m. UTC | #4
On Mon, Oct 25, 2021, Paolo Bonzini wrote:
> On 25/10/21 17:48, Sean Christopherson wrote:
> > On Mon, Oct 25, 2021, Paolo Bonzini wrote:
> > > On 09/10/21 04:12, Sean Christopherson wrote:
> > > > +	/* TODO: Document why the unblocking path checks for updates. */
> > > 
> > > Is that a riddle or what? :)
> > 
> > Yes?  I haven't been able to figure out why the unblocking path explicitly
> > checks and handles an APICv update.
> > 
> 
> Challenge accepted.  In the original code, it was because without it
> avic_vcpu_load would do nothing, and nothing would update the IS_RUNNING
> flag.
> 
> It shouldn't be necessary anymore since commit df7e4827c549 ("KVM: SVM: call
> avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC", 2021-08-20),
> where svm_refresh_apicv_exec_ctrl takes care of the avic_vcpu_load on the
> next VMRUN.

Aha!  Thanks, I'll work in a removal for the next version.
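
Concretely, dropping the TODO block would leave avic_vcpu_load() looking roughly
like this (a sketch of the planned follow-up, derived from this patch, not a
posted version):

	void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		u64 entry;
		int h_physical_id = kvm_cpu_get_apicid(cpu);
		struct vcpu_svm *svm = to_svm(vcpu);

		if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
			return;

		/*
		 * Unconditionally mark the AVIC as "running"; pending IRQs are
		 * caught by kvm_vcpu_check_block(), and avic_vcpu_put() clears
		 * the flag again when the vCPU is scheduled out.
		 */
		entry = READ_ONCE(*(svm->avic_physical_id_cache));
		WARN_ON(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);

		entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
		entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
		entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;

		WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
		avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
	}
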
Maxim Levitsky Oct. 31, 2021, 4:34 p.m. UTC | #5
On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> Always mark the AVIC as "running" on vCPU load when the AVIC is enabled and
> drop the vcpu_blocking/unblocking hooks that toggle "running".  There is no
> harm in keeping the flag set for a wee bit longer when a vCPU is blocking,
> i.e. between the start of blocking and being scheduled out.  At worst, an
> agent in the host will unnecessarily signal the doorbell, but that's
> already the status quo in KVM as the "running" flag is set the entire time
> a vCPU is loaded, not just when it's actively running the guest.
> 
> In addition to simplifying the code, keeping the "running" flag set longer
> can reduce the number of VM-Exits due to incomplete IPI delivery.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/svm/avic.c | 53 +++++++++++++----------------------------
>  arch/x86/kvm/svm/svm.c  |  8 -------
>  arch/x86/kvm/svm/svm.h  |  3 ---
>  3 files changed, 17 insertions(+), 47 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
> index b43b05610ade..213f5223f63e 100644
> --- a/arch/x86/kvm/svm/avic.c
> +++ b/arch/x86/kvm/svm/avic.c
> @@ -967,6 +967,15 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	int h_physical_id = kvm_cpu_get_apicid(cpu);
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  
> +	/* TODO: Document why the unblocking path checks for updates. */
> +	if (kvm_vcpu_is_blocking(vcpu) &&
> +	    kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu)) {
> +		kvm_vcpu_update_apicv(vcpu);
> +
> +		if (!kvm_vcpu_apicv_active(vcpu))
> +			return;
> +	}
> +
>  	/*
>  	 * Since the host physical APIC id is 8 bits,
>  	 * we can support host APIC ID upto 255.
> @@ -974,19 +983,21 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
>  		return;
>  
> +	/*
> +	 * Unconditionally mark the AVIC as "running", even if the vCPU is in
> +	 * kvm_vcpu_block().  kvm_vcpu_check_block() will detect pending IRQs
> +	 * and bail out of the block loop, and if not, avic_vcpu_put() will
> +	 * set the AVIC back to "not running" when the vCPU is scheduled out.
> +	 */
>  	entry = READ_ONCE(*(svm->avic_physical_id_cache));
>  	WARN_ON(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
>  
>  	entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
>  	entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
> -
> -	entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
> -	if (svm->avic_is_running)
> -		entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
> +	entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
>  
>  	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
> -	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id,
> -					svm->avic_is_running);
> +	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
>  }
>  
>  void avic_vcpu_put(struct kvm_vcpu *vcpu)
> @@ -1001,33 +1012,3 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
>  	entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
>  	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
>  }
> -
> -/*
> - * This function is called during VCPU halt/unhalt.
> - */
> -static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
> -{
> -	struct vcpu_svm *svm = to_svm(vcpu);
> -
> -	svm->avic_is_running = is_run;
> -
> -	if (!kvm_vcpu_apicv_active(vcpu))
> -		return;
> -
> -	if (is_run)
> -		avic_vcpu_load(vcpu, vcpu->cpu);
> -	else
> -		avic_vcpu_put(vcpu);
> -}
> -
> -void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
> -{
> -	avic_set_running(vcpu, false);
> -}
> -
> -void svm_vcpu_unblocking(struct kvm_vcpu *vcpu)
> -{
> -	if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
> -		kvm_vcpu_update_apicv(vcpu);
> -	avic_set_running(vcpu, true);
> -}
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 89077160d463..a1ca5707f2c8 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -1433,12 +1433,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
>  	if (err)
>  		goto error_free_vmsa_page;
>  
> -	/* We initialize this flag to true to make sure that the is_running
> -	 * bit would be set the first time the vcpu is loaded.
> -	 */
> -	if (irqchip_in_kernel(vcpu->kvm) && kvm_apicv_activated(vcpu->kvm))
> -		svm->avic_is_running = true;
> -
>  	svm->msrpm = svm_vcpu_alloc_msrpm();
>  	if (!svm->msrpm) {
>  		err = -ENOMEM;
> @@ -4597,8 +4591,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>  	.prepare_guest_switch = svm_prepare_guest_switch,
>  	.vcpu_load = svm_vcpu_load,
>  	.vcpu_put = svm_vcpu_put,
> -	.vcpu_blocking = svm_vcpu_blocking,
> -	.vcpu_unblocking = svm_vcpu_unblocking,
>  
>  	.update_exception_bitmap = svm_update_exception_bitmap,
>  	.get_msr_feature = svm_get_msr_feature,
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 7f5b01bbee29..652d71acfb6c 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -169,7 +169,6 @@ struct vcpu_svm {
>  	u32 dfr_reg;
>  	struct page *avic_backing_page;
>  	u64 *avic_physical_id_cache;
> -	bool avic_is_running;
>  
>  	/*
>  	 * Per-vcpu list of struct amd_svm_iommu_ir:
> @@ -529,8 +528,6 @@ int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec);
>  bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
>  int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
>  		       uint32_t guest_irq, bool set);
> -void svm_vcpu_blocking(struct kvm_vcpu *vcpu);
> -void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
>  
>  /* sev.c */
>  

Looks good. It is nice to get rid of all of this logic that was just making things more complicated.

Something else that would be nice to do here, which I didn't finish back when I worked on AVIC, would be
to rename avic_vcpu_load/avic_vcpu_put, because those now also run on AVIC inhibit/uninhibit.

Basically, svm_refresh_apicv_exec_ctrl is the full AVIC activate/deactivate, while
avic_vcpu_load/avic_vcpu_put are the lighter-weight partial AVIC activation/deactivation functions.

So, minus Paolo's comment about updating the AVIC on unblock, which I missed back when I wrote
my patches:

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky

Patch

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index b43b05610ade..213f5223f63e 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -967,6 +967,15 @@  void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	int h_physical_id = kvm_cpu_get_apicid(cpu);
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	/* TODO: Document why the unblocking path checks for updates. */
+	if (kvm_vcpu_is_blocking(vcpu) &&
+	    kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu)) {
+		kvm_vcpu_update_apicv(vcpu);
+
+		if (!kvm_vcpu_apicv_active(vcpu))
+			return;
+	}
+
 	/*
 	 * Since the host physical APIC id is 8 bits,
 	 * we can support host APIC ID upto 255.
@@ -974,19 +983,21 @@  void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
 		return;
 
+	/*
+	 * Unconditionally mark the AVIC as "running", even if the vCPU is in
+	 * kvm_vcpu_block().  kvm_vcpu_check_block() will detect pending IRQs
+	 * and bail out of the block loop, and if not, avic_vcpu_put() will
+	 * set the AVIC back to "not running" when the vCPU is scheduled out.
+	 */
 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
 	WARN_ON(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
 
 	entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
 	entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
-
-	entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
-	if (svm->avic_is_running)
-		entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+	entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
 
 	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
-	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id,
-					svm->avic_is_running);
+	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
 }
 
 void avic_vcpu_put(struct kvm_vcpu *vcpu)
@@ -1001,33 +1012,3 @@  void avic_vcpu_put(struct kvm_vcpu *vcpu)
 	entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
 	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
 }
-
-/*
- * This function is called during VCPU halt/unhalt.
- */
-static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	svm->avic_is_running = is_run;
-
-	if (!kvm_vcpu_apicv_active(vcpu))
-		return;
-
-	if (is_run)
-		avic_vcpu_load(vcpu, vcpu->cpu);
-	else
-		avic_vcpu_put(vcpu);
-}
-
-void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
-{
-	avic_set_running(vcpu, false);
-}
-
-void svm_vcpu_unblocking(struct kvm_vcpu *vcpu)
-{
-	if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
-		kvm_vcpu_update_apicv(vcpu);
-	avic_set_running(vcpu, true);
-}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 89077160d463..a1ca5707f2c8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1433,12 +1433,6 @@  static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (err)
 		goto error_free_vmsa_page;
 
-	/* We initialize this flag to true to make sure that the is_running
-	 * bit would be set the first time the vcpu is loaded.
-	 */
-	if (irqchip_in_kernel(vcpu->kvm) && kvm_apicv_activated(vcpu->kvm))
-		svm->avic_is_running = true;
-
 	svm->msrpm = svm_vcpu_alloc_msrpm();
 	if (!svm->msrpm) {
 		err = -ENOMEM;
@@ -4597,8 +4591,6 @@  static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.prepare_guest_switch = svm_prepare_guest_switch,
 	.vcpu_load = svm_vcpu_load,
 	.vcpu_put = svm_vcpu_put,
-	.vcpu_blocking = svm_vcpu_blocking,
-	.vcpu_unblocking = svm_vcpu_unblocking,
 
 	.update_exception_bitmap = svm_update_exception_bitmap,
 	.get_msr_feature = svm_get_msr_feature,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7f5b01bbee29..652d71acfb6c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -169,7 +169,6 @@  struct vcpu_svm {
 	u32 dfr_reg;
 	struct page *avic_backing_page;
 	u64 *avic_physical_id_cache;
-	bool avic_is_running;
 
 	/*
 	 * Per-vcpu list of struct amd_svm_iommu_ir:
@@ -529,8 +528,6 @@  int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec);
 bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
 int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
 		       uint32_t guest_irq, bool set);
-void svm_vcpu_blocking(struct kvm_vcpu *vcpu);
-void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
 
 /* sev.c */