
[v4,07/17] svm: Add support for setup/destroy virtual APIC backing page for AVIC

Message ID 1572648072-84536-8-git-send-email-suravee.suthikulpanit@amd.com (mailing list archive)
State New, archived
Series kvm: x86: Support AMD SVM AVIC w/ in-kernel irqchip mode

Commit Message

Suthikulpanit, Suravee Nov. 1, 2019, 10:41 p.m. UTC
Re-factor avic_init_access_page() to avic_update_access_page() since
activating/deactivating AVIC requires setting/unsetting the memory region used
for the virtual APIC backing page (APIC_ACCESS_PAGE_PRIVATE_MEMSLOT).

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 arch/x86/kvm/svm.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
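
As a note on the mechanism: __x86_set_memory_region() treats a size of 0 as a
request to delete the memslot, so the single reworked helper can both create
and remove APIC_ACCESS_PAGE_PRIVATE_MEMSLOT depending on the activate flag:

  /* activate == true:  install the 4KB private memslot at
   *                     APIC_DEFAULT_PHYS_BASE (size PAGE_SIZE).
   * activate == false: pass size 0, which asks __x86_set_memory_region()
   *                     to delete the memslot instead of creating it. */
  ret = __x86_set_memory_region(kvm, APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
                                APIC_DEFAULT_PHYS_BASE,
                                activate ? PAGE_SIZE : 0);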

Comments

Roman Kagan Nov. 4, 2019, 9:53 p.m. UTC | #1
On Fri, Nov 01, 2019 at 10:41:30PM +0000, Suthikulpanit, Suravee wrote:
> Re-factor avic_init_access_page() to avic_update_access_page() since
> activating/deactivating AVIC requires setting/unsetting the memory region used
> for the virtual APIC backing page (APIC_ACCESS_PAGE_PRIVATE_MEMSLOT).

AFAICT the patch actually touches the (de)allocation of the APIC access
page rather than the APIC backing page (or I'm confused in the
nomenclature).

Thanks,
Roman.

> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>  arch/x86/kvm/svm.c | 11 +++++------
>  1 file changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index b7d0adc..46842a2 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1668,23 +1668,22 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
>   * field of the VMCB. Therefore, we set up the
>   * APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (4KB) here.
>   */
> -static int avic_init_access_page(struct kvm_vcpu *vcpu)
> +static int avic_update_access_page(struct kvm *kvm, bool activate)
>  {
> -	struct kvm *kvm = vcpu->kvm;
>  	int ret = 0;
>  
>  	mutex_lock(&kvm->slots_lock);
> -	if (kvm->arch.apic_access_page_done)
> +	if (kvm->arch.apic_access_page_done == activate)
>  		goto out;
>  
>  	ret = __x86_set_memory_region(kvm,
>  				      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
>  				      APIC_DEFAULT_PHYS_BASE,
> -				      PAGE_SIZE);
> +				      activate ? PAGE_SIZE : 0);
>  	if (ret)
>  		goto out;
>  
> -	kvm->arch.apic_access_page_done = true;
> +	kvm->arch.apic_access_page_done = activate;
>  out:
>  	mutex_unlock(&kvm->slots_lock);
>  	return ret;
> @@ -1697,7 +1696,7 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
>  	int id = vcpu->vcpu_id;
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  
> -	ret = avic_init_access_page(vcpu);
> +	ret = avic_update_access_page(vcpu->kvm, true);
>  	if (ret)
>  		return ret;
>  
> -- 
> 1.8.3.1
>
Suthikulpanit, Suravee Nov. 12, 2019, 12:05 a.m. UTC | #2
Roman,

On 11/4/19 3:53 PM, Roman Kagan wrote:
> On Fri, Nov 01, 2019 at 10:41:30PM +0000, Suthikulpanit, Suravee wrote:
>> Re-factor avic_init_access_page() to avic_update_access_page() since
>> activating/deactivating AVIC requires setting/unsetting the memory region used
>> for the virtual APIC backing page (APIC_ACCESS_PAGE_PRIVATE_MEMSLOT).
>
> AFAICT the patch actually touches the (de)allocation of the APIC access
> page rather than the APIC backing page (or I'm confused in the
> nomenclature).

The APIC backing page is allocated during vCPU initialization, while
the APIC_ACCESS_PAGE_PRIVATE_MEMSLOT is initialized per-VM and is
used mainly to control access permissions for the APIC backing page.

There is a comment in arch/x86/kvm/svm.c:

  /**
   * Note:
   * AVIC hardware walks the nested page table to check permissions,
   * but does not use the SPA address specified in the leaf page
   * table entry since it uses address in the AVIC_BACKING_PAGE pointer
   * field of the VMCB. Therefore, we set up the
   * APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (4KB) here.
   */

When deactivating APICv, we do not destroy the APIC backing page, but
we do need to de-allocate the APIC_ACCESS_PAGE_PRIVATE_MEMSLOT.

Thanks,
Suravee

> Thanks,
> Roman.
>
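
To make the distinction above concrete, here is a rough sketch of how the two
objects are used (simplified and not quoted verbatim; the backing-page
assignment approximates what avic_init_vmcb() does in svm.c):

  /* Per-vCPU: the backing page holds the virtual APIC register state.
   * Its physical address is written into the VMCB, and AVIC reads and
   * writes this page directly on guest APIC accesses. */
  svm->vmcb->control.avic_backing_page =
          __sme_set(page_to_phys(svm->avic_backing_page));

  /* Per-VM: the private memslot only maps APIC_DEFAULT_PHYS_BASE so the
   * nested page table walk finds a present mapping with the right
   * permissions; the data itself always comes from the backing page
   * above, never from this slot. */
  __x86_set_memory_region(kvm, APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
                          APIC_DEFAULT_PHYS_BASE, PAGE_SIZE);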

Patch

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b7d0adc..46842a2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1668,23 +1668,22 @@  static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
  * field of the VMCB. Therefore, we set up the
  * APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (4KB) here.
  */
-static int avic_init_access_page(struct kvm_vcpu *vcpu)
+static int avic_update_access_page(struct kvm *kvm, bool activate)
 {
-	struct kvm *kvm = vcpu->kvm;
 	int ret = 0;
 
 	mutex_lock(&kvm->slots_lock);
-	if (kvm->arch.apic_access_page_done)
+	if (kvm->arch.apic_access_page_done == activate)
 		goto out;
 
 	ret = __x86_set_memory_region(kvm,
 				      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
 				      APIC_DEFAULT_PHYS_BASE,
-				      PAGE_SIZE);
+				      activate ? PAGE_SIZE : 0);
 	if (ret)
 		goto out;
 
-	kvm->arch.apic_access_page_done = true;
+	kvm->arch.apic_access_page_done = activate;
 out:
 	mutex_unlock(&kvm->slots_lock);
 	return ret;
@@ -1697,7 +1696,7 @@  static int avic_init_backing_page(struct kvm_vcpu *vcpu)
 	int id = vcpu->vcpu_id;
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	ret = avic_init_access_page(vcpu);
+	ret = avic_update_access_page(vcpu->kvm, true);
 	if (ret)
 		return ret;
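
For completeness, a sketch of how the deactivation path is expected to be
driven by a later patch in this series (the hook name below is an assumption
shown for illustration, not something added by this patch):

  /* Assumed follow-up wiring, for illustration only: */
  static void svm_pre_update_apicv_exec_ctrl(struct kvm *kvm, bool activate)
  {
          /* activate == false shrinks the memslot request to size 0,
           * i.e. deletes APIC_ACCESS_PAGE_PRIVATE_MEMSLOT. */
          avic_update_access_page(kvm, activate);
  }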