
[v2] x86/passthrough: fix migration of MSI when using posted interrupts

Message ID 20191009125252.3112-1-roger.pau@citrix.com (mailing list archive)
State Superseded
Series [v2] x86/passthrough: fix migration of MSI when using posted interrupts

Commit Message

Roger Pau Monne Oct. 9, 2019, 12:52 p.m. UTC
When using posted interrupts, and the guest migrates an MSI between
vCPUs, Xen needs to flush any pending PIRR vectors on the previous
vCPU, or else those vectors could get wrongly injected at a later
point, when the MSI fields have already been updated.

Rename sync_pir_to_irr to vlapic_sync_pir_to_irr and export it, so
that it can be called when updating the posted interrupt descriptor
field in pi_update_irte. While there, also remove the unlock_out label
from pi_update_irte: it's used by a single goto, and removing it makes
the function smaller.

Note that PIRR is synced to IRR in both pt_irq_destroy_bind and
pt_irq_create_bind when the interrupt delivery data is being updated.

Also store the vCPU ID in multi-destination mode when using posted
interrupts and the interrupt is bound to a single vCPU, so that posted
interrupts can be used.

While there, guard pi_update_irte with CONFIG_HVM, since it's only
used with HVM guests.

Reported-by: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Joe Jin <joe.jin@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
---
I would like to see a bug fix for this issue in 4.13. The fix here only
affects posted interrupts, hence I think the risk of breaking anything
else is low.
---
Changes since v1:
 - Store the vcpu id also in multi-dest mode if the interrupt is bound
   to a vcpu for posted delivery.
 - s/#if/#ifdef/.
---
 xen/arch/x86/hvm/vlapic.c              |  6 +++---
 xen/drivers/passthrough/io.c           | 13 ++++++++++---
 xen/drivers/passthrough/vtd/intremap.c | 15 ++++++++-------
 xen/include/asm-x86/hvm/vlapic.h       |  2 ++
 xen/include/asm-x86/iommu.h            |  2 +-
 5 files changed, 24 insertions(+), 14 deletions(-)

Comments

Jan Beulich Oct. 9, 2019, 1:35 p.m. UTC | #1
On 09.10.2019 14:52, Roger Pau Monne wrote:
> When using posted interrupts and the guest migrates MSI from vCPUs Xen
> needs to flush any pending PIRR vectors on the previous vCPU, or else
> those vectors could get wrongly injected at a later point when the MSI
> fields are already updated.
> 
> Rename sync_pir_to_irr to vlapic_sync_pir_to_irr and export it so it
> can be called when updating the posted interrupt descriptor field in
> pi_update_irte. While there also remove the unlock_out from
> pi_update_irte, it's used in a single goto and removing it makes the
> function smaller.
> 
> Note that PIRR is synced to IRR both in pt_irq_destroy_bind and
> pt_irq_create_bind when the interrupt delivery data is being updated.
> 
> Also store the vCPU ID in multi-destination mode when using posted
> interrupts and the interrupt is bound to a single vCPU in order for
> posted interrupts to be used.
> 
> While there guard pi_update_irte with CONFIG_HVM since it's only used
> with HVM guests.
> 
> Reported-by: Joe Jin <joe.jin@oracle.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Like for the other patch I'd prefer to wait a little with committing
(even if the VT-d ack appeared quickly) until hopefully a Tested-by
could be provided.

Jan
Joe Jin Oct. 14, 2019, 2:36 p.m. UTC | #2
On 10/9/19 6:35 AM, Jan Beulich wrote:
> On 09.10.2019 14:52, Roger Pau Monne wrote:
>> When using posted interrupts and the guest migrates MSI from vCPUs Xen
>> needs to flush any pending PIRR vectors on the previous vCPU, or else
>> those vectors could get wrongly injected at a later point when the MSI
>> fields are already updated.
>>
>> Rename sync_pir_to_irr to vlapic_sync_pir_to_irr and export it so it
>> can be called when updating the posted interrupt descriptor field in
>> pi_update_irte. While there also remove the unlock_out from
>> pi_update_irte, it's used in a single goto and removing it makes the
>> function smaller.
>>
>> Note that PIRR is synced to IRR both in pt_irq_destroy_bind and
>> pt_irq_create_bind when the interrupt delivery data is being updated.
>>
>> Also store the vCPU ID in multi-destination mode when using posted
>> interrupts and the interrupt is bound to a single vCPU in order for
>> posted interrupts to be used.
>>
>> While there guard pi_update_irte with CONFIG_HVM since it's only used
>> with HVM guests.
>>
>> Reported-by: Joe Jin <joe.jin@oracle.com>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Like for the other patch I'd prefer to wait a little with committing
> (even if the VT-d ack appeared quickly) until hopefully a Tested-by
> could be provided.

My test env has not been fixed yet, and I'm not sure when it will be;
once it's available I'll test it.

Thanks,
Joe
Joe Jin Oct. 30, 2019, 12:20 a.m. UTC | #3
Hi Roger & Jan,

I got my test env back, backported the patch to stable-4.12, and ran
the same test. I still see the original issue; the guest kernel
printed this error:

 kernel:do_IRQ: 20.114 No irq handler for vector (irq -1)

After that, the passed-through InfiniBand VF stopped working.

My patch is below, please check:

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index a1a43cd792..2d175d2a00 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -111,6 +111,12 @@ static void vlapic_clear_irr(int vector, struct vlapic *vlapic)
     vlapic_clear_vector(vector, &vlapic->regs->data[APIC_IRR]);
 }
 
+void vlapic_sync_pir_to_irr(struct vcpu *v)
+{
+    if ( hvm_funcs.sync_pir_to_irr )
+        hvm_funcs.sync_pir_to_irr(v);
+}
+
 static int vlapic_find_highest_irr(struct vlapic *vlapic)
 {
     if ( hvm_funcs.sync_pir_to_irr )
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 4290c7c710..b628adea4c 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -341,7 +341,7 @@ int pt_irq_create_bind(
     {
         uint8_t dest, delivery_mode;
         bool dest_mode;
-        int dest_vcpu_id;
+        int dest_vcpu_id, prev_vcpu_id = -1;
         const struct vcpu *vcpu;
         uint32_t gflags = pt_irq_bind->u.msi.gflags &
                           ~XEN_DOMCTL_VMSI_X86_UNMASKED;
@@ -411,6 +411,7 @@ int pt_irq_create_bind(
 
                 pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
                 pirq_dpci->gmsi.gflags = gflags;
+                prev_vcpu_id = pirq_dpci->gmsi.dest_vcpu_id;
             }
         }
         /* Calculate dest_vcpu_id for MSI-type pirq migration. */
@@ -432,7 +433,10 @@ int pt_irq_create_bind(
                 vcpu = vector_hashing_dest(d, dest, dest_mode,
                                            pirq_dpci->gmsi.gvec);
             if ( vcpu )
+            {
                 pirq_dpci->gmsi.posted = true;
+                pirq_dpci->gmsi.dest_vcpu_id = vcpu->vcpu_id;
+            }
         }
         if ( vcpu && iommu_enabled )
             hvm_migrate_pirq(pirq_dpci, vcpu);
@@ -440,7 +444,8 @@ int pt_irq_create_bind(
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
             pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
-                           info, pirq_dpci->gmsi.gvec);
+                           info, pirq_dpci->gmsi.gvec,
+                           prev_vcpu_id >= 0 ? d->vcpu[prev_vcpu_id] : NULL);
 
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
         {
@@ -729,7 +734,9 @@ int pt_irq_destroy_bind(
             what = "bogus";
     }
     else if ( pirq_dpci && pirq_dpci->gmsi.posted )
-        pi_update_irte(NULL, pirq, 0);
+        pi_update_irte(NULL, pirq, 0,
+                       pirq_dpci->gmsi.dest_vcpu_id >= 0
+                       ? d->vcpu[pirq_dpci->gmsi.dest_vcpu_id] : NULL);
 
     if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
          list_empty(&pirq_dpci->digl_list) )
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index c9927e4706..d788a4b9e7 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -961,12 +961,13 @@ void iommu_disable_x2apic_IR(void)
         disable_qinval(drhd->iommu);
 }
 
+#ifdef CONFIG_HVM
 /*
  * This function is used to update the IRTE for posted-interrupt
  * when guest changes MSI/MSI-X information.
  */
 int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-    const uint8_t gvec)
+    const uint8_t gvec, struct vcpu *prev)
 {
     struct irq_desc *desc;
     struct msi_desc *msi_desc;
@@ -979,8 +980,8 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
     msi_desc = desc->msi_desc;
     if ( !msi_desc )
     {
-        rc = -ENODEV;
-        goto unlock_out;
+        spin_unlock_irq(&desc->lock);
+        return -ENODEV;
     }
     msi_desc->pi_desc = pi_desc;
     msi_desc->gvec = gvec;
@@ -989,10 +990,10 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
 
     ASSERT(pcidevs_locked());
 
-    return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
-
- unlock_out:
-    spin_unlock_irq(&desc->lock);
+    rc = msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
+    if ( !rc && prev )
+         vlapic_sync_pir_to_irr(prev);
 
     return rc;
 }
+#endif
diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
index dde66b4f0f..b0017d1dae 100644
--- a/xen/include/asm-x86/hvm/vlapic.h
+++ b/xen/include/asm-x86/hvm/vlapic.h
@@ -150,4 +150,6 @@ bool_t vlapic_match_dest(
     const struct vlapic *target, const struct vlapic *source,
     int short_hand, uint32_t dest, bool_t dest_mode);
 
+void vlapic_sync_pir_to_irr(struct vcpu *v);
+
 #endif /* __ASM_X86_HVM_VLAPIC_H__ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 8dc392473d..32bfa23648 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -99,7 +99,7 @@ void iommu_disable_x2apic_IR(void);
 extern bool untrusted_msi;
 
 int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-                   const uint8_t gvec);
+                   const uint8_t gvec, struct vcpu *prev);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*

Thanks,
Joe

On 10/9/19 5:52 AM, Roger Pau Monne wrote:
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index 9466258d6f..d255ad8db7 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -106,7 +106,7 @@ static void vlapic_clear_irr(int vector, struct vlapic *vlapic)
>      vlapic_clear_vector(vector, &vlapic->regs->data[APIC_IRR]);
>  }
>  
> -static void sync_pir_to_irr(struct vcpu *v)
> +void vlapic_sync_pir_to_irr(struct vcpu *v)
>  {
>      if ( hvm_funcs.sync_pir_to_irr )
>          alternative_vcall(hvm_funcs.sync_pir_to_irr, v);
> @@ -114,7 +114,7 @@ static void sync_pir_to_irr(struct vcpu *v)
>  
>  static int vlapic_find_highest_irr(struct vlapic *vlapic)
>  {
> -    sync_pir_to_irr(vlapic_vcpu(vlapic));
> +    vlapic_sync_pir_to_irr(vlapic_vcpu(vlapic));
>  
>      return vlapic_find_highest_vector(&vlapic->regs->data[APIC_IRR]);
>  }
> @@ -1493,7 +1493,7 @@ static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
>      if ( !has_vlapic(v->domain) )
>          return 0;
>  
> -    sync_pir_to_irr(v);
> +    vlapic_sync_pir_to_irr(v);
>  
>      return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
>  }
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> index b292e79382..5bf1877726 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -341,7 +341,7 @@ int pt_irq_create_bind(
>      {
>          uint8_t dest, delivery_mode;
>          bool dest_mode;
> -        int dest_vcpu_id;
> +        int dest_vcpu_id, prev_vcpu_id = -1;
>          const struct vcpu *vcpu;
>          uint32_t gflags = pt_irq_bind->u.msi.gflags &
>                            ~XEN_DOMCTL_VMSI_X86_UNMASKED;
> @@ -411,6 +411,7 @@ int pt_irq_create_bind(
>  
>                  pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
>                  pirq_dpci->gmsi.gflags = gflags;
> +                prev_vcpu_id = pirq_dpci->gmsi.dest_vcpu_id;
>              }
>          }
>          /* Calculate dest_vcpu_id for MSI-type pirq migration. */
> @@ -432,7 +433,10 @@ int pt_irq_create_bind(
>                  vcpu = vector_hashing_dest(d, dest, dest_mode,
>                                             pirq_dpci->gmsi.gvec);
>              if ( vcpu )
> +            {
>                  pirq_dpci->gmsi.posted = true;
> +                pirq_dpci->gmsi.dest_vcpu_id = vcpu->vcpu_id;
> +            }
>          }
>          if ( vcpu && is_iommu_enabled(d) )
>              hvm_migrate_pirq(pirq_dpci, vcpu);
> @@ -440,7 +444,8 @@ int pt_irq_create_bind(
>          /* Use interrupt posting if it is supported. */
>          if ( iommu_intpost )
>              pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
> -                           info, pirq_dpci->gmsi.gvec);
> +                           info, pirq_dpci->gmsi.gvec,
> +                           prev_vcpu_id >= 0 ? d->vcpu[prev_vcpu_id] : NULL);
>  
>          if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
>          {
> @@ -729,7 +734,9 @@ int pt_irq_destroy_bind(
>              what = "bogus";
>      }
>      else if ( pirq_dpci && pirq_dpci->gmsi.posted )
> -        pi_update_irte(NULL, pirq, 0);
> +        pi_update_irte(NULL, pirq, 0,
> +                       pirq_dpci->gmsi.dest_vcpu_id >= 0
> +                       ? d->vcpu[pirq_dpci->gmsi.dest_vcpu_id] : NULL);
>  
>      if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
>           list_empty(&pirq_dpci->digl_list) )
> diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
> index bf846195c4..07c1c1627a 100644
> --- a/xen/drivers/passthrough/vtd/intremap.c
> +++ b/xen/drivers/passthrough/vtd/intremap.c
> @@ -946,12 +946,13 @@ void intel_iommu_disable_eim(void)
>          disable_qinval(drhd->iommu);
>  }
>  
> +#ifdef CONFIG_HVM
>  /*
>   * This function is used to update the IRTE for posted-interrupt
>   * when guest changes MSI/MSI-X information.
>   */
>  int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
> -    const uint8_t gvec)
> +    const uint8_t gvec, struct vcpu *prev)
>  {
>      struct irq_desc *desc;
>      struct msi_desc *msi_desc;
> @@ -964,8 +965,8 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
>      msi_desc = desc->msi_desc;
>      if ( !msi_desc )
>      {
> -        rc = -ENODEV;
> -        goto unlock_out;
> +        spin_unlock_irq(&desc->lock);
> +        return -ENODEV;
>      }
>      msi_desc->pi_desc = pi_desc;
>      msi_desc->gvec = gvec;
> @@ -974,10 +975,10 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
>  
>      ASSERT(pcidevs_locked());
>  
> -    return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
> -
> - unlock_out:
> -    spin_unlock_irq(&desc->lock);
> +    rc = msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
> +    if ( !rc && prev )
> +         vlapic_sync_pir_to_irr(prev);
>  
>      return rc;
>  }
> +#endif
> diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
> index dde66b4f0f..b0017d1dae 100644
> --- a/xen/include/asm-x86/hvm/vlapic.h
> +++ b/xen/include/asm-x86/hvm/vlapic.h
> @@ -150,4 +150,6 @@ bool_t vlapic_match_dest(
>      const struct vlapic *target, const struct vlapic *source,
>      int short_hand, uint32_t dest, bool_t dest_mode);
>  
> +void vlapic_sync_pir_to_irr(struct vcpu *v);
> +
>  #endif /* __ASM_X86_HVM_VLAPIC_H__ */
> diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
> index 85741f7c96..314dcfbe47 100644
> --- a/xen/include/asm-x86/iommu.h
> +++ b/xen/include/asm-x86/iommu.h
> @@ -119,7 +119,7 @@ static inline void iommu_disable_x2apic(void)
>  extern bool untrusted_msi;
>  
>  int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
> -                   const uint8_t gvec);
> +                   const uint8_t gvec, struct vcpu *prev);
>  
>  #endif /* !__ARCH_X86_IOMMU_H__ */
>  /*
>
Roger Pau Monne Oct. 30, 2019, 8:24 a.m. UTC | #4
On Tue, Oct 29, 2019 at 05:20:18PM -0700, Joe Jin wrote:
> Hi Roger & Jan,
> 
> I got my test env back, and back the patch to stable-4.12, run same
> test, I still seen original issue, guest kernel printed error:
> 
>  kernel:do_IRQ: 20.114 No irq handler for vector (irq -1)
> 
> After that, pass through infiniband VF stopped to work.

Thanks for the testing, TBH I'm not sure what's wrong here, since I
intended my proposed patch to be functionally equivalent to your first
proposed fix.

> My patch as below, please check:

The patch LGTM.

Can you try to add the following debug patch on top of the existing
one and report the output that you get on the Xen console?

---8<---
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index 07c1c1627a..91a1dde131 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -977,7 +977,13 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
 
     rc = msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
     if ( !rc && prev )
+{
+         printk("sync PIRR on vcpu#%u\n", prev->vcpu_id);
          vlapic_sync_pir_to_irr(prev);
+}
+else
+         printk("not syncing PIRR rc: %d vcpu#%u\n",
+                rc, prev ? prev->vcpu_id : -1);
 
     return rc;
 }
Joe Jin Oct. 30, 2019, 4:38 p.m. UTC | #5
On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> Can you try to add the following debug patch on top of the existing
> one and report the output that you get on the Xen console?

I applied the debug patch and ran the test again, but neither message
was printed. The Xen log from the serial console is attached; it seems
pi_update_irte() was never called, because iommu_intpost was false.

Thanks,
Joe
(XEN) Xen version 4.12.2-pre (root@) (gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23.0.1)) debug=n  Tue Oct 29 02:43:40 PDT 2019
(XEN) Latest ChangeSet:                                                         
(XEN) Bootloader: GRUB 2.02~beta2                                               
(XEN) Command line: placeholder dom0_mem=max:3456M allowsuperpage dom0_vcpus_pin=numa dom0_max_vcpus=4 crashkernel=512M@1024M iommu=1 hvm_debug=832 guest_loglvl=all com1=115200,8n1 console=com1 conring_size=1m console_to_ring                                     
(XEN) Xen image load base address: 0x77200000                                   
(XEN) Video information:                                                        
(XEN)  VGA is text mode 80x25, font 8x16                                        
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds                     
(XEN)  EDID info not retrieved because no DDC retrieval method detected         
(XEN) Disc information:                                                         
(XEN)  Found 2 MBR signatures                                                   
(XEN)  Found 2 EDD information structures                                       
(XEN) Xen-e820 RAM map:                                                         
(XEN)  0000000000000000 - 000000000009b000 (usable)                             
(XEN)  000000000009b000 - 00000000000a0000 (reserved)                           
(XEN)  00000000000e0000 - 0000000000100000 (reserved)                           
(XEN)  0000000000100000 - 0000000077928000 (usable)                             
(XEN)  0000000077928000 - 0000000079356000 (reserved)                          
(XEN)  0000000079356000 - 0000000079391000 (ACPI data)
(XEN)  0000000079391000 - 0000000079900000 (ACPI NVS)
(XEN)  0000000079900000 - 000000007bd4d000 (reserved)
(XEN)  000000007bd4d000 - 000000007bd58000 (usable)
(XEN)  000000007bd58000 - 000000007bd59000 (reserved)
(XEN)  000000007bd59000 - 000000007bd5c000 (usable)
(XEN)  000000007bd5c000 - 000000007bd5d000 (reserved)
(XEN)  000000007bd5d000 - 000000007bd5e000 (usable)
(XEN)  000000007bd5e000 - 000000007bde4000 (reserved)
(XEN)  000000007bde4000 - 000000007c000000 (usable)
(XEN)  0000000080000000 - 0000000090000000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000002080000000 (usable)
(XEN) Kdump: 512MB (524288kB) at 0x40000000
(XEN) ACPI: RSDP 000F0530, 0024 (r2 ORACLE)
(XEN) ACPI: XSDT 7936C0B0, 00E4 (r1 ORACLE     X5-2 30130200 AMI     10013)
(XEN) ACPI: FACP 7937F608, 010C (r5 ORACLE     X5-2 30130200 AMI     10013)
(XEN) ACPI: DSDT 7936C228, 133DE (r2 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: FACS 798FDF80, 0040
(XEN) ACPI: APIC 7937F718, 0224 (r3 ORACLE     X5-2 30130200 AMI     10013)
(XEN) ACPI: FPDT 7937F940, 0044 (r1 ORACLE     X5-2 30130200 AMI     10013)
(XEN) ACPI: FIDT 7937F988, 009C (r1 ORACLE     X5-2 30130200 AMI     10013)
(XEN) ACPI: SPMI 7937FA28, 0041 (r5 ORACLE     X5-2 30130200 AMI.        0)
(XEN) ACPI: MCFG 7937FA70, 003C (r1 ORACLE     X5-2 30130200 MSFT       97)
(XEN) ACPI: OEMS 7937FAB0, 07DC (r1 ORACLE     X5-2 30130200 ORCL        1)
(XEN) ACPI: UEFI 79380290, 0042 (r1 ORACLE     X5-2 30130200             0)
(XEN) ACPI: BDAT 793802D8, 0030 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: HPET 79380308, 0038 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: MSCT 79380340, 0090 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: PCCT 793803D0, 006E (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: PMCT 79380440, 0064 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: PMTT 793804A8, 0268 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: SLIT 79380710, 0030 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: SRAT 79380740, 0E58 (r3 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: WDDT 79381598, 0040 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: SSDT 793815D8, ED2F (r1 ORACLE    PmMgt 30130200 INTL 20120913)
(XEN) ACPI: OEMP 79390308, 0158 (r1 ORACLE     X5-2 30130200 ORCL        1)
(XEN) ACPI: DMAR 79390460, 0148 (r1 ORACLE     X5-2 30130200 INTL 20091013)
(XEN) ACPI: HEST 793905A8, 013C (r1 ORACLE     X5-2 30130200 INTL        1)
(XEN) ACPI: BERT 793906E8, 0030 (r1 ORACLE     X5-2 30130200 INTL        1)
(XEN) ACPI: ERST 79390718, 0230 (r1 ORACLE     X5-2 30130200 INTL        1)
(XEN) ACPI: EINJ 79390948, 0130 (r1 ORACLE     X5-2 30130200 INTL        1)
(XEN) System RAM: 130938MB (134081464kB)
(XEN) Domain heap initialised DMA width 32 bits
(XEN) Allocated console ring of 1024 KiB.
(XEN) ACPI: 32/64X FACS address mismatch in FADT - 798fdf80/0000000000000000, using 32
(XEN) IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
(XEN) IOAPIC[1]: apic_id 9, version 32, address 0xfec01000, GSI 24-47
(XEN) IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71
(XEN) Enabling APIC mode:  Phys.  Using 3 I/O APICs
(XEN) Switched to APIC driver x2apic_cluster
(XEN) xstate: size: 0x340 and states: 0x7
(XEN) CMCI: threshold 0x2 too large for CPU0 bank 17, using 0x1
(XEN) CMCI: threshold 0x2 too large for CPU0 bank 18, using 0x1
(XEN) CMCI: threshold 0x2 too large for CPU0 bank 19, using 0x1
(XEN) Speculative mitigation facilities:
(XEN)   Hardware features:
(XEN)   Compiled-in support: INDIRECT_THUNK SHADOW_PAGING
(XEN)   Xen settings: BTI-Thunk RETPOLINE, SPEC_CTRL: No, Other:
(XEN)   L1TF: believed vulnerable, maxphysaddr L1D 46, CPUID 46, Safe address 300000000000
(XEN)   Support for HVM VMs: RSB EAGER_FPU
(XEN)   Support for PV VMs: RSB EAGER_FPU
(XEN)   XPTI (64-bit PV only): Dom0 enabled, DomU enabled (with PCID)
(XEN)   PV L1TF shadowing: Dom0 disabled, DomU enabled
(XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Initializing Credit2 scheduler
(XEN) Platform timer is 14.318MHz HPET
(XEN) Detected 2394.674 MHz processor.
(XEN) Initing memory sharing.
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Posted Interrupt not enabled.
(XEN) Intel VT-d Shared EPT tables enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN)  - APIC Register Virtualization
(XEN)  - Virtual Interrupt Delivery
(XEN)  - Posted Interrupt Processing
(XEN)  - VMCS shadowing
(XEN)  - VM Functions
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) CMCI: threshold 0x2 too large for CPU16 bank 17, using 0x1
(XEN) CMCI: threshold 0x2 too large for CPU16 bank 18, using 0x1
(XEN) CMCI: threshold 0x2 too large for CPU16 bank 19, using 0x1
(XEN) Brought up 32 CPUs
(XEN) Dom0 has maximum 840 PIRQs
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x217f000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   000000202c000000->0000002030000000 (858882 pages to be allocated)
(XEN)  Init. ramdisk: 000000207db02000->000000207ffff6a8
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8217f000
(XEN)  Init. ramdisk: 0000000000000000->0000000000000000
(XEN)  Phys-Mach map: 0000008000000000->00000080006c0000
(XEN)  Start info:    ffffffff8217f000->ffffffff8217f4b8
(XEN)  Xenstore ring: 0000000000000000->0000000000000000
(XEN)  Console ring:  0000000000000000->0000000000000000
(XEN)  Page tables:   ffffffff82180000->ffffffff82195000
(XEN)  Boot stack:    ffffffff82195000->ffffffff82196000
(XEN)  TOTAL:         ffffffff80000000->ffffffff82400000
(XEN)  ENTRY ADDRESS: ffffffff81c871f0
(XEN) Dom0 has maximum 4 VCPUs
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Scrubbing Free RAM in background
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: All
(XEN) ***************************************************
(XEN) Booted on L1TF-vulnerable hardware with SMT/Hyperthreading
(XEN) enabled.  Please assess your configuration and choose an
(XEN) explicit 'smt=<bool>' setting.  See XSA-273.
(XEN) ***************************************************
(XEN) Booted on MLPDS/MFBDS-vulnerable hardware with SMT/Hyperthreading
(XEN) enabled.  Mitigations will not be fully effective.  Please
(XEN) choose an explicit smt=<bool> setting.  See XSA-297.
(XEN) ***************************************************
(XEN) 3... 2... 1... 
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 488kB init memory
ca-dev39.us.oracle.com login: (XEN) HVM d1v0 save: CPU
(XEN) HVM d1v1 save: CPU
(XEN) HVM d1v2 save: CPU
(XEN) HVM d1v3 save: CPU
(XEN) HVM d1v4 save: CPU
(XEN) HVM d1v5 save: CPU
(XEN) HVM d1v6 save: CPU
(XEN) HVM d1v7 save: CPU
(XEN) HVM d1v8 save: CPU
(XEN) HVM d1v9 save: CPU
(XEN) HVM d1v10 save: CPU
(XEN) HVM d1v11 save: CPU
(XEN) HVM d1v12 save: CPU
(XEN) HVM d1v13 save: CPU
(XEN) HVM d1v14 save: CPU
(XEN) HVM d1v15 save: CPU
(XEN) HVM d1v16 save: CPU
(XEN) HVM d1v17 save: CPU
(XEN) HVM d1v18 save: CPU
(XEN) HVM d1v19 save: CPU
(XEN) HVM d1v20 save: CPU
(XEN) HVM d1v21 save: CPU
(XEN) HVM d1v22 save: CPU
(XEN) HVM d1v23 save: CPU
(XEN) HVM d1v24 save: CPU
(XEN) HVM d1v25 save: CPU
(XEN) HVM d1v26 save: CPU
(XEN) HVM d1v27 save: CPU
(XEN) HVM d1v28 save: CPU
(XEN) HVM d1v29 save: CPU
(XEN) HVM d1v30 save: CPU
(XEN) HVM d1v31 save: CPU
(XEN) HVM d1 save: PIC
(XEN) HVM d1 save: IOAPIC
(XEN) HVM d1v0 save: LAPIC
(XEN) HVM d1v1 save: LAPIC
(XEN) HVM d1v2 save: LAPIC
(XEN) HVM d1v3 save: LAPIC
(XEN) HVM d1v4 save: LAPIC
(XEN) HVM d1v5 save: LAPIC
(XEN) HVM d1v6 save: LAPIC
(XEN) HVM d1v7 save: LAPIC
(XEN) HVM d1v8 save: LAPIC
(XEN) HVM d1v9 save: LAPIC
(XEN) HVM d1v10 save: LAPIC
(XEN) HVM d1v11 save: LAPIC
(XEN) HVM d1v12 save: LAPIC
(XEN) HVM d1v13 save: LAPIC
(XEN) HVM d1v14 save: LAPIC
(XEN) HVM d1v15 save: LAPIC
(XEN) HVM d1v16 save: LAPIC
(XEN) HVM d1v17 save: LAPIC
(XEN) HVM d1v18 save: LAPIC
(XEN) HVM d1v19 save: LAPIC
(XEN) HVM d1v20 save: LAPIC
(XEN) HVM d1v21 save: LAPIC
(XEN) HVM d1v22 save: LAPIC
(XEN) HVM d1v23 save: LAPIC
(XEN) HVM d1v24 save: LAPIC
(XEN) HVM d1v25 save: LAPIC
(XEN) HVM d1v26 save: LAPIC
(XEN) HVM d1v27 save: LAPIC
(XEN) HVM d1v28 save: LAPIC
(XEN) HVM d1v29 save: LAPIC
(XEN) HVM d1v30 save: LAPIC
(XEN) HVM d1v31 save: LAPIC
(XEN) HVM d1v0 save: LAPIC_REGS
(XEN) HVM d1v1 save: LAPIC_REGS
(XEN) HVM d1v2 save: LAPIC_REGS
(XEN) HVM d1v3 save: LAPIC_REGS
(XEN) HVM d1v4 save: LAPIC_REGS
(XEN) HVM d1v5 save: LAPIC_REGS
(XEN) HVM d1v6 save: LAPIC_REGS
(XEN) HVM d1v7 save: LAPIC_REGS
(XEN) HVM d1v8 save: LAPIC_REGS
(XEN) HVM d1v9 save: LAPIC_REGS
(XEN) HVM d1v10 save: LAPIC_REGS
(XEN) HVM d1v11 save: LAPIC_REGS
(XEN) HVM d1v12 save: LAPIC_REGS
(XEN) HVM d1v13 save: LAPIC_REGS
(XEN) HVM d1v14 save: LAPIC_REGS
(XEN) HVM d1v15 save: LAPIC_REGS
(XEN) HVM d1v16 save: LAPIC_REGS
(XEN) HVM d1v17 save: LAPIC_REGS
(XEN) HVM d1v18 save: LAPIC_REGS
(XEN) HVM d1v19 save: LAPIC_REGS
(XEN) HVM d1v20 save: LAPIC_REGS
(XEN) HVM d1v21 save: LAPIC_REGS
(XEN) HVM d1v22 save: LAPIC_REGS
(XEN) HVM d1v23 save: LAPIC_REGS
(XEN) HVM d1v24 save: LAPIC_REGS
(XEN) HVM d1v25 save: LAPIC_REGS
(XEN) HVM d1v26 save: LAPIC_REGS
(XEN) HVM d1v27 save: LAPIC_REGS
(XEN) HVM d1v28 save: LAPIC_REGS
(XEN) HVM d1v29 save: LAPIC_REGS
(XEN) HVM d1v30 save: LAPIC_REGS
(XEN) HVM d1v31 save: LAPIC_REGS
(XEN) HVM d1 save: PCI_IRQ
(XEN) HVM d1 save: ISA_IRQ
(XEN) HVM d1 save: PCI_LINK
(XEN) HVM d1 save: PIT
(XEN) HVM d1 save: RTC
(XEN) HVM d1 save: HPET
(XEN) HVM d1 save: PMTIMER
(XEN) HVM d1v0 save: MTRR
(XEN) HVM d1v1 save: MTRR
(XEN) HVM d1v2 save: MTRR
(XEN) HVM d1v3 save: MTRR
(XEN) HVM d1v4 save: MTRR
(XEN) HVM d1v5 save: MTRR
(XEN) HVM d1v6 save: MTRR
(XEN) HVM d1v7 save: MTRR
(XEN) HVM d1v8 save: MTRR
(XEN) HVM d1v9 save: MTRR
(XEN) HVM d1v10 save: MTRR
(XEN) HVM d1v11 save: MTRR
(XEN) HVM d1v12 save: MTRR
(XEN) HVM d1v13 save: MTRR
(XEN) HVM d1v14 save: MTRR
(XEN) HVM d1v15 save: MTRR
(XEN) HVM d1v16 save: MTRR
(XEN) HVM d1v17 save: MTRR
(XEN) HVM d1v18 save: MTRR
(XEN) HVM d1v19 save: MTRR
(XEN) HVM d1v20 save: MTRR
(XEN) HVM d1v21 save: MTRR
(XEN) HVM d1v22 save: MTRR
(XEN) HVM d1v23 save: MTRR
(XEN) HVM d1v24 save: MTRR
(XEN) HVM d1v25 save: MTRR
(XEN) HVM d1v26 save: MTRR
(XEN) HVM d1v27 save: MTRR
(XEN) HVM d1v28 save: MTRR
(XEN) HVM d1v29 save: MTRR
(XEN) HVM d1v30 save: MTRR
(XEN) HVM d1v31 save: MTRR
(XEN) HVM d1 save: VIRIDIAN_DOMAIN
(XEN) HVM d1v0 save: CPU_XSAVE
(XEN) HVM d1v1 save: CPU_XSAVE
(XEN) HVM d1v2 save: CPU_XSAVE
(XEN) HVM d1v3 save: CPU_XSAVE
(XEN) HVM d1v4 save: CPU_XSAVE
(XEN) HVM d1v5 save: CPU_XSAVE
(XEN) HVM d1v6 save: CPU_XSAVE
(XEN) HVM d1v7 save: CPU_XSAVE
(XEN) HVM d1v8 save: CPU_XSAVE
(XEN) HVM d1v9 save: CPU_XSAVE
(XEN) HVM d1v10 save: CPU_XSAVE
(XEN) HVM d1v11 save: CPU_XSAVE
(XEN) HVM d1v12 save: CPU_XSAVE
(XEN) HVM d1v13 save: CPU_XSAVE
(XEN) HVM d1v14 save: CPU_XSAVE
(XEN) HVM d1v15 save: CPU_XSAVE
(XEN) HVM d1v16 save: CPU_XSAVE
(XEN) HVM d1v17 save: CPU_XSAVE
(XEN) HVM d1v18 save: CPU_XSAVE
(XEN) HVM d1v19 save: CPU_XSAVE
(XEN) HVM d1v20 save: CPU_XSAVE
(XEN) HVM d1v21 save: CPU_XSAVE
(XEN) HVM d1v22 save: CPU_XSAVE
(XEN) HVM d1v23 save: CPU_XSAVE
(XEN) HVM d1v24 save: CPU_XSAVE
(XEN) HVM d1v25 save: CPU_XSAVE
(XEN) HVM d1v26 save: CPU_XSAVE
(XEN) HVM d1v27 save: CPU_XSAVE
(XEN) HVM d1v28 save: CPU_XSAVE
(XEN) HVM d1v29 save: CPU_XSAVE
(XEN) HVM d1v30 save: CPU_XSAVE
(XEN) HVM d1v31 save: CPU_XSAVE
(XEN) HVM d1v0 save: VIRIDIAN_VCPU
(XEN) HVM d1v1 save: VIRIDIAN_VCPU
(XEN) HVM d1v2 save: VIRIDIAN_VCPU
(XEN) HVM d1v3 save: VIRIDIAN_VCPU
(XEN) HVM d1v4 save: VIRIDIAN_VCPU
(XEN) HVM d1v5 save: VIRIDIAN_VCPU
(XEN) HVM d1v6 save: VIRIDIAN_VCPU
(XEN) HVM d1v7 save: VIRIDIAN_VCPU
(XEN) HVM d1v8 save: VIRIDIAN_VCPU
(XEN) HVM d1v9 save: VIRIDIAN_VCPU
(XEN) HVM d1v10 save: VIRIDIAN_VCPU
(XEN) HVM d1v11 save: VIRIDIAN_VCPU
(XEN) HVM d1v12 save: VIRIDIAN_VCPU
(XEN) HVM d1v13 save: VIRIDIAN_VCPU
(XEN) HVM d1v14 save: VIRIDIAN_VCPU
(XEN) HVM d1v15 save: VIRIDIAN_VCPU
(XEN) HVM d1v16 save: VIRIDIAN_VCPU
(XEN) HVM d1v17 save: VIRIDIAN_VCPU
(XEN) HVM d1v18 save: VIRIDIAN_VCPU
(XEN) HVM d1v19 save: VIRIDIAN_VCPU
(XEN) HVM d1v20 save: VIRIDIAN_VCPU
(XEN) HVM d1v21 save: VIRIDIAN_VCPU
(XEN) HVM d1v22 save: VIRIDIAN_VCPU
(XEN) HVM d1v23 save: VIRIDIAN_VCPU
(XEN) HVM d1v24 save: VIRIDIAN_VCPU
(XEN) HVM d1v25 save: VIRIDIAN_VCPU
(XEN) HVM d1v26 save: VIRIDIAN_VCPU
(XEN) HVM d1v27 save: VIRIDIAN_VCPU
(XEN) HVM d1v28 save: VIRIDIAN_VCPU
(XEN) HVM d1v29 save: VIRIDIAN_VCPU
(XEN) HVM d1v30 save: VIRIDIAN_VCPU
(XEN) HVM d1v31 save: VIRIDIAN_VCPU
(XEN) HVM d1v0 save: VMCE_VCPU
(XEN) HVM d1v1 save: VMCE_VCPU
(XEN) HVM d1v2 save: VMCE_VCPU
(XEN) HVM d1v3 save: VMCE_VCPU
(XEN) HVM d1v4 save: VMCE_VCPU
(XEN) HVM d1v5 save: VMCE_VCPU
(XEN) HVM d1v6 save: VMCE_VCPU
(XEN) HVM d1v7 save: VMCE_VCPU
(XEN) HVM d1v8 save: VMCE_VCPU
(XEN) HVM d1v9 save: VMCE_VCPU
(XEN) HVM d1v10 save: VMCE_VCPU
(XEN) HVM d1v11 save: VMCE_VCPU
(XEN) HVM d1v12 save: VMCE_VCPU
(XEN) HVM d1v13 save: VMCE_VCPU
(XEN) HVM d1v14 save: VMCE_VCPU
(XEN) HVM d1v15 save: VMCE_VCPU
(XEN) HVM d1v16 save: VMCE_VCPU
(XEN) HVM d1v17 save: VMCE_VCPU
(XEN) HVM d1v18 save: VMCE_VCPU
(XEN) HVM d1v19 save: VMCE_VCPU
(XEN) HVM d1v20 save: VMCE_VCPU
(XEN) HVM d1v21 save: VMCE_VCPU
(XEN) HVM d1v22 save: VMCE_VCPU
(XEN) HVM d1v23 save: VMCE_VCPU
(XEN) HVM d1v24 save: VMCE_VCPU
(XEN) HVM d1v25 save: VMCE_VCPU
(XEN) HVM d1v26 save: VMCE_VCPU
(XEN) HVM d1v27 save: VMCE_VCPU
(XEN) HVM d1v28 save: VMCE_VCPU
(XEN) HVM d1v29 save: VMCE_VCPU
(XEN) HVM d1v30 save: VMCE_VCPU
(XEN) HVM d1v31 save: VMCE_VCPU
(XEN) HVM d1v0 save: TSC_ADJUST
(XEN) HVM d1v1 save: TSC_ADJUST
(XEN) HVM d1v2 save: TSC_ADJUST
(XEN) HVM d1v3 save: TSC_ADJUST
(XEN) HVM d1v4 save: TSC_ADJUST
(XEN) HVM d1v5 save: TSC_ADJUST
(XEN) HVM d1v6 save: TSC_ADJUST
(XEN) HVM d1v7 save: TSC_ADJUST
(XEN) HVM d1v8 save: TSC_ADJUST
(XEN) HVM d1v9 save: TSC_ADJUST
(XEN) HVM d1v10 save: TSC_ADJUST
(XEN) HVM d1v11 save: TSC_ADJUST
(XEN) HVM d1v12 save: TSC_ADJUST
(XEN) HVM d1v13 save: TSC_ADJUST
(XEN) HVM d1v14 save: TSC_ADJUST
(XEN) HVM d1v15 save: TSC_ADJUST
(XEN) HVM d1v16 save: TSC_ADJUST
(XEN) HVM d1v17 save: TSC_ADJUST
(XEN) HVM d1v18 save: TSC_ADJUST
(XEN) HVM d1v19 save: TSC_ADJUST
(XEN) HVM d1v20 save: TSC_ADJUST
(XEN) HVM d1v21 save: TSC_ADJUST
(XEN) HVM d1v22 save: TSC_ADJUST
(XEN) HVM d1v23 save: TSC_ADJUST
(XEN) HVM d1v24 save: TSC_ADJUST
(XEN) HVM d1v25 save: TSC_ADJUST
(XEN) HVM d1v26 save: TSC_ADJUST
(XEN) HVM d1v27 save: TSC_ADJUST
(XEN) HVM d1v28 save: TSC_ADJUST
(XEN) HVM d1v29 save: TSC_ADJUST
(XEN) HVM d1v30 save: TSC_ADJUST
(XEN) HVM d1v31 save: TSC_ADJUST
(XEN) HVM d1v0 save: CPU_MSR
(XEN) HVM d1v1 save: CPU_MSR
(XEN) HVM d1v2 save: CPU_MSR
(XEN) HVM d1v3 save: CPU_MSR
(XEN) HVM d1v4 save: CPU_MSR
(XEN) HVM d1v5 save: CPU_MSR
(XEN) HVM d1v6 save: CPU_MSR
(XEN) HVM d1v7 save: CPU_MSR
(XEN) HVM d1v8 save: CPU_MSR
(XEN) HVM d1v9 save: CPU_MSR
(XEN) HVM d1v10 save: CPU_MSR
(XEN) HVM d1v11 save: CPU_MSR
(XEN) HVM d1v12 save: CPU_MSR
(XEN) HVM d1v13 save: CPU_MSR
(XEN) HVM d1v14 save: CPU_MSR
(XEN) HVM d1v15 save: CPU_MSR
(XEN) HVM d1v16 save: CPU_MSR
(XEN) HVM d1v17 save: CPU_MSR
(XEN) HVM d1v18 save: CPU_MSR
(XEN) HVM d1v19 save: CPU_MSR
(XEN) HVM d1v20 save: CPU_MSR
(XEN) HVM d1v21 save: CPU_MSR
(XEN) HVM d1v22 save: CPU_MSR
(XEN) HVM d1v23 save: CPU_MSR
(XEN) HVM d1v24 save: CPU_MSR
(XEN) HVM d1v25 save: CPU_MSR
(XEN) HVM d1v26 save: CPU_MSR
(XEN) HVM d1v27 save: CPU_MSR
(XEN) HVM d1v28 save: CPU_MSR
(XEN) HVM d1v29 save: CPU_MSR
(XEN) HVM d1v30 save: CPU_MSR
(XEN) HVM d1v31 save: CPU_MSR
(XEN) HVM1 restore: CPU 0
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) HVM d2v0 save: CPU
(XEN) HVM d2v1 save: CPU
(XEN) HVM d2v2 save: CPU
(XEN) HVM d2v3 save: CPU
(XEN) HVM d2v4 save: CPU
(XEN) HVM d2v5 save: CPU
(XEN) HVM d2v6 save: CPU
(XEN) HVM d2v7 save: CPU
(XEN) HVM d2v8 save: CPU
(XEN) HVM d2v9 save: CPU
(XEN) HVM d2v10 save: CPU
(XEN) HVM d2v11 save: CPU
(XEN) HVM d2v12 save: CPU
(XEN) HVM d2v13 save: CPU
(XEN) HVM d2v14 save: CPU
(XEN) HVM d2v15 save: CPU
(XEN) HVM d2v16 save: CPU
(XEN) HVM d2v17 save: CPU
(XEN) HVM d2v18 save: CPU
(XEN) HVM d2v19 save: CPU
(XEN) HVM d2v20 save: CPU
(XEN) HVM d2v21 save: CPU
(XEN) HVM d2v22 save: CPU
(XEN) HVM d2v23 save: CPU
(XEN) HVM d2v24 save: CPU
(XEN) HVM d2v25 save: CPU
(XEN) HVM d2v26 save: CPU
(XEN) HVM d2v27 save: CPU
(XEN) HVM d2v28 save: CPU
(XEN) HVM d2v29 save: CPU
(XEN) HVM d2v30 save: CPU
(XEN) HVM d2v31 save: CPU
(XEN) HVM d2 save: PIC
(XEN) HVM d2 save: IOAPIC
(XEN) HVM d2v0 save: LAPIC
(XEN) HVM d2v1 save: LAPIC
(XEN) HVM d2v2 save: LAPIC
(XEN) HVM d2v3 save: LAPIC
(XEN) HVM d2v4 save: LAPIC
(XEN) HVM d2v5 save: LAPIC
(XEN) HVM d2v6 save: LAPIC
(XEN) HVM d2v7 save: LAPIC
(XEN) HVM d2v8 save: LAPIC
(XEN) HVM d2v9 save: LAPIC
(XEN) HVM d2v10 save: LAPIC
(XEN) HVM d2v11 save: LAPIC
(XEN) HVM d2v12 save: LAPIC
(XEN) HVM d2v13 save: LAPIC
(XEN) HVM d2v14 save: LAPIC
(XEN) HVM d2v15 save: LAPIC
(XEN) HVM d2v16 save: LAPIC
(XEN) HVM d2v17 save: LAPIC
(XEN) HVM d2v18 save: LAPIC
(XEN) HVM d2v19 save: LAPIC
(XEN) HVM d2v20 save: LAPIC
(XEN) HVM d2v21 save: LAPIC
(XEN) HVM d2v22 save: LAPIC
(XEN) HVM d2v23 save: LAPIC
(XEN) HVM d2v24 save: LAPIC
(XEN) HVM d2v25 save: LAPIC
(XEN) HVM d2v26 save: LAPIC
(XEN) HVM d2v27 save: LAPIC
(XEN) HVM d2v28 save: LAPIC
(XEN) HVM d2v29 save: LAPIC
(XEN) HVM d2v30 save: LAPIC
(XEN) HVM d2v31 save: LAPIC
(XEN) HVM d2v0 save: LAPIC_REGS
(XEN) HVM d2v1 save: LAPIC_REGS
(XEN) HVM d2v2 save: LAPIC_REGS
(XEN) HVM d2v3 save: LAPIC_REGS
(XEN) HVM d2v4 save: LAPIC_REGS
(XEN) HVM d2v5 save: LAPIC_REGS
(XEN) HVM d2v6 save: LAPIC_REGS
(XEN) HVM d2v7 save: LAPIC_REGS
(XEN) HVM d2v8 save: LAPIC_REGS
(XEN) HVM d2v9 save: LAPIC_REGS
(XEN) HVM d2v10 save: LAPIC_REGS
(XEN) HVM d2v11 save: LAPIC_REGS
(XEN) HVM d2v12 save: LAPIC_REGS
(XEN) HVM d2v13 save: LAPIC_REGS
(XEN) HVM d2v14 save: LAPIC_REGS
(XEN) HVM d2v15 save: LAPIC_REGS
(XEN) HVM d2v16 save: LAPIC_REGS
(XEN) HVM d2v17 save: LAPIC_REGS
(XEN) HVM d2v18 save: LAPIC_REGS
(XEN) HVM d2v19 save: LAPIC_REGS
(XEN) HVM d2v20 save: LAPIC_REGS
(XEN) HVM d2v21 save: LAPIC_REGS
(XEN) HVM d2v22 save: LAPIC_REGS
(XEN) HVM d2v23 save: LAPIC_REGS
(XEN) HVM d2v24 save: LAPIC_REGS
(XEN) HVM d2v25 save: LAPIC_REGS
(XEN) HVM d2v26 save: LAPIC_REGS
(XEN) HVM d2v27 save: LAPIC_REGS
(XEN) HVM d2v28 save: LAPIC_REGS
(XEN) HVM d2v29 save: LAPIC_REGS
(XEN) HVM d2v30 save: LAPIC_REGS
(XEN) HVM d2v31 save: LAPIC_REGS
(XEN) HVM d2 save: PCI_IRQ
(XEN) HVM d2 save: ISA_IRQ
(XEN) HVM d2 save: PCI_LINK
(XEN) HVM d2 save: PIT
(XEN) HVM d2 save: RTC
(XEN) HVM d2 save: HPET
(XEN) HVM d2 save: PMTIMER
(XEN) HVM d2v0 save: MTRR
(XEN) HVM d2v1 save: MTRR
(XEN) HVM d2v2 save: MTRR
(XEN) HVM d2v3 save: MTRR
(XEN) HVM d2v4 save: MTRR
(XEN) HVM d2v5 save: MTRR
(XEN) HVM d2v6 save: MTRR
(XEN) HVM d2v7 save: MTRR
(XEN) HVM d2v8 save: MTRR
(XEN) HVM d2v9 save: MTRR
(XEN) HVM d2v10 save: MTRR
(XEN) HVM d2v11 save: MTRR
(XEN) HVM d2v12 save: MTRR
(XEN) HVM d2v13 save: MTRR
(XEN) HVM d2v14 save: MTRR
(XEN) HVM d2v15 save: MTRR
(XEN) HVM d2v16 save: MTRR
(XEN) HVM d2v17 save: MTRR
(XEN) HVM d2v18 save: MTRR
(XEN) HVM d2v19 save: MTRR
(XEN) HVM d2v20 save: MTRR
(XEN) HVM d2v21 save: MTRR
(XEN) HVM d2v22 save: MTRR
(XEN) HVM d2v23 save: MTRR
(XEN) HVM d2v24 save: MTRR
(XEN) HVM d2v25 save: MTRR
(XEN) HVM d2v26 save: MTRR
(XEN) HVM d2v27 save: MTRR
(XEN) HVM d2v28 save: MTRR
(XEN) HVM d2v29 save: MTRR
(XEN) HVM d2v30 save: MTRR
(XEN) HVM d2v31 save: MTRR
(XEN) HVM d2 save: VIRIDIAN_DOMAIN
(XEN) HVM d2v0 save: CPU_XSAVE
(XEN) HVM d2v1 save: CPU_XSAVE
(XEN) HVM d2v2 save: CPU_XSAVE
(XEN) HVM d2v3 save: CPU_XSAVE
(XEN) HVM d2v4 save: CPU_XSAVE
(XEN) HVM d2v5 save: CPU_XSAVE
(XEN) HVM d2v6 save: CPU_XSAVE
(XEN) HVM d2v7 save: CPU_XSAVE
(XEN) HVM d2v8 save: CPU_XSAVE
(XEN) HVM d2v9 save: CPU_XSAVE
(XEN) HVM d2v10 save: CPU_XSAVE
(XEN) HVM d2v11 save: CPU_XSAVE
(XEN) HVM d2v12 save: CPU_XSAVE
(XEN) HVM d2v13 save: CPU_XSAVE
(XEN) HVM d2v14 save: CPU_XSAVE
(XEN) HVM d2v15 save: CPU_XSAVE
(XEN) HVM d2v16 save: CPU_XSAVE
(XEN) HVM d2v17 save: CPU_XSAVE
(XEN) HVM d2v18 save: CPU_XSAVE
(XEN) HVM d2v19 save: CPU_XSAVE
(XEN) HVM d2v20 save: CPU_XSAVE
(XEN) HVM d2v21 save: CPU_XSAVE
(XEN) HVM d2v22 save: CPU_XSAVE
(XEN) HVM d2v23 save: CPU_XSAVE
(XEN) HVM d2v24 save: CPU_XSAVE
(XEN) HVM d2v25 save: CPU_XSAVE
(XEN) HVM d2v26 save: CPU_XSAVE
(XEN) HVM d2v27 save: CPU_XSAVE
(XEN) HVM d2v28 save: CPU_XSAVE
(XEN) HVM d2v29 save: CPU_XSAVE
(XEN) HVM d2v30 save: CPU_XSAVE
(XEN) HVM d2v31 save: CPU_XSAVE
(XEN) HVM d2v0 save: VIRIDIAN_VCPU
(XEN) HVM d2v1 save: VIRIDIAN_VCPU
(XEN) HVM d2v2 save: VIRIDIAN_VCPU
(XEN) HVM d2v3 save: VIRIDIAN_VCPU
(XEN) HVM d2v4 save: VIRIDIAN_VCPU
(XEN) HVM d2v5 save: VIRIDIAN_VCPU
(XEN) HVM d2v6 save: VIRIDIAN_VCPU
(XEN) HVM d2v7 save: VIRIDIAN_VCPU
(XEN) HVM d2v8 save: VIRIDIAN_VCPU
(XEN) HVM d2v9 save: VIRIDIAN_VCPU
(XEN) HVM d2v10 save: VIRIDIAN_VCPU
(XEN) HVM d2v11 save: VIRIDIAN_VCPU
(XEN) HVM d2v12 save: VIRIDIAN_VCPU
(XEN) HVM d2v13 save: VIRIDIAN_VCPU
(XEN) HVM d2v14 save: VIRIDIAN_VCPU
(XEN) HVM d2v15 save: VIRIDIAN_VCPU
(XEN) HVM d2v16 save: VIRIDIAN_VCPU
(XEN) HVM d2v17 save: VIRIDIAN_VCPU
(XEN) HVM d2v18 save: VIRIDIAN_VCPU
(XEN) HVM d2v19 save: VIRIDIAN_VCPU
(XEN) HVM d2v20 save: VIRIDIAN_VCPU
(XEN) HVM d2v21 save: VIRIDIAN_VCPU
(XEN) HVM d2v22 save: VIRIDIAN_VCPU
(XEN) HVM d2v23 save: VIRIDIAN_VCPU
(XEN) HVM d2v24 save: VIRIDIAN_VCPU
(XEN) HVM d2v25 save: VIRIDIAN_VCPU
(XEN) HVM d2v26 save: VIRIDIAN_VCPU
(XEN) HVM d2v27 save: VIRIDIAN_VCPU
(XEN) HVM d2v28 save: VIRIDIAN_VCPU
(XEN) HVM d2v29 save: VIRIDIAN_VCPU
(XEN) HVM d2v30 save: VIRIDIAN_VCPU
(XEN) HVM d2v31 save: VIRIDIAN_VCPU
(XEN) HVM d2v0 save: VMCE_VCPU
(XEN) HVM d2v1 save: VMCE_VCPU
(XEN) HVM d2v2 save: VMCE_VCPU
(XEN) HVM d2v3 save: VMCE_VCPU
(XEN) HVM d2v4 save: VMCE_VCPU
(XEN) HVM d2v5 save: VMCE_VCPU
(XEN) HVM d2v6 save: VMCE_VCPU
(XEN) HVM d2v7 save: VMCE_VCPU
(XEN) HVM d2v8 save: VMCE_VCPU
(XEN) HVM d2v9 save: VMCE_VCPU
(XEN) HVM d2v10 save: VMCE_VCPU
(XEN) HVM d2v11 save: VMCE_VCPU
(XEN) HVM d2v12 save: VMCE_VCPU
(XEN) HVM d2v13 save: VMCE_VCPU
(XEN) HVM d2v14 save: VMCE_VCPU
(XEN) HVM d2v15 save: VMCE_VCPU
(XEN) HVM d2v16 save: VMCE_VCPU
(XEN) HVM d2v17 save: VMCE_VCPU
(XEN) HVM d2v18 save: VMCE_VCPU
(XEN) HVM d2v19 save: VMCE_VCPU
(XEN) HVM d2v20 save: VMCE_VCPU
(XEN) HVM d2v21 save: VMCE_VCPU
(XEN) HVM d2v22 save: VMCE_VCPU
(XEN) HVM d2v23 save: VMCE_VCPU
(XEN) HVM d2v24 save: VMCE_VCPU
(XEN) HVM d2v25 save: VMCE_VCPU
(XEN) HVM d2v26 save: VMCE_VCPU
(XEN) HVM d2v27 save: VMCE_VCPU
(XEN) HVM d2v28 save: VMCE_VCPU
(XEN) HVM d2v29 save: VMCE_VCPU
(XEN) HVM d2v30 save: VMCE_VCPU
(XEN) HVM d2v31 save: VMCE_VCPU
(XEN) HVM d2v0 save: TSC_ADJUST
(XEN) HVM d2v1 save: TSC_ADJUST
(XEN) HVM d2v2 save: TSC_ADJUST
(XEN) HVM d2v3 save: TSC_ADJUST
(XEN) HVM d2v4 save: TSC_ADJUST
(XEN) HVM d2v5 save: TSC_ADJUST
(XEN) HVM d2v6 save: TSC_ADJUST
(XEN) HVM d2v7 save: TSC_ADJUST
(XEN) HVM d2v8 save: TSC_ADJUST
(XEN) HVM d2v9 save: TSC_ADJUST
(XEN) HVM d2v10 save: TSC_ADJUST
(XEN) HVM d2v11 save: TSC_ADJUST
(XEN) HVM d2v12 save: TSC_ADJUST
(XEN) HVM d2v13 save: TSC_ADJUST
(XEN) HVM d2v14 save: TSC_ADJUST
(XEN) HVM d2v15 save: TSC_ADJUST
(XEN) HVM d2v16 save: TSC_ADJUST
(XEN) HVM d2v17 save: TSC_ADJUST
(XEN) HVM d2v18 save: TSC_ADJUST
(XEN) HVM d2v19 save: TSC_ADJUST
(XEN) HVM d2v20 save: TSC_ADJUST
(XEN) HVM d2v21 save: TSC_ADJUST
(XEN) HVM d2v22 save: TSC_ADJUST
(XEN) HVM d2v23 save: TSC_ADJUST
(XEN) HVM d2v24 save: TSC_ADJUST
(XEN) HVM d2v25 save: TSC_ADJUST
(XEN) HVM d2v26 save: TSC_ADJUST
(XEN) HVM d2v27 save: TSC_ADJUST
(XEN) HVM d2v28 save: TSC_ADJUST
(XEN) HVM d2v29 save: TSC_ADJUST
(XEN) HVM d2v30 save: TSC_ADJUST
(XEN) HVM d2v31 save: TSC_ADJUST
(XEN) HVM d2v0 save: CPU_MSR
(XEN) HVM d2v1 save: CPU_MSR
(XEN) HVM d2v2 save: CPU_MSR
(XEN) HVM d2v3 save: CPU_MSR
(XEN) HVM d2v4 save: CPU_MSR
(XEN) HVM d2v5 save: CPU_MSR
(XEN) HVM d2v6 save: CPU_MSR
(XEN) HVM d2v7 save: CPU_MSR
(XEN) HVM d2v8 save: CPU_MSR
(XEN) HVM d2v9 save: CPU_MSR
(XEN) HVM d2v10 save: CPU_MSR
(XEN) HVM d2v11 save: CPU_MSR
(XEN) HVM d2v12 save: CPU_MSR
(XEN) HVM d2v13 save: CPU_MSR
(XEN) HVM d2v14 save: CPU_MSR
(XEN) HVM d2v15 save: CPU_MSR
(XEN) HVM d2v16 save: CPU_MSR
(XEN) HVM d2v17 save: CPU_MSR
(XEN) HVM d2v18 save: CPU_MSR
(XEN) HVM d2v19 save: CPU_MSR
(XEN) HVM d2v20 save: CPU_MSR
(XEN) HVM d2v21 save: CPU_MSR
(XEN) HVM d2v22 save: CPU_MSR
(XEN) HVM d2v23 save: CPU_MSR
(XEN) HVM d2v24 save: CPU_MSR
(XEN) HVM d2v25 save: CPU_MSR
(XEN) HVM d2v26 save: CPU_MSR
(XEN) HVM d2v27 save: CPU_MSR
(XEN) HVM d2v28 save: CPU_MSR
(XEN) HVM d2v29 save: CPU_MSR
(XEN) HVM d2v30 save: CPU_MSR
(XEN) HVM d2v31 save: CPU_MSR
(XEN) HVM2 restore: CPU 0
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:remove: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:remove: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:remove: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:remove: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:remove: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:remove: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:remove: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:remove: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:remove: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:remove: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:remove: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:remove: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:remove: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:remove: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:remove: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:remove: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:remove: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:remove: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:remove: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:remove: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:remove: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:remove: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:remove: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:remove: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:remove: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:remove: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:remove: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:remove: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:remove: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:remove: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:remove: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:remove: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:remove: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:remove: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:remove: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:remove: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:remove: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:remove: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:remove: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:remove: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:remove: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:remove: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:remove: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:remove: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:remove: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:remove: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:remove: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:remove: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:remove: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:remove: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:remove: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:remove: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:remove: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:remove: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:remove: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:remove: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:remove: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:remove: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:remove: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:remove: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:add: dom1 gfn=f0000 mfn=383fe8000 nr=2
(XEN) memory_map:add: dom1 gfn=f0003 mfn=383fe8003 nr=7ffd
(XEN) memory_map:add: dom1 gfn=f0043 mfn=383fe8043 nr=7fbd
(XEN) memory_map:add: dom1 gfn=f0083 mfn=383fe8083 nr=7f7d
(XEN) memory_map:add: dom1 gfn=f00c3 mfn=383fe80c3 nr=7f3d
(XEN) memory_map:add: dom1 gfn=f0103 mfn=383fe8103 nr=7efd
(XEN) memory_map:add: dom1 gfn=f0143 mfn=383fe8143 nr=7ebd
(XEN) memory_map:add: dom1 gfn=f0183 mfn=383fe8183 nr=7e7d
(XEN) memory_map:add: dom1 gfn=f01c3 mfn=383fe81c3 nr=7e3d
(XEN) memory_map:add: dom1 gfn=f0800 mfn=383fe8800 nr=7800
(XEN) memory_map:remove: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:remove: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:remove: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:remove: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:remove: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:remove: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:remove: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:remove: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:remove: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:remove: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:remove: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:remove: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:remove: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:remove: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:remove: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:remove: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:remove: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:remove: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:remove: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:remove: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:remove: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:remove: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:remove: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:remove: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:remove: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:remove: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:remove: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:remove: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:remove: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:remove: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:remove: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:remove: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:remove: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:remove: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:remove: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:remove: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:remove: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:remove: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:remove: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:remove: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:remove: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:remove: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:remove: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:remove: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:remove: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:remove: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:remove: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:remove: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:remove: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:remove: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:remove: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:remove: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:remove: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:remove: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:remove: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:remove: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:remove: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:remove: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:remove: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:remove: dom2 gfn=f0800 mfn=383fd8800 nr=7800
(XEN) memory_map:add: dom2 gfn=f0000 mfn=383fd8000 nr=2
(XEN) memory_map:add: dom2 gfn=f0003 mfn=383fd8003 nr=7ffd
(XEN) memory_map:add: dom2 gfn=f0043 mfn=383fd8043 nr=7fbd
(XEN) memory_map:add: dom2 gfn=f0083 mfn=383fd8083 nr=7f7d
(XEN) memory_map:add: dom2 gfn=f00c3 mfn=383fd80c3 nr=7f3d
(XEN) memory_map:add: dom2 gfn=f0103 mfn=383fd8103 nr=7efd
(XEN) memory_map:add: dom2 gfn=f0143 mfn=383fd8143 nr=7ebd
(XEN) memory_map:add: dom2 gfn=f0183 mfn=383fd8183 nr=7e7d
(XEN) memory_map:add: dom2 gfn=f01c3 mfn=383fd81c3 nr=7e3d
(XEN) memory_map:add: dom2 gfn=f0800 mfn=383fd8800 nr=7800
Roger Pau Monne Oct. 30, 2019, 5:23 p.m. UTC | #6
On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> > Can you try to add the following debug patch on top of the existing
> > one and report the output that you get on the Xen console?
> 
> Applied the debug patch and ran the test again, but none of the debug
> output was printed; the Xen serial console log is attached. It seems
> pi_update_irte() was not called because iommu_intpost was false.

I have to admit I'm lost at this point. Does it mean the original
issue had nothing to do with posted interrupts?

Were you booting with iommu=intpost in your previous tests? Note
that posted interrupts are not enabled by default according to the
command line documentation.

Can you confirm whether you also see weird behavior without using
posted interrupts, and can you turn posted interrupts on and give the
patch a try?

Thanks, Roger.
Joe Jin Oct. 30, 2019, 6:01 p.m. UTC | #7
On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
>>> Can you try to add the following debug patch on top of the existing
>>> one and report the output that you get on the Xen console?
>>
>> Applied debug patch and run the test again, not of any log printed,
>> attached Xen log on serial console, seems pi_update_irte() not been
>> called for iommu_intpost was false.
> 
> I have to admit I'm lost at this point. Does it mean the original
> issue had nothing to do with posted interrupts?

It looks like when an IRQ is injected by vlapic_set_irq(), the check is
against hvm_funcs.deliver_posted_intr rather than iommu_intpost:

 176     if ( hvm_funcs.deliver_posted_intr )
 177         hvm_funcs.deliver_posted_intr(target, vec);

And deliver_posted_intr() is installed when VMX is enabled:

(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB

So the original issue still involved posted interrupts?

> 
> Where you booting using iommu=intpost in your previous tests? Note
> that posted interrupts is not enabled by default according to the
> command line documentation.
> 

No, from the Xen command line you can see I only had iommu=1.

> Can you confirm whether you also see weird behavior without using
> posted interrupts, and can you turn posted interrupts on and give the
> patch a try?

I disabled apicv, and it looks like posted interrupts were disabled as
well; after that I could not reproduce it anymore:

(XEN) Command line: placeholder dom0_mem=max:3456M allowsuperpage dom0_vcpus_pin=numa dom0_max_vcpus=4 crashkernel=512M@1024M iommu=verbose,debug,force,intremap,intpost hvm_debug=832 guest_loglvl=all com1=115200,8n1 console=com1 conring_size=1m console_to_ring apicv=0
...
(XEN) Initing memory sharing.
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Posted Interrupt not enabled.
(XEN) Intel VT-d Shared EPT tables enabled.
...
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN)  - VMCS shadowing
(XEN)  - VM Functions
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled

Thanks,
Joe
Jan Beulich Oct. 31, 2019, 8:01 a.m. UTC | #8
On 30.10.2019 19:01, Joe Jin wrote:
> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
>>>> Can you try to add the following debug patch on top of the existing
>>>> one and report the output that you get on the Xen console?
>>>
>>> Applied debug patch and run the test again, not of any log printed,
>>> attached Xen log on serial console, seems pi_update_irte() not been
>>> called for iommu_intpost was false.
>>
>> I have to admit I'm lost at this point. Does it mean the original
>> issue had nothing to do with posted interrupts?
> 
> Looks when inject irq by vlapic_set_irq(), it checked by
> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> 
>  176     if ( hvm_funcs.deliver_posted_intr )
>  177         hvm_funcs.deliver_posted_intr(target, vec);
> 
> And deliver_posted_intr() would be there, when vmx enabled:
> 
> (XEN) HVM: VMX enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB

I can't see the connection. start_vmx() has

    if ( cpu_has_vmx_posted_intr_processing )
    {
        alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
        if ( iommu_intpost )
            alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);

        vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
        vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
        vmx_function_table.test_pir            = vmx_test_pir;
    }

i.e. the hook is present only when posted interrupts are
available in general. I.e. also with just CPU-side posted
interrupts, yes, which gets confirmed by your "apicv=0"
test. Yet with just CPU-side posted interrupts I'm
struggling again to understand your original problem
description, and the need to fiddle with IOMMU side code.

Jan
Joe Jin Oct. 31, 2019, 2:52 p.m. UTC | #9
On 10/31/19 1:01 AM, Jan Beulich wrote:
> On 30.10.2019 19:01, Joe Jin wrote:
>> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
>>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
>>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
>>>>> Can you try to add the following debug patch on top of the existing
>>>>> one and report the output that you get on the Xen console?
>>>>
>>>> Applied debug patch and run the test again, not of any log printed,
>>>> attached Xen log on serial console, seems pi_update_irte() not been
>>>> called for iommu_intpost was false.
>>>
>>> I have to admit I'm lost at this point. Does it mean the original
>>> issue had nothing to do with posted interrupts?
>>
>> Looks when inject irq by vlapic_set_irq(), it checked by
>> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
>>
>>  176     if ( hvm_funcs.deliver_posted_intr )
>>  177         hvm_funcs.deliver_posted_intr(target, vec);
>>
>> And deliver_posted_intr() would be there, when vmx enabled:
>>
>> (XEN) HVM: VMX enabled
>> (XEN) HVM: Hardware Assisted Paging (HAP) detected
>> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> 
> I can't see the connection. start_vmx() has
> 
>     if ( cpu_has_vmx_posted_intr_processing )
>     {
>         alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
>         if ( iommu_intpost )
>             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
> 
>         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
>         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
>         vmx_function_table.test_pir            = vmx_test_pir;
>     }
> 
> i.e. the hook is present only when posted interrupts are
> available in general. I.e. also with just CPU-side posted
> interrupts, yes, which gets confirmed by your "apicv=0"
> test. Yet with just CPU-side posted interrupts I'm
> struggling again to understand your original problem
> description, and the need to fiddle with IOMMU side code.

Yes, on my test env cpu_has_vmx_posted_intr_processing == true && iommu_intpost == false;
with this, posted interrupts are enabled.

Thanks,
Joe
Jan Beulich Oct. 31, 2019, 2:56 p.m. UTC | #10
On 31.10.2019 15:52, Joe Jin wrote:
> On 10/31/19 1:01 AM, Jan Beulich wrote:
>> On 30.10.2019 19:01, Joe Jin wrote:
>>> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
>>>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
>>>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
>>>>>> Can you try to add the following debug patch on top of the existing
>>>>>> one and report the output that you get on the Xen console?
>>>>>
>>>>> Applied debug patch and run the test again, not of any log printed,
>>>>> attached Xen log on serial console, seems pi_update_irte() not been
>>>>> called for iommu_intpost was false.
>>>>
>>>> I have to admit I'm lost at this point. Does it mean the original
>>>> issue had nothing to do with posted interrupts?
>>>
>>> Looks when inject irq by vlapic_set_irq(), it checked by
>>> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
>>>
>>>  176     if ( hvm_funcs.deliver_posted_intr )
>>>  177         hvm_funcs.deliver_posted_intr(target, vec);
>>>
>>> And deliver_posted_intr() would be there, when vmx enabled:
>>>
>>> (XEN) HVM: VMX enabled
>>> (XEN) HVM: Hardware Assisted Paging (HAP) detected
>>> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
>>
>> I can't see the connection. start_vmx() has
>>
>>     if ( cpu_has_vmx_posted_intr_processing )
>>     {
>>         alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
>>         if ( iommu_intpost )
>>             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
>>
>>         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
>>         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
>>         vmx_function_table.test_pir            = vmx_test_pir;
>>     }
>>
>> i.e. the hook is present only when posted interrupts are
>> available in general. I.e. also with just CPU-side posted
>> interrupts, yes, which gets confirmed by your "apicv=0"
>> test. Yet with just CPU-side posted interrupts I'm
>> struggling again to understand your original problem
>> description, and the need to fiddle with IOMMU side code.
> 
> Yes, on my test env, cpu_has_vmx_posted_intr_processing == true && iommu_intpost == false,
> with this, posted interrupts been enabled.

And what's the theory then again for needing to modify IOMMU
code in this case?

Jan
Joe Jin Oct. 31, 2019, 3:11 p.m. UTC | #11
On 10/31/19 7:56 AM, Jan Beulich wrote:
> On 31.10.2019 15:52, Joe Jin wrote:
>> On 10/31/19 1:01 AM, Jan Beulich wrote:
>>> On 30.10.2019 19:01, Joe Jin wrote:
>>>> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
>>>>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
>>>>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
>>>>>>> Can you try to add the following debug patch on top of the existing
>>>>>>> one and report the output that you get on the Xen console?
>>>>>>
>>>>>> Applied debug patch and run the test again, not of any log printed,
>>>>>> attached Xen log on serial console, seems pi_update_irte() not been
>>>>>> called for iommu_intpost was false.
>>>>>
>>>>> I have to admit I'm lost at this point. Does it mean the original
>>>>> issue had nothing to do with posted interrupts?
>>>>
>>>> Looks when inject irq by vlapic_set_irq(), it checked by
>>>> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
>>>>
>>>>  176     if ( hvm_funcs.deliver_posted_intr )
>>>>  177         hvm_funcs.deliver_posted_intr(target, vec);
>>>>
>>>> And deliver_posted_intr() would be there, when vmx enabled:
>>>>
>>>> (XEN) HVM: VMX enabled
>>>> (XEN) HVM: Hardware Assisted Paging (HAP) detected
>>>> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
>>>
>>> I can't see the connection. start_vmx() has
>>>
>>>     if ( cpu_has_vmx_posted_intr_processing )
>>>     {
>>>         alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
>>>         if ( iommu_intpost )
>>>             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
>>>
>>>         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
>>>         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
>>>         vmx_function_table.test_pir            = vmx_test_pir;
>>>     }
>>>
>>> i.e. the hook is present only when posted interrupts are
>>> available in general. I.e. also with just CPU-side posted
>>> interrupts, yes, which gets confirmed by your "apicv=0"
>>> test. Yet with just CPU-side posted interrupts I'm
>>> struggling again to understand your original problem
>>> description, and the need to fiddle with IOMMU side code.
>>
>> Yes, on my test env, cpu_has_vmx_posted_intr_processing == true && iommu_intpost == false,
>> with this, posted interrupts been enabled.
> 
> And what's the theory then again for needing to modify IOMMU
> code in this case?

The idea is that when a device MSI-X vector is updated, we need to let
the vCPU know so as not to lose an interrupt. Not sure whether we can
do this here or via another path?

Thanks,
Joe
Roger Pau Monne Oct. 31, 2019, 3:23 p.m. UTC | #12
On Thu, Oct 31, 2019 at 07:52:23AM -0700, Joe Jin wrote:
> On 10/31/19 1:01 AM, Jan Beulich wrote:
> > On 30.10.2019 19:01, Joe Jin wrote:
> >> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> >>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> >>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> >>>>> Can you try to add the following debug patch on top of the existing
> >>>>> one and report the output that you get on the Xen console?
> >>>>
> >>>> Applied debug patch and run the test again, not of any log printed,
> >>>> attached Xen log on serial console, seems pi_update_irte() not been
> >>>> called for iommu_intpost was false.
> >>>
> >>> I have to admit I'm lost at this point. Does it mean the original
> >>> issue had nothing to do with posted interrupts?
> >>
> >> Looks when inject irq by vlapic_set_irq(), it checked by
> >> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> >>
> >>  176     if ( hvm_funcs.deliver_posted_intr )
> >>  177         hvm_funcs.deliver_posted_intr(target, vec);
> >>
> >> And deliver_posted_intr() would be there, when vmx enabled:
> >>
> >> (XEN) HVM: VMX enabled
> >> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> >> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> > 
> > I can't see the connection. start_vmx() has
> > 
> >     if ( cpu_has_vmx_posted_intr_processing )
> >     {
> >         alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
> >         if ( iommu_intpost )
> >             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
> > 
> >         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
> >         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
> >         vmx_function_table.test_pir            = vmx_test_pir;
> >     }
> > 
> > i.e. the hook is present only when posted interrupts are
> > available in general. I.e. also with just CPU-side posted
> > interrupts, yes, which gets confirmed by your "apicv=0"
> > test. Yet with just CPU-side posted interrupts I'm
> > struggling again to understand your original problem
> > description, and the need to fiddle with IOMMU side code.
> 
> Yes, on my test env, cpu_has_vmx_posted_intr_processing == true && iommu_intpost == false,
> with this, posted interrupts been enabled.

I'm still quite lost. My reading of the Intel VT-d spec is that the
posted interrupt descriptor (which contains the PIRR) is used in
conjunction with a posted interrupt remapping entry in the iommu, so
that interrupts get recorded in the PIRR and later synced by the
hypervisor into the vlapic IRR when resuming the virtual CPU.

How is the PIRR filled if there's no interrupt remapping entry
pointing to it?

I have to admit I'm not super-familiar with the implementation in Xen,
so it's likely I'm missing something.

Roger.
Tian, Kevin Nov. 2, 2019, 7:48 a.m. UTC | #13
> From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> Sent: Thursday, October 31, 2019 11:23 PM
> 
> On Thu, Oct 31, 2019 at 07:52:23AM -0700, Joe Jin wrote:
> > On 10/31/19 1:01 AM, Jan Beulich wrote:
> > > On 30.10.2019 19:01, Joe Jin wrote:
> > >> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> > >>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> > >>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> > >>>>> Can you try to add the following debug patch on top of the existing
> > >>>>> one and report the output that you get on the Xen console?
> > >>>>
> > >>>> Applied debug patch and run the test again, not of any log printed,
> > >>>> attached Xen log on serial console, seems pi_update_irte() not been
> > >>>> called for iommu_intpost was false.
> > >>>
> > >>> I have to admit I'm lost at this point. Does it mean the original
> > >>> issue had nothing to do with posted interrupts?
> > >>
> > >> Looks when inject irq by vlapic_set_irq(), it checked by
> > >> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> > >>
> > >>  176     if ( hvm_funcs.deliver_posted_intr )
> > >>  177         hvm_funcs.deliver_posted_intr(target, vec);
> > >>
> > >> And deliver_posted_intr() would be there, when vmx enabled:
> > >>
> > >> (XEN) HVM: VMX enabled
> > >> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> > >> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> > >
> > > I can't see the connection. start_vmx() has
> > >
> > >     if ( cpu_has_vmx_posted_intr_processing )
> > >     {
> > >         alloc_direct_apic_vector(&posted_intr_vector,
> pi_notification_interrupt);
> > >         if ( iommu_intpost )
> > >             alloc_direct_apic_vector(&pi_wakeup_vector,
> pi_wakeup_interrupt);
> > >
> > >         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
> > >         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
> > >         vmx_function_table.test_pir            = vmx_test_pir;
> > >     }
> > >
> > > i.e. the hook is present only when posted interrupts are
> > > available in general. I.e. also with just CPU-side posted
> > > interrupts, yes, which gets confirmed by your "apicv=0"
> > > test. Yet with just CPU-side posted interrupts I'm
> > > struggling again to understand your original problem
> > > description, and the need to fiddle with IOMMU side code.
> >
> > Yes, on my test env, cpu_has_vmx_posted_intr_processing == true &&
> iommu_intpost == false,
> > with this, posted interrupts been enabled.
> 
> I'm still quite lost. My reading of the Intel VT-d spec is that the
> posted interrupt descriptor (which contains the PIRR) is used in
> conjunction with a posted interrupt remapping entry in the iommu, so
> that interrupts get recorded in the PIRR and later synced by the
> hypervisor into the vlapic IRR when resuming the virtual CPU.

There are two parts. Intel first implemented CPU posted interrupts,
which allow one CPU to post an IPI into non-root context on another
CPU through the posted interrupt descriptor. VT-d posted interrupts
came later; they use an interrupt remapping entry and the same posted
interrupt descriptor (using more of its fields) to convert a device
interrupt into a posted interrupt. The posting process on the
destination CPU is the same regardless of whether the interrupt comes
from another CPU or from a device.
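The two-stage design described above hinges on a single shared data
structure. Below is a rough C model of the 64-byte posted interrupt
descriptor (field layout per my reading of the Intel SDM; the helper
names pi_post/pi_test are invented for illustration and are not Xen's
actual pi_desc API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Rough model of the 64-byte posted interrupt descriptor shared by
 * both mechanisms. PIR holds one pending bit per interrupt vector
 * (256 vectors); NDST/NV are only consumed by VT-d posting. */
struct pi_desc {
    uint64_t pir[4];    /* Posted Interrupt Requests, vectors 0-255 */
    uint32_t control;   /* bit 0: ON (outstanding notification),
                         * bit 1: SN (suppress notification) */
    uint32_t ndst;      /* notification destination (VT-d posting) */
    uint32_t nv;        /* notification vector (VT-d posting) */
    uint32_t rsvd[5];   /* pad to 64 bytes */
};

/* Post a vector: set its PIR bit and the ON bit. Both CPU-side IPI
 * posting and VT-d device-interrupt posting perform this step. */
static void pi_post(struct pi_desc *d, uint8_t vec)
{
    d->pir[vec / 64] |= 1ULL << (vec % 64);
    d->control |= 1u;                       /* ON */
}

static int pi_test(const struct pi_desc *d, uint8_t vec)
{
    return (d->pir[vec / 64] >> (vec % 64)) & 1;
}
```

Only the PIR and ON fields matter for CPU-side posting; VT-d posting
additionally programs NDST/NV via the interrupt remapping entry.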

> 
> How is the PIRR filled if there's no interrupt remapping entry
> pointing to it?
> 
> I have to admit I'm not super-familiar with the implementation in Xen,
> so it's likely I'm missing something.
> 
> Roger.
Roger Pau Monne Nov. 4, 2019, 9:46 a.m. UTC | #14
On Sat, Nov 02, 2019 at 07:48:06AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> > Sent: Thursday, October 31, 2019 11:23 PM
> > 
> > On Thu, Oct 31, 2019 at 07:52:23AM -0700, Joe Jin wrote:
> > > On 10/31/19 1:01 AM, Jan Beulich wrote:
> > > > On 30.10.2019 19:01, Joe Jin wrote:
> > > >> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> > > >>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> > > >>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> > > >>>>> Can you try to add the following debug patch on top of the existing
> > > >>>>> one and report the output that you get on the Xen console?
> > > >>>>
> > > >>>> Applied debug patch and run the test again, not of any log printed,
> > > >>>> attached Xen log on serial console, seems pi_update_irte() not been
> > > >>>> called for iommu_intpost was false.
> > > >>>
> > > >>> I have to admit I'm lost at this point. Does it mean the original
> > > >>> issue had nothing to do with posted interrupts?
> > > >>
> > > >> Looks when inject irq by vlapic_set_irq(), it checked by
> > > >> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> > > >>
> > > >>  176     if ( hvm_funcs.deliver_posted_intr )
> > > >>  177         hvm_funcs.deliver_posted_intr(target, vec);
> > > >>
> > > >> And deliver_posted_intr() would be there, when vmx enabled:
> > > >>
> > > >> (XEN) HVM: VMX enabled
> > > >> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> > > >> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> > > >
> > > > I can't see the connection. start_vmx() has
> > > >
> > > >     if ( cpu_has_vmx_posted_intr_processing )
> > > >     {
> > > >         alloc_direct_apic_vector(&posted_intr_vector,
> > pi_notification_interrupt);
> > > >         if ( iommu_intpost )
> > > >             alloc_direct_apic_vector(&pi_wakeup_vector,
> > pi_wakeup_interrupt);
> > > >
> > > >         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
> > > >         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
> > > >         vmx_function_table.test_pir            = vmx_test_pir;
> > > >     }
> > > >
> > > > i.e. the hook is present only when posted interrupts are
> > > > available in general. I.e. also with just CPU-side posted
> > > > interrupts, yes, which gets confirmed by your "apicv=0"
> > > > test. Yet with just CPU-side posted interrupts I'm
> > > > struggling again to understand your original problem
> > > > description, and the need to fiddle with IOMMU side code.
> > >
> > > Yes, on my test env, cpu_has_vmx_posted_intr_processing == true &&
> > iommu_intpost == false,
> > > with this, posted interrupts been enabled.
> > 
> > I'm still quite lost. My reading of the Intel VT-d spec is that the
> > posted interrupt descriptor (which contains the PIRR) is used in
> > conjunction with a posted interrupt remapping entry in the iommu, so
> > that interrupts get recorded in the PIRR and later synced by the
> > hypervisor into the vlapic IRR when resuming the virtual CPU.
> 
> there are two parts. Intel first implements CPU posted interrupt,
> which allows one CPU to post IPI into non-root context in another
> CPU through posted interrupt descriptor. Later VT-d posted 
> interrupt comes, which use interrupt remapping entry and the
> same posted interrupt descriptor (using more fields) to convert
> a device interrupt into a posted interrupt. The posting process is
> same on the dest CPU, regardless of whether it's from another CPU
> or a device. 

Thanks for the description.

So the problem reported by Jin happens when using CPU posted
interrupts but not VT-d posted interrupts, in which case there
shouldn't be a need to sync the PIRR with the IRR when interrupts
from a passthrough device are reconfigured, because interrupts from
that device shouldn't end up signaled in the PIRR, given that VT-d
posted interrupts are not being used.

Do interrupts from passthrough devices end up signaled in the posted
interrupt descriptor PIRR field when not using VT-d posted
interrupts but using CPU posted interrupts?

From my reading of your description above, when using CPU posted
interrupts the only vectors signaled in the PIRR field should belong
to IPIs from other vCPUs?

Thanks, Roger.
Tian, Kevin Nov. 8, 2019, 2:25 a.m. UTC | #15
> From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> Sent: Monday, November 4, 2019 5:47 PM
> 
> On Sat, Nov 02, 2019 at 07:48:06AM +0000, Tian, Kevin wrote:
> > > From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> > > Sent: Thursday, October 31, 2019 11:23 PM
> > >
> > > On Thu, Oct 31, 2019 at 07:52:23AM -0700, Joe Jin wrote:
> > > > On 10/31/19 1:01 AM, Jan Beulich wrote:
> > > > > On 30.10.2019 19:01, Joe Jin wrote:
> > > > >> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> > > > >>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> > > > >>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> > > > >>>>> Can you try to add the following debug patch on top of the
> existing
> > > > >>>>> one and report the output that you get on the Xen console?
> > > > >>>>
> > > > >>>> Applied debug patch and run the test again, not of any log
> printed,
> > > > >>>> attached Xen log on serial console, seems pi_update_irte() not
> been
> > > > >>>> called for iommu_intpost was false.
> > > > >>>
> > > > >>> I have to admit I'm lost at this point. Does it mean the original
> > > > >>> issue had nothing to do with posted interrupts?
> > > > >>
> > > > >> Looks when inject irq by vlapic_set_irq(), it checked by
> > > > >> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> > > > >>
> > > > >>  176     if ( hvm_funcs.deliver_posted_intr )
> > > > >>  177         hvm_funcs.deliver_posted_intr(target, vec);
> > > > >>
> > > > >> And deliver_posted_intr() would be there, when vmx enabled:
> > > > >>
> > > > >> (XEN) HVM: VMX enabled
> > > > >> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> > > > >> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> > > > >
> > > > > I can't see the connection. start_vmx() has
> > > > >
> > > > >     if ( cpu_has_vmx_posted_intr_processing )
> > > > >     {
> > > > >         alloc_direct_apic_vector(&posted_intr_vector,
> > > pi_notification_interrupt);
> > > > >         if ( iommu_intpost )
> > > > >             alloc_direct_apic_vector(&pi_wakeup_vector,
> > > pi_wakeup_interrupt);
> > > > >
> > > > >         vmx_function_table.deliver_posted_intr =
> vmx_deliver_posted_intr;
> > > > >         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
> > > > >         vmx_function_table.test_pir            = vmx_test_pir;
> > > > >     }
> > > > >
> > > > > i.e. the hook is present only when posted interrupts are
> > > > > available in general. I.e. also with just CPU-side posted
> > > > > interrupts, yes, which gets confirmed by your "apicv=0"
> > > > > test. Yet with just CPU-side posted interrupts I'm
> > > > > struggling again to understand your original problem
> > > > > description, and the need to fiddle with IOMMU side code.
> > > >
> > > > Yes, on my test env, cpu_has_vmx_posted_intr_processing == true &&
> > > iommu_intpost == false,
> > > > with this, posted interrupts been enabled.
> > >
> > > I'm still quite lost. My reading of the Intel VT-d spec is that the
> > > posted interrupt descriptor (which contains the PIRR) is used in
> > > conjunction with a posted interrupt remapping entry in the iommu, so
> > > that interrupts get recorded in the PIRR and later synced by the
> > > hypervisor into the vlapic IRR when resuming the virtual CPU.
> >
> > there are two parts. Intel first implements CPU posted interrupt,
> > which allows one CPU to post IPI into non-root context in another
> > CPU through posted interrupt descriptor. Later VT-d posted
> > interrupt comes, which use interrupt remapping entry and the
> > same posted interrupt descriptor (using more fields) to convert
> > a device interrupt into a posted interrupt. The posting process is
> > same on the dest CPU, regardless of whether it's from another CPU
> > or a device.
> 
> Thanks for the description.
> 
> So the problem reported by Jin happens when using CPU posted
> interrupts but not VT-d posted interrupts, in which case there
> shouldn't be a need to sync PIRR with IRR when interrupts from a
> passthrough device are reconfigured, because interrupts from that
> device shouldn't end up signaled in PIRR because VT-d posted
> interrupts is not being used.
> 
> Do interrupts from passthrough devices end up signaled in the posted
> interrupt descriptor PIRR field when not using VT-d posted
> interrupts but using CPU posted interrupts?

No. If VT-d posted interrupts are disabled, interrupts from passthrough
devices don't go through the posted interrupt descriptor. But after the
hypervisor serves the interrupt, when it decides to inject a virtual
interrupt into the guest, the PIRR will be updated if CPU posted
interrupts are enabled.

> 
> From my reading of your description above when using CPU posted
> interrupts only the vectors signaled in the PIRR field should belong
> to IPIs from other vCPUs?
> 

I didn't understand your question.

Thanks
Kevin
Roger Pau Monne Nov. 8, 2019, 10:20 a.m. UTC | #16
On Fri, Nov 08, 2019 at 02:25:05AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> > Sent: Monday, November 4, 2019 5:47 PM
> > 
> > On Sat, Nov 02, 2019 at 07:48:06AM +0000, Tian, Kevin wrote:
> > > > From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> > > > Sent: Thursday, October 31, 2019 11:23 PM
> > > >
> > > > On Thu, Oct 31, 2019 at 07:52:23AM -0700, Joe Jin wrote:
> > > > > On 10/31/19 1:01 AM, Jan Beulich wrote:
> > > > > > On 30.10.2019 19:01, Joe Jin wrote:
> > > > > >> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> > > > > >>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> > > > > >>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> > > > > >>>>> Can you try to add the following debug patch on top of the
> > existing
> > > > > >>>>> one and report the output that you get on the Xen console?
> > > > > >>>>
> > > > > >>>> Applied debug patch and run the test again, not of any log
> > printed,
> > > > > >>>> attached Xen log on serial console, seems pi_update_irte() not
> > been
> > > > > >>>> called for iommu_intpost was false.
> > > > > >>>
> > > > > >>> I have to admit I'm lost at this point. Does it mean the original
> > > > > >>> issue had nothing to do with posted interrupts?
> > > > > >>
> > > > > >> Looks when inject irq by vlapic_set_irq(), it checked by
> > > > > >> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> > > > > >>
> > > > > >>  176     if ( hvm_funcs.deliver_posted_intr )
> > > > > >>  177         hvm_funcs.deliver_posted_intr(target, vec);
> > > > > >>
> > > > > >> And deliver_posted_intr() would be there, when vmx enabled:
> > > > > >>
> > > > > >> (XEN) HVM: VMX enabled
> > > > > >> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> > > > > >> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> > > > > >
> > > > > > I can't see the connection. start_vmx() has
> > > > > >
> > > > > >     if ( cpu_has_vmx_posted_intr_processing )
> > > > > >     {
> > > > > >         alloc_direct_apic_vector(&posted_intr_vector,
> > > > pi_notification_interrupt);
> > > > > >         if ( iommu_intpost )
> > > > > >             alloc_direct_apic_vector(&pi_wakeup_vector,
> > > > pi_wakeup_interrupt);
> > > > > >
> > > > > >         vmx_function_table.deliver_posted_intr =
> > vmx_deliver_posted_intr;
> > > > > >         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
> > > > > >         vmx_function_table.test_pir            = vmx_test_pir;
> > > > > >     }
> > > > > >
> > > > > > i.e. the hook is present only when posted interrupts are
> > > > > > available in general. I.e. also with just CPU-side posted
> > > > > > interrupts, yes, which gets confirmed by your "apicv=0"
> > > > > > test. Yet with just CPU-side posted interrupts I'm
> > > > > > struggling again to understand your original problem
> > > > > > description, and the need to fiddle with IOMMU side code.
> > > > >
> > > > > Yes, on my test env, cpu_has_vmx_posted_intr_processing == true &&
> > > > iommu_intpost == false,
> > > > > with this, posted interrupts been enabled.
> > > >
> > > > I'm still quite lost. My reading of the Intel VT-d spec is that the
> > > > posted interrupt descriptor (which contains the PIRR) is used in
> > > > conjunction with a posted interrupt remapping entry in the iommu, so
> > > > that interrupts get recorded in the PIRR and later synced by the
> > > > hypervisor into the vlapic IRR when resuming the virtual CPU.
> > >
> > > there are two parts. Intel first implements CPU posted interrupt,
> > > which allows one CPU to post IPI into non-root context in another
> > > CPU through posted interrupt descriptor. Later VT-d posted
> > > interrupt comes, which use interrupt remapping entry and the
> > > same posted interrupt descriptor (using more fields) to convert
> > > a device interrupt into a posted interrupt. The posting process is
> > > same on the dest CPU, regardless of whether it's from another CPU
> > > or a device.
> > 
> > Thanks for the description.
> > 
> > So the problem reported by Jin happens when using CPU posted
> > interrupts but not VT-d posted interrupts, in which case there
> > shouldn't be a need to sync PIRR with IRR when interrupts from a
> > passthrough device are reconfigured, because interrupts from that
> > device shouldn't end up signaled in PIRR because VT-d posted
> > interrupts is not being used.
> > 
> > Do interrupts from passthrough devices end up signaled in the posted
> > interrupt descriptor PIRR field when not using VT-d posted
> > interrupts but using CPU posted interrupts?
> 
> No. If VT-d posted interrupt is disabled, interrupts from passthrough
> devices don't go through posted interrupt descriptor. But after hypervisor
> serves the interrupt and when it decides to inject a virtual interrupt into
> the guest, PIRR will be updated if CPU posted interrupt is enabled.

Oh, I see. vmx_deliver_posted_intr, which is called regardless of
whether VT-d posted interrupts are enabled, does set the vector
in the PIRR, so we do need to sync the PIRR with the IRR even when
only CPU posted interrupts are used.

May I ask why this is done that way? When VT-d posted interrupts are
not used wouldn't it be simpler to just set the vector in the IRR
directly instead of setting it in the PIRR and later syncing the PIRR
with IRR?

Thanks, Roger.
Tian, Kevin Nov. 15, 2019, 5:18 a.m. UTC | #17
> -----Original Message-----
> From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> Sent: Friday, November 8, 2019 6:20 PM
> To: Tian, Kevin <kevin.tian@intel.com>
> Cc: Joe Jin <joe.jin@oracle.com>; Jan Beulich <jbeulich@suse.com>;
> Andrew Cooper <andrew.cooper3@citrix.com>; xen-
> devel@lists.xenproject.org; Juergen Gross <jgross@suse.com>; Wei Liu
> <wl@xen.org>
> Subject: Re: [Xen-devel] [PATCH v2] x86/passthrough: fix migration of MSI
> when using posted interrupts
> 
> On Fri, Nov 08, 2019 at 02:25:05AM +0000, Tian, Kevin wrote:
> > > From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> > > Sent: Monday, November 4, 2019 5:47 PM
> > >
> > > On Sat, Nov 02, 2019 at 07:48:06AM +0000, Tian, Kevin wrote:
> > > > > From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> > > > > Sent: Thursday, October 31, 2019 11:23 PM
> > > > >
> > > > > On Thu, Oct 31, 2019 at 07:52:23AM -0700, Joe Jin wrote:
> > > > > > On 10/31/19 1:01 AM, Jan Beulich wrote:
> > > > > > > On 30.10.2019 19:01, Joe Jin wrote:
> > > > > > >> On 10/30/19 10:23 AM, Roger Pau Monné wrote:
> > > > > > >>> On Wed, Oct 30, 2019 at 09:38:16AM -0700, Joe Jin wrote:
> > > > > > >>>> On 10/30/19 1:24 AM, Roger Pau Monné wrote:
> > > > > > >>>>> Can you try to add the following debug patch on top of the
> > > existing
> > > > > > >>>>> one and report the output that you get on the Xen console?
> > > > > > >>>>
> > > > > > >>>> Applied debug patch and run the test again, not of any log
> > > printed,
> > > > > > >>>> attached Xen log on serial console, seems pi_update_irte()
> not
> > > been
> > > > > > >>>> called for iommu_intpost was false.
> > > > > > >>>
> > > > > > >>> I have to admit I'm lost at this point. Does it mean the original
> > > > > > >>> issue had nothing to do with posted interrupts?
> > > > > > >>
> > > > > > >> Looks when inject irq by vlapic_set_irq(), it checked by
> > > > > > >> hvm_funcs.deliver_posted_intr rather than iommu_intpost:
> > > > > > >>
> > > > > > >>  176     if ( hvm_funcs.deliver_posted_intr )
> > > > > > >>  177         hvm_funcs.deliver_posted_intr(target, vec);
> > > > > > >>
> > > > > > >> And deliver_posted_intr() would be there, when vmx enabled:
> > > > > > >>
> > > > > > >> (XEN) HVM: VMX enabled
> > > > > > >> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> > > > > > >> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> > > > > > >
> > > > > > > I can't see the connection. start_vmx() has
> > > > > > >
> > > > > > >     if ( cpu_has_vmx_posted_intr_processing )
> > > > > > >     {
> > > > > > >         alloc_direct_apic_vector(&posted_intr_vector,
> > > > > pi_notification_interrupt);
> > > > > > >         if ( iommu_intpost )
> > > > > > >             alloc_direct_apic_vector(&pi_wakeup_vector,
> > > > > pi_wakeup_interrupt);
> > > > > > >
> > > > > > >         vmx_function_table.deliver_posted_intr =
> > > vmx_deliver_posted_intr;
> > > > > > >         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
> > > > > > >         vmx_function_table.test_pir            = vmx_test_pir;
> > > > > > >     }
> > > > > > >
> > > > > > > i.e. the hook is present only when posted interrupts are
> > > > > > > available in general. I.e. also with just CPU-side posted
> > > > > > > interrupts, yes, which gets confirmed by your "apicv=0"
> > > > > > > test. Yet with just CPU-side posted interrupts I'm
> > > > > > > struggling again to understand your original problem
> > > > > > > description, and the need to fiddle with IOMMU side code.
> > > > > >
> > > > > > Yes, on my test env, cpu_has_vmx_posted_intr_processing == true
> &&
> > > > > iommu_intpost == false,
> > > > > > with this, posted interrupts been enabled.
> > > > >
> > > > > I'm still quite lost. My reading of the Intel VT-d spec is that the
> > > > > posted interrupt descriptor (which contains the PIRR) is used in
> > > > > conjunction with a posted interrupt remapping entry in the iommu,
> so
> > > > > that interrupts get recorded in the PIRR and later synced by the
> > > > > hypervisor into the vlapic IRR when resuming the virtual CPU.
> > > >
> > > > there are two parts. Intel first implements CPU posted interrupt,
> > > > which allows one CPU to post IPI into non-root context in another
> > > > CPU through posted interrupt descriptor. Later VT-d posted
> > > > interrupt comes, which use interrupt remapping entry and the
> > > > same posted interrupt descriptor (using more fields) to convert
> > > > a device interrupt into a posted interrupt. The posting process is
> > > > same on the dest CPU, regardless of whether it's from another CPU
> > > > or a device.
> > >
> > > Thanks for the description.
> > >
> > > So the problem reported by Jin happens when using CPU posted
> > > interrupts but not VT-d posted interrupts, in which case there
> > > shouldn't be a need to sync PIRR with IRR when interrupts from a
> > > passthrough device are reconfigured, because interrupts from that
> > > device shouldn't end up signaled in PIRR because VT-d posted
> > > interrupts is not being used.
> > >
> > > Do interrupts from passthrough devices end up signaled in the posted
> > > interrupt descriptor PIRR field when not using VT-d posted
> > > interrupts but using CPU posted interrupts?
> >
> > No. If VT-d posted interrupt is disabled, interrupts from passthrough
> > devices don't go through posted interrupt descriptor. But after hypervisor
> > serves the interrupt and when it decides to inject a virtual interrupt into
> > the guest, PIRR will be updated if CPU posted interrupt is enabled.
> 
> Oh, I see. vmx_deliver_posted_intr, which is called regardless of
> whether VT-d posted interrupts are enabled, does set the vector in
> the PIRR, so we do need to sync the PIRR with the IRR even when only
> CPU posted interrupts are in use.
> 
> May I ask why this is done that way? When VT-d posted interrupts are
> not used, wouldn't it be simpler to set the vector in the IRR
> directly instead of setting it in the PIRR and later syncing the PIRR
> with the IRR?
> 

Because the PIRR allows direct virtual interrupt posting when the
destination vCPU is in non-root mode (saving a physical IPI), it
benefits generic interrupt virtualization.

Thanks
Kevin
Patch

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9466258d6f..d255ad8db7 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -106,7 +106,7 @@  static void vlapic_clear_irr(int vector, struct vlapic *vlapic)
     vlapic_clear_vector(vector, &vlapic->regs->data[APIC_IRR]);
 }
 
-static void sync_pir_to_irr(struct vcpu *v)
+void vlapic_sync_pir_to_irr(struct vcpu *v)
 {
     if ( hvm_funcs.sync_pir_to_irr )
         alternative_vcall(hvm_funcs.sync_pir_to_irr, v);
@@ -114,7 +114,7 @@  static void sync_pir_to_irr(struct vcpu *v)
 
 static int vlapic_find_highest_irr(struct vlapic *vlapic)
 {
-    sync_pir_to_irr(vlapic_vcpu(vlapic));
+    vlapic_sync_pir_to_irr(vlapic_vcpu(vlapic));
 
     return vlapic_find_highest_vector(&vlapic->regs->data[APIC_IRR]);
 }
@@ -1493,7 +1493,7 @@  static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
     if ( !has_vlapic(v->domain) )
         return 0;
 
-    sync_pir_to_irr(v);
+    vlapic_sync_pir_to_irr(v);
 
     return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
 }
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index b292e79382..5bf1877726 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -341,7 +341,7 @@  int pt_irq_create_bind(
     {
         uint8_t dest, delivery_mode;
         bool dest_mode;
-        int dest_vcpu_id;
+        int dest_vcpu_id, prev_vcpu_id = -1;
         const struct vcpu *vcpu;
         uint32_t gflags = pt_irq_bind->u.msi.gflags &
                           ~XEN_DOMCTL_VMSI_X86_UNMASKED;
@@ -411,6 +411,7 @@  int pt_irq_create_bind(
 
                 pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
                 pirq_dpci->gmsi.gflags = gflags;
+                prev_vcpu_id = pirq_dpci->gmsi.dest_vcpu_id;
             }
         }
         /* Calculate dest_vcpu_id for MSI-type pirq migration. */
@@ -432,7 +433,10 @@  int pt_irq_create_bind(
                 vcpu = vector_hashing_dest(d, dest, dest_mode,
                                            pirq_dpci->gmsi.gvec);
             if ( vcpu )
+            {
                 pirq_dpci->gmsi.posted = true;
+                pirq_dpci->gmsi.dest_vcpu_id = vcpu->vcpu_id;
+            }
         }
         if ( vcpu && is_iommu_enabled(d) )
             hvm_migrate_pirq(pirq_dpci, vcpu);
@@ -440,7 +444,8 @@  int pt_irq_create_bind(
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
             pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
-                           info, pirq_dpci->gmsi.gvec);
+                           info, pirq_dpci->gmsi.gvec,
+                           prev_vcpu_id >= 0 ? d->vcpu[prev_vcpu_id] : NULL);
 
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
         {
@@ -729,7 +734,9 @@  int pt_irq_destroy_bind(
             what = "bogus";
     }
     else if ( pirq_dpci && pirq_dpci->gmsi.posted )
-        pi_update_irte(NULL, pirq, 0);
+        pi_update_irte(NULL, pirq, 0,
+                       pirq_dpci->gmsi.dest_vcpu_id >= 0
+                       ? d->vcpu[pirq_dpci->gmsi.dest_vcpu_id] : NULL);
 
     if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
          list_empty(&pirq_dpci->digl_list) )
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index bf846195c4..07c1c1627a 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -946,12 +946,13 @@  void intel_iommu_disable_eim(void)
         disable_qinval(drhd->iommu);
 }
 
+#ifdef CONFIG_HVM
 /*
  * This function is used to update the IRTE for posted-interrupt
  * when guest changes MSI/MSI-X information.
  */
 int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-    const uint8_t gvec)
+    const uint8_t gvec, struct vcpu *prev)
 {
     struct irq_desc *desc;
     struct msi_desc *msi_desc;
@@ -964,8 +965,8 @@  int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
     msi_desc = desc->msi_desc;
     if ( !msi_desc )
     {
-        rc = -ENODEV;
-        goto unlock_out;
+        spin_unlock_irq(&desc->lock);
+        return -ENODEV;
     }
     msi_desc->pi_desc = pi_desc;
     msi_desc->gvec = gvec;
@@ -974,10 +975,10 @@  int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
 
     ASSERT(pcidevs_locked());
 
-    return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
-
- unlock_out:
-    spin_unlock_irq(&desc->lock);
+    rc = msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
+    if ( !rc && prev )
+         vlapic_sync_pir_to_irr(prev);
 
     return rc;
 }
+#endif
diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
index dde66b4f0f..b0017d1dae 100644
--- a/xen/include/asm-x86/hvm/vlapic.h
+++ b/xen/include/asm-x86/hvm/vlapic.h
@@ -150,4 +150,6 @@  bool_t vlapic_match_dest(
     const struct vlapic *target, const struct vlapic *source,
     int short_hand, uint32_t dest, bool_t dest_mode);
 
+void vlapic_sync_pir_to_irr(struct vcpu *v);
+
 #endif /* __ASM_X86_HVM_VLAPIC_H__ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 85741f7c96..314dcfbe47 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -119,7 +119,7 @@  static inline void iommu_disable_x2apic(void)
 extern bool untrusted_msi;
 
 int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-                   const uint8_t gvec);
+                   const uint8_t gvec, struct vcpu *prev);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*