
[v10,5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

Message ID 1489554682-6126-6-git-send-email-chao.gao@intel.com (mailing list archive)
State New, archived

Commit Message

Chao Gao March 15, 2017, 5:11 a.m. UTC
When a vCPU is migrated to another pCPU, PT IRQs bound to this vCPU also need
to be migrated. When VT-d PI is enabled, the interrupt vector is recorded in a
main-memory-resident data structure and a notification whose destination is
decided by NDST is generated. NDST is properly adjusted during vCPU migration,
so a pirq injected directly to the guest needn't be migrated.

This patch adds an indicator, @via_pi, to show whether the pt irq is delivered
through VT-d PI.

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
v10:
- Newly added.

 xen/arch/x86/hvm/hvm.c       |  3 +++
 xen/drivers/passthrough/io.c | 15 +++++++++++----
 xen/include/xen/hvm/irq.h    |  1 +
 3 files changed, 15 insertions(+), 4 deletions(-)

Comments

Jan Beulich March 17, 2017, 10:43 a.m. UTC | #1
>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -445,6 +445,9 @@ static int hvm_migrate_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
>      struct vcpu *v = arg;
>  
>      if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
> +         (pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI) &&
> +         /* Needn't migrate pirq if this pirq is delivered to guest directly.*/
> +         (pirq_dpci->gmsi.via_pi != 1) &&

Considering the field is bool, !pirq_dpci->gmsi.via_pi please.

> @@ -441,6 +442,15 @@ int pt_irq_create_bind(
>  
>          dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
>          pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
> +
> +        pirq_dpci->gmsi.via_pi = 0;

false

> +        if ( iommu_intpost )
> +        {
> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
> +                                     pirq_dpci->gmsi.gvec);

This is now outside of the event_lock-ed region - is this safe?

Furthermore, looking at this, why do we need to call this function
at all in other than the LowestPrio case? It looks as if we could
easily use what hvm_girq_dest_2_vcpu_id() provides in the Fixed
case.

> +            if ( vcpu )
> +                pirq_dpci->gmsi.via_pi = 1;

true

> +        }
>          spin_unlock(&d->event_lock);
>          if ( dest_vcpu_id >= 0 )
>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);

(continuing from above) This could then use vcpu too.

> --- a/xen/include/xen/hvm/irq.h
> +++ b/xen/include/xen/hvm/irq.h
> @@ -63,6 +63,7 @@ struct hvm_gmsi_info {
>      uint32_t gvec;
>      uint32_t gflags;
>      int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
> +    bool via_pi; /* directly deliver to guest via VT-d PI? */

Wouldn't the field name perhaps better be (or include) "posted"?

Jan
Chao Gao March 20, 2017, 1:59 a.m. UTC | #2
On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
>> @@ -441,6 +442,15 @@ int pt_irq_create_bind(
>>  
>>          dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
>>          pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
>> +
>> +        pirq_dpci->gmsi.via_pi = 0;
>
>false
>
>> +        if ( iommu_intpost )
>> +        {
>> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>> +                                     pirq_dpci->gmsi.gvec);
>
>This is now outside of the event_lock-ed region - is this safe?

do you mean it is __inside__ the event_lock-ed region? I think it is safe
for the functions called by pi_find_dest_vcpu() are almost same with
hvm_girq_dest_2_vcpu_id.

>
>Furthermore, looking at this, why do we need to call this function
>at all in other than the LowestPrio case? It looks as if we could
>easily use what hvm_girq_dest_2_vcpu_id() provides in the Fixed
>case.

Agreed. We can delete pi_find_dest_vcpu(), and for the LowestPrio case
when VT-d PI is enabled, call vector_hashing_dest() directly.

>
>> +            if ( vcpu )
>> +                pirq_dpci->gmsi.via_pi = 1;
>
>true
>
>> +        }
>>          spin_unlock(&d->event_lock);
>>          if ( dest_vcpu_id >= 0 )
>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>
>(continuing from above) This could then use vcpu too.

I don't understand. In this patch, vcpu is always null when VT-d PI is not
enabled. Do you mean something like below: 

if ( dest_vcpu_id >= 0 )
    vcpu = d->vcpu[dest_vcpu_id];
if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
{
    vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
...
}
spin_unlock(&d->event_lock);
if ( vcpu )
    hvm_migrate_pirqs(vcpu);

Thanks,
Chao

>
>> --- a/xen/include/xen/hvm/irq.h
>> +++ b/xen/include/xen/hvm/irq.h
>> @@ -63,6 +63,7 @@ struct hvm_gmsi_info {
>>      uint32_t gvec;
>>      uint32_t gflags;
>>      int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
>> +    bool via_pi; /* directly deliver to guest via VT-d PI? */
>
>Wouldn't the field name perhaps better be (or include) "posted"?
>
>Jan
>
Chao Gao March 20, 2017, 2:38 a.m. UTC | #3
On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>>>> On 20.03.17 at 02:59, <chao.gao@intel.com> wrote:
>> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>>>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
>>>> +        if ( iommu_intpost )
>>>> +        {
>>>> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>>> +                                     pirq_dpci->gmsi.gvec);
>>>
>>>This is now outside of the event_lock-ed region - is this safe?
>> 
>> do you mean it is __inside__ the event_lock-ed region?
>
>Oops, indeed.
>
>> I think it is safe
>> for the functions called by pi_find_dest_vcpu() are almost same with
>> hvm_girq_dest_2_vcpu_id.
>
>The question then needs to be put differently: Is this needed?
>You shouldn't move into a locked region what doesn't need to
>be there.

Yes. For we rely on the result to set @via_pi which needs to be 
protected by the lock.

>
>>>> +        }
>>>>          spin_unlock(&d->event_lock);
>>>>          if ( dest_vcpu_id >= 0 )
>>>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>>
>>>(continuing from above) This could then use vcpu too.
>> 
>> I don't understand. In this patch, vcpu is always null when VT-d PI is not
>> enabled. Do you mean something like below: 
>> 
>> if ( dest_vcpu_id >= 0 )
>>     vcpu = d->vcpu[dest_vcpu_id];
>> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
>> {
>>     vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
>> ...
>> }
>> spin_unlock(&d->event_lock);
>> if ( vcpu )
>>     hvm_migrate_pirqs(vcpu);
>
>Yes, along these lines, albeit I think the first if() is more complicated
>than it needs to be.

We can make it simple like this:

const struct vcpu *vcpu;
...

vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;

Thanks
Chao

>
>Jan
>
Chao Gao March 20, 2017, 5:22 a.m. UTC | #4
On Mon, Mar 20, 2017 at 04:26:10AM -0600, Jan Beulich wrote:
>>>> On 20.03.17 at 03:38, <chao.gao@intel.com> wrote:
>> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>>>>>> On 20.03.17 at 02:59, <chao.gao@intel.com> wrote:
>>>> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>>>>>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
>>>>>> +        if ( iommu_intpost )
>>>>>> +        {
>>>>>> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>>>>> +                                     pirq_dpci->gmsi.gvec);
>>>>>
>>>>>This is now outside of the event_lock-ed region - is this safe?
>>>> 
>>>> do you mean it is __inside__ the event_lock-ed region?
>>>
>>>Oops, indeed.
>>>
>>>> I think it is safe
>>>> for the functions called by pi_find_dest_vcpu() are almost same with
>>>> hvm_girq_dest_2_vcpu_id.
>>>
>>>The question then needs to be put differently: Is this needed?
>>>You shouldn't move into a locked region what doesn't need to
>>>be there.
>> 
>> Yes. For we rely on the result to set @via_pi which needs to be 
>> protected by the lock.
>
>I don't follow: You set via_pi for hvm_migrate_pirqs() to consume,
>yet the call to that function sits ...
>
>>>>>> +        }
>>>>>>          spin_unlock(&d->event_lock);
>>>>>>          if ( dest_vcpu_id >= 0 )
>>>>>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>
>... right after the lock release.

@via_pi is also consumed during vCPU migration.
I just think the event_lock protects R/W operations on struct hvm_pirq_dpci.
To prevent a potential race (VT-d PI can't be used at the first binding but can
at the second, yet somehow the first update to via_pi overrides the second one),
and to avoid complicating which fields the event_lock protects, I'm inclined to
keep the code in the event_lock-ed region as long as that doesn't introduce
other issues.

>
>>>>>(continuing from above) This could then use vcpu too.
>>>> 
>>>> I don't understand. In this patch, vcpu is always null when VT-d PI is not
>>>> enabled. Do you mean something like below: 
>>>> 
>>>> if ( dest_vcpu_id >= 0 )
>>>>     vcpu = d->vcpu[dest_vcpu_id];
>>>> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
>>>> {
>>>>     vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
>>>> ...
>>>> }
>>>> spin_unlock(&d->event_lock);
>>>> if ( vcpu )
>>>>     hvm_migrate_pirqs(vcpu);
>>>
>>>Yes, along these lines, albeit I think the first if() is more complicated
>>>than it needs to be.
>> 
>> We can make it simple like this:
>> 
>> const struct vcpu *vcpu;
>> ...
>> 
>> vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
>
>Ouch - there were three if()s, and I missed the first one, i.e. I
>really meant the middle of them.

Yes, the code may be wrong rather than merely complicated. The code above changes
the way we choose the destination vCPU when VT-d PI is enabled. Even if we can
get a single destination vCPU for LowestPrio, we should use vector_hashing_dest()
to calculate the destination vCPU, as in the original logic.

I think the code below is better:
if ( iommu_intpost && (delivery_mode == dest_LowestPrio) )

>
>Jan
>
Chao Gao March 20, 2017, 6:11 a.m. UTC | #5
On Mon, Mar 20, 2017 at 06:50:37AM -0600, Jan Beulich wrote:
>>>> On 20.03.17 at 06:22, <chao.gao@intel.com> wrote:
>> On Mon, Mar 20, 2017 at 04:26:10AM -0600, Jan Beulich wrote:
>>>>>> On 20.03.17 at 03:38, <chao.gao@intel.com> wrote:
>>>> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>>>>>>>>          spin_unlock(&d->event_lock);
>>>>>>>>          if ( dest_vcpu_id >= 0 )
>>>>>>>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>>
>>>... right after the lock release.
>> 
>> @via_pi is also consumed during vCPU migration.
>
>But the event lock isn't being held around the checking of the
>field, so putting the setting of the field under lock is of no use.
>
>> I just think the event_lock protects R/W operations on struct hvm_pirq_dpci.
>> To prevent a potential race (VT-d PI can't be used at the first binding but can
>> at the second, yet somehow the first update to via_pi overrides the second one),
>> and to avoid complicating which fields the event_lock protects, I'm inclined to
>> keep the code in the event_lock-ed region as long as that doesn't introduce
>> other issues.
>
>I certainly don't object to properly synchronizing things here,
>but then both producing and consuming side need to hold
>respective locks. Otherwise the best you can hope for is to
>reduce timing windows; you won't eliminate them though.

You are right. I am convinced.
Thanks
Chao

>
>Jan
>
Jan Beulich March 20, 2017, 9:18 a.m. UTC | #6
>>> On 20.03.17 at 02:59, <chao.gao@intel.com> wrote:
> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
>>> +        if ( iommu_intpost )
>>> +        {
>>> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>> +                                     pirq_dpci->gmsi.gvec);
>>
>>This is now outside of the event_lock-ed region - is this safe?
> 
> do you mean it is __inside__ the event_lock-ed region?

Oops, indeed.

> I think it is safe
> for the functions called by pi_find_dest_vcpu() are almost same with
> hvm_girq_dest_2_vcpu_id.

The question then needs to be put differently: Is this needed?
You shouldn't move into a locked region what doesn't need to
be there.

>>> +        }
>>>          spin_unlock(&d->event_lock);
>>>          if ( dest_vcpu_id >= 0 )
>>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>
>>(continuing from above) This could then use vcpu too.
> 
> I don't understand. In this patch, vcpu is always null when VT-d PI is not
> enabled. Do you mean something like below: 
> 
> if ( dest_vcpu_id >= 0 )
>     vcpu = d->vcpu[dest_vcpu_id];
> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
> {
>     vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
> ...
> }
> spin_unlock(&d->event_lock);
> if ( vcpu )
>     hvm_migrate_pirqs(vcpu);

Yes, along these lines, albeit I think the first if() is more complicated
than it needs to be.

Jan
Jan Beulich March 20, 2017, 10:26 a.m. UTC | #7
>>> On 20.03.17 at 03:38, <chao.gao@intel.com> wrote:
> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>>>>> On 20.03.17 at 02:59, <chao.gao@intel.com> wrote:
>>> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>>>>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
>>>>> +        if ( iommu_intpost )
>>>>> +        {
>>>>> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>>>> +                                     pirq_dpci->gmsi.gvec);
>>>>
>>>>This is now outside of the event_lock-ed region - is this safe?
>>> 
>>> do you mean it is __inside__ the event_lock-ed region?
>>
>>Oops, indeed.
>>
>>> I think it is safe
>>> for the functions called by pi_find_dest_vcpu() are almost same with
>>> hvm_girq_dest_2_vcpu_id.
>>
>>The question then needs to be put differently: Is this needed?
>>You shouldn't move into a locked region what doesn't need to
>>be there.
> 
> Yes. For we rely on the result to set @via_pi which needs to be 
> protected by the lock.

I don't follow: You set via_pi for hvm_migrate_pirqs() to consume,
yet the call to that function sits ...

>>>>> +        }
>>>>>          spin_unlock(&d->event_lock);
>>>>>          if ( dest_vcpu_id >= 0 )
>>>>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);

... right after the lock release.

>>>>(continuing from above) This could then use vcpu too.
>>> 
>>> I don't understand. In this patch, vcpu is always null when VT-d PI is not
>>> enabled. Do you mean something like below: 
>>> 
>>> if ( dest_vcpu_id >= 0 )
>>>     vcpu = d->vcpu[dest_vcpu_id];
>>> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
>>> {
>>>     vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
>>> ...
>>> }
>>> spin_unlock(&d->event_lock);
>>> if ( vcpu )
>>>     hvm_migrate_pirqs(vcpu);
>>
>>Yes, along these lines, albeit I think the first if() is more complicated
>>than it needs to be.
> 
> We can make it simple like this:
> 
> const struct vcpu *vcpu;
> ...
> 
> vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;

Ouch - there were three if()s, and I missed the first one, i.e. I
really meant the middle of them.

Jan
Jan Beulich March 20, 2017, 12:50 p.m. UTC | #8
>>> On 20.03.17 at 06:22, <chao.gao@intel.com> wrote:
> On Mon, Mar 20, 2017 at 04:26:10AM -0600, Jan Beulich wrote:
>>>>> On 20.03.17 at 03:38, <chao.gao@intel.com> wrote:
>>> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>>>>>>> On 20.03.17 at 02:59, <chao.gao@intel.com> wrote:
>>>>> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>>>>>>>> On 15.03.17 at 06:11, <chao.gao@intel.com> wrote:
>>>>>>> +        if ( iommu_intpost )
>>>>>>> +        {
>>>>>>> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>>>>>> +                                     pirq_dpci->gmsi.gvec);
>>>>>>
>>>>>>This is now outside of the event_lock-ed region - is this safe?
>>>>> 
>>>>> do you mean it is __inside__ the event_lock-ed region?
>>>>
>>>>Oops, indeed.
>>>>
>>>>> I think it is safe
>>>>> for the functions called by pi_find_dest_vcpu() are almost same with
>>>>> hvm_girq_dest_2_vcpu_id.
>>>>
>>>>The question then needs to be put differently: Is this needed?
>>>>You shouldn't move into a locked region what doesn't need to
>>>>be there.
>>> 
>>> Yes. For we rely on the result to set @via_pi which needs to be 
>>> protected by the lock.
>>
>>I don't follow: You set via_pi for hvm_migrate_pirqs() to consume,
>>yet the call to that function sits ...
>>
>>>>>>> +        }
>>>>>>>          spin_unlock(&d->event_lock);
>>>>>>>          if ( dest_vcpu_id >= 0 )
>>>>>>>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>
>>... right after the lock release.
> 
> @via_pi is also consumed during vCPU migration.

But the event lock isn't being held around the checking of the
field, so putting the setting of the field under lock is of no use.

> I just think the event_lock protects R/W operations on struct hvm_pirq_dpci.
> To prevent a potential race (VT-d PI can't be used at the first binding but can
> at the second, yet somehow the first update to via_pi overrides the second one),
> and to avoid complicating which fields the event_lock protects, I'm inclined to
> keep the code in the event_lock-ed region as long as that doesn't introduce
> other issues.

I certainly don't object to properly synchronizing things here,
but then both producing and consuming side need to hold
respective locks. Otherwise the best you can hope for is to
reduce timing windows; you won't eliminate them though.

Jan

Patch

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ccfae4f..abbdab9 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -445,6 +445,9 @@  static int hvm_migrate_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
     struct vcpu *v = arg;
 
     if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
+         (pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI) &&
+         /* Needn't migrate pirq if this pirq is delivered to guest directly.*/
+         (pirq_dpci->gmsi.via_pi != 1) &&
          (pirq_dpci->gmsi.dest_vcpu_id == v->vcpu_id) )
     {
         struct irq_desc *desc =
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 080183e..37d9af6 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -365,6 +365,7 @@  int pt_irq_create_bind(
     {
         uint8_t dest, dest_mode, delivery_mode;
         int dest_vcpu_id;
+        const struct vcpu *vcpu = NULL;
 
         if ( !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         {
@@ -441,6 +442,15 @@  int pt_irq_create_bind(
 
         dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
         pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
+
+        pirq_dpci->gmsi.via_pi = 0;
+        if ( iommu_intpost )
+        {
+            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
+                                     pirq_dpci->gmsi.gvec);
+            if ( vcpu )
+                pirq_dpci->gmsi.via_pi = 1;
+        }
         spin_unlock(&d->event_lock);
         if ( dest_vcpu_id >= 0 )
             hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
@@ -448,11 +458,8 @@  int pt_irq_create_bind(
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
         {
-            const struct vcpu *vcpu = pi_find_dest_vcpu(d, dest, dest_mode,
-                                          delivery_mode, pirq_dpci->gmsi.gvec);
-
             if ( vcpu )
-                pi_update_irte( vcpu, info, pirq_dpci->gmsi.gvec );
+                pi_update_irte(vcpu, info, pirq_dpci->gmsi.gvec);
             else
                 dprintk(XENLOG_G_INFO,
                         "%pv: deliver interrupt in remapping mode,gvec:%02x\n",
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index d3f8623..ba74a31 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -63,6 +63,7 @@  struct hvm_gmsi_info {
     uint32_t gvec;
     uint32_t gflags;
     int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
+    bool via_pi; /* directly deliver to guest via VT-d PI? */
 };
 
 struct hvm_girq_dpci_mapping {