
[v4,08/16] KVM: kvm-vfio: User API for IRQ forwarding

Message ID 1434657848.3700.83.camel@redhat.com (mailing list archive)
State New, archived

Commit Message

Alex Williamson June 18, 2015, 8:04 p.m. UTC
[Adding Joerg since he was part of this original idea]

On Thu, 2015-06-18 at 09:16 +0000, Wu, Feng wrote:
> 
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Tuesday, June 16, 2015 12:45 AM
> > To: Eric Auger
> > Cc: Avi Kivity; Wu, Feng; kvm@vger.kernel.org; linux-kernel@vger.kernel.org;
> > pbonzini@redhat.com; mtosatti@redhat.com
> > Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> > 
> > On Mon, 2015-06-15 at 18:17 +0200, Eric Auger wrote:
> > > Hi Alex, all,
> > > On 06/12/2015 09:03 PM, Alex Williamson wrote:
> > > > On Fri, 2015-06-12 at 21:48 +0300, Avi Kivity wrote:
> > > >> On 06/12/2015 06:41 PM, Alex Williamson wrote:
> > > >>> On Fri, 2015-06-12 at 00:23 +0000, Wu, Feng wrote:
> > > >>>>> -----Original Message-----
> > > >>>>> From: Avi Kivity [mailto:avi.kivity@gmail.com]
> > > >>>>> Sent: Friday, June 12, 2015 3:59 AM
> > > >>>>> To: Wu, Feng; kvm@vger.kernel.org; linux-kernel@vger.kernel.org
> > > >>>>> Cc: pbonzini@redhat.com; mtosatti@redhat.com;
> > > >>>>> alex.williamson@redhat.com; eric.auger@linaro.org
> > > >>>>> Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> > > >>>>>
> > > >>>>> On 06/11/2015 01:51 PM, Feng Wu wrote:
> > > >>>>>> From: Eric Auger <eric.auger@linaro.org>
> > > >>>>>>
> > > >>>>>> This patch adds and documents a new KVM_DEV_VFIO_DEVICE
> > group
> > > >>>>>> and 2 device attributes: KVM_DEV_VFIO_DEVICE_FORWARD_IRQ,
> > > >>>>>> KVM_DEV_VFIO_DEVICE_UNFORWARD_IRQ. The purpose is to be
> > able
> > > >>>>>> to set a VFIO device IRQ as forwarded or not forwarded.
> > > >>>>>> the command takes as argument a handle to a new struct named
> > > >>>>>> kvm_vfio_dev_irq.
> > > >>>>> Is there no way to do this automatically?  After all, vfio knows that a
> > > >>>>> device interrupt is forwarded to some eventfd, and kvm knows that
> > some
> > > >>>>> eventfd is forwarded to a guest interrupt.  If they compare notes
> > > >>>>> through a central registry, they can figure out that the interrupt needs
> > > >>>>> to be forwarded.
> > > >>>> Oh, just like Eric mentioned in his reply, this description is out of context
> > of
> > > >>>> this series, I will remove them in the next version.
> > > >>>
> > > >>> I suspect Avi's question was more general.  While forward/unforward is
> > > >>> out of context for this series, it's very similar in nature to
> > > >>> enabling/disabling posted interrupts.  So I think the question remains
> > > >>> whether we really need userspace to participate in creating this
> > > >>> shortcut or if kvm and vfio can some how orchestrate figuring it out
> > > >>> automatically.
> > > >>>
> > > >>> Personally I don't know how we could do it automatically.  We've always
> > > >>> relied on userspace to independently setup vfio and kvm such that
> > > >>> neither have any idea that the other is there and update each side
> > > >>> independently when anything changes.  So it seems consistent to
> > continue
> > > >>> that here.  It doesn't seem like there's much to gain performance-wise
> > > >>> either, updates should be a relatively rare event I'd expect.
> > > >>>
> > > >>> There's really no metadata associated with an eventfd, so "comparing
> > > >>> notes" automatically might imply some central registration entity.  That
> > > >>> immediately sounds like a much more complex solution, but maybe Avi
> > has
> > > >>> some ideas to manage it.  Thanks,
> > > >>>
> > > >>
> > > >> The idea is to have a central registry maintained by a posted interrupts
> > > >> manager.  Both vfio and kvm pass the filp (along with extra information)
> > > >> to the posted interrupts manager, which, when it detects a filp match,
> > > >> tells each of them what to do.
> > > >>
> > > >> The advantages are:
> > > >> - old userspace gains the optimization without change
> > > >> - a userspace API is more expensive to maintain than internal kernel
> > > >> interfaces (CVEs, documentation, maintaining backwards compatibility)
> > > >> - if you can do it without a new interface, this indicates that all the
> > > >> information in the new interface is redundant.  That means you have to
> > > >> check it for consistency with the existing information, so it's extra
> > > >> work (likely, it's exactly what the posted interrupt manager would be
> > > >> doing anyway).
> > > >
> > > > Yep, those all sound like good things and I believe that's similar in
> > > > design to the way we had originally discussed this interaction at
> > > > LPC/KVM Forum several years ago.  I'd be in favor of that approach.
> > >
> > > I guess this discussion also is relevant wrt "[RFC v6 00/16] KVM-VFIO
> > > IRQ forward control" series? Or is that "central registry maintained by
> > > a posted interrupts manager" something more specific to x86?
> > 
> > I'd think we'd want it for any sort of offload and supporting both
> > posted-interrupts and irq-forwarding would be a good validation.  I
> > imagine there would be registration/de-registration callbacks separate
> > for interrupt producers vs interrupt consumers.  Each registration
> > function would likely provide a struct of callbacks, probably similar to
> > the get_symbol callbacks proposed for the kvm-vfio device on the IRQ
> > producer side.  The eventfd would be the token that the manager would
> > use to match producers and consumers.  The hard part is probably
> > figuring out what information to retrieve from the producer and provide
> > to the consumer in a generic way between pci and platform, but as an
> > internal interface, it's not a big deal if we screw it up a few times to
> > start.  Thanks,
> 
> On posted-interrupts side, the main purpose of the new APIs is to update
> the IRTE when guest changes vMSI/vMSIx configuration. Alex, do you have
> any detailed ideas for the new solution to achieve this purpose? It should
> be helpful if you can share some!


There are plenty of details to be filled in, but I think the basics
look something like the code below.  The IRQ bypass manager just
defines a pair of structures, one for interrupt producers and one for
interrupt consumers.  I'm certain that we'll need more callbacks than
I've defined below, but figuring out what those should be for the best
abstraction is the hardest part of this idea.  The manager provides both
registration and de-registration interfaces for both types of objects
and keeps lists for each, protected by a lock.  The manager doesn't even
really need to know what the match token is, but I assume for our
purposes it will be an eventfd_ctx.

On the vfio side, the producer struct would be embedded in the
vfio_pci_irq_ctx struct.  KVM would probably embed the consumer struct
in _irqfd.  As I've coded below, the IRQ bypass manager calls the
consumer callbacks, so the producer struct would need fields or
callbacks to provide the consumer the info it needs.  As I understand
the Posted Interrupt model, VFIO only needs to provide data to the
consumer.  For IRQ Forwarding, I think the producer needs to be informed
when bypass is active in order to model the incoming interrupt as edge
vs. level.

I've prototyped the base IRQ bypass manager here as static, but I don't
see any reason it couldn't be a module that's loaded by dependency when
either vfio-pci or kvm-intel is loaded (or other producer/consumer
objects).
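The prototype itself is not reproduced in this archive, but a simplified
sketch of the structures and registration flow described above might look
like the following (hypothetical names and fields based only on the
description; the real manager would use kernel lists and a lock rather
than a bare singly linked list):

```c
#include <stddef.h>

/* Hypothetical sketch of the IRQ bypass manager described above:
 * producers (e.g. vfio-pci) and consumers (e.g. KVM irqfds) register
 * with an opaque token -- in practice an eventfd_ctx pointer -- and
 * the manager connects a producer/consumer pair when tokens match.
 * Real code would use struct list_head and a mutex; a singly linked
 * list with no locking keeps the sketch short. */

struct irq_bypass_producer {
	struct irq_bypass_producer *next;
	void *token;			/* match key (eventfd_ctx) */
	void (*stop)(struct irq_bypass_producer *);
	void (*resume)(struct irq_bypass_producer *);
};

struct irq_bypass_consumer {
	struct irq_bypass_consumer *next;
	void *token;			/* match key (eventfd_ctx) */
	int (*add_producer)(struct irq_bypass_consumer *,
			    struct irq_bypass_producer *);
	void (*del_producer)(struct irq_bypass_consumer *,
			     struct irq_bypass_producer *);
};

static struct irq_bypass_producer *producers;
static struct irq_bypass_consumer *consumers;

/* Register a producer; connect it to a consumer with the same token. */
static int irq_bypass_register_producer(struct irq_bypass_producer *prod)
{
	struct irq_bypass_consumer *cons;

	prod->next = producers;
	producers = prod;

	for (cons = consumers; cons; cons = cons->next)
		if (cons->token == prod->token)
			return cons->add_producer(cons, prod);
	return 0;
}

/* Register a consumer; connect it to a producer with the same token. */
static int irq_bypass_register_consumer(struct irq_bypass_consumer *cons)
{
	struct irq_bypass_producer *prod;

	cons->next = consumers;
	consumers = cons;

	for (prod = producers; prod; prod = prod->next)
		if (prod->token == cons->token)
			return cons->add_producer(cons, prod);
	return 0;
}
```

De-registration would walk the lists the other way and invoke
del_producer; the point is only that neither side needs to know what the
token is, just that the pointers compare equal.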

Is this a reasonable starting point to craft the additional fields and
callbacks and interaction of who calls who that we need to support
Posted Interrupts and IRQ Forwarding?  Is the AMD version of this still
alive?  Thanks,

Alex




--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Joerg Roedel June 24, 2015, 3:46 p.m. UTC | #1
On Thu, Jun 18, 2015 at 02:04:08PM -0600, Alex Williamson wrote:
> There are plenty of details to be filled in,

I also need to fill plenty of details in my head first, so here are some
suggestions based on my current understanding. Please don't hesitate to
correct me where I got something wrong.

So first I totally agree that the handling of PI/non-PI configurations
should be transparent to user-space.

I read a bit through the VT-d spec, and my understanding of posted
interrupts so far is that:

	1) Each VCPU gets a PI-Descriptor with its pending Posted
	   Interrupts. This descriptor needs to be updated when a VCPU
	   is migrated to another PCPU and should thus be under control
	   of KVM.

	   This is similar to the vAPIC backing page in the AMD version
	   of this, except that the PCPU routing information is stored
	   somewhere else on AMD.

	2) As long as the VCPU runs the IRTEs are configured for
	   posting, when the VCPU goes to sleep the old remapped entry is
	   established again. So when the VCPU sleeps the interrupt
	   would get routed to VFIO and forwarded through the eventfd.

	   This would be different to the AMD version, where we have a
	   running bit. When this is clear the IOMMU will trigger an event
	   in its event-log. This might need special handling in VFIO
	   ('might' because VFIO does not need to forward the interrupt,
	    it just needs to make sure the VCPU wakes up).

	   Please correct me if my understanding of the Intel version is
	   wrong.

So most of the data structures the IOMMU reads for this need to be
updated from KVM code (either x86-generic or AMD/Intel specific code),
as KVM has the information about VCPU load/unload and the IRQ routing.

What KVM needs from VFIO is the information about the physical
interrupts, and it makes total sense to attach it as metadata to the
eventfd.

But the problems start with what this metadata should look like. It would
be good to have some generic description, but not sure if this is
possible. Otherwise this metadata would need to be requested by VFIO
from the IOMMU driver and passed on to KVM, which it then passes back to
the IOMMU driver. Or something like that.



	Joerg

Wu, Feng June 25, 2015, 1:54 a.m. UTC | #2
> -----Original Message-----
> From: Joerg Roedel [mailto:joro@8bytes.org]
> Sent: Wednesday, June 24, 2015 11:46 PM
> To: Alex Williamson
> Cc: Wu, Feng; Eric Auger; Avi Kivity; kvm@vger.kernel.org;
> linux-kernel@vger.kernel.org; pbonzini@redhat.com; mtosatti@redhat.com
> Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> 
> On Thu, Jun 18, 2015 at 02:04:08PM -0600, Alex Williamson wrote:
> > There are plenty of details to be filled in,
> 
> I also need to fill plenty of details in my head first, so here are some
> suggestions based on my current understanding. Please don't hesitate to
> correct me if where I got something wrong.
> 
> So first I totally agree that the handling of PI/non-PI configurations
> should be transparent to user-space.
> 
> I read a bit through the VT-d spec, and my understanding of posted
> interrupts so far is that:
> 
> 	1) Each VCPU gets a PI-Descriptor with its pending Posted
> 	   Interrupts. This descriptor needs to be updated when a VCPU
> 	   is migrated to another PCPU and should thus be under control
> 	   of KVM.
> 
> 	   This is similar to the vAPIC backing page in the AMD version
> 	   of this, except that the PCPU routing information is stored
> 	   somewhere else on AMD.
> 
> 	2) As long as the VCPU runs the IRTEs are configured for
> 	   posting, when the VCPU goes to sleep the old remapped entry is
> 	   established again. So when the VCPU sleeps the interrupt
> 	   would get routed to VFIO and forwarded through the eventfd.

When the vCPU sleeps, say, blocked because the guest is running HLT, the
interrupt is still in posted mode. The solution is that when the vCPU is
blocked, we use another notification vector (named the wakeup
notification vector) to wake up the blocked vCPU when an interrupt
happens. In the wakeup event handler, we unblock the vCPU.
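A minimal sketch of that scheme (hypothetical types and vector numbers;
the real logic lives in KVM's VMX block/unblock paths and rewrites the
PI descriptor atomically):

```c
/* Sketch of the blocking scheme described above: while a vCPU runs,
 * its posted-interrupt descriptor carries the normal notification
 * vector; when the vCPU blocks (e.g. guest HLT), KVM switches the
 * descriptor to a wakeup vector whose handler unblocks the vCPU.
 * Vector numbers and field names here are illustrative only. */

#define POSTED_INTR_VECTOR		0xf2	/* assumed value */
#define POSTED_INTR_WAKEUP_VECTOR	0xf1	/* assumed value */

struct pi_desc {
	unsigned int nv;	/* notification vector the IOMMU signals */
};

struct vcpu {
	struct pi_desc pi;
	int blocked;
};

/* Called when the vCPU is about to block: redirect notifications so a
 * posted interrupt raises the wakeup vector instead. */
static void vcpu_pre_block(struct vcpu *v)
{
	v->blocked = 1;
	v->pi.nv = POSTED_INTR_WAKEUP_VECTOR;
}

/* Handler for the wakeup vector: unblock and restore normal posting. */
static void pi_wakeup_handler(struct vcpu *v)
{
	v->blocked = 0;
	v->pi.nv = POSTED_INTR_VECTOR;
}
```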

Thanks,
Feng

> 
> 	   This would be different to the AMD version, where we have a
> 	   running bit. When this is clear the IOMMU will trigger an event
> 	   in its event-log. This might need special handling in VFIO
> 	   ('might' because VFIO does not need to forward the interrupt,
> 	    it just needs to make sure the VCPU wakes up).
> 
> 	   Please correct me if my understanding of the Intel version is
> 	   wrong.
> 
> So most of the data structures the IOMMU reads for this need to be
> updated from KVM code (either x86-generic or AMD/Intel specific code),
> as KVM has the information about VCPU load/unload and the IRQ routing.

Yes, this part has nothing to do with VFIO; KVM itself can handle it well.

> 
> What KVM needs from VFIO are the informations about the physical
> interrupts, and it makes total sense to attach them as metadata to the
> eventfd.

When the guest sets the IRQ affinity, QEMU first gets the MSI/MSI-X
configuration, then passes this information to kernel space via the VFIO
infrastructure; we need the MSI/MSI-X configuration to update the
associated posted-format IRTE accordingly. This is the key point for PI
in terms of VFIO.

Thanks,
Feng

> 
> But the problems start at how this metadata should look like. It would
> be good to have some generic description, but not sure if this is
> possible. Otherwise this metadata would need to be requested by VFIO
> from the IOMMU driver and passed on to KVM, which it then passes back to
> the IOMMU driver. Or something like that.
> 
> 
> 
> 	Joerg

Wu, Feng June 25, 2015, 9:37 a.m. UTC | #3
> -----Original Message-----
> From: Joerg Roedel [mailto:joro@8bytes.org]
> Sent: Wednesday, June 24, 2015 11:46 PM
> To: Alex Williamson
> Cc: Wu, Feng; Eric Auger; Avi Kivity; kvm@vger.kernel.org;
> linux-kernel@vger.kernel.org; pbonzini@redhat.com; mtosatti@redhat.com
> Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> 
> On Thu, Jun 18, 2015 at 02:04:08PM -0600, Alex Williamson wrote:
> > There are plenty of details to be filled in,
> 
> I also need to fill plenty of details in my head first, so here are some
> suggestions based on my current understanding. Please don't hesitate to
> correct me if where I got something wrong.
> 
> So first I totally agree that the handling of PI/non-PI configurations
> should be transparent to user-space.

After thinking about this a bit more, I recall why I used user-space
to trigger the IRTE update for posted-interrupts; here is the reason:

Let's take MSI as an example:
When guest updates the MSI configuration, here is the code path in
QEMU and KVM:

vfio_update_msi() --> vfio_update_kvm_msi_virq() -->
kvm_irqchip_update_msi_route() --> kvm_update_routing_entry() -->
kvm_irqchip_commit_routes() --> KVM_SET_GSI_ROUTING -->
kvm_set_irq_routing()

It finally ends up in kvm_set_irq_routing() in KVM, where there are two
problems:
1. This function uses RCU, so it is hard to find which entry in the irq
   routing table is being updated.
2. Even if we find the updated entry, it is hard to find the assigned
   device associated with that irq routing entry.

So I used a VFIO API to notify KVM of the updated MSI/MSI-X configuration
and the associated assigned devices. I think we need to find a way to
address the above two issues before going forward. Alex, what is your
opinion?
Thanks a lot!

Thanks,
Feng


> 
> I read a bit through the VT-d spec, and my understanding of posted
> interrupts so far is that:
> 
> 	1) Each VCPU gets a PI-Descriptor with its pending Posted
> 	   Interrupts. This descriptor needs to be updated when a VCPU
> 	   is migrated to another PCPU and should thus be under control
> 	   of KVM.
> 
> 	   This is similar to the vAPIC backing page in the AMD version
> 	   of this, except that the PCPU routing information is stored
> 	   somewhere else on AMD.
> 
> 	2) As long as the VCPU runs the IRTEs are configured for
> 	   posting, when the VCPU goes to sleep the old remapped entry is
> 	   established again. So when the VCPU sleeps the interrupt
> 	   would get routed to VFIO and forwarded through the eventfd.
> 
> 	   This would be different to the AMD version, where we have a
> 	   running bit. When this is clear the IOMMU will trigger an event
> 	   in its event-log. This might need special handling in VFIO
> 	   ('might' because VFIO does not need to forward the interrupt,
> 	    it just needs to make sure the VCPU wakes up).
> 
> 	   Please correct me if my understanding of the Intel version is
> 	   wrong.
> 
> So most of the data structures the IOMMU reads for this need to be
> updated from KVM code (either x86-generic or AMD/Intel specific code),
> as KVM has the information about VCPU load/unload and the IRQ routing.
> 
> What KVM needs from VFIO are the informations about the physical
> interrupts, and it makes total sense to attach them as metadata to the
> eventfd.
> 
> But the problems start at how this metadata should look like. It would
> be good to have some generic description, but not sure if this is
> possible. Otherwise this metadata would need to be requested by VFIO
> from the IOMMU driver and passed on to KVM, which it then passes back to
> the IOMMU driver. Or something like that.
> 
> 
> 
> 	Joerg

Alex Williamson June 25, 2015, 3:11 p.m. UTC | #4
On Thu, 2015-06-25 at 09:37 +0000, Wu, Feng wrote:
> 
> > -----Original Message-----
> > From: Joerg Roedel [mailto:joro@8bytes.org]
> > Sent: Wednesday, June 24, 2015 11:46 PM
> > To: Alex Williamson
> > Cc: Wu, Feng; Eric Auger; Avi Kivity; kvm@vger.kernel.org;
> > linux-kernel@vger.kernel.org; pbonzini@redhat.com; mtosatti@redhat.com
> > Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> > 
> > On Thu, Jun 18, 2015 at 02:04:08PM -0600, Alex Williamson wrote:
> > > There are plenty of details to be filled in,
> > 
> > I also need to fill plenty of details in my head first, so here are some
> > suggestions based on my current understanding. Please don't hesitate to
> > correct me if where I got something wrong.
> > 
> > So first I totally agree that the handling of PI/non-PI configurations
> > should be transparent to user-space.
> 
> After thinking about this a bit more, I recall that why I used user-space
> to trigger the IRTE update for posted-interrupts, here is the reason:
> 
> Let's take MSI for an example:
> When guest updates the MSI configuration, here is the code path in
> QEMU and KVM:
> 
> vfio_update_msi() --> vfio_update_kvm_msi_virq() -->
> kvm_irqchip_update_msi_route() --> kvm_update_routing_entry() -->
> kvm_irqchip_commit_routes() --> kvm_irqchip_commit_routes() -->
> KVM_SET_GSI_ROUTING --> kvm_set_irq_routing()
> 
> It will finally go to kvm_set_irq_routing() in KVM, there are two problem:
> 1. It use RCU in this function, it is hard to find which entry in the irq routing
>   table is being updated.
> 2. Even we find the updated entry, it is hard to find the associated assigned
>   device with this irq routing entry.
> 
> So I used a VFIO API to notify KVM the updated MSI/MSIx configuration and
> the associated assigned devices. I think we need to find a way to address
> the above two issues before going forward. Alex, what is your opinion?

So the trouble is that QEMU vfio updates a single MSI vector, but that
just updates a single entry within a whole table of routes, then the
whole table is pushed to KVM.  But in kvm_set_irq_routing() we have
access to both the new and the old tables, so we do have the ability to
detect the change.  We can therefore detect which GSI changed and
cross-reference that to KVM's irqfds.  If we have an irqfd that matches
the GSI then we have all the information we need, right?  We can use the
eventfd_ctx of the irqfd to call into the IRQ bypass manager if we need
to.  If it's an irqfd that's already enabled for bypass then we may
already have the data we need to tweak the PI config.

Yes, I agree it's more difficult, but it doesn't appear to be
impossible, right?  Thanks,

Alex
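The diff Alex describes could be sketched roughly like this (hypothetical
flat table; the real KVM routing table is hashed and RCU-protected, so
the actual comparison would be more involved):

```c
/* Sketch: compare old and new GSI routing tables entry by entry; a
 * GSI whose MSI address/data changed, and which has a matching irqfd,
 * identifies the vector whose PI configuration must be updated. */

struct msi_route {
	unsigned int gsi;
	unsigned long long msi_addr_data;	/* address+data combined */
};

/* Return the first GSI whose MSI programming changed, or -1 if the
 * tables are identical. Both tables are assumed to be the same length
 * and sorted by GSI. */
static int routing_diff(const struct msi_route *old_tab,
			const struct msi_route *new_tab, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (old_tab[i].gsi == new_tab[i].gsi &&
		    old_tab[i].msi_addr_data != new_tab[i].msi_addr_data)
			return (int)old_tab[i].gsi;
	return -1;
}
```

The returned GSI would then be matched against the irqfd list, and the
irqfd's eventfd_ctx used as the token into the bypass manager.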

Joerg Roedel June 29, 2015, 9:06 a.m. UTC | #5
Hi Feng,

On Thu, Jun 25, 2015 at 09:11:52AM -0600, Alex Williamson wrote:
> So the trouble is that QEMU vfio updates a single MSI vector, but that
> just updates a single entry within a whole table of routes, then the
> whole table is pushed to KVM.  But in kvm_set_irq_routing() we have
> access to both the new and the old tables, so we do have the ability to
> detect the change.  We can therefore detect which GSI changed and
> cross-reference that to KVMs irqfds.  If we have an irqfd that matches
> the GSI then we have all the information we need, right?  We can use the
> eventfd_ctx of the irqfd to call into the IRQ bypass manager if we need
> to.  If it's an irqfd that's already enabled for bypass then we may
> already have the data we need to tweak the PI config.
> 
> Yes, I agree it's more difficult, but it doesn't appear to be
> impossible, right?

Since this also doesn't happen very often, you could also just update _all_
PI data-structures from kvm_set_irq_routing, no? This would just
resemble the way the API works anyway.

You just need to be careful to update the data structures only when the
function can't fail anymore, so that you don't have to roll back
anything.


	Joerg
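Joerg's alternative, refreshing every entry once the routing update can
no longer fail, is even simpler to sketch (update_irte() here is a
hypothetical stand-in for the IOMMU-driver call that rewrites one
posted-format IRTE):

```c
/* Sketch: once kvm_set_irq_routing() is past its last failure point,
 * walk the whole new table and refresh the IRTE/PI state for every
 * entry unconditionally, rather than diffing old vs. new. */

struct route_entry {
	unsigned int gsi;
	unsigned long long msi_addr_data;
};

static int irtes_updated;	/* instrumentation for the sketch */

/* Stand-in for the IOMMU-driver call that reprograms one IRTE. */
static void update_irte(const struct route_entry *e)
{
	(void)e;
	irtes_updated++;
}

/* Called only after the new table is committed and cannot fail, so no
 * rollback is ever needed. */
static void update_all_pi(const struct route_entry *tab, int n)
{
	int i;

	for (i = 0; i < n; i++)
		update_irte(&tab[i]);
}
```

Since routing updates are rare, the O(n) walk over the table costs
little, and no diff or rollback logic is required.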

Wu, Feng June 29, 2015, 9:14 a.m. UTC | #6
> -----Original Message-----
> From: Joerg Roedel [mailto:joro@8bytes.org]
> Sent: Monday, June 29, 2015 5:06 PM
> To: Wu, Feng
> Cc: Alex Williamson; Eric Auger; Avi Kivity; kvm@vger.kernel.org;
> linux-kernel@vger.kernel.org; pbonzini@redhat.com; mtosatti@redhat.com
> Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> 
> Hi Feng,
> 
> On Thu, Jun 25, 2015 at 09:11:52AM -0600, Alex Williamson wrote:
> > So the trouble is that QEMU vfio updates a single MSI vector, but that
> > just updates a single entry within a whole table of routes, then the
> > whole table is pushed to KVM.  But in kvm_set_irq_routing() we have
> > access to both the new and the old tables, so we do have the ability to
> > detect the change.  We can therefore detect which GSI changed and
> > cross-reference that to KVMs irqfds.  If we have an irqfd that matches
> > the GSI then we have all the information we need, right?  We can use the
> > eventfd_ctx of the irqfd to call into the IRQ bypass manager if we need
> > to.  If it's an irqfd that's already enabled for bypass then we may
> > already have the data we need to tweak the PI config.
> >
> > Yes, I agree it's more difficult, but it doesn't appear to be
> > impossible, right?
> 
> Since this also doesn't happen very often, you could also just update _all_
> PI data-structures from kvm_set_irq_routing, no? This would just
> resemble the way the API works anyway.

Thanks a lot for your suggestion, Joerg!

Do you mean updating the hardware IRTEs for all the entries in the irq
routing table, regardless of whether an entry is the one that was
updated?

Thanks,
Feng

> 
> You just need to be careful to update the data structures only when the
> function can't fail anymore, so that you don't have to roll back
> anything.
> 
> 
> 	Joerg

Joerg Roedel June 29, 2015, 9:22 a.m. UTC | #7
On Mon, Jun 29, 2015 at 09:14:54AM +0000, Wu, Feng wrote:
> Do you mean updating the hardware IRTEs for all the entries in the irq
> routing table, no matter whether it is the updated one?

Right, that's what I mean. It seems wrong to me to work around the API
interface by creating a diff between the old and the new routing table.
It is much simpler (and easier to maintain) to just update the IRTE
and PI structures for all IRQs in the routing table, especially since
this is not a hot-path.


	Joerg

Wu, Feng June 29, 2015, 1:01 p.m. UTC | #8
> -----Original Message-----
> From: Joerg Roedel [mailto:joro@8bytes.org]
> Sent: Monday, June 29, 2015 5:23 PM
> To: Wu, Feng
> Cc: Alex Williamson; Eric Auger; Avi Kivity; kvm@vger.kernel.org;
> linux-kernel@vger.kernel.org; pbonzini@redhat.com; mtosatti@redhat.com
> Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> 
> On Mon, Jun 29, 2015 at 09:14:54AM +0000, Wu, Feng wrote:
> > Do you mean updating the hardware IRTEs for all the entries in the irq
> > routing table, no matter whether it is the updated one?
> 
> Right, that's what I mean. It seems wrong to me to work around the API
> interface by creating a diff between the old and the new routing table.

Yes, the original usage model here doesn't care about the diff between
the old and the new table, and it is a little intrusive to add the
comparison code here.

> It is much simpler (and easier to maintain) to just update the IRTE
> and PI structures for all IRQs in the routing table, especially since
> this is not a hot-path.

Agree.

Thanks,
Feng

> 
> 
> 	Joerg

Wu, Feng June 29, 2015, 1:27 p.m. UTC | #9
aWRhdGlvbi4gIEkNCj4gPiA+IGltYWdpbmUgdGhlcmUgd291bGQgYmUgcmVnaXN0cmF0aW9uL2Rl
LXJlZ2lzdHJhdGlvbiBjYWxsYmFja3Mgc2VwYXJhdGUNCj4gPiA+IGZvciBpbnRlcnJ1cHQgcHJv
ZHVjZXJzIHZzIGludGVycnVwdCBjb25zdW1lcnMuICBFYWNoIHJlZ2lzdHJhdGlvbg0KPiA+ID4g
ZnVuY3Rpb24gd291bGQgbGlrZWx5IHByb3ZpZGUgYSBzdHJ1Y3Qgb2YgY2FsbGJhY2tzLCBwcm9i
YWJseSBzaW1pbGFyIHRvDQo+ID4gPiB0aGUgZ2V0X3N5bWJvbCBjYWxsYmFja3MgcHJvcG9zZWQg
Zm9yIHRoZSBrdm0tdmZpbyBkZXZpY2Ugb24gdGhlIElSUQ0KPiA+ID4gcHJvZHVjZXIgc2lkZS4g
IFRoZSBldmVudGZkIHdvdWxkIGJlIHRoZSB0b2tlbiB0aGF0IHRoZSBtYW5hZ2VyIHdvdWxkDQo+
ID4gPiB1c2UgdG8gbWF0Y2ggcHJvZHVjZXJzIGFuZCBjb25zdW1lcnMuICBUaGUgaGFyZCBwYXJ0
IGlzIHByb2JhYmx5DQo+ID4gPiBmaWd1cmluZyBvdXQgd2hhdCBpbmZvcm1hdGlvbiB0byByZXRy
aWV2ZSBmcm9tIHRoZSBwcm9kdWNlciBhbmQgcHJvdmlkZQ0KPiA+ID4gdG8gdGhlIGNvbnN1bWVy
IGluIGEgZ2VuZXJpYyB3YXkgYmV0d2VlbiBwY2kgYW5kIHBsYXRmb3JtLCBidXQgYXMgYW4NCj4g
PiA+IGludGVybmFsIGludGVyZmFjZSwgaXQncyBub3QgYSBiaWcgZGVhbCBpZiB3ZSBzY3JldyBp
dCB1cCBhIGZldyB0aW1lcyB0bw0KPiA+ID4gc3RhcnQuICBUaGFua3MsDQo+ID4NCj4gPiBPbiBw
b3N0ZWQtaW50ZXJydXB0cyBzaWRlLCB0aGUgbWFpbiBwdXJwb3NlIG9mIHRoZSBuZXcgQVBJcyBp
cyB0byB1cGRhdGUNCj4gPiB0aGUgSVJURSB3aGVuIGd1ZXN0IGNoYW5nZXMgdk1TSS92TVNJeCBj
b25maWd1cmF0aW9uLiBBbGV4LCBkbyB5b3UgaGF2ZQ0KPiA+IGFueSBkZXRhaWxlZCBpZGVhcyBm
b3IgdGhlIG5ldyBzb2x1dGlvbiB0byBhY2hpZXZlIHRoaXMgcHVycG9zZT8gSXQgc2hvdWxkDQo+
ID4gYmUgaGVscGZ1bCBpZiB5b3UgY2FuIHNoYXJlIHNvbWUhDQo+IA0KPiANCj4gVGhlcmUgYXJl
IHBsZW50eSBvZiBkZXRhaWxzIHRvIGJlIGZpbGxlZCBpbiwgYnV0IEkgdGhpbmsgdGhlIGJhc2lj
cw0KPiBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGUgY29kZSBiZWxvdy4gIFRoZSBJUlEgYnlwYXNz
IG1hbmFnZXIganVzdA0KPiBkZWZpbmVzIGEgcGFpciBvZiBzdHJ1Y3R1cmVzLCBvbmUgZm9yIGlu
dGVycnVwdCBwcm9kdWNlcnMgYW5kIG9uZSBmb3INCj4gaW50ZXJydXB0IGNvbnN1bWVycy4gIEkn
bSBjZXJ0YWluIHRoYXQgd2UnbGwgbmVlZCBtb3JlIGNhbGxiYWNrcyB0aGFuDQo+IEkndmUgZGVm
aW5lZCBiZWxvdywgYnV0IGZpZ3VyaW5nIG91dCB3aGF0IHRob3NlIHNob3VsZCBiZSBmb3IgdGhl
IGJlc3QNCj4gYWJzdHJhY3Rpb24gaXMgdGhlIGhhcmRlc3QgcGFydCBvZiB0aGlzIGlkZWEuICBU
aGUgbWFuYWdlciBwcm92aWRlcyBib3RoDQo+IHJlZ2lzdHJhdGlvbiBhbmQgZGUtcmVnaXN0cmF0
aW9uIGludGVyZmFjZXMgZm9yIGJvdGggdHlwZXMgb2Ygb2JqZWN0cw0KPiBhbmQga2VlcHMgbGlz
dHMgZm9yIGVhY2gsIHByb3RlY3RlZCBieSBhIGxvY2suICBUaGUgbWFuYWdlciBkb2Vzbid0IGV2
ZW4NCj4gcmVhbGx5IG5lZWQgdG8ga25vdyB3aGF0IHRoZSBtYXRjaCB0b2tlbiBpcywgYnV0IEkg
YXNzdW1lIGZvciBvdXINCj4gcHVycG9zZXMgaXQgd2lsbCBiZSBhbiBldmVudGZkX2N0eC4NCj4g
DQo+IE9uIHRoZSB2ZmlvIHNpZGUsIHRoZSBwcm9kdWNlciBzdHJ1Y3Qgd291bGQgYmUgZW1iZWRk
ZWQgaW4gdGhlDQo+IHZmaW9fcGNpX2lycV9jdHggc3RydWN0LiAgS1ZNIHdvdWxkIHByb2JhYmx5
IGVtYmVkIHRoZSBjb25zdW1lciBzdHJ1Y3QNCj4gaW4gX2lycWZkLiAgQXMgSSd2ZSBjb2RlZCBi
ZWxvdywgdGhlIElSUSBieXBhc3MgbWFuYWdlciBjYWxscyB0aGUNCj4gY29uc3VtZXIgY2FsbGJh
Y2tzLCBzbyB0aGUgcHJvZHVjZXIgc3RydWN0IHdvdWxkIG5lZWQgZmllbGRzIG9yDQo+IGNhbGxi
YWNrcyB0byBwcm92aWRlIHRoZSBjb25zdW1lciB0aGUgaW5mbyBpdCBuZWVkcy4gIEFJVUkgdGhl
IFBvc3RlZA0KPiBJbnRlcnJ1cHQgbW9kZWwsIFZGSU8gb25seSBuZWVkcyB0byBwcm92aWRlIGRh
dGEgdG8gdGhlIGNvbnN1bWVyLiAgRm9yDQo+IElSUSBGb3J3YXJkaW5nLCBJIHRoaW5rIHRoZSBw
cm9kdWNlciBuZWVkcyB0byBiZSBpbmZvcm1lZCB3aGVuIGJ5cGFzcyBpcw0KPiBhY3RpdmUgdG8g
bW9kZWwgdGhlIGluY29taW5nIGludGVycnVwdCBhcyBlZGdlIHZzIGxldmVsLg0KPiANCj4gSSd2
ZSBwcm90b3R5cGVkIHRoZSBiYXNlIElSUSBieXBhc3MgbWFuYWdlciBoZXJlIGFzIHN0YXRpYywg
YnV0IEkgZG9uJ3QNCj4gc2VlIGFueSByZWFzb24gaXQgY291bGRuJ3QgYmUgYSBtb2R1bGUgdGhh
dCdzIGxvYWRlZCBieSBkZXBlbmRlbmN5IHdoZW4NCj4gZWl0aGVyIHZmaW8tcGNpIG9yIGt2bS1p
bnRlbCBpcyBsb2FkZWQgKG9yIG90aGVyIHByb2R1Y2VyL2NvbnN1bWVyDQo+IG9iamVjdHMpLg0K
PiANCj4gSXMgdGhpcyBhIHJlYXNvbmFibGUgc3RhcnRpbmcgcG9pbnQgdG8gY3JhZnQgdGhlIGFk
ZGl0aW9uYWwgZmllbGRzIGFuZA0KPiBjYWxsYmFja3MgYW5kIGludGVyYWN0aW9uIG9mIHdobyBj
YWxscyB3aG8gdGhhdCB3ZSBuZWVkIHRvIHN1cHBvcnQNCj4gUG9zdGVkIEludGVycnVwdHMgYW5k
IElSUSBGb3J3YXJkaW5nPyAgSXMgdGhlIEFNRCB2ZXJzaW9uIG9mIHRoaXMgc3RpbGwNCj4gYWxp
dmU/ICBUaGFua3MsDQo+IA0KPiBBbGV4DQo+IA0KPiBkaWZmIC0tZ2l0IGEvYXJjaC94ODYva3Zt
L0tjb25maWcgYi9hcmNoL3g4Ni9rdm0vS2NvbmZpZw0KPiBpbmRleCA0MTNhN2JmLi4yMmY2ZmNi
IDEwMDY0NA0KPiAtLS0gYS9hcmNoL3g4Ni9rdm0vS2NvbmZpZw0KPiArKysgYi9hcmNoL3g4Ni9r
dm0vS2NvbmZpZw0KPiBAQCAtNjEsNiArNjEsNyBAQCBjb25maWcgS1ZNX0lOVEVMDQo+ICAJZGVw
ZW5kcyBvbiBLVk0NCj4gIAkjIGZvciBwZXJmX2d1ZXN0X2dldF9tc3JzKCk6DQo+ICAJZGVwZW5k
cyBvbiBDUFVfU1VQX0lOVEVMDQo+ICsJc2VsZWN0IElSUV9CWVBBU1NfTUFOQUdFUg0KPiAgCS0t
LWhlbHAtLS0NCj4gIAkgIFByb3ZpZGVzIHN1cHBvcnQgZm9yIEtWTSBvbiBJbnRlbCBwcm9jZXNz
b3JzIGVxdWlwcGVkIHdpdGggdGhlIFZUDQo+ICAJICBleHRlbnNpb25zLg0KPiBkaWZmIC0tZ2l0
IGEvZHJpdmVycy92ZmlvL3BjaS9LY29uZmlnIGIvZHJpdmVycy92ZmlvL3BjaS9LY29uZmlnDQo+
IGluZGV4IDU3OWQ4M2IuLjAyOTEyZjEgMTAwNjQ0DQo+IC0tLSBhL2RyaXZlcnMvdmZpby9wY2kv
S2NvbmZpZw0KPiArKysgYi9kcml2ZXJzL3ZmaW8vcGNpL0tjb25maWcNCj4gQEAgLTIsNiArMiw3
IEBAIGNvbmZpZyBWRklPX1BDSQ0KPiAgCXRyaXN0YXRlICJWRklPIHN1cHBvcnQgZm9yIFBDSSBk
ZXZpY2VzIg0KPiAgCWRlcGVuZHMgb24gVkZJTyAmJiBQQ0kgJiYgRVZFTlRGRA0KPiAgCXNlbGVj
dCBWRklPX1ZJUlFGRA0KPiArCXNlbGVjdCBJUlFfQllQQVNTX01BTkFHRVINCj4gIAloZWxwDQo+
ICAJICBTdXBwb3J0IGZvciB0aGUgUENJIFZGSU8gYnVzIGRyaXZlci4gIFRoaXMgaXMgcmVxdWly
ZWQgdG8gbWFrZQ0KPiAgCSAgdXNlIG9mIFBDSSBkcml2ZXJzIHVzaW5nIHRoZSBWRklPIGZyYW1l
d29yay4NCj4gZGlmZiAtLWdpdCBhL2RyaXZlcnMvdmZpby9wY2kvdmZpb19wY2lfaW50cnMuYyBi
L2RyaXZlcnMvdmZpby9wY2kvdmZpb19wY2lfaW50cnMuYw0KPiBpbmRleCAxZjU3N2I0Li40ZTA1
M2JlIDEwMDY0NA0KPiAtLS0gYS9kcml2ZXJzL3ZmaW8vcGNpL3ZmaW9fcGNpX2ludHJzLmMNCj4g
KysrIGIvZHJpdmVycy92ZmlvL3BjaS92ZmlvX3BjaV9pbnRycy5jDQo+IEBAIC0xODEsNiArMTgx
LDcgQEAgc3RhdGljIGludCB2ZmlvX2ludHhfc2V0X3NpZ25hbChzdHJ1Y3QgdmZpb19wY2lfZGV2
aWNlDQo+ICp2ZGV2LCBpbnQgZmQpDQo+IA0KPiAgCWlmICh2ZGV2LT5jdHhbMF0udHJpZ2dlcikg
ew0KPiAgCQlmcmVlX2lycShwZGV2LT5pcnEsIHZkZXYpOw0KPiArCQkvKiBpcnFfYnlwYXNzX3Vu
cmVnaXN0ZXJfcHJvZHVjZXIoKTsgKi8NCj4gIAkJa2ZyZWUodmRldi0+Y3R4WzBdLm5hbWUpOw0K
PiAgCQlldmVudGZkX2N0eF9wdXQodmRldi0+Y3R4WzBdLnRyaWdnZXIpOw0KPiAgCQl2ZGV2LT5j
dHhbMF0udHJpZ2dlciA9IE5VTEw7DQo+IEBAIC0yMTQsNiArMjE1LDggQEAgc3RhdGljIGludCB2
ZmlvX2ludHhfc2V0X3NpZ25hbChzdHJ1Y3QgdmZpb19wY2lfZGV2aWNlDQo+ICp2ZGV2LCBpbnQg
ZmQpDQo+ICAJCXJldHVybiByZXQ7DQo+ICAJfQ0KPiANCj4gKwkvKiBpcnFfYnlwYXNzX3JlZ2lz
dGVyX3Byb2R1Y2VyKCk7ICovDQo+ICsNCj4gIAkvKg0KPiAgCSAqIElOVHggZGlzYWJsZSB3aWxs
IHN0aWNrIGFjcm9zcyB0aGUgbmV3IGlycSBzZXR1cCwNCj4gIAkgKiBkaXNhYmxlX2lycSB3b24n
dC4NCj4gQEAgLTMxOSw2ICszMjIsNyBAQCBzdGF0aWMgaW50IHZmaW9fbXNpX3NldF92ZWN0b3Jf
c2lnbmFsKHN0cnVjdA0KPiB2ZmlvX3BjaV9kZXZpY2UgKnZkZXYsDQo+IA0KPiAgCWlmICh2ZGV2
LT5jdHhbdmVjdG9yXS50cmlnZ2VyKSB7DQo+ICAJCWZyZWVfaXJxKGlycSwgdmRldi0+Y3R4W3Zl
Y3Rvcl0udHJpZ2dlcik7DQo+ICsJCS8qIGlycV9ieXBhc3NfdW5yZWdpc3Rlcl9wcm9kdWNlcigp
OyAqLw0KPiAgCQlrZnJlZSh2ZGV2LT5jdHhbdmVjdG9yXS5uYW1lKTsNCj4gIAkJZXZlbnRmZF9j
dHhfcHV0KHZkZXYtPmN0eFt2ZWN0b3JdLnRyaWdnZXIpOw0KPiAgCQl2ZGV2LT5jdHhbdmVjdG9y
XS50cmlnZ2VyID0gTlVMTDsNCj4gQEAgLTM2MCw2ICszNjQsOCBAQCBzdGF0aWMgaW50IHZmaW9f
bXNpX3NldF92ZWN0b3Jfc2lnbmFsKHN0cnVjdA0KPiB2ZmlvX3BjaV9kZXZpY2UgKnZkZXYsDQo+
ICAJCXJldHVybiByZXQ7DQo+ICAJfQ0KPiANCj4gKwkvKiBpcnFfYnlwYXNzX3JlZ2lzdGVyX3By
b2R1Y2VyKCk7ICovDQo+ICsNCj4gIAl2ZGV2LT5jdHhbdmVjdG9yXS50cmlnZ2VyID0gdHJpZ2dl
cjsNCj4gDQo+ICAJcmV0dXJuIDA7DQo+IGRpZmYgLS1naXQgYS9pbmNsdWRlL2xpbnV4L2lycWJ5
cGFzcy5oIGIvaW5jbHVkZS9saW51eC9pcnFieXBhc3MuaA0KPiBuZXcgZmlsZSBtb2RlIDEwMDY0
NA0KPiBpbmRleCAwMDAwMDAwLi43MTg1MDhlDQo+IC0tLSAvZGV2L251bGwNCj4gKysrIGIvaW5j
bHVkZS9saW51eC9pcnFieXBhc3MuaA0KPiBAQCAtMCwwICsxLDIzIEBADQo+ICsjaWZuZGVmIElS
UUJZUEFTU19IDQo+ICsjZGVmaW5lIElSUUJZUEFTU19IDQo+ICsNCj4gKyNpbmNsdWRlIDxsaW51
eC9saXN0Lmg+DQo+ICsNCj4gK3N0cnVjdCBpcnFfYnlwYXNzX3Byb2R1Y2VyIHsNCj4gKwlzdHJ1
Y3QgbGlzdF9oZWFkIG5vZGU7DQo+ICsJdm9pZCAqdG9rZW47DQo+ICsJLyogVEJEICovDQo+ICt9
Ow0KPiArDQo+ICtzdHJ1Y3QgaXJxX2J5cGFzc19jb25zdW1lciB7DQo+ICsJc3RydWN0IGxpc3Rf
aGVhZCBub2RlOw0KPiArCXZvaWQgKnRva2VuOw0KPiArCXZvaWQgKCphZGRfcHJvZHVjZXIpKHN0
cnVjdCBpcnFfYnlwYXNzX3Byb2R1Y2VyICopOw0KPiArCXZvaWQgKCpkZWxfcHJvZHVjZXIpKHN0
cnVjdCBpcnFfYnlwYXNzX3Byb2R1Y2VyICopOw0KDQpUaGVzZSB0d28gY2FsbGJhY2tzIHNob3Vs
ZCBiZSBjb21tb24gZnVuY3Rpb24sIGZvciBQSSwgSSBuZWVkIHRvIGFkZA0Kc29tZXRoaW5nIHNw
ZWNpZmljIHRvIHg4Niwgc3VjaCBhcywgdXBkYXRpbmcgdGhlIGFzc29jaWF0ZWQgSVJURSwgaG93
DQpzaG91bGQgSSBkbyBmb3IgdGhpcz8NCg0KPiArfTsNCj4gKw0KPiAraW50IGlycV9ieXBhc3Nf
cmVnaXN0ZXJfcHJvZHVjZXIoc3RydWN0IGlycV9ieXBhc3NfcHJvZHVjZXIgKik7DQo+ICt2b2lk
IGlycV9ieXBhc3NfdW5yZWdpc3Rlcl9wcm9kdWNlcihzdHJ1Y3QgaXJxX2J5cGFzc19wcm9kdWNl
ciAqKTsNCj4gK2ludCBpcnFfYnlwYXNzX3JlZ2lzdGVyX2NvbnN1bWVyKHN0cnVjdCBpcnFfYnlw
YXNzX2NvbnN1bWVyICopOw0KPiArdm9pZCBpcnFfYnlwYXNzX3VucmVnaXN0ZXJfY29uc3VtZXIo
c3RydWN0IGlycV9ieXBhc3NfY29uc3VtZXIgKik7DQo+ICsjZW5kaWYgLyogSVJRQllQQVNTX0gg
Ki8NCj4gZGlmZiAtLWdpdCBhL2tlcm5lbC9pcnEvS2NvbmZpZyBiL2tlcm5lbC9pcnEvS2NvbmZp
Zw0KPiBpbmRleCA5YTc2ZTNiLi40NTAyY2RjIDEwMDY0NA0KPiAtLS0gYS9rZXJuZWwvaXJxL0tj
b25maWcNCj4gKysrIGIva2VybmVsL2lycS9LY29uZmlnDQo+IEBAIC0xMDAsNCArMTAwLDcgQEAg
Y29uZmlnIFNQQVJTRV9JUlENCj4gDQo+ICAJICBJZiB5b3UgZG9uJ3Qga25vdyB3aGF0IHRvIGRv
IGhlcmUsIHNheSBOLg0KPiANCj4gK2NvbmZpZyBJUlFfQllQQVNTX01BTkFHRVINCj4gKwlib29s
DQo+ICsNCj4gIGVuZG1lbnUNCj4gZGlmZiAtLWdpdCBhL2tlcm5lbC9pcnEvTWFrZWZpbGUgYi9r
ZXJuZWwvaXJxL01ha2VmaWxlDQo+IGluZGV4IGQxMjEyMzUuLmEzMGVkNzcgMTAwNjQ0DQo+IC0t
LSBhL2tlcm5lbC9pcnEvTWFrZWZpbGUNCj4gKysrIGIva2VybmVsL2lycS9NYWtlZmlsZQ0KPiBA
QCAtNywzICs3LDQgQEAgb2JqLSQoQ09ORklHX1BST0NfRlMpICs9IHByb2Mubw0KPiAgb2JqLSQo
Q09ORklHX0dFTkVSSUNfUEVORElOR19JUlEpICs9IG1pZ3JhdGlvbi5vDQo+ICBvYmotJChDT05G
SUdfUE1fU0xFRVApICs9IHBtLm8NCj4gIG9iai0kKENPTkZJR19HRU5FUklDX01TSV9JUlEpICs9
IG1zaS5vDQo+ICtvYmotJChDT05GSUdfSVJRX0JZUEFTU19NQU5BR0VSKSArPSBieXBhc3Mubw0K
PiBkaWZmIC0tZ2l0IGEva2VybmVsL2lycS9ieXBhc3MuYyBiL2tlcm5lbC9pcnEvYnlwYXNzLmMN
Cj4gbmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4gaW5kZXggMDAwMDAwMC4uNWQwZjkyYg0KPiAtLS0g
L2Rldi9udWxsDQo+ICsrKyBiL2tlcm5lbC9pcnEvYnlwYXNzLmMNCg0KSXMgaXQgYmV0dGVyIHRv
IHB1dCB0aGlzIGNvZGUgaGVyZSBvciBpbiB2ZmlvIGZvbGRlcj8NCg0KVGhhbmtzLA0KRmVuZw0K
DQo+IEBAIC0wLDAgKzEsMTE2IEBADQo+ICsvKg0KPiArICogSVJRIG9mZmxvYWQvYnlwYXNzIG1h
bmFnZXINCj4gKyAqDQo+ICsgKiBWYXJpb3VzIHZpcnR1YWxpemF0aW9uIGhhcmR3YXJlIGFjY2Vs
ZXJhdGlvbiB0ZWNobmlxdWVzIGFsbG93IGJ5cGFzc2luZw0KPiArICogb3Igb2ZmbG9hZGluZyBp
bnRlcnJ1cHRzIHJlY2VpZXZlZCBmcm9tIGRldmljZXMgYXJvdW5kIHRoZSBob3N0IGtlcm5lbC4N
Cj4gKyAqIFBvc3RlZCBJbnRlcnJ1cHRzIG9uIEludGVsIFZULWQgc3lzdGVtcyBjYW4gYWxsb3cg
aW50ZXJydXB0cyB0byBiZQ0KPiArICogcmVjaWV2ZWQgZGlyZWN0bHkgYnkgYSB2aXJ0dWFsIG1h
Y2hpbmUuICBBUk0gSVJRIEZvcndhcmRpbmcgY2FuIGFsbG93DQo+ICsgKiBsZXZlbCB0cmlnZ2Vy
ZWQgZGV2aWNlIGludGVycnVwdHMgdG8gYmUgZGUtYXNzZXJ0ZWQgZGlyZWN0bHkgYnkgdGhlIFZN
Lg0KPiArICogVGhpcyBtYW5hZ2VyIGFsbG93cyBpbnRlcnJ1cHQgcHJvZHVjZXJzIGFuZCBjb25z
dW1lcnMgdG8gZmluZCBlYWNoIG90aGVyDQo+ICsgKiB0byBlbmFibGUgdGhpcyBzb3J0IG9mIGJ5
cGFzcy4NCj4gKyAqLw0KPiArDQo+ICsjaW5jbHVkZSA8bGludXgvaXJxYnlwYXNzLmg+DQo+ICsj
aW5jbHVkZSA8bGludXgvbGlzdC5oPg0KPiArI2luY2x1ZGUgPGxpbnV4L21vZHVsZS5oPg0KPiAr
I2luY2x1ZGUgPGxpbnV4L211dGV4Lmg+DQo+ICsNCj4gK3N0YXRpYyBMSVNUX0hFQUQocHJvZHVj
ZXJzKTsNCj4gK3N0YXRpYyBMSVNUX0hFQUQoY29uc3VtZXJzKTsNCj4gK3N0YXRpYyBERUZJTkVf
TVVURVgobG9jayk7DQo+ICsNCj4gK2ludCBpcnFfYnlwYXNzX3JlZ2lzdGVyX3Byb2R1Y2VyKHN0
cnVjdCBpcnFfYnlwYXNzX3Byb2R1Y2VyICpwcm9kdWNlcikNCj4gK3sNCj4gKwlzdHJ1Y3QgaXJx
X2J5cGFzc19wcm9kdWNlciAqdG1wOw0KPiArCXN0cnVjdCBpcnFfYnlwYXNzX2NvbnN1bWVyICpj
b25zdW1lcjsNCj4gKwlpbnQgcmV0ID0gMDsNCj4gKw0KPiArCW11dGV4X2xvY2soJmxvY2spOw0K
PiArDQo+ICsJbGlzdF9mb3JfZWFjaF9lbnRyeSh0bXAsICZwcm9kdWNlcnMsIG5vZGUpIHsNCj4g
KwkJaWYgKHRtcC0+dG9rZW4gPT0gcHJvZHVjZXItPnRva2VuKSB7DQo+ICsJCQlyZXQgPSAtRUlO
VkFMOw0KPiArCQkJZ290byB1bmxvY2s7DQo+ICsJCX0NCj4gKwl9DQo+ICsNCj4gKwlsaXN0X2Fk
ZCgmcHJvZHVjZXItPm5vZGUsICZwcm9kdWNlcnMpOw0KPiArDQo+ICsJbGlzdF9mb3JfZWFjaF9l
bnRyeShjb25zdW1lciwgJmNvbnN1bWVycywgbm9kZSkgew0KPiArCQlpZiAoY29uc3VtZXItPnRv
a2VuID09IHByb2R1Y2VyLT50b2tlbikgew0KPiArCQkJY29uc3VtZXItPmFkZF9wcm9kdWNlcihw
cm9kdWNlcik7DQo+ICsJCQlicmVhazsNCj4gKwkJfQ0KPiArCX0NCj4gK3VubG9jazoNCj4gKwlt
dXRleF91bmxvY2soJmxvY2spOw0KPiArCXJldHVybiByZXQ7DQo+ICt9DQo+ICtFWFBPUlRfU1lN
Qk9MX0dQTChpcnFfYnlwYXNzX3JlZ2lzdGVyX3Byb2R1Y2VyKTsNCj4gKw0KPiArdm9pZCBpcnFf
YnlwYXNzX3VucmVnaXN0ZXJfcHJvZHVjZXIoc3RydWN0IGlycV9ieXBhc3NfcHJvZHVjZXIgKnBy
b2R1Y2VyKQ0KPiArew0KPiArCXN0cnVjdCBpcnFfYnlwYXNzX2NvbnN1bWVyICpjb25zdW1lcjsN
Cj4gKw0KPiArCW11dGV4X2xvY2soJmxvY2spOw0KPiArDQo+ICsJbGlzdF9mb3JfZWFjaF9lbnRy
eShjb25zdW1lciwgJmNvbnN1bWVycywgbm9kZSkgew0KPiArCQlpZiAoY29uc3VtZXItPnRva2Vu
ID09IHByb2R1Y2VyLT50b2tlbikgew0KPiArCQkJY29uc3VtZXItPmRlbF9wcm9kdWNlcihwcm9k
dWNlcik7DQo+ICsJCQlicmVhazsNCj4gKwkJfQ0KPiArCX0NCj4gKw0KPiArCWxpc3RfZGVsKCZw
cm9kdWNlci0+bm9kZSk7DQo+ICsNCj4gKwltdXRleF91bmxvY2soJmxvY2spOw0KPiArfQ0KPiAr
RVhQT1JUX1NZTUJPTF9HUEwoaXJxX2J5cGFzc191bnJlZ2lzdGVyX3Byb2R1Y2VyKTsNCj4gKw0K
PiAraW50IGlycV9ieXBhc3NfcmVnaXN0ZXJfY29uc3VtZXIoc3RydWN0IGlycV9ieXBhc3NfY29u
c3VtZXIgKmNvbnN1bWVyKQ0KPiArew0KPiArCXN0cnVjdCBpcnFfYnlwYXNzX2NvbnN1bWVyICp0
bXA7DQo+ICsJc3RydWN0IGlycV9ieXBhc3NfcHJvZHVjZXIgKnByb2R1Y2VyOw0KPiArCWludCBy
ZXQgPSAwOw0KPiArDQo+ICsJbXV0ZXhfbG9jaygmbG9jayk7DQo+ICsNCj4gKwlsaXN0X2Zvcl9l
YWNoX2VudHJ5KHRtcCwgJmNvbnN1bWVycywgbm9kZSkgew0KPiArCQlpZiAodG1wLT50b2tlbiA9
PSBjb25zdW1lci0+dG9rZW4pIHsNCj4gKwkJCXJldCA9IC1FSU5WQUw7DQo+ICsJCQlnb3RvIHVu
bG9jazsNCj4gKwkJfQ0KPiArCX0NCj4gKw0KPiArCWxpc3RfYWRkKCZjb25zdW1lci0+bm9kZSwg
JmNvbnN1bWVycyk7DQo+ICsNCj4gKwlsaXN0X2Zvcl9lYWNoX2VudHJ5KHByb2R1Y2VyLCAmcHJv
ZHVjZXJzLCBub2RlKSB7DQo+ICsJCWlmIChwcm9kdWNlci0+dG9rZW4gPT0gY29uc3VtZXItPnRv
a2VuKSB7DQo+ICsJCQljb25zdW1lci0+YWRkX3Byb2R1Y2VyKHByb2R1Y2VyKTsNCj4gKwkJCWJy
ZWFrOw0KPiArCQl9DQo+ICsJfQ0KPiArdW5sb2NrOg0KPiArCW11dGV4X3VubG9jaygmbG9jayk7
DQo+ICsJcmV0dXJuIHJldDsNCj4gK30NCj4gK0VYUE9SVF9TWU1CT0xfR1BMKGlycV9ieXBhc3Nf
cmVnaXN0ZXJfY29uc3VtZXIpOw0KPiArDQo+ICt2b2lkIGlycV9ieXBhc3NfdW5yZWdpc3Rlcl9j
b25zdW1lcihzdHJ1Y3QgaXJxX2J5cGFzc19jb25zdW1lciAqY29uc3VtZXIpDQo+ICt7DQo+ICsJ
c3RydWN0IGlycV9ieXBhc3NfcHJvZHVjZXIgKnByb2R1Y2VyOw0KPiArDQo+ICsJbXV0ZXhfbG9j
aygmbG9jayk7DQo+ICsNCj4gKwlsaXN0X2Zvcl9lYWNoX2VudHJ5KHByb2R1Y2VyLCAmcHJvZHVj
ZXJzLCBub2RlKSB7DQo+ICsJCWlmIChwcm9kdWNlci0+dG9rZW4gPT0gY29uc3VtZXItPnRva2Vu
KSB7DQo+ICsJCQljb25zdW1lci0+ZGVsX3Byb2R1Y2VyKHByb2R1Y2VyKTsNCj4gKwkJCWJyZWFr
Ow0KPiArCQl9DQo+ICsJfQ0KPiArDQo+ICsJbGlzdF9kZWwoJmNvbnN1bWVyLT5ub2RlKTsNCj4g
Kw0KPiArCW11dGV4X3VubG9jaygmbG9jayk7DQo+ICt9DQo+ICtFWFBPUlRfU1lNQk9MX0dQTChp
cnFfYnlwYXNzX3VucmVnaXN0ZXJfY29uc3VtZXIpOw0KPiBkaWZmIC0tZ2l0IGEvdmlydC9rdm0v
ZXZlbnRmZC5jIGIvdmlydC9rdm0vZXZlbnRmZC5jDQo+IGluZGV4IDlmZjQxOTMuLmYzZGExNjEg
MTAwNjQ0DQo+IC0tLSBhL3ZpcnQva3ZtL2V2ZW50ZmQuYw0KPiArKysgYi92aXJ0L2t2bS9ldmVu
dGZkLmMNCj4gQEAgLTQyOSw2ICs0MjksOCBAQCBrdm1faXJxZmRfYXNzaWduKHN0cnVjdCBrdm0g
Kmt2bSwgc3RydWN0IGt2bV9pcnFmZA0KPiAqYXJncykNCj4gIAkgKi8NCj4gIAlmZHB1dChmKTsN
Cj4gDQo+ICsJLyogaXJxX2J5cGFzc19yZWdpc3Rlcl9jb25zdW1lcigpOyAqLw0KPiArDQo+ICAJ
cmV0dXJuIDA7DQo+IA0KPiAgZmFpbDoNCj4gQEAgLTUyOCw2ICs1MzAsOCBAQCBrdm1faXJxZmRf
ZGVhc3NpZ24oc3RydWN0IGt2bSAqa3ZtLCBzdHJ1Y3Qga3ZtX2lycWZkDQo+ICphcmdzKQ0KPiAg
CXN0cnVjdCBfaXJxZmQgKmlycWZkLCAqdG1wOw0KPiAgCXN0cnVjdCBldmVudGZkX2N0eCAqZXZl
bnRmZDsNCj4gDQo+ICsJLyogaXJxX2J5cGFzc191bnJlZ2lzdGVyX2NvbnN1bWVyKCkgKi8NCj4g
Kw0KPiAgCWV2ZW50ZmQgPSBldmVudGZkX2N0eF9mZGdldChhcmdzLT5mZCk7DQo+ICAJaWYgKElT
X0VSUihldmVudGZkKSkNCj4gIAkJcmV0dXJuIFBUUl9FUlIoZXZlbnRmZCk7DQo+IA0KPiANCg0K
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Alex Williamson June 29, 2015, 3:18 p.m. UTC | #10
On Mon, 2015-06-29 at 13:27 +0000, Wu, Feng wrote:
> 
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Friday, June 19, 2015 4:04 AM
> > To: Wu, Feng
> > Cc: Eric Auger; Avi Kivity; kvm@vger.kernel.org; linux-kernel@vger.kernel.org;
> > pbonzini@redhat.com; mtosatti@redhat.com; Joerg Roedel
> > Subject: Re: [v4 08/16] KVM: kvm-vfio: User API for IRQ forwarding
> > 
> > [Adding Joerg since he was part of this original idea]
> > 
> > 
> > There are plenty of details to be filled in, but I think the basics
> > looks something like the code below.  The IRQ bypass manager just
> > defines a pair of structures, one for interrupt producers and one for
> > interrupt consumers.  I'm certain that we'll need more callbacks than
> > I've defined below, but figuring out what those should be for the best
> > abstraction is the hardest part of this idea.  The manager provides both
> > registration and de-registration interfaces for both types of objects
> > and keeps lists for each, protected by a lock.  The manager doesn't even
> > really need to know what the match token is, but I assume for our
> > purposes it will be an eventfd_ctx.
> > 
> > On the vfio side, the producer struct would be embedded in the
> > vfio_pci_irq_ctx struct.  KVM would probably embed the consumer struct
> > in _irqfd.  As I've coded below, the IRQ bypass manager calls the
> > consumer callbacks, so the producer struct would need fields or
> > callbacks to provide the consumer the info it needs.  AIUI the Posted
> > Interrupt model, VFIO only needs to provide data to the consumer.  For
> > IRQ Forwarding, I think the producer needs to be informed when bypass is
> > active to model the incoming interrupt as edge vs level.
> > 
> > I've prototyped the base IRQ bypass manager here as static, but I don't
> > see any reason it couldn't be a module that's loaded by dependency when
> > either vfio-pci or kvm-intel is loaded (or other producer/consumer
> > objects).
> > 
> > Is this a reasonable starting point to craft the additional fields and
> > callbacks and interaction of who calls who that we need to support
> > Posted Interrupts and IRQ Forwarding?  Is the AMD version of this still
> > alive?  Thanks,
> > 
> > Alex
> > 
> > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > index 413a7bf..22f6fcb 100644
> > --- a/arch/x86/kvm/Kconfig
> > +++ b/arch/x86/kvm/Kconfig
> > @@ -61,6 +61,7 @@ config KVM_INTEL
> >  	depends on KVM
> >  	# for perf_guest_get_msrs():
> >  	depends on CPU_SUP_INTEL
> > +	select IRQ_BYPASS_MANAGER
> >  	---help---
> >  	  Provides support for KVM on Intel processors equipped with the VT
> >  	  extensions.
> > diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
> > index 579d83b..02912f1 100644
> > --- a/drivers/vfio/pci/Kconfig
> > +++ b/drivers/vfio/pci/Kconfig
> > @@ -2,6 +2,7 @@ config VFIO_PCI
> >  	tristate "VFIO support for PCI devices"
> >  	depends on VFIO && PCI && EVENTFD
> >  	select VFIO_VIRQFD
> > +	select IRQ_BYPASS_MANAGER
> >  	help
> >  	  Support for the PCI VFIO bus driver.  This is required to make
> >  	  use of PCI drivers using the VFIO framework.
> > diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
> > index 1f577b4..4e053be 100644
> > --- a/drivers/vfio/pci/vfio_pci_intrs.c
> > +++ b/drivers/vfio/pci/vfio_pci_intrs.c
> > @@ -181,6 +181,7 @@ static int vfio_intx_set_signal(struct vfio_pci_device
> > *vdev, int fd)
> > 
> >  	if (vdev->ctx[0].trigger) {
> >  		free_irq(pdev->irq, vdev);
> > +		/* irq_bypass_unregister_producer(); */
> >  		kfree(vdev->ctx[0].name);
> >  		eventfd_ctx_put(vdev->ctx[0].trigger);
> >  		vdev->ctx[0].trigger = NULL;
> > @@ -214,6 +215,8 @@ static int vfio_intx_set_signal(struct vfio_pci_device
> > *vdev, int fd)
> >  		return ret;
> >  	}
> > 
> > +	/* irq_bypass_register_producer(); */
> > +
> >  	/*
> >  	 * INTx disable will stick across the new irq setup,
> >  	 * disable_irq won't.
> > @@ -319,6 +322,7 @@ static int vfio_msi_set_vector_signal(struct
> > vfio_pci_device *vdev,
> > 
> >  	if (vdev->ctx[vector].trigger) {
> >  		free_irq(irq, vdev->ctx[vector].trigger);
> > +		/* irq_bypass_unregister_producer(); */
> >  		kfree(vdev->ctx[vector].name);
> >  		eventfd_ctx_put(vdev->ctx[vector].trigger);
> >  		vdev->ctx[vector].trigger = NULL;
> > @@ -360,6 +364,8 @@ static int vfio_msi_set_vector_signal(struct
> > vfio_pci_device *vdev,
> >  		return ret;
> >  	}
> > 
> > +	/* irq_bypass_register_producer(); */
> > +
> >  	vdev->ctx[vector].trigger = trigger;
> > 
> >  	return 0;
> > diff --git a/include/linux/irqbypass.h b/include/linux/irqbypass.h
> > new file mode 100644
> > index 0000000..718508e
> > --- /dev/null
> > +++ b/include/linux/irqbypass.h
> > @@ -0,0 +1,23 @@
> > +#ifndef IRQBYPASS_H
> > +#define IRQBYPASS_H
> > +
> > +#include <linux/list.h>
> > +
> > +struct irq_bypass_producer {
> > +	struct list_head node;
> > +	void *token;
> > +	/* TBD */
> > +};
> > +
> > +struct irq_bypass_consumer {
> > +	struct list_head node;
> > +	void *token;
> > +	void (*add_producer)(struct irq_bypass_producer *);
> > +	void (*del_producer)(struct irq_bypass_producer *);
> 
> These two callbacks are meant to be common functions; for PI, I need to add
> something x86-specific, such as updating the associated IRTE. How
> should I do this?

These are function pointers, the consumer (kvm in this case) can
populate them with whatever implementation it needs.  The details of
updating the IRTE should be completely hidden from this interface.  This
interface only handles identifying matches between producer and consumer
and providing an API for the handshake.  Feel free to use more
appropriate callbacks and structure fields, these are only meant as a
rough sketch of the idea and possible interaction, but please keep
layering in mind to make a generic interface.

> > +};
> > +
> > +int irq_bypass_register_producer(struct irq_bypass_producer *);
> > +void irq_bypass_unregister_producer(struct irq_bypass_producer *);
> > +int irq_bypass_register_consumer(struct irq_bypass_consumer *);
> > +void irq_bypass_unregister_consumer(struct irq_bypass_consumer *);
> > +#endif /* IRQBYPASS_H */
> > diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
> > index 9a76e3b..4502cdc 100644
> > --- a/kernel/irq/Kconfig
> > +++ b/kernel/irq/Kconfig
> > @@ -100,4 +100,7 @@ config SPARSE_IRQ
> > 
> >  	  If you don't know what to do here, say N.
> > 
> > +config IRQ_BYPASS_MANAGER
> > +	bool
> > +
> >  endmenu
> > diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
> > index d121235..a30ed77 100644
> > --- a/kernel/irq/Makefile
> > +++ b/kernel/irq/Makefile
> > @@ -7,3 +7,4 @@ obj-$(CONFIG_PROC_FS) += proc.o
> >  obj-$(CONFIG_GENERIC_PENDING_IRQ) += migration.o
> >  obj-$(CONFIG_PM_SLEEP) += pm.o
> >  obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
> > +obj-$(CONFIG_IRQ_BYPASS_MANAGER) += bypass.o
> > diff --git a/kernel/irq/bypass.c b/kernel/irq/bypass.c
> > new file mode 100644
> > index 0000000..5d0f92b
> > --- /dev/null
> > +++ b/kernel/irq/bypass.c
> 
> Is it better to put this code here or in vfio folder?

What about it is specific to vfio?  Both vfio and kvm are clients to the
interface, but I don't think we want to add any barriers that restrict
it to that pair.  I think we originally thought of this as an IOMMU
service, so drivers/iommu might be another possible home for it.
Thanks,

Alex


Patch

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 413a7bf..22f6fcb 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -61,6 +61,7 @@  config KVM_INTEL
 	depends on KVM
 	# for perf_guest_get_msrs():
 	depends on CPU_SUP_INTEL
+	select IRQ_BYPASS_MANAGER
 	---help---
 	  Provides support for KVM on Intel processors equipped with the VT
 	  extensions.
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 579d83b..02912f1 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -2,6 +2,7 @@  config VFIO_PCI
 	tristate "VFIO support for PCI devices"
 	depends on VFIO && PCI && EVENTFD
 	select VFIO_VIRQFD
+	select IRQ_BYPASS_MANAGER
 	help
 	  Support for the PCI VFIO bus driver.  This is required to make
 	  use of PCI drivers using the VFIO framework.
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 1f577b4..4e053be 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -181,6 +181,7 @@  static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
 
 	if (vdev->ctx[0].trigger) {
 		free_irq(pdev->irq, vdev);
+		/* irq_bypass_unregister_producer(); */
 		kfree(vdev->ctx[0].name);
 		eventfd_ctx_put(vdev->ctx[0].trigger);
 		vdev->ctx[0].trigger = NULL;
@@ -214,6 +215,8 @@  static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
 		return ret;
 	}
 
+	/* irq_bypass_register_producer(); */
+
 	/*
 	 * INTx disable will stick across the new irq setup,
 	 * disable_irq won't.
@@ -319,6 +322,7 @@  static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
 
 	if (vdev->ctx[vector].trigger) {
 		free_irq(irq, vdev->ctx[vector].trigger);
+		/* irq_bypass_unregister_producer(); */
 		kfree(vdev->ctx[vector].name);
 		eventfd_ctx_put(vdev->ctx[vector].trigger);
 		vdev->ctx[vector].trigger = NULL;
@@ -360,6 +364,8 @@  static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
 		return ret;
 	}
 
+	/* irq_bypass_register_producer(); */
+
 	vdev->ctx[vector].trigger = trigger;
 
 	return 0;
diff --git a/include/linux/irqbypass.h b/include/linux/irqbypass.h
new file mode 100644
index 0000000..718508e
--- /dev/null
+++ b/include/linux/irqbypass.h
@@ -0,0 +1,23 @@ 
+#ifndef IRQBYPASS_H
+#define IRQBYPASS_H
+
+#include <linux/list.h>
+
+struct irq_bypass_producer {
+	struct list_head node;
+	void *token;
+	/* TBD */
+};
+
+struct irq_bypass_consumer {
+	struct list_head node;
+	void *token;
+	void (*add_producer)(struct irq_bypass_producer *);
+	void (*del_producer)(struct irq_bypass_producer *);
+};
+
+int irq_bypass_register_producer(struct irq_bypass_producer *);
+void irq_bypass_unregister_producer(struct irq_bypass_producer *);
+int irq_bypass_register_consumer(struct irq_bypass_consumer *);
+void irq_bypass_unregister_consumer(struct irq_bypass_consumer *);
+#endif /* IRQBYPASS_H */
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 9a76e3b..4502cdc 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -100,4 +100,7 @@  config SPARSE_IRQ
 
 	  If you don't know what to do here, say N.
 
+config IRQ_BYPASS_MANAGER
+	bool
+
 endmenu
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index d121235..a30ed77 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -7,3 +7,4 @@  obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_GENERIC_PENDING_IRQ) += migration.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
+obj-$(CONFIG_IRQ_BYPASS_MANAGER) += bypass.o
diff --git a/kernel/irq/bypass.c b/kernel/irq/bypass.c
new file mode 100644
index 0000000..5d0f92b
--- /dev/null
+++ b/kernel/irq/bypass.c
@@ -0,0 +1,116 @@ 
+/*
+ * IRQ offload/bypass manager
+ *
+ * Various virtualization hardware acceleration techniques allow bypassing
+ * or offloading interrupts received from devices around the host kernel.
+ * Posted Interrupts on Intel VT-d systems can allow interrupts to be
+ * received directly by a virtual machine.  ARM IRQ Forwarding can allow
+ * level triggered device interrupts to be de-asserted directly by the VM.
+ * This manager allows interrupt producers and consumers to find each other
+ * to enable this sort of bypass.
+ */
+
+#include <linux/irqbypass.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+
+static LIST_HEAD(producers);
+static LIST_HEAD(consumers);
+static DEFINE_MUTEX(lock);
+
+int irq_bypass_register_producer(struct irq_bypass_producer *producer)
+{
+	struct irq_bypass_producer *tmp;
+	struct irq_bypass_consumer *consumer;
+	int ret = 0;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(tmp, &producers, node) {
+		if (tmp->token == producer->token) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+	}
+
+	list_add(&producer->node, &producers);
+
+	list_for_each_entry(consumer, &consumers, node) {
+		if (consumer->token == producer->token) {
+			consumer->add_producer(producer);
+			break;
+		}
+	}
+unlock:
+	mutex_unlock(&lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(irq_bypass_register_producer);
+
+void irq_bypass_unregister_producer(struct irq_bypass_producer *producer)
+{
+	struct irq_bypass_consumer *consumer;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(consumer, &consumers, node) {
+		if (consumer->token == producer->token) {
+			consumer->del_producer(producer);
+			break;
+		}
+	}
+
+	list_del(&producer->node);
+
+	mutex_unlock(&lock);
+}
+EXPORT_SYMBOL_GPL(irq_bypass_unregister_producer);
+
+int irq_bypass_register_consumer(struct irq_bypass_consumer *consumer)
+{
+	struct irq_bypass_consumer *tmp;
+	struct irq_bypass_producer *producer;
+	int ret = 0;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(tmp, &consumers, node) {
+		if (tmp->token == consumer->token) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+	}
+
+	list_add(&consumer->node, &consumers);
+
+	list_for_each_entry(producer, &producers, node) {
+		if (producer->token == consumer->token) {
+			consumer->add_producer(producer);
+			break;
+		}
+	}
+unlock:
+	mutex_unlock(&lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(irq_bypass_register_consumer);
+
+void irq_bypass_unregister_consumer(struct irq_bypass_consumer *consumer)
+{
+	struct irq_bypass_producer *producer;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(producer, &producers, node) {
+		if (producer->token == consumer->token) {
+			consumer->del_producer(producer);
+			break;
+		}
+	}
+
+	list_del(&consumer->node);
+
+	mutex_unlock(&lock);
+}
+EXPORT_SYMBOL_GPL(irq_bypass_unregister_consumer);
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 9ff4193..f3da161 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -429,6 +429,8 @@  kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
 	 */
 	fdput(f);
 
+	/* irq_bypass_register_consumer(); */
+
 	return 0;
 
 fail:
@@ -528,6 +530,8 @@  kvm_irqfd_deassign(struct kvm *kvm, struct kvm_irqfd *args)
 	struct _irqfd *irqfd, *tmp;
 	struct eventfd_ctx *eventfd;
 
+	/* irq_bypass_unregister_consumer() */
+
 	eventfd = eventfd_ctx_fdget(args->fd);
 	if (IS_ERR(eventfd))
 		return PTR_ERR(eventfd);