
genirq/msi: Make sure PCI MSIs are activated early

Message ID alpine.DEB.2.11.1607261307180.19896@nanos (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Thomas Gleixner July 26, 2016, 11:42 a.m. UTC
On Mon, 25 Jul 2016, Bjorn Helgaas wrote:
> On Mon, Jul 25, 2016 at 09:45:13AM +0200, Thomas Gleixner wrote:
> I thought the original issue [1] was that PCI_MSI_FLAGS_ENABLE was being
> written before PCI_MSI_ADDRESS_LO.  That doesn't sound like a good
> idea to me.

Well. That's only a problem if the PCI device does not support masking. But
yes, we missed that case back then.
 
> That does seem like a problem.  Maybe it would be better to delay
> setting PCI_MSI_FLAGS_ENABLE until after the MSI address & data bits
> have been set?

I thought about that, but that gets ugly pretty fast. Here is an alternative
solution.

I think that's the proper place to do it _AFTER_ the hierarchical allocation
took place. On x86 Marc's ACTIVATE_EARLY flag would not work because the
message is not yet ready to be assembled.

Thanks,

	tglx
---
--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Thomas Gleixner July 26, 2016, 1:05 p.m. UTC | #1
On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> On Mon, 25 Jul 2016, Bjorn Helgaas wrote:
> > On Mon, Jul 25, 2016 at 09:45:13AM +0200, Thomas Gleixner wrote:
> > I thought the original issue [1] was that PCI_MSI_FLAGS_ENABLE was being
> > written before PCI_MSI_ADDRESS_LO.  That doesn't sound like a good
> > idea to me.
> 
> Well. That's only a problem if the PCI device does not support masking. But
> yes, we missed that case back then.
>  
> > That does seem like a problem.  Maybe it would be better to delay
> > setting PCI_MSI_FLAGS_ENABLE until after the MSI address & data bits
> > have been set?
> 
> I thought about that, but that gets ugly pretty fast. Here is an alternative
> solution.
> 
> I think that's the proper place to do it _AFTER_ the hierarchical allocation
> took place. On x86 Marc's ACTIVATE_EARLY flag would not work because the
> message is not yet ready to be assembled.

Actually it works, because the MSI domain is the last one which is running the
allocation function. So everything else is initialized already.

I'll take Marc's patch with some additional commentary as it turned out to be a
workaround for the reported VMware issues with PCI/MSI-X pass through.

Thanks,

	tglx

Thomas Gleixner July 26, 2016, 2:05 p.m. UTC | #2
On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > On Mon, 25 Jul 2016, Bjorn Helgaas wrote:
> > > On Mon, Jul 25, 2016 at 09:45:13AM +0200, Thomas Gleixner wrote:
> > > I thought the original issue [1] was that PCI_MSI_FLAGS_ENABLE was being
> > > written before PCI_MSI_ADDRESS_LO.  That doesn't sound like a good
> > > idea to me.
> > 
> > Well. That's only a problem if the PCI device does not support masking. But
> > yes, we missed that case back then.
> >  
> > > That does seem like a problem.  Maybe it would be better to delay
> > > setting PCI_MSI_FLAGS_ENABLE until after the MSI address & data bits
> > > have been set?
> > 
> > I thought about that, but that gets ugly pretty fast. Here is an alternative
> > solution.
> > 
> > I think that's the proper place to do it _AFTER_ the hierarchical allocation
> > took place. On x86 Marc's ACTIVATE_EARLY flag would not work because the
> > message is not yet ready to be assembled.
> 
> Actually it works, because the MSI domain is the last one which is running the
> allocation function. So everything else is initialized already.
> 
> I'll take Marc's patch with some additional commentary as it turned out to be a
> workaround for the reported VMware issues with PCI/MSI-X pass through.

Now I dug a bit deeper into all that PCI/MSI maze.

When an interrupt is freed, we write the MSI message to 0, but the
PCI_MSI_FLAGS_ENABLE flag is still set. That makes me wonder ...

Thanks,

	tglx
Thomas Gleixner July 28, 2016, 3:03 p.m. UTC | #3
On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > > On Mon, 25 Jul 2016, Bjorn Helgaas wrote:
> > > > On Mon, Jul 25, 2016 at 09:45:13AM +0200, Thomas Gleixner wrote:
> > > > I thought the original issue [1] was that PCI_MSI_FLAGS_ENABLE was being
> > > > written before PCI_MSI_ADDRESS_LO.  That doesn't sound like a good
> > > > idea to me.
> > > 
> > > Well. That's only a problem if the PCI device does not support masking. But
> > > yes, we missed that case back then.
> > >  
> > > > That does seem like a problem.  Maybe it would be better to delay
> > > > setting PCI_MSI_FLAGS_ENABLE until after the MSI address & data bits
> > > > have been set?
> > > 
> > > I thought about that, but that gets ugly pretty fast. Here is an alternative
> > > solution.
> > > 
> > > I think that's the proper place to do it _AFTER_ the hierarchical allocation
> > > took place. On x86 Marc's ACTIVATE_EARLY flag would not work because the
> > > message is not yet ready to be assembled.
> > 
> > Actually it works, because the MSI domain is the last one which is running the
> > allocation function. So everything else is initialized already.
> > 
> > I'll take Marc's patch with some additional commentary as it turned out to be a
> > workaround for the reported VMware issues with PCI/MSI-X pass through.
> 
> Now I dug a bit deeper into all that PCI/MSI maze.
> 
> When an interrupt is freed, we write the MSI message to 0, but the
> PCI_MSI_FLAGS_ENABLE flag is still set. That makes me wonder ...

Bjorn, any opinion on that?

Thanks,

	tglx

Bjorn Helgaas July 28, 2016, 4:49 p.m. UTC | #4
On Thu, Jul 28, 2016 at 05:03:30PM +0200, Thomas Gleixner wrote:
> On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > > On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > > > On Mon, 25 Jul 2016, Bjorn Helgaas wrote:
> > > > > On Mon, Jul 25, 2016 at 09:45:13AM +0200, Thomas Gleixner wrote:
> > > > > I thought the original issue [1] was that PCI_MSI_FLAGS_ENABLE was being
> > > > > written before PCI_MSI_ADDRESS_LO.  That doesn't sound like a good
> > > > > idea to me.
> > > > 
> > > > Well. That's only a problem if the PCI device does not support masking. But
> > > > yes, we missed that case back then.
> > > >  
> > > > > That does seem like a problem.  Maybe it would be better to delay
> > > > > setting PCI_MSI_FLAGS_ENABLE until after the MSI address & data bits
> > > > > have been set?
> > > > 
> > > > I thought about that, but that gets ugly pretty fast. Here is an alternative
> > > > solution.
> > > > 
> > > > I think that's the proper place to do it _AFTER_ the hierarchical allocation
> > > > took place. On x86 Marc's ACTIVATE_EARLY flag would not work because the
> > > > message is not yet ready to be assembled.
> > > 
> > > Actually it works, because the MSI domain is the last one which is running the
> > > allocation function. So everything else is initialized already.
> > > 
> > > I'll take Marc's patch with some additional commentary as it turned out to be a
> > > workaround for the reported VMware issues with PCI/MSI-X pass through.
> > 
> > Now I dug a bit deeper into all that PCI/MSI maze.
> > 
> > When an interrupt is freed, we write the MSI message to 0, but the
> > PCI_MSI_FLAGS_ENABLE flag is still set. That makes me wonder ...
> 
> Bjorn, any opinion on that?

I assume you mean we write 0 to PCI_MSI_ADDRESS_LO, PCI_MSI_DATA_32,
and similar registers in the MSI Capability structure.

It doesn't sound safe to me to do that while PCI_MSI_FLAGS_ENABLE is
still set.  I don't see anything in the spec that constrains when a
device latches the values from those registers.  It seems legal to do
it on PCI_MSI_FLAGS_ENABLE transitions, but it also seems legal to do
it whenever the device needs to signal an interrupt.

If a device does the latter, it seems like clearing PCI_MSI_ADDRESS_LO
while PCI_MSI_FLAGS_ENABLE is set could lead to stray DMA writes if
the device for some reason signals an interrupt later.

Bjorn

Patch

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index a080f4496fe2..142341f8331b 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -645,6 +645,15 @@  static int msi_capability_init(struct pci_dev *dev, int nvec)
 		return ret;
 	}
 
+	/*
+	 * The mask can be ignored and PCI 2.3 does not specify mask bits
+	 * for each MSI interrupt. So with hierarchical irqdomains we must
+	 * make sure that, if masking is not available, the MSI message is
+	 * written prior to setting the MSI enable bit in the device.
+	 */
+	if (pci_msi_ignore_mask || !entry->msi_attrib.maskbit)
+		irq_domain_activate_irq(irq_get_irq_data(entry->irq));
+
 	/* Set MSI enabled bits	 */
 	pci_intx_for_msi(dev, 0);
 	pci_msi_set_enable(dev, 1);