[v1,1/1] PCI: Enable INTx quirk for ATI PCIe-USB adapter

Message ID 20220314101448.90074-1-andriy.shevchenko@linux.intel.com (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas
Series [v1,1/1] PCI: Enable INTx quirk for ATI PCIe-USB adapter

Commit Message

Andy Shevchenko March 14, 2022, 10:14 a.m. UTC
The ATI PCIe-USB adapter advertises MSI, but MSI doesn't work if INTx is
disabled. Enable the respective quirk, as is done for other ATI devices on
this chipset.

Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")
BugLink: https://lore.kernel.org/all/20200702143045.23429-1-andriy.shevchenko@linux.intel.com/
Reported-by: micklorain@protonmail.com
Tested-by: micklorain@protonmail.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/pci/quirks.c | 7 +++++++
 1 file changed, 7 insertions(+)
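
For context, quirk_msi_intx_disable_bug() already exists in drivers/pci/quirks.c;
this patch only adds the SB600 USB device IDs to the list it is applied to.
Roughly (a sketch from memory, not part of this diff), the quirk just sets a
device flag:

	static void quirk_msi_intx_disable_bug(struct pci_dev *dev)
	{
		/* MSI core will then leave INTx enabled when enabling MSI */
		dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG;
	}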

Comments

Bjorn Helgaas March 14, 2022, 7:42 p.m. UTC | #1
On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> ATI PCIe-USB adapter advertises MSI, but it doesn't work if INTx is disabled.
> Enable the respective quirk as it's done for other ATI devices on this chipset,
> 
> Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")

This is interesting because there must be a TON of these AMD/ATI SB600
USB devices in the field, and 306c54d0edb6 was merged in July 2020 and
appeared in v5.9.

So why would we only get a report now, in February 2022?  Is there
some change more recent than 306c54d0edb6 that exposed this problem?

> BugLink: https://lore.kernel.org/all/20200702143045.23429-1-andriy.shevchenko@linux.intel.com/
> Reported-by: micklorain@protonmail.com
> Tested-by: micklorain@protonmail.com
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> ---
>  drivers/pci/quirks.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 65f7f6b0576c..cc6a87a32b62 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -3041,6 +3041,13 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
>  			PCI_DEVICE_ID_TIGON3_5715S,
>  			quirk_msi_intx_disable_bug);
>  
> +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_msi_intx_disable_bug);
> +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_msi_intx_disable_bug);
> +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_msi_intx_disable_bug);
> +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_msi_intx_disable_bug);
> +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_msi_intx_disable_bug);
> +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_msi_intx_disable_bug);
> +
>  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4390,
>  			quirk_msi_intx_disable_ati_bug);
>  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4391,
> -- 
> 2.35.1
>
Andy Shevchenko March 15, 2022, 10:09 a.m. UTC | #2
On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > ATI PCIe-USB adapter advertises MSI, but it doesn't work if INTx is disabled.
> > Enable the respective quirk as it's done for other ATI devices on this chipset,
> > 
> > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")
> 
> This is interesting because there must be a TON of these AMD/ATI SB600
> USB devices in the field, and 306c54d0edb6 was merged in July 2020 and
> appeared in v5.9.
> 
> So why would we only get a report now, in February 2022?  Is there
> some change more recent than 306c54d0edb6 that exposed this problem?

I think it's a rhetorical question. To me it's as simple as the latency
between the change getting into the kernel and users actually running a
kernel that contains it.

However, I'm a bit worried that in the case of ATI there are not so many
platforms that are kept up to date.
Bjorn Helgaas March 15, 2022, 8:22 p.m. UTC | #3
On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > ATI PCIe-USB adapter advertises MSI, but it doesn't work if INTx is disabled.
> > > Enable the respective quirk as it's done for other ATI devices on this chipset,
> > > 
> > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")
> > 
> > This is interesting because there must be a TON of these AMD/ATI SB600
> > USB devices in the field, and 306c54d0edb6 was merged in July 2020 and
> > appeared in v5.9.
> > 
> > So why would we only get a report now, in February 2022?  Is there
> > some change more recent than 306c54d0edb6 that exposed this problem?
> 
> I think it's a rhetorical question. To me it's as simple as the latency
> between getting the change into the kernel.
> 
> However, I'm a bit worried that in case of ATI there are not so many
> platforms that are kept up-to-dated.

This would be a rhetorical question if I were not interested in the
answer but asking only to make a point.  That's not the case at all.

If these SB600 USB devices stopped working in v5.9 (October 2020),
that would affect lots of keyboards and mice, and I would be surprised
if we didn't hear about it until February 2022.

I looked through https://github.com/linuxhw/Dmesg, and there are at
least 40 dmesg logs from v5.9 or later with SB600 USB, so I'm
still a little skeptical that 306c54d0edb6 by itself is enough to
explain this.

Anyway, I applied this to pci/msi for v5.18 with the following commit
log:

    PCI: Disable broken MSI on ATI SB600 USB adapters

    Some ATI SB600 USB adapters advertise MSI, but MSI doesn't work if INTx is
    disabled.  Disable MSI on these adapters.
Andy Shevchenko March 16, 2022, 10:27 a.m. UTC | #4
On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work if INTx is disabled.
> > > > Enable the respective quirk as it's done for other ATI devices on this chipset,
> > > > 
> > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")
> > > 
> > > This is interesting because there must be a TON of these AMD/ATI SB600
> > > USB devices in the field, and 306c54d0edb6 was merged in July 2020 and
> > > appeared in v5.9.
> > > 
> > > So why would we only get a report now, in February 2022?  Is there
> > > some change more recent than 306c54d0edb6 that exposed this problem?
> > 
> > I think it's a rhetorical question. To me it's as simple as the latency
> > between getting the change into the kernel.
> > 
> > However, I'm a bit worried that in case of ATI there are not so many
> > platforms that are kept up-to-dated.
> 
> This would be a rhetorical question if I were not interested in the
> answer but asking only to make a point.  That's not the case at all.
> 
> If these SB600 USB devices stopped working in v5.9 (October 2020),
> that would affect lots of keyboards and mice, and I would be surprised
> if we didn't hear about it until February, 2022.
> 
> I looked through https://github.com/linuxhw/Dmesg, and there are at
> least 40 dmesg logs from v5.9 or later with SB600 USB, so I'm
> still a little skeptical that 306c54d0edb6 by itself is enough to
> explain this.
> 
> Anyway, I applied this to pci/msi for v5.18 with the following commit
> log:
> 
>     PCI: Disable broken MSI on ATI SB600 USB adapters
> 
>     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't work if INTx is
>     disabled.  Disable MSI on these adapters.

But IIUC MSI is _not_ disabled. That's why I issued this version of the
patch with a different commit message. Did I misunderstand something?
Bjorn Helgaas March 16, 2022, 11:52 a.m. UTC | #5
On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work if INTx is disabled.
> > > > > Enable the respective quirk as it's done for other ATI devices on this chipset,
> > > > > 
> > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")
> > > > 
> > > > This is interesting because there must be a TON of these AMD/ATI SB600
> > > > USB devices in the field, and 306c54d0edb6 was merged in July 2020 and
> > > > appeared in v5.9.
> > > > 
> > > > So why would we only get a report now, in February 2022?  Is there
> > > > some change more recent than 306c54d0edb6 that exposed this problem?
> > > 
> > > I think it's a rhetorical question. To me it's as simple as the latency
> > > between getting the change into the kernel.
> > > 
> > > However, I'm a bit worried that in case of ATI there are not so many
> > > platforms that are kept up-to-dated.
> > 
> > This would be a rhetorical question if I were not interested in the
> > answer but asking only to make a point.  That's not the case at all.
> > 
> > If these SB600 USB devices stopped working in v5.9 (October 2020),
> > that would affect lots of keyboards and mice, and I would be surprised
> > if we didn't hear about it until February, 2022.
> > 
> > I looked through https://github.com/linuxhw/Dmesg, and there are at
> > least 40 dmesg logs from v5.9 or later with SB600 USB, so I'm
> > still a little skeptical that 306c54d0edb6 by itself is enough to
> > explain this.
> > 
> > Anyway, I applied this to pci/msi for v5.18 with the following commit
> > log:
> > 
> >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > 
> >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't work if INTx is
> >     disabled.  Disable MSI on these adapters.
> 
> But IIUC MSI is _not_ disabled. That's why I have issued this version of the
> patch with different commit message. Did I misunderstand something?

Oh, right, of course.  Sorry, I was asleep at the wheel.

I guess it's just that for these devices, we don't disable INTx when
enabling MSI.  I can't remember why we disable INTx when enabling MSI,
but it raises the question of whether it's better to leave INTx
enabled or to just disable use of MSI completely.
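
If memory serves, the only thing the quirk changes is pci_intx_for_msi() in
drivers/pci/msi.c, which the MSI/MSI-X enable paths call with enable == 0;
a rough sketch:

	static void pci_intx_for_msi(struct pci_dev *dev, int enable)
	{
		/* Quirked devices keep INTx enabled even while MSI is on */
		if (!(dev->dev_flags & PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG))
			pci_intx(dev, enable);
	}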

Bjorn
Andy Shevchenko March 16, 2022, 4:12 p.m. UTC | #6
On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work if INTx is disabled.
> > > > > > Enable the respective quirk as it's done for other ATI devices on this chipset,
> > > > > > 
> > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on PCI devices")
> > > > > 
> > > > > This is interesting because there must be a TON of these AMD/ATI SB600
> > > > > USB devices in the field, and 306c54d0edb6 was merged in July 2020 and
> > > > > appeared in v5.9.
> > > > > 
> > > > > So why would we only get a report now, in February 2022?  Is there
> > > > > some change more recent than 306c54d0edb6 that exposed this problem?
> > > > 
> > > > I think it's a rhetorical question. To me it's as simple as the latency
> > > > between getting the change into the kernel.
> > > > 
> > > > However, I'm a bit worried that in case of ATI there are not so many
> > > > platforms that are kept up-to-dated.
> > > 
> > > This would be a rhetorical question if I were not interested in the
> > > answer but asking only to make a point.  That's not the case at all.
> > > 
> > > If these SB600 USB devices stopped working in v5.9 (October 2020),
> > > that would affect lots of keyboards and mice, and I would be surprised
> > > if we didn't hear about it until February, 2022.
> > > 
> > > I looked through https://github.com/linuxhw/Dmesg, and there are at
> > > least 40 dmesg logs from v5.9 or later with SB600 USB, so I'm
> > > still a little skeptical that 306c54d0edb6 by itself is enough to
> > > explain this.
> > > 
> > > Anyway, I applied this to pci/msi for v5.18 with the following commit
> > > log:
> > > 
> > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > 
> > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't work if INTx is
> > >     disabled.  Disable MSI on these adapters.
> > 
> > But IIUC MSI is _not_ disabled. That's why I have issued this version of the
> > patch with different commit message. Did I misunderstand something?
> 
> Oh, right, of course.  Sorry, I was asleep at the wheel.

Are you going to fix that?

> I guess it's just that for these devices, we don't disable INTx when
> enabling MSI.  I can't remember why we disable INTx when enabling MSI,
> but it raises the question of whether it's better to leave INTx
> enabled or to just disable use of MSI completely.

It's required by the specification to disable INTx, if I read 6.1.4.3
Enabling Operation correctly.
Bjorn Helgaas March 16, 2022, 9:15 p.m. UTC | #7
[+cc Alex]

On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > 
> > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > PCI devices")

> > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > commit log:
> > > > 
> > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > 
> > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > 
> > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > version of the patch with different commit message. Did I
> > > misunderstand something?
> > 
> > Oh, right, of course.  Sorry, I was asleep at the wheel.
> 
> Are you going to fix that?

Yes, of course, I'll do something with the commit message after we
figure out how to handle PCI_COMMAND_INTX_DISABLE.

> > I guess it's just that for these devices, we don't disable INTx
> > when enabling MSI.  I can't remember why we disable INTx when
> > enabling MSI, but it raises the question of whether it's better to
> > leave INTx enabled or to just disable use of MSI completely.
> 
> It's required by specification to disable INTx if I read 6.1.4.3
> Enabling Operation correctly.

Thanks for the reference; I was looking for something like that.  But
I don't think this section requires us to set
PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
PCIe r6.0, sec 6.1.4.3 says:

  To maintain backward compatibility, the MSI Enable bit in the
  Message Control Register for MSI and the MSI-X Enable bit in the
  Message Control Register for MSI-X are each Clear by default (MSI
  and MSI-X are both disabled). System configuration software Sets one
  of these bits to enable either MSI or MSI-X, but never both
  simultaneously. Behavior is undefined if both MSI and MSI-X are
  enabled simultaneously. Software disabling either mechanism during
  active operation may result in the Function dropping pending
  interrupt conditions or failing to recognize new interrupt
  conditions. While enabled for MSI or MSI-X operation, a Function is
  prohibited from using INTx interrupts (if implemented) to request
  service (MSI, MSI-X, and INTx are mutually exclusive).

The only *software* constraints I see are (1) software must never
enable both MSI and MSI-X simultaneously, and (2) if software disables
MSI or MSI-X during active operation, the Function may fail to
generate an interrupt when it should.

I read the last sentence as a constraint on the *hardware*: if either
MSI or MSI-X is enabled, the Function is not allowed to use INTx,
regardless of the state of PCI_COMMAND_INTX_DISABLE.

I searched the spec for "Interrupt Disable", looking for situations
where software might be *required* to set it, but I didn't see
anything.

I suspect "Interrupt Disable" was intended to help the OS stop all
activity from a device during hot-plug or reconfiguration, as hinted
at in sec 6.4, "Device Synchronization":

  The ability of the driver and/or system software to block new
  Requests from the device is supported by the Bus Master Enable,
  SERR# Enable, and Interrupt Disable bits in the Command register
  (Section 7.5.1.1.3) of each device Function, and other such control
  bits.

So I'm trying to figure out why when enabling MSI we need to set
PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
for these quirked devices.
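
For completeness, on the Linux side "Interrupt Disable" is just
PCI_COMMAND_INTX_DISABLE in the Command register, toggled by pci_intx() in
drivers/pci/pci.c. A simplified sketch (ignoring the managed/runtime-PM
handling in the real function):

	void pci_intx(struct pci_dev *pdev, int enable)
	{
		u16 pci_command, new;

		pci_read_config_word(pdev, PCI_COMMAND, &pci_command);

		if (enable)
			new = pci_command & ~PCI_COMMAND_INTX_DISABLE;
		else
			new = pci_command | PCI_COMMAND_INTX_DISABLE;

		/* Only touch config space if the bit actually changes */
		if (new != pci_command)
			pci_write_config_word(pdev, PCI_COMMAND, new);
	}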

Bjorn
Alex Williamson March 16, 2022, 10:13 p.m. UTC | #8
On Wed, 16 Mar 2022 16:15:48 -0500
Bjorn Helgaas <helgaas@kernel.org> wrote:

> [+cc Alex]
> 
> On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:  
> > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:  
> > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:  
> > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:  
> > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:  
> > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:  
> > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > 
> > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > PCI devices")  
> 
> > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > commit log:
> > > > > 
> > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > 
> > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > >     work if INTx is disabled.  Disable MSI on these adapters.  
> > > > 
> > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > version of the patch with different commit message. Did I
> > > > misunderstand something?  
> > > 
> > > Oh, right, of course.  Sorry, I was asleep at the wheel.  
> > 
> > Are you going to fix that?  
> 
> Yes, of course, I'll do something with the commit message after we
> figure out how to handle PCI_COMMAND_INTX_DISABLE.
> 
> > > I guess it's just that for these devices, we don't disable INTx
> > > when enabling MSI.  I can't remember why we disable INTx when
> > > enabling MSI, but it raises the question of whether it's better to
> > > leave INTx enabled or to just disable use of MSI completely.  
> > 
> > It's required by specification to disable INTx if I read 6.1.4.3
> > Enabling Operation correctly.  
> 
> Thanks for the reference; I was looking for something like that.  But
> I don't think this section requires us to set
> PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> PCIe r6.0, sec 6.1.4.3 says:
> 
>   To maintain backward compatibility, the MSI Enable bit in the
>   Message Control Register for MSI and the MSI-X Enable bit in the
>   Message Control Register for MSI-X are each Clear by default (MSI
>   and MSI-X are both disabled). System configuration software Sets one
>   of these bits to enable either MSI or MSI-X, but never both
>   simultaneously. Behavior is undefined if both MSI and MSI-X are
>   enabled simultaneously. Software disabling either mechanism during
>   active operation may result in the Function dropping pending
>   interrupt conditions or failing to recognize new interrupt
>   conditions. While enabled for MSI or MSI-X operation, a Function is
>   prohibited from using INTx interrupts (if implemented) to request
>   service (MSI, MSI-X, and INTx are mutually exclusive).
> 
> The only *software* constraints I see are (1) software must never
> enable both MSI and MSI-X simultaneously, and (2) if software disables
> MSI or MSI-X during active operation, the Function may fail to
> generate an interrupt when it should.
> 
> I read the last sentence as a constraint on the *hardware*: if either
> MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> regardless of the state of PCI_COMMAND_INTX_DISABLE.
> 
> I searched the spec for "Interrupt Disable", looking for situations
> where software might be *required* to set it, but I didn't see
> anything.
> 
> I suspect "Interrupt Disable" was intended to help the OS stop all
> activity from a device during hot-plug or reconfiguration, as hinted
> at in sec 6.4, "Device Synchronization":
> 
>   The ability of the driver and/or system software to block new
>   Requests from the device is supported by the Bus Master Enable,
>   SERR# Enable, and Interrupt Disable bits in the Command register
>   (Section 7.5.1.1.3) of each device Function, and other such control
>   bits.
> 
> So I'm trying to figure out why when enabling MSI we need to set
> PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> for these quirked devices.

I don't have an authoritative answer, but in the conventional PCI 2.3
spec the bit definitions for DisINTx and MSI Enable cross-reference each
other, from which one might infer a symmetry: we disable one to use the
other.  In PCIe, INTx becomes a virtual wire interrupt, and there the
DisINTx bit definition states that setting this bit requires the device
to issue the de-assert message for that virtual wire, so there might be
some FUD around enabling MSI/X while the device has already asserted
INTx.

At best, though, I don't know of anything to support the idea that
setting DisINTx should have any effect on MSI/X; that much seems like it
is certainly a hardware bug.  Thanks,

Alex
Andy Shevchenko March 17, 2022, 8:59 a.m. UTC | #9
On Wed, Mar 16, 2022 at 04:15:48PM -0500, Bjorn Helgaas wrote:
> On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > 
> > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > PCI devices")
> 
> > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > commit log:
> > > > > 
> > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > 
> > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > > 
> > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > version of the patch with different commit message. Did I
> > > > misunderstand something?
> > > 
> > > Oh, right, of course.  Sorry, I was asleep at the wheel.
> > 
> > Are you going to fix that?
> 
> Yes, of course, I'll do something with the commit message after we
> figure out how to handle PCI_COMMAND_INTX_DISABLE.
> 
> > > I guess it's just that for these devices, we don't disable INTx
> > > when enabling MSI.  I can't remember why we disable INTx when
> > > enabling MSI, but it raises the question of whether it's better to
> > > leave INTx enabled or to just disable use of MSI completely.
> > 
> > It's required by specification to disable INTx if I read 6.1.4.3
> > Enabling Operation correctly.
> 
> Thanks for the reference; I was looking for something like that.  But
> I don't think this section requires us to set
> PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> PCIe r6.0, sec 6.1.4.3 says:
> 
>   To maintain backward compatibility, the MSI Enable bit in the
>   Message Control Register for MSI and the MSI-X Enable bit in the
>   Message Control Register for MSI-X are each Clear by default (MSI
>   and MSI-X are both disabled). System configuration software Sets one
>   of these bits to enable either MSI or MSI-X, but never both
>   simultaneously. Behavior is undefined if both MSI and MSI-X are
>   enabled simultaneously. Software disabling either mechanism during
>   active operation may result in the Function dropping pending
>   interrupt conditions or failing to recognize new interrupt
>   conditions. While enabled for MSI or MSI-X operation, a Function is
>   prohibited from using INTx interrupts (if implemented) to request
>   service (MSI, MSI-X, and INTx are mutually exclusive).
> 
> The only *software* constraints I see are (1) software must never
> enable both MSI and MSI-X simultaneously, and (2) if software disables
> MSI or MSI-X during active operation, the Function may fail to
> generate an interrupt when it should.
> 
> I read the last sentence as a constraint on the *hardware*: if either
> MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> regardless of the state of PCI_COMMAND_INTX_DISABLE.
> 
> I searched the spec for "Interrupt Disable", looking for situations
> where software might be *required* to set it, but I didn't see
> anything.
> 
> I suspect "Interrupt Disable" was intended to help the OS stop all
> activity from a device during hot-plug or reconfiguration, as hinted
> at in sec 6.4, "Device Synchronization":
> 
>   The ability of the driver and/or system software to block new
>   Requests from the device is supported by the Bus Master Enable,
>   SERR# Enable, and Interrupt Disable bits in the Command register
>   (Section 7.5.1.1.3) of each device Function, and other such control
>   bits.
> 
> So I'm trying to figure out why when enabling MSI we need to set
> PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> for these quirked devices.

I guess the wording in that last paragraph is off. It's not merely safe, it's
_required_, since the HW doesn't follow the PCI specification, which clearly
says: "MSI, MSI-X, and INTx are mutually exclusive".

This is also supported by the other devices on the same chipset already doing
that and by the reporter of the issue.
Bjorn Helgaas March 17, 2022, 8:41 p.m. UTC | #10
On Thu, Mar 17, 2022 at 10:59:28AM +0200, Andy Shevchenko wrote:
> On Wed, Mar 16, 2022 at 04:15:48PM -0500, Bjorn Helgaas wrote:
> > On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > > 
> > > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > > PCI devices")
> > 
> > > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > > commit log:
> > > > > > 
> > > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > > 
> > > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > > > 
> > > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > > version of the patch with different commit message. Did I
> > > > > misunderstand something?
> > > > 
> > > > Oh, right, of course.  Sorry, I was asleep at the wheel.
> > > 
> > > Are you going to fix that?
> > 
> > Yes, of course, I'll do something with the commit message after we
> > figure out how to handle PCI_COMMAND_INTX_DISABLE.
> > 
> > > > I guess it's just that for these devices, we don't disable INTx
> > > > when enabling MSI.  I can't remember why we disable INTx when
> > > > enabling MSI, but it raises the question of whether it's better to
> > > > leave INTx enabled or to just disable use of MSI completely.
> > > 
> > > It's required by specification to disable INTx if I read 6.1.4.3
> > > Enabling Operation correctly.
> > 
> > Thanks for the reference; I was looking for something like that.  But
> > I don't think this section requires us to set
> > PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> > PCIe r6.0, sec 6.1.4.3 says:
> > 
> >   To maintain backward compatibility, the MSI Enable bit in the
> >   Message Control Register for MSI and the MSI-X Enable bit in the
> >   Message Control Register for MSI-X are each Clear by default (MSI
> >   and MSI-X are both disabled). System configuration software Sets one
> >   of these bits to enable either MSI or MSI-X, but never both
> >   simultaneously. Behavior is undefined if both MSI and MSI-X are
> >   enabled simultaneously. Software disabling either mechanism during
> >   active operation may result in the Function dropping pending
> >   interrupt conditions or failing to recognize new interrupt
> >   conditions. While enabled for MSI or MSI-X operation, a Function is
> >   prohibited from using INTx interrupts (if implemented) to request
> >   service (MSI, MSI-X, and INTx are mutually exclusive).
> > 
> > The only *software* constraints I see are (1) software must never
> > enable both MSI and MSI-X simultaneously, and (2) if software disables
> > MSI or MSI-X during active operation, the Function may fail to
> > generate an interrupt when it should.
> > 
> > I read the last sentence as a constraint on the *hardware*: if either
> > MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> > regardless of the state of PCI_COMMAND_INTX_DISABLE.
> > 
> > I searched the spec for "Interrupt Disable", looking for situations
> > where software might be *required* to set it, but I didn't see
> > anything.
> > 
> > I suspect "Interrupt Disable" was intended to help the OS stop all
> > activity from a device during hot-plug or reconfiguration, as hinted
> > at in sec 6.4, "Device Synchronization":
> > 
> >   The ability of the driver and/or system software to block new
> >   Requests from the device is supported by the Bus Master Enable,
> >   SERR# Enable, and Interrupt Disable bits in the Command register
> >   (Section 7.5.1.1.3) of each device Function, and other such control
> >   bits.
> > 
> > So I'm trying to figure out why when enabling MSI we need to set
> > PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> > for these quirked devices.
> 
> I guess it's wrong wording in the last paragraph. It's not safe, but it's
> _required_ since HW doesn't follow PCI specification that clearly says:
> "MSI, MSI-X, and INTx are mutually exclusive".

I agree there's a defect in these SB600 devices.  My guess is that
PCI_COMMAND_INTX_DISABLE actually disables both INTx and MSI, when
it's only supposed to disable INTx.

I'm pretty sure the spec doesn't actually require software to set
Interrupt Disable when enabling MSI, since MSI was added in PCI r2.2,
which included this text in sec 6.8.2:

  System configuration software sets [the MSI Enable] bit to enable
  MSI. ...  Once enabled, a function is prohibited from using its
  INTx# pin (if implemented) to request service (MSI and INTx# are
  mutually exclusive).

and Interrupt Disable was added later, in PCI r2.3, with no mention of
a connection with MSI.  All the specs from PCI r2.2 to PCIe r6.0
include the text above about not using INTx# if MSI or MSI-X is
enabled, but that's not the same as requiring software to set
Interrupt Disable.  Linux has set Interrupt Disable when enabling MSI
ever since MSI support was added [1], so I would hesitate to change
that even though I don't think it's required.

What I don't like about PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG is that it
changes the generic code path in a sort of random way, i.e., this
device becomes yet another special case in how we handle Interrupt
Disable.

What would you think about just setting pdev->no_msi instead, so we
don't try to use MSI at all on these devices?  I think that's what we
did before 306c54d0edb6.

[1] https://lore.kernel.org/all/200310032215.h93MFnjT005788@snoqualmie.dp.intel.com/ (search for PCI_COMMAND_INTX_DISABLE)
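
For reference, pdev->no_msi is honored very early, in pci_msi_supported(), so
a device with it set simply falls back to INTx with no other special-casing
in the generic code. Roughly (a sketch from memory):

	static int pci_msi_supported(struct pci_dev *dev, int nvec)
	{
		struct pci_bus *bus;

		/* MSI must be globally enabled and supported by the device */
		if (!pci_msi_enable)
			return 0;

		if (!dev || dev->no_msi || nvec < 1)
			return 0;

		/* Any upstream bridge may also veto MSI for the whole bus */
		for (bus = dev->bus; bus; bus = bus->parent)
			if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
				return 0;

		return 1;
	}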
Andy Shevchenko March 18, 2022, 10:42 a.m. UTC | #11
On Thu, Mar 17, 2022 at 11:12 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> On Thu, Mar 17, 2022 at 10:59:28AM +0200, Andy Shevchenko wrote:
> > On Wed, Mar 16, 2022 at 04:15:48PM -0500, Bjorn Helgaas wrote:
> > > On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > > > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > > > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > > >
> > > > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > > > PCI devices")
> > >
> > > > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > > > commit log:
> > > > > > >
> > > > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > > >
> > > > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > > > >
> > > > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > > > version of the patch with different commit message. Did I
> > > > > > misunderstand something?
> > > > >
> > > > > Oh, right, of course.  Sorry, I was asleep at the wheel.
> > > >
> > > > Are you going to fix that?
> > >
> > > Yes, of course, I'll do something with the commit message after we
> > > figure out how to handle PCI_COMMAND_INTX_DISABLE.
> > >
> > > > > I guess it's just that for these devices, we don't disable INTx
> > > > > when enabling MSI.  I can't remember why we disable INTx when
> > > > > enabling MSI, but it raises the question of whether it's better to
> > > > > leave INTx enabled or to just disable use of MSI completely.
> > > >
> > > > It's required by specification to disable INTx if I read 6.1.4.3
> > > > Enabling Operation correctly.
> > >
> > > Thanks for the reference; I was looking for something like that.  But
> > > I don't think this section requires us to set
> > > PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> > > PCIe r6.0, sec 6.1.4.3 says:
> > >
> > >   To maintain backward compatibility, the MSI Enable bit in the
> > >   Message Control Register for MSI and the MSI-X Enable bit in the
> > >   Message Control Register for MSI-X are each Clear by default (MSI
> > >   and MSI-X are both disabled). System configuration software Sets one
> > >   of these bits to enable either MSI or MSI-X, but never both
> > >   simultaneously. Behavior is undefined if both MSI and MSI-X are
> > >   enabled simultaneously. Software disabling either mechanism during
> > >   active operation may result in the Function dropping pending
> > >   interrupt conditions or failing to recognize new interrupt
> > >   conditions. While enabled for MSI or MSI-X operation, a Function is
> > >   prohibited from using INTx interrupts (if implemented) to request
> > >   service (MSI, MSI-X, and INTx are mutually exclusive).
> > >
> > > The only *software* constraints I see are (1) software must never
> > > enable both MSI and MSI-X simultaneously, and (2) if software disables
> > > MSI or MSI-X during active operation, the Function may fail to
> > > generate an interrupt when it should.
> > >
> > > I read the last sentence as a constraint on the *hardware*: if either
> > > MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> > > regardless of the state of PCI_COMMAND_INTX_DISABLE.
> > >
> > > I searched the spec for "Interrupt Disable", looking for situations
> > > where software might be *required* to set it, but I didn't see
> > > anything.
> > >
> > > I suspect "Interrupt Disable" was intended to help the OS stop all
> > > activity from a device during hot-plug or reconfiguration, as hinted
> > > at in sec 6.4, "Device Synchronization":
> > >
> > >   The ability of the driver and/or system software to block new
> > >   Requests from the device is supported by the Bus Master Enable,
> > >   SERR# Enable, and Interrupt Disable bits in the Command register
> > >   (Section 7.5.1.1.3) of each device Function, and other such control
> > >   bits.
> > >
> > > So I'm trying to figure out why when enabling MSI we need to set
> > > PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> > > for these quirked devices.
> >
> > I guess it's wrong wording in the last paragraph. It's not safe, but it's
> > _required_ since HW doesn't follow PCI specification that clearly says:
> > "MSI, MSI-X, and INTx are mutually exclusive".
>
> I agree there's a defect in these SB600 devices.  My guess is that
> PCI_COMMAND_INTX_DISABLE actually disables both INTx and MSI, when
> it's only supposed to disable INTx.
>
> I'm pretty sure the spec doesn't actually require software to set
> Interrupt Disable when enabling MSI, since MSI was added in PCI r2.2,
> which included this text in sec 6.8.2:
>
>   System configuration software sets [the MSI Enable] bit to enable
>   MSI. ...  Once enabled, a function is prohibited from using its
>   INTx# pin (if implemented) to request service (MSI and INTx# are
>   mutually exclusive).
>
> and Interrupt Disable was added later, in PCI r2.3, with no mention of
> a connection with MSI.  All the specs from PCI r2.2 to PCIe r6.0
> include the text above about not using INTx# if MSI or MSI-X is
> enabled, but that's not the same as requiring software to set
> Interrupt Disable.  Linux has set Interrupt Disable when enabling MSI
> ever since MSI support was added [1], so I would hesitate to change
> that even though I don't think it's required.

Thanks for diving into the history of the specification. What I have learnt
about specifications in general is that they usually carry a lot of
implications that are only understood (known) by the specification author(s).
That leaves room for misinterpretation. In such cases I usually apply common
sense and go with the straightforward logic. Here it seems to me that keeping
both enabled is illogical and Linux does the right thing (meaning the author
of the Linux kernel implementation is on the same page as me).

> What I don't like about PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG is that it
> changes the generic code path in a sort of random way, i.e., this
> device becomes yet another special case in how we handle Interrupt
> Disable.
>
> What would you think about just setting pdev->no_msi instead, so we
> don't try to use MSI at all on these devices?  I think that's what we
> did before 306c54d0edb6.

Yes, we did. But why should we go back to that when the special case is
already established, even disregarding my patch(es)? If you want to do that,
you need to explain why other devices on the same chipset should enable MSI
and what's wrong with enabling it on the USB devices. My understanding is
that MSI is a good thing to have due to its performance benefits, and the
fact that other devices on the same chipset already use it tells me that
it's okay here. Moreover, the reporter of the bug confirmed that MSI works
for them after applying this quirk fix.

> [1] https://lore.kernel.org/all/200310032215.h93MFnjT005788@snoqualmie.dp.intel.com/ (search for PCI_COMMAND_INTX_DISABLE)
Bjorn Helgaas March 18, 2022, 4:47 p.m. UTC | #12
On Fri, Mar 18, 2022 at 12:42:18PM +0200, Andy Shevchenko wrote:
> On Thu, Mar 17, 2022 at 11:12 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Thu, Mar 17, 2022 at 10:59:28AM +0200, Andy Shevchenko wrote:
> > > On Wed, Mar 16, 2022 at 04:15:48PM -0500, Bjorn Helgaas wrote:
> > > > On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > > > > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > > > > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > > > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > > > >
> > > > > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > > > > PCI devices")
> > > >
> > > > > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > > > > commit log:
> > > > > > > >
> > > > > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > > > >
> > > > > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > > > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > > > > >
> > > > > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > > > > version of the patch with different commit message. Did I
> > > > > > > misunderstand something?
> > > > > >
> > > > > > Oh, right, of course.  Sorry, I was asleep at the wheel.
> > > > >
> > > > > Are you going to fix that?
> > > >
> > > > Yes, of course, I'll do something with the commit message after we
> > > > figure out how to handle PCI_COMMAND_INTX_DISABLE.
> > > >
> > > > > > I guess it's just that for these devices, we don't disable INTx
> > > > > > when enabling MSI.  I can't remember why we disable INTx when
> > > > > > enabling MSI, but it raises the question of whether it's better to
> > > > > > leave INTx enabled or to just disable use of MSI completely.
> > > > >
> > > > > It's required by specification to disable INTx if I read 6.1.4.3
> > > > > Enabling Operation correctly.
> > > >
> > > > Thanks for the reference; I was looking for something like that.  But
> > > > I don't think this section requires us to set
> > > > PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> > > > PCIe r6.0, sec 6.1.4.3 says:
> > > >
> > > >   To maintain backward compatibility, the MSI Enable bit in the
> > > >   Message Control Register for MSI and the MSI-X Enable bit in the
> > > >   Message Control Register for MSI-X are each Clear by default (MSI
> > > >   and MSI-X are both disabled). System configuration software Sets one
> > > >   of these bits to enable either MSI or MSI-X, but never both
> > > >   simultaneously. Behavior is undefined if both MSI and MSI-X are
> > > >   enabled simultaneously. Software disabling either mechanism during
> > > >   active operation may result in the Function dropping pending
> > > >   interrupt conditions or failing to recognize new interrupt
> > > >   conditions. While enabled for MSI or MSI-X operation, a Function is
> > > >   prohibited from using INTx interrupts (if implemented) to request
> > > >   service (MSI, MSI-X, and INTx are mutually exclusive).
> > > >
> > > > The only *software* constraints I see are (1) software must never
> > > > enable both MSI and MSI-X simultaneously, and (2) if software disables
> > > > MSI or MSI-X during active operation, the Function may fail to
> > > > generate an interrupt when it should.
> > > >
> > > > I read the last sentence as a constraint on the *hardware*: if either
> > > > MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> > > > regardless of the state of PCI_COMMAND_INTX_DISABLE.
> > > >
> > > > I searched the spec for "Interrupt Disable", looking for situations
> > > > where software might be *required* to set it, but I didn't see
> > > > anything.
> > > >
> > > > I suspect "Interrupt Disable" was intended to help the OS stop all
> > > > activity from a device during hot-plug or reconfiguration, as hinted
> > > > at in sec 6.4, "Device Synchronization":
> > > >
> > > >   The ability of the driver and/or system software to block new
> > > >   Requests from the device is supported by the Bus Master Enable,
> > > >   SERR# Enable, and Interrupt Disable bits in the Command register
> > > >   (Section 7.5.1.1.3) of each device Function, and other such control
> > > >   bits.
> > > >
> > > > So I'm trying to figure out why when enabling MSI we need to set
> > > > PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> > > > for these quirked devices.
> > >
> > > I guess it's wrong wording in the last paragraph. It's not safe, but it's
> > > _required_ since HW doesn't follow PCI specification that clearly says:
> > > "MSI, MSI-X, and INTx are mutually exclusive".
> >
> > I agree there's a defect in these SB600 devices.  My guess is that
> > PCI_COMMAND_INTX_DISABLE actually disables both INTx and MSI, when
> > it's only supposed to disable INTx.
> >
> > I'm pretty sure the spec doesn't actually require software to set
> > Interrupt Disable when enabling MSI, since MSI was added in PCI r2.2,
> > which included this text in sec 6.8.2:
> >
> >   System configuration software sets [the MSI Enable] bit to enable
> >   MSI. ...  Once enabled, a function is prohibited from using its
> >   INTx# pin (if implemented) to request service (MSI and INTx# are
> >   mutually exclusive).
> >
> > and Interrupt Disable was added later, in PCI r2.3, with no mention of
> > a connection with MSI.  All the specs from PCI r2.2 to PCIe r6.0
> > include the text above about not using INTx# if MSI or MSI-X is
> > enabled, but that's not the same as requiring software to set
> > Interrupt Disable.  Linux has set Interrupt Disable when enabling MSI
> > ever since MSI support was added [1], so I would hesitate to change
> > that even though I don't think it's required.
> 
> Thanks for diving into the history of the specification. What I learnt
> about any of the specifications is that it usually has a lot of
> implications that are only understandable (known) to the specification
> author(s). This gives a room of misinterpretation. In any case I
> usually apply my common sense denominator, so I try to go with the
> straight logic. In this case it seems to me that keeping both enabled
> is illogical and Linux does the right thing (means the author of the
> Linux kernel implementation is on the same page with me).
> 
> > What I don't like about PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG is that it
> > changes the generic code path in a sort of random way, i.e., this
> > device becomes yet another special case in how we handle Interrupt
> > Disable.
> >
> > What would you think about just setting pdev->no_msi instead, so we
> > don't try to use MSI at all on these devices?  I think that's what we
> > did before 306c54d0edb6.
> 
> Yes, we did. But why should we go this way if it already established a
> special case disregarding my patch(es)? If you want to do that you
> need to explain why other devices on the same chipset should enable
> MSI and what's wrong with enabling MSI on the USB devices. My
> understanding is that the MSI is a good thing to have due to
> performance benefits and taking into account other devices that have
> already been using it on the other devices of the same chipset tells
> me that's okay. Moreover, the reporter of the bug confirmed that MSI
> works for them after applying this quirk fix.

I agree that MSI is generally to be preferred over INTx.  In this
case, it's an old USB device and I don't think there's any real
performance benefit.

The problem with enabling MSI on these USB devices is that the generic
MSI code doesn't work correctly.  Since we've been using them without
MSI in the past, and they have some defect that makes MSI not work
correctly, it seems like the simplest solution is to avoid using MSI,
something like the patch below.

Would I prefer the same for all the other existing users of
PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG?  Yes.  Enough to change the status
quo, which would be a performance regression, and potentially break
something?  Probably not.

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index d2dd6a6cda60..c0e6b031bb5d 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -2675,6 +2675,18 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SERVERWORKS,
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8132_BRIDGE,
 			 ht_enable_msi_mapping);
 
+static void quirk_no_msi(struct pci_dev *dev)
+{
+	pci_info(dev, "Disabling MSI to avoid hardware defect\n");
+	dev->no_msi = 1;
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_no_msi);
+
 /*
  * The P5N32-SLI motherboards from Asus have a problem with MSI
  * for the MCP55 NIC. It is not yet determined whether the MSI problem
Andy Shevchenko March 18, 2022, 6:06 p.m. UTC | #13
On Fri, Mar 18, 2022 at 11:47:40AM -0500, Bjorn Helgaas wrote:
> On Fri, Mar 18, 2022 at 12:42:18PM +0200, Andy Shevchenko wrote:
> > On Thu, Mar 17, 2022 at 11:12 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Thu, Mar 17, 2022 at 10:59:28AM +0200, Andy Shevchenko wrote:
> > > > On Wed, Mar 16, 2022 at 04:15:48PM -0500, Bjorn Helgaas wrote:
> > > > > On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > > > > > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > > > > > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > > > > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > > > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > > > > >
> > > > > > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > > > > > PCI devices")
> > > > >
> > > > > > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > > > > > commit log:
> > > > > > > > >
> > > > > > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > > > > >
> > > > > > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > > > > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > > > > > >
> > > > > > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > > > > > version of the patch with different commit message. Did I
> > > > > > > > misunderstand something?
> > > > > > >
> > > > > > > Oh, right, of course.  Sorry, I was asleep at the wheel.
> > > > > >
> > > > > > Are you going to fix that?
> > > > >
> > > > > Yes, of course, I'll do something with the commit message after we
> > > > > figure out how to handle PCI_COMMAND_INTX_DISABLE.
> > > > >
> > > > > > > I guess it's just that for these devices, we don't disable INTx
> > > > > > > when enabling MSI.  I can't remember why we disable INTx when
> > > > > > > enabling MSI, but it raises the question of whether it's better to
> > > > > > > leave INTx enabled or to just disable use of MSI completely.
> > > > > >
> > > > > > It's required by specification to disable INTx if I read 6.1.4.3
> > > > > > Enabling Operation correctly.
> > > > >
> > > > > Thanks for the reference; I was looking for something like that.  But
> > > > > I don't think this section requires us to set
> > > > > PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> > > > > PCIe r6.0, sec 6.1.4.3 says:
> > > > >
> > > > >   To maintain backward compatibility, the MSI Enable bit in the
> > > > >   Message Control Register for MSI and the MSI-X Enable bit in the
> > > > >   Message Control Register for MSI-X are each Clear by default (MSI
> > > > >   and MSI-X are both disabled). System configuration software Sets one
> > > > >   of these bits to enable either MSI or MSI-X, but never both
> > > > >   simultaneously. Behavior is undefined if both MSI and MSI-X are
> > > > >   enabled simultaneously. Software disabling either mechanism during
> > > > >   active operation may result in the Function dropping pending
> > > > >   interrupt conditions or failing to recognize new interrupt
> > > > >   conditions. While enabled for MSI or MSI-X operation, a Function is
> > > > >   prohibited from using INTx interrupts (if implemented) to request
> > > > >   service (MSI, MSI-X, and INTx are mutually exclusive).
> > > > >
> > > > > The only *software* constraints I see are (1) software must never
> > > > > enable both MSI and MSI-X simultaneously, and (2) if software disables
> > > > > MSI or MSI-X during active operation, the Function may fail to
> > > > > generate an interrupt when it should.
> > > > >
> > > > > I read the last sentence as a constraint on the *hardware*: if either
> > > > > MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> > > > > regardless of the state of PCI_COMMAND_INTX_DISABLE.
> > > > >
> > > > > I searched the spec for "Interrupt Disable", looking for situations
> > > > > where software might be *required* to set it, but I didn't see
> > > > > anything.
> > > > >
> > > > > I suspect "Interrupt Disable" was intended to help the OS stop all
> > > > > activity from a device during hot-plug or reconfiguration, as hinted
> > > > > at in sec 6.4, "Device Synchronization":
> > > > >
> > > > >   The ability of the driver and/or system software to block new
> > > > >   Requests from the device is supported by the Bus Master Enable,
> > > > >   SERR# Enable, and Interrupt Disable bits in the Command register
> > > > >   (Section 7.5.1.1.3) of each device Function, and other such control
> > > > >   bits.
> > > > >
> > > > > So I'm trying to figure out why when enabling MSI we need to set
> > > > > PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> > > > > for these quirked devices.
> > > >
> > > > I guess it's wrong wording in the last paragraph. It's not safe, but it's
> > > > _required_ since HW doesn't follow PCI specification that clearly says:
> > > > "MSI, MSI-X, and INTx are mutually exclusive".
> > >
> > > I agree there's a defect in these SB600 devices.  My guess is that
> > > PCI_COMMAND_INTX_DISABLE actually disables both INTx and MSI, when
> > > it's only supposed to disable INTx.
> > >
> > > I'm pretty sure the spec doesn't actually require software to set
> > > Interrupt Disable when enabling MSI, since MSI was added in PCI r2.2,
> > > which included this text in sec 6.8.2:
> > >
> > >   System configuration software sets [the MSI Enable] bit to enable
> > >   MSI. ...  Once enabled, a function is prohibited from using its
> > >   INTx# pin (if implemented) to request service (MSI and INTx# are
> > >   mutually exclusive).
> > >
> > > and Interrupt Disable was added later, in PCI r2.3, with no mention of
> > > a connection with MSI.  All the specs from PCI r2.2 to PCIe r6.0
> > > include the text above about not using INTx# if MSI or MSI-X is
> > > enabled, but that's not the same as requiring software to set
> > > Interrupt Disable.  Linux has set Interrupt Disable when enabling MSI
> > > ever since MSI support was added [1], so I would hesitate to change
> > > that even though I don't think it's required.
> > 
> > Thanks for diving into the history of the specification. What I learnt
> > about any of the specifications is that it usually has a lot of
> > implications that are only understandable (known) to the specification
> > author(s). This gives a room of misinterpretation. In any case I
> > usually apply my common sense denominator, so I try to go with the
> > straight logic. In this case it seems to me that keeping both enabled
> > is illogical and Linux does the right thing (means the author of the
> > Linux kernel implementation is on the same page with me).
> > 
> > > What I don't like about PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG is that it
> > > changes the generic code path in a sort of random way, i.e., this
> > > device becomes yet another special case in how we handle Interrupt
> > > Disable.
> > >
> > > What would you think about just setting pdev->no_msi instead, so we
> > > don't try to use MSI at all on these devices?  I think that's what we
> > > did before 306c54d0edb6.
> > 
> > Yes, we did. But why should we go this way if it already established a
> > special case disregarding my patch(es)? If you want to do that you
> > need to explain why other devices on the same chipset should enable
> > MSI and what's wrong with enabling MSI on the USB devices. My
> > understanding is that the MSI is a good thing to have due to
> > performance benefits and taking into account other devices that have
> > already been using it on the other devices of the same chipset tells
> > me that's okay. Moreover, the reporter of the bug confirmed that MSI
> > works for them after applying this quirk fix.
> 
> I agree that MSI is generally to be preferred over INTx.  In this
> case, it's an old USB device and I don't think there's any real
> performance benefit.

I tend to disagree. It's not only the USB devices that are involved;
other devices may share the same INTx line(s). That is the case, for
example, on EG20T. I don't know whether ATI is one of them, but if it
is, that argues in favor of MSI.

> The problem with enabling MSI on these USB devices is that the generic
> MSI code doesn't work correctly.

Can you elaborate this, please? Where is that code that doesn't work correctly?

> Since we've been using them without
> MSI in the past, and they have some defect that makes MSI not work
> correctly, it seems like the simplest solution is to avoid using MSI,
> something like the patch below.

I tend to disagree.

> Would I prefer the same for all the other existing users of
> PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG?  Yes.  Enough to change the status
> quo, which would be a performance regression, and potentially break
> something?  Probably not.

Why? What is the point of deliberately degrading performance?

...

Let's keep it simple: apply my patch, since it fixes the regression, and
move on with whatever solution you may propose later on. It's really not
good for the end user, who was kind enough to report and test all this.
Bjorn Helgaas March 18, 2022, 9:09 p.m. UTC | #14
On Fri, Mar 18, 2022 at 08:06:15PM +0200, Andy Shevchenko wrote:
> On Fri, Mar 18, 2022 at 11:47:40AM -0500, Bjorn Helgaas wrote:
> > On Fri, Mar 18, 2022 at 12:42:18PM +0200, Andy Shevchenko wrote:
> > > On Thu, Mar 17, 2022 at 11:12 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Thu, Mar 17, 2022 at 10:59:28AM +0200, Andy Shevchenko wrote:
> > > > > On Wed, Mar 16, 2022 at 04:15:48PM -0500, Bjorn Helgaas wrote:
> > > > > > On Wed, Mar 16, 2022 at 06:12:19PM +0200, Andy Shevchenko wrote:
> > > > > > > On Wed, Mar 16, 2022 at 06:52:09AM -0500, Bjorn Helgaas wrote:
> > > > > > > > On Wed, Mar 16, 2022 at 12:27:57PM +0200, Andy Shevchenko wrote:
> > > > > > > > > On Tue, Mar 15, 2022 at 03:22:31PM -0500, Bjorn Helgaas wrote:
> > > > > > > > > > On Tue, Mar 15, 2022 at 12:09:08PM +0200, Andy Shevchenko wrote:
> > > > > > > > > > > On Mon, Mar 14, 2022 at 02:42:53PM -0500, Bjorn Helgaas wrote:
> > > > > > > > > > > > On Mon, Mar 14, 2022 at 12:14:48PM +0200, Andy Shevchenko wrote:
> > > > > > > > > > > > > ATI PCIe-USB adapter advertises MSI, but it doesn't work
> > > > > > > > > > > > > if INTx is disabled.  Enable the respective quirk as
> > > > > > > > > > > > > it's done for other ATI devices on this chipset,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Fixes: 306c54d0edb6 ("usb: hcd: Try MSI interrupts on
> > > > > > > > > > > > > PCI devices")
> > > > > >
> > > > > > > > > > Anyway, I applied this to pci/msi for v5.18 with the following
> > > > > > > > > > commit log:
> > > > > > > > > >
> > > > > > > > > >     PCI: Disable broken MSI on ATI SB600 USB adapters
> > > > > > > > > >
> > > > > > > > > >     Some ATI SB600 USB adapters advertise MSI, but MSI doesn't
> > > > > > > > > >     work if INTx is disabled.  Disable MSI on these adapters.
> > > > > > > > >
> > > > > > > > > But IIUC MSI is _not_ disabled. That's why I have issued this
> > > > > > > > > version of the patch with different commit message. Did I
> > > > > > > > > misunderstand something?
> > > > > > > >
> > > > > > > > Oh, right, of course.  Sorry, I was asleep at the wheel.
> > > > > > >
> > > > > > > Are you going to fix that?
> > > > > >
> > > > > > Yes, of course, I'll do something with the commit message after we
> > > > > > figure out how to handle PCI_COMMAND_INTX_DISABLE.
> > > > > >
> > > > > > > > I guess it's just that for these devices, we don't disable INTx
> > > > > > > > when enabling MSI.  I can't remember why we disable INTx when
> > > > > > > > enabling MSI, but it raises the question of whether it's better to
> > > > > > > > leave INTx enabled or to just disable use of MSI completely.
> > > > > > >
> > > > > > > It's required by specification to disable INTx if I read 6.1.4.3
> > > > > > > Enabling Operation correctly.
> > > > > >
> > > > > > Thanks for the reference; I was looking for something like that.  But
> > > > > > I don't think this section requires us to set
> > > > > > PCI_COMMAND_INTX_DISABLE.  For the benefit of folks without the spec,
> > > > > > PCIe r6.0, sec 6.1.4.3 says:
> > > > > >
> > > > > >   To maintain backward compatibility, the MSI Enable bit in the
> > > > > >   Message Control Register for MSI and the MSI-X Enable bit in the
> > > > > >   Message Control Register for MSI-X are each Clear by default (MSI
> > > > > >   and MSI-X are both disabled). System configuration software Sets one
> > > > > >   of these bits to enable either MSI or MSI-X, but never both
> > > > > >   simultaneously. Behavior is undefined if both MSI and MSI-X are
> > > > > >   enabled simultaneously. Software disabling either mechanism during
> > > > > >   active operation may result in the Function dropping pending
> > > > > >   interrupt conditions or failing to recognize new interrupt
> > > > > >   conditions. While enabled for MSI or MSI-X operation, a Function is
> > > > > >   prohibited from using INTx interrupts (if implemented) to request
> > > > > >   service (MSI, MSI-X, and INTx are mutually exclusive).
> > > > > >
> > > > > > The only *software* constraints I see are (1) software must never
> > > > > > enable both MSI and MSI-X simultaneously, and (2) if software disables
> > > > > > MSI or MSI-X during active operation, the Function may fail to
> > > > > > generate an interrupt when it should.
> > > > > >
> > > > > > I read the last sentence as a constraint on the *hardware*: if either
> > > > > > MSI or MSI-X is enabled, the Function is not allowed to use INTx,
> > > > > > regardless of the state of PCI_COMMAND_INTX_DISABLE.
> > > > > >
> > > > > > I searched the spec for "Interrupt Disable", looking for situations
> > > > > > where software might be *required* to set it, but I didn't see
> > > > > > anything.
> > > > > >
> > > > > > I suspect "Interrupt Disable" was intended to help the OS stop all
> > > > > > activity from a device during hot-plug or reconfiguration, as hinted
> > > > > > at in sec 6.4, "Device Synchronization":
> > > > > >
> > > > > >   The ability of the driver and/or system software to block new
> > > > > >   Requests from the device is supported by the Bus Master Enable,
> > > > > >   SERR# Enable, and Interrupt Disable bits in the Command register
> > > > > >   (Section 7.5.1.1.3) of each device Function, and other such control
> > > > > >   bits.
> > > > > >
> > > > > > So I'm trying to figure out why when enabling MSI we need to set
> > > > > > PCI_COMMAND_INTX_DISABLE for most devices, but it's safe to skip that
> > > > > > for these quirked devices.
> > > > >
> > > > > I guess it's wrong wording in the last paragraph. It's not safe, but it's
> > > > > _required_ since HW doesn't follow PCI specification that clearly says:
> > > > > "MSI, MSI-X, and INTx are mutually exclusive".
> > > >
> > > > I agree there's a defect in these SB600 devices.  My guess is that
> > > > PCI_COMMAND_INTX_DISABLE actually disables both INTx and MSI, when
> > > > it's only supposed to disable INTx.
> > > >
> > > > I'm pretty sure the spec doesn't actually require software to set
> > > > Interrupt Disable when enabling MSI, since MSI was added in PCI r2.2,
> > > > which included this text in sec 6.8.2:
> > > >
> > > >   System configuration software sets [the MSI Enable] bit to enable
> > > >   MSI. ...  Once enabled, a function is prohibited from using its
> > > >   INTx# pin (if implemented) to request service (MSI and INTx# are
> > > >   mutually exclusive).
> > > >
> > > > and Interrupt Disable was added later, in PCI r2.3, with no mention of
> > > > a connection with MSI.  All the specs from PCI r2.2 to PCIe r6.0
> > > > include the text above about not using INTx# if MSI or MSI-X is
> > > > enabled, but that's not the same as requiring software to set
> > > > Interrupt Disable.  Linux has set Interrupt Disable when enabling MSI
> > > > ever since MSI support was added [1], so I would hesitate to change
> > > > that even though I don't think it's required.
> > > 
> > > Thanks for diving into the history of the specification. What I learnt
> > > about any of the specifications is that it usually has a lot of
> > > implications that are only understandable (known) to the specification
> > > author(s). This gives a room of misinterpretation. In any case I
> > > usually apply my common sense denominator, so I try to go with the
> > > straight logic. In this case it seems to me that keeping both enabled
> > > is illogical and Linux does the right thing (means the author of the
> > > Linux kernel implementation is on the same page with me).
> > > 
> > > > What I don't like about PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG is that it
> > > > changes the generic code path in a sort of random way, i.e., this
> > > > device becomes yet another special case in how we handle Interrupt
> > > > Disable.
> > > >
> > > > What would you think about just setting pdev->no_msi instead, so we
> > > > don't try to use MSI at all on these devices?  I think that's what we
> > > > did before 306c54d0edb6.
> > > 
> > > Yes, we did. But why should we go this way if it already established a
> > > special case disregarding my patch(es)? If you want to do that you
> > > need to explain why other devices on the same chipset should enable
> > > MSI and what's wrong with enabling MSI on the USB devices. My
> > > understanding is that the MSI is a good thing to have due to
> > > performance benefits and taking into account other devices that have
> > > already been using it on the other devices of the same chipset tells
> > > me that's okay. Moreover, the reporter of the bug confirmed that MSI
> > > works for them after applying this quirk fix.
> > 
> > I agree that MSI is generally to be preferred over INTx.  In this
> > case, it's an old USB device and I don't think there's any real
> > performance benefit.
> 
> I tend to disagree. It's not only the USB devices that are involved;
> other devices may share the same INTx line(s). That is the case, for
> example, on EG20T. I don't know whether ATI is one of them, but if it
> is, that argues in favor of MSI.
> 
> > The problem with enabling MSI on these USB devices is that the generic
> > MSI code doesn't work correctly.
> 
> Can you elaborate this, please? Where is that code that doesn't work
> correctly?

On this device, pci_intx_for_msi() breaks MSI, so if we want to use
MSI, we have to set PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG to avoid the
breakage.
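
For reference, a rough sketch of the mechanism involved, paraphrased from
drivers/pci/quirks.c and drivers/pci/msi.c (not a verbatim copy of the
current code):

/* The quirk only marks the device. */
static void quirk_msi_intx_disable_bug(struct pci_dev *dev)
{
        dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG;
}

/* Called on the MSI/MSI-X enable and disable paths. */
static void pci_intx_for_msi(struct pci_dev *dev, int enable)
{
        /*
         * On devices with this defect, toggling PCI_COMMAND_INTX_DISABLE
         * apparently kills MSI as well, so the flag turns this into a
         * no-op and INTx stays enabled in the Command register.
         */
        if (!(dev->dev_flags & PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG))
                pci_intx(dev, enable);
}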

If a non-essential feature is broken, I think the cleanest path for
maintainability is to avoid using the feature.  In this case, MSI is
broken because Interrupt Disable seems to disable MSI as well as INTx.
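
A minimal sketch of what that could look like, reusing the pdev->no_msi
idea from earlier in the thread (the quirk name below is made up for
illustration; the device IDs are the ones from the patch):

/*
 * Hypothetical alternative: refuse MSI entirely on the affected SB600
 * USB functions instead of leaving INTx enabled alongside MSI.
 */
static void quirk_no_msi_sb600_usb(struct pci_dev *dev)
{
        dev->no_msi = 1;
        pci_info(dev, "MSI does not work when INTx is disabled; not using MSI\n");
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi_sb600_usb);
/* ... and likewise for device IDs 0x4387 through 0x438b. */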

If we enable MSI on these devices, we (which really means "I") have to
audit all the places that set Interrupt Disable to see if masking both
MSI and INTx will break anything.  This is the maintainability issue.

Most places that set PCI_COMMAND_INTX_DISABLE are in drivers for other
devices, which are not an issue, of course.  Some others:

  - pci_check_and_mask_intx(): used by vfio_intx_handler() (not a
    problem since this case uses INTx, not MSI) and the UIO
    irqhandler() (this is a potential problem for user-space drivers,
    but I don't know the implications).

  - vfio_pci_intx_mask() and similar VFIO paths: potentially an issue,
    but I guess guest drivers that use these are on their own and have
    to have their own knowledge of device quirks.

  - pci_dev_save_and_disable(): used in reset paths, so shouldn't be
    an issue.

  - xen_pcibk_get_interrupt_type(): decides interrupt type based on
    reading PCI_COMMAND_INTX_DISABLE and will return
    "INTERRUPT_TYPE_INTX | INTERRUPT_TYPE_MSI" when it should return
    "INTERRUPT_TYPE_MSI".  Will break msi_msix_flags_write(), which
    looks for an exact match, but I don't know the further
    implications (see the simplified sketch after this list).

  - Xen command_write(): No idea how to even figure out if this is an
    issue.
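
To make the exact-match concern above concrete, here is a heavily
simplified, hypothetical version of that type detection (illustrative
only; the real code lives under drivers/xen/xen-pciback/ and differs in
detail):

#define INTERRUPT_TYPE_INTX     (1 << 0)
#define INTERRUPT_TYPE_MSI      (1 << 1)

static int interrupt_type(struct pci_dev *dev)
{
        u16 cmd, flags;
        int type = 0;

        pci_read_config_word(dev, PCI_COMMAND, &cmd);
        if (!(cmd & PCI_COMMAND_INTX_DISABLE))
                type |= INTERRUPT_TYPE_INTX;

        if (dev->msi_cap) {
                pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &flags);
                if (flags & PCI_MSI_FLAGS_ENABLE)
                        type |= INTERRUPT_TYPE_MSI;
        }

        /*
         * With PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG set, INTx is never
         * disabled, so a device that is actively using MSI reports
         * INTERRUPT_TYPE_INTX | INTERRUPT_TYPE_MSI here, and a caller
         * comparing against exactly INTERRUPT_TYPE_MSI rejects it.
         */
        return type;
}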

These all look like it *probably* wouldn't seriously break anything to
use PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG.  But the point is that it's a
lot of work for me to even make an educated guess, and "probably safe"
and "unknown benefit" don't make a strong case.

> > Would I prefer the same for all the other existing users of
> > PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG?  Yes.  Enough to change the status
> > quo, which would be a performance regression, and potentially break
> > something?  Probably not.
> 
> Why? What is the point of deliberately degrading performance?

Sorry, I didn't make that clear enough.  I was trying to say that I
don't think it's a good idea to change *existing* users because it
would degrade performance and it might break something.

Bjorn
diff mbox series

Patch

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 65f7f6b0576c..cc6a87a32b62 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3041,6 +3041,13 @@  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
 			PCI_DEVICE_ID_TIGON3_5715S,
 			quirk_msi_intx_disable_bug);
 
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_msi_intx_disable_bug);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_msi_intx_disable_bug);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_msi_intx_disable_bug);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_msi_intx_disable_bug);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_msi_intx_disable_bug);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_msi_intx_disable_bug);
+
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4390,
 			quirk_msi_intx_disable_ati_bug);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4391,