
xen: do not re-use pirq number cached in pci device msi msg data

Message ID 20170113200751.20125-1-ddstreet@ieee.org (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Dan Streetman Jan. 13, 2017, 8:07 p.m. UTC
Revert the main part of commit:
af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")

That commit introduced reading the pci device's msi message data to see
if a pirq was previously configured for the device's msi/msix, and
re-using that pirq if so.  At the time, that was the correct behavior.
However, a later change to Qemu caused it to call into the Xen
hypervisor to unmap all pirqs for a pci device when the pci device
disables its MSI/MSIX vectors; specifically the Qemu commit:
c976437c7dba9c7444fb41df45468968aaa326ad
("qemu-xen: free all the pirqs for msi/msix when driver unload")

Once Qemu added this pirq unmapping, it was no longer correct for the
kernel to re-use the pirq number cached in the pci device msi message
data.  All Qemu releases since 2.1.0 contain the patch that unmaps the
pirqs when the pci device disables its MSI/MSIX vectors.

This bug causes failures to initialize multiple NVMe controllers
under Xen, because the NVMe driver sets up a single MSIX vector for
each controller (concurrently), uses it to read some configuration
data from the controller, and then disables that single MSIX vector
and re-configures all the MSIX vectors it needs.  The MSIX setup
code then tries to re-use the cached pirq from the first vector for
each controller, but the hypervisor has already given that pirq away
to another controller, so initialization fails.

This is discussed in more detail at:
https://lists.xen.org/archives/html/xen-devel/2017-01/msg00447.html

Fixes: af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")
Signed-off-by: Dan Streetman <dan.streetman@canonical.com>
---
 arch/x86/pci/xen.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)
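
For context on the mechanism being reverted: the old code stashed the
pirq number inside the pci device's MSI message itself, and later read
it back instead of always allocating a fresh pirq.  The following
minimal standalone sketch of that round-trip is reconstructed from the
removed lines in the diff below; the macro values and the
XEN_PIRQ_MSI_DATA value are placeholders chosen to match the decode
expression, not the kernel's real definitions.

#include <stdint.h>
#include <stdio.h>

struct msi_msg {
	uint32_t address_lo;
	uint32_t address_hi;
	uint32_t data;
};

/* Placeholder values; the real ones live in the kernel MSI headers. */
#define MSI_ADDR_DEST_ID_SHIFT	12
#define MSI_ADDR_EXT_DEST_ID(x)	((x) & 0xffffff00)
#define XEN_PIRQ_MSI_DATA	0x4d53	/* stand-in "cached pirq" marker */

/* Encode: low 8 bits of the pirq go into the dest-ID field of
 * address_lo, the remaining bits into address_hi, and msg.data is set
 * to the magic marker -- the encoding implied by the decode expression
 * that the patch removes. */
static void pirq_to_msg(unsigned int pirq, struct msi_msg *msg)
{
	msg->address_hi = MSI_ADDR_EXT_DEST_ID(pirq);
	msg->address_lo = (pirq & 0xff) << MSI_ADDR_DEST_ID_SHIFT;
	msg->data = XEN_PIRQ_MSI_DATA;
}

/* Decode: the same expression the patch removes; returns -1 when the
 * marker is absent, i.e. no pirq was cached for this entry. */
static int msg_to_pirq(const struct msi_msg *msg)
{
	if (msg->data != XEN_PIRQ_MSI_DATA)
		return -1;
	return MSI_ADDR_EXT_DEST_ID(msg->address_hi) |
	       ((msg->address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
}

int main(void)
{
	struct msi_msg msg;

	pirq_to_msg(300, &msg);
	printf("cached pirq read back: %d\n", msg_to_pirq(&msg));
	return 0;
}

Once Qemu started unmapping pirqs on MSI/MSIX disable, the value read
back by the decode step could name a pirq the hypervisor had already
handed to someone else, which is exactly what the revert avoids.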

Comments

Stefano Stabellini Jan. 13, 2017, 8:54 p.m. UTC | #1
On Fri, 13 Jan 2017, Dan Streetman wrote:
> [...]

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


Boris Ostrovsky Jan. 13, 2017, 9:49 p.m. UTC | #2
On 01/13/2017 03:07 PM, Dan Streetman wrote:
> [...]

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Konrad Rzeszutek Wilk Jan. 13, 2017, 10:30 p.m. UTC | #3
On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> [...]

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Dan Streetman Feb. 21, 2017, 3:31 p.m. UTC | #4
On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
>> [...]
>
> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

This doesn't seem to be applied yet, is it still waiting on another
ack?  Or maybe I'm looking at the wrong git tree...
Jürgen Groß Feb. 21, 2017, 3:45 p.m. UTC | #5
On 21/02/17 16:31, Dan Streetman wrote:
> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
>>> [...]
>>
>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> This doesn't seem to be applied yet, is it still waiting on another
> ack?  Or maybe I'm looking at the wrong git tree...

Am I wrong or shouldn't this go through the PCI tree? Konrad?


Juergen
Boris Ostrovsky Feb. 21, 2017, 3:58 p.m. UTC | #6
On 02/21/2017 10:45 AM, Juergen Gross wrote:
> On 21/02/17 16:31, Dan Streetman wrote:
>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
>>>> [...]
>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> This doesn't seem to be applied yet, is it still waiting on another
>> ack?  Or maybe I'm looking at the wrong git tree...
> Am I wrong or shouldn't this go through the PCI tree? Konrad?

Konrad is away this week but since pull request for Xen tree just went
out we should probably wait until rc1 anyway (unless something big comes
up before that).

-boris
Bjorn Helgaas Feb. 22, 2017, 2:28 p.m. UTC | #7
On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
> On 02/21/2017 10:45 AM, Juergen Gross wrote:
> > On 21/02/17 16:31, Dan Streetman wrote:
> >> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com> wrote:
> >>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> >>>> [...]
> >>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> This doesn't seem to be applied yet, is it still waiting on another
> >> ack?  Or maybe I'm looking at the wrong git tree...
> > Am I wrong or shouldn't this go through the PCI tree? Konrad?
> 
> Konrad is away this week but since pull request for Xen tree just went
> out we should probably wait until rc1 anyway (unless something big comes
> up before that).

I assume this should go via the Xen or x86 tree, since that's how most
arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
If you think otherwise, let me know.
Boris Ostrovsky Feb. 22, 2017, 3:14 p.m. UTC | #8
On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
>>> On 21/02/17 16:31, Dan Streetman wrote:
>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
>>>> <konrad.wilk@oracle.com> wrote:
>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
>>>>>> [...]
>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>> This doesn't seem to be applied yet, is it still waiting on another
>>>> ack?  Or maybe I'm looking at the wrong git tree...
>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
>> Konrad is away this week but since pull request for Xen tree just went
>> out we should probably wait until rc1 anyway (unless something big comes
>> up before that).
> I assume this should go via the Xen or x86 tree, since that's how most
> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> If you think otherwise, let me know.

OK, I applied it to Xen tree's for-linus-4.11.

-boris
David Woodhouse May 3, 2017, 6:19 p.m. UTC | #9
On Wed, 2017-02-22 at 10:14 -0500, Boris Ostrovsky wrote:
> On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> > 
> > On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
> > > 
> > > On 02/21/2017 10:45 AM, Juergen Gross wrote:
> > > > 
> > > > On 21/02/17 16:31, Dan Streetman wrote:
> > > > > 
> > > > > On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> > > > > <konrad.wilk@oracle.com> wrote:
> > > > > > 
> > > > > > On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> > > > > > > 
> > > > > > > [...]
> > > > > > Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > > This doesn't seem to be applied yet, is it still waiting on another
> > > > > ack?  Or maybe I'm looking at the wrong git tree...
> > > > Am I wrong or shouldn't this go through the PCI tree? Konrad?
> > > Konrad is away this week but since pull request for Xen tree just went
> > > out we should probably wait until rc1 anyway (unless something big comes
> > > up before that).
> > I assume this should go via the Xen or x86 tree, since that's how most
> > arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> > If you think otherwise, let me know.
>
> OK, I applied it to Xen tree's for-linus-4.11.

Hm, we want this (c74fd80f2f4) in stable too, don't we?
Boris Ostrovsky May 3, 2017, 6:43 p.m. UTC | #10
On 05/03/2017 02:19 PM, David Woodhouse wrote:
> On Wed, 2017-02-22 at 10:14 -0500, Boris Ostrovsky wrote:
>> On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
>>> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
>>>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
>>>>> On 21/02/17 16:31, Dan Streetman wrote:
>>>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
>>>>>> <konrad.wilk@oracle.com> wrote:
>>>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
>>>>>>>> [...]
>>>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>>> This doesn't seem to be applied yet, is it still waiting on another
>>>>>> ack?  Or maybe I'm looking at the wrong git tree...
>>>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
>>>> Konrad is away this week but since pull request for Xen tree just went
>>>> out we should probably wait until rc1 anyway (unless something big comes
>>>> up before that).
>>> I assume this should go via the Xen or x86 tree, since that's how most
>>> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
>>> If you think otherwise, let me know.
>> OK, I applied it to Xen tree's for-linus-4.11.
> Hm, we want this (c74fd80f2f4) in stable too, don't we?


Maybe.

Per https://lists.xen.org/archives/html/xen-devel/2017-01/msg00987.html
it may break things on older (4.4-) hypervisors. They are out of
support, which is why this patch went in now but I am not sure this
automatically applies to stable kernels.

Stefano?

-boris
Stefano Stabellini May 3, 2017, 10:59 p.m. UTC | #11
On Wed, 3 May 2017, Boris Ostrovsky wrote:
> On 05/03/2017 02:19 PM, David Woodhouse wrote:
> > On Wed, 2017-02-22 at 10:14 -0500, Boris Ostrovsky wrote:
> >> On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> >>> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
> >>>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
> >>>>> On 21/02/17 16:31, Dan Streetman wrote:
> >>>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> >>>>>> <konrad.wilk@oracle.com> wrote:
> >>>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> >>>>>>>> [...]
> >>>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>>>>> This doesn't seem to be applied yet, is it still waiting on another
> >>>>>> ack?  Or maybe I'm looking at the wrong git tree...
> >>>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
> >>>> Konrad is away this week but since pull request for Xen tree just went
> >>>> out we should probably wait until rc1 anyway (unless something big comes
> >>>> up before that).
> >>> I assume this should go via the Xen or x86 tree, since that's how most
> >>> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> >>> If you think otherwise, let me know.
> >> OK, I applied it to Xen tree's for-linus-4.11.
> > Hm, we want this (c74fd80f2f4) in stable too, don't we?
> 
> 
> Maybe.
> 
> Per https://lists.xen.org/archives/html/xen-devel/2017-01/msg00987.html
> it may break things on older (4.4-) hypervisors. They are out of
> support, which is why this patch went in now but I am not sure this
> automatically applies to stable kernels.
> 
> Stefano?

This is a difficult call. We could just say that all the broken Xen
versions are out of support, so let's fix all the Linux kernel stable
trees that we can.

Or we could give a look at the release dates. Linux 3.18 is still
maintained and was tagged on Dec 7 2014. Xen 4.4 was tagged on Mar 10
2014. We could ask a backport for [4.5+], which was released 2 years
after Xen 4.4. Of course, it is still arbitrary but aims at minimizing
breakages.

What do you think?
Greg Kroah-Hartman May 3, 2017, 11:06 p.m. UTC | #12
On Wed, May 03, 2017 at 03:59:15PM -0700, Stefano Stabellini wrote:
> On Wed, 3 May 2017, Boris Ostrovsky wrote:
> > On 05/03/2017 02:19 PM, David Woodhouse wrote:
> > > On Wed, 2017-02-22 at 10:14 -0500, Boris Ostrovsky wrote:
> > >> On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> > >>> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
> > >>>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
> > >>>>> On 21/02/17 16:31, Dan Streetman wrote:
> > >>>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> > >>>>>> <konrad.wilk@oracle.com> wrote:
> > >>>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> > >>>>>>>> [...]
> > >>>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >>>>>> This doesn't seem to be applied yet, is it still waiting on another
> > >>>>>> ack?  Or maybe I'm looking at the wrong git tree...
> > >>>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
> > >>>> Konrad is away this week but since pull request for Xen tree just went
> > >>>> out we should probably wait until rc1 anyway (unless something big comes
> > >>>> up before that).
> > >>> I assume this should go via the Xen or x86 tree, since that's how most
> > >>> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> > >>> If you think otherwise, let me know.
> > >> OK, I applied it to Xen tree's for-linus-4.11.
> > > Hm, we want this (c74fd80f2f4) in stable too, don't we?
> > 
> > 
> > Maybe.
> > 
> > Per https://lists.xen.org/archives/html/xen-devel/2017-01/msg00987.html
> > it may break things on older (4.4-) hypervisors. They are out of
> > support, which is why this patch went in now but I am not sure this
> > automatically applies to stable kernels.
> > 
> > Stefano?
> 
> This is a difficult call. We could just say that all the broken Xen
> versions are out of support, so let's fix all the Linux kernel stable
> trees that we can.
> 
> Or we could give a look at the release dates. Linux 3.18 is still
> maintained and was tagged on Dec 7 2014.

Don't do anything "special" for 3.18 if you have to.  I'm only
semi-maintaining it because some SoC vendors never upstreamed their
trees and lots of devices rely on it.  None of them use Xen on their
platforms, so no need for me to backport any change there.

thanks,

greg k-h
Stefano Stabellini May 3, 2017, 11:12 p.m. UTC | #13
On Wed, 3 May 2017, Greg KH wrote:
> On Wed, May 03, 2017 at 03:59:15PM -0700, Stefano Stabellini wrote:
> > On Wed, 3 May 2017, Boris Ostrovsky wrote:
> > > On 05/03/2017 02:19 PM, David Woodhouse wrote:
> > > > On Wed, 2017-02-22 at 10:14 -0500, Boris Ostrovsky wrote:
> > > >> On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> > > >>> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
> > > >>>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
> > > >>>>> On 21/02/17 16:31, Dan Streetman wrote:
> > > >>>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> > > >>>>>> <konrad.wilk@oracle.com> wrote:
> > > >>>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> > > >>>>>>>> [...]
> > > >>>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > >>>>>> This doesn't seem to be applied yet, is it still waiting on another
> > > >>>>>> ack?  Or maybe I'm looking at the wrong git tree...
> > > >>>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
> > > >>>> Konrad is away this week but since pull request for Xen tree just went
> > > >>>> out we should probably wait until rc1 anyway (unless something big comes
> > > >>>> up before that).
> > > >>> I assume this should go via the Xen or x86 tree, since that's how most
> > > >>> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> > > >>> If you think otherwise, let me know.
> > > >> OK, I applied it to Xen tree's for-linus-4.11.
> > > > Hm, we want this (c74fd80f2f4) in stable too, don't we?
> > > 
> > > 
> > > Maybe.
> > > 
> > > Per https://lists.xen.org/archives/html/xen-devel/2017-01/msg00987.html
> > > it may break things on older (4.4-) hypervisors. They are out of
> > > support, which is why this patch went in now but I am not sure this
> > > automatically applies to stable kernels.
> > > 
> > > Stefano?
> > 
> > This is a difficult call. We could just say that all the broken Xen
> > versions are out of support, so let's fix all the Linux kernel stable
> > trees that we can.
> > 
> > Or we could give a look at the release dates. Linux 3.18 is still
> > maintained and was tagged on Dec 7 2014.
> 
> Don't do anything "special" for 3.18 if you have to.  I'm only
> semi-maintaining it because some SoC vendors never upstreamed their
> trees and lots of devices rely on it.  None of them use Xen on their
> platforms, so no need for me to backport any change there.

Thanks Greg, that is good info. Is 4.4 the oldest Linux tree fully
maintained?

If so, I think we should just backport this fix to all Linux trees >=
4.4, given that Linux 4.4 is from Jan 2016.
Greg Kroah-Hartman May 3, 2017, 11:19 p.m. UTC | #14
On Wed, May 03, 2017 at 04:12:29PM -0700, Stefano Stabellini wrote:
> On Wed, 3 May 2017, Greg KH wrote:
> > On Wed, May 03, 2017 at 03:59:15PM -0700, Stefano Stabellini wrote:
> > > On Wed, 3 May 2017, Boris Ostrovsky wrote:
> > > > On 05/03/2017 02:19 PM, David Woodhouse wrote:
> > > > > On Wed, 2017-02-22 at 10:14 -0500, Boris Ostrovsky wrote:
> > > > >> On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> > > > >>> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
> > > > >>>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
> > > > >>>>> On 21/02/17 16:31, Dan Streetman wrote:
> > > > >>>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
> > > > >>>>>> <konrad.wilk@oracle.com> wrote:
> > > > >>>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
> > > > >>>>>>>> [...]
> > > > >>>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > >>>>>> This doesn't seem to be applied yet, is it still waiting on another
> > > > >>>>>> ack?  Or maybe I'm looking at the wrong git tree...
> > > > >>>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
> > > > >>>> Konrad is away this week but since pull request for Xen tree just went
> > > > >>>> out we should probably wait until rc1 anyway (unless something big comes
> > > > >>>> up before that).
> > > > >>> I assume this should go via the Xen or x86 tree, since that's how most
> > > > >>> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> > > > >>> If you think otherwise, let me know.
> > > > >> OK, I applied it to Xen tree's for-linus-4.11.
> > > > > Hm, we want this (c74fd80f2f4) in stable too, don't we?
> > > > 
> > > > 
> > > > Maybe.
> > > > 
> > > > Per https://lists.xen.org/archives/html/xen-devel/2017-01/msg00987.html
> > > > it may break things on older (4.4-) hypervisors. They are out of
> > > > support, which is why this patch went in now but I am not sure this
> > > > automatically applies to stable kernels.
> > > > 
> > > > Stefano?
> > > 
> > > This is a difficult call. We could just say that all the broken Xen
> > > versions are out of support, so let's fix all the Linux kernel stable
> > > trees that we can.
> > > 
> > > Or we could give a look at the release dates. Linux 3.18 is still
> > > maintained and was tagged on Dec 7 2014.
> > 
> > Don't do anything "special" for 3.18 if you have to.  I'm only
> > semi-maintaining it because some SoC vendors never upstreamed their
> > trees and lots of devices rely on it.  None of them use Xen on their
> > platforms, so no need for me to backport any change there.
> 
> Thanks Greg, that is good info. Is 4.4 the oldest Linux tree fully
> maintained?

Well, the one that _I_ fully maintain :)

There are some older ones, but you are free to say how far back any
patch should go, you're the maintainers of this code, not me...

thanks,

greg k-h

Patch

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index bedfab9..a00a6c0 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -234,23 +234,14 @@  static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		return 1;
 
 	for_each_pci_msi_entry(msidesc, dev) {
-		__pci_read_msi_msg(msidesc, &msg);
-		pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
-			((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
-		if (msg.data != XEN_PIRQ_MSI_DATA ||
-		    xen_irq_from_pirq(pirq) < 0) {
-			pirq = xen_allocate_pirq_msi(dev, msidesc);
-			if (pirq < 0) {
-				irq = -ENODEV;
-				goto error;
-			}
-			xen_msi_compose_msg(dev, pirq, &msg);
-			__pci_write_msi_msg(msidesc, &msg);
-			dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
-		} else {
-			dev_dbg(&dev->dev,
-				"xen: msi already bound to pirq=%d\n", pirq);
+		pirq = xen_allocate_pirq_msi(dev, msidesc);
+		if (pirq < 0) {
+			irq = -ENODEV;
+			goto error;
 		}
+		xen_msi_compose_msg(dev, pirq, &msg);
+		__pci_write_msi_msg(msidesc, &msg);
+		dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
 		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
 					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
 					       (type == PCI_CAP_ID_MSIX) ?
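
To make the failure sequence from the commit message concrete, here is
a toy model under deliberately simplified assumptions: the allocator
below stands in for the hypervisor's pirq space, pirq_unmap() stands in
for Qemu freeing pirqs when MSI/MSIX is disabled, and all names here
are hypothetical rather than kernel or Qemu APIs.

#include <stdbool.h>
#include <stdio.h>

#define NPIRQS 8
static bool pirq_in_use[NPIRQS];

/* Hypervisor stand-in: hand out the lowest free pirq. */
static int pirq_alloc(void)
{
	for (int i = 0; i < NPIRQS; i++) {
		if (!pirq_in_use[i]) {
			pirq_in_use[i] = true;
			return i;
		}
	}
	return -1;
}

/* Qemu stand-in: unmap (free) a pirq when the device disables MSI/MSIX. */
static void pirq_unmap(int pirq)
{
	pirq_in_use[pirq] = false;
}

int main(void)
{
	/* Controller A sets up a single vector for its config reads; the
	 * pre-patch kernel leaves this pirq cached in A's MSI message. */
	int a_cached = pirq_alloc();

	/* A disables MSI-X; Qemu unmaps the pirq behind the kernel's back. */
	pirq_unmap(a_cached);

	/* Controller B, probing concurrently, is handed the same pirq. */
	int b = pirq_alloc();

	/* Pre-patch, A trusts its cached value instead of reallocating. */
	printf("A's cached pirq: %d, B's live pirq: %d -> %s\n",
	       a_cached, b,
	       a_cached == b ? "collision, A's init fails" : "no collision");
	return 0;
}

With the patch applied, A would instead call xen_allocate_pirq_msi()
again at that point and receive a fresh pirq rather than the stale
cached one.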