[V5,4/8] nvme-pci: Simplify interrupt allocation

Message ID 20190214211759.699390983@linutronix.de (mailing list archive)
State Superseded, archived
Series genirq/affinity: Overhaul the multiple interrupt sets support

Commit Message

Thomas Gleixner Feb. 14, 2019, 8:47 p.m. UTC
From: Ming Lei <ming.lei@redhat.com>

The NVME PCI driver contains a tedious mechanism for interrupt
allocation, which is necessary to adjust the number and size of interrupt
sets to the maximum available number of interrupts, which in turn depends
on the underlying PCI capabilities and the available CPU resources.

It works around the former shortcomings of the PCI and core interrupt
allocation mechanisms in combination with interrupt sets.

The PCI interrupt allocation function lets the caller provide a minimum
and a maximum number of interrupts to be allocated and tries to allocate as
many as possible. This worked without driver interaction as long as there
was only a single set of interrupts to handle.
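
For a single set, this is the familiar pattern (a condensed sketch;
'max_vecs' and 'nr_vecs' are placeholder names, the call site in the
patch below uses the same flags):

	/*
	 * Ask for anywhere between 1 and max_vecs vectors. The PCI core
	 * retries with fewer vectors on its own until the allocation
	 * succeeds or falls below the minimum.
	 */
	int nr_vecs = pci_alloc_irq_vectors_affinity(pdev, 1, max_vecs,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (nr_vecs < 0)
		return nr_vecs;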

With the addition of support for multiple interrupt sets in the generic
affinity spreading logic, which is invoked from the PCI interrupt
allocation, the adaptive loop in the PCI interrupt allocation did not
work for multiple interrupt sets. The reason is that depending on the
total number of interrupts which the PCI allocation adaptive loop tries
to allocate in each step, the number and the size of the interrupt sets
need to be adapted as well. Due to the way the interrupt sets support was
implemented there was no way for the PCI interrupt allocation code or the
core affinity spreading mechanism to invoke a driver specific function
for adapting the interrupt sets configuration.
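
Concretely: with one vector reserved for the admin queue, an adaptive
step down from, say, 65 to 33 total vectors shrinks the space available
for the sets from 64 to 32 interrupts, so a split like {32, 32} has to
be recalculated to something like {16, 16}, and only the driver knows
how it wants to redistribute that space.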

As a consequence the driver had to implement another adaptive loop around
the PCI interrupt allocation function, calling it with the maximum and
minimum number of interrupts set to the same value. This ensured that the
allocation either succeeded or immediately failed without any attempt to
adjust the number of interrupts in the PCI code.
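
Condensed, the removed driver-side loop (shown in full in the diff
below) amounted to:

	do {
		nvme_calc_io_queues(dev, irq_queues);
		irq_sets[0] = dev->io_queues[HCTX_TYPE_DEFAULT];
		irq_sets[1] = dev->io_queues[HCTX_TYPE_READ];
		/* minvec == maxvec: succeed exactly or fail, no PCI retry */
		result = pci_alloc_irq_vectors_affinity(pdev, irq_queues,
				irq_queues,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
		if (result == -ENOSPC && --irq_queues)
			continue;	/* shrink the request and retry */
		break;
	} while (1);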

The core code now allows drivers to provide a callback to recalculate the
number and the size of interrupt sets during PCI interrupt allocation,
which in turn allows the PCI interrupt allocation function to be called
in the same way as with a single set of interrupts. The PCI code handles
the adaptive loop and the interrupt affinity spreading mechanism invokes
the driver callback to adapt the interrupt set configuration to the
current loop value. This replaces the adaptive loop in the driver
completely.

Implement the NVME specific callback which adjusts the interrupt sets
configuration and remove the adaptive allocation loop.
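
For reference, the struct irq_affinity fields involved in the new
scheme look roughly like this (a sketch based on this series; see the
actual header for the authoritative layout):

	struct irq_affinity {
		unsigned int	pre_vectors;	/* not spread; e.g. admin queue */
		unsigned int	post_vectors;	/* not spread; at the end */
		unsigned int	nr_sets;	/* number of interrupt sets */
		unsigned int	set_size[IRQ_AFFINITY_MAX_SETS];
		void		(*calc_sets)(struct irq_affinity *, unsigned int nvecs);
		void		*priv;		/* driver data for calc_sets */
	};

On each step of the adaptive loop the core invokes the callback with the
number of vectors which remain after pre_vectors and post_vectors are
reserved, so the driver only has to redistribute that space across its
sets.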

[ tglx: Simplify the callback further and restore the dropped adjustment of
  	number of sets ]

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 drivers/nvme/host/pci.c |  108 ++++++++++++------------------------------------
 1 file changed, 28 insertions(+), 80 deletions(-)

Comments

Ming Lei Feb. 14, 2019, 10:41 p.m. UTC | #1
On Thu, Feb 14, 2019 at 09:47:59PM +0100, Thomas Gleixner wrote:
> From: Ming Lei <ming.lei@redhat.com>
> 
> [...]
> 
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2041,41 +2041,32 @@ static int nvme_setup_host_mem(struct nv
>  	return ret;
>  }
>  
> -/* irq_queues covers admin queue */
> -static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
> +/*
> + * nirqs is the number of interrupts available for write and read
> + * queues. The core already reserved an interrupt for the admin queue.
> + */
> +static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
>  {
> -	unsigned int this_w_queues = write_queues;
> -
> -	WARN_ON(!irq_queues);
> -
> -	/*
> -	 * Setup read/write queue split, assign admin queue one independent
> -	 * irq vector if irq_queues is > 1.
> -	 */
> -	if (irq_queues <= 2) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -		return;
> -	}
> +	struct nvme_dev *dev = affd->priv;
> +	unsigned int nr_read_queues;
>  
>  	/*
> -	 * If 'write_queues' is set, ensure it leaves room for at least
> -	 * one read queue and one admin queue
> -	 */
> -	if (this_w_queues >= irq_queues)
> -		this_w_queues = irq_queues - 2;
> -
> -	/*
> -	 * If 'write_queues' is set to zero, reads and writes will share
> -	 * a queue set.
> -	 */
> -	if (!this_w_queues) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -	} else {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;
> -		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;
> -	}
> +	 * If only one interrupt is available, combine write and read
> +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> +	 * least one read queue.
> +	 */
> +	if (nrirqs == 1)
> +		nr_read_queues = 0;
> +	else if (write_queues >= nrirqs)
> +		nr_read_queues = nrirqs - 1;
> +	else
> +		nr_read_queues = nrirqs - write_queues;
> +
> +	dev->io_queues[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> +	affd->set_size[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> +	dev->io_queues[HCTX_TYPE_READ] = nr_read_queues;
> +	affd->set_size[HCTX_TYPE_READ] = nr_read_queues;
> +	affd->nr_sets = nr_read_queues ? 2 : 1;
>  }

.calc_sets is called only if more interrupts than .pre_vectors are
available, so dev->io_queues[HCTX_TYPE_DEFAULT] may not be set in case of
(nvecs == affd->pre_vectors + affd->post_vectors).

Thanks,
Ming
Thomas Gleixner Feb. 14, 2019, 11:55 p.m. UTC | #2
On Fri, 15 Feb 2019, Ming Lei wrote:
> > +	 * If only one interrupt is available, combine write and read
> > +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> > +	 * least one read queue.
> > +	 */
> > +	if (nrirqs == 1)
> > +		nr_read_queues = 0;
> > +	else if (write_queues >= nrirqs)
> > +		nr_read_queues = nrirqs - 1;
> > +	else
> > +		nr_read_queues = nrirqs - write_queues;
> > +
> > +	dev->io_queues[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> > +	affd->set_size[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> > +	dev->io_queues[HCTX_TYPE_READ] = nr_read_queues;
> > +	affd->set_size[HCTX_TYPE_READ] = nr_read_queues;
> > +	affd->nr_sets = nr_read_queues ? 2 : 1;
> >  }
> 
> .calc_sets is called only if more interrupts than .pre_vectors are
> available, so dev->io_queues[HCTX_TYPE_DEFAULT] may not be set in case of
> (nvecs == affd->pre_vectors + affd->post_vectors).

Hmm, good catch. The delta patch below should fix that, but I have to go
through all the possible cases in pci_alloc_irq_vectors_affinity() once
more with brain awake.

Thanks,

	tglx

8<---------------------

--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2092,6 +2092,10 @@ static int nvme_setup_irqs(struct nvme_d
 	}
 	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
 
+	/* Initialize for the single interrupt case */
+	dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
+	dev->io_queues[HCTX_TYPE_READ] = 0;
+
 	return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
 			      PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
 }
Marc Zyngier Feb. 15, 2019, 9:24 a.m. UTC | #3
On Thu, 14 Feb 2019 20:47:59 +0000,
Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> From: Ming Lei <ming.lei@redhat.com>
> 
> [...]
> 
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2041,41 +2041,32 @@ static int nvme_setup_host_mem(struct nv
>  	return ret;
>  }
>  
> -/* irq_queues covers admin queue */
> -static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
> +/*
> + * nirqs is the number of interrupts available for write and read
> + * queues. The core already reserved an interrupt for the admin queue.
> + */
> +static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
>  {
> -	unsigned int this_w_queues = write_queues;
> -
> -	WARN_ON(!irq_queues);
> -
> -	/*
> -	 * Setup read/write queue split, assign admin queue one independent
> -	 * irq vector if irq_queues is > 1.
> -	 */
> -	if (irq_queues <= 2) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -		return;
> -	}
> +	struct nvme_dev *dev = affd->priv;
> +	unsigned int nr_read_queues;
>  
>  	/*
> -	 * If 'write_queues' is set, ensure it leaves room for at least
> -	 * one read queue and one admin queue
> -	 */
> -	if (this_w_queues >= irq_queues)
> -		this_w_queues = irq_queues - 2;
> -
> -	/*
> -	 * If 'write_queues' is set to zero, reads and writes will share
> -	 * a queue set.
> -	 */
> -	if (!this_w_queues) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -	} else {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;
> -		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;
> -	}
> +	 * If only one interrupt is available, combine write and read
> +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> +	 * least one read queue.

[Full disclaimer: I only have had two coffees this morning, and it is
only at the fourth that my brain is able to kick in...]

I don't know much about NVME, but I feel like there is a small
disconnect between the code and the above comment, which says "leave
room for at least one read queue"...

> +	 */
> +	if (nrirqs == 1)
> +		nr_read_queues = 0;
> +	else if (write_queues >= nrirqs)
> +		nr_read_queues = nrirqs - 1;

... while this seems to ensure that we carve out one write queue from
the irq set. It looks like a departure from the original code, which
would set nr_read_queues to 1 in that particular case.

Thanks,

	M.
Thomas Gleixner Feb. 15, 2019, 9:52 a.m. UTC | #4
On Fri, 15 Feb 2019, Marc Zyngier wrote:
> On Thu, 14 Feb 2019 20:47:59 +0000,
> Thomas Gleixner <tglx@linutronix.de> wrote:
> > [...]
> > 
> > +	 * If only one interrupt is available, combine write and read
> > +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> > +	 * least one read queue.
> 
> [Full disclaimer: I only have had two coffees this morning, and it is
> only at the fourth that my brain is able to kick in...]
> 
> I don't know much about NVME, but I feel like there is a small
> disconnect between the code and the above comment, which says "leave
> room for at least one read queue"...
> 
> > +	 */
> > +	if (nrirqs == 1)
> > +		nr_read_queues = 0;
> > +	else if (write_queues >= nrirqs)
> > +		nr_read_queues = nrirqs - 1;
> 
> ... while this seems to ensure that we carve out one write queue from
> the irq set. It looks like a departure from the original code, which
> would set nr_read_queues to 1 in that particular case.

Bah. right you are.
Thomas Gleixner Feb. 15, 2019, 9:54 a.m. UTC | #5
On Fri, 15 Feb 2019, Thomas Gleixner wrote:
> On Fri, 15 Feb 2019, Marc Zyngier wrote:
> > > +	 */
> > > +	if (nrirqs == 1)
> > > +		nr_read_queues = 0;
> > > +	else if (write_queues >= nrirqs)
> > > +		nr_read_queues = nrirqs - 1;
> > 
> > ... while this seems to ensure that we carve out one write queue from
> > the irq set. It looks like a departure from the original code, which
> > would set nr_read_queues to 1 in that particular case.
> 
> Bah. right you are.

That needs to be:

     nr_read_queues = 1;

obviously.

/me blushes.
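
With that folded in, the branch would read (sketch):

	if (nrirqs == 1)
		nr_read_queues = 0;
	else if (write_queues >= nrirqs)
		nr_read_queues = 1;	/* leave room for >= 1 read queue */
	else
		nr_read_queues = nrirqs - write_queues;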
Thomas Gleixner Feb. 15, 2019, 11 p.m. UTC | #6
On Fri, 15 Feb 2019, Thomas Gleixner wrote:

> On Fri, 15 Feb 2019, Ming Lei wrote:
> > > +	 * If only one interrupt is available, combine write and read
> > > +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> > > +	 * least one read queue.
> > > +	 */
> > > +	if (nrirqs == 1)
> > > +		nr_read_queues = 0;
> > > +	else if (write_queues >= nrirqs)
> > > +		nr_read_queues = nrirqs - 1;
> > > +	else
> > > +		nr_read_queues = nrirqs - write_queues;
> > > +
> > > +	dev->io_queues[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> > > +	affd->set_size[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> > > +	dev->io_queues[HCTX_TYPE_READ] = nr_read_queues;
> > > +	affd->set_size[HCTX_TYPE_READ] = nr_read_queues;
> > > +	affd->nr_sets = nr_read_queues ? 2 : 1;
> > >  }
> > 
> > .calc_sets is called only if more interrupts than .pre_vectors are
> > available, so dev->io_queues[HCTX_TYPE_DEFAULT] may not be set in case of
> > (nvecs == affd->pre_vectors + affd->post_vectors).
> 
> Hmm, good catch. The delta patch below should fix that, but I have to go
> through all the possible cases in pci_alloc_irq_vectors_affinity() once
> more with brain awake.

The existing logic in the driver is somewhat strange. If there is only a
single interrupt available, i.e. no separation of admin and I/O queue, then
dev->io_queues[HCTX_TYPE_DEFAULT] is still set to 1.

Now with the callback scheme (independent of my changes) that breaks in
various ways:

  1) irq_create_affinity_masks() bails out early:

	if (nvecs == affd->pre_vectors + affd->post_vectors)
		return NULL;

     So the callback won't be invoked and the last content of
     dev->io_queues is preserved. By chance this might end up with

	dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
	dev->io_queues[HCTX_TYPE_READ] = 0;

    but that does not happen by design.


 2) pci_alloc_irq_vectors_affinity() has the following flow:

    __pci_enable_msix_range()
      for (...) {
            __pci_enable_msix()
      }

    If that fails because MSIX is not supported, then neither the affinity
    spreading code nor the callback has been called yet.
    Now it goes on and tries MSI, which is the same as the above.

    When that fails because MSI is not supported, same situation as above
    and the code tries to allocate the legacy interrupt.

    Now with the initial initialization that case is covered, but not the
    following error case:

      Assume MSIX is enabled, but __pci_enable_msix() fails with -ENOMEM
      after having called irq_create_affinity_masks() and subsequently the
      callback with maxvecs.

      Then pci_alloc_irq_vectors_affinity() will try MSI and fail _BEFORE_
      the callback is invoked.

      The next step is to install the legacy interrupt, but that invokes
      neither the affinity code nor the callback.

      At the end dev->io_queues[*] are still initialized with the failed
      attempt of enabling MSIX based on maxvecs.

      Result: inconsistent state ...

I have an idea how to fix that, but that's again a task for brain being
awake. Will take care of that tomorrow morning.

Thanks,

	tglx

Patch

--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2041,41 +2041,32 @@  static int nvme_setup_host_mem(struct nv
 	return ret;
 }
 
-/* irq_queues covers admin queue */
-static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
+/*
+ * nirqs is the number of interrupts available for write and read
+ * queues. The core already reserved an interrupt for the admin queue.
+ */
+static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
 {
-	unsigned int this_w_queues = write_queues;
-
-	WARN_ON(!irq_queues);
-
-	/*
-	 * Setup read/write queue split, assign admin queue one independent
-	 * irq vector if irq_queues is > 1.
-	 */
-	if (irq_queues <= 2) {
-		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
-		dev->io_queues[HCTX_TYPE_READ] = 0;
-		return;
-	}
+	struct nvme_dev *dev = affd->priv;
+	unsigned int nr_read_queues;
 
 	/*
-	 * If 'write_queues' is set, ensure it leaves room for at least
-	 * one read queue and one admin queue
-	 */
-	if (this_w_queues >= irq_queues)
-		this_w_queues = irq_queues - 2;
-
-	/*
-	 * If 'write_queues' is set to zero, reads and writes will share
-	 * a queue set.
-	 */
-	if (!this_w_queues) {
-		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;
-		dev->io_queues[HCTX_TYPE_READ] = 0;
-	} else {
-		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;
-		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;
-	}
+	 * If only one interrupt is available, combine write and read
+	 * queues. If 'write_queues' is set, ensure it leaves room for at
+	 * least one read queue.
+	 */
+	if (nrirqs == 1)
+		nr_read_queues = 0;
+	else if (write_queues >= nrirqs)
+		nr_read_queues = nrirqs - 1;
+	else
+		nr_read_queues = nrirqs - write_queues;
+
+	dev->io_queues[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
+	affd->set_size[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
+	dev->io_queues[HCTX_TYPE_READ] = nr_read_queues;
+	affd->set_size[HCTX_TYPE_READ] = nr_read_queues;
+	affd->nr_sets = nr_read_queues ? 2 : 1;
 }
 
 static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
@@ -2083,10 +2074,9 @@  static int nvme_setup_irqs(struct nvme_d
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	struct irq_affinity affd = {
 		.pre_vectors	= 1,
-		.nr_sets	= 2,
+		.calc_sets	= nvme_calc_irq_sets,
+		.priv		= dev,
 	};
-	unsigned int *irq_sets = affd.set_size;
-	int result = 0;
 	unsigned int irq_queues, this_p_queues;
 
 	/*
@@ -2102,51 +2092,8 @@  static int nvme_setup_irqs(struct nvme_d
 	}
 	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
 
-	/*
-	 * For irq sets, we have to ask for minvec == maxvec. This passes
-	 * any reduction back to us, so we can adjust our queue counts and
-	 * IRQ vector needs.
-	 */
-	do {
-		nvme_calc_io_queues(dev, irq_queues);
-		irq_sets[0] = dev->io_queues[HCTX_TYPE_DEFAULT];
-		irq_sets[1] = dev->io_queues[HCTX_TYPE_READ];
-		if (!irq_sets[1])
-			affd.nr_sets = 1;
-
-		/*
-		 * If we got a failure and we're down to asking for just
-		 * 1 + 1 queues, just ask for a single vector. We'll share
-		 * that between the single IO queue and the admin queue.
-		 * Otherwise, we assign one independent vector to admin queue.
-		 */
-		if (irq_queues > 1)
-			irq_queues = irq_sets[0] + irq_sets[1] + 1;
-
-		result = pci_alloc_irq_vectors_affinity(pdev, irq_queues,
-				irq_queues,
-				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
-
-		/*
-		 * Need to reduce our vec counts. If we get ENOSPC, the
-		 * platform should support mulitple vecs, we just need
-		 * to decrease our ask. If we get EINVAL, the platform
-		 * likely does not. Back down to ask for just one vector.
-		 */
-		if (result == -ENOSPC) {
-			irq_queues--;
-			if (!irq_queues)
-				return result;
-			continue;
-		} else if (result == -EINVAL) {
-			irq_queues = 1;
-			continue;
-		} else if (result <= 0)
-			return -EIO;
-		break;
-	} while (1);
-
-	return result;
+	return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
+			      PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
 }
 
 static void nvme_disable_io_queues(struct nvme_dev *dev)
@@ -3021,6 +2968,7 @@  static struct pci_driver nvme_driver = {
 
 static int __init nvme_init(void)
 {
+	BUILD_BUG_ON(IRQ_AFFINITY_MAX_SETS < 2);
 	return pci_register_driver(&nvme_driver);
 }
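
(The BUILD_BUG_ON documents the driver's assumption that the core
supports at least the two interrupt sets, default/write and read, which
nvme_calc_irq_sets may configure.)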