
[v4,9/9] PCI/MSI: Introduce pci_auto_enable_msi*() family helpers

Message ID bbf6f73ef3e16e5dfd7d41fc1e4412f641acb0f3.1387140921.git.agordeev@redhat.com (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Alexander Gordeev Dec. 16, 2013, 8:35 a.m. UTC
Currently, many device drivers need to call pci_enable_msix() for
MSI-X or pci_enable_msi_block() for MSI repeatedly in a loop until
the call either succeeds or fails. This update generalizes this
usage pattern and introduces the pci_auto_enable_msi*() family of
helpers.

As a result, device drivers no longer have to deal with the tri-state
return values of pci_enable_msix() and pci_enable_msi_block() directly
and are expected to end up with clearer and more straightforward code.

So, for example, the request loop described in the documentation...

	int foo_driver_enable_msix(struct foo_adapter *adapter,
				   int nvec)
	{
		while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
			rc = pci_enable_msix(adapter->pdev,
					     adapter->msix_entries,
					     nvec);
			if (rc > 0)
				nvec = rc;
			else
				return rc;
		}

		return -ENOSPC;
	}

...would turn into a single helper call....

	rc = pci_auto_enable_msix_range(adapter->pdev,
					adapter->msix_entries,
					FOO_DRIVER_MINIMUM_NVEC,
					nvec);

Device drivers with more specific requirements (e.g. a number of
MSI-X vectors which is a multiple of a certain number within a
specified range) would still need to implement the loop using the
two old functions.
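
For example, a driver that must use a multiple of four vectors would
still carry a loop along these lines (an illustrative sketch only,
reusing the foo_* names from the example above):

	int foo_driver_enable_msix_multiple_of_4(struct foo_adapter *adapter,
						 int nvec)
	{
		int rc;

		nvec = rounddown(nvec, 4);
		while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
			rc = pci_enable_msix(adapter->pdev,
					     adapter->msix_entries,
					     nvec);
			if (rc == 0)
				return nvec;	/* got 'nvec' vectors */
			if (rc < 0)
				return rc;	/* real error, give up */
			/* only 'rc' vectors available, retry with a
			   smaller multiple of four */
			nvec = rounddown(rc, 4);
		}

		return -ENOSPC;
	}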

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Suggested-by: Ben Hutchings <bhutchings@solarflare.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
---
 Documentation/PCI/MSI-HOWTO.txt |  134 +++++++++++++++++++++++++++++++++++++--
 drivers/pci/msi.c               |   74 +++++++++++++++++++++
 include/linux/pci.h             |   57 +++++++++++++++++
 3 files changed, 260 insertions(+), 5 deletions(-)

Comments

Bjorn Helgaas Dec. 18, 2013, 12:30 a.m. UTC | #1
On Mon, Dec 16, 2013 at 09:35:02AM +0100, Alexander Gordeev wrote:
> Currently many device drivers need contiguously call functions
> pci_enable_msix() for MSI-X or pci_enable_msi_block() for MSI
> in a loop until success or failure. This update generalizes
> this usage pattern and introduces pci_auto_enable_msi*() family
> helpers.

I think the idea of this series is excellent and will really make MSI/MSI-X
easier to use and less error-prone for drivers, so I don't want this to
sound discouraging.  I haven't been paying attention to this in detail, so
likely some of my questions have already been hashed out and I missed the
answers.

After this patch, we would have:

    pci_enable_msi()				# existing (1 vector)
    pci_enable_msi_block(nvec)			# existing
    pci_enable_msi_block_auto(maxvec)		# existing (removed)

    pci_auto_enable_msi(maxvec)			# new	(1-maxvec)
    pci_auto_enable_msi_range(minvec, maxvec)	# new
    pci_auto_enable_msi_exact(nvec)		# new	(nvec-nvec)

    pci_enable_msix(nvec)			# existing

    pci_auto_enable_msix(maxvec)		# new	(1-maxvec)
    pci_auto_enable_msix_range(minvec, maxvec)	# new
    pci_auto_enable_msix_exact(nvec)		# new	(nvec-nvec)

That seems like a lot of interfaces to document and understand, especially
since most of them are built on each other.  I'd prefer just these:

    pci_enable_msi()				# existing (1 vector)
    pci_enable_msi_range(minvec, maxvec)	# new

    pci_enable_msix(nvec)			# existing
    pci_enable_msix_range(minvec, maxvec)	# new

with examples in the documentation about how to call them with ranges like
(1, maxvec), (nvec, nvec), etc.  I think that will be easier than
understanding several interfaces.
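
For example, the special cases could then be spelled directly at the
call sites, something like (just a sketch, with placeholder names):

    /* any number of MSI vectors between 1 and maxvec */
    rc = pci_enable_msi_range(pdev, 1, maxvec);

    /* exactly nvec MSI-X vectors, no more and no less */
    rc = pci_enable_msix_range(pdev, msix_entries, nvec, nvec);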

I don't think the "auto" in the names really adds anything, does it?  The
whole point of supplying a range is that the core has the flexibility to
choose any number of vectors within the range.

> As result, device drivers do not have to deal with tri-state
> return values from pci_enable_msix() and pci_enable_msi_block()
> functions directly and expected to have more clearer and straight
> code.

I only see five users of pci_enable_msi_block() (nvme, ath10k, wil6210,
ipr, vfio); we can easily convert those to use pci_enable_msi_range() and
then remove pci_enable_msi_block().

pci_enable_msi() itself can simply be pci_enable_msi_range(1, 1).

There are nearly 80 callers of pci_enable_msix(), so that's a bit harder.
Can we deprecate that somehow, and incrementally convert callers to use
pci_enable_msix_range() instead?  Maybe you're already planning that; I
know you dropped some driver patches from the series for now, and I didn't
look to see exactly what they did.

It would be good if pci_enable_msix() could be implemented in terms of
pci_enable_msix_range(nvec, nvec), with a little extra glue to handle the
positive return values.
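
Roughly something like this, perhaps (only a sketch; it assumes
pci_enable_msix_range() returns the allocated count on success and uses
pci_get_msix_vec_count() from earlier in the series to reproduce the
positive "this many would fit" return):

    int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
                        int nvec)
    {
        int nr_entries = pci_get_msix_vec_count(dev);
        int rc;

        if (nr_entries < 0)
            return nr_entries;
        if (nvec > nr_entries)
            return nr_entries;      /* tri-state: how many would fit */

        rc = pci_enable_msix_range(dev, entries, nvec, nvec);
        return rc < 0 ? rc : 0;     /* 0 on success, as today */
    }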

> So i.e. the request loop described in the documentation...
> 
> 	int foo_driver_enable_msix(struct foo_adapter *adapter,
> 				   int nvec)
> 	{
> 		while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
> 			rc = pci_enable_msix(adapter->pdev,
> 					     adapter->msix_entries,
> 					     nvec);
> 			if (rc > 0)
> 				nvec = rc;
> 			else
> 				return rc;
> 		}
> 
> 		return -ENOSPC;
> 	}

I think we should remove this example from the documentation because we
want to get rid of the tri-state return idea completely.  I think the same
thing could be accomplished with pci_enable_msix_range() (correct me if I'm
wrong).

> ...would turn into a single helper call....
> 
> 	rc = pci_auto_enable_msix_range(adapter->pdev,
> 					adapter->msix_entries,
> 					FOO_DRIVER_MINIMUM_NVEC,
> 					nvec);
>
> Device drivers with more specific requirements (i.e. a number of
> MSI-Xs which is a multiple of a certain number within a specified
> range) would still need to implement the loop using the two old
> functions.
> 
> Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
> Suggested-by: Ben Hutchings <bhutchings@solarflare.com>
> Reviewed-by: Tejun Heo <tj@kernel.org>
> ---
>  Documentation/PCI/MSI-HOWTO.txt |  134 +++++++++++++++++++++++++++++++++++++--
>  drivers/pci/msi.c               |   74 +++++++++++++++++++++
>  include/linux/pci.h             |   57 +++++++++++++++++
>  3 files changed, 260 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/PCI/MSI-HOWTO.txt b/Documentation/PCI/MSI-HOWTO.txt
> index 7d19656..168d9c3 100644
> --- a/Documentation/PCI/MSI-HOWTO.txt
> +++ b/Documentation/PCI/MSI-HOWTO.txt
> @@ -127,7 +127,62 @@ on the number of vectors that can be allocated; pci_enable_msi_block()
>  returns as soon as it finds any constraint that doesn't allow the
>  call to succeed.
>  
> -4.2.3 pci_disable_msi
> +4.2.3 pci_auto_enable_msi_range
> +
> +int pci_auto_enable_msi_range(struct pci_dev *dev, struct msix_entry *entries,
> +			      int minvec, int maxvec)
> +
> +This variation on pci_enable_msi_block() call allows a device driver to
> +request any number of MSIs within specified range 'minvec' to 'maxvec'.
> +Whenever possible device drivers are encouraged to use this function
> +rather than explicit request loop calling pci_enable_msi_block().

I think we should remove pci_enable_msi_block() completely, including its
mention here.

> +If this function returns a negative number, it indicates an error and
> +the driver should not attempt to request any more MSI interrupts for
> +this device.
> +
> +If this function returns a positive number it indicates at least the
> +returned number of MSI interrupts have been successfully allocated (it may
> +have allocated more in order to satisfy the power-of-two requirement).

I assume this means the return value may be larger than the "maxvec"
requested, right?  And the driver is free to use all the vectors up to the
return value, even those above maxvec, right?

> +Device drivers can use this number to further initialize devices.
> +
> +4.2.4 pci_auto_enable_msi
> +
> +int pci_auto_enable_msi(struct pci_dev *dev,
> +			struct msix_entry *entries, int maxvec)
> +
> +This variation on pci_enable_msi_block() call allows a device driver to
> +request any number of MSIs up to 'maxvec'. Whenever possible device drivers
> +are encouraged to use this function rather than explicit request loop
> +calling pci_enable_msi_block().
> +
> +If this function returns a negative number, it indicates an error and
> +the driver should not attempt to request any more MSI interrupts for
> +this device.
> +
> +If this function returns a positive number it indicates at least the
> +returned number of MSI interrupts have been successfully allocated (it may
> +have allocated more in order to satisfy the power-of-two requirement).
> +Device drivers can use this number to further initialize devices.
> +
> +4.2.5 pci_auto_enable_msi_exact
> +
> +int pci_auto_enable_msi_exact(struct pci_dev *dev,
> +			      struct msix_entry *entries, int nvec)
> +
> +This variation on pci_enable_msi_block() call allows a device driver to
> +request exactly 'nvec' MSIs.
> +
> +If this function returns a negative number, it indicates an error and
> +the driver should not attempt to request any more MSI interrupts for
> +this device.
> +
> +If this function returns the value of 'nvec' it indicates MSI interrupts
> +have been successfully allocated. No other value in case of success could
> +be returned. Device drivers can use this value to further allocate and
> +initialize device resources.
> +
> +4.2.6 pci_disable_msi
>  
>  void pci_disable_msi(struct pci_dev *dev)
>  
> @@ -142,7 +197,7 @@ on any interrupt for which it previously called request_irq().
>  Failure to do so results in a BUG_ON(), leaving the device with
>  MSI enabled and thus leaking its vector.
>  
> -4.2.4 pci_get_msi_vec_count
> +4.2.7 pci_get_msi_vec_count
>  
>  int pci_get_msi_vec_count(struct pci_dev *dev)
>  
> @@ -222,7 +277,76 @@ static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
>  	return -ENOSPC;
>  }
>  
> -4.3.2 pci_disable_msix
> +4.3.2 pci_auto_enable_msix_range
> +
> +int pci_auto_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
> +			       int minvec, int maxvec)
> +
> +This variation on pci_enable_msix() call allows a device driver to request
> +any number of MSI-Xs within specified range 'minvec' to 'maxvec'. Whenever
> +possible device drivers are encouraged to use this function rather than
> +explicit request loop calling pci_enable_msix().

I guess maybe I'm wrong, and there *are* cases where
pci_enable_msix_range() isn't sufficient to replace pci_enable_msix()?
Can you remind me what they are?

> +If this function returns a negative number, it indicates an error and
> +the driver should not attempt to allocate any more MSI-X interrupts for
> +this device.
> +
> +If this function returns a positive number it indicates the number of
> +MSI-X interrupts that have been successfully allocated. Device drivers
> +can use this number to further allocate and initialize device resources.
> +
> +A modified function calling pci_enable_msix() in a loop might look like:

There's no loop in this example.  Don't use "A modified function"; that
only makes sense during the transition from pci_enable_msix() to
pci_enable_msix_range().  "A function using pci_enable_msix_range() might
look like this:" should be sufficient.

> +static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
> +{
> +	rc = pci_auto_enable_msix_range(adapter->pdev, adapter->msix_entries,
> +					FOO_DRIVER_MINIMUM_NVEC, nvec);
> +	if (rc < 0)
> +		return rc;
> +
> +	rc = foo_driver_init_other(adapter, rc);
> +	if (rc < 0)
> +		pci_disable_msix(adapter->pdev);
> +
> +	return rc;
> +}
> +
> +4.3.3 pci_auto_enable_msix
> +
> +int pci_auto_enable_msix(struct pci_dev *dev,
> +			 struct msix_entry *entries, int maxvec)
> +
> +This variation on pci_enable_msix() call allows a device driver to request
> +any number of MSI-Xs up to 'maxvec'. Whenever possible device drivers are
> +encouraged to use this function rather than explicit request loop calling
> +pci_enable_msix().
> +
> +If this function returns a negative number, it indicates an error and
> +the driver should not attempt to allocate any more MSI-X interrupts for
> +this device.
> +
> +If this function returns a positive number it indicates the number of
> +MSI-X interrupts that have been successfully allocated. Device drivers
> +can use this number to further allocate and initialize device resources.
> +
> +4.3.4 pci_auto_enable_msix_exact
> +
> +int pci_auto_enable_msix_exact(struct pci_dev *dev,
> +			       struct msix_entry *entries, int nvec)
> +
> +This variation on pci_enable_msix() call allows a device driver to request
> +exactly 'nvec' MSI-Xs.
> +
> +If this function returns a negative number, it indicates an error and
> +the driver should not attempt to allocate any more MSI-X interrupts for
> +this device.
> +
> +If this function returns the value of 'nvec' it indicates MSI-X interrupts
> +have been successfully allocated. No other value in case of success could
> +be returned. Device drivers can use this value to further allocate and
> +initialize device resources.
> +
> +4.3.5 pci_disable_msix
>  
>  void pci_disable_msix(struct pci_dev *dev)
>  
> @@ -236,14 +360,14 @@ on any interrupt for which it previously called request_irq().
>  Failure to do so results in a BUG_ON(), leaving the device with
>  MSI-X enabled and thus leaking its vector.
>  
> -4.3.3 The MSI-X Table
> +4.3.6 The MSI-X Table
>  
>  The MSI-X capability specifies a BAR and offset within that BAR for the
>  MSI-X Table.  This address is mapped by the PCI subsystem, and should not
>  be accessed directly by the device driver.  If the driver wishes to
>  mask or unmask an interrupt, it should call disable_irq() / enable_irq().
>  
> -4.3.4 pci_get_msix_vec_count
> +4.3.7 pci_get_msix_vec_count
>  
>  int pci_get_msix_vec_count(struct pci_dev *dev)
>  
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index 18e877f5..ccfd49b 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -1093,3 +1093,77 @@ void pci_msi_init_pci_dev(struct pci_dev *dev)
>  	if (dev->msix_cap)
>  		msix_set_enable(dev, 0);
>  }
> +
> +/**
> + * pci_auto_enable_msi_range - configure device's MSI capability structure
> + * @dev: device to configure
> + * @minvec: minimal number of interrupts to configure
> + * @maxvec: maximum number of interrupts to configure
> + *
> + * This function tries to allocate a maximum possible number of interrupts in a
> + * range between @minvec and @maxvec. It returns a negative errno if an error
> + * occurs. If it succeeds, it returns the actual number of interrupts allocated
> + * and updates the @dev's irq member to the lowest new interrupt number;
> + * the other interrupt numbers allocated to this device are consecutive.
> + **/
> +int pci_auto_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
> +{
> +	int nvec = maxvec;
> +	int rc;
> +
> +	if (maxvec < minvec)
> +		return -ERANGE;
> +
> +	do {
> +		rc = pci_enable_msi_block(dev, nvec);
> +		if (rc < 0) {
> +			return rc;
> +		} else if (rc > 0) {
> +			if (rc < minvec)
> +				return -ENOSPC;
> +			nvec = rc;
> +		}
> +	} while (rc);
> +
> +	return nvec;
> +}
> +EXPORT_SYMBOL(pci_auto_enable_msi_range);
> +
> +/**
> + * pci_auto_enable_msix_range - configure device's MSI-X capability structure
> + * @dev: pointer to the pci_dev data structure of MSI-X device function
> + * @entries: pointer to an array of MSI-X entries
> + * @minvec: minimum number of MSI-X irqs requested
> + * @maxvec: maximum number of MSI-X irqs requested
> + *
> + * Setup the MSI-X capability structure of device function with a maximum
> + * possible number of interrupts in the range between @minvec and @maxvec
> + * upon its software driver call to request for MSI-X mode enabled on its
> + * hardware device function. It returns a negative errno if an error occurs.
> + * If it succeeds, it returns the actual number of interrupts allocated and
> + * indicates the successful configuration of MSI-X capability structure
> + * with new allocated MSI-X interrupts.
> + **/
> +int pci_auto_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
> +			       int minvec, int maxvec)
> +{
> +	int nvec = maxvec;
> +	int rc;
> +
> +	if (maxvec < minvec)
> +		return -ERANGE;
> +
> +	do {
> +		rc = pci_enable_msix(dev, entries, nvec);
> +		if (rc < 0) {
> +			return rc;
> +		} else if (rc > 0) {
> +			if (rc < minvec)
> +				return -ENOSPC;
> +			nvec = rc;
> +		}
> +	} while (rc);
> +
> +	return nvec;

I think it would be better to make pci_enable_msix_range() the fundamental
implementation, with pci_enable_msix() built on top of it.  That way we
could deprecate and eventually remove pci_enable_msix() and its tri-state
return values.

Bjorn

> +}
> +EXPORT_SYMBOL(pci_auto_enable_msix_range);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 7941f06..7e30b52 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -1193,6 +1193,38 @@ static inline int pci_msi_enabled(void)
>  {
>  	return 0;
>  }
> +
> +int pci_auto_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
> +{
> +	return -ENOSYS;
> +}
> +static inline int pci_auto_enable_msi(struct pci_dev *dev, int maxvec)
> +{
> +	return -ENOSYS;
> +}
> +static inline int pci_auto_enable_msi_exact(struct pci_dev *dev, int nvec)
> +{
> +	return -ENOSYS;
> +}
> +
> +static inline int
> +pci_auto_enable_msix_range(struct pci_dev *dev,
> +			   struct msix_entry *entries, int minvec, int maxvec)
> +{
> +	return -ENOSYS;
> +}
> +static inline int
> +pci_auto_enable_msix(struct pci_dev *dev,
> +		     struct msix_entry *entries, int maxvec)
> +{
> +	return -ENOSYS;
> +}
> +static inline int
> +pci_auto_enable_msix_exact(struct pci_dev *dev,
> +			   struct msix_entry *entries, int nvec)
> +{
> +	return -ENOSYS;
> +}
>  #else
>  int pci_get_msi_vec_count(struct pci_dev *dev);
>  int pci_enable_msi_block(struct pci_dev *dev, int nvec);
> @@ -1205,6 +1237,31 @@ void pci_disable_msix(struct pci_dev *dev);
>  void msi_remove_pci_irq_vectors(struct pci_dev *dev);
>  void pci_restore_msi_state(struct pci_dev *dev);
>  int pci_msi_enabled(void);
> +
> +int pci_auto_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec);
> +static inline int pci_auto_enable_msi(struct pci_dev *dev, int maxvec)
> +{
> +	return pci_auto_enable_msi_range(dev, 1, maxvec);
> +}
> +static inline int pci_auto_enable_msi_exact(struct pci_dev *dev, int nvec)
> +{
> +	return pci_auto_enable_msi_range(dev, nvec, nvec);
> +}
> +
> +int pci_auto_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
> +			       int minvec, int maxvec);
> +static inline int
> +pci_auto_enable_msix(struct pci_dev *dev,
> +		     struct msix_entry *entries, int maxvec)
> +{
> +	return pci_auto_enable_msix_range(dev, entries, 1, maxvec);
> +}
> +static inline int
> +pci_auto_enable_msix_exact(struct pci_dev *dev,
> +			   struct msix_entry *entries, int nvec)
> +{
> +	return pci_auto_enable_msix_range(dev, entries, nvec, nvec);
> +}
>  #endif
>  
>  #ifdef CONFIG_PCIEPORTBUS
> -- 
> 1.7.7.6
> 
Alexander Gordeev Dec. 18, 2013, 1:23 p.m. UTC | #2
On Tue, Dec 17, 2013 at 05:30:02PM -0700, Bjorn Helgaas wrote:

Hi Bjorn,

Thank you for the review!

Sorry for the heavy snipping - I just wanted to focus on one fundamental
point in your suggestion and then get back to the original note.

> I only see five users of pci_enable_msi_block() (nvme, ath10k, wil6210,
> ipr, vfio); we can easily convert those to use pci_enable_msi_range() and
> then remove pci_enable_msi_block().

> It would be good if pci_enable_msix() could be implemented in terms of
> pci_enable_msix_range(nvec, nvec), with a little extra glue to handle the
> positive return values.

So you want to get rid of the tri-state "low-level" pci_enable_msi_block()
and pci_enable_msix(), right? I believe we cannot do this, since we need
to support non-standard hardware which (a) cannot be asked for an arbitrary
number of vectors within a range and (b) needs extra magic to enable MSI
operation.

E.g. below is a snippet from a real device driver that Mark Lord sent in a
previous conversation:

        xx_disable_all_irqs(dev);
        do {
                if (nvec < 2)
                        xx_prep_for_1_msix_vector(dev);
                else if (nvec < 4)
                        xx_prep_for_2_msix_vectors(dev);
                else if (nvec < 8)
                        xx_prep_for_4_msix_vectors(dev);
                else if (nvec < 16)
                        xx_prep_for_8_msix_vectors(dev);
                else
                        xx_prep_for_16_msix_vectors(dev);
                nvec = pci_enable_msix(dev->pdev, dev->irqs, dev->num_vectors);
        } while (nvec > 0);

The same probably could have been done with a pci_enable_msix_range(nvec, nvec)
call and a check for the -ENOSPC errno, but IMO it would be less graceful and
less reliable, since -ENOSPC might come from anywhere.
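
(Just to illustrate the point - Mark's loop would then become something
like the below, where only -ENOSPC triggers a fallback;
xx_prep_for_nvec_msix_vectors() is a made-up stand-in for the prep
calls above:)

        xx_disable_all_irqs(dev);
        nvec = 16;
        for (;;) {
                xx_prep_for_nvec_msix_vectors(dev, nvec);
                rc = pci_enable_msix_range(dev->pdev, dev->irqs, nvec, nvec);
                if (rc > 0)
                        break;          /* got exactly nvec vectors */
                if (rc != -ENOSPC || nvec == 1)
                        return rc;      /* anything else is a real error */
                nvec /= 2;              /* fall back to the next power of two */
        }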

IOW, I believe we need to keep the door open for custom MSI-enablement (loop)
implementations.
Bjorn Helgaas Dec. 18, 2013, 6:58 p.m. UTC | #3
On Wed, Dec 18, 2013 at 6:23 AM, Alexander Gordeev <agordeev@redhat.com> wrote:
> On Tue, Dec 17, 2013 at 05:30:02PM -0700, Bjorn Helgaas wrote:
>
> Hi Bjorn,
>
> Thank you for the review!
>
> Sorry for a heavy skipping - I just wanted to focus on a principal
> moment in your suggestion and then go on with the original note.
>
>> I only see five users of pci_enable_msi_block() (nvme, ath10k, wil6210,
>> ipr, vfio); we can easily convert those to use pci_enable_msi_range() and
>> then remove pci_enable_msi_block().
>
>> It would be good if pci_enable_msix() could be implemented in terms of
>> pci_enable_msix_range(nvec, nvec), with a little extra glue to handle the
>> positive return values.
>
> So you want to get rid of the tri-state "low-level" pci_enable_msi_block()
> and pci_enable_msix(), right? I believe we can not do this, since we need
> to support a non-standard hardware which (a) can not be asked any arbitrary
> number of vectors within a range and (b) needs extra magic to enable MSI
> operation.
>
> I.e. below is a snippet from a real device driver Mark Lord has sent in a
> previous conversation:
>
>         xx_disable_all_irqs(dev);
>         do {
>                 if (nvec < 2)
>                         xx_prep_for_1_msix_vector(dev);
>                 else if (nvec < 4)
>                         xx_prep_for_2_msix_vectors(dev);
>                 else if (nvec < 8)
>                         xx_prep_for_4_msix_vectors(dev);
>                 else if (nvec < 16)
>                         xx_prep_for_8_msix_vectors(dev);
>                 else
>                         xx_prep_for_16_msix_vectors(dev);
>                 nvec = pci_enable_msix(dev->pdev, dev->irqs, dev->num_vectors);
>         } while (nvec > 0);
>
> The same probably could have been done with pci_enable_msix_range(nvec, nvec)
> call and checking for -ENOSPC errno, but IMO it would be less graceful and
> reliable, since -ENOSPC might come from anywhere.
>
> IOW, I believe we need to keep the door open for custom MSI-enablement (loop)
> implementations.

I think this can still be done even with pci_enable_msix_range().
Here's what I'm thinking, tell me where I'm going wrong:

    rc = pci_enable_msix_range(dev->pdev, dev->irqs, 1, 16);
    if (rc < 0) { /* error */ }
    else { /* rc interrupts allocated */ }

If rc == 13 and the device can only use 8, the extra 5 would be
ignored and wasted.

If the waste is unacceptable, the driver can try this:

    rc = pci_enable_msix_range(dev->pdev, dev->irqs, 16, 16);
    if (rc < 0) {
        rc = pci_enable_msix_range(dev->pdev, dev->irqs, 8, 8);
        if (rc < 0) {
            rc = pci_enable_msix_range(dev->pdev, dev->irqs, 4, 4);
            ...
    }

    if (rc < 0) { /* error, couldn't allocate *any* interrupts */ }
    else { /* rc interrupts allocated (1, 2, 4, 8, or 16) */ }

Bjorn
Alexander Gordeev Dec. 19, 2013, 1:42 p.m. UTC | #4
On Wed, Dec 18, 2013 at 11:58:47AM -0700, Bjorn Helgaas wrote:
> If rc == 13 and the device can only use 8, the extra 5 would be
> ignored and wasted.
> 
> If the waste is unacceptable, the driver can try this:
> 
>     rc = pci_enable_msix_range(dev->pdev, dev->irqs, 16, 16);
>     if (rc < 0) {
>         rc = pci_enable_msix_range(dev->pdev, dev->irqs, 8, 8);
>         if (rc < 0) {
>             rc = pci_enable_msix_range(dev->pdev, dev->irqs, 4, 4);
>             ...
>     }

I have trouble with this fallback logic. On each failed step we get an
error and we do not know whether this is indeed an error or an indication
of insufficient MSI resources. Even -ENOSPC would not tell us much, since
it could be thrown from a lower level.

By contrast, with the tri-state return value we can distinguish and bail
out on errors right away.

So the above is a bit ungraceful to me. Combined with the possible noise
in the logs (if we keep hitting the same error) it is quite enough for me
to keep the current interfaces, at least for the time being.

>     if (rc < 0) { /* error, couldn't allocate *any* interrupts */
>     else { /* rc interrupts allocated (1, 2, 4, 8, or 16) */ }
> 
> Bjorn
Tejun Heo Dec. 19, 2013, 1:47 p.m. UTC | #5
Hello,

On Thu, Dec 19, 2013 at 02:42:45PM +0100, Alexander Gordeev wrote:
> I have troubles with this fallback logic. On each failed step we get an
> error and we do not know if this is indeed an error or an indication of
> insufficient MSI resources. Even -ENOSPC would not tell much, since it
> could be thrown from a lower level.

Well, it's not hard to define what -ENOSPC should mean.

> By contrast, with the tri-state return value we can distinguish and bail
> out on errors right away.

I kinda like that the available options are listed explicitly on the
caller side (even if it ends up being a loop).  It decreases the
chance of the caller going "oh I didn't expect _that_" and generally
makes things easier to follow.

> So the above is bit ungraceful for me. Combined with a possible waste in
> logs (if we're hitting the same error) it is quite enough for me to keep
> current the interfaces, at least for a time being.

FWIW, I like Bjorn's suggestion.  Given that this is mostly a corner
case thing, it isn't as important as the common ones, but we might as
well do it while we're at it.

Thanks a lot for your work in the area!  :)
Bjorn Helgaas Dec. 19, 2013, 9:37 p.m. UTC | #6
On Thu, Dec 19, 2013 at 6:42 AM, Alexander Gordeev <agordeev@redhat.com> wrote:
> On Wed, Dec 18, 2013 at 11:58:47AM -0700, Bjorn Helgaas wrote:
>> If rc == 13 and the device can only use 8, the extra 5 would be
>> ignored and wasted.
>>
>> If the waste is unacceptable, the driver can try this:
>>
>>     rc = pci_enable_msix_range(dev->pdev, dev->irqs, 16, 16);
>>     if (rc < 0) {
>>         rc = pci_enable_msix_range(dev->pdev, dev->irqs, 8, 8);
>>         if (rc < 0) {
>>             rc = pci_enable_msix_range(dev->pdev, dev->irqs, 4, 4);
>>             ...
>>     }
>
> I have troubles with this fallback logic. On each failed step we get an
> error and we do not know if this is indeed an error or an indication of
> insufficient MSI resources. Even -ENOSPC would not tell much, since it
> could be thrown from a lower level.
>
> By contrast, with the tri-state return value we can distinguish and bail
> out on errors right away.

I thought the main point of this was to get rid of interfaces that
were prone to misuse, and tri-state return values were a big part of
that.  All we really care about in the driver is success/failure.  I'm
not sure there's much to be gained by analyzing *why* we failed, and I
think it tends to make uncommon error paths more complicated than
necessary.  If we fail four times instead of bailing out after the
first failure, well, that doesn't sound terrible to me.  The last
failure can log the errno, which is enough for debugging.

> So the above is bit ungraceful for me. Combined with a possible waste in
> logs (if we're hitting the same error) it is quite enough for me to keep
> current the interfaces, at least for a time being.
>
>>     if (rc < 0) { /* error, couldn't allocate *any* interrupts */
>>     else { /* rc interrupts allocated (1, 2, 4, 8, or 16) */ }
>>
>> Bjorn
>
> --
> Regards,
> Alexander Gordeev
> agordeev@redhat.com
Alexander Gordeev Dec. 20, 2013, 9:04 a.m. UTC | #7
On Thu, Dec 19, 2013 at 02:37:22PM -0700, Bjorn Helgaas wrote:
> On Thu, Dec 19, 2013 at 6:42 AM, Alexander Gordeev <agordeev@redhat.com> wrote:
> > On Wed, Dec 18, 2013 at 11:58:47AM -0700, Bjorn Helgaas wrote:
> >> If rc == 13 and the device can only use 8, the extra 5 would be
> >> ignored and wasted.
> >>
> >> If the waste is unacceptable, the driver can try this:
> >>
> >>     rc = pci_enable_msix_range(dev->pdev, dev->irqs, 16, 16);
> >>     if (rc < 0) {
> >>         rc = pci_enable_msix_range(dev->pdev, dev->irqs, 8, 8);
> >>         if (rc < 0) {
> >>             rc = pci_enable_msix_range(dev->pdev, dev->irqs, 4, 4);
> >>             ...
> >>     }
> >
> > I have troubles with this fallback logic. On each failed step we get an
> > error and we do not know if this is indeed an error or an indication of
> > insufficient MSI resources. Even -ENOSPC would not tell much, since it
> > could be thrown from a lower level.
> >
> > By contrast, with the tri-state return value we can distinguish and bail
> > out on errors right away.
> 
> I thought the main point of this was to get rid of interfaces that
> were prone to misuse, and tri-state return values was a big part of
> that.  All we really care about in the driver is success/failure.  I'm
> not sure there's much to be gained by analyzing *why* we failed, and I
> think it tends to make uncommon error paths more complicated than
> necessary.  If we fail four times instead of bailing out after the
> first failure, well, that doesn't sound terrible to me.  The last
> failure can log the errno, which is enough for debugging.

Sure, the main point is to get rid of the tri-state interfaces. I am just
afraid of throwing out the baby with the bath water for unusual devices
(which we do not have in the tree).

I can only identify two downsides of the approach above - (a) repeated error
logging in the platform code (e.g. caused by -ENOMEM) and (b) repeated attempts
to enable MSI when the platform has already reported a fatal error.

I think if a device needs extra magic to enable MSI (e.g. writing to
specific registers etc.) it would be manageable with pci_enable_msix_range(),
but maybe I am missing something?

So my thought is: maybe we deprecate the tri-state interfaces, but do not
do it immediately.
Alexander Gordeev Dec. 20, 2013, 10:28 a.m. UTC | #8
On Tue, Dec 17, 2013 at 05:30:02PM -0700, Bjorn Helgaas wrote:
> After this patch, we would have:
> 
>     pci_enable_msi()				# existing (1 vector)
>     pci_enable_msi_block(nvec)			# existing
>     pci_enable_msi_block_auto(maxvec)		# existing (removed)
> 
>     pci_auto_enable_msi(maxvec)			# new	(1-maxvec)
>     pci_auto_enable_msi_range(minvec, maxvec)	# new
>     pci_auto_enable_msi_exact(nvec)		# new	(nvec-nvec)
> 
>     pci_enable_msix(nvec)			# existing
> 
>     pci_auto_enable_msix(maxvec)		# new	(1-maxvec)
>     pci_auto_enable_msix_range(minvec, maxvec)	# new
>     pci_auto_enable_msix_exact(nvec)		# new	(nvec-nvec)
> 
> That seems like a lot of interfaces to document and understand, especially
> since most of them are built on each other.  I'd prefer just these:
> 
>     pci_enable_msi()				# existing (1 vector)
>     pci_enable_msi_range(minvec, maxvec)	# new
> 
>     pci_enable_msix(nvec)			# existing
>     pci_enable_msix_range(minvec, maxvec)	# new
> 
> with examples in the documentation about how to call them with ranges like
> (1, maxvec), (nvec, nvec), etc.  I think that will be easier than
> understanding several interfaces.

I agree pci_auto_enable_msix() and pci_auto_enable_msix_exact() are worth
sacrificing for the sake of clarity. My only concern is that people will
start defining their own helpers for the (1, maxvec) and (nvec, nvec) cases
here and there...

> I don't think the "auto" in the names really adds anything, does it?  The
> whole point of supplying a range is that the core has the flexibility to
> choose any number of vectors within the range.

"Auto" indicates auto-retry, but I see no problem in skipping it, especially
if we deprecate or phase out the existing interfaces.

> I only see five users of pci_enable_msi_block() (nvme, ath10k, wil6210,
> ipr, vfio); we can easily convert those to use pci_enable_msi_range() and
> then remove pci_enable_msi_block().
> 
> pci_enable_msi() itself can simply be pci_enable_msi_range(1, 1).
> 
> There are nearly 80 callers of pci_enable_msix(), so that's a bit harder.
> Can we deprecate that somehow, and incrementally convert callers to use
> pci_enable_msix_range() instead?  Maybe you're already planning that; I
> know you dropped some driver patches from the series for now, and I didn't
> look to see exactly what they did.

Right, the plan is first to introduce the pci_auto_* (or whatever) family
into the tree and then gradually convert all drivers to the new interfaces.

> It would be good if pci_enable_msix() could be implemented in terms of
> pci_enable_msix_range(nvec, nvec), with a little extra glue to handle the
> positive return values.

[...]

> I think it would be better to make pci_enable_msix_range() the fundamental
> implementation, with pci_enable_msix() built on top of it.  That way we
> could deprecate and eventually remove pci_enable_msix() and its tri-state
> return values.

We can reuse the pci_enable_msix() name, but not before all drivers are converted.

But considering the other thread, you want to have only the
pci_enable_msi_range() and pci_enable_msix_range() interfaces - am I
getting that right?
Tejun Heo Dec. 20, 2013, 1:28 p.m. UTC | #9
Hello,

On Fri, Dec 20, 2013 at 10:04:13AM +0100, Alexander Gordeev wrote:
> I can only identify two downsides of the approach above - (a) repeated error
> logging in a platform code (i.e. caused by -ENOMEM) and (b) repeated attempts
> to enable MSI when the platform already reported a fatal error.

I don't think (a) is likely as long as only -ENOSPC is retried, which
also solves (b).

Thanks.
Alexander Gordeev Dec. 23, 2013, 2:44 p.m. UTC | #10
On Tue, Dec 17, 2013 at 05:30:02PM -0700, Bjorn Helgaas wrote:
> > +int pci_auto_enable_msi_range(struct pci_dev *dev, struct msix_entry *entries,
> > +			      int minvec, int maxvec)

[...]

> > +If this function returns a positive number it indicates at least the
> > +returned number of MSI interrupts have been successfully allocated (it may
> > +have allocated more in order to satisfy the power-of-two requirement).
> 
> I assume this means the return value may be larger than the "maxvec"
> requested, right?  And the driver is free to use all the vectors up to the
> return value, even those above maxvec, right?

No, the returned value may never be larger than "maxvec". This just
paraphrases the semantics of the existing pci_enable_msi_block() interface -
the value written to the MMC register might be larger than the returned
value, but the driver may not use the extra vectors it did not request.
E.g. a driver asking for 5 vectors may cause the device to be configured
for 8 (the next power of two), but the function still returns 5 and only
those 5 vectors may be used.
Bjorn Helgaas Dec. 23, 2013, 5:19 p.m. UTC | #11
On Mon, Dec 23, 2013 at 7:44 AM, Alexander Gordeev <agordeev@redhat.com> wrote:
> On Tue, Dec 17, 2013 at 05:30:02PM -0700, Bjorn Helgaas wrote:
>> > +int pci_auto_enable_msi_range(struct pci_dev *dev, struct msix_entry *entries,
>> > +                         int minvec, int maxvec)
>
> [...]
>
>> > +If this function returns a positive number it indicates at least the
>> > +returned number of MSI interrupts have been successfully allocated (it may
>> > +have allocated more in order to satisfy the power-of-two requirement).
>>
>> I assume this means the return value may be larger than the "maxvec"
>> requested, right?  And the driver is free to use all the vectors up to the
>> return value, even those above maxvec, right?
>
> No, the returned value may not be larger than the "maxvec" ever. This is just
> paraphrasing the semantics of exisitng pci_enable_msi_block() interface - a
> value written to MMC register might be larger than the returned value, but the
> driver may not use the extra vectors it did not request.

Then I think we should remove the "(it may have allocated more...)"
text.  If the driver can't use those extra vectors, they are an
internal implementation detail, and mentioning them here will only
cause confusion.  The "at least" text should also be removed.  From
the driver's point of view, it can use exactly the number of
interrupts returned.

Bjorn

Patch

diff --git a/Documentation/PCI/MSI-HOWTO.txt b/Documentation/PCI/MSI-HOWTO.txt
index 7d19656..168d9c3 100644
--- a/Documentation/PCI/MSI-HOWTO.txt
+++ b/Documentation/PCI/MSI-HOWTO.txt
@@ -127,7 +127,62 @@  on the number of vectors that can be allocated; pci_enable_msi_block()
 returns as soon as it finds any constraint that doesn't allow the
 call to succeed.
 
-4.2.3 pci_disable_msi
+4.2.3 pci_auto_enable_msi_range
+
+int pci_auto_enable_msi_range(struct pci_dev *dev, struct msix_entry *entries,
+			      int minvec, int maxvec)
+
+This variation on pci_enable_msi_block() call allows a device driver to
+request any number of MSIs within specified range 'minvec' to 'maxvec'.
+Whenever possible device drivers are encouraged to use this function
+rather than explicit request loop calling pci_enable_msi_block().
+
+If this function returns a negative number, it indicates an error and
+the driver should not attempt to request any more MSI interrupts for
+this device.
+
+If this function returns a positive number it indicates at least the
+returned number of MSI interrupts have been successfully allocated (it may
+have allocated more in order to satisfy the power-of-two requirement).
+Device drivers can use this number to further initialize devices.
+
+4.2.4 pci_auto_enable_msi
+
+int pci_auto_enable_msi(struct pci_dev *dev,
+			struct msix_entry *entries, int maxvec)
+
+This variation on pci_enable_msi_block() call allows a device driver to
+request any number of MSIs up to 'maxvec'. Whenever possible device drivers
+are encouraged to use this function rather than explicit request loop
+calling pci_enable_msi_block().
+
+If this function returns a negative number, it indicates an error and
+the driver should not attempt to request any more MSI interrupts for
+this device.
+
+If this function returns a positive number it indicates at least the
+returned number of MSI interrupts have been successfully allocated (it may
+have allocated more in order to satisfy the power-of-two requirement).
+Device drivers can use this number to further initialize devices.
+
+4.2.5 pci_auto_enable_msi_exact
+
+int pci_auto_enable_msi_exact(struct pci_dev *dev,
+			      struct msix_entry *entries, int nvec)
+
+This variation on pci_enable_msi_block() call allows a device driver to
+request exactly 'nvec' MSIs.
+
+If this function returns a negative number, it indicates an error and
+the driver should not attempt to request any more MSI interrupts for
+this device.
+
+If this function returns the value of 'nvec' it indicates MSI interrupts
+have been successfully allocated. No other value in case of success could
+be returned. Device drivers can use this value to further allocate and
+initialize device resources.
+
+4.2.6 pci_disable_msi
 
 void pci_disable_msi(struct pci_dev *dev)
 
@@ -142,7 +197,7 @@  on any interrupt for which it previously called request_irq().
 Failure to do so results in a BUG_ON(), leaving the device with
 MSI enabled and thus leaking its vector.
 
-4.2.4 pci_get_msi_vec_count
+4.2.7 pci_get_msi_vec_count
 
 int pci_get_msi_vec_count(struct pci_dev *dev)
 
@@ -222,7 +277,76 @@  static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
 	return -ENOSPC;
 }
 
-4.3.2 pci_disable_msix
+4.3.2 pci_auto_enable_msix_range
+
+int pci_auto_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
+			       int minvec, int maxvec)
+
+This variation on pci_enable_msix() call allows a device driver to request
+any number of MSI-Xs within specified range 'minvec' to 'maxvec'. Whenever
+possible device drivers are encouraged to use this function rather than
+explicit request loop calling pci_enable_msix().
+
+If this function returns a negative number, it indicates an error and
+the driver should not attempt to allocate any more MSI-X interrupts for
+this device.
+
+If this function returns a positive number it indicates the number of
+MSI-X interrupts that have been successfully allocated. Device drivers
+can use this number to further allocate and initialize device resources.
+
+A modified function calling pci_enable_msix() in a loop might look like:
+
+static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
+{
+	rc = pci_auto_enable_msix_range(adapter->pdev, adapter->msix_entries,
+					FOO_DRIVER_MINIMUM_NVEC, nvec);
+	if (rc < 0)
+		return rc;
+
+	rc = foo_driver_init_other(adapter, rc);
+	if (rc < 0)
+		pci_disable_msix(adapter->pdev);
+
+	return rc;
+}
+
+4.3.3 pci_auto_enable_msix
+
+int pci_auto_enable_msix(struct pci_dev *dev,
+			 struct msix_entry *entries, int maxvec)
+
+This variation on pci_enable_msix() call allows a device driver to request
+any number of MSI-Xs up to 'maxvec'. Whenever possible device drivers are
+encouraged to use this function rather than explicit request loop calling
+pci_enable_msix().
+
+If this function returns a negative number, it indicates an error and
+the driver should not attempt to allocate any more MSI-X interrupts for
+this device.
+
+If this function returns a positive number it indicates the number of
+MSI-X interrupts that have been successfully allocated. Device drivers
+can use this number to further allocate and initialize device resources.
+
+4.3.4 pci_auto_enable_msix_exact
+
+int pci_auto_enable_msix_exact(struct pci_dev *dev,
+			       struct msix_entry *entries, int nvec)
+
+This variation on pci_enable_msix() call allows a device driver to request
+exactly 'nvec' MSI-Xs.
+
+If this function returns a negative number, it indicates an error and
+the driver should not attempt to allocate any more MSI-X interrupts for
+this device.
+
+If this function returns the value of 'nvec' it indicates MSI-X interrupts
+have been successfully allocated. No other value in case of success could
+be returned. Device drivers can use this value to further allocate and
+initialize device resources.
+
+4.3.5 pci_disable_msix
 
 void pci_disable_msix(struct pci_dev *dev)
 
@@ -236,14 +360,14 @@  on any interrupt for which it previously called request_irq().
 Failure to do so results in a BUG_ON(), leaving the device with
 MSI-X enabled and thus leaking its vector.
 
-4.3.3 The MSI-X Table
+4.3.6 The MSI-X Table
 
 The MSI-X capability specifies a BAR and offset within that BAR for the
 MSI-X Table.  This address is mapped by the PCI subsystem, and should not
 be accessed directly by the device driver.  If the driver wishes to
 mask or unmask an interrupt, it should call disable_irq() / enable_irq().
 
-4.3.4 pci_get_msix_vec_count
+4.3.7 pci_get_msix_vec_count
 
 int pci_get_msix_vec_count(struct pci_dev *dev)
 
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 18e877f5..ccfd49b 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1093,3 +1093,77 @@  void pci_msi_init_pci_dev(struct pci_dev *dev)
 	if (dev->msix_cap)
 		msix_set_enable(dev, 0);
 }
+
+/**
+ * pci_auto_enable_msi_range - configure device's MSI capability structure
+ * @dev: device to configure
+ * @minvec: minimal number of interrupts to configure
+ * @maxvec: maximum number of interrupts to configure
+ *
+ * This function tries to allocate a maximum possible number of interrupts in a
+ * range between @minvec and @maxvec. It returns a negative errno if an error
+ * occurs. If it succeeds, it returns the actual number of interrupts allocated
+ * and updates the @dev's irq member to the lowest new interrupt number;
+ * the other interrupt numbers allocated to this device are consecutive.
+ **/
+int pci_auto_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+{
+	int nvec = maxvec;
+	int rc;
+
+	if (maxvec < minvec)
+		return -ERANGE;
+
+	do {
+		rc = pci_enable_msi_block(dev, nvec);
+		if (rc < 0) {
+			return rc;
+		} else if (rc > 0) {
+			if (rc < minvec)
+				return -ENOSPC;
+			nvec = rc;
+		}
+	} while (rc);
+
+	return nvec;
+}
+EXPORT_SYMBOL(pci_auto_enable_msi_range);
+
+/**
+ * pci_auto_enable_msix_range - configure device's MSI-X capability structure
+ * @dev: pointer to the pci_dev data structure of MSI-X device function
+ * @entries: pointer to an array of MSI-X entries
+ * @minvec: minimum number of MSI-X irqs requested
+ * @maxvec: maximum number of MSI-X irqs requested
+ *
+ * Setup the MSI-X capability structure of device function with a maximum
+ * possible number of interrupts in the range between @minvec and @maxvec
+ * upon its software driver call to request for MSI-X mode enabled on its
+ * hardware device function. It returns a negative errno if an error occurs.
+ * If it succeeds, it returns the actual number of interrupts allocated and
+ * indicates the successful configuration of MSI-X capability structure
+ * with new allocated MSI-X interrupts.
+ **/
+int pci_auto_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
+			       int minvec, int maxvec)
+{
+	int nvec = maxvec;
+	int rc;
+
+	if (maxvec < minvec)
+		return -ERANGE;
+
+	do {
+		rc = pci_enable_msix(dev, entries, nvec);
+		if (rc < 0) {
+			return rc;
+		} else if (rc > 0) {
+			if (rc < minvec)
+				return -ENOSPC;
+			nvec = rc;
+		}
+	} while (rc);
+
+	return nvec;
+}
+EXPORT_SYMBOL(pci_auto_enable_msix_range);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 7941f06..7e30b52 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1193,6 +1193,38 @@  static inline int pci_msi_enabled(void)
 {
 	return 0;
 }
+
+int pci_auto_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+{
+	return -ENOSYS;
+}
+static inline int pci_auto_enable_msi(struct pci_dev *dev, int maxvec)
+{
+	return -ENOSYS;
+}
+static inline int pci_auto_enable_msi_exact(struct pci_dev *dev, int nvec)
+{
+	return -ENOSYS;
+}
+
+static inline int
+pci_auto_enable_msix_range(struct pci_dev *dev,
+			   struct msix_entry *entries, int minvec, int maxvec)
+{
+	return -ENOSYS;
+}
+static inline int
+pci_auto_enable_msix(struct pci_dev *dev,
+		     struct msix_entry *entries, int maxvec)
+{
+	return -ENOSYS;
+}
+static inline int
+pci_auto_enable_msix_exact(struct pci_dev *dev,
+			   struct msix_entry *entries, int nvec)
+{
+	return -ENOSYS;
+}
 #else
 int pci_get_msi_vec_count(struct pci_dev *dev);
 int pci_enable_msi_block(struct pci_dev *dev, int nvec);
@@ -1205,6 +1237,31 @@  void pci_disable_msix(struct pci_dev *dev);
 void msi_remove_pci_irq_vectors(struct pci_dev *dev);
 void pci_restore_msi_state(struct pci_dev *dev);
 int pci_msi_enabled(void);
+
+int pci_auto_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec);
+static inline int pci_auto_enable_msi(struct pci_dev *dev, int maxvec)
+{
+	return pci_auto_enable_msi_range(dev, 1, maxvec);
+}
+static inline int pci_auto_enable_msi_exact(struct pci_dev *dev, int nvec)
+{
+	return pci_auto_enable_msi_range(dev, nvec, nvec);
+}
+
+int pci_auto_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
+			       int minvec, int maxvec);
+static inline int
+pci_auto_enable_msix(struct pci_dev *dev,
+		     struct msix_entry *entries, int maxvec)
+{
+	return pci_auto_enable_msix_range(dev, entries, 1, maxvec);
+}
+static inline int
+pci_auto_enable_msix_exact(struct pci_dev *dev,
+			   struct msix_entry *entries, int nvec)
+{
+	return pci_auto_enable_msix_range(dev, entries, nvec, nvec);
+}
 #endif
 
 #ifdef CONFIG_PCIEPORTBUS