Message ID | 20221121083326.002153609@linutronix.de (mailing list archive) |
---|---|
State | Handled Elsewhere |
Series | genirq, PCI/MSI: Support for per device MSI and PCI/IMS - Part 2 API rework |
On Mon, 21 Nov 2022 14:36:29 +0000,
Thomas Gleixner <tglx@linutronix.de> wrote:
>
> To support multiple MSI interrupt domains per device it is necessary to
> segment the xarray MSI descriptor storage. Each domain gets up to
> MSI_MAX_INDEX entries.
>
> Change the iterators so they operate with domain ids and take the domain
> offsets into account.
>
> The publicly available iterators which are mostly used in legacy
> implementations and the PCI/MSI core default to MSI_DEFAULT_DOMAIN (0)
> which is the id for the existing "global" domains.
>
> No functional change.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>
> [...]
>
> +static int msi_get_domain_base_index(struct device *dev, unsigned int domid)
> +{
> +        lockdep_assert_held(&dev->msi.data->mutex);
> +
> +        if (WARN_ON_ONCE(domid >= MSI_MAX_DEVICE_IRQDOMAINS))
> +                return -ENODEV;
> +
> +        if (WARN_ON_ONCE(!dev->msi.data->__irqdomains[domid]))
> +                return -ENODEV;
> +
> +        return domid * MSI_XA_DOMAIN_SIZE;
> +}
>
> [...]

So what I understand of this is that we split the index space into
segments, one per msi_domain_ids, MSI_XA_DOMAIN_SIZE apart.

Why didn't you decide to go all the way and have one xarray per
irqdomain? It's not that big a structure, and it would make the whole
thing a bit more straightforward.

Or do you anticipate cases where you'd walk the __store xarray across
irqdomains?

Thanks,

        M.
On Thu, Nov 24 2022 at 15:46, Marc Zyngier wrote:
> On Mon, 21 Nov 2022 14:36:29 +0000,
> Thomas Gleixner <tglx@linutronix.de> wrote:
>> +static int msi_get_domain_base_index(struct device *dev, unsigned int domid)
>> +{
>> +        lockdep_assert_held(&dev->msi.data->mutex);
>> +
>> +        if (WARN_ON_ONCE(domid >= MSI_MAX_DEVICE_IRQDOMAINS))
>> +                return -ENODEV;
>> +
>> +        if (WARN_ON_ONCE(!dev->msi.data->__irqdomains[domid]))
>> +                return -ENODEV;
>> +
>> +        return domid * MSI_XA_DOMAIN_SIZE;
>> +}
>
> So what I understand of this is that we split the index space into
> segments, one per msi_domain_ids, MSI_XA_DOMAIN_SIZE apart.
>
> Why didn't you decide to go all the way and have one xarray per
> irqdomain? It's not that big a structure, and it would make the whole
> thing a bit more straightforward.
>
> Or do you anticipate cases where you'd walk the __store xarray across
> irqdomains?

Not really. I just found it convenient to deal with one, but yes, we
could do the same thing with two xarrays. But at the very end it does
not make a huge difference. Fine with me either way.

Thanks,

        tglx
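To make the index-space layout discussed in this exchange concrete, here is a small sketch (illustration only, not part of the patch) of how a per-domain descriptor index maps onto the single shared xarray. The helper name msi_domain_slot() and the SKETCH_* constants are invented for this sketch; the patch itself computes the same base offset in msi_get_domain_base_index(), and the MSI_MAX_INDEX value is assumed to be the USHRT_MAX-based definition used by current kernels.

#include <limits.h>

/* 65535, assumed to match MSI_MAX_INDEX in current kernels */
#define SKETCH_MSI_MAX_INDEX    ((unsigned int)USHRT_MAX)
/* 65536 xarray slots reserved per interrupt domain, mirrors MSI_XA_DOMAIN_SIZE */
#define SKETCH_DOMAIN_SIZE      (SKETCH_MSI_MAX_INDEX + 1)

static inline unsigned long msi_domain_slot(unsigned int domid, unsigned int index)
{
        /* Domain 0 owns slots 0..65535, domain 1 owns 65536..131071, and so on. */
        return (unsigned long)domid * SKETCH_DOMAIN_SIZE + index;
}

The iterator changes below simply confine the xarray walk to that per-domain window: __iter_idx starts at the domain's base slot and __iter_max caps the walk at base + MSI_MAX_INDEX, so the default domain (id 0) keeps using indices 0..MSI_MAX_INDEX and existing users see no functional change.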
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -181,6 +181,7 @@ enum msi_desc_filter {
  * @mutex:         Mutex protecting the MSI descriptor store
  * @__store:       Xarray for storing MSI descriptor pointers
  * @__iter_idx:    Index to search the next entry for iterators
+ * @__iter_max:    Index to limit the search
  * @__irqdomains:  Per device interrupt domains
  */
 struct msi_device_data {
@@ -189,6 +190,7 @@ struct msi_device_data {
        struct mutex            mutex;
        struct xarray           __store;
        unsigned long           __iter_idx;
+       unsigned long           __iter_max;
        struct irq_domain       *__irqdomains[MSI_MAX_DEVICE_IRQDOMAINS];
 };
 
@@ -197,14 +199,34 @@ int msi_setup_device_data(struct device
 void msi_lock_descs(struct device *dev);
 void msi_unlock_descs(struct device *dev);
 
-struct msi_desc *msi_first_desc(struct device *dev, enum msi_desc_filter filter);
+struct msi_desc *msi_domain_first_desc(struct device *dev, unsigned int domid,
+                                       enum msi_desc_filter filter);
+
+/**
+ * msi_first_desc - Get the first MSI descriptor of the default irqdomain
+ * @dev:       Device to operate on
+ * @filter:    Descriptor state filter
+ *
+ * Must be called with the MSI descriptor mutex held, i.e. msi_lock_descs()
+ * must be invoked before the call.
+ *
+ * Return: Pointer to the first MSI descriptor matching the search
+ *         criteria, NULL if none found.
+ */
+static inline struct msi_desc *msi_first_desc(struct device *dev,
+                                              enum msi_desc_filter filter)
+{
+       return msi_domain_first_desc(dev, MSI_DEFAULT_DOMAIN, filter);
+}
+
 struct msi_desc *msi_next_desc(struct device *dev, enum msi_desc_filter filter);
 
 /**
- * msi_for_each_desc - Iterate the MSI descriptors
+ * msi_domain_for_each_desc - Iterate the MSI descriptors in a specific domain
  *
  * @desc:      struct msi_desc pointer used as iterator
  * @dev:       struct device pointer - device to iterate
+ * @domid:     The id of the interrupt domain which should be walked.
  * @filter:    Filter for descriptor selection
  *
  * Notes:
@@ -212,10 +234,25 @@ struct msi_desc *msi_next_desc(struct de
  *   pair.
  * - It is safe to remove a retrieved MSI descriptor in the loop.
  */
-#define msi_for_each_desc(desc, dev, filter)                   \
-       for ((desc) = msi_first_desc((dev), (filter)); (desc); \
+#define msi_domain_for_each_desc(desc, dev, domid, filter)                     \
+       for ((desc) = msi_domain_first_desc((dev), (domid), (filter)); (desc); \
             (desc) = msi_next_desc((dev), (filter)))
 
+/**
+ * msi_for_each_desc - Iterate the MSI descriptors in the default irqdomain
+ *
+ * @desc:      struct msi_desc pointer used as iterator
+ * @dev:       struct device pointer - device to iterate
+ * @filter:    Filter for descriptor selection
+ *
+ * Notes:
+ * - The loop must be protected with a msi_lock_descs()/msi_unlock_descs()
+ *   pair.
+ * - It is safe to remove a retrieved MSI descriptor in the loop.
+ */
+#define msi_for_each_desc(desc, dev, filter)                                   \
+       msi_domain_for_each_desc((desc), (dev), MSI_DEFAULT_DOMAIN, (filter))
+
 #define msi_desc_to_dev(desc)          ((desc)->dev)
 
 #ifdef CONFIG_IRQ_MSI_IOMMU
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -21,6 +21,10 @@
 
 static inline int msi_sysfs_create_group(struct device *dev);
 
+/* Invalid XA index which is outside of any searchable range */
+#define MSI_XA_MAX_INDEX       (ULONG_MAX - 1)
+#define MSI_XA_DOMAIN_SIZE     (MSI_MAX_INDEX + 1)
+
 static inline void msi_setup_default_irqdomain(struct device *dev, struct msi_device_data *md)
 {
        if (!dev->msi.domain)
@@ -33,6 +37,20 @@ static inline void msi_setup_default_irq
        md->__irqdomains[MSI_DEFAULT_DOMAIN] = dev->msi.domain;
 }
 
+static int msi_get_domain_base_index(struct device *dev, unsigned int domid)
+{
+       lockdep_assert_held(&dev->msi.data->mutex);
+
+       if (WARN_ON_ONCE(domid >= MSI_MAX_DEVICE_IRQDOMAINS))
+               return -ENODEV;
+
+       if (WARN_ON_ONCE(!dev->msi.data->__irqdomains[domid]))
+               return -ENODEV;
+
+       return domid * MSI_XA_DOMAIN_SIZE;
+}
+
+
 /**
  * msi_alloc_desc - Allocate an initialized msi_desc
  * @dev:       Pointer to the device for which this is allocated
@@ -229,6 +247,7 @@ int msi_setup_device_data(struct device
 
        xa_init(&md->__store);
        mutex_init(&md->mutex);
+       md->__iter_idx = MSI_XA_MAX_INDEX;
        dev->msi.data = md;
        devres_add(dev, md);
        return 0;
@@ -251,7 +270,7 @@ EXPORT_SYMBOL_GPL(msi_lock_descs);
 void msi_unlock_descs(struct device *dev)
 {
        /* Invalidate the index wich was cached by the iterator */
-       dev->msi.data->__iter_idx = MSI_MAX_INDEX;
+       dev->msi.data->__iter_idx = MSI_XA_MAX_INDEX;
        mutex_unlock(&dev->msi.data->mutex);
 }
 EXPORT_SYMBOL_GPL(msi_unlock_descs);
@@ -260,17 +279,18 @@ static struct msi_desc *msi_find_desc(st
 {
        struct msi_desc *desc;
 
-       xa_for_each_start(&md->__store, md->__iter_idx, desc, md->__iter_idx) {
+       xa_for_each_range(&md->__store, md->__iter_idx, desc, md->__iter_idx, md->__iter_max) {
                if (msi_desc_match(desc, filter))
                        return desc;
        }
-       md->__iter_idx = MSI_MAX_INDEX;
+       md->__iter_idx = MSI_XA_MAX_INDEX;
        return NULL;
 }
 
 /**
- * msi_first_desc - Get the first MSI descriptor of a device
+ * msi_domain_first_desc - Get the first MSI descriptor of an irqdomain associated to a device
  * @dev:       Device to operate on
+ * @domid:     The id of the interrupt domain which should be walked.
  * @filter:    Descriptor state filter
  *
  * Must be called with the MSI descriptor mutex held, i.e. msi_lock_descs()
@@ -279,19 +299,26 @@ static struct msi_desc *msi_find_desc(st
  * Return: Pointer to the first MSI descriptor matching the search
  *         criteria, NULL if none found.
  */
-struct msi_desc *msi_first_desc(struct device *dev, enum msi_desc_filter filter)
+struct msi_desc *msi_domain_first_desc(struct device *dev, unsigned int domid,
+                                       enum msi_desc_filter filter)
 {
        struct msi_device_data *md = dev->msi.data;
+       int baseidx;
 
        if (WARN_ON_ONCE(!md))
                return NULL;
 
        lockdep_assert_held(&md->mutex);
 
-       md->__iter_idx = 0;
+       baseidx = msi_get_domain_base_index(dev, domid);
+       if (baseidx < 0)
+               return NULL;
+
+       md->__iter_idx = baseidx;
+       md->__iter_max = baseidx + MSI_MAX_INDEX;
        return msi_find_desc(md, filter);
 }
-EXPORT_SYMBOL_GPL(msi_first_desc);
+EXPORT_SYMBOL_GPL(msi_domain_first_desc);
 
 /**
  * msi_next_desc - Get the next MSI descriptor of a device
@@ -315,7 +342,7 @@ struct msi_desc *msi_next_desc(struct de
 
        lockdep_assert_held(&md->mutex);
 
-       if (md->__iter_idx >= (unsigned long)MSI_MAX_INDEX)
+       if (md->__iter_idx >= md->__iter_max)
                return NULL;
 
        md->__iter_idx++;
To support multiple MSI interrupt domains per device it is necessary to
segment the xarray MSI descriptor storage. Each domain gets up to
MSI_MAX_INDEX entries.

Change the iterators so they operate with domain ids and take the domain
offsets into account.

The publicly available iterators which are mostly used in legacy
implementations and the PCI/MSI core default to MSI_DEFAULT_DOMAIN (0)
which is the id for the existing "global" domains.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V2: Fix the off by one so the index space is including MSI_MAX_INDEX (Kevin)
---
 include/linux/msi.h |   45 +++++++++++++++++++++++++++++++++++++++++----
 kernel/irq/msi.c    |   43 +++++++++++++++++++++++++++++++++++--------
 2 files changed, 76 insertions(+), 12 deletions(-)
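As a usage illustration (not taken from this series), a driver that owns a secondary per-device MSI domain could walk just its own descriptors with the new domain-aware iterator, while legacy code keeps using msi_for_each_desc() unchanged. The function, the domain id MY_MSI_DOMID and the printouts are hypothetical; only msi_lock_descs()/msi_unlock_descs(), the iterator macros and the MSI_DESC_* filters come from the code above.

#include <linux/device.h>
#include <linux/msi.h>

#define MY_MSI_DOMID    1       /* hypothetical secondary domain slot for this sketch */

static void my_driver_report_irqs(struct device *dev)
{
        struct msi_desc *desc;

        msi_lock_descs(dev);

        /* Walk only the descriptors belonging to the driver's own irqdomain. */
        msi_domain_for_each_desc(desc, dev, MY_MSI_DOMID, MSI_DESC_ASSOCIATED) {
                dev_info(dev, "domain %d: index %u -> irq %u\n",
                         MY_MSI_DOMID, desc->msi_index, desc->irq);
        }

        /* Legacy users are unaffected: this expands to MSI_DEFAULT_DOMAIN (0). */
        msi_for_each_desc(desc, dev, MSI_DESC_ALL) {
                dev_info(dev, "default domain: index %u\n", desc->msi_index);
        }

        msi_unlock_descs(dev);
}

Both loops rely on the descriptor mutex being held, exactly as required by the kernel-doc of the iterators.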