[1/3] pci: introduce read_bridge/write_bridge pci ops

Message ID 1464784332-3775650-1-git-send-email-arnd@arndb.de (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Arnd Bergmann June 1, 2016, 12:31 p.m. UTC
A lot of PCI host bridges require different methods for initiating
type 0 and type 1 config space accesses, leading to duplication of
code.

This adds support for the two different kinds at the pci_ops
level, with the newly added map_bridge/read_bridge/write_bridge
operations for type 1 accesses.

When these are not set, we fall back to the regular map_bus/read/write
operations, so all existing drivers keep working, and bridges that
have identical operations continue to only require one set.

In most cases, a driver will only have to override either map_bridge
or read_bridge/write_bridge but not both.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
This is slightly refined over what I had in the "Add PCIe driver for
Rockchip Soc" thread earlier, but probably not the final version yet,
so I'd like to get more feedback on it.

In particular, I think it may be useful to add a third set of
functions for the config space of devices that are directly attached
to the host bridge, as those are sometimes (designware, rcar, mvebu)
yet again different from the host bridge itself and from all other
devices. On the other hand, that adds further complexity that we
may want to leave out of the common code, and I honestly can't
seem to come up with a catchy name for the callbacks.

 drivers/pci/access.c | 35 +++++++++++++++++++++++++++--------
 include/linux/pci.h  |  3 +++
 2 files changed, 30 insertions(+), 8 deletions(-)
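
For illustration, a minimal host driver using the proposed hooks might look
like the sketch below. Only the pci_ops member signatures come from this
patch; the driver name, sysdata structure and register layout are invented:

    /* Hypothetical driver: the bridge's own config registers sit in one
     * window, config cycles for downstream devices go through another. */
    static void __iomem *foo_map_bridge(struct pci_bus *bus,
                                        unsigned int devfn, int where)
    {
            struct foo_pcie *pcie = bus->sysdata;

            /* root bus: the host bridge's internal registers */
            return pcie->bridge_base + where;
    }

    static void __iomem *foo_map_bus(struct pci_bus *bus,
                                     unsigned int devfn, int where)
    {
            struct foo_pcie *pcie = bus->sysdata;

            /* all other buses: a conventional ECAM-style window */
            return pcie->cfg_base +
                   ((bus->number << 20) | (devfn << 12) | where);
    }

    static struct pci_ops foo_pci_ops = {
            .map_bridge = foo_map_bridge,
            .map_bus    = foo_map_bus,
            .read       = pci_generic_config_read,
            .write      = pci_generic_config_write,
    };

With this, pci_generic_config_read()/pci_generic_config_write() pick
map_bridge for the root bus and map_bus for everything below it, so the
driver supplies two address calculations and no read/write logic at all.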

Comments

Bjorn Helgaas June 1, 2016, 3:09 p.m. UTC | #1
Hi Arnd,

On Wed, Jun 01, 2016 at 02:31:22PM +0200, Arnd Bergmann wrote:
> A lot of PCI host bridges require different methods for initiating
> type 0 and type 1 config space accesses, leading to duplication of
> code.
> 
> This adds support for the two different kinds at the pci_ops
> level, with the newly added map_bridge/read_bridge/write_bridge
> operations for type 1 accesses.
> 
> When these are not set, we fall back to the regular map_bus/read/write
> operations, so all existing drivers keep working, and bridges that
> have identical operations continue to only require one set.

This adds new config accessor functions to struct pci_ops and makes
the callers responsible for figuring out which one to use.  The
benefit is to reduce code duplication in some host bridge drivers
(DesignWare and MVEBU so far).

From a design perspective, I'm not comfortable with moving this burden
from the host bridge drivers to the callers of the config accessors.

The pci_ops struct is a pretty low-level thing, but it is declared in
include/linux/pci.h and while I think it's ugly to use it outside the
PCI core, there's nothing that actually prevents that, and there are
several places that do use it, including at least the ones below.  We
could argue that many of these don't need updating because they don't
need .read_bridge() accessors, but I don't think pci_ops users should
be making assumptions like that.

  arch/sparc/kernel/pci_schizo.c: pbm->pci_ops->read(pbm->pci_bus, 0, PCI_STATUS, 2, &stat);
  arch/x86/pci/common.c:          return raw_pci_ops->read(domain, bus, devfn, reg, len, val);
  arch/x86/pci/common.c:          return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
  arch/x86/pci/intel_mid_pci.c:           if (raw_pci_ext_ops->read(pci_domain_nr(bus), bus->number,
  arch/x86/pci/intel_mid_pci.c:                   raw_pci_ext_ops->read(pci_domain_nr(bus), bus->number,
  arch/x86/pci/intel_mid_pci.c:           raw_pci_ext_ops->read(domain, busnum, devfn,
  arch/x86/pci/intel_mid_pci.c:   return raw_pci_ext_ops->read(pci_domain_nr(bus), bus->number,
  arch/x86/pci/mmconfig-shared.c: raw_pci_ops->read(0, 0, PCI_DEVFN(0, 0), 0xce, 2, &win);
  arch/x86/pci/mmconfig-shared.c: raw_pci_ops->read(0, 0, PCI_DEVFN(0, 0), 0x48, 4, &pciexbar);
  arch/x86/pci/mmconfig-shared.c:         raw_pci_ops->read(0, bus, PCI_DEVFN(0, 0), 0, 4, &l);
  arch/x86/pci/mmconfig-shared.c:         raw_pci_ops->read(0, bus, PCI_DEVFN(0, 0), extcfg_regnum,
  arch/x86/pci/mmconfig-shared.c:         raw_pci_ops->read(0, bus, devfn, 0, 4, &l);
  drivers/pci/access.c:   res = bus->ops->read(bus, devfn, pos, len, &data);              \
  drivers/pci/access.c:   ret = dev->bus->ops->read(dev->bus, dev->devfn,                 \
  drivers/pci/access.c:   return dev->vpd->ops->read(dev, pos, count, buf);
  drivers/pci/pci.c:      bus->ops->read(bus, dev->devfn, PCI_COMMAND, 4, &cmd_status_dword);
  drivers/pci/pcie/aer/aer_inject.c:      rv = ops->read(bus, devfn, where, size, val);

> In most cases, a driver will only have to override either map_bridge
> or read_bridge/write_bridge but not both.
> 
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
> This is slightly refined over what I had in the "Add PCIe driver for
> Rockchip Soc" thread earlier, but probably not the final version yet,
> so I'd like to get more feedback on it.
> 
> In particular, I think it may be useful to add a third set of
> functions for the config space of devices that are directly attached
> to the host bridge, as those are sometimes (designware, rcar, mvebu)
> yet again different from the host bridge itself and from all other
> devices. On the other hand, that adds further complexity that we
> may want to leave out of the common code, and I honestly can't
> seem to come up with a catchy name for the callbacks.
> 
>  drivers/pci/access.c | 35 +++++++++++++++++++++++++++--------
>  include/linux/pci.h  |  3 +++
>  2 files changed, 30 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/pci/access.c b/drivers/pci/access.c
> index d11cdbb8fba3..263606ece211 100644
> --- a/drivers/pci/access.c
> +++ b/drivers/pci/access.c
> @@ -34,9 +34,12 @@ int pci_bus_read_config_##size \
>  	u32 data = 0;							\
>  	if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER;	\
>  	raw_spin_lock_irqsave(&pci_lock, flags);			\
> -	res = bus->ops->read(bus, devfn, pos, len, &data);		\
> +	if (!bus->parent == 0 && bus->ops->read_bridge)			\

  if (!bus->parent && ...) ?

> +		res = bus->ops->read_bridge(bus, devfn, pos, len, &data);	\
> +	else								\
> +		res = bus->ops->read(bus, devfn, pos, len, &data);	\
>  	*value = (type)data;						\
> -	raw_spin_unlock_irqrestore(&pci_lock, flags);		\
> +	raw_spin_unlock_irqrestore(&pci_lock, flags);			\
>  	return res;							\
>  }
Arnd Bergmann June 1, 2016, 3:41 p.m. UTC | #2
On Wednesday, June 1, 2016 10:09:29 AM CEST Bjorn Helgaas wrote:
> Hi Arnd,
> 
> On Wed, Jun 01, 2016 at 02:31:22PM +0200, Arnd Bergmann wrote:
> > A lot of PCI host bridges require different methods for initiating
> > type 0 and type 1 config space accesses, leading to duplication of
> > code.
> > 
> > This adds support for the two different kinds at the pci_ops
> > level, with the newly added map_bridge/read_bridge/write_bridge
> > operations for type 1 accesses.
> > 
> > When these are not set, we fall back to the regular map_bus/read/write
> > operations, so all existing drivers keep working, and bridges that
> > have identical operations continue to only require one set.
> 
> This adds new config accessor functions to struct pci_ops and makes
> the callers responsible for figuring out which one to use.  The
> benefit is to reduce code duplication in some host bridge drivers
> (DesignWare and MVEBU so far).
> 
> From a design perspective, I'm not comfortable with moving this burden
> from the host bridge drivers to the callers of the config accessors.

I see.

> The pci_ops struct is a pretty low-level thing, but it is declared in
> include/linux/pci.h and while I think it's ugly to use it outside the
> PCI core, there's nothing that actually prevents that, and there are
> several places that do use it, including at least the ones below.  We
> could argue that many of these don't need updating because they don't
> need .read_bridge() accessors, but I don't think pci_ops users should
> be making assumptions like that.
> 
>   arch/x86/pci/common.c:          return raw_pci_ops->read(domain, bus, devfn, reg, len, val);
>   arch/x86/pci/common.c:          return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
>   arch/x86/pci/intel_mid_pci.c:           if (raw_pci_ext_ops->read(pci_domain_nr(bus), bus->number,
>   arch/x86/pci/intel_mid_pci.c:                   raw_pci_ext_ops->read(pci_domain_nr(bus), bus->number,
>   arch/x86/pci/intel_mid_pci.c:           raw_pci_ext_ops->read(domain, busnum, devfn,
>   arch/x86/pci/intel_mid_pci.c:   return raw_pci_ext_ops->read(pci_domain_nr(bus), bus->number,
>   arch/x86/pci/mmconfig-shared.c: raw_pci_ops->read(0, 0, PCI_DEVFN(0, 0), 0xce, 2, &win);
>   arch/x86/pci/mmconfig-shared.c: raw_pci_ops->read(0, 0, PCI_DEVFN(0, 0), 0x48, 4, &pciexbar);
>   arch/x86/pci/mmconfig-shared.c:         raw_pci_ops->read(0, bus, PCI_DEVFN(0, 0), 0, 4, &l);
>   arch/x86/pci/mmconfig-shared.c:         raw_pci_ops->read(0, bus, PCI_DEVFN(0, 0), extcfg_regnum,
>   arch/x86/pci/mmconfig-shared.c:         raw_pci_ops->read(0, bus, devfn, 0, 4, &l);

All the x86 examples are pci_raw_ops though, not pci_ops.

>   drivers/pci/access.c:   res = bus->ops->read(bus, devfn, pos, len, &data);              \
>   drivers/pci/access.c:   ret = dev->bus->ops->read(dev->bus, dev->devfn,                 \

These implement the interface that is expected to be used.

>   drivers/pci/access.c:   return dev->vpd->ops->read(dev, pos, count, buf);

And this is pci_vpd_ops, not pci_ops.

>   arch/sparc/kernel/pci_schizo.c: pbm->pci_ops->read(pbm->pci_bus, 0, PCI_STATUS, 2, &stat);
>   drivers/pci/pci.c:      bus->ops->read(bus, dev->devfn, PCI_COMMAND, 4, &cmd_status_dword);
>   drivers/pci/pcie/aer/aer_inject.c:      rv = ops->read(bus, devfn, where, size, val);

These are indeed some that I missed, but there are only a few of them.

Maybe we can simply change them to use the normal API and come up with
a way to make the pci_ops harder to misuse? Would it make you feel better
if we also renamed .read/.write into .read_type0/.write_type0 or something
like that?

> > In most cases, a driver will only have to override either map_bridge
> > or read_bridge/write_bridge but not both.
> > 
> > Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> > This is slightly refined over what I had in the "Add PCIe driver for
> > Rockchip SoC" thread earlier, but probably not the final version yet,
> > so I'd like to get more feedback on it.
> > 
> > In particular, I think it may be useful to add a third set of
> > functions for the config space of devices that are directly attached
> > to the host bridge, as those are sometimes (designware, rcar, mvebu)
> > yet again different from the host bridge itself and from all other
> > devices. On the other hand, that adds further complexity that we
> > may want to leave out of the common code, and I honestly can't
> > seem to come up with a catchy name for the callbacks.
> > 
> >  drivers/pci/access.c | 35 +++++++++++++++++++++++++++--------
> >  include/linux/pci.h  |  3 +++
> >  2 files changed, 30 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/pci/access.c b/drivers/pci/access.c
> > index d11cdbb8fba3..263606ece211 100644
> > --- a/drivers/pci/access.c
> > +++ b/drivers/pci/access.c
> > @@ -34,9 +34,12 @@ int pci_bus_read_config_##size \
> >  	u32 data = 0;							\
> >  	if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER;	\
> >  	raw_spin_lock_irqsave(&pci_lock, flags);			\
> > -	res = bus->ops->read(bus, devfn, pos, len, &data);		\
> > +	if (!bus->parent == 0 && bus->ops->read_bridge)			\
> 
>   if (!bus->parent && ...) ?

Fixed, thanks for noticing.

	Arnd
Bjorn Helgaas June 1, 2016, 7:04 p.m. UTC | #3
On Wed, Jun 01, 2016 at 05:41:53PM +0200, Arnd Bergmann wrote:
> On Wednesday, June 1, 2016 10:09:29 AM CEST Bjorn Helgaas wrote:
> > Hi Arnd,
> > 
> > On Wed, Jun 01, 2016 at 02:31:22PM +0200, Arnd Bergmann wrote:
> > > A lot of PCI host bridges require different methods for initiating
> > > type 0 and type 1 config space accesses, leading to duplication of
> > > code.
> > > 
> > > This adds support for the two different kinds at the pci_ops
> > > level, with the newly added map_bridge/read_bridge/write_bridge
> > > operations for type 1 accesses.
> > > 
> > > When these are not set, we fall back to the regular map_bus/read/write
> > > operations, so all existing drivers keep working, and bridges that
> > > have identical operations continue to only require one set.
> > 
> > This adds new config accessor functions to struct pci_ops and makes
> > the callers responsible for figuring out which one to use.  The
> > benefit is to reduce code duplication in some host bridge drivers
> > (DesignWare and MVEBU so far).
> > 
> > From a design perspective, I'm not comfortable with moving this burden
> > from the host bridge drivers to the callers of the config accessors.
> ...

> Maybe we can simply change them to use the normal API and come up with
> a way to make the pci_ops harder to misuse? Would it make you feel better
> if we also renamed .read/.write into .read_type0/.write_type0 or something
> like that?

I'm trying to get a better feel for the tradeoff here.  It seems like
an API complication vs. code duplication.

I don't really think the callers should have to figure out which
accessor to use.  How much of a benefit do we really gain by
complicating the callers?  We've managed for quite a few years with
the current scheme, and it seems like only a couple new ARM platforms
would benefit.

Bjorn
Arnd Bergmann June 1, 2016, 8:37 p.m. UTC | #4
On Wednesday, June 1, 2016 2:04:30 PM CEST Bjorn Helgaas wrote:
> On Wed, Jun 01, 2016 at 05:41:53PM +0200, Arnd Bergmann wrote:
> > On Wednesday, June 1, 2016 10:09:29 AM CEST Bjorn Helgaas wrote:
> > > Hi Arnd,
> > > 
> > > On Wed, Jun 01, 2016 at 02:31:22PM +0200, Arnd Bergmann wrote:
> > > > A lot of PCI host bridges require different methods for initiating
> > > > type 0 and type 1 config space accesses, leading to duplication of
> > > > code.
> > > > 
> > > > This adds support for the two different kinds at the pci_ops
> > > > level, with the newly added map_bridge/read_bridge/write_bridge
> > > > operations for type 1 accesses.
> > > > 
> > > > When these are not set, we fall back to the regular map_bus/read/write
> > > > operations, so all existing drivers keep working, and bridges that
> > > > have identical operations continue to only require one set.
> > > 
> > > This adds new config accessor functions to struct pci_ops and makes
> > > the callers responsible for figuring out which one to use.  The
> > > benefit is to reduce code duplication in some host bridge drivers
> > > (DesignWare and MVEBU so far).
> > > 
> > > From a design perspective, I'm not comfortable with moving this burden
> > > from the host bridge drivers to the callers of the config accessors.
> > ...
> 
> > Maybe we can simply change them to use the normal API and come up with
> > a way to make the pci_ops harder to misuse? Would it make you feel better
> > if we also renamed .read/.write into .read_type0/.write_type0 or something
> > like that?
> 
> I'm trying to get a better feel for the tradeoff here.  It seems like
> an API complication vs. code duplication.
> 
> I don't really think the callers should have to figure out which
> accessor to use.  How much of a benefit do we really gain by
> complicating the callers?  We've managed for quite a few years with
> the current scheme, and it seems like only a couple new ARM platforms
> would benefit.

I just did a count of the implementations of pci_ops: I found 107
instances of 'struct pci_ops', and 67 of them treat type0 and type1
access differently in some form.

I'd estimate that about half of them, or roughly a third of the total
instances would benefit from my change, if we were to do them again.
Clearly there is no need to change the existing code here when it works,
unless the benefit is very clear and the code is actively maintained.

In some cases, the difference is only that the root bus has a limited
set of devices that are allowed to be accessed, so there would
likely be no benefit of this, compared to e.g. yet another callback
that checks the validity.
Some other instances have type0 registers at a different memory location
from type1, some use different layout inside of that space, and some
are completely different.

	Arnd
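
For illustration, the first case described above (a root bus on which only
certain devices may be accessed) can be handled with a guard in a single
map_bus, without a second accessor. A sketch, with invented names:

    /* Hypothetical controller: only device 0 exists on the root bus, so
     * reject other slots rather than adding a separate root-bus accessor. */
    static void __iomem *bar_map_bus(struct pci_bus *bus,
                                     unsigned int devfn, int where)
    {
            struct bar_pcie *pcie = bus->sysdata;

            if (!bus->parent && PCI_SLOT(devfn) > 0)
                    return NULL;

            return pcie->cfg_base +
                   ((bus->number << 20) | (devfn << 12) | where);
    }

When map_bus() returns NULL, pci_generic_config_read() in the patch below
fills the value with ~0 and returns PCIBIOS_DEVICE_NOT_FOUND, so no extra
validity callback is needed for this case.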
Bjorn Helgaas June 2, 2016, 2 p.m. UTC | #5
On Wed, Jun 01, 2016 at 10:37:28PM +0200, Arnd Bergmann wrote:
> On Wednesday, June 1, 2016 2:04:30 PM CEST Bjorn Helgaas wrote:
> > On Wed, Jun 01, 2016 at 05:41:53PM +0200, Arnd Bergmann wrote:
> > > On Wednesday, June 1, 2016 10:09:29 AM CEST Bjorn Helgaas wrote:
> > > > Hi Arnd,
> > > > 
> > > > On Wed, Jun 01, 2016 at 02:31:22PM +0200, Arnd Bergmann wrote:
> > > > > A lot of PCI host bridges require different methods for initiating
> > > > > type 0 and type 1 config space accesses, leading to duplication of
> > > > > code.
> > > > > 
> > > > > This adds support for the two different kinds at the pci_ops
> > > > > level, with the newly added map_bridge/read_bridge/write_bridge
> > > > > operations for type 1 accesses.
> > > > > 
> > > > > When these are not set, we fall back to the regular map_bus/read/write
> > > > > operations, so all existing drivers keep working, and bridges that
> > > > > have identical operations continue to only require one set.
> > > > 
> > > > This adds new config accessor functions to struct pci_ops and makes
> > > > the callers responsible for figuring out which one to use.  The
> > > > benefit is to reduce code duplication in some host bridge drivers
> > > > (DesignWare and MVEBU so far).
> > > > 
> > > > From a design perspective, I'm not comfortable with moving this burden
> > > > from the host bridge drivers to the callers of the config accessors.
> > > ...
> > 
> > > Maybe we can simply change them to use the normal API and come up with
> > > a way to make the pci_ops harder to misuse? Would it make you feel better
> > > if we also renamed .read/.write into .read_type0/.write_type0 or something
> > > like that?
> > 
> > I'm trying to get a better feel for the tradeoff here.  It seems like
> > an API complication vs. code duplication.
> > 
> > I don't really think the callers should have to figure out which
> > accessor to use.  How much of a benefit do we really gain by
> > complicating the callers?  We've managed for quite a few years with
> > the current scheme, and it seems like only a couple new ARM platforms
> > would benefit.
> 
> I just did a count of the implementations of pci_ops: I found 107
> instances of 'struct pci_ops', and 67 of them treat type0 and type1
> access differently in some form.
> 
> I'd estimate that about half of them, or roughly a third of the total
> instances would benefit from my change, if we were to do them again.
> Clearly there is no need to change the existing code here when it works,
> unless the benefit is very clear and the code is actively maintained.
> 
> In some cases, the difference is only that the root bus has a limited
> set of devices that are allowed to be accessed, so there would
> likely be no benefit of this, compared to e.g. yet another callback
> that checks the validity.
> Some other instances have type0 registers at a different memory location
> from type1, some use different layout inside of that space, and some
> are completely different.

The type0/type1 distinction still seems out of place to me at the call
site.  Is there any other reason a caller would care about the
difference between type0 and type1?

Bjorn
Arnd Bergmann June 2, 2016, 3:06 p.m. UTC | #6
On Thursday, June 2, 2016 9:00:01 AM CEST Bjorn Helgaas wrote:
> > I just did a count of the implementations of pci_ops: I found 107
> > instances of 'struct pci_ops', and 67 of them treat type0 and type1
> > access differently in some form.
> > 
> > I'd estimate that about half of them, or roughly a third of the total
> > instances would benefit from my change, if we were to do them again.
> > Clearly there is no need to change the existing code here when it works,
> > unless the benefit is very clear and the code is actively maintained.
> > 
> > In some cases, the difference is only that the root bus has a limited
> > set of devices that are allowed to be accessed, so there would
> > likely be no benefit of this, compared to e.g. yet another callback
> > that checks the validity.
> > Some other instances have type0 registers at a different memory location
> > from type1, some use different layout inside of that space, and some
> > are completely different.
> 
> The type0/type1 distinction still seems out of place to me at the call
> site.  Is there any other reason a caller would care about the
> difference between type0 and type1?

The callers really shouldn't care, but they also shouldn't call the
pci_ops function pointer (and as we found earlier, there are only
three such callers).

The distinction between type0 and type1 in my mind is an implementation
detail of the pci_{read,write}_config_{byte,word,dword} functions
that call the low-level operations here.

	Arnd
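
For context, the caller-side API referred to here hides the distinction
entirely; a device driver performs one abstract config read (the helper
below is invented, the accessors are the regular ones):

    static int quux_read_ids(struct pci_dev *pdev)
    {
            u32 id;
            int ret;

            /* whether this becomes a type 0 or type 1 cycle on the wire
             * is decided by the host bridge driver, not by the caller */
            ret = pci_read_config_dword(pdev, PCI_VENDOR_ID, &id);
            if (ret)
                    return pcibios_err_to_errno(ret);

            dev_info(&pdev->dev, "vendor/device: %08x\n", id);
            return 0;
    }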
Arnd Bergmann June 2, 2016, 3:44 p.m. UTC | #7
On Thursday, June 2, 2016 9:00:01 AM CEST Bjorn Helgaas wrote:
> On Wed, Jun 01, 2016 at 10:37:28PM +0200, Arnd Bergmann wrote:
> > On Wednesday, June 1, 2016 2:04:30 PM CEST Bjorn Helgaas wrote:
> > > On Wed, Jun 01, 2016 at 05:41:53PM +0200, Arnd Bergmann wrote:
> > > > On Wednesday, June 1, 2016 10:09:29 AM CEST Bjorn Helgaas wrote:
> > > > > Hi Arnd,
> > > > > 
> > > > > On Wed, Jun 01, 2016 at 02:31:22PM +0200, Arnd Bergmann wrote:
> > > > > > A lot of PCI host bridges require different methods for initiating
> > > > > > type 0 and type 1 config space accesses, leading to duplication of
> > > > > > code.
> > > > > > 
> > > > > > This adds support for the two different kinds at the pci_ops
> > > > > > level, with the newly added map_bridge/read_bridge/write_bridge
> > > > > > operations for type 1 accesses.
> > > > > > 
> > > > > > When these are not set, we fall back to the regular map_bus/read/write
> > > > > > operations, so all existing drivers keep working, and bridges that
> > > > > > have identical operations continue to only require one set.
> > > > > 
> > > > > This adds new config accessor functions to struct pci_ops and makes
> > > > > the callers responsible for figuring out which one to use.  The
> > > > > benefit is to reduce code duplication in some host bridge drivers
> > > > > (DesignWare and MVEBU so far).
> > > > > 
> > > > > From a design perspective, I'm not comfortable with moving this burden
> > > > > from the host bridge drivers to the callers of the config accessors.
> > > > ...
> > > 
> > > > Maybe we can simply change them to use the normal API and come up with
> > > > a way to make the pci_ops harder to misuse? Would it make you feel better
> > > > if we also renamed .read/.write into .read_type0/.write_type0 or something
> > > > like that?
> > > 
> > > I'm trying to get a better feel for the tradeoff here.  It seems like
> > > an API complication vs. code duplication.
> > > 
> > > I don't really think the callers should have to figure out which
> > > accessor to use.  How much of a benefit do we really gain by
> > > complicating the callers?  We've managed for quite a few years with
> > > the current scheme, and it seems like only a couple new ARM platforms
> > > would benefit.
> > 
> > I just did a count of the implementations of pci_ops: I found 107
> > instances of 'struct pci_ops', and 67 of them treat type0 and type1
> > access differently in some form.
> > 
> > I'd estimate that about half of them, or roughly a third of the total
> > instances would benefit from my change, if we were to do them again.
> > Clearly there is no need to change the existing code here when it works,
> > unless the benefit is very clear and the code is actively maintained.
> > 
> > In some cases, the difference is only that the root bus has a limited
> > set of devices that are allowed to be accessed, so there would
> > likely be no benefit of this, compared to e.g. yet another callback
> > that checks the validity.
> > Some other instances have type0 registers at a different memory location
> > from type1, some use different layout inside of that space, and some
> > are completely different.
> 
> The type0/type1 distinction still seems out of place to me at the call
> site.  Is there any other reason a caller would care about the
> difference between type0 and type1?

Another idea, based on my RFC patches to make pci_host_bridge the primary
structure for probing PCI: we could split the old 'bus::pci_ops' with its
traditional read/write interface out from a new structure,
pci_host_bridge::pci_host_bridge_ops, which would also contain the other
callbacks we recently added to pci_ops, alongside type0/type1 accessors.

We could then have a set of default pci_ops that call
pci_host_bridge_ops->type0_read/type0_write/type1_read/type1_write,
and those in turn would take a pci_host_bridge as an argument along with
the bus, device, function and register numbers instead of a bus pointer
and devfn/where.

This way all existing code can keep working, but we can convert host
drivers (if desired) to provide only pci_host_bridge_ops and no
pci_ops, while making it easier to define those with a more modern
interface.

	Arnd
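
A rough sketch of what such a structure could look like, following the
description above; this is purely hypothetical and none of these names
exist in the tree:

    /* Hypothetical: per-host-bridge accessors with an explicit
     * type 0 / type 1 split, taking bus/devfn/register numbers. */
    struct pci_host_bridge_ops {
            int (*type0_read)(struct pci_host_bridge *bridge,
                              unsigned int busnr, unsigned int devfn,
                              int where, int size, u32 *val);
            int (*type0_write)(struct pci_host_bridge *bridge,
                               unsigned int busnr, unsigned int devfn,
                               int where, int size, u32 val);
            int (*type1_read)(struct pci_host_bridge *bridge,
                              unsigned int busnr, unsigned int devfn,
                              int where, int size, u32 *val);
            int (*type1_write)(struct pci_host_bridge *bridge,
                               unsigned int busnr, unsigned int devfn,
                               int where, int size, u32 val);
    };

A default set of pci_ops could then translate bus->number and devfn into
these calls, keeping every existing user of struct pci_ops working
unchanged while new host drivers implement only the bridge-level ops.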
Bjorn Helgaas June 7, 2016, 12:28 a.m. UTC | #8
On Thu, Jun 02, 2016 at 05:06:55PM +0200, Arnd Bergmann wrote:
> On Thursday, June 2, 2016 9:00:01 AM CEST Bjorn Helgaas wrote:
> > > I just did a count of the implementations of pci_ops: I found 107
> > > instances of 'struct pci_ops', and 67 of them treat type0 and type1
> > > access differently in some form.
> > > 
> > > I'd estimate that about half of them, or roughly a third of the total
> > > instances would benefit from my change, if we were to do them again.
> > > Clearly there is no need to change the existing code here when it works,
> > > unless the benefit is very clear and the code is actively maintained.
> > > 
> > > In some cases, the difference is only that the root bus has a limited
> > > set of devices that are allowed to be accessed, so there would
> > > likely be no benefit of this, compared to e.g. yet another callback
> > > that checks the validity.
> > > Some other instances have type0 registers at a different memory location
> > > from type1, some use different layout inside of that space, and some
> > > are completely different.
> > 
> > The type0/type1 distinction still seems out of place to me at the call
> > site.  Is there any other reason a caller would care about the
> > difference between type0 and type1?
> 
> The callers really shouldn't care, but they also shouldn't call the
> pci_ops function pointer (and as we found earlier, there are only
> three such callers).
> 
> The distinction between type0 and type1 in my mind is an implementation
> detail of the pci_{read,write}_config_{byte,word,dword} functions
> that call the low-level operations here.

The caller is performing one abstract operation: reading or writing
config space of a PCI device.  In the caller's context, it makes no
difference whether it's a type0 or type1 access.

Moving the test from the host bridge driver to pci_read_config_byte()
does move a little code from the callee to the caller, and there are
more callees than callers, so in that sense it does remove code
overall.  If you consider a single driver by itself, I'm not sure it
makes much difference.

The pcie-designware.c patch is a net removal of 17 lines, but that's
not all from moving the type0/type1 test: restructuring along the same
lines but keeping the original type0/type1 test removes about 9 lines.

Anyway, I think I'd rather work first on your RFC patches to make
pci_host_bridge the primary structure for probing PCI.  I think
there will be a *lot* of benefit there.

Bjorn
Arnd Bergmann June 7, 2016, 8:13 a.m. UTC | #9
On Monday, June 6, 2016 7:28:22 PM CEST Bjorn Helgaas wrote:
> The caller is performing one abstract operation: reading or writing
> config space of a PCI device.  In the caller's context, it makes no
> difference whether it's a type0 or type1 access.
> 
> Moving the test from the host bridge driver to pci_read_config_byte()
> does move a little code from the callee to the caller, and there are
> more callees than callers, so in that sense it does remove code
> overall.  If you consider a single driver by itself, I'm not sure it
> makes much difference.
> 
> The pcie-designware.c patch is a net removal of 17 lines, but that's
> not all from moving the type0/type1 test: restructuring along the same
> lines but keeping the original type0/type1 test removes about 9 lines.
> 
> Anyway, I think I'd rather work first on your RFC patches to make
> pci_host_bridge the primary structure for probing PCI.  I think
> there will be a *lot* of benefit there.

Fair enough. This series started from the review of the Rockchip
driver, and the idea was to make that one simpler, but given that
we already have several dozen drivers doing the same thing, adding
one more isn't going to have a significant impact now.

	Arnd

Patch

diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index d11cdbb8fba3..263606ece211 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -34,9 +34,12 @@  int pci_bus_read_config_##size \
 	u32 data = 0;							\
 	if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER;	\
 	raw_spin_lock_irqsave(&pci_lock, flags);			\
-	res = bus->ops->read(bus, devfn, pos, len, &data);		\
+	if (!bus->parent == 0 && bus->ops->read_bridge)			\
+		res = bus->ops->read_bridge(bus, devfn, pos, len, &data);	\
+	else								\
+		res = bus->ops->read(bus, devfn, pos, len, &data);	\
 	*value = (type)data;						\
-	raw_spin_unlock_irqrestore(&pci_lock, flags);		\
+	raw_spin_unlock_irqrestore(&pci_lock, flags);			\
 	return res;							\
 }
 
@@ -48,8 +51,11 @@  int pci_bus_write_config_##size \
 	unsigned long flags;						\
 	if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER;	\
 	raw_spin_lock_irqsave(&pci_lock, flags);			\
-	res = bus->ops->write(bus, devfn, pos, len, value);		\
-	raw_spin_unlock_irqrestore(&pci_lock, flags);		\
+	if (!bus->parent && bus->ops->write_bridge)			\
+		res = bus->ops->write_bridge(bus, devfn, pos, len, value);\
+	else								\
+		res = bus->ops->write(bus, devfn, pos, len, value);	\
+	raw_spin_unlock_irqrestore(&pci_lock, flags);			\
 	return res;							\
 }
 
@@ -72,7 +78,11 @@  int pci_generic_config_read(struct pci_bus *bus, unsigned int devfn,
 {
 	void __iomem *addr;
 
-	addr = bus->ops->map_bus(bus, devfn, where);
+	if (!bus->parent && bus->ops->map_bridge)
+		addr = bus->ops->map_bridge(bus, devfn, where);
+	else
+		addr = bus->ops->map_bus(bus, devfn, where);
+
 	if (!addr) {
 		*val = ~0;
 		return PCIBIOS_DEVICE_NOT_FOUND;
@@ -94,7 +104,10 @@  int pci_generic_config_write(struct pci_bus *bus, unsigned int devfn,
 {
 	void __iomem *addr;
 
-	addr = bus->ops->map_bus(bus, devfn, where);
+	if (!bus->parent && bus->ops->map_bridge)
+		addr = bus->ops->map_bridge(bus, devfn, where);
+	else
+		addr = bus->ops->map_bus(bus, devfn, where);
 	if (!addr)
 		return PCIBIOS_DEVICE_NOT_FOUND;
 
@@ -114,7 +127,10 @@  int pci_generic_config_read32(struct pci_bus *bus, unsigned int devfn,
 {
 	void __iomem *addr;
 
-	addr = bus->ops->map_bus(bus, devfn, where & ~0x3);
+	if (!bus->parent && bus->ops->map_bridge)
+		addr = bus->ops->map_bridge(bus, devfn, where);
+	else
+		addr = bus->ops->map_bus(bus, devfn, where & ~0x3);
 	if (!addr) {
 		*val = ~0;
 		return PCIBIOS_DEVICE_NOT_FOUND;
@@ -135,7 +151,10 @@  int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
 	void __iomem *addr;
 	u32 mask, tmp;
 
-	addr = bus->ops->map_bus(bus, devfn, where & ~0x3);
+	if (!bus->parent && bus->ops->map_bridge)
+		addr = bus->ops->map_bridge(bus, devfn, where);
+	else
+		addr = bus->ops->map_bus(bus, devfn, where & ~0x3);
 	if (!addr)
 		return PCIBIOS_DEVICE_NOT_FOUND;
 
diff --git a/include/linux/pci.h b/include/linux/pci.h
index df41c4645911..2b1d08771b36 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -580,6 +580,9 @@  struct pci_ops {
 	void __iomem *(*map_bus)(struct pci_bus *bus, unsigned int devfn, int where);
 	int (*read)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val);
 	int (*write)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val);
+	void __iomem *(*map_bridge)(struct pci_bus *bus, unsigned int devfn, int where);
+	int (*read_bridge)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val);
+	int (*write_bridge)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val);
 };
 
 /*