[v6,2/2] PCI: Enable runtime pm of the host bridge

Message ID 20241017-runtime_pm-v6-2-55eab5c2c940@quicinc.com (mailing list archive)
State Superseded
Series PCI: Enable runtime pm of the host bridge

Commit Message

Krishna Chaitanya Chundru Oct. 17, 2024, 3:35 p.m. UTC
The controller driver is the parent device of the PCIe host bridge, the
PCI-PCI bridge, and the PCIe endpoint, as shown below.

        PCIe controller(Top level parent & parent of host bridge)
                        |
                        v
        PCIe Host bridge(Parent of PCI-PCI bridge)
                        |
                        v
        PCI-PCI bridge(Parent of endpoint driver)
                        |
                        v
                PCIe endpoint driver

Now, when the controller device goes to runtime suspend, the PM framework
checks the runtime PM state of the child device (host bridge) and finds
that runtime PM is disabled for it. So it allows the parent (controller
device) to go to runtime suspend. Only if the child device's state were
'active' would it prevent the parent from being suspended.

It is a property of the runtime PM framework that it can only
follow continuous dependency chains.  That is, if there is a device
with runtime PM disabled in a dependency chain, runtime PM cannot be
enabled for devices below it and above it in that chain both at the
same time.

Since runtime PM is disabled for the host bridge, the state of the child
devices under the host bridge is not taken into account by the PM
framework for the top-level parent, the PCIe controller. So the PM
framework allows the controller driver to enter runtime suspend
irrespective of the state of the devices under the host bridge. This
breaks the topology and can also lead to PM issues such as the controller
driver going to runtime suspend while the endpoint driver is still doing
transfers.

Because of the above, in order to enable runtime PM for a PCIe
controller device, one needs to ensure that runtime PM is enabled for
all devices in every dependency chain between it and any PCIe endpoint
(as runtime PM is enabled for PCIe endpoints).

This means that runtime PM needs to be enabled for the host bridge
device, which is present in all of these dependency chains.

After this change, the host bridge device will be runtime-suspended by
the runtime PM framework automatically after suspending its last child,
and it will be runtime-resumed automatically before resuming its first
child. This allows the runtime PM framework to track dependencies
between the host bridge device and all of its descendants.
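
As a rough illustration of that dependency tracking, a hypothetical endpoint
driver I/O path could look like the sketch below (the function name is made
up for illustration and is not part of this patch):

    #include <linux/pci.h>
    #include <linux/pm_runtime.h>

    /* Hypothetical endpoint driver I/O path - illustrative sketch only */
    static int example_ep_start_transfer(struct pci_dev *pdev)
    {
            int ret;

            /*
             * Runtime-resuming the endpoint resumes its ancestors first:
             * the PCI-PCI bridge, then the host bridge (now that runtime
             * PM is enabled for it), then the PCIe controller. None of
             * them can runtime-suspend while this reference is held.
             */
            ret = pm_runtime_get_sync(&pdev->dev);
            if (ret < 0) {
                    pm_runtime_put_noidle(&pdev->dev);
                    return ret;
            }

            /* ... perform the transfer ... */

            pm_runtime_put(&pdev->dev);
            return 0;
    }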

Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
---
Changes in v6:
- no change
Changes in v5:
- call pm_runtime_no_callbacks() as suggested by Rafael.
- include the commit texts as suggested by Rafael.
- Link to v4: https://lore.kernel.org/linux-pci/20240708-runtime_pm-v4-1-c02a3663243b@quicinc.com/
Changes in v4:
- Changed pm_runtime_enable() to devm_pm_runtime_enable() (suggested by mayank)
- Link to v3: https://lore.kernel.org/lkml/20240609-runtime_pm-v3-1-3d0460b49d60@quicinc.com/
Changes in v3:
- Moved the runtime PM API calls from the dwc driver to the PCI framework
  as they are applicable to all (suggested by mani)
- Updated the commit message.
- Link to v2: https://lore.kernel.org/all/20240305-runtime_pm_enable-v2-1-a849b74091d1@quicinc.com
Changes in v2:
- Updated commit message as suggested by mani.
- Link to v1: https://lore.kernel.org/r/20240219-runtime_pm_enable-v1-1-d39660310504@quicinc.com
---
 drivers/pci/probe.c | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Bjorn Helgaas Oct. 29, 2024, 3:35 p.m. UTC | #1
On Thu, Oct 17, 2024 at 09:05:51PM +0530, Krishna chaitanya chundru wrote:
> The Controller driver is the parent device of the PCIe host bridge,
> PCI-PCI bridge and PCIe endpoint as shown below.
> 
>         PCIe controller(Top level parent & parent of host bridge)
>                         |
>                         v
>         PCIe Host bridge(Parent of PCI-PCI bridge)
>                         |
>                         v
>         PCI-PCI bridge(Parent of endpoint driver)
>                         |
>                         v
>                 PCIe endpoint driver
> 
> Now, when the controller device goes to runtime suspend, PM framework
> will check the runtime PM state of the child device (host bridge) and
> will find it to be disabled. So it will allow the parent (controller
> device) to go to runtime suspend. Only if the child device's state was
> 'active' it will prevent the parent to get suspended.
> 
> It is a property of the runtime PM framework that it can only
> follow continuous dependency chains.  That is, if there is a device
> with runtime PM disabled in a dependency chain, runtime PM cannot be
> enabled for devices below it and above it in that chain both at the
> same time.
> 
> Since runtime PM is disabled for host bridge, the state of the child
> devices under the host bridge is not taken into account by PM framework
> for the top level parent, PCIe controller. So PM framework, allows
> the controller driver to enter runtime PM irrespective of the state
> of the devices under the host bridge. And this causes the topology
> breakage and also possible PM issues like controller driver goes to
> runtime suspend while endpoint driver is doing some transfers.
> 
> Because of the above, in order to enable runtime PM for a PCIe
> controller device, one needs to ensure that runtime PM is enabled for
> all devices in every dependency chain between it and any PCIe endpoint
> (as runtime PM is enabled for PCIe endpoints).
> 
> This means that runtime PM needs to be enabled for the host bridge
> device, which is present in all of these dependency chains.

Earlier I asked about how we can verify that no other drivers need a
change like the starfive one:
https://lore.kernel.org/r/20241012140852.GA603197@bhelgaas

I guess this sentence is basically how we verify all drivers are safe
with this change?  

Since this patch adds devm_pm_runtime_enable() in pci_host_probe(),
can we expand this along the following lines so it's more specific about
what we need to verify?

  Every host bridge driver must call pm_runtime_enable() before
  runtime PM is enabled by pci_host_probe().

Please correct me if that's not the right requirement.
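
For reference, a minimal sketch of that ordering in a controller driver's
probe path might look like the following (driver and function names here are
purely illustrative, not taken from any existing driver):

    #include <linux/pci.h>
    #include <linux/platform_device.h>
    #include <linux/pm_runtime.h>

    /* Illustrative controller probe - not a real driver */
    static int example_pcie_probe(struct platform_device *pdev)
    {
            struct pci_host_bridge *bridge;
            int ret;

            bridge = devm_pci_alloc_host_bridge(&pdev->dev, 0);
            if (!bridge)
                    return -ENOMEM;

            /* ... controller-specific init: clocks, PHY, resets ... */

            /*
             * Enable runtime PM on the controller (parent) before
             * pci_host_probe() enables it for the host bridge (child);
             * otherwise the PM core warns about enabling runtime PM for
             * an inactive device with active children.
             */
            pm_runtime_set_active(&pdev->dev);
            pm_runtime_enable(&pdev->dev);

            ret = pci_host_probe(bridge);
            if (ret)
                    pm_runtime_disable(&pdev->dev);

            return ret;
    }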

> After this change, the host bridge device will be runtime-suspended
> by the runtime PM framework automatically after suspending its last
> child and it will be runtime-resumed automatically before resuming its
> first child which will allow the runtime PM framework to track
> dependencies between the host bridge device and all of its
> descendants.
> 
> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> Signed-off-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> ---
> Changes in v6:
> - no change
> Changes in v5:
> - call pm_runtime_no_callbacks() as suggested by Rafael.
> - include the commit texts as suggested by Rafael.
> - Link to v4: https://lore.kernel.org/linux-pci/20240708-runtime_pm-v4-1-c02a3663243b@quicinc.com/
> Changes in v4:
> - Changed pm_runtime_enable() to devm_pm_runtime_enable() (suggested by mayank)
> - Link to v3: https://lore.kernel.org/lkml/20240609-runtime_pm-v3-1-3d0460b49d60@quicinc.com/
> Changes in v3:
> - Moved the runtime API call's from the dwc driver to PCI framework
>   as it is applicable for all (suggested by mani)
> - Updated the commit message.
> - Link to v2: https://lore.kernel.org/all/20240305-runtime_pm_enable-v2-1-a849b74091d1@quicinc.com
> Changes in v2:
> - Updated commit message as suggested by mani.
> - Link to v1: https://lore.kernel.org/r/20240219-runtime_pm_enable-v1-1-d39660310504@quicinc.com
> ---
>  drivers/pci/probe.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> index 4f68414c3086..8409e1dde0d1 100644
> --- a/drivers/pci/probe.c
> +++ b/drivers/pci/probe.c
> @@ -3106,6 +3106,11 @@ int pci_host_probe(struct pci_host_bridge *bridge)
>  		pcie_bus_configure_settings(child);
>  
>  	pci_bus_add_devices(bus);
> +
> +	pm_runtime_set_active(&bridge->dev);
> +	pm_runtime_no_callbacks(&bridge->dev);
> +	devm_pm_runtime_enable(&bridge->dev);
> +
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(pci_host_probe);
> 
> -- 
> 2.34.1
>
Krishna Chaitanya Chundru Nov. 1, 2024, 1:34 a.m. UTC | #2
On 10/29/2024 9:05 PM, Bjorn Helgaas wrote:
> On Thu, Oct 17, 2024 at 09:05:51PM +0530, Krishna chaitanya chundru wrote:
>> The Controller driver is the parent device of the PCIe host bridge,
>> PCI-PCI bridge and PCIe endpoint as shown below.
>>
>>          PCIe controller(Top level parent & parent of host bridge)
>>                          |
>>                          v
>>          PCIe Host bridge(Parent of PCI-PCI bridge)
>>                          |
>>                          v
>>          PCI-PCI bridge(Parent of endpoint driver)
>>                          |
>>                          v
>>                  PCIe endpoint driver
>>
>> Now, when the controller device goes to runtime suspend, PM framework
>> will check the runtime PM state of the child device (host bridge) and
>> will find it to be disabled. So it will allow the parent (controller
>> device) to go to runtime suspend. Only if the child device's state was
>> 'active' it will prevent the parent to get suspended.
>>
>> It is a property of the runtime PM framework that it can only
>> follow continuous dependency chains.  That is, if there is a device
>> with runtime PM disabled in a dependency chain, runtime PM cannot be
>> enabled for devices below it and above it in that chain both at the
>> same time.
>>
>> Since runtime PM is disabled for host bridge, the state of the child
>> devices under the host bridge is not taken into account by PM framework
>> for the top level parent, PCIe controller. So PM framework, allows
>> the controller driver to enter runtime PM irrespective of the state
>> of the devices under the host bridge. And this causes the topology
>> breakage and also possible PM issues like controller driver goes to
>> runtime suspend while endpoint driver is doing some transfers.
>>
>> Because of the above, in order to enable runtime PM for a PCIe
>> controller device, one needs to ensure that runtime PM is enabled for
>> all devices in every dependency chain between it and any PCIe endpoint
>> (as runtime PM is enabled for PCIe endpoints).
>>
>> This means that runtime PM needs to be enabled for the host bridge
>> device, which is present in all of these dependency chains.
> 
> Earlier I asked about how we can verify that no other drivers need a
> change like the starfive one:
> https://lore.kernel.org/r/20241012140852.GA603197@bhelgaas
>
Hi Bjorn,

I added those details in the cover letter, as you suggested:
"PM framework expectes parent runtime pm enabled before enabling runtime
pm of the child. As PCIe starfive device is enabling runtime pm after
the pci_host_probe which enables runtime pm of the child device i.e for
the bridge device a warning is shown saying "pcie-starfive 940000000.pcie:
Enabling runtime PM for inactive device with active children" and also
shows possible circular locking dependency detected message.

As it is must to enable parent device's runtime PM before enabling child's
runtime pm as the pcie-starfive device runtime pm is enabled after child
runtime starfive device is seeing the warning.

In the first patch fix the pcie-starfive driver by enabling runtime
pm before calling pci_host_probe().

All other PCIe controller drivers are enabling runtime pm before
calling pci_host_probe() which is as expected so don't require any
fix like pcie-starfive driver."

> I guess this sentence is basically how we verify all drivers are safe
> with this change?
> 
> Since this patch adds devm_pm_runtime_enable() in pci_host_probe(),
> can we expand this along the lines of this so it's more specific about
> what we need to verify?
> 
>    Every host bridge driver must call pm_runtime_enable() before
>    runtime PM is enabled by pci_host_probe().
> 
> Please correct me if that's not the right requirement.> 
Yes, this is the correct requirement. Do you want us to add this to
this patch?

- Krishna Chaitanya.
>> After this change, the host bridge device will be runtime-suspended
>> by the runtime PM framework automatically after suspending its last
>> child and it will be runtime-resumed automatically before resuming its
>> first child which will allow the runtime PM framework to track
>> dependencies between the host bridge device and all of its
>> descendants.
>>
>> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>> Signed-off-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
>> ---
>> Changes in v6:
>> - no change
>> Changes in v5:
>> - call pm_runtime_no_callbacks() as suggested by Rafael.
>> - include the commit texts as suggested by Rafael.
>> - Link to v4: https://lore.kernel.org/linux-pci/20240708-runtime_pm-v4-1-c02a3663243b@quicinc.com/
>> Changes in v4:
>> - Changed pm_runtime_enable() to devm_pm_runtime_enable() (suggested by mayank)
>> - Link to v3: https://lore.kernel.org/lkml/20240609-runtime_pm-v3-1-3d0460b49d60@quicinc.com/
>> Changes in v3:
>> - Moved the runtime API call's from the dwc driver to PCI framework
>>    as it is applicable for all (suggested by mani)
>> - Updated the commit message.
>> - Link to v2: https://lore.kernel.org/all/20240305-runtime_pm_enable-v2-1-a849b74091d1@quicinc.com
>> Changes in v2:
>> - Updated commit message as suggested by mani.
>> - Link to v1: https://lore.kernel.org/r/20240219-runtime_pm_enable-v1-1-d39660310504@quicinc.com
>> ---
>>   drivers/pci/probe.c | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
>> index 4f68414c3086..8409e1dde0d1 100644
>> --- a/drivers/pci/probe.c
>> +++ b/drivers/pci/probe.c
>> @@ -3106,6 +3106,11 @@ int pci_host_probe(struct pci_host_bridge *bridge)
>>   		pcie_bus_configure_settings(child);
>>   
>>   	pci_bus_add_devices(bus);
>> +
>> +	pm_runtime_set_active(&bridge->dev);
>> +	pm_runtime_no_callbacks(&bridge->dev);
>> +	devm_pm_runtime_enable(&bridge->dev);
>> +
>>   	return 0;
>>   }
>>   EXPORT_SYMBOL_GPL(pci_host_probe);
>>
>> -- 
>> 2.34.1
>>
Bjorn Helgaas Nov. 1, 2024, 10:20 p.m. UTC | #3
On Fri, Nov 01, 2024 at 07:04:46AM +0530, Krishna Chaitanya Chundru wrote:
> On 10/29/2024 9:05 PM, Bjorn Helgaas wrote:
> > On Thu, Oct 17, 2024 at 09:05:51PM +0530, Krishna chaitanya chundru wrote:
> > > The Controller driver is the parent device of the PCIe host bridge,
> > > PCI-PCI bridge and PCIe endpoint as shown below.
> > > 
> > >          PCIe controller(Top level parent & parent of host bridge)
> > >                          |
> > >                          v
> > >          PCIe Host bridge(Parent of PCI-PCI bridge)
> > >                          |
> > >                          v
> > >          PCI-PCI bridge(Parent of endpoint driver)
> > >                          |
> > >                          v
> > >                  PCIe endpoint driver
> > > 
> > > Now, when the controller device goes to runtime suspend, PM framework
> > > will check the runtime PM state of the child device (host bridge) and
> > > will find it to be disabled. So it will allow the parent (controller
> > > device) to go to runtime suspend. Only if the child device's state was
> > > 'active' it will prevent the parent to get suspended.
> > > 
> > > It is a property of the runtime PM framework that it can only
> > > follow continuous dependency chains.  That is, if there is a device
> > > with runtime PM disabled in a dependency chain, runtime PM cannot be
> > > enabled for devices below it and above it in that chain both at the
> > > same time.
> > > 
> > > Since runtime PM is disabled for host bridge, the state of the child
> > > devices under the host bridge is not taken into account by PM framework
> > > for the top level parent, PCIe controller. So PM framework, allows
> > > the controller driver to enter runtime PM irrespective of the state
> > > of the devices under the host bridge. And this causes the topology
> > > breakage and also possible PM issues like controller driver goes to
> > > runtime suspend while endpoint driver is doing some transfers.
> > > 
> > > Because of the above, in order to enable runtime PM for a PCIe
> > > controller device, one needs to ensure that runtime PM is enabled for
> > > all devices in every dependency chain between it and any PCIe endpoint
> > > (as runtime PM is enabled for PCIe endpoints).
> > > 
> > > This means that runtime PM needs to be enabled for the host bridge
> > > device, which is present in all of these dependency chains.
> > 
> > Earlier I asked about how we can verify that no other drivers need a
> > change like the starfive one:
> > https://lore.kernel.org/r/20241012140852.GA603197@bhelgaas
> 
> I added those details in cover letter as you suggested to add them in
> cover letter.

Indeed I did suggest it for the cover letter, sorry for my confusion
at not finding it in the commit log.

I actually think we need something in the patch commit log itself,
since the cover letter doesn't make it into git.

And probably a comment in the code as well, since this seems to change
the requirements on the callers of pci_host_probe().
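
For example, a comment along the following lines above the new calls could
capture that requirement (the wording is only a suggestion, not part of the
posted patch):

    	pci_bus_add_devices(bus);

    	/*
    	 * Runtime PM is enabled here for the host bridge. A controller
    	 * driver that wants to use runtime PM itself must have called
    	 * pm_runtime_enable() on its own device before calling
    	 * pci_host_probe().
    	 */
    	pm_runtime_set_active(&bridge->dev);
    	pm_runtime_no_callbacks(&bridge->dev);
    	devm_pm_runtime_enable(&bridge->dev);

    	return 0;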

> "PM framework expectes parent runtime pm enabled before enabling runtime
> pm of the child. As PCIe starfive device is enabling runtime pm after
> the pci_host_probe which enables runtime pm of the child device i.e for
> the bridge device a warning is shown saying "pcie-starfive 940000000.pcie:
> Enabling runtime PM for inactive device with active children" and also
> shows possible circular locking dependency detected message.
> 
> As it is must to enable parent device's runtime PM before enabling child's
> runtime pm as the pcie-starfive device runtime pm is enabled after child
> runtime starfive device is seeing the warning.
> 
> In the first patch fix the pcie-starfive driver by enabling runtime
> pm before calling pci_host_probe().
> 
> All other PCIe controller drivers are enabling runtime pm before
> calling pci_host_probe() which is as expected so don't require any
> fix like pcie-starfive driver."

I'm sure that you looked at the following paths through
pci_host_common_probe(), which as far as I can tell, do not call
pm_runtime_enable() before pci_host_probe():

  apple_pcie_probe
    pci_host_common_probe
      pci_host_probe

  mc_host_probe
    pci_host_common_probe
      pci_host_probe

And the following use pci_host_common_probe() directly as their
.probe() method:

  gen_pci_driver in pci-host-common.c
  thunder_ecam_driver in pci-thunder-ecam.c
  thunder_pem_driver in pci-thunder-pem.c
  hisi_pcie_almost_ecam_driver in dwc/pcie-hisi.c
  
Are all these safe as well?  These all end up in pci_host_probe()
without having done anything to enable runtime PM on the
PCIe controller platform_device.

Looking at your diagram above, IIUC this patch enables runtime PM for
the PCIe host bridge, and the requirement is that runtime PM is
already enabled for the PCIe controller above it?

Is it always *possible* for that PCIe controller to enable runtime PM?
Might there exist PCIe controllers that cannot enable runtime PM
because they lack something in hardware or in the driver?

Maybe this patch should only enable runtime PM for the host bridge if
the controller already has runtime PM enabled?
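
A minimal sketch of that alternative inside pci_host_probe(), untested and
only to make the suggestion concrete, could be:

    	pci_bus_add_devices(bus);

    	/*
    	 * Let the host bridge participate in runtime PM only when the
    	 * parent (the PCIe controller) has already enabled runtime PM;
    	 * otherwise leave the host bridge runtime-PM-disabled as before.
    	 */
    	if (bridge->dev.parent && pm_runtime_enabled(bridge->dev.parent)) {
    		pm_runtime_set_active(&bridge->dev);
    		pm_runtime_no_callbacks(&bridge->dev);
    		devm_pm_runtime_enable(&bridge->dev);
    	}

    	return 0;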

> > I guess this sentence is basically how we verify all drivers are safe
> > with this change?
> > 
> > Since this patch adds devm_pm_runtime_enable() in pci_host_probe(),
> > can we expand this along the lines of this so it's more specific about
> > what we need to verify?
> > 
> >    Every host bridge driver must call pm_runtime_enable() before
> >    runtime PM is enabled by pci_host_probe().
> > 
> > Please correct me if that's not the right requirement.>
>
> yes this is correct requirement only. Do you want us to add this for
> this patch .

I would like to have a one-sentence statement of what the callers need
to do, including the actual function names.  Otherwise it's a pretty
big burden on reviewers to verify things.

> > > After this change, the host bridge device will be runtime-suspended
> > > by the runtime PM framework automatically after suspending its last
> > > child and it will be runtime-resumed automatically before resuming its
> > > first child which will allow the runtime PM framework to track
> > > dependencies between the host bridge device and all of its
> > > descendants.
> > > 
> > > Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > > Signed-off-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> > > ---
> > > Changes in v6:
> > > - no change
> > > Changes in v5:
> > > - call pm_runtime_no_callbacks() as suggested by Rafael.
> > > - include the commit texts as suggested by Rafael.
> > > - Link to v4: https://lore.kernel.org/linux-pci/20240708-runtime_pm-v4-1-c02a3663243b@quicinc.com/
> > > Changes in v4:
> > > - Changed pm_runtime_enable() to devm_pm_runtime_enable() (suggested by mayank)
> > > - Link to v3: https://lore.kernel.org/lkml/20240609-runtime_pm-v3-1-3d0460b49d60@quicinc.com/
> > > Changes in v3:
> > > - Moved the runtime API call's from the dwc driver to PCI framework
> > >    as it is applicable for all (suggested by mani)
> > > - Updated the commit message.
> > > - Link to v2: https://lore.kernel.org/all/20240305-runtime_pm_enable-v2-1-a849b74091d1@quicinc.com
> > > Changes in v2:
> > > - Updated commit message as suggested by mani.
> > > - Link to v1: https://lore.kernel.org/r/20240219-runtime_pm_enable-v1-1-d39660310504@quicinc.com
> > > ---
> > >   drivers/pci/probe.c | 5 +++++
> > >   1 file changed, 5 insertions(+)
> > > 
> > > diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> > > index 4f68414c3086..8409e1dde0d1 100644
> > > --- a/drivers/pci/probe.c
> > > +++ b/drivers/pci/probe.c
> > > @@ -3106,6 +3106,11 @@ int pci_host_probe(struct pci_host_bridge *bridge)
> > >   		pcie_bus_configure_settings(child);
> > >   	pci_bus_add_devices(bus);
> > > +
> > > +	pm_runtime_set_active(&bridge->dev);
> > > +	pm_runtime_no_callbacks(&bridge->dev);
> > > +	devm_pm_runtime_enable(&bridge->dev);
> > > +
> > >   	return 0;
> > >   }
> > >   EXPORT_SYMBOL_GPL(pci_host_probe);
> > > 
> > > -- 
> > > 2.34.1
> > >
Krishna Chaitanya Chundru Nov. 4, 2024, 6:09 a.m. UTC | #4
On 11/2/2024 3:50 AM, Bjorn Helgaas wrote:
> On Fri, Nov 01, 2024 at 07:04:46AM +0530, Krishna Chaitanya Chundru wrote:
>> On 10/29/2024 9:05 PM, Bjorn Helgaas wrote:
>>> On Thu, Oct 17, 2024 at 09:05:51PM +0530, Krishna chaitanya chundru wrote:
>>>> The Controller driver is the parent device of the PCIe host bridge,
>>>> PCI-PCI bridge and PCIe endpoint as shown below.
>>>>
>>>>           PCIe controller(Top level parent & parent of host bridge)
>>>>                           |
>>>>                           v
>>>>           PCIe Host bridge(Parent of PCI-PCI bridge)
>>>>                           |
>>>>                           v
>>>>           PCI-PCI bridge(Parent of endpoint driver)
>>>>                           |
>>>>                           v
>>>>                   PCIe endpoint driver
>>>>
>>>> Now, when the controller device goes to runtime suspend, PM framework
>>>> will check the runtime PM state of the child device (host bridge) and
>>>> will find it to be disabled. So it will allow the parent (controller
>>>> device) to go to runtime suspend. Only if the child device's state was
>>>> 'active' it will prevent the parent to get suspended.
>>>>
>>>> It is a property of the runtime PM framework that it can only
>>>> follow continuous dependency chains.  That is, if there is a device
>>>> with runtime PM disabled in a dependency chain, runtime PM cannot be
>>>> enabled for devices below it and above it in that chain both at the
>>>> same time.
>>>>
>>>> Since runtime PM is disabled for host bridge, the state of the child
>>>> devices under the host bridge is not taken into account by PM framework
>>>> for the top level parent, PCIe controller. So PM framework, allows
>>>> the controller driver to enter runtime PM irrespective of the state
>>>> of the devices under the host bridge. And this causes the topology
>>>> breakage and also possible PM issues like controller driver goes to
>>>> runtime suspend while endpoint driver is doing some transfers.
>>>>
>>>> Because of the above, in order to enable runtime PM for a PCIe
>>>> controller device, one needs to ensure that runtime PM is enabled for
>>>> all devices in every dependency chain between it and any PCIe endpoint
>>>> (as runtime PM is enabled for PCIe endpoints).
>>>>
>>>> This means that runtime PM needs to be enabled for the host bridge
>>>> device, which is present in all of these dependency chains.
>>>
>>> Earlier I asked about how we can verify that no other drivers need a
>>> change like the starfive one:
>>> https://lore.kernel.org/r/20241012140852.GA603197@bhelgaas
>>
>> I added those details in cover letter as you suggested to add them in
>> cover letter.
> 
> Indeed I did suggest it for the cover letter, sorry for my confusion
> at not finding it in the commit log.
> 
> I actually think we need something in the patch commit log itself,
> since the cover letter doesn't make it into git.
> 
> And probably a comment in the code as well, since this seems to change
> the requirements on the callers of pci_host_probe().
> 
ack
>> "PM framework expectes parent runtime pm enabled before enabling runtime
>> pm of the child. As PCIe starfive device is enabling runtime pm after
>> the pci_host_probe which enables runtime pm of the child device i.e for
>> the bridge device a warning is shown saying "pcie-starfive 940000000.pcie:
>> Enabling runtime PM for inactive device with active children" and also
>> shows possible circular locking dependency detected message.
>>
>> As it is must to enable parent device's runtime PM before enabling child's
>> runtime pm as the pcie-starfive device runtime pm is enabled after child
>> runtime starfive device is seeing the warning.
>>
>> In the first patch fix the pcie-starfive driver by enabling runtime
>> pm before calling pci_host_probe().
>>
>> All other PCIe controller drivers are enabling runtime pm before
>> calling pci_host_probe() which is as expected so don't require any
>> fix like pcie-starfive driver."
> 
> I'm sure that you looked at the following paths through
> pci_host_common_probe(), which as far as I can tell, do not call
> pm_runtime_enable() before pci_host_probe():
> 
>    apple_pcie_probe
>      pci_host_common_probe
>        pci_host_probe
> 
>    mc_host_probe
>      pci_host_common_probe
>        pci_host_probe
> 
> And the following use pci_host_common_probe() directly as their
> .probe() method:
> 
>    gen_pci_driver in pci-host-common.c
>    thunder_ecam_driver in pci-thunder-ecam.c
>    thunder_pem_driver in pci-thunder-pem.c
>    hisi_pcie_almost_ecam_driver in dwc/pcie-hisi.c
>    
> Are all these safe as well?  These all end up in pci_host_probe()
> without having done anything to enable runtime PM on the
> PCIe controller platform_device.
> 
These drivers are not calling pm_runtime_enable() themselves, so this
change will not have any impact on them.
> Looking at your diagram above, IIUC this patch enables runtime PM for
> the PCIe host bridge, and the requirement is that runtime PM is
> already enabled for the PCIe controller above it?
> 
> Is it always *possible* for that PCIe controller to enable runtime PM?
> Might there exist PCIe controllers that cannot enable runtime PM
> because they lack something in hardware or in the driver?
> 
> Maybe this patch should only enable runtime PM for the host bridge if
> the controller already has runtime PM enabled?
> 
Irrespective of the controller's runtime PM, we can enable host bridge
runtime PM. If a controller driver wants to enable runtime PM, it needs
to make sure its runtime PM is enabled before we enable runtime PM of
the host bridge here. Otherwise this change has no impact, as such
drivers are not even registering with runtime PM.

>>> I guess this sentence is basically how we verify all drivers are safe
>>> with this change?
>>>
>>> Since this patch adds devm_pm_runtime_enable() in pci_host_probe(),
>>> can we expand this along the lines of this so it's more specific about
>>> what we need to verify?
>>>
>>>     Every host bridge driver must call pm_runtime_enable() before
>>>     runtime PM is enabled by pci_host_probe().
>>>
>>> Please correct me if that's not the right requirement.>
>>
>> yes this is correct requirement only. Do you want us to add this for
>> this patch .
> 
> I would like to have a one-sentence statement of what the callers need
> to do, including the actual function names.  Otherwise it's a pretty
> big burden on reviewers to verify things.
> 
Ack, once the above discussions are concluded I will send a new patch
series with these details.

- Krishna chaitanya
>>>> After this change, the host bridge device will be runtime-suspended
>>>> by the runtime PM framework automatically after suspending its last
>>>> child and it will be runtime-resumed automatically before resuming its
>>>> first child which will allow the runtime PM framework to track
>>>> dependencies between the host bridge device and all of its
>>>> descendants.
>>>>
>>>> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>>> Signed-off-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
>>>> ---
>>>> Changes in v6:
>>>> - no change
>>>> Changes in v5:
>>>> - call pm_runtime_no_callbacks() as suggested by Rafael.
>>>> - include the commit texts as suggested by Rafael.
>>>> - Link to v4: https://lore.kernel.org/linux-pci/20240708-runtime_pm-v4-1-c02a3663243b@quicinc.com/
>>>> Changes in v4:
>>>> - Changed pm_runtime_enable() to devm_pm_runtime_enable() (suggested by mayank)
>>>> - Link to v3: https://lore.kernel.org/lkml/20240609-runtime_pm-v3-1-3d0460b49d60@quicinc.com/
>>>> Changes in v3:
>>>> - Moved the runtime API call's from the dwc driver to PCI framework
>>>>     as it is applicable for all (suggested by mani)
>>>> - Updated the commit message.
>>>> - Link to v2: https://lore.kernel.org/all/20240305-runtime_pm_enable-v2-1-a849b74091d1@quicinc.com
>>>> Changes in v2:
>>>> - Updated commit message as suggested by mani.
>>>> - Link to v1: https://lore.kernel.org/r/20240219-runtime_pm_enable-v1-1-d39660310504@quicinc.com
>>>> ---
>>>>    drivers/pci/probe.c | 5 +++++
>>>>    1 file changed, 5 insertions(+)
>>>>
>>>> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
>>>> index 4f68414c3086..8409e1dde0d1 100644
>>>> --- a/drivers/pci/probe.c
>>>> +++ b/drivers/pci/probe.c
>>>> @@ -3106,6 +3106,11 @@ int pci_host_probe(struct pci_host_bridge *bridge)
>>>>    		pcie_bus_configure_settings(child);
>>>>    	pci_bus_add_devices(bus);
>>>> +
>>>> +	pm_runtime_set_active(&bridge->dev);
>>>> +	pm_runtime_no_callbacks(&bridge->dev);
>>>> +	devm_pm_runtime_enable(&bridge->dev);
>>>> +
>>>>    	return 0;
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(pci_host_probe);
>>>>
>>>> -- 
>>>> 2.34.1
>>>>

Patch

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 4f68414c3086..8409e1dde0d1 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -3106,6 +3106,11 @@  int pci_host_probe(struct pci_host_bridge *bridge)
 		pcie_bus_configure_settings(child);
 
 	pci_bus_add_devices(bus);
+
+	pm_runtime_set_active(&bridge->dev);
+	pm_runtime_no_callbacks(&bridge->dev);
+	devm_pm_runtime_enable(&bridge->dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(pci_host_probe);