
[v2] PCI: Do not use pcie_get_speed_cap() to determine when to start waiting

Message ID 20200514133043.27429-1-mika.westerberg@linux.intel.com (mailing list archive)
State Accepted, archived
Delegated to: Bjorn Helgaas
Series [v2] PCI: Do not use pcie_get_speed_cap() to determine when to start waiting

Commit Message

Mika Westerberg May 14, 2020, 1:30 p.m. UTC
Kai-Heng Feng reported that it takes a long time (>1s) to resume
Thunderbolt-connected PCIe devices from both runtime suspend and system
sleep (s2idle).

On these PCIe downstream ports the Link Capabilities 2 register
(PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the link is
then capped to 2.5 GT/s by the Link Control 2 (PCI_EXP_LNKCTL2) register.
This possibility was not considered in pci_bridge_wait_for_secondary_bus(),
so it ended up waiting for 1100 ms because these ports do not support
active link layer reporting either.

PCIe spec 5.0 section 6.6.1 mandates that we must wait a minimum of 100 ms
before sending a configuration request to the device below the port if the
port does not support speeds > 5 GT/s; if it does, we first need to wait
for the data link layer to become active before waiting that 100 ms.

PCIe spec 5.0 section 7.5.3.6 further says that all downstream ports that
support speeds > 5 GT/s must support active link layer reporting, so
instead of looking at the speed we can check for the active link layer
reporting capability and determine how to wait based on that (the two go
hand in hand).

While at it, restructure the code a bit so that the delay is always
issued in pci_bridge_wait_for_secondary_bus() by passing a value of 0 to
pcie_wait_for_link_delay().

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
v2: Restructured the code a bit so that it should be more readable now

Previous version can be found here:

  https://lore.kernel.org/linux-pci/20200514123105.GW2571@lahna.fi.intel.com/

@Kai-heng, since this patch is slightly different from what you tried, I
wonder if you could check that it still solves the issue and does not break
anything? I tested it myself but it would be nice to get your Tested-by to
make sure it still works.

 drivers/pci/pci.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)
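
For context, the dev->link_active_reporting flag used below is set during
enumeration from the Data Link Layer Link Active Reporting Capable bit of
the Link Capabilities register. A minimal sketch of that check, assuming
the standard pcie_capability_* accessors (the actual probe code may differ
slightly):

  u32 lnkcap;

  pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &lnkcap);
  if (lnkcap & PCI_EXP_LNKCAP_DLLLARC)
    pdev->link_active_reporting = 1;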

Comments

Bjorn Helgaas May 14, 2020, 10:41 p.m. UTC | #1
On Thu, May 14, 2020 at 04:30:43PM +0300, Mika Westerberg wrote:
> Kai-Heng Feng reported that it takes a long time (>1s) to resume
> Thunderbolt-connected PCIe devices from both runtime suspend and system
> sleep (s2idle).
>
> On these PCIe downstream ports the Link Capabilities 2 register
> (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the link is
> then capped to 2.5 GT/s by the Link Control 2 (PCI_EXP_LNKCTL2) register.
> This possibility was not considered in pci_bridge_wait_for_secondary_bus(),
> so it ended up waiting for 1100 ms because these ports do not support
> active link layer reporting either.

I don't think PCI_EXP_LNKCTL2 is relevant here.  I think the lack of
Data Link Layer Link Active is just a chip erratum.  Is that
documented anywhere?

> PCIe spec 5.0 section 6.6.1 mandates that we must wait a minimum of 100 ms
> before sending a configuration request to the device below the port if the
> port does not support speeds > 5 GT/s; if it does, we first need to wait
> for the data link layer to become active before waiting that 100 ms.
>
> PCIe spec 5.0 section 7.5.3.6 further says that all downstream ports that
> support speeds > 5 GT/s must support active link layer reporting, so
> instead of looking at the speed we can check for the active link layer
> reporting capability and determine how to wait based on that (the two go
> hand in hand).
>
> While at it, restructure the code a bit so that the delay is always
> issued in pci_bridge_wait_for_secondary_bus() by passing a value of 0 to
> pcie_wait_for_link_delay().
> 
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
> Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> ---
> v2: Restructured the code a bit so that it should be more readable now
> 
> Previous version can be found here:
> 
>   https://lore.kernel.org/linux-pci/20200514123105.GW2571@lahna.fi.intel.com/
> 
> @Kai-heng, since this patch is slightly different from what you tried, I
> wonder if you could check that it still solves the issue and does not break
> anything? I tested it myself but it would be nice to get your Tested-by to
> make sure it still works.
> 
>  drivers/pci/pci.c | 23 ++++++++++++++---------
>  1 file changed, 14 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 595fcf59843f..590c73dc7e0d 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4660,7 +4660,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
>   * pcie_wait_for_link_delay - Wait until link is active or inactive
>   * @pdev: Bridge device
>   * @active: waiting for active or inactive?
> - * @delay: Delay to wait after link has become active (in ms)
> + * @delay: Delay to wait after link has become active (in ms). Specify %0
> + *	   for no delay.
>   *
>   * Use this to wait till link becomes active or inactive.
>   */
> @@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
>  		msleep(10);
>  		timeout -= 10;
>  	}

I think maybe the code above (not included in the context here) should
say:

  if (!pdev->link_active_reporting) {
    msleep(timeout + delay);
    return true;
  }

to match the rest of 4827d63891b6 ("PCI/PM: Add
pcie_wait_for_link_delay()").  What do you think?  If you agree, I'd
make that a separate patch because it's not related to the current
fix.
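
A rough sketch of where that early exit would sit at the top of
pcie_wait_for_link_delay(), assuming the rest of the function stays as-is
(names mirror the existing code, this is illustrative only):

  static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
                                       int delay)
  {
    int timeout = 1000;

    /*
     * Ports without Data Link Layer Link Active reporting cannot be
     * polled; wait out the full timeout plus the caller's delay instead.
     */
    if (!pdev->link_active_reporting) {
      msleep(timeout + delay);
      return true;
    }

    /* ... otherwise poll PCI_EXP_LNKSTA_DLLLA as before ... */
  }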

> -	if (active && ret)
> +	if (active && ret && delay)
>  		msleep(delay);
>  	else if (ret != active)
>  		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
> @@ -4822,17 +4823,21 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>  	if (!pcie_downstream_port(dev))
>  		return;
>  
> -	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> -		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
> -		msleep(delay);
> -	} else {
> -		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
> -			delay);
> -		if (!pcie_wait_for_link_delay(dev, true, delay)) {
> +	/*
> +	 * Since PCIe spec mandates that all downstream ports that support
> +	 * speeds greater than 5 GT/s must support data link layer active
> +	 * reporting so if it is supported we poll for the link to become
> +	 * active before issuing the mandatory delay.
> +	 */
> +	if (dev->link_active_reporting) {
> +		pci_dbg(dev, "waiting for link to train\n");
> +		if (!pcie_wait_for_link_delay(dev, true, 0)) {
>  			/* Did not train, no need to wait any further */
>  			return;
>  		}
>  	}
> +	pci_dbg(child, "waiting %d ms to become accessible\n", delay);
> +	msleep(delay);

Nice solution, I like this a lot.

>  	if (!pci_device_is_present(child)) {
>  		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
> -- 
> 2.26.2
>
Kai-Heng Feng May 15, 2020, 7:56 a.m. UTC | #2
> On May 14, 2020, at 21:30, Mika Westerberg <mika.westerberg@linux.intel.com> wrote:
> 
> Kai-Heng Feng reported that it takes a long time (>1s) to resume
> Thunderbolt-connected PCIe devices from both runtime suspend and system
> sleep (s2idle).
>
> On these PCIe downstream ports the Link Capabilities 2 register
> (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the link is
> then capped to 2.5 GT/s by the Link Control 2 (PCI_EXP_LNKCTL2) register.
> This possibility was not considered in pci_bridge_wait_for_secondary_bus(),
> so it ended up waiting for 1100 ms because these ports do not support
> active link layer reporting either.
> 
> PCIe spec 5.0 section 6.6.1 mandates that we must wait a minimum of 100 ms
> before sending a configuration request to the device below the port if the
> port does not support speeds > 5 GT/s; if it does, we first need to wait
> for the data link layer to become active before waiting that 100 ms.
>
> PCIe spec 5.0 section 7.5.3.6 further says that all downstream ports that
> support speeds > 5 GT/s must support active link layer reporting, so
> instead of looking at the speed we can check for the active link layer
> reporting capability and determine how to wait based on that (the two go
> hand in hand).
>
> While at it, restructure the code a bit so that the delay is always
> issued in pci_bridge_wait_for_secondary_bus() by passing a value of 0 to
> pcie_wait_for_link_delay().
> 
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
> Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>

Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>

Thanks!

> ---
> v2: Restructured the code a bit so that it should be more readable now
> 
> Previous version can be found here:
> 
>  https://lore.kernel.org/linux-pci/20200514123105.GW2571@lahna.fi.intel.com/
> 
> @Kai-heng, since this patch is slightly different from what you tried, I
> wonder if you could check that it still solves the issue and does not break
> anything? I tested it myself but it would be nice to get your Tested-by to
> make sure it still works.
> 
> drivers/pci/pci.c | 23 ++++++++++++++---------
> 1 file changed, 14 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 595fcf59843f..590c73dc7e0d 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4660,7 +4660,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
>  * pcie_wait_for_link_delay - Wait until link is active or inactive
>  * @pdev: Bridge device
>  * @active: waiting for active or inactive?
> - * @delay: Delay to wait after link has become active (in ms)
> + * @delay: Delay to wait after link has become active (in ms). Specify %0
> + *	   for no delay.
>  *
>  * Use this to wait till link becomes active or inactive.
>  */
> @@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
> 		msleep(10);
> 		timeout -= 10;
> 	}
> -	if (active && ret)
> +	if (active && ret && delay)
> 		msleep(delay);
> 	else if (ret != active)
> 		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
> @@ -4822,17 +4823,21 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
> 	if (!pcie_downstream_port(dev))
> 		return;
> 
> -	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> -		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
> -		msleep(delay);
> -	} else {
> -		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
> -			delay);
> -		if (!pcie_wait_for_link_delay(dev, true, delay)) {
> +	/*
> +	 * Since PCIe spec mandates that all downstream ports that support
> +	 * speeds greater than 5 GT/s must support data link layer active
> +	 * reporting so if it is supported we poll for the link to become
> +	 * active before issuing the mandatory delay.
> +	 */
> +	if (dev->link_active_reporting) {
> +		pci_dbg(dev, "waiting for link to train\n");
> +		if (!pcie_wait_for_link_delay(dev, true, 0)) {
> 			/* Did not train, no need to wait any further */
> 			return;
> 		}
> 	}
> +	pci_dbg(child, "waiting %d ms to become accessible\n", delay);
> +	msleep(delay);
> 
> 	if (!pci_device_is_present(child)) {
> 		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
> -- 
> 2.26.2
>
Mika Westerberg May 15, 2020, 9:55 a.m. UTC | #3
On Thu, May 14, 2020 at 05:41:32PM -0500, Bjorn Helgaas wrote:
> On Thu, May 14, 2020 at 04:30:43PM +0300, Mika Westerberg wrote:
> > Kai-Heng Feng reported that it takes a long time (>1s) to resume
> > Thunderbolt-connected PCIe devices from both runtime suspend and system
> > sleep (s2idle).
> >
> > On these PCIe downstream ports the Link Capabilities 2 register
> > (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the link is
> > then capped to 2.5 GT/s by the Link Control 2 (PCI_EXP_LNKCTL2) register.
> > This possibility was not considered in pci_bridge_wait_for_secondary_bus(),
> > so it ended up waiting for 1100 ms because these ports do not support
> > active link layer reporting either.
> 
> I don't think PCI_EXP_LNKCTL2 is relevant here.  I think the lack of
> Data Link Layer Link Active is just a chip erratum.  Is that
> documented anywhere?

I think it is relevant because if the hardware hard-codes LNKCTL2 to
always target 2.5 GT/s, then it effectively does not need to implement
Data Link Layer Link Active reporting because the link speed never goes
higher than that.

I don't think these devices have public documentation.

> > PCIe spec 5.0 section 6.6.1 mandates that we must wait a minimum of 100 ms
> > before sending a configuration request to the device below the port if the
> > port does not support speeds > 5 GT/s; if it does, we first need to wait
> > for the data link layer to become active before waiting that 100 ms.
> >
> > PCIe spec 5.0 section 7.5.3.6 further says that all downstream ports that
> > support speeds > 5 GT/s must support active link layer reporting, so
> > instead of looking at the speed we can check for the active link layer
> > reporting capability and determine how to wait based on that (the two go
> > hand in hand).
> >
> > While at it, restructure the code a bit so that the delay is always
> > issued in pci_bridge_wait_for_secondary_bus() by passing a value of 0 to
> > pcie_wait_for_link_delay().
> > 
> > Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
> > Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> > ---
> > v2: Restructured the code a bit so that it should be more readable now
> > 
> > Previous version can be found here:
> > 
> >   https://lore.kernel.org/linux-pci/20200514123105.GW2571@lahna.fi.intel.com/
> > 
> > @Kai-heng, since this patch is slightly different from what you tried, I
> > wonder if you could check that it still solves the issue and does not break
> > anything? I tested it myself but it would be nice to get your Tested-by to
> > make sure it still works.
> > 
> >  drivers/pci/pci.c | 23 ++++++++++++++---------
> >  1 file changed, 14 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > index 595fcf59843f..590c73dc7e0d 100644
> > --- a/drivers/pci/pci.c
> > +++ b/drivers/pci/pci.c
> > @@ -4660,7 +4660,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
> >   * pcie_wait_for_link_delay - Wait until link is active or inactive
> >   * @pdev: Bridge device
> >   * @active: waiting for active or inactive?
> > - * @delay: Delay to wait after link has become active (in ms)
> > + * @delay: Delay to wait after link has become active (in ms). Specify %0
> > + *	   for no delay.
> >   *
> >   * Use this to wait till link becomes active or inactive.
> >   */
> > @@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
> >  		msleep(10);
> >  		timeout -= 10;
> >  	}
> 
> I think maybe the code above (not included in the context here) should
> say:
> 
>   if (!pdev->link_active_reporting) {
>     msleep(timeout + delay);
>     return true;
>   }
> 
> to match the rest of 4827d63891b6 ("PCI/PM: Add
> pcie_wait_for_link_delay()").  What do you think?  If you agree, I'd
> make that a separate patch because it's not related to the current
> fix.

Yes, I agree.

> > -	if (active && ret)
> > +	if (active && ret && delay)
> >  		msleep(delay);
> >  	else if (ret != active)
> >  		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
> > @@ -4822,17 +4823,21 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
> >  	if (!pcie_downstream_port(dev))
> >  		return;
> >  
> > -	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> > -		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
> > -		msleep(delay);
> > -	} else {
> > -		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
> > -			delay);
> > -		if (!pcie_wait_for_link_delay(dev, true, delay)) {
> > +	/*
> > +	 * Since PCIe spec mandates that all downstream ports that support
> > +	 * speeds greater than 5 GT/s must support data link layer active
> > +	 * reporting so if it is supported we poll for the link to become
> > +	 * active before issuing the mandatory delay.
> > +	 */
> > +	if (dev->link_active_reporting) {
> > +		pci_dbg(dev, "waiting for link to train\n");
> > +		if (!pcie_wait_for_link_delay(dev, true, 0)) {
> >  			/* Did not train, no need to wait any further */
> >  			return;
> >  		}
> >  	}
> > +	pci_dbg(child, "waiting %d ms to become accessible\n", delay);
> > +	msleep(delay);
> 
> Nice solution, I like this a lot.

Thanks :)
Bjorn Helgaas May 15, 2020, 8:53 p.m. UTC | #4
On Fri, May 15, 2020 at 12:55:35PM +0300, Mika Westerberg wrote:
> On Thu, May 14, 2020 at 05:41:32PM -0500, Bjorn Helgaas wrote:
> > On Thu, May 14, 2020 at 04:30:43PM +0300, Mika Westerberg wrote:
> > > Kai-Heng Feng reported that it takes a long time (>1s) to resume
> > > Thunderbolt-connected PCIe devices from both runtime suspend and system
> > > sleep (s2idle).
> > >
> > > On these PCIe downstream ports the Link Capabilities 2 register
> > > (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the link is
> > > then capped to 2.5 GT/s by the Link Control 2 (PCI_EXP_LNKCTL2) register.
> > > This possibility was not considered in pci_bridge_wait_for_secondary_bus(),
> > > so it ended up waiting for 1100 ms because these ports do not support
> > > active link layer reporting either.
> > 
> > I don't think PCI_EXP_LNKCTL2 is relevant here.  I think the lack of
> > Data Link Layer Link Active is just a chip erratum.  Is that
> > documented anywhere?
> 
> I think it is relevant because if the hardware hard-codes LNKCTL2 to
> always target 2.5 GT/s, then it effectively does not need to implement
> Data Link Layer Link Active reporting because the link speed never goes
> higher than that.

I don't think it's reasonable to expect software to check Link
Capabilities 2, then try to write Link Control 2 and figure out
whether the target speed is hard-wired.  I think these devices
are just broken (at least per spec).

> > > @@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
> > >  		msleep(10);
> > >  		timeout -= 10;
> > >  	}
> > 
> > I think maybe the code above (not included in the context here) should
> > say:
> > 
> >   if (!pdev->link_active_reporting) {
> >     msleep(timeout + delay);
> >     return true;
> >   }
> > 
> > to match the rest of 4827d63891b6 ("PCI/PM: Add
> > pcie_wait_for_link_delay()").  What do you think?  If you agree, I'd
> > make that a separate patch because it's not related to the current
> > fix.
> 
> Yes, I agree.

OK, I added a patch to do this and applied both to pci/pm for v5.8.
Thanks!

Bjorn
Mika Westerberg May 16, 2020, 7:55 a.m. UTC | #5
On Fri, May 15, 2020 at 03:53:21PM -0500, Bjorn Helgaas wrote:
> On Fri, May 15, 2020 at 12:55:35PM +0300, Mika Westerberg wrote:
> > On Thu, May 14, 2020 at 05:41:32PM -0500, Bjorn Helgaas wrote:
> > > On Thu, May 14, 2020 at 04:30:43PM +0300, Mika Westerberg wrote:
> > > > Kai-Heng Feng reported that it takes a long time (>1s) to resume
> > > > Thunderbolt-connected PCIe devices from both runtime suspend and system
> > > > sleep (s2idle).
> > > >
> > > > On these PCIe downstream ports the Link Capabilities 2 register
> > > > (PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the link is
> > > > then capped to 2.5 GT/s by the Link Control 2 (PCI_EXP_LNKCTL2) register.
> > > > This possibility was not considered in pci_bridge_wait_for_secondary_bus(),
> > > > so it ended up waiting for 1100 ms because these ports do not support
> > > > active link layer reporting either.
> > > 
> > > I don't think PCI_EXP_LNKCTL2 is relevant here.  I think the lack of
> > > Data Link Layer Link Active is just a chip erratum.  Is that
> > > documented anywhere?
> > 
> > I think it is relevant because if the hardware hard-codes LNKCTL2 to
> > always target 2.5 GT/s, then it effectively does not need to implement
> > Data Link Layer Link Active reporting because the link speed never goes
> > higher than that.
> 
> I don't think it's reasonable to expect software to check Link
> Capabilities 2, then try to write Link Control 2 and figure out
> whether the target speed is hard-wired.  I think these devices
> are just broken (at least per spec).

Software does not need to figure that out. It needs to check this field
if it needs to know the "actual" supported link speed.

Spec specifically allows this:

  The default value of this field is the highest Link speed supported by
  the component (as reported in the Max Link Speed field of the Link
  Capabilities Register) unless the corresponding platform/form factor
  requires a different default value.

and it even allows hardwiring this:

  Components that support only the 2.5 GT/s speed are permitted to
  hardwire this field to 0000b.
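
A minimal sketch of how software could read the Target Link Speed field to
see such a cap, assuming the pcie_capability_* accessors and the
PCI_EXP_LNKCTL2_TLS* definitions from pci_regs.h (illustrative only, not
part of this patch):

  u16 lnkctl2;
  bool capped_to_2_5gt;

  pcie_capability_read_word(pdev, PCI_EXP_LNKCTL2, &lnkctl2);
  /* Target Link Speed limits how fast the link is allowed to train */
  capped_to_2_5gt = (lnkctl2 & PCI_EXP_LNKCTL2_TLS) ==
                    PCI_EXP_LNKCTL2_TLS_2_5GT;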

Patch

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 595fcf59843f..590c73dc7e0d 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4660,7 +4660,8 @@  static int pci_pm_reset(struct pci_dev *dev, int probe)
  * pcie_wait_for_link_delay - Wait until link is active or inactive
  * @pdev: Bridge device
  * @active: waiting for active or inactive?
- * @delay: Delay to wait after link has become active (in ms)
+ * @delay: Delay to wait after link has become active (in ms). Specify %0
+ *	   for no delay.
  *
  * Use this to wait till link becomes active or inactive.
  */
@@ -4701,7 +4702,7 @@  static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
 		msleep(10);
 		timeout -= 10;
 	}
-	if (active && ret)
+	if (active && ret && delay)
 		msleep(delay);
 	else if (ret != active)
 		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
@@ -4822,17 +4823,21 @@  void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	if (!pcie_downstream_port(dev))
 		return;
 
-	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
-		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
-		msleep(delay);
-	} else {
-		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
-			delay);
-		if (!pcie_wait_for_link_delay(dev, true, delay)) {
+	/*
+	 * Since PCIe spec mandates that all downstream ports that support
+	 * speeds greater than 5 GT/s must support data link layer active
+	 * reporting so if it is supported we poll for the link to become
+	 * active before issuing the mandatory delay.
+	 */
+	if (dev->link_active_reporting) {
+		pci_dbg(dev, "waiting for link to train\n");
+		if (!pcie_wait_for_link_delay(dev, true, 0)) {
 			/* Did not train, no need to wait any further */
 			return;
 		}
 	}
+	pci_dbg(child, "waiting %d ms to become accessible\n", delay);
+	msleep(delay);
 
 	if (!pci_device_is_present(child)) {
 		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);