diff mbox series

[v4,01/12] IB/hfi1: Check if pcie_capability_read_*() reads ~0

Message ID 20200731110240.98326-2-refactormyself@gmail.com
State New
Delegated to: Bjorn Helgaas
Series PCI: Remove '*val = 0' from pcie_capability_read_*()

Commit Message

Saheed O. Bolarinwa July 31, 2020, 11:02 a.m. UTC
On failure, pcie_capability_read_dword() sets its last parameter,
val, to 0. In this case dn and up will be 0, so aspm_hw_l1_supported()
will return false.
However, with Patch 12/12, it is possible that val is set to ~0 on
failure. This would introduce a bug because (~0 & x) == x for any x,
so ASPM_L1_SUPPORTED() would yield 0x2 in dn and up, and true would
be returned even though the read had actually failed.

Since the value ~0 is invalid here, reset dn and up to 0 when ~0 is
read into them; this ensures false is returned on failure.

Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
Signed-off-by: Saheed O. Bolarinwa <refactormyself@gmail.com>
---

 drivers/infiniband/hw/hfi1/aspm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

Bjorn Helgaas July 31, 2020, 1:55 p.m. UTC | #1
[+cc Michael, Ashutosh, Ian, Puranjay]

On Fri, Jul 31, 2020 at 01:02:29PM +0200, Saheed O. Bolarinwa wrote:
> On failure pcie_capability_read_dword() sets it's last parameter,
> val to 0. In this case dn and up will be 0, so aspm_hw_l1_supported()
> will return false.
> However, with Patch 12/12, it is possible that val is set to ~0 on
> failure. This would introduce a bug because (x & x) == (~0 & x). So
> with dn and up being 0x02, a true value is return when the read has
> actually failed.
> 
> Since, the value ~0 is invalid here,
> 
> Reset dn and up to 0 when a value of ~0 is read into them, this
> ensures false is returned on failure in this case.
> 
> Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
> Signed-off-by: Saheed O. Bolarinwa <refactormyself@gmail.com>
> ---
> 
>  drivers/infiniband/hw/hfi1/aspm.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hfi1/aspm.c b/drivers/infiniband/hw/hfi1/aspm.c
> index a3c53be4072c..9605b2145d19 100644
> --- a/drivers/infiniband/hw/hfi1/aspm.c
> +++ b/drivers/infiniband/hw/hfi1/aspm.c
> @@ -33,13 +33,13 @@ static bool aspm_hw_l1_supported(struct hfi1_devdata *dd)
>  		return false;
>  
>  	pcie_capability_read_dword(dd->pcidev, PCI_EXP_LNKCAP, &dn);
> -	dn = ASPM_L1_SUPPORTED(dn);
> +	dn = (dn == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(dn);
>  
>  	pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &up);
> -	up = ASPM_L1_SUPPORTED(up);
> +	up = (up == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(up);

I don't want to change this.  The driver shouldn't be mucking with
ASPM at all.  The PCI core should take care of this automatically.  If
it doesn't, we need to fix the core.

If the driver needs to disable ASPM to work around device errata or
something, the core has an interface for that.  But the driver should
not override the system-wide policy for managing ASPM.
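For context, the interface referred to here is pci_disable_link_state(); a hedged sketch of how a driver opts a device out of L1 (illustrative only, not something hfi1 currently does):

/* Illustrative only: if the device truly cannot tolerate L1 exit
 * latency, ask the PCI core to keep its link out of ASPM L1 from the
 * driver's probe path, instead of writing link registers directly. */
static void example_opt_out_of_l1(struct pci_dev *pdev)
{
	if (pci_disable_link_state(pdev, PCIE_LINK_STATE_L1))
		pci_warn(pdev, "could not disable ASPM L1\n");
}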

Ah, some archaeology finds affa48de8417 ("staging/rdma/hfi1: Add
support for enabling/disabling PCIe ASPM"), which says:

  hfi1 HW has a high PCIe ASPM L1 exit latency and also advertises an
  acceptable latency less than actual ASPM latencies.

That suggests that either there is a device defect, e.g., advertising
incorrect ASPM latencies, or a PCI core defect, e.g., incorrectly
enabling ASPM when the path exit latency exceeds that hfi1 can
tolerate.

Coincidentally, Ian recently debugged a problem in how the PCI core
computes exit latencies over a path [1].

Can anybody supply details about the hfi1 ASPM parameters, e.g., the
output of "sudo lspci -vv"?  Any details about the configuration where
the problem occurs?  Is there a switch in the path?

[1] https://lore.kernel.org/r/20200727213045.2117855-1-ian.kumlien@gmail.com

>  	/* ASPM works on A-step but is reported as not supported */
> -	return (!!dn || is_ax(dd)) && !!up;
> +	return (dn || is_ax(dd)) && up;
>  }
>  
>  /* Set L1 entrance latency for slower entry to L1 */
> -- 
> 2.18.4
>
Ian Kumlien Aug. 3, 2020, 11:46 a.m. UTC | #2
On Fri, Jul 31, 2020 at 3:55 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Michael, Ashutosh, Ian, Puranjay]
>
> On Fri, Jul 31, 2020 at 01:02:29PM +0200, Saheed O. Bolarinwa wrote:
> > On failure pcie_capability_read_dword() sets it's last parameter,
> > val to 0. In this case dn and up will be 0, so aspm_hw_l1_supported()
> > will return false.
> > However, with Patch 12/12, it is possible that val is set to ~0 on
> > failure. This would introduce a bug because (x & x) == (~0 & x). So
> > with dn and up being 0x02, a true value is return when the read has
> > actually failed.
> >
> > Since, the value ~0 is invalid here,
> >
> > Reset dn and up to 0 when a value of ~0 is read into them, this
> > ensures false is returned on failure in this case.
> >
> > Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
> > Signed-off-by: Saheed O. Bolarinwa <refactormyself@gmail.com>
> > ---
> >
> >  drivers/infiniband/hw/hfi1/aspm.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/infiniband/hw/hfi1/aspm.c b/drivers/infiniband/hw/hfi1/aspm.c
> > index a3c53be4072c..9605b2145d19 100644
> > --- a/drivers/infiniband/hw/hfi1/aspm.c
> > +++ b/drivers/infiniband/hw/hfi1/aspm.c
> > @@ -33,13 +33,13 @@ static bool aspm_hw_l1_supported(struct hfi1_devdata *dd)
> >               return false;
> >
> >       pcie_capability_read_dword(dd->pcidev, PCI_EXP_LNKCAP, &dn);
> > -     dn = ASPM_L1_SUPPORTED(dn);
> > +     dn = (dn == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(dn);
> >
> >       pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &up);
> > -     up = ASPM_L1_SUPPORTED(up);
> > +     up = (up == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(up);
>
> I don't want to change this.  The driver shouldn't be mucking with
> ASPM at all.  The PCI core should take care of this automatically.  If
> it doesn't, we need to fix the core.
>
> If the driver needs to disable ASPM to work around device errata or
> something, the core has an interface for that.  But the driver should
> not override the system-wide policy for managing ASPM.
>
> Ah, some archaeology finds affa48de8417 ("staging/rdma/hfi1: Add
> support for enabling/disabling PCIe ASPM"), which says:
>
>   hfi1 HW has a high PCIe ASPM L1 exit latency and also advertises an
>   acceptable latency less than actual ASPM latencies.
>
> That suggests that either there is a device defect, e.g., advertising
> incorrect ASPM latencies, or a PCI core defect, e.g., incorrectly
> enabling ASPM when the path exit latency exceeds that hfi1 can
> tolerate.
>
> Coincidentally, Ian recently debugged a problem in how the PCI core
> computes exit latencies over a path [1].
>
> Can anybody supply details about the hfi1 ASPM parameters, e.g., the
> output of "sudo lspci -vv"?  Any details about the configuration where
> the problem occurs?  Is there a switch in the path?
>
> [1] https://lore.kernel.org/r/20200727213045.2117855-1-ian.kumlien@gmail.com
>
> >       /* ASPM works on A-step but is reported as not supported */
> > -     return (!!dn || is_ax(dd)) && !!up;
> > +     return (dn || is_ax(dd)) && up;
> >  }
> >
> >  /* Set L1 entrance latency for slower entry to L1 */
> > --
> > 2.18.4
> >

My experience with PCIe is very limited, but the more I look at things
the more worried I get...

Anyway, I have made some changes, could you try the attached patch and
see if it makes a difference?

Changes:
L0s and L1 should only apply to links that actually have them enabled;
don't store or increase values if they don't.
Handle L0s as well; currently it clobbers both directions, since I'm
not certain about the upstream/downstream distinction.

diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
index b17e5ffd31b1..0d93ae065f73 100644
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -434,7 +434,8 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,

 static void pcie_aspm_check_latency(struct pci_dev *endpoint)
 {
-       u32 latency, l1_switch_latency = 0;
+       u32 latency, l1_max_latency = 0, l1_switch_latency = 0,
+               l0s_max_latency = 0;
        struct aspm_latency *acceptable;
        struct pcie_link_state *link;

@@ -447,15 +448,24 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
        acceptable = &link->acceptable[PCI_FUNC(endpoint->devfn)];

        while (link) {
-               /* Check upstream direction L0s latency */
-               if ((link->aspm_capable & ASPM_STATE_L0S_UP) &&
-                   (link->latency_up.l0s > acceptable->l0s))
-                       link->aspm_capable &= ~ASPM_STATE_L0S_UP;
-
-               /* Check downstream direction L0s latency */
-               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
-                   (link->latency_dw.l0s > acceptable->l0s))
-                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
+               if (link->aspm_capable & ASPM_STATE_L0S) {
+                       u32 l0s_up = 0, l0s_dw = 0;
+
+                       /* Check upstream direction L0s latency */
+                       if (link->aspm_capable & ASPM_STATE_L0S_UP)
+                               l0s_up = link->latency_up.l0s;
+
+                       /* Check downstream direction L0s latency */
+                       if (link->aspm_capable & ASPM_STATE_L0S_DW)
+                               l0s_dw = link->latency_dw.l0s;
+
+                       l0s_max_latency += max_t(u32, l0s_up, l0s_dw);
+
+                       /* If the latency exceeds, disable both */
+                       if (l0s_max_latency > acceptable->l0s)
+                               link->aspm_capable &= ~ASPM_STATE_L0S;
+               }
+
                /*
                 * Check L1 latency.
                 * Every switch on the path to root complex need 1
@@ -469,11 +479,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
                 * L1 exit latencies advertised by a device include L1
                 * substate latencies (and hence do not do any check).
                 */
-               latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
-               if ((link->aspm_capable & ASPM_STATE_L1) &&
-                   (latency + l1_switch_latency > acceptable->l1))
-                       link->aspm_capable &= ~ASPM_STATE_L1;
-               l1_switch_latency += 1000;
+               if (link->aspm_capable & ASPM_STATE_L1) {
+                       latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
+                       l1_max_latency = max_t(u32, latency, l1_max_latency);
+                       if (l1_max_latency + l1_switch_latency > acceptable->l1)
+                               link->aspm_capable &= ~ASPM_STATE_L1;
+                       l1_switch_latency += 1000;
+               }

                link = link->parent;
        }

Patch

diff --git a/drivers/infiniband/hw/hfi1/aspm.c b/drivers/infiniband/hw/hfi1/aspm.c
index a3c53be4072c..9605b2145d19 100644
--- a/drivers/infiniband/hw/hfi1/aspm.c
+++ b/drivers/infiniband/hw/hfi1/aspm.c
@@ -33,13 +33,13 @@  static bool aspm_hw_l1_supported(struct hfi1_devdata *dd)
 		return false;
 
 	pcie_capability_read_dword(dd->pcidev, PCI_EXP_LNKCAP, &dn);
-	dn = ASPM_L1_SUPPORTED(dn);
+	dn = (dn == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(dn);
 
 	pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &up);
-	up = ASPM_L1_SUPPORTED(up);
+	up = (up == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(up);
 
 	/* ASPM works on A-step but is reported as not supported */
-	return (!!dn || is_ax(dd)) && !!up;
+	return (dn || is_ax(dd)) && up;
 }
 
 /* Set L1 entrance latency for slower entry to L1 */