
[2/2] infiniband: Update callers of xa_store_irq() since its parameters have changed

Message ID 20201104023213.760-2-xiaofeng.yan2012@gmail.com (mailing list archive)
State New, archived
Series [1/2,xarray] Fix an issue with memory allocated with the GFP_KERNEL flag under spinlocks

Commit Message

xiaofeng.yan Nov. 4, 2020, 2:32 a.m. UTC
From: "xiaofeng.yan" <yanxiaofeng7@jd.com>

The xa_store_irq() function now takes three parameters because the
parameter "gfp_t gfp" has been removed.

Signed-off-by: xiaofeng.yan <yanxiaofeng7@jd.com>
---
 drivers/infiniband/core/cm.c            | 2 +-
 drivers/infiniband/hw/hns/hns_roce_qp.c | 2 +-
 drivers/infiniband/hw/mlx5/srq_cmd.c    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

Comments

Jason Gunthorpe Nov. 4, 2020, 6:58 p.m. UTC | #1
On Wed, Nov 04, 2020 at 10:32:13AM +0800, xiaofeng.yan wrote:
> From: "xiaofeng.yan" <yanxiaofeng7@jd.com>
> 
> The xa_store_irq() function now takes three parameters because the
> parameter "gfp_t gfp" has been removed.
> 
> Signed-off-by: xiaofeng.yan <yanxiaofeng7@jd.com>
>  drivers/infiniband/core/cm.c            | 2 +-
>  drivers/infiniband/hw/hns/hns_roce_qp.c | 2 +-
>  drivers/infiniband/hw/mlx5/srq_cmd.c    | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
> index 5740d1ba3568..afcb5711270b 100644
> +++ b/drivers/infiniband/core/cm.c
> @@ -879,7 +879,7 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
>  static void cm_finalize_id(struct cm_id_private *cm_id_priv)
>  {
>  	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> -		     cm_id_priv, GFP_KERNEL);
> +		     cm_id_priv);
>  }

This one is almost a bug, the entry is preallocated with NULL though:

	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
				  &cm.local_id_next, GFP_KERNEL);

so it should never allocate here:

static int cm_req_handler(struct cm_work *work)
{
	spin_lock_irq(&cm_id_priv->lock);
	cm_finalize_id(cm_id_priv);

Still, woops.

Matt, maybe a might_sleep is deserved in here someplace?

@@ -1534,6 +1534,8 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
        XA_STATE(xas, xa, index);
        void *curr;
 
+       might_sleep_if(gfpflags_allow_blocking(gfp));
+
        if (WARN_ON_ONCE(xa_is_advanced(entry)))
                return XA_ERROR(-EINVAL);
        if (xa_track_free(xa) && !entry)

And similar in the other places that conditionally call __xas_nomem()
?

I also still wish there was a proper 'xa store in already allocated
but null' idiom - I remember you thought about using gfp flags == 0 at
one point.

Jason
Matthew Wilcox Nov. 4, 2020, 7:30 p.m. UTC | #2
On Wed, Nov 04, 2020 at 02:58:43PM -0400, Jason Gunthorpe wrote:
> >  static void cm_finalize_id(struct cm_id_private *cm_id_priv)
> >  {
> >  	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> > -		     cm_id_priv, GFP_KERNEL);
> > +		     cm_id_priv);
> >  }
> 
> This one is almost a bug, the entry is preallocated with NULL though:
> 
> 	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
> 				  &cm.local_id_next, GFP_KERNEL);
> 
> so it should never allocate here:
> 
> static int cm_req_handler(struct cm_work *work)
> {
> 	spin_lock_irq(&cm_id_priv->lock);
> 	cm_finalize_id(cm_id_priv);

Uhm.  I think you want a different debugging check from this.  The actual
bug here is that you'll get back from calling cm_finalize_id() with
interrupts enabled.  Can you switch to xa_store(), or do we need an
xa_store_irqsave()?
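
For reference, a sketch of why the interrupt state gets lost (modelled
on the xa_store_irq() wrapper in include/linux/xarray.h; abbreviated,
and with the gfp parameter as in mainline before this series):

```c
/* xa_store_irq() is roughly this wrapper: */
static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
				 void *entry, gfp_t gfp)
{
	void *curr;

	xa_lock_irq(xa);	/* local_irq_disable() + take xa_lock */
	curr = __xa_store(xa, index, entry, gfp);
	xa_unlock_irq(xa);	/* drop xa_lock + local_irq_enable() */
	return curr;
}

/*
 * So in cm_req_handler():
 *
 *	spin_lock_irq(&cm_id_priv->lock);	// interrupts off
 *	cm_finalize_id(cm_id_priv);		// xa_unlock_irq() turns
 *						// interrupts back ON while
 *						// cm_id_priv->lock is held
 */
```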

> Still, woops.
> 
> Matt, maybe a might_sleep is deserved in here someplace?
> 
> @@ -1534,6 +1534,8 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
>         XA_STATE(xas, xa, index);
>         void *curr;
>  
> +       might_sleep_if(gfpflags_allow_blocking(gfp));
> +
>         if (WARN_ON_ONCE(xa_is_advanced(entry)))
>                 return XA_ERROR(-EINVAL);
>         if (xa_track_free(xa) && !entry)
> 
> And similar in the other places that conditionally call __xas_nomem()
> ?
> 
> I also still wish there was a proper 'xa store in already allocated
> but null' idiom - I remember you thought about using gfp flags == 0 at
> one point.

An xa_replace(), perhaps?
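
(A sketch of what such a hypothetical xa_replace() could look like; the
name, signature, and semantics here are assumptions for illustration,
not an existing xarray API:)

```c
/*
 * Hypothetical: store @entry at @index, which must already be occupied,
 * even if only by the NULL placeholder left by xa_alloc().  Because the
 * slot exists, no allocation is needed, so there is no gfp argument and
 * the call is safe under a spinlock and cannot fail with -ENOMEM.
 */
void *xa_replace(struct xarray *xa, unsigned long index, void *entry);
```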
Jason Gunthorpe Nov. 4, 2020, 9:34 p.m. UTC | #3
On Wed, Nov 04, 2020 at 07:30:36PM +0000, Matthew Wilcox wrote:
> On Wed, Nov 04, 2020 at 02:58:43PM -0400, Jason Gunthorpe wrote:
> > >  static void cm_finalize_id(struct cm_id_private *cm_id_priv)
> > >  {
> > >  	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
> > > -		     cm_id_priv, GFP_KERNEL);
> > > +		     cm_id_priv);
> > >  }
> > 
> > This one is almost a bug, the entry is preallocated with NULL though:
> > 
> > 	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
> > 				  &cm.local_id_next, GFP_KERNEL);
> > 
> > so it should never allocate here:
> > 
> > static int cm_req_handler(struct cm_work *work)
> > {
> > 	spin_lock_irq(&cm_id_priv->lock);
> > 	cm_finalize_id(cm_id_priv);
> 
> Uhm.  I think you want a different debugging check from this.  The actual
> bug here is that you'll get back from calling cm_finalize_id() with
> interrupts enabled. 

Ooh, that is just no fun too :\

I'm again surprised lockdep didn't catch the wrongly nested irq locks

> Can you switch to xa_store(), or do we need an
> xa_store_irqsave()?

Yes, it looks like there is no reason for this; all users of the
xarray are in sleeping contexts, so it shouldn't need the IRQ
version. I made a patch for this, thanks.

cm_id_priv->lock probably doesn't need to be an irq lock either,
but that is much harder to tell for sure

> > Still, woops.
> > 
> > Matt, maybe a might_sleep is deserved in here someplace?
> >
> > @@ -1534,6 +1534,8 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
> >         XA_STATE(xas, xa, index);
> >         void *curr;
> >  
> > +       might_sleep_if(gfpflags_allow_blocking(gfp));
> > +
> >         if (WARN_ON_ONCE(xa_is_advanced(entry)))
> >                 return XA_ERROR(-EINVAL);
> >         if (xa_track_free(xa) && !entry)
> > 
> > And similar in the other places that conditionally call __xas_nomem()
> > ?

But this debugging check would still catch the wrong nesting of a
GFP_KERNEL allocation inside a spinlock, you don't like it?

> > I also still wish there was a proper 'xa store in already allocated
> > but null' idiom - I remember you thought about using gfp flags == 0 at
> > one point.
> 
> An xa_replace(), perhaps?

Makes sense. But I've also done this with cmpxchg. A magic GFP flag,
as you tried to do with 0, is appealing in many ways
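
(For illustration, the cmpxchg version of that idiom with the existing
API; GFP_NOWAIT is an assumption about what a caller holding a spinlock
would pass:)

```c
/*
 * Fill a slot that xa_alloc_cyclic_irq() preallocated with NULL.  The
 * node already exists, so no allocation should actually happen here.
 */
void *old = xa_cmpxchg(&cm.local_id_table,
		       cm_local_id(cm_id_priv->id.local_id),
		       NULL, cm_id_priv, GFP_NOWAIT);
WARN_ON(old != NULL);	/* the NULL placeholder was expected */
```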

Jason

Patch

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 5740d1ba3568..afcb5711270b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -879,7 +879,7 @@  static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
 static void cm_finalize_id(struct cm_id_private *cm_id_priv)
 {
 	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
-		     cm_id_priv, GFP_KERNEL);
+		     cm_id_priv);
 }
 
 struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 6c081dd985fc..1876a51f9e08 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -237,7 +237,7 @@  static int hns_roce_qp_store(struct hns_roce_dev *hr_dev,
 	if (!hr_qp->qpn)
 		return -EINVAL;
 
-	ret = xa_err(xa_store_irq(xa, hr_qp->qpn, hr_qp, GFP_KERNEL));
+	ret = xa_err(xa_store_irq(xa, hr_qp->qpn, hr_qp));
 	if (ret)
 		dev_err(hr_dev->dev, "Failed to xa store for QPC\n");
 	else
diff --git a/drivers/infiniband/hw/mlx5/srq_cmd.c b/drivers/infiniband/hw/mlx5/srq_cmd.c
index db889ec3fd48..f277e264ceab 100644
--- a/drivers/infiniband/hw/mlx5/srq_cmd.c
+++ b/drivers/infiniband/hw/mlx5/srq_cmd.c
@@ -578,7 +578,7 @@  int mlx5_cmd_create_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
 	refcount_set(&srq->common.refcount, 1);
 	init_completion(&srq->common.free);
 
-	err = xa_err(xa_store_irq(&table->array, srq->srqn, srq, GFP_KERNEL));
+	err = xa_err(xa_store_irq(&table->array, srq->srqn, srq));
 	if (err)
 		goto err_destroy_srq_split;