IB/hfi1: Fix potential deadlock on &sde->flushlist_lock

Message ID 20230628045925.5261-1-dg573847474@gmail.com (mailing list archive)
State Rejected
Series IB/hfi1: Fix potential deadlock on &sde->flushlist_lock

Commit Message

Chengfeng Ye June 28, 2023, 4:59 a.m. UTC
As &sde->flushlist_lock can be acquired by the timer function
sdma_err_progress_check() through a chain of calls in softirq context,
process-context code acquiring the lock should disable irqs.

Possible deadlock scenario:
sdma_send_txreq()
    -> spin_lock(&sde->flushlist_lock)
        <timer interrupt>
        -> sdma_err_progress_check()
        -> __sdma_process_event()
        -> sdma_set_state()
        -> sdma_flush()
        -> spin_lock_irqsave(&sde->flushlist_lock, flags) (deadlock here)
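
As a minimal illustration of the pattern (a generic sketch, not the
hfi1 code): the process-context path takes the lock with irqs enabled,
so a timer firing on the same CPU can re-enter and spin on the held
lock.

static DEFINE_SPINLOCK(demo_lock);

/* Timer callback: runs in softirq context. */
static void demo_timer_fn(struct timer_list *t)
{
	unsigned long flags;

	/* Spins forever if this CPU already holds demo_lock below. */
	spin_lock_irqsave(&demo_lock, flags);
	spin_unlock_irqrestore(&demo_lock, flags);
}

/* Process-context path. */
static void demo_process_path(void)
{
	spin_lock(&demo_lock);	/* irqs (and softirqs) still enabled */
	/* <timer interrupt -> demo_timer_fn() -> deadlock> */
	spin_unlock(&demo_lock);
}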

This flaw was found using an experimental static analysis tool we are
developing for detecting irq-related deadlocks.

This tentative patch fixes the potential deadlock with spin_lock_irqsave().

Signed-off-by: Chengfeng Ye <dg573847474@gmail.com>
---
 drivers/infiniband/hw/hfi1/sdma.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Leon Romanovsky July 4, 2023, 11:48 a.m. UTC | #1
On Wed, Jun 28, 2023 at 04:59:25AM +0000, Chengfeng Ye wrote:
> As &sde->flushlist_lock can be acquired by the timer function
> sdma_err_progress_check() through a chain of calls in softirq context,
> process-context code acquiring the lock should disable irqs.
> 
> Possible deadlock scenario:
> sdma_send_txreq()
>     -> spin_lock(&sde->flushlist_lock)
>         <timer interrupt>
>         -> sdma_err_progress_check()
>         -> __sdma_process_event()
>         -> sdma_set_state()
>         -> sdma_flush()
>         -> spin_lock_irqsave(&sde->flushlist_lock, flags) (deadlock here)
> 
> This flaw was found using an experimental static analysis tool we are
> developing for detecting irq-related deadlocks.
> 
> This tentative patch fixes the potential deadlock with spin_lock_irqsave().
> 
> Signed-off-by: Chengfeng Ye <dg573847474@gmail.com>
> ---
>  drivers/infiniband/hw/hfi1/sdma.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
> index bb2552dd29c1..0431f575c861 100644
> --- a/drivers/infiniband/hw/hfi1/sdma.c
> +++ b/drivers/infiniband/hw/hfi1/sdma.c
> @@ -2371,9 +2371,9 @@ int sdma_send_txreq(struct sdma_engine *sde,
>  	tx->sn = sde->tail_sn++;
>  	trace_hfi1_sdma_in_sn(sde, tx->sn);
>  #endif
> -	spin_lock(&sde->flushlist_lock);
> +	spin_lock_irqsave(&sde->flushlist_lock, flags);
>  	list_add_tail(&tx->list, &sde->flushlist);
> -	spin_unlock(&sde->flushlist_lock);
> +	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
>  	iowait_inc_wait_count(wait, tx->num_desc);
>  	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
>  	ret = -ECOMM;

This can't work: right after the "ret = -ECOMM;" line there is a "goto unlock",
and there hfi1 calls spin_unlock_irqrestore(..) with the same "flags".
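
To spell out the issue (a simplified sketch of the control flow, not
the exact hfi1 code): reusing "flags" for the nested irqsave records
the already-disabled irq state, so the outer unlock restores the wrong
state and leaves interrupts off:

	unsigned long flags;

	spin_lock_irqsave(&sde->tail_lock, flags);	/* flags: irqs were on */
	...
	spin_lock_irqsave(&sde->flushlist_lock, flags);	/* clobbers flags: irqs now off */
	list_add_tail(&tx->list, &sde->flushlist);
	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
	ret = -ECOMM;
	goto unlock;
	...
unlock:
	spin_unlock_irqrestore(&sde->tail_lock, flags);	/* "restores" irqs-off state */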

Plus, we are already in a context where interrupts are disabled.

Thanks

> @@ -2459,7 +2459,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
>  	*count_out = total_count;
>  	return ret;
>  unlock_noconn:
> -	spin_lock(&sde->flushlist_lock);
> +	spin_lock_irqsave(&sde->flushlist_lock, flags);
>  	list_for_each_entry_safe(tx, tx_next, tx_list, list) {
>  		tx->wait = iowait_ioww_to_iow(wait);
>  		list_del_init(&tx->list);
> @@ -2472,7 +2472,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
>  		flush_count++;
>  		iowait_inc_wait_count(wait, tx->num_desc);
>  	}
> -	spin_unlock(&sde->flushlist_lock);
> +	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
>  	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
>  	ret = -ECOMM;
>  	goto update_tail;
> -- 
> 2.17.1
>
Chengfeng Ye July 4, 2023, 5:42 p.m. UTC | #2
> Plus, we are already in a context where interrupts are disabled.

Indeed, these functions can be called from the .ndo_start_xmit callback,
and the documentation says it runs with bh disabled.

But I found some call chains from user process context that seem to run
with irqs enabled. For sdma_send_txlist(), there is this call chain:

-> hfi1_write_iter()  (.write_iter callback)
-> hfi1_user_sdma_process_request()
-> user_sdma_send_pkts()
-> sdma_send_txlist()

The .write_iter callback does not seem to run with irqs disabled by
default, as described in
https://www.kernel.org/doc/Documentation/filesystems/vfs.txt.
I also didn't find any explicit disabling of bh or irqs along the call
path, and I see several copy_from_user() calls, which cannot be invoked
in irq context.
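
As a sketch of why copy_from_user() implies a sleepable, irqs-enabled
context (demo_write() is a hypothetical handler, not hfi1 code):

static ssize_t demo_write(struct file *f, const char __user *buf,
			  size_t len, loff_t *off)
{
	char kbuf[64];

	might_sleep();	/* copy_from_user() may fault in user pages */
	if (copy_from_user(kbuf, buf, min(len, sizeof(kbuf))))
		return -EFAULT;
	return len;
}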


For sdma_send_txreq(), there is this call chain:

-> qp_priv_alloc()
-> iowait_init() (registers _hfi1_do_tid_send() as a work item)
-> _hfi1_do_tid_send() (workqueue)
-> hfi1_do_tid_send()
-> hfi1_verbs_send()
-> sr(qp, ps, 0) (sr can point to hfi1_verbs_send_dma())
-> hfi1_verbs_send_dma()
-> sdma_send_txreq()

_hfi1_do_tid_send() is a workqueue handler, which does not run with
irqs disabled by default. I also checked the remaining call path and
found no explicit irq disabling; on the contrary, the call site of
hfi1_verbs_send() comes right after a spin_unlock_irqrestore(), which
seems like a hint that it is probably called with irqs enabled.

Another hint is that the lock acquisition
spin_lock_irqsave(&sde->tail_lock, flags);
just before my patch in the same function also disables irqs, which
seems like another hint that this function can be called with
interrupts enabled; otherwise the lock/unlock of sde->tail_lock would
not need to disable irqs.
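
One way to check this empirically would be debug instrumentation along
these lines (a hypothetical sketch, not part of the driver):

int sdma_send_txreq(struct sdma_engine *sde, ...)
{
	/* Fires if we are ever entered from hard-irq or softirq context. */
	WARN_ON_ONCE(in_interrupt());
	/* Fires if a caller has already disabled interrupts. */
	WARN_ON_ONCE(irqs_disabled());
	...
}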

I would appreciate it if you could check this further.


> This can't work: right after the "ret = -ECOMM;" line there is a "goto unlock",
> and there hfi1 calls spin_unlock_irqrestore(..) with the same "flags".

Yeah, that was my oversight, sorry about that. Once you confirm that a
fix is needed, I will send a v2 patch with the correct fix.

Best Regards,
Chengfeng


Leon Romanovsky July 5, 2023, 5:52 a.m. UTC | #3
On Wed, Jul 05, 2023 at 01:42:31AM +0800, Chengfeng Ye wrote:
> > Plus, we are already in a context where interrupts are disabled.
> 
> Indeed, these functions can be called from the .ndo_start_xmit callback,
> and the documentation says it runs with bh disabled.
> 
> But I found some call chains from user process context that seem to run
> with irqs enabled. For sdma_send_txlist(), there is this call chain:
> 
> -> hfi1_write_iter()  (.write_iter callback)
> -> hfi1_user_sdma_process_request()
> -> user_sdma_send_pkts()
> -> sdma_send_txlist()
> 
> The .write_iter callback does not seem to run with irqs disabled by
> default, as described in
> https://www.kernel.org/doc/Documentation/filesystems/vfs.txt.
> I also didn't find any explicit disabling of bh or irqs along the call
> path, and I see several copy_from_user() calls, which cannot be invoked
> in irq context.
> 
> 
> For sdma_send_txreq(), there is this call chain:
> 
> -> qp_priv_alloc()
> -> iowait_init() (registers _hfi1_do_tid_send() as a work item)
> -> _hfi1_do_tid_send() (workqueue)
> -> hfi1_do_tid_send()
> -> hfi1_verbs_send()
> -> sr(qp, ps, 0) (sr can point to hfi1_verbs_send_dma())
> -> hfi1_verbs_send_dma()
> -> sdma_send_txreq()
> 
> _hfi1_do_tid_send() is a workqueue handler, which does not run with
> irqs disabled by default. I also checked the remaining call path and
> found no explicit irq disabling; on the contrary, the call site of
> hfi1_verbs_send() comes right after a spin_unlock_irqrestore(), which
> seems like a hint that it is probably called with irqs enabled.

Right, that path is called in process context and can sleep; there is
no need for the irq-disabled variant there.

> 
> Another hint is that the lock acquisition
> spin_lock_irqsave(&sde->tail_lock, flags);
> just before my patch in the same function also disables irqs, which
> seems like another hint that this function can be called with
> interrupts enabled,

Exactly, we have already called spin_lock_irqsave(); there is no value in
doing it twice.
void f() {
	spin_lock_irqsave(...)
	spin_lock_irqsave(...)
	....
	spin_unlock_irqrestore(...)
	spin_unlock_irqrestore(...)
}

is exactly the same as
void f() {
	spin_lock_irqsave(...)
	spin_lock(...)
	....
	spin_unlock(...)
	spin_unlock_irqrestore(...)
}

Thanks
Chengfeng Ye July 5, 2023, 6:47 a.m. UTC | #4
> Exactly, we have already called spin_lock_irqsave(); there is no value in
> doing it twice.

Oh yeah, I just noticed that the acquisition of &sde->flushlist_lock is
always nested inside &sde->tail_lock because of the goto. So it is true
that there is no need for the irqsave variant of lock/unlock on
&sde->flushlist_lock.
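
A simplified sketch of that control flow (not the exact hfi1 code),
showing why the flushlist_lock sections already run with irqs off:

int sdma_send_txlist(struct sdma_engine *sde, ...)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&sde->tail_lock, flags);	/* irqs off from here */
	if (unlikely(!__sdma_running(sde)))
		goto unlock_noconn;
	...
update_tail:
	spin_unlock_irqrestore(&sde->tail_lock, flags);	/* irqs back on */
	return ret;
unlock_noconn:
	spin_lock(&sde->flushlist_lock);	/* nested: irqs already off */
	...
	spin_unlock(&sde->flushlist_lock);
	ret = -ECOMM;
	goto update_tail;
}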

Thanks very much for your reply and your time.

Best Regards,
Chengfeng
Dennis Dalessandro July 5, 2023, 2:08 p.m. UTC | #5
On 7/5/23 2:47 AM, Chengfeng Ye wrote:
>> Exactly, we have already called spin_lock_irqsave(); there is no value in
>> doing it twice.
> 
> Oh yeah, I just noticed that the acquisition of &sde->flushlist_lock is
> always nested inside &sde->tail_lock because of the goto. So it is true
> that there is no need for the irqsave variant of lock/unlock on
> &sde->flushlist_lock.
> 
> Thanks much for your reply and your time.

Agree. Thanks Leon for looking at this. I was out of the office and have
only just now seen it.

-Denny

Patch

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index bb2552dd29c1..0431f575c861 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -2371,9 +2371,9 @@ int sdma_send_txreq(struct sdma_engine *sde,
 	tx->sn = sde->tail_sn++;
 	trace_hfi1_sdma_in_sn(sde, tx->sn);
 #endif
-	spin_lock(&sde->flushlist_lock);
+	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	list_add_tail(&tx->list, &sde->flushlist);
-	spin_unlock(&sde->flushlist_lock);
+	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	iowait_inc_wait_count(wait, tx->num_desc);
 	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
@@ -2459,7 +2459,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
 	*count_out = total_count;
 	return ret;
 unlock_noconn:
-	spin_lock(&sde->flushlist_lock);
+	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	list_for_each_entry_safe(tx, tx_next, tx_list, list) {
 		tx->wait = iowait_ioww_to_iow(wait);
 		list_del_init(&tx->list);
@@ -2472,7 +2472,7 @@  int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
 		flush_count++;
 		iowait_inc_wait_count(wait, tx->num_desc);
 	}
-	spin_unlock(&sde->flushlist_lock);
+	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto update_tail;