
[net,v2] net: core: make napi_disable more robust

Message ID: 20210414080845.11426-1-lijunp213@gmail.com (mailing list archive)
State: Changes Requested
Delegated to: Netdev Maintainers
Series: [net,v2] net: core: make napi_disable more robust

Checks

Context Check Description
netdev/cover_letter success
netdev/fixes_present success
netdev/patch_count success
netdev/tree_selection success Clearly marked for net
netdev/subject_prefix success
netdev/cc_maintainers fail 2 blamed authors not CCed: shemminger@linux-foundation.org davem@davemloft.net; 11 maintainers not CCed: daniel@iogearbox.net andriin@fb.com cong.wang@bytedance.com ast@kernel.org ap420073@gmail.com kuba@kernel.org edumazet@google.com shemminger@linux-foundation.org bjorn@kernel.org davem@davemloft.net weiwan@google.com
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 10 this patch: 10
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 24 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 10 this patch: 10
netdev/header_inline success

Commit Message

Lijun Pan April 14, 2021, 8:08 a.m. UTC
There is a chance that napi_disable() can be called twice by a NIC
driver. This can deadlock. For example, the first napi_disable() will
spin until NAPI_STATE_SCHED is cleared by napi_complete_done(), then
set it again. When napi_disable() is called the second time, it will
loop infinitely because no dev->poll will be running to clear
NAPI_STATE_SCHED.

Though it is the driver writer's responsibility to make sure it is
called only once, making napi_disable() more robust does not hurt,
not to mention it can prevent a buggy driver from crashing the
system. So, check the napi state bits to make sure that if napi is
already disabled, the call exits early enough to avoid spinning
infinitely.
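
A minimal sketch of the buggy pattern (hypothetical driver code; the
names are illustrative, not from any in-tree driver):

	/* Teardown path that ends up calling napi_disable() twice.
	 * Nothing clears NAPI_STATE_SCHED between the two calls, so
	 * the second call spins in test_and_set_bit() forever.
	 */
	static void my_drv_down(struct my_adapter *adapter)
	{
		napi_disable(&adapter->napi);	/* first call returns */
		my_drv_free_irq(adapter);
		napi_disable(&adapter->napi);	/* never returns */
	}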

Fixes: bea3348eef27 ("[NET]: Make NAPI polling independent of struct net_device objects.")
Signed-off-by: Lijun Pan <lijunp213@gmail.com>
---
v2: justify that this patch makes napi_disable more robust.

 net/core/dev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

Comments

Yunsheng Lin April 14, 2021, 8:45 a.m. UTC | #1
On 2021/4/14 16:08, Lijun Pan wrote:
> There is a chance that napi_disable() can be called twice by a NIC
> driver. This can deadlock. For example, the first napi_disable() will
> spin until NAPI_STATE_SCHED is cleared by napi_complete_done(), then
> set it again. When napi_disable() is called the second time, it will
> loop infinitely because no dev->poll will be running to clear
> NAPI_STATE_SCHED.
> 
> Though it is the driver writer's responsibility to make sure it is
> called only once, making napi_disable() more robust does not hurt,
> not to mention it can prevent a buggy driver from crashing the
> system. So, check the napi state bits to make sure that if napi is
> already disabled, the call exits early enough to avoid spinning
> infinitely.
> 
> Fixes: bea3348eef27 ("[NET]: Make NAPI polling independent of struct net_device objects.")
> Signed-off-by: Lijun Pan <lijunp213@gmail.com>
> ---
> v2: justify that this patch makes napi_disable more robust.
> 
>  net/core/dev.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 1f79b9aa9a3f..fa0aa212b7bb 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -6830,6 +6830,24 @@ EXPORT_SYMBOL(netif_napi_add);
>  void napi_disable(struct napi_struct *n)
>  {
>  	might_sleep();
> +
> +	/* Make sure napi_disable() runs only once.
> +	 * When napi is disabled, the state bits are:
> +	 * NAPI_STATE_SCHED (set by previous napi_disable)
> +	 * NAPI_STATE_NPSVC (set by previous napi_disable)
> +	 * NAPI_STATE_DISABLE (cleared by previous napi_disable)
> +	 * NAPI_STATE_PREFER_BUSY_POLL (cleared by previous napi_complete_done)
> +	 * NAPI_STATE_MISSED (cleared by previous napi_complete_done)
> +	 */
> +
> +	if (napi_disable_pending(n))
> +		return;
> +	if (test_bit(NAPI_STATE_SCHED, &n->state) &&
> +	    test_bit(NAPI_STATE_NPSVC, &n->state) &&
> +	    !test_bit(NAPI_STATE_MISSED, &n->state) &&
> +	    !test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state))
> +		return;

NAPI_STATE_DISABLE is cleared at the end of napi_disable(), and if a
buggy driver/hw triggers an interrupt and the driver calls
napi_schedule_irqoff(), napi_schedule_prep() may set NAPI_STATE_MISSED
while NAPI_STATE_SCHED is set. The above check does not seem to handle
that case?
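
Roughly, the relevant logic is as below (a simplified sketch of
napi_schedule_prep(); the mainline code folds the MISSED update into
a lock-free cmpxchg loop):

	bool napi_schedule_prep(struct napi_struct *n)
	{
		unsigned long val, new;

		do {
			val = READ_ONCE(n->state);
			/* A disable in progress wins: refuse to schedule. */
			if (unlikely(val & NAPIF_STATE_DISABLE))
				return false;
			new = val | NAPIF_STATE_SCHED;
			/* Already scheduled: record the miss so the current
			 * poller reschedules itself on completion.
			 */
			if (val & NAPIF_STATE_SCHED)
				new |= NAPIF_STATE_MISSED;
		} while (cmpxchg(&n->state, val, new) != val);

		return !(val & NAPIF_STATE_SCHED);
	}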

> +
>  	set_bit(NAPI_STATE_DISABLE, &n->state);
>  
>  	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
>
Lijun Pan April 14, 2021, 5:31 p.m. UTC | #2
On Wed, Apr 14, 2021 at 3:45 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2021/4/14 16:08, Lijun Pan wrote:
> > There is a chance that napi_disable() can be called twice by a NIC
> > driver. This can deadlock. For example, the first napi_disable() will
> > spin until NAPI_STATE_SCHED is cleared by napi_complete_done(), then
> > set it again. When napi_disable() is called the second time, it will
> > loop infinitely because no dev->poll will be running to clear
> > NAPI_STATE_SCHED.
> >
> > Though it is the driver writer's responsibility to make sure it is
> > called only once, making napi_disable() more robust does not hurt,
> > not to mention it can prevent a buggy driver from crashing the
> > system. So, check the napi state bits to make sure that if napi is
> > already disabled, the call exits early enough to avoid spinning
> > infinitely.
> >
> > Fixes: bea3348eef27 ("[NET]: Make NAPI polling independent of struct net_device objects.")
> > Signed-off-by: Lijun Pan <lijunp213@gmail.com>
> > ---
> > v2: justify that this patch makes napi_disable more robust.
> >
> >  net/core/dev.c | 18 ++++++++++++++++++
> >  1 file changed, 18 insertions(+)
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 1f79b9aa9a3f..fa0aa212b7bb 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -6830,6 +6830,24 @@ EXPORT_SYMBOL(netif_napi_add);
> >  void napi_disable(struct napi_struct *n)
> >  {
> >       might_sleep();
> > +
> > +     /* Make sure napi_disable() runs only once.
> > +      * When napi is disabled, the state bits are:
> > +      * NAPI_STATE_SCHED (set by previous napi_disable)
> > +      * NAPI_STATE_NPSVC (set by previous napi_disable)
> > +      * NAPI_STATE_DISABLE (cleared by previous napi_disable)
> > +      * NAPI_STATE_PREFER_BUSY_POLL (cleared by previous napi_complete_done)
> > +      * NAPI_STATE_MISSED (cleared by previous napi_complete_done)
> > +      */
> > +
> > +     if (napi_disable_pending(n))
> > +             return;
> > +     if (test_bit(NAPI_STATE_SCHED, &n->state) &&
> > +         test_bit(NAPI_STATE_NPSVC, &n->state) &&
> > +         !test_bit(NAPI_STATE_MISSED, &n->state) &&
> > +         !test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state))
> > +             return;
>
> The NAPI_STATE_DISABLE is cleared at the end of napi_disable(),
> and if a buggy driver/hw triggers a interrupt and driver calls
> napi_schedule_irqoff(), which may set NAPI_STATE_MISSED
> if NAPI_STATE_SCHED is set(in napi_schedule_prep()), the above
> checking does not seem to handle it?

What I described in the commit message is napi_disable() being called
twice from the same instance, on the same CPU, e.g.:
funcA {
    napi_disable();          /* first call: napi is now disabled */
    ...
    funcB {
        if (blah)
            napi_disable();  /* second call: spins forever */
        ...
    }
    funcC;
}

The scenario you mentioned above seems to have napi already enabled
and scheduled, such that napi_schedule_prep() would set
NAPI_STATE_MISSED. The two scenarios are different per my
understanding. Is there a way for your scenario to eventually lead
into mine? Let me know if I have understood you correctly.

Maybe testing the NAPI_STATE_MISSED bit is not needed, because this
bit is not all that reliable.

Lijun
Jakub Kicinski April 14, 2021, 11:21 p.m. UTC | #3
On Wed, 14 Apr 2021 03:08:45 -0500 Lijun Pan wrote:
> There is a chance that napi_disable() can be called twice by a NIC
> driver. This can deadlock. For example, the first napi_disable() will
> spin until NAPI_STATE_SCHED is cleared by napi_complete_done(), then
> set it again. When napi_disable() is called the second time, it will
> loop infinitely because no dev->poll will be running to clear
> NAPI_STATE_SCHED.
> 
> Though it is the driver writer's responsibility to make sure it is
> called only once, making napi_disable() more robust does not hurt,
> not to mention it can prevent a buggy driver from crashing the
> system. So, check the napi state bits to make sure that if napi is
> already disabled, the call exits early enough to avoid spinning
> infinitely.

You've already been told by Eric & Dave to fix the driver instead.

Your check is _not_ correct - SCHED && NPSVC && !MISSED && !BUSY_POLL 
can well arise without disabling the NAPI.

But regardless, a driver bug should be relatively easy to identify with
the task getting stuck in napi_disable(). We don't provide "protection"
for taking spin locks or ref counts twice either. Unless you can show
a strong use case, please stop posting new versions of this patch.
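
For illustration, one such path is netpoll servicing a NAPI that is
currently scheduled: SCHED and NPSVC are then both set with no
disable in sight (simplified sketch of netpoll's poll_one_napi()):

	static void poll_one_napi(struct napi_struct *napi)
	{
		/* NPSVC marks the NAPI as being serviced by netpoll;
		 * if it is already set, a disable may be in progress.
		 */
		if (test_and_set_bit(NAPI_STATE_NPSVC, &napi->state))
			return;

		/* Zero budget: clean TX completions, process no RX. */
		napi->poll(napi, 0);

		clear_bit(NAPI_STATE_NPSVC, &napi->state);
	}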
Eric Dumazet April 15, 2021, 6:46 a.m. UTC | #4
On 4/15/21 1:21 AM, Jakub Kicinski wrote:
> On Wed, 14 Apr 2021 03:08:45 -0500 Lijun Pan wrote:
>> There is a chance that napi_disable() can be called twice by a NIC
>> driver. This can deadlock. For example, the first napi_disable() will
>> spin until NAPI_STATE_SCHED is cleared by napi_complete_done(), then
>> set it again. When napi_disable() is called the second time, it will
>> loop infinitely because no dev->poll will be running to clear
>> NAPI_STATE_SCHED.
>>
>> Though it is the driver writer's responsibility to make sure it is
>> called only once, making napi_disable() more robust does not hurt,
>> not to mention it can prevent a buggy driver from crashing the
>> system. So, check the napi state bits to make sure that if napi is
>> already disabled, the call exits early enough to avoid spinning
>> infinitely.
> 
> You've already been told by Eric & Dave to fix the driver instead.
> 
> Your check is _not_ correct - SCHED && NPSVC && !MISSED && !BUSY_POLL 
> can well arise without disabling the NAPI.
> 
> But regardless, a driver bug should be relatively easy to identify with
> the task getting stuck in napi_disable(). We don't provide "protection"
> for taking spin locks or ref counts twice either. Unless you can show
> a strong use case, please stop posting new versions of this patch.
> 

+222

I notice this v2 does not even mention which driver has the issue.

I suspect an out-of-tree driver.
Eric Dumazet April 15, 2021, 6:47 a.m. UTC | #5
On 4/14/21 10:08 AM, Lijun Pan wrote:
> There is a chance that napi_disable() can be called twice by a NIC
> driver. This can deadlock. For example, the first napi_disable() will
> spin until NAPI_STATE_SCHED is cleared by napi_complete_done(), then
> set it again. When napi_disable() is called the second time, it will
> loop infinitely because no dev->poll will be running to clear
> NAPI_STATE_SCHED.
> 
> Though it is the driver writer's responsibility to make sure it is
> called only once, making napi_disable() more robust does not hurt,
> not to mention it can prevent a buggy driver from crashing the
> system.

This is hard to digest. A buggy driver has plenty of ways to crash the system.

If you need help to fix the buggy driver, please ask for help.
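
For reference, the usual driver-side fix is to guard the call so it
runs at most once (illustrative pattern; the flag and names here are
hypothetical):

	static void my_drv_napi_disable(struct my_adapter *adapter)
	{
		/* Only the path that clears the enabled bit proceeds,
		 * so napi_disable() cannot be entered twice.
		 */
		if (!test_and_clear_bit(MY_NAPI_ENABLED, &adapter->flags))
			return;
		napi_disable(&adapter->napi);
	}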

Patch

diff --git a/net/core/dev.c b/net/core/dev.c
index 1f79b9aa9a3f..fa0aa212b7bb 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6830,6 +6830,24 @@  EXPORT_SYMBOL(netif_napi_add);
 void napi_disable(struct napi_struct *n)
 {
 	might_sleep();
+
+	/* Make sure napi_disable() runs only once.
+	 * When napi is disabled, the state bits are:
+	 * NAPI_STATE_SCHED (set by previous napi_disable)
+	 * NAPI_STATE_NPSVC (set by previous napi_disable)
+	 * NAPI_STATE_DISABLE (cleared by previous napi_disable)
+	 * NAPI_STATE_PREFER_BUSY_POLL (cleared by previous napi_complete_done)
+	 * NAPI_STATE_MISSED (cleared by previous napi_complete_done)
+	 */
+
+	if (napi_disable_pending(n))
+		return;
+	if (test_bit(NAPI_STATE_SCHED, &n->state) &&
+	    test_bit(NAPI_STATE_NPSVC, &n->state) &&
+	    !test_bit(NAPI_STATE_MISSED, &n->state) &&
+	    !test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state))
+		return;
+
 	set_bit(NAPI_STATE_DISABLE, &n->state);
 
 	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))