
qed: avoid spin loops in _qed_mcp_cmd_and_union()

Message ID 20211027214519.606096-1-csander@purestorage.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series qed: avoid spin loops in _qed_mcp_cmd_and_union()

Checks

Context Check Description
netdev/cover_letter success Single patches do not need cover letters
netdev/fixes_present success Fixes tag not required for -next series
netdev/patch_count success Link
netdev/tree_selection success Guessed tree name to be net-next
netdev/subject_prefix warning Target tree name not specified in the subject
netdev/cc_maintainers warning 2 maintainers not CCed: davem@davemloft.net kuba@kernel.org
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 1 this patch: 1
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success No Fixes tag
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 28 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 1 this patch: 1
netdev/header_inline success No static functions without inline keyword in header files

Commit Message

Caleb Sander Mateos Oct. 27, 2021, 9:45 p.m. UTC
By default, qed_mcp_cmd_and_union() sets max_retries to 500K and
usecs to 10, so these loops can together delay up to 5s.
We observed thread scheduling delays of over 700ms in production,
with stacktraces pointing to this code as the culprit.

Add calls to cond_resched() in both loops to yield the CPU if necessary.

Signed-off-by: Caleb Sander <csander@purestorage.com>
Reviewed-by: Joern Engel <joern@purestorage.com>
---
 drivers/net/ethernet/qlogic/qed/qed_mcp.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Eric Dumazet Oct. 27, 2021, 10:25 p.m. UTC | #1
On 10/27/21 2:45 PM, Caleb Sander wrote:
> By default, qed_mcp_cmd_and_union() sets max_retries to 500K and
> usecs to 10, so these loops can together delay up to 5s.
> We observed thread scheduling delays of over 700ms in production,
> with stacktraces pointing to this code as the culprit.
> 
> Add calls to cond_resched() in both loops to yield the CPU if necessary.
> 
> Signed-off-by: Caleb Sander <csander@purestorage.com>
> Reviewed-by: Joern Engel <joern@purestorage.com>
> ---
>  drivers/net/ethernet/qlogic/qed/qed_mcp.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
> index 24cd41567..d6944f020 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
> @@ -485,10 +485,12 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
>  
>  		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
>  
> -		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
> +		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {

I do not know this driver, but apparently, there is this CAN_SLEEP test
hinting about being able to sleep.

>  			msleep(msecs);
> -		else
> +		} else {
> +			cond_resched();

Here you might sleep/schedule, while CAN_SLEEP was not set ?

>  			udelay(usecs);


I would suggest using usleep_range() instead, because cond_resched()
can be a NOP under some circumstances.

> +		}
>  	} while (++cnt < max_retries);

Then perhaps not count against max_retries, but based on total elapsed time ?

>  
>  	if (cnt >= max_retries) {
> @@ -517,10 +519,12 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
>  		 * The spinlock stays locked until the list element is removed.
>  		 */
>  
> -		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
> +		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
>  			msleep(msecs);
> -		else
> +		} else {
> +			cond_resched();
>  			udelay(usecs);
> +		}
>  
>  		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
>  
>
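
For reference, a minimal sketch of what these two suggestions could look like
together; the deadline constant and the surrounding structure are illustrative
assumptions, not code from the driver:

/* Illustrative sketch only (not the actual driver code): wait on a
 * deadline instead of a retry counter, and back off with
 * usleep_range() rather than cond_resched() + udelay(), since
 * usleep_range() always yields the CPU while cond_resched() can be a
 * no-op. QED_MCP_TIMEOUT_MS is an assumed constant. Note that
 * usleep_range() sleeps, so this is only valid if the path is known
 * not to run in atomic context.
 */
#define QED_MCP_TIMEOUT_MS	5000

	unsigned long deadline = jiffies + msecs_to_jiffies(QED_MCP_TIMEOUT_MS);

	do {
		/* ... try to claim the MCP mailbox, break out on success ... */

		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);

		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
			msleep(msecs);
		else
			usleep_range(usecs, usecs * 2);
	} while (time_before(jiffies, deadline));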
Caleb Sander Mateos Oct. 28, 2021, 12:20 a.m. UTC | #2
> Here you might sleep/schedule, while CAN_SLEEP was not set ?

I also do not know this driver, just trying to fix an observed latency issue.
As far as I can tell, the CAN_SLEEP flag is set/unset depending on
which function called qed_mcp_cmd_and_union();
it does not indicate whether the function is running in atomic context.
For example, qed_mcp_cmd() calls it without CAN_SLEEP,
yet qed_mcp_drain() calls msleep() immediately after qed_mcp_cmd().
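
For context, the pattern referred to here looks roughly like this (paraphrased
from the driver; treat the message code and sleep duration as approximate):

/* Paraphrased for illustration; details may not match the driver exactly. */
int qed_mcp_drain(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
{
	u32 resp = 0, param = 0;
	int rc;

	/* qed_mcp_cmd() issues the command without CAN_SLEEP ... */
	rc = qed_mcp_cmd(p_hwfn, p_ptt,
			 DRV_MSG_CODE_NIG_DRAIN, 1000, &resp, &param);

	/* ... yet the very next statement sleeps. */
	msleep(1020);

	return rc;
}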

We were concerned that this function might be called in atomic context,
so we added a WARN_ON_ONCE(in_atomic()). We never saw the warning fire
during two weeks of testing, so we believe sleeping is possible here.
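
The check described would be something along these lines (exact placement
inside _qed_mcp_cmd_and_union() is assumed):

	/* Debugging aid as described above (sketch, placement assumed):
	 * fires at most once if this function is ever entered from atomic
	 * context. in_atomic() is not a reliable indicator on
	 * non-preemptible kernels, so a silent run is strong evidence,
	 * not proof, that sleeping here is safe.
	 */
	WARN_ON_ONCE(in_atomic());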

> I would suggest using usleep_range() instead, because cond_resched()
> can be a NOP under some circumstances.
> Then perhaps not count against max_retries, but based on total elapsed time ?

I agree these would both be improvements to the current code.
I was trying to provide a minimal change that would allow these loops
to yield the CPU,
but will happily do this refactoring if the driver authors think it
would be beneficial.

Ariel Elior Oct. 28, 2021, 5:47 a.m. UTC | #3
> > > -             if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
> > > +             if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
> >
> > I do not know this driver, but apparently, there is this CAN_SLEEP
> > test hinting about being able to sleep.
Hi,
Indeed this function sends messages to the management FW, and may
be invoked both from atomic contexts and from non-atomic ones.
CAN_SLEEP indicates whether it is permissible to sleep in the context
from which the function was invoked.
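
As an illustration, a caller in known process context would request sleeping
waits roughly like this; the flag and field names are inferred from the
QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP) usage in the diff and may not
match the driver exactly:

	/* Sketch of a sleep-capable caller; names inferred, not verbatim. */
	struct qed_mcp_mb_params mb_params;
	int rc;

	memset(&mb_params, 0, sizeof(mb_params));
	mb_params.cmd = DRV_MSG_CODE_NIG_DRAIN;
	mb_params.flags = QED_MB_FLAG_CAN_SLEEP;

	rc = qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);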
Jörn Engel Oct. 28, 2021, 2:59 p.m. UTC | #4
On Thu, Oct 28, 2021 at 05:47:10AM +0000, Ariel Elior wrote:
>
> Indeed this function sends messages to the management FW, and may
> be invoked both from atomic contexts and from non atomic ones.
> CAN_SLEEP indicated whether it is permissible in the context from which
> it was invoked to sleep.

That is a rather unfortunate pattern.  I understand the desire for code
reuse, but the result is often udelay loops that can take seconds.  With
unresponsive firmware you tend to always hit the timeouts and incur the
maximum latency.

Since the scheduler is blocked on the local CPU for the duration of the
spin loop and won't even bother migrating high-priority threads away
(the assumption being that the current thread will not loop for long),
the result can be pretty bad for latency-sensitive code.  Essentially,
you cannot guarantee any latency below the timeout of those loops.

Having a flag or some other means to switch between sleeping and
spinning would help to reduce the odds.  Avoiding calls from atomic
contexts would help even more.  Ideally I would like to remove all
such calls.  The only legitimate exceptions should be paths handling
high-volume packet RX/TX, and those never involve long-running loops.
Anything else can be handled from a kworker or similar.  If a 1s loop is
acceptable, waiting a few ms for the scheduler must also be acceptable.

Jörn

--
If a problem has a hardware solution, and a software solution,
do it in software.
-- Arnd Bergmann
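
A rough sketch of the kworker-style deferral suggested above, with entirely
hypothetical names (none of these helpers exist in the driver):

/* Hypothetical sketch only: queue the management-FW command to process
 * context instead of spinning in the (possibly atomic) caller.
 * Requires <linux/workqueue.h> and <linux/slab.h>.
 */
struct qed_mcp_deferred_cmd {
	struct work_struct work;
	struct qed_hwfn *p_hwfn;
	struct qed_mcp_mb_params params;
};

static void qed_mcp_deferred_cmd_fn(struct work_struct *work)
{
	struct qed_mcp_deferred_cmd *c =
		container_of(work, struct qed_mcp_deferred_cmd, work);

	/* Runs in process context, so sleeping waits are fine here. */
	/* ... acquire a PTT and issue the command with CAN_SLEEP set ... */

	kfree(c);
}

static int qed_mcp_cmd_deferred(struct qed_hwfn *p_hwfn,
				const struct qed_mcp_mb_params *params)
{
	struct qed_mcp_deferred_cmd *c = kzalloc(sizeof(*c), GFP_ATOMIC);

	if (!c)
		return -ENOMEM;

	c->p_hwfn = p_hwfn;
	c->params = *params;
	INIT_WORK(&c->work, qed_mcp_deferred_cmd_fn);
	schedule_work(&c->work);

	return 0;
}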

Patch

diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
index 24cd41567..d6944f020 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
@@ -485,10 +485,12 @@  _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
 
 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
 
-		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
+		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
 			msleep(msecs);
-		else
+		} else {
+			cond_resched();
 			udelay(usecs);
+		}
 	} while (++cnt < max_retries);
 
 	if (cnt >= max_retries) {
@@ -517,10 +519,12 @@  _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
 		 * The spinlock stays locked until the list element is removed.
 		 */
 
-		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
+		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
 			msleep(msecs);
-		else
+		} else {
+			cond_resched();
 			udelay(usecs);
+		}
 
 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);