diff mbox series

[06/22] net: thunderx: Use alloc_ordered_workqueue() to create ordered workqueues

Message ID 20230421025046.4008499-7-tj@kernel.org (mailing list archive)
State Not Applicable
Delegated to: Netdev Maintainers

Commit Message

Tejun Heo April 21, 2023, 2:50 a.m. UTC
BACKGROUND
==========

When multiple work items are queued to a workqueue, their execution order
doesn't match the queueing order. They may get executed in any order and
simultaneously. When fully serialized execution - one by one in the queueing
order - is needed, an ordered workqueue should be used which can be created
with alloc_ordered_workqueue().

However, alloc_ordered_workqueue() was a later addition. Before it, an
ordered workqueue could be obtained by creating an UNBOUND workqueue with
@max_active==1. This originally was an implementation side-effect which was
broken by 4c16bd327c74 ("workqueue: implement NUMA affinity for unbound
workqueues"). Because there were users that depended on the ordered
execution, 5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be
ordered") made the workqueue allocation path implicitly promote UNBOUND
workqueues w/ @max_active==1 to ordered workqueues.

While this has worked okay, overloading the UNBOUND allocation interface
this way creates other issues. It's difficult to tell whether a given
workqueue actually needs to be ordered, and users that legitimately want a
minimum concurrency level unexpectedly get an ordered workqueue instead.
With planned UNBOUND workqueue updates to improve execution locality and
the growing prevalence of chiplet designs which can benefit from such
improvements, this isn't a state we want to be in forever.

This patch series audits all callsites that create an UNBOUND workqueue w/
@max_active==1 and converts them to alloc_ordered_workqueue() as necessary.

WHAT TO LOOK FOR
================

The conversions are from

  alloc_workqueue(fmt, WQ_UNBOUND | flags, 1, args...)

to

  alloc_ordered_workqueue(fmt, flags, args...)

which don't cause any functional changes. If you know that fully ordered
execution is not necessary, please let me know. I'll drop the conversion and
instead add a comment noting the fact to reduce confusion while the
conversion is in progress.

If you aren't fully sure, it's completely fine to let the conversion
through. The behavior will stay exactly the same and we can always
reconsider later.
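Concretely, a conversion in this series looks like the following sketch.
The "foo" struct and workqueue name are invented for illustration; only
the alloc_workqueue()/alloc_ordered_workqueue() calls mirror the actual
pattern being converted:

```c
/* Hypothetical call site -- "foo" names are invented for illustration. */

/* Before: UNBOUND + @max_active==1, implicitly promoted to an ordered
 * workqueue by the current workqueue core. The ordering requirement is
 * invisible at the call site. */
foo->wq = alloc_workqueue("foo_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);

/* After: the same ordered behavior, but requested explicitly. */
foo->wq = alloc_ordered_workqueue("foo_wq", WQ_MEM_RECLAIM);
if (!foo->wq)
	return -ENOMEM;
```

The before and after are behaviorally identical today; the point of the
conversion is to make the ordering dependency explicit so that future
changes to UNBOUND workqueue semantics don't silently break these users.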

As there are follow-up workqueue core changes, I'd really appreciate if the
patch can be routed through the workqueue tree w/ your acks. Thanks.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Sunil Goutham <sgoutham@marvell.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: netdev@vger.kernel.org
---
 drivers/net/ethernet/cavium/thunder/thunder_bgx.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Sunil Kovvuri Goutham April 21, 2023, 6:19 a.m. UTC | #1
> -----Original Message-----
> From: Tejun Heo <htejun@gmail.com> On Behalf Of Tejun Heo
> Sent: Friday, April 21, 2023 8:21 AM
> To: jiangshanlai@gmail.com
> Cc: linux-kernel@vger.kernel.org; kernel-team@meta.com; Tejun Heo
> <tj@kernel.org>; Sunil Kovvuri Goutham <sgoutham@marvell.com>; David S.
> Miller <davem@davemloft.net>; Eric Dumazet <edumazet@google.com>; Jakub
> Kicinski <kuba@kernel.org>; Paolo Abeni <pabeni@redhat.com>; linux-arm-
> kernel@lists.infradead.org; netdev@vger.kernel.org
> Subject: [EXT] [PATCH 06/22] net: thunderx: Use alloc_ordered_workqueue() to
> create ordered workqueues
> 
> External Email
> 
> ----------------------------------------------------------------------
> [snipped: quoted commit message and patch, identical to the above]

Reviewed-by: Sunil Goutham <sgoutham@marvell.com>
Jakub Kicinski April 21, 2023, 2:01 p.m. UTC | #2
On Thu, 20 Apr 2023 16:50:30 -1000 Tejun Heo wrote:
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Sunil Goutham <sgoutham@marvell.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: Paolo Abeni <pabeni@redhat.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: netdev@vger.kernel.org

You take this via your tree directly to Linus T?
Tejun Heo April 21, 2023, 2:13 p.m. UTC | #3
On Fri, Apr 21, 2023 at 07:01:08AM -0700, Jakub Kicinski wrote:
> On Thu, 20 Apr 2023 16:50:30 -1000 Tejun Heo wrote:
> > Signed-off-by: Tejun Heo <tj@kernel.org>
> > Cc: Sunil Goutham <sgoutham@marvell.com>
> > Cc: "David S. Miller" <davem@davemloft.net>
> > Cc: Eric Dumazet <edumazet@google.com>
> > Cc: Jakub Kicinski <kuba@kernel.org>
> > Cc: Paolo Abeni <pabeni@redhat.com>
> > Cc: linux-arm-kernel@lists.infradead.org
> > Cc: netdev@vger.kernel.org
> 
> You take this via your tree directly to Linus T?

Yeah, that'd be my preference unless someone is really against it.

Thanks.
Jakub Kicinski April 21, 2023, 2:28 p.m. UTC | #4
On Fri, 21 Apr 2023 04:13:20 -1000 Tejun Heo wrote:
> On Fri, Apr 21, 2023 at 07:01:08AM -0700, Jakub Kicinski wrote:
> > On Thu, 20 Apr 2023 16:50:30 -1000 Tejun Heo wrote:  
> > > Signed-off-by: Tejun Heo <tj@kernel.org>
> > > Cc: Sunil Goutham <sgoutham@marvell.com>
> > > Cc: "David S. Miller" <davem@davemloft.net>
> > > Cc: Eric Dumazet <edumazet@google.com>
> > > Cc: Jakub Kicinski <kuba@kernel.org>
> > > Cc: Paolo Abeni <pabeni@redhat.com>
> > > Cc: linux-arm-kernel@lists.infradead.org
> > > Cc: netdev@vger.kernel.org  
> > 
> > You take this via your tree directly to Linus T?  
> 
> Yeah, that'd be my preference unless someone is really against it.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Tejun Heo May 8, 2023, 11:57 p.m. UTC | #5
Applied to wq/for-6.5-cleanup-ordered.

Thanks.

Patch

diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
index 7eb2ddbe9bad..a317feb8decb 100644
--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
@@ -1126,8 +1126,7 @@  static int bgx_lmac_enable(struct bgx *bgx, u8 lmacid)
 	}
 
 poll:
-	lmac->check_link = alloc_workqueue("check_link", WQ_UNBOUND |
-					   WQ_MEM_RECLAIM, 1);
+	lmac->check_link = alloc_ordered_workqueue("check_link", WQ_MEM_RECLAIM);
 	if (!lmac->check_link)
 		return -ENOMEM;
 	INIT_DELAYED_WORK(&lmac->dwork, bgx_poll_for_link);