From patchwork Tue May 9 01:50:21 2023
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, Tejun Heo,
    Amitkumar Karwar, Ganapathi Bhat, Sharvari Harisangam, Xinming Hu,
    Kalle Valo, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, linux-wireless@vger.kernel.org, netdev@vger.kernel.org
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-wireless@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH 02/13] wifi: mwifiex: Use default @max_active for workqueues Date: Mon, 8 May 2023 15:50:21 -1000 Message-Id: <20230509015032.3768622-3-tj@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230509015032.3768622-1-tj@kernel.org> References: <20230509015032.3768622-1-tj@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-1.5 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org These workqueues only host a single work item and thus doen't need explicit concurrency limit. Let's use the default @max_active. This doesn't cost anything and clearly expresses that @max_active doesn't matter. Signed-off-by: Tejun Heo Cc: Amitkumar Karwar Cc: Ganapathi Bhat Cc: Sharvari Harisangam Cc: Xinming Hu Cc: Kalle Valo Cc: "David S. Miller" Cc: Eric Dumazet Cc: Jakub Kicinski Cc: Paolo Abeni Cc: linux-wireless@vger.kernel.org Cc: netdev@vger.kernel.org Cc: linux-kernel@vger.kernel.org Acked-by: Kalle Valo Reviewed-by: Brian Norris --- drivers/net/wireless/marvell/mwifiex/cfg80211.c | 4 ++-- drivers/net/wireless/marvell/mwifiex/main.c | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c index bcd564dc3554..5337ee4b6f10 100644 --- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c +++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c @@ -3127,7 +3127,7 @@ struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy, priv->dfs_cac_workqueue = alloc_workqueue("MWIFIEX_DFS_CAC%s", WQ_HIGHPRI | WQ_MEM_RECLAIM | - WQ_UNBOUND, 1, name); + WQ_UNBOUND, 0, name); if (!priv->dfs_cac_workqueue) { mwifiex_dbg(adapter, ERROR, "cannot alloc DFS CAC queue\n"); ret = -ENOMEM; @@ -3138,7 +3138,7 @@ struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy, priv->dfs_chan_sw_workqueue = alloc_workqueue("MWIFIEX_DFS_CHSW%s", WQ_HIGHPRI | WQ_UNBOUND | - WQ_MEM_RECLAIM, 1, name); + WQ_MEM_RECLAIM, 0, name); if (!priv->dfs_chan_sw_workqueue) { mwifiex_dbg(adapter, ERROR, "cannot alloc DFS channel sw queue\n"); ret = -ENOMEM; diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c index ea22a08e6c08..1cd9d20cca16 100644 --- a/drivers/net/wireless/marvell/mwifiex/main.c +++ b/drivers/net/wireless/marvell/mwifiex/main.c @@ -1547,7 +1547,7 @@ mwifiex_reinit_sw(struct mwifiex_adapter *adapter) adapter->workqueue = alloc_workqueue("MWIFIEX_WORK_QUEUE", - WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_UNBOUND, 1); + WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_UNBOUND, 0); if (!adapter->workqueue) goto err_kmalloc; @@ -1557,7 +1557,7 @@ mwifiex_reinit_sw(struct mwifiex_adapter *adapter) adapter->rx_workqueue = alloc_workqueue("MWIFIEX_RX_WORK_QUEUE", WQ_HIGHPRI | WQ_MEM_RECLAIM | - WQ_UNBOUND, 1); + WQ_UNBOUND, 0); if (!adapter->rx_workqueue) goto err_kmalloc; INIT_WORK(&adapter->rx_work, mwifiex_rx_work_queue); @@ -1702,7 +1702,7 @@ mwifiex_add_card(void *card, struct completion *fw_done, adapter->workqueue = 
alloc_workqueue("MWIFIEX_WORK_QUEUE", - WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_UNBOUND, 1); + WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_UNBOUND, 0); if (!adapter->workqueue) goto err_kmalloc; @@ -1712,7 +1712,7 @@ mwifiex_add_card(void *card, struct completion *fw_done, adapter->rx_workqueue = alloc_workqueue("MWIFIEX_RX_WORK_QUEUE", WQ_HIGHPRI | WQ_MEM_RECLAIM | - WQ_UNBOUND, 1); + WQ_UNBOUND, 0); if (!adapter->rx_workqueue) goto err_kmalloc; From patchwork Tue May 9 01:50:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejun Heo X-Patchwork-Id: 13235287 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A764B383 for ; Tue, 9 May 2023 01:51:03 +0000 (UTC) Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37473AD23; Mon, 8 May 2023 18:50:46 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id d9443c01a7336-1aaebed5bd6so37148685ad.1; Mon, 08 May 2023 18:50:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1683597045; x=1686189045; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:sender:from:to:cc:subject:date :message-id:reply-to; bh=QMTTy8Xq3DQysjr0Apwc28XqtPWPIN/Qk2iVDooviS4=; b=DcJ7t//ro7U/ttHMBFC9attG41EeH8VZtSXuwOroBYbJWhoKZNI4AV1KG5CeHOcRmO ECsAMMzBkOBp29bRrORU2z666uQHw0dYQm+6fgQkpamYB4oKMGKokl0W8OZrujBS6ltA oJkH2HEjJ/yL50TyNebQGtX6iXTrrOCoja74sdoYXYeZ36wllotLKdhFh6JwLGfRZJvZ XXy8Sfcu2kvmbNC4oHLLSbLaM40HpkKcN8u4d+BFupNEL5GNVORMOj1sRkjxxbEovwk8 ySdD1vMNYGpJZXuXyMdszB62AKUBeDypRwSg7TaF1EN00A3P4fDhAvnQrY8Rhzixh4Cu sONQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683597045; x=1686189045; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:sender:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=QMTTy8Xq3DQysjr0Apwc28XqtPWPIN/Qk2iVDooviS4=; b=LaUZsJrc4COiF5Ik6UCdJxCoWH9zTIxpGsFFxLX2L0CUa+YmbF8q0h21j2cdmSq1OM w0yrgcuTJ4S1joDnKAgiQlE6pgljP11IAVWtMzAKi6BK3nacigyNh3KwMjflQHG7vfIs mDa64VRw8eax3LXJ30ZM14WHQ5EsIQbr1ZkBjcGVGip6fa9Rdv63MT0NHGynV4NzHibA AMbdAggVLKRtoTbT016EeX4Cdo+hs2XhdjlVuXhXvWAGtNYnzFmq04OYywGLSz77qqO2 NHZ1CDWqOkFyG3l+8gbb1hyJag0lJKQzmDXGjzuEHvvzUx7AyMtYNa9vLWF3tQ9yWCgP 1tbw== X-Gm-Message-State: AC+VfDwma+Hk9fYb5w4pEsQYz+/ZoXPTUPBG2FEhYIWvsAeuCoOjOLyS srRtQZC7Ny7EDuyqWiVNHO4= X-Google-Smtp-Source: ACHHUZ4bv7//zjoXAVozZ9A4o/qCIb2Vg+jgfHItIU4zO6YD64j7Wt4dENV1GblH0pUluA5l6Rlrhw== X-Received: by 2002:a17:902:ef96:b0:1aa:ce4d:c77d with SMTP id iz22-20020a170902ef9600b001aace4dc77dmr11830781plb.24.1683597045176; Mon, 08 May 2023 18:50:45 -0700 (PDT) Received: from localhost (2603-800c-1a02-1bae-a7fa-157f-969a-4cde.res6.spectrum.com. [2603:800c:1a02:1bae:a7fa:157f:969a:4cde]) by smtp.gmail.com with ESMTPSA id i12-20020a17090332cc00b001ac5896e96esm139330plr.207.2023.05.08.18.50.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 08 May 2023 18:50:44 -0700 (PDT) Sender: Tejun Heo From: Tejun Heo To: jiangshanlai@gmail.com Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, Tejun Heo , Kalle Valo , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-wireless@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH 05/13] wifi: ath10/11/12k: Use alloc_ordered_workqueue() to create ordered workqueues Date: Mon, 8 May 2023 15:50:24 -1000 Message-Id: <20230509015032.3768622-6-tj@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230509015032.3768622-1-tj@kernel.org> References: <20230509015032.3768622-1-tj@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-1.5 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org BACKGROUND ========== When multiple work items are queued to a workqueue, their execution order doesn't match the queueing order. They may get executed in any order and simultaneously. When fully serialized execution - one by one in the queueing order - is needed, an ordered workqueue should be used which can be created with alloc_ordered_workqueue(). However, alloc_ordered_workqueue() was a later addition. Before it, an ordered workqueue could be obtained by creating an UNBOUND workqueue with @max_active==1. This originally was an implementation side-effect which was broken by 4c16bd327c74 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered"). Because there were users that depended on the ordered execution, 5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered") made workqueue allocation path to implicitly promote UNBOUND workqueues w/ @max_active==1 to ordered workqueues. While this has worked okay, overloading the UNBOUND allocation interface this way creates other issues. It's difficult to tell whether a given workqueue actually needs to be ordered and users that legitimately want a min concurrency level wq unexpectedly gets an ordered one instead. With planned UNBOUND workqueue updates to improve execution locality and more prevalence of chiplet designs which can benefit from such improvements, this isn't a state we wanna be in forever. This patch series audits all callsites that create an UNBOUND workqueue w/ @max_active==1 and converts them to alloc_ordered_workqueue() as necessary. WHAT TO LOOK FOR ================ The conversions are from alloc_workqueue(WQ_UNBOUND | flags, 1, args..) to alloc_ordered_workqueue(flags, args...) which don't cause any functional changes. If you know that fully ordered execution is not ncessary, please let me know. I'll drop the conversion and instead add a comment noting the fact to reduce confusion while conversion is in progress. If you aren't fully sure, it's completely fine to let the conversion through. The behavior will stay exactly the same and we can always reconsider later. As there are follow-up workqueue core changes, I'd really appreciate if the patch can be routed through the workqueue tree w/ your acks. Thanks. Signed-off-by: Tejun Heo Cc: Kalle Valo Cc: "David S. 
Miller" Cc: Eric Dumazet Cc: Jakub Kicinski Cc: Paolo Abeni Cc: linux-wireless@vger.kernel.org Cc: netdev@vger.kernel.org Acked-by: Kalle Valo --- drivers/net/wireless/ath/ath10k/qmi.c | 3 +-- drivers/net/wireless/ath/ath11k/qmi.c | 3 +-- drivers/net/wireless/ath/ath12k/qmi.c | 3 +-- 3 files changed, 3 insertions(+), 6 deletions(-) diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c index 038c5903c0dc..52c1a3de8da6 100644 --- a/drivers/net/wireless/ath/ath10k/qmi.c +++ b/drivers/net/wireless/ath/ath10k/qmi.c @@ -1082,8 +1082,7 @@ int ath10k_qmi_init(struct ath10k *ar, u32 msa_size) if (ret) goto err; - qmi->event_wq = alloc_workqueue("ath10k_qmi_driver_event", - WQ_UNBOUND, 1); + qmi->event_wq = alloc_ordered_workqueue("ath10k_qmi_driver_event", 0); if (!qmi->event_wq) { ath10k_err(ar, "failed to allocate workqueue\n"); ret = -EFAULT; diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c index ab923e24b0a9..26b252e62909 100644 --- a/drivers/net/wireless/ath/ath11k/qmi.c +++ b/drivers/net/wireless/ath/ath11k/qmi.c @@ -3256,8 +3256,7 @@ int ath11k_qmi_init_service(struct ath11k_base *ab) return ret; } - ab->qmi.event_wq = alloc_workqueue("ath11k_qmi_driver_event", - WQ_UNBOUND, 1); + ab->qmi.event_wq = alloc_ordered_workqueue("ath11k_qmi_driver_event", 0); if (!ab->qmi.event_wq) { ath11k_err(ab, "failed to allocate workqueue\n"); return -EFAULT; diff --git a/drivers/net/wireless/ath/ath12k/qmi.c b/drivers/net/wireless/ath/ath12k/qmi.c index 03ba245fbee9..0a7892b1a8f8 100644 --- a/drivers/net/wireless/ath/ath12k/qmi.c +++ b/drivers/net/wireless/ath/ath12k/qmi.c @@ -3056,8 +3056,7 @@ int ath12k_qmi_init_service(struct ath12k_base *ab) return ret; } - ab->qmi.event_wq = alloc_workqueue("ath12k_qmi_driver_event", - WQ_UNBOUND, 1); + ab->qmi.event_wq = alloc_ordered_workqueue("ath12k_qmi_driver_event", 0); if (!ab->qmi.event_wq) { ath12k_err(ab, "failed to allocate workqueue\n"); return -EFAULT; From patchwork Tue May 9 01:50:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejun Heo X-Patchwork-Id: 13235288 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A7A05396 for ; Tue, 9 May 2023 01:51:03 +0000 (UTC) Received: from mail-pf1-x42e.google.com (mail-pf1-x42e.google.com [IPv6:2607:f8b0:4864:20::42e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06C8FAD1A; Mon, 8 May 2023 18:50:48 -0700 (PDT) Received: by mail-pf1-x42e.google.com with SMTP id d2e1a72fcca58-64115eef620so38952521b3a.1; Mon, 08 May 2023 18:50:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1683597047; x=1686189047; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:sender:from:to:cc:subject:date :message-id:reply-to; bh=81wtSULTB0ChL61hyyooDIvEy586bIFEIldimjXDoCw=; b=Wus/yyNbT6hCeCqfAjbRPgG9xEFpVMm+qPLAE/Id1SAoC/43EZnAXhfUrPO/0lxqEI EmY7kmttOyTPtq/JaLn0n92/O7yWkKXK+jeT1RsnV3G55QL8dnn7tROfuIEzSScjpCYc cLRoiZ2I9idWdzZ+2wOOY1Rwh5WQX84C8LaFkYB7+Ej42dR2pnebYgzcySmg5WSFOCTM rjvEfJ5phohVQQjckUxQNzndDGd1NV9wGmfeFACjGqYxGqU/XBlzhUIZrPxZEmS8QN9C XuOHeIavBlij1/aHO0rrrdQthhKwFR499hjKCG/fWzYrmTv9PIz4tQu7MwMbZsAMk154 2Ptw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
From patchwork Tue May 9 01:50:25 2023
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, Tejun Heo,
    Chandrashekar Devegowda, Intel Corporation, Chiranjeevi Rapolu,
    Liu Haijun, M Chetan Kumar, Ricardo Martinez, Loic Poulain,
    Sergey Ryazanov, Johannes Berg, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH 06/13] net: wwan: t7xx: Use alloc_ordered_workqueue() to create ordered workqueues
Date: Mon, 8 May 2023 15:50:25 -1000
Message-Id: <20230509015032.3768622-7-tj@kernel.org>
In-Reply-To: <20230509015032.3768622-1-tj@kernel.org>
References: <20230509015032.3768622-1-tj@kernel.org>

BACKGROUND
==========

When multiple work items are queued to a workqueue, their execution order
doesn't match the queueing order. They may get executed in any order and
simultaneously. When fully serialized execution - one by one in the
queueing order - is needed, an ordered workqueue should be used, which can
be created with alloc_ordered_workqueue().

However, alloc_ordered_workqueue() was a later addition. Before it, an
ordered workqueue could be obtained by creating an UNBOUND workqueue with
@max_active==1. This originally was an implementation side-effect which
was broken by 4c16bd327c74 ("workqueue: implement NUMA affinity for
unbound workqueues"). Because there were users that depended on the
ordered execution, 5c0338c68706 ("workqueue: restore
WQ_UNBOUND/max_active==1 to be ordered") made the workqueue allocation
path implicitly promote UNBOUND workqueues w/ @max_active==1 to ordered
workqueues.
While this has worked okay, overloading the UNBOUND allocation interface
this way creates other issues. It's difficult to tell whether a given
workqueue actually needs to be ordered, and users that legitimately want
a minimum concurrency level unexpectedly get an ordered workqueue
instead. With planned UNBOUND workqueue updates to improve execution
locality, and the growing prevalence of chiplet designs which can benefit
from such improvements, this isn't a state we want to be in forever.

This patch series audits all callsites that create an UNBOUND workqueue
w/ @max_active==1 and converts them to alloc_ordered_workqueue() as
necessary.

WHAT TO LOOK FOR
================

The conversions are from

  alloc_workqueue(WQ_UNBOUND | flags, 1, args...)

to

  alloc_ordered_workqueue(flags, args...)

which don't cause any functional changes. If you know that fully ordered
execution is not necessary, please let me know. I'll drop the conversion
and instead add a comment noting the fact to reduce confusion while the
conversion is in progress.

If you aren't fully sure, it's completely fine to let the conversion
through. The behavior will stay exactly the same and we can always
reconsider later.

As there are follow-up workqueue core changes, I'd really appreciate it
if the patch can be routed through the workqueue tree w/ your acks.
Thanks.

Signed-off-by: Tejun Heo
Cc: Chandrashekar Devegowda
Cc: Intel Corporation
Cc: Chiranjeevi Rapolu
Cc: Liu Haijun
Cc: M Chetan Kumar
Cc: Ricardo Martinez
Cc: Loic Poulain
Cc: Sergey Ryazanov
Cc: Johannes Berg
Cc: "David S. Miller"
Cc: Eric Dumazet
Cc: Jakub Kicinski
Cc: Paolo Abeni
Cc: netdev@vger.kernel.org
---
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c     | 13 +++++++------
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c |  5 +++--
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index aec3a18d44bd..7162bf38a8c9 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -1293,9 +1293,9 @@ int t7xx_cldma_init(struct cldma_ctrl *md_ctrl)
 	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
 		md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
 		md_ctrl->txq[i].worker =
-			alloc_workqueue("md_hif%d_tx%d_worker",
-					WQ_UNBOUND | WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
-					1, md_ctrl->hif_id, i);
+			alloc_ordered_workqueue("md_hif%d_tx%d_worker",
+						WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
+						md_ctrl->hif_id, i);
 		if (!md_ctrl->txq[i].worker)
 			goto err_workqueue;
@@ -1306,9 +1306,10 @@ int t7xx_cldma_init(struct cldma_ctrl *md_ctrl)
 		md_cd_queue_struct_init(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i);
 		INIT_WORK(&md_ctrl->rxq[i].cldma_work, t7xx_cldma_rx_done);
-		md_ctrl->rxq[i].worker = alloc_workqueue("md_hif%d_rx%d_worker",
-							 WQ_UNBOUND | WQ_MEM_RECLAIM,
-							 1, md_ctrl->hif_id, i);
+		md_ctrl->rxq[i].worker =
+			alloc_ordered_workqueue("md_hif%d_rx%d_worker",
+						WQ_MEM_RECLAIM,
+						md_ctrl->hif_id, i);
 		if (!md_ctrl->rxq[i].worker)
 			goto err_workqueue;
 	}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
index 46514208d4f9..8dab025a088a 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -618,8 +618,9 @@ int t7xx_dpmaif_txq_init(struct dpmaif_tx_queue *txq)
 		return ret;
 	}

-	txq->worker = alloc_workqueue("md_dpmaif_tx%d_worker", WQ_UNBOUND | WQ_MEM_RECLAIM |
-				      (txq->index ? 0 : WQ_HIGHPRI), 1, txq->index);
+	txq->worker = alloc_ordered_workqueue("md_dpmaif_tx%d_worker",
+					      WQ_MEM_RECLAIM | (txq->index ? 0 : WQ_HIGHPRI),
+					      txq->index);
 	if (!txq->worker)
 		return -ENOMEM;
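Stripped of driver detail, every conversion in this series follows the
same mechanical rule: drop WQ_UNBOUND and the explicit @max_active of 1,
both of which alloc_ordered_workqueue() supplies itself. A condensed
before/after sketch of the t7xx TX-queue hunk above, with locals
abbreviated:

/* Before: ordering relied on the implicit UNBOUND + @max_active==1 promotion. */
wq = alloc_workqueue("md_hif%d_tx%d_worker",
		     WQ_UNBOUND | WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
		     1, hif_id, i);

/* After: ordering is explicit; WQ_UNBOUND and the limit are implied. */
wq = alloc_ordered_workqueue("md_hif%d_tx%d_worker",
			     WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
			     hif_id, i);

Any remaining flags, including conditional ones like the WQ_HIGHPRI here,
carry over unchanged, which is why the conversions are not functional
changes.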
From patchwork Tue May 9 01:50:28 2023
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, Tejun Heo,
    Manivannan Sadhasivam, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, linux-arm-msm@vger.kernel.org,
    netdev@vger.kernel.org
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-arm-msm@vger.kernel.org, netdev@vger.kernel.org Subject: [PATCH 09/13] net: qrtr: Use alloc_ordered_workqueue() to create ordered workqueues Date: Mon, 8 May 2023 15:50:28 -1000 Message-Id: <20230509015032.3768622-10-tj@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230509015032.3768622-1-tj@kernel.org> References: <20230509015032.3768622-1-tj@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-1.5 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org BACKGROUND ========== When multiple work items are queued to a workqueue, their execution order doesn't match the queueing order. They may get executed in any order and simultaneously. When fully serialized execution - one by one in the queueing order - is needed, an ordered workqueue should be used which can be created with alloc_ordered_workqueue(). However, alloc_ordered_workqueue() was a later addition. Before it, an ordered workqueue could be obtained by creating an UNBOUND workqueue with @max_active==1. This originally was an implementation side-effect which was broken by 4c16bd327c74 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered"). Because there were users that depended on the ordered execution, 5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered") made workqueue allocation path to implicitly promote UNBOUND workqueues w/ @max_active==1 to ordered workqueues. While this has worked okay, overloading the UNBOUND allocation interface this way creates other issues. It's difficult to tell whether a given workqueue actually needs to be ordered and users that legitimately want a min concurrency level wq unexpectedly gets an ordered one instead. With planned UNBOUND workqueue updates to improve execution locality and more prevalence of chiplet designs which can benefit from such improvements, this isn't a state we wanna be in forever. This patch series audits all callsites that create an UNBOUND workqueue w/ @max_active==1 and converts them to alloc_ordered_workqueue() as necessary. WHAT TO LOOK FOR ================ The conversions are from alloc_workqueue(WQ_UNBOUND | flags, 1, args..) to alloc_ordered_workqueue(flags, args...) which don't cause any functional changes. If you know that fully ordered execution is not ncessary, please let me know. I'll drop the conversion and instead add a comment noting the fact to reduce confusion while conversion is in progress. If you aren't fully sure, it's completely fine to let the conversion through. The behavior will stay exactly the same and we can always reconsider later. As there are follow-up workqueue core changes, I'd really appreciate if the patch can be routed through the workqueue tree w/ your acks. Thanks. Signed-off-by: Tejun Heo Cc: Manivannan Sadhasivam Cc: "David S. 
Miller" Cc: Eric Dumazet Cc: Jakub Kicinski Cc: Paolo Abeni Cc: linux-arm-msm@vger.kernel.org Cc: netdev@vger.kernel.org --- net/qrtr/ns.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c index 0f25a386138c..0f7a729f1a1f 100644 --- a/net/qrtr/ns.c +++ b/net/qrtr/ns.c @@ -783,7 +783,7 @@ int qrtr_ns_init(void) goto err_sock; } - qrtr_ns.workqueue = alloc_workqueue("qrtr_ns_handler", WQ_UNBOUND, 1); + qrtr_ns.workqueue = alloc_ordered_workqueue("qrtr_ns_handler", 0); if (!qrtr_ns.workqueue) { ret = -ENOMEM; goto err_sock; From patchwork Tue May 9 01:50:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejun Heo X-Patchwork-Id: 13235290 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4004517F0 for ; Tue, 9 May 2023 01:51:28 +0000 (UTC) Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 75F54A5FA; Mon, 8 May 2023 18:50:56 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id d9443c01a7336-1ab0c697c2bso49726565ad.1; Mon, 08 May 2023 18:50:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1683597055; x=1686189055; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:sender:from:to:cc:subject:date :message-id:reply-to; bh=4wzzVWCLT7CybQfeQQoKtalxgsWtn1vK9dyFV4Pj3x8=; b=dCnxds2h5O2JmS2nsr5K6wfDnSJwjmLkmuWoZ1BKnNWPks1H/TMA9WWYDqkkNmOl2U Il4En8UeeDp+SHRwIagQnWPiKjBYH0qxsuqGVbpBnoKZZPK9/YXQ4ublXUCHRlhryEOF vDXkFY1yxE5CjldZNuAfvDsMb0UNXwBy2tCDkOGo0HUvB7pLF97rnLrT3RI5QH3o2WEF RruJx2RPaet2NyvHcHki5HqrXuF7BPwka9wJSy+EVUhQvU6E1DaOszxDjirQVpEYUwLN WVfgRENEINfJgaCOJorTn1blvsAfZKKWhdum4RUllaImWV+qIboofUJcKiIr+xIWt2Kp QRcA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1683597055; x=1686189055; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:sender:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=4wzzVWCLT7CybQfeQQoKtalxgsWtn1vK9dyFV4Pj3x8=; b=j1LDOKYBQCXRSbDjdv9gHEqrUlmec4zWEKaemXiWTCE4HyNbUJb90GNBGxqkLp//WQ PznfIVEHPP9J58oRLCRTDTD5o+kdyqK5AxIHNNJnX/cKPKrFVlpKjB8vFDZ3r8uOmE19 JoqRQQxqqWB1qJb6gNjHuILzYqnjfaz3jLY3ZmpFaC1esEdM26fn4xOyagNYWxHN89dP UiCvVj3/R7sXV691T8Ll71T4ZTTuZFKGRFaCmcUa9nKrZyCVauCDZo01U7fznLINEF9g fVFpTtkwFAJmYqClHVfbjjloEcZiaeuIQ+u+bOK2RbsOJ1f50Ihe+bNs9HehaCFEXpS4 VSFw== X-Gm-Message-State: AC+VfDx7yr7JmYPhuy1I1DKbQE8IUlTlCw3+pz96zwFLEyRVmxVeQ+9l DLWYc+FlUsPNcB0xW3cINiU= X-Google-Smtp-Source: ACHHUZ6YLDxJo1rwXIAisaqpgSMigXLyXSlmwBj7vvQFm+3mgPQ2CFRhLRwxkgopk/t9TqwjFbLJ7A== X-Received: by 2002:a17:903:2292:b0:1a1:bff4:49e9 with SMTP id b18-20020a170903229200b001a1bff449e9mr17671752plh.23.1683597055036; Mon, 08 May 2023 18:50:55 -0700 (PDT) Received: from localhost (2603-800c-1a02-1bae-a7fa-157f-969a-4cde.res6.spectrum.com. 
From patchwork Tue May 9 01:50:29 2023
From: Tejun Heo
To: jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, Tejun Heo,
    David Howells, Marc Dionne, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, linux-afs@lists.infradead.org,
    netdev@vger.kernel.org
Subject: [PATCH 10/13] rxrpc: Use alloc_ordered_workqueue() to create ordered workqueues
Date: Mon, 8 May 2023 15:50:29 -1000
Message-Id: <20230509015032.3768622-11-tj@kernel.org>
In-Reply-To: <20230509015032.3768622-1-tj@kernel.org>
References: <20230509015032.3768622-1-tj@kernel.org>

BACKGROUND
==========

When multiple work items are queued to a workqueue, their execution order
doesn't match the queueing order. They may get executed in any order and
simultaneously. When fully serialized execution - one by one in the
queueing order - is needed, an ordered workqueue should be used, which can
be created with alloc_ordered_workqueue().

However, alloc_ordered_workqueue() was a later addition. Before it, an
ordered workqueue could be obtained by creating an UNBOUND workqueue with
@max_active==1. This originally was an implementation side-effect which
was broken by 4c16bd327c74 ("workqueue: implement NUMA affinity for
unbound workqueues"). Because there were users that depended on the
ordered execution, 5c0338c68706 ("workqueue: restore
WQ_UNBOUND/max_active==1 to be ordered") made the workqueue allocation
path implicitly promote UNBOUND workqueues w/ @max_active==1 to ordered
workqueues.

While this has worked okay, overloading the UNBOUND allocation interface
this way creates other issues. It's difficult to tell whether a given
workqueue actually needs to be ordered, and users that legitimately want
a minimum concurrency level unexpectedly get an ordered workqueue
instead. With planned UNBOUND workqueue updates to improve execution
locality, and the growing prevalence of chiplet designs which can benefit
from such improvements, this isn't a state we want to be in forever.

This patch series audits all callsites that create an UNBOUND workqueue
w/ @max_active==1 and converts them to alloc_ordered_workqueue() as
necessary.

WHAT TO LOOK FOR
================

The conversions are from

  alloc_workqueue(WQ_UNBOUND | flags, 1, args...)

to

  alloc_ordered_workqueue(flags, args...)

which don't cause any functional changes. If you know that fully ordered
execution is not necessary, please let me know. I'll drop the conversion
and instead add a comment noting the fact to reduce confusion while the
conversion is in progress.

If you aren't fully sure, it's completely fine to let the conversion
through. The behavior will stay exactly the same and we can always
reconsider later.
As there are follow-up workqueue core changes, I'd really appreciate it
if the patch can be routed through the workqueue tree w/ your acks.
Thanks.

Signed-off-by: Tejun Heo
Cc: David Howells
Cc: Marc Dionne
Cc: "David S. Miller"
Cc: Eric Dumazet
Cc: Jakub Kicinski
Cc: Paolo Abeni
Cc: linux-afs@lists.infradead.org
Cc: netdev@vger.kernel.org
---
 net/rxrpc/af_rxrpc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 31f738d65f1c..7eb24c25c731 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -988,7 +988,7 @@ static int __init af_rxrpc_init(void)
 		goto error_call_jar;
 	}

-	rxrpc_workqueue = alloc_workqueue("krxrpcd", WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_UNBOUND, 1);
+	rxrpc_workqueue = alloc_ordered_workqueue("krxrpcd", WQ_HIGHPRI | WQ_MEM_RECLAIM);
 	if (!rxrpc_workqueue) {
 		pr_notice("Failed to allocate work queue\n");
 		goto error_work_queue;
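Note that alloc_ordered_workqueue() accepts the same WQ_* flags as
alloc_workqueue() apart from WQ_UNBOUND, which is why the krxrpcd
conversion above can keep WQ_HIGHPRI and WQ_MEM_RECLAIM unchanged. A
minimal sketch of allocation and teardown under that assumption, with a
placeholder name rather than the rxrpc one:

#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int __init example_init(void)
{
	/* Ordered, high priority, and usable on memory-reclaim paths. */
	example_wq = alloc_ordered_workqueue("example_krxrpcd",
					     WQ_HIGHPRI | WQ_MEM_RECLAIM);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}

static void __exit example_exit(void)
{
	/* Drains any remaining work items, then frees the workqueue. */
	destroy_workqueue(example_wq);
}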