From patchwork Mon Sep 19 22:18:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Lunn X-Patchwork-Id: 12981059 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9EE3EECAAD3 for ; Mon, 19 Sep 2022 22:19:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229804AbiISWTp (ORCPT ); Mon, 19 Sep 2022 18:19:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229850AbiISWTN (ORCPT ); Mon, 19 Sep 2022 18:19:13 -0400 Received: from vps0.lunn.ch (vps0.lunn.ch [185.16.172.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 71B8B4E606 for ; Mon, 19 Sep 2022 15:19:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lunn.ch; s=20171124; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:From:Sender:Reply-To:Subject:Date: Message-ID:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding: Content-ID:Content-Description:Content-Disposition:In-Reply-To:References; bh=epaCviXyRlRI3zLJkfN8KlkL1yQDyTlaunnS07fc87c=; b=2C+ABRLnDKaYy0fmU1QOBvycHc lkvcnAhriOGVkFO9qVZToAP1TA3SFvgHVxEbVrshWL7GZKiNB2m6qUUtiCRMm+oiuBRbFB4/Cf9Im XxGVvyASRkgkPdlwNUsq1EFLCQ8gVn20IKho1OU8Wl5Cg/wAj+PEa4qxnqGjvoCl5bLw=; Received: from andrew by vps0.lunn.ch with local (Exim 4.94.2) (envelope-from ) id 1oaP6Q-00HBR6-Tc; Tue, 20 Sep 2022 00:18:58 +0200 From: Andrew Lunn To: mattias.forsblad@gmail.com Cc: netdev , Florian Fainelli , Vladimir Oltean , Christian Marangi , Andrew Lunn Subject: [PATCH rfc v0 1/9] net: dsa: qca8k: Fix inconsistent use of jiffies vs milliseconds Date: Tue, 20 Sep 
2022 00:18:45 +0200 Message-Id: <20220919221853.4095491-2-andrew@lunn.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch> References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC wait_for_completion_timeout() expects a timeout in jiffies. Within the driver, some call sites converted QCA8K_ETHERNET_TIMEOUT to jiffies, others did not. Make the code consistent by changing the #define to include a call to msecs_to_jiffies, and remove all other calls to msecs_to_jiffies. Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 4 ++-- drivers/net/dsa/qca/qca8k.h | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index c181346388a4..1c9a8764d1d9 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -258,7 +258,7 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) dev_queue_xmit(skb); ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done, - msecs_to_jiffies(QCA8K_ETHERNET_TIMEOUT)); + QCA8K_ETHERNET_TIMEOUT); *val = mgmt_eth_data->data[0]; if (len > QCA_HDR_MGMT_DATA1_LEN) @@ -310,7 +310,7 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) dev_queue_xmit(skb); ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done, - msecs_to_jiffies(QCA8K_ETHERNET_TIMEOUT)); + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h index e36ecc9777f4..74578b7c3283 100644 --- a/drivers/net/dsa/qca/qca8k.h +++ b/drivers/net/dsa/qca/qca8k.h @@ -15,7 +15,7 @@ #define QCA8K_ETHERNET_MDIO_PRIORITY 7 #define QCA8K_ETHERNET_PHY_PRIORITY 6 -#define QCA8K_ETHERNET_TIMEOUT 5 +#define QCA8K_ETHERNET_TIMEOUT
msecs_to_jiffies(5) #define QCA8K_NUM_PORTS 7 #define QCA8K_NUM_CPU_PORTS 2 From patchwork Mon Sep 19 22:18:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Lunn X-Patchwork-Id: 12981052 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1669BECAAA1 for ; Mon, 19 Sep 2022 22:19:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229861AbiISWTM (ORCPT ); Mon, 19 Sep 2022 18:19:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229773AbiISWTF (ORCPT ); Mon, 19 Sep 2022 18:19:05 -0400 Received: from vps0.lunn.ch (vps0.lunn.ch [185.16.172.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BC0CD3AB06 for ; Mon, 19 Sep 2022 15:19:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lunn.ch; s=20171124; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:From:Sender:Reply-To:Subject:Date: Message-ID:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding: Content-ID:Content-Description:Content-Disposition:In-Reply-To:References; bh=mS1jZiFgrmqKWW6CVbeGpzdk/cc/wnQP8J0cVhY17vE=; b=Pb4QbJ5NAq08phthTxsmuDEfjX WzNWsTCagptTkylR9VGj6GobWcO7jaL9En9+qF/PPekCwPbmBdE24lJo1gZ6oSM2u08dfqlohIIjJ v1N6thbTSTz3qGCWCQLRX1D08iFnG+G7GtzsjBn5mN0EGBRlZJOHuXrnulKiuULoSDNA=; Received: from andrew by vps0.lunn.ch with local (Exim 4.94.2) (envelope-from ) id 1oaP6Q-00HBR9-V0; Tue, 20 Sep 2022 00:18:58 +0200 From: Andrew Lunn To: mattias.forsblad@gmail.com Cc: netdev , Florian Fainelli , Vladimir Oltean , Christian Marangi , Andrew Lunn Subject: [PATCH rfc v0 2/9] net: 
dsa: qca8k: Move completion into DSA core Date: Tue, 20 Sep 2022 00:18:46 +0200 Message-Id: <20220919221853.4095491-3-andrew@lunn.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch> References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC When performing operations on a remote switch using Ethernet frames, a completion is used between the sender of the request and the code which receives the reply. Move this completion into the DSA core, simplifying the driver. The initialisation and reinitialisation of the completion is now performed in the core. The conversion of milliseconds to jiffies is also done in the core. Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 49 ++++++++++++-------------------- drivers/net/dsa/qca/qca8k.h | 6 ++-- include/net/dsa.h | 12 ++++++++ net/dsa/dsa.c | 22 ++++++++++++++ 4 files changed, 55 insertions(+), 34 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index 1c9a8764d1d9..f4e92156bd32 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -160,7 +160,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb) QCA_HDR_MGMT_DATA2_LEN); } - complete(&mgmt_eth_data->rw_done); + dsa_inband_complete(&mgmt_eth_data->inband); } static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *val, @@ -248,8 +248,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) skb->dev = priv->mgmt_master; - reinit_completion(&mgmt_eth_data->rw_done); - /* Increment seq_num and set it in the mdio pkt */ mgmt_eth_data->seq++; qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); @@ -257,8 +255,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg,
u32 *val, int len) dev_queue_xmit(skb); - ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); *val = mgmt_eth_data->data[0]; if (len > QCA_HDR_MGMT_DATA1_LEN) @@ -300,8 +298,6 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) skb->dev = priv->mgmt_master; - reinit_completion(&mgmt_eth_data->rw_done); - /* Increment seq_num and set it in the mdio pkt */ mgmt_eth_data->seq++; qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); @@ -309,8 +305,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) dev_queue_xmit(skb); - ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -448,8 +444,6 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, bool ack; int ret; - reinit_completion(&mgmt_eth_data->rw_done); - /* Increment seq_num and set it in the copy pkt */ mgmt_eth_data->seq++; qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); @@ -457,8 +451,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, dev_queue_xmit(skb); - ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -540,8 +534,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, clear_skb->dev = mgmt_master; write_skb->dev = mgmt_master; - reinit_completion(&mgmt_eth_data->rw_done); - /* Increment seq_num and set it in the write pkt */ mgmt_eth_data->seq++; qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq); @@ -549,8 +541,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, dev_queue_xmit(write_skb); - ret = 
wait_for_completion_timeout(&mgmt_eth_data->rw_done, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -577,8 +569,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, } if (read) { - reinit_completion(&mgmt_eth_data->rw_done); - /* Increment seq_num and set it in the read pkt */ mgmt_eth_data->seq++; qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq); @@ -586,8 +576,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, dev_queue_xmit(read_skb); - ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -606,8 +596,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, kfree_skb(read_skb); } exit: - reinit_completion(&mgmt_eth_data->rw_done); - /* Increment seq_num and set it in the clear pkt */ mgmt_eth_data->seq++; qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq); @@ -615,8 +603,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, dev_queue_xmit(clear_skb); - wait_for_completion_timeout(&mgmt_eth_data->rw_done, - QCA8K_ETHERNET_TIMEOUT); + dsa_inband_wait_for_completion(&mgmt_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); mutex_unlock(&mgmt_eth_data->mutex); @@ -1528,7 +1516,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk exit: /* Complete on receiving all the mib packet */ if (refcount_dec_and_test(&mib_eth_data->port_parsed)) - complete(&mib_eth_data->rw_done); + dsa_inband_complete(&mib_eth_data->inband); } static int @@ -1543,8 +1531,6 @@ qca8k_get_ethtool_stats_eth(struct dsa_switch *ds, int port, u64 *data) mutex_lock(&mib_eth_data->mutex); - reinit_completion(&mib_eth_data->rw_done); - mib_eth_data->req_port = dp->index; mib_eth_data->data = data; refcount_set(&mib_eth_data->port_parsed, 
QCA8K_NUM_PORTS); @@ -1562,7 +1548,8 @@ qca8k_get_ethtool_stats_eth(struct dsa_switch *ds, int port, u64 *data) if (ret) goto exit; - ret = wait_for_completion_timeout(&mib_eth_data->rw_done, QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_wait_for_completion(&mib_eth_data->inband, + QCA8K_ETHERNET_TIMEOUT); exit: mutex_unlock(&mib_eth_data->mutex); @@ -1929,10 +1916,10 @@ qca8k_sw_probe(struct mdio_device *mdiodev) return -ENOMEM; mutex_init(&priv->mgmt_eth_data.mutex); - init_completion(&priv->mgmt_eth_data.rw_done); + dsa_inband_init(&priv->mgmt_eth_data.inband); mutex_init(&priv->mib_eth_data.mutex); - init_completion(&priv->mib_eth_data.rw_done); + dsa_inband_init(&priv->mib_eth_data.inband); priv->ds->dev = &mdiodev->dev; priv->ds->num_ports = QCA8K_NUM_PORTS; diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h index 74578b7c3283..685628716ed2 100644 --- a/drivers/net/dsa/qca/qca8k.h +++ b/drivers/net/dsa/qca/qca8k.h @@ -15,7 +15,7 @@ #define QCA8K_ETHERNET_MDIO_PRIORITY 7 #define QCA8K_ETHERNET_PHY_PRIORITY 6 -#define QCA8K_ETHERNET_TIMEOUT msecs_to_jiffies(5) +#define QCA8K_ETHERNET_TIMEOUT 5 #define QCA8K_NUM_PORTS 7 #define QCA8K_NUM_CPU_PORTS 2 @@ -346,7 +346,7 @@ enum { }; struct qca8k_mgmt_eth_data { - struct completion rw_done; + struct dsa_inband inband; struct mutex mutex; /* Enforce one mdio read/write at time */ bool ack; u32 seq; @@ -354,7 +354,7 @@ struct qca8k_mgmt_eth_data { }; struct qca8k_mib_eth_data { - struct completion rw_done; + struct dsa_inband inband; struct mutex mutex; /* Process one command at time */ refcount_t port_parsed; /* Counter to track parsed port */ u8 req_port; diff --git a/include/net/dsa.h b/include/net/dsa.h index f2ce12860546..ca81541703f4 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -7,6 +7,7 @@ #ifndef __LINUX_NET_DSA_H #define __LINUX_NET_DSA_H +#include #include #include #include @@ -1276,6 +1277,17 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port, const struct 
switchdev_obj_port_mdb *mdb, struct dsa_db db); +/* Perform operations on a switch by sending it request in Ethernet + * frames and expecting a response in a frame. + */ +struct dsa_inband { + struct completion completion; +}; + +void dsa_inband_init(struct dsa_inband *inband); +void dsa_inband_complete(struct dsa_inband *inband); +int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms); + /* Keep inline for faster access in hot path */ static inline bool netdev_uses_dsa(const struct net_device *dev) { diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index be7b320cda76..382dbb9e921a 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -518,6 +518,28 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port, } EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db); +void dsa_inband_init(struct dsa_inband *inband) +{ + init_completion(&inband->completion); +} +EXPORT_SYMBOL_GPL(dsa_inband_init); + +void dsa_inband_complete(struct dsa_inband *inband) +{ + complete(&inband->completion); +} +EXPORT_SYMBOL_GPL(dsa_inband_complete); + +int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms) +{ + unsigned long jiffies = msecs_to_jiffies(timeout_ms); + + reinit_completion(&inband->completion); + + return wait_for_completion_timeout(&inband->completion, jiffies); +} +EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion); + static int __init dsa_init_module(void) { int rc; From patchwork Mon Sep 19 22:18:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Lunn X-Patchwork-Id: 12981055 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ECA11ECAAA1 for ; Mon, 19 Sep 2022 22:19:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S229884AbiISWTS (ORCPT ); Mon, 19 Sep 2022 18:19:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229814AbiISWTI (ORCPT ); Mon, 19 Sep 2022 18:19:08 -0400 Received: from vps0.lunn.ch (vps0.lunn.ch [185.16.172.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2A3CE4E603 for ; Mon, 19 Sep 2022 15:19:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lunn.ch; s=20171124; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:From:Sender:Reply-To:Subject:Date: Message-ID:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding: Content-ID:Content-Description:Content-Disposition:In-Reply-To:References; bh=eHl0ZCe+JKtLYwzQiC8ZxiolQIEQWzlJfarvuSsLvTY=; b=SiYwsvS0aIrBk7axSXLhHBgA9a 74RLWCiTACo4L36tAWQfrA/kbaytg6tU3EnQadBqTz6f1kU0VfTwFIMTMpswSVZEFM6xvWSB7GgBI f48VaDw0QL3hBGdi034wR4qAmbKynj0sIQKuh6lwpzlPrDTNd95vYlzr8dKzX1W1y8XU=; Received: from andrew by vps0.lunn.ch with local (Exim 4.94.2) (envelope-from ) id 1oaP6R-00HBRC-0D; Tue, 20 Sep 2022 00:18:59 +0200 From: Andrew Lunn To: mattias.forsblad@gmail.com Cc: netdev , Florian Fainelli , Vladimir Oltean , Christian Marangi , Andrew Lunn Subject: [PATCH rfc v0 3/9] net: dsa: qca8K: Move queuing for request frame into the core Date: Tue, 20 Sep 2022 00:18:47 +0200 Message-Id: <20220919221853.4095491-4-andrew@lunn.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch> References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Combine the queuing of the request and waiting for the completion into one core helper. Add the function dsa_rmu_request() to perform this. 
Access to statistics is not a strict request/reply, so the dsa_rmu_wait_for_completion needs to be kept. It is also not possible to combine dsa_rmu_request() and dsa_rmu_wait_for_completion() since we need to avoid the race where the request is sent and a reply received while the completion has not yet been reinitialised, because the scheduler decided to do other things. Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 32 ++++++++++---------------------- include/net/dsa.h | 2 ++ net/dsa/dsa.c | 16 ++++++++++++++++ 3 files changed, 28 insertions(+), 22 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index f4e92156bd32..9c44a09590a6 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -253,10 +253,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dev_queue_xmit(skb); - - ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_request(&mgmt_eth_data->inband, skb, + QCA8K_ETHERNET_TIMEOUT); *val = mgmt_eth_data->data[0]; if (len > QCA_HDR_MGMT_DATA1_LEN) @@ -303,10 +301,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dev_queue_xmit(skb); - - ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_request(&mgmt_eth_data->inband, skb, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -449,10 +445,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dev_queue_xmit(skb); - - ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_request(&mgmt_eth_data->inband, skb, +
QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -539,10 +533,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dev_queue_xmit(write_skb); - - ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -574,10 +566,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dev_queue_xmit(read_skb); - - ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb, + QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -601,8 +591,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dev_queue_xmit(clear_skb); - dsa_inband_wait_for_completion(&mgmt_eth_data->inband, QCA8K_ETHERNET_TIMEOUT); diff --git a/include/net/dsa.h b/include/net/dsa.h index ca81541703f4..50c319832939 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -1286,6 +1286,8 @@ struct dsa_inband { void dsa_inband_init(struct dsa_inband *inband); void dsa_inband_complete(struct dsa_inband *inband); +int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, + int timeout_ms); int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms); /* Keep inline for faster access in hot path */ diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index 382dbb9e921a..8de0c3124abf 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -540,6 +540,22 @@ int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms) } EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion); +/* Cannot use dsa_inband_wait_completion since the 
completion needs to be + * reinitialized before the skb is queued to avoid races. + */ +int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, + int timeout_ms) +{ + unsigned long jiffies = msecs_to_jiffies(timeout_ms); + + reinit_completion(&inband->completion); + + dev_queue_xmit(skb); + + return wait_for_completion_timeout(&inband->completion, jiffies); +} +EXPORT_SYMBOL_GPL(dsa_inband_request); + static int __init dsa_init_module(void) { int rc; From patchwork Mon Sep 19 22:18:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Lunn X-Patchwork-Id: 12981054 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BF71ECAAD3 for ; Mon, 19 Sep 2022 22:19:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229535AbiISWTR (ORCPT ); Mon, 19 Sep 2022 18:19:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38066 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229808AbiISWTI (ORCPT ); Mon, 19 Sep 2022 18:19:08 -0400 Received: from vps0.lunn.ch (vps0.lunn.ch [185.16.172.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 66D8E4E606 for ; Mon, 19 Sep 2022 15:19:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lunn.ch; s=20171124; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:From:Sender:Reply-To:Subject:Date: Message-ID:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding: Content-ID:Content-Description:Content-Disposition:In-Reply-To:References; bh=iOh2OYhgLa3XzJ+6lwdU/0DYjkO7bQB+/2KA0vvcDuM=; b=wchTP/PYEl6uTpYMegj7MpRyHv
kDSu5sQRUJNASahgIY5eu/H2rxW/Xw5oQgqSkNoXl0FasagdwY4LFh0biBYSI4XbFcshC2d+FDRdi 3DLs5RI0sCwCiMVrkGp9XdvBd5zrlOh4RE/zdYgelCb6tykIngrp92JbIPvVdFosZL28=; Received: from andrew by vps0.lunn.ch with local (Exim 4.94.2) (envelope-from ) id 1oaP6R-00HBRF-1e; Tue, 20 Sep 2022 00:18:59 +0200 From: Andrew Lunn To: mattias.forsblad@gmail.com Cc: netdev , Florian Fainelli , Vladimir Oltean , Christian Marangi , Andrew Lunn Subject: [PATCH rfc v0 4/9] net: dsa: qca8k: dsa_inband_request: More normal return values Date: Tue, 20 Sep 2022 00:18:48 +0200 Message-Id: <20220919221853.4095491-5-andrew@lunn.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch> References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC wait_for_completion_timeout() has unusual return values. If it times out, it returns 0, and on success it returns the number of remaining jiffies for the timeout; interruptible wait variants can additionally return a negative error code. For the use case here, the remaining time is not needed. All that really matters is whether the wait succeeded, timed out, or hit an error. Massage the return value to fit this, and convert the callers to the more usual pattern where 0 means success and ret < 0 indicates an error.
Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 23 ++++++++++------------- net/dsa/dsa.c | 8 +++++++- 2 files changed, 17 insertions(+), 14 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index 9c44a09590a6..9481a248273a 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -264,8 +264,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) mutex_unlock(&mgmt_eth_data->mutex); - if (ret <= 0) - return -ETIMEDOUT; + if (ret) + return ret; if (!ack) return -EINVAL; @@ -308,8 +308,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) mutex_unlock(&mgmt_eth_data->mutex); - if (ret <= 0) - return -ETIMEDOUT; + if (ret) + return ret; if (!ack) return -EINVAL; @@ -450,8 +450,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, ack = mgmt_eth_data->ack; - if (ret <= 0) - return -ETIMEDOUT; + if (ret) + return ret; if (!ack) return -EINVAL; @@ -538,8 +538,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, ack = mgmt_eth_data->ack; - if (ret <= 0) { - ret = -ETIMEDOUT; + if (ret) { kfree_skb(read_skb); goto exit; } @@ -571,10 +570,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, ack = mgmt_eth_data->ack; - if (ret <= 0) { - ret = -ETIMEDOUT; + if (ret) goto exit; - } if (!ack) { ret = -EINVAL; @@ -591,8 +588,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; - dsa_inband_wait_for_completion(&mgmt_eth_data->inband, - QCA8K_ETHERNET_TIMEOUT); + ret = dsa_inband_request(&mgmt_eth_data->inband, clear_skb, + QCA8K_ETHERNET_TIMEOUT); mutex_unlock(&mgmt_eth_data->mutex); diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index 8de0c3124abf..68576f1c5b02 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -547,12 +547,18 @@ int dsa_inband_request(struct dsa_inband 
*inband, struct sk_buff *skb, int timeout_ms) { unsigned long jiffies = msecs_to_jiffies(timeout_ms); + int ret; reinit_completion(&inband->completion); dev_queue_xmit(skb); - return wait_for_completion_timeout(&inband->completion, jiffies); + ret = wait_for_completion_timeout(&inband->completion, jiffies); + if (ret < 0) + return ret; + if (ret == 0) + return -ETIMEDOUT; + return 0; } EXPORT_SYMBOL_GPL(dsa_inband_request); From patchwork Mon Sep 19 22:18:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Lunn X-Patchwork-Id: 12981053 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EC73C6FA82 for ; Mon, 19 Sep 2022 22:19:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229873AbiISWTO (ORCPT ); Mon, 19 Sep 2022 18:19:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229774AbiISWTF (ORCPT ); Mon, 19 Sep 2022 18:19:05 -0400 Received: from vps0.lunn.ch (vps0.lunn.ch [185.16.172.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EFD234DF2A for ; Mon, 19 Sep 2022 15:19:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lunn.ch; s=20171124; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:From:Sender:Reply-To:Subject:Date: Message-ID:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding: Content-ID:Content-Description:Content-Disposition:In-Reply-To:References; bh=Gy6Y78/sd6dWowkaXHqVSk3fONB8c3dj7iEzbU5pNyI=; b=vuq7dtNOLMyPT9zFJykdPRB6RP an4lO02iZh085aGrlJMSS1SZ4AsUqW4QckdKjyKsgaut7W+GxNWYWj6xfxsx019/EdBBz35r+eVin 
XkY95uQ6vFcyvvD24SMq7y4T3c5QmRgDtGDzvMux1RNQI8AeLpmD8dLwdKCjMMbNuoNY=; Received: from andrew by vps0.lunn.ch with local (Exim 4.94.2) (envelope-from ) id 1oaP6R-00HBRI-3C; Tue, 20 Sep 2022 00:18:59 +0200 From: Andrew Lunn To: mattias.forsblad@gmail.com Cc: netdev , Florian Fainelli , Vladimir Oltean , Christian Marangi , Andrew Lunn Subject: [PATCH rfc v0 5/9] net: dsa: qca8k: Move request sequence number handling into core Date: Tue, 20 Sep 2022 00:18:49 +0200 Message-Id: <20220919221853.4095491-6-andrew@lunn.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch> References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Each request/reply frame is likely to have a sequence number so that the request and the reply can be matched together. Move this sequence number into the inband structure. The driver must provide a helper to insert the sequence number into the skb, and the core will perform the increment. To allow different devices to have sequence numbers of different sizes, a mask is provided. This can be used, for example, to reduce the u32 sequence number down to a u8.
Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 35 +++++++++----------------------- drivers/net/dsa/qca/qca8k.h | 1 - include/net/dsa.h | 6 +++++- net/dsa/dsa.c | 16 ++++++++++++++- 4 files changed, 30 insertions(+), 28 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index 9481a248273a..a354ba070d33 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -146,7 +146,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb) len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command); /* Make sure the seq match the requested packet */ - if (mgmt_ethhdr->seq == mgmt_eth_data->seq) + if (mgmt_ethhdr->seq == dsa_inband_seqno(&mgmt_eth_data->inband)) mgmt_eth_data->ack = true; if (cmd == MDIO_READ) { @@ -247,14 +247,11 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) } skb->dev = priv->mgmt_master; - - /* Increment seq_num and set it in the mdio pkt */ - mgmt_eth_data->seq++; - qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, - QCA8K_ETHERNET_TIMEOUT); + qca8k_mdio_header_fill_seq_num, + QCA8K_ETHERNET_TIMEOUT); *val = mgmt_eth_data->data[0]; if (len > QCA_HDR_MGMT_DATA1_LEN) @@ -295,13 +292,10 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) } skb->dev = priv->mgmt_master; - - /* Increment seq_num and set it in the mdio pkt */ - mgmt_eth_data->seq++; - qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, + qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -440,12 +434,10 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, bool ack; int ret; - /* Increment seq_num and set it in the copy pkt */ - mgmt_eth_data->seq++; - qca8k_mdio_header_fill_seq_num(skb, 
mgmt_eth_data->seq); mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, + qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -527,13 +519,10 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, read_skb->dev = mgmt_master; clear_skb->dev = mgmt_master; write_skb->dev = mgmt_master; - - /* Increment seq_num and set it in the write pkt */ - mgmt_eth_data->seq++; - qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb, + qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -560,12 +549,10 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, } if (read) { - /* Increment seq_num and set it in the read pkt */ - mgmt_eth_data->seq++; - qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb, + qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); ack = mgmt_eth_data->ack; @@ -583,12 +570,10 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, kfree_skb(read_skb); } exit: - /* Increment seq_num and set it in the clear pkt */ - mgmt_eth_data->seq++; - qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq); mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, clear_skb, + qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); mutex_unlock(&mgmt_eth_data->mutex); @@ -1901,10 +1886,10 @@ qca8k_sw_probe(struct mdio_device *mdiodev) return -ENOMEM; mutex_init(&priv->mgmt_eth_data.mutex); - dsa_inband_init(&priv->mgmt_eth_data.inband); + dsa_inband_init(&priv->mgmt_eth_data.inband, U32_MAX); mutex_init(&priv->mib_eth_data.mutex); - dsa_inband_init(&priv->mib_eth_data.inband); + dsa_inband_init(&priv->mib_eth_data.inband, U32_MAX); priv->ds->dev = &mdiodev->dev; priv->ds->num_ports = QCA8K_NUM_PORTS; diff --git 
a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h index 685628716ed2..a5abc340471c 100644 --- a/drivers/net/dsa/qca/qca8k.h +++ b/drivers/net/dsa/qca/qca8k.h @@ -349,7 +349,6 @@ struct qca8k_mgmt_eth_data { struct dsa_inband inband; struct mutex mutex; /* Enforce one mdio read/write at time */ bool ack; - u32 seq; u32 data[4]; }; diff --git a/include/net/dsa.h b/include/net/dsa.h index 50c319832939..2d6b7c7f158b 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -1282,13 +1282,17 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port, */ struct dsa_inband { struct completion completion; + u32 seqno; + u32 seqno_mask; }; -void dsa_inband_init(struct dsa_inband *inband); +void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask); void dsa_inband_complete(struct dsa_inband *inband); int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, + void (*insert_seqno)(struct sk_buff *skb, u32 seqno), int timeout_ms); int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms); +u32 dsa_inband_seqno(struct dsa_inband *inband); /* Keep inline for faster access in hot path */ static inline bool netdev_uses_dsa(const struct net_device *dev) diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index 68576f1c5b02..5a8d95f8acec 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -518,9 +518,11 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port, } EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db); -void dsa_inband_init(struct dsa_inband *inband) +void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask) { init_completion(&inband->completion); + inband->seqno_mask = seqno_mask; + inband->seqno = 0; } EXPORT_SYMBOL_GPL(dsa_inband_init); @@ -544,6 +546,7 @@ EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion); * reinitialized before the skb is queue to avoid races. 
*/ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, + void (*insert_seqno)(struct sk_buff *skb, u32 seqno), int timeout_ms) { unsigned long jiffies = msecs_to_jiffies(timeout_ms); @@ -551,6 +554,11 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, reinit_completion(&inband->completion); + if (insert_seqno) { + inband->seqno++; + insert_seqno(skb, inband->seqno & inband->seqno_mask); + } + dev_queue_xmit(skb); ret = wait_for_completion_timeout(&inband->completion, jiffies); @@ -562,6 +570,12 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, } EXPORT_SYMBOL_GPL(dsa_inband_request); +u32 dsa_inband_seqno(struct dsa_inband *inband) +{ + return inband->seqno & inband->seqno_mask; +} +EXPORT_SYMBOL_GPL(dsa_inband_seqno); + static int __init dsa_init_module(void) { int rc;
From patchwork Mon Sep 19 22:18:50 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12981057
From: Andrew Lunn
To: mattias.forsblad@gmail.com
Cc: netdev, Florian Fainelli, Vladimir Oltean, Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v0 6/9] net: dsa: qca8k: Refactor sequence number mismatch to use error code
Date: Tue, 20 Sep 2022 00:18:50 +0200
Message-Id: <20220919221853.4095491-7-andrew@lunn.ch>
In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch>
References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch>
X-Patchwork-State: RFC

Replace the boolean indicating that the sequence numbers match with an error code. Set the error code to -EINVAL if the sequence numbers are wrong, otherwise 0. The value is only safe to use if the completion happens. Ensure the return value of the completion is always considered, and if it fails, a timeout error is returned. This is a preparation step for moving the error tracking into the DSA core.
Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 50 ++++++++++++++------------------ drivers/net/dsa/qca/qca8k.h | 2 +- 2 files changed, 23 insertions(+), 29 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index a354ba070d33..69b807d87367 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -147,7 +147,9 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb) /* Make sure the seq match the requested packet */ if (mgmt_ethhdr->seq == dsa_inband_seqno(&mgmt_eth_data->inband)) - mgmt_eth_data->ack = true; + mgmt_eth_data->err = 0; + else + mgmt_eth_data->err = -EINVAL; if (cmd == MDIO_READ) { mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data; @@ -229,7 +231,7 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) { struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data; struct sk_buff *skb; - bool ack; + int err; int ret; skb = qca8k_alloc_mdio_header(MDIO_READ, reg, NULL, @@ -247,7 +249,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) } skb->dev = priv->mgmt_master; - mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, qca8k_mdio_header_fill_seq_num, @@ -257,15 +258,15 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) if (len > QCA_HDR_MGMT_DATA1_LEN) memcpy(val + 1, mgmt_eth_data->data + 1, len - QCA_HDR_MGMT_DATA1_LEN); - ack = mgmt_eth_data->ack; + err = mgmt_eth_data->err; mutex_unlock(&mgmt_eth_data->mutex); if (ret) return ret; - if (!ack) - return -EINVAL; + if (err) + return -ret; return 0; } @@ -274,7 +275,7 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) { struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data; struct sk_buff *skb; - bool ack; + int err; int ret; skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val, @@ -292,21 +293,20 @@ static int 
qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) } skb->dev = priv->mgmt_master; - mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - ack = mgmt_eth_data->ack; + err = mgmt_eth_data->err; mutex_unlock(&mgmt_eth_data->mutex); if (ret) return ret; - if (!ack) - return -EINVAL; + if (err) + return err; return 0; } @@ -431,22 +431,20 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, struct sk_buff *read_skb, u32 *val) { struct sk_buff *skb = skb_copy(read_skb, GFP_KERNEL); - bool ack; + int err; int ret; - mgmt_eth_data->ack = false; - ret = dsa_inband_request(&mgmt_eth_data->inband, skb, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - ack = mgmt_eth_data->ack; + err = mgmt_eth_data->err; if (ret) return ret; - if (!ack) - return -EINVAL; + if (err) + return err; *val = mgmt_eth_data->data[0]; @@ -462,7 +460,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, u32 write_val, clear_val = 0, val; struct net_device *mgmt_master; int ret, ret1; - bool ack; + int err; if (regnum >= QCA8K_MDIO_MASTER_MAX_REG) return -EINVAL; @@ -519,21 +517,20 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, read_skb->dev = mgmt_master; clear_skb->dev = mgmt_master; write_skb->dev = mgmt_master; - mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - ack = mgmt_eth_data->ack; + err = mgmt_eth_data->err; if (ret) { kfree_skb(read_skb); goto exit; } - if (!ack) { - ret = -EINVAL; + if (err) { + ret = err; kfree_skb(read_skb); goto exit; } @@ -549,19 +546,17 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, } if (read) { - mgmt_eth_data->ack = false; - ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - ack = mgmt_eth_data->ack; + err = 
mgmt_eth_data->err; if (ret) goto exit; - if (!ack) { - ret = -EINVAL; + if (err) { + ret = err; goto exit; } @@ -570,7 +565,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, kfree_skb(read_skb); } exit: - mgmt_eth_data->ack = false; ret = dsa_inband_request(&mgmt_eth_data->inband, clear_skb, qca8k_mdio_header_fill_seq_num, diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h index a5abc340471c..79f7197a1790 100644 --- a/drivers/net/dsa/qca/qca8k.h +++ b/drivers/net/dsa/qca/qca8k.h @@ -348,7 +348,7 @@ enum { struct qca8k_mgmt_eth_data { struct dsa_inband inband; struct mutex mutex; /* Enforce one mdio read/write at time */ - bool ack; + int err; u32 data[4]; };
From patchwork Mon Sep 19 22:18:51 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12981056
From: Andrew Lunn
To: mattias.forsblad@gmail.com
Cc: netdev, Florian Fainelli, Vladimir Oltean, Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v0 7/9] net: dsa: qca8k: Pass error code from reply decoder to requester
Date: Tue, 20 Sep 2022 00:18:51 +0200
Message-Id: <20220919221853.4095491-8-andrew@lunn.ch>
In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch>
References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch>
X-Patchwork-State: RFC

The code which decodes the frame and signals the completion can experience errors, such as a wrong sequence number. Pass an error code between the completer and the function waiting on the completion. This simplifies the error handling, since all errors are combined in one place. At the same time, return -EPROTO if the sequence numbers don't match; this is more appropriate than -EINVAL.
Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 60 ++++++-------------------------- drivers/net/dsa/qca/qca8k.h | 1 - include/net/dsa.h | 3 +- net/dsa/dsa.c | 7 ++-- 4 files changed, 18 insertions(+), 53 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index 69b807d87367..55a781851e28 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -138,6 +138,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb) struct qca8k_priv *priv = ds->priv; struct qca_mgmt_ethhdr *mgmt_ethhdr; u8 len, cmd; + int err = 0; mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb_mac_header(skb); mgmt_eth_data = &priv->mgmt_eth_data; @@ -146,10 +147,8 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb) len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command); /* Make sure the seq match the requested packet */ - if (mgmt_ethhdr->seq == dsa_inband_seqno(&mgmt_eth_data->inband)) - mgmt_eth_data->err = 0; - else - mgmt_eth_data->err = -EINVAL; + if (mgmt_ethhdr->seq != dsa_inband_seqno(&mgmt_eth_data->inband)) + err = -EPROTO; if (cmd == MDIO_READ) { mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data; @@ -162,7 +161,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb) QCA_HDR_MGMT_DATA2_LEN); } - dsa_inband_complete(&mgmt_eth_data->inband); + dsa_inband_complete(&mgmt_eth_data->inband, err); } static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *val, @@ -231,7 +230,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) { struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data; struct sk_buff *skb; - int err; int ret; skb = qca8k_alloc_mdio_header(MDIO_READ, reg, NULL, @@ -258,24 +256,15 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) if (len > QCA_HDR_MGMT_DATA1_LEN) memcpy(val + 1, mgmt_eth_data->data + 1, len - 
QCA_HDR_MGMT_DATA1_LEN); - err = mgmt_eth_data->err; - mutex_unlock(&mgmt_eth_data->mutex); - if (ret) - return ret; - - if (err) - return -ret; - - return 0; + return ret; } static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) { struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data; struct sk_buff *skb; - int err; int ret; skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val, @@ -298,17 +287,9 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - err = mgmt_eth_data->err; - mutex_unlock(&mgmt_eth_data->mutex); - if (ret) - return ret; - - if (err) - return err; - - return 0; + return ret; } static int @@ -431,21 +412,15 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data, struct sk_buff *read_skb, u32 *val) { struct sk_buff *skb = skb_copy(read_skb, GFP_KERNEL); - int err; int ret; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - err = mgmt_eth_data->err; - if (ret) return ret; - if (err) - return err; - *val = mgmt_eth_data->data[0]; return 0; @@ -460,7 +435,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, u32 write_val, clear_val = 0, val; struct net_device *mgmt_master; int ret, ret1; - int err; if (regnum >= QCA8K_MDIO_MASTER_MAX_REG) return -EINVAL; @@ -522,19 +496,11 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - err = mgmt_eth_data->err; - if (ret) { kfree_skb(read_skb); goto exit; } - if (err) { - ret = err; - kfree_skb(read_skb); - goto exit; - } - ret = read_poll_timeout(qca8k_phy_eth_busy_wait, ret1, !(val & QCA8K_MDIO_MASTER_BUSY), 0, QCA8K_BUSY_WAIT_TIMEOUT * USEC_PER_MSEC, false, @@ -550,16 +516,9 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, qca8k_mdio_header_fill_seq_num, QCA8K_ETHERNET_TIMEOUT); - err = 
mgmt_eth_data->err; - if (ret) goto exit; - if (err) { - ret = err; - goto exit; - } - ret = mgmt_eth_data->data[0] & QCA8K_MDIO_MASTER_DATA_MASK; } else { kfree_skb(read_skb); @@ -1440,6 +1399,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk const struct qca8k_mib_desc *mib; struct mib_ethhdr *mib_ethhdr; int i, mib_len, offset = 0; + int err = 0; u64 *data; u8 port; @@ -1450,8 +1410,10 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk * parse only the requested one. */ port = FIELD_GET(QCA_HDR_RECV_SOURCE_PORT, ntohs(mib_ethhdr->hdr)); - if (port != mib_eth_data->req_port) + if (port != mib_eth_data->req_port) { + err = -EPROTO; goto exit; + } data = mib_eth_data->data; @@ -1480,7 +1442,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk exit: /* Complete on receiving all the mib packet */ if (refcount_dec_and_test(&mib_eth_data->port_parsed)) - dsa_inband_complete(&mib_eth_data->inband); + dsa_inband_complete(&mib_eth_data->inband, err); } static int diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h index 79f7197a1790..682106206282 100644 --- a/drivers/net/dsa/qca/qca8k.h +++ b/drivers/net/dsa/qca/qca8k.h @@ -348,7 +348,6 @@ enum { struct qca8k_mgmt_eth_data { struct dsa_inband inband; struct mutex mutex; /* Enforce one mdio read/write at time */ - int err; u32 data[4]; }; diff --git a/include/net/dsa.h b/include/net/dsa.h index 2d6b7c7f158b..1a920f89b667 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -1284,10 +1284,11 @@ struct dsa_inband { struct completion completion; u32 seqno; u32 seqno_mask; + int err; }; void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask); -void dsa_inband_complete(struct dsa_inband *inband); +void dsa_inband_complete(struct dsa_inband *inband, int err); int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, void (* insert_seqno)(struct sk_buff *skb, u32 seqno), int 
timeout_ms); diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index 5a8d95f8acec..0de283ac0bfc 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -526,8 +526,9 @@ void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask) } EXPORT_SYMBOL_GPL(dsa_inband_init); -void dsa_inband_complete(struct dsa_inband *inband) +void dsa_inband_complete(struct dsa_inband *inband, int err) { + inband->err = err; complete(&inband->completion); } EXPORT_SYMBOL_GPL(dsa_inband_complete); @@ -553,6 +554,7 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, int ret; reinit_completion(&inband->completion); + inband->err = 0; if (insert_seqno) { inband->seqno++; @@ -566,7 +568,8 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, return ret; if (ret == 0) return -ETIMEDOUT; - return 0; + + return inband->err; } EXPORT_SYMBOL_GPL(dsa_inband_request);
From patchwork Mon Sep 19 22:18:53 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12981058
From: Andrew Lunn
To: mattias.forsblad@gmail.com
Cc: netdev, Florian Fainelli, Vladimir Oltean, Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v0 9/9] net: dsa: qca8k: Move inband mutex into DSA core
Date: Tue, 20 Sep 2022 00:18:53 +0200
Message-Id: <20220919221853.4095491-10-andrew@lunn.ch>
In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch>
References: <20220919110847.744712-3-mattias.forsblad@gmail.com> <20220919221853.4095491-1-andrew@lunn.ch>
X-Patchwork-State: RFC

The mutex serves two purposes: it serialises operations on the switch, so that only one request/response can be in flight at once, and it protects priv->mgmt_master, which itself has two purposes. If the hardware is wrongly wired and the wrong switch port is connected to the CPU, inband cannot be used; in this case it has the value NULL. Additionally, if the master is down, it is set to NULL. Otherwise it points to the netdev used to send frames to the switch. The protection of priv->mgmt_master is not required: it is a single pointer, which is updated atomically. The interface is not expected to disappear, only to go down. Hence mgmt_master will always be valid, or NULL.
Move the check for the master device being NULL into the core. Also, move the mutex for serialisation into the core. The MIB operations don't follow request/response semantics, so its mutex is left untouched. Signed-off-by: Andrew Lunn --- drivers/net/dsa/qca/qca8k-8xxx.c | 68 ++++++-------------------------- drivers/net/dsa/qca/qca8k.h | 1 - include/net/dsa.h | 1 + net/dsa/dsa.c | 7 ++++ 4 files changed, 19 insertions(+), 58 deletions(-) diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c index 234d79a09e78..3e60bbe2570d 100644 --- a/drivers/net/dsa/qca/qca8k-8xxx.c +++ b/drivers/net/dsa/qca/qca8k-8xxx.c @@ -238,15 +238,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) if (!skb) return -ENOMEM; - mutex_lock(&mgmt_eth_data->mutex); - - /* Check mgmt_master if is operational */ - if (!priv->mgmt_master) { - kfree_skb(skb); - mutex_unlock(&mgmt_eth_data->mutex); - return -EINVAL; - } - skb->dev = priv->mgmt_master; ret = dsa_inband_request(&mgmt_eth_data->inband, skb, @@ -258,8 +249,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) if (len > QCA_HDR_MGMT_DATA1_LEN) memcpy(val + 1, &data[1], len - QCA_HDR_MGMT_DATA1_LEN); - mutex_unlock(&mgmt_eth_data->mutex); - return ret; } @@ -267,32 +256,18 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len) { struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data; struct sk_buff *skb; - int ret; skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val, QCA8K_ETHERNET_MDIO_PRIORITY, len); if (!skb) return -ENOMEM; - mutex_lock(&mgmt_eth_data->mutex); - - /* Check mgmt_master if is operational */ - if (!priv->mgmt_master) { - kfree_skb(skb); - mutex_unlock(&mgmt_eth_data->mutex); - return -EINVAL; - } - skb->dev = priv->mgmt_master; - ret = dsa_inband_request(&mgmt_eth_data->inband, skb, - qca8k_mdio_header_fill_seq_num, - NULL, 0, - QCA8K_ETHERNET_TIMEOUT); - - mutex_unlock(&mgmt_eth_data->mutex); - - 
return ret; + return dsa_inband_request(&mgmt_eth_data->inband, skb, + qca8k_mdio_header_fill_seq_num, + NULL, 0, + QCA8K_ETHERNET_TIMEOUT); } static int @@ -438,7 +413,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, struct sk_buff *write_skb, *clear_skb, *read_skb; struct qca8k_mgmt_eth_data *mgmt_eth_data; u32 write_val, clear_val = 0, val; - struct net_device *mgmt_master; u32 resp_data[4]; int ret, ret1; @@ -484,19 +458,9 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, * 3. Get the data if we are reading * 4. Reset the mdio master (even with error) */ - mutex_lock(&mgmt_eth_data->mutex); - - /* Check if mgmt_master is operational */ - mgmt_master = priv->mgmt_master; - if (!mgmt_master) { - mutex_unlock(&mgmt_eth_data->mutex); - ret = -EINVAL; - goto err_mgmt_master; - } - - read_skb->dev = mgmt_master; - clear_skb->dev = mgmt_master; - write_skb->dev = mgmt_master; + read_skb->dev = priv->mgmt_master; + clear_skb->dev = priv->mgmt_master; + write_skb->dev = priv->mgmt_master; ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb, qca8k_mdio_header_fill_seq_num, @@ -533,18 +497,11 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy, } exit: - ret = dsa_inband_request(&mgmt_eth_data->inband, clear_skb, - qca8k_mdio_header_fill_seq_num, - NULL, 0, - QCA8K_ETHERNET_TIMEOUT); - - mutex_unlock(&mgmt_eth_data->mutex); - - return ret; + return dsa_inband_request(&mgmt_eth_data->inband, clear_skb, + qca8k_mdio_header_fill_seq_num, + NULL, 0, + QCA8K_ETHERNET_TIMEOUT); - /* Error handling before lock */ -err_mgmt_master: - kfree_skb(read_skb); err_read_skb: kfree_skb(clear_skb); err_clear_skb: @@ -1526,13 +1483,11 @@ qca8k_master_change(struct dsa_switch *ds, const struct net_device *master, if (dp->index != 0) return; - mutex_lock(&priv->mgmt_eth_data.mutex); mutex_lock(&priv->mib_eth_data.mutex); priv->mgmt_master = operational ? 
(struct net_device *)master : NULL; mutex_unlock(&priv->mib_eth_data.mutex); - mutex_unlock(&priv->mgmt_eth_data.mutex); } static int qca8k_connect_tag_protocol(struct dsa_switch *ds, @@ -1850,7 +1805,6 @@ qca8k_sw_probe(struct mdio_device *mdiodev) if (!priv->ds) return -ENOMEM; - mutex_init(&priv->mgmt_eth_data.mutex); dsa_inband_init(&priv->mgmt_eth_data.inband, U32_MAX); mutex_init(&priv->mib_eth_data.mutex); diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h index 70494096e251..6da36ed6486b 100644 --- a/drivers/net/dsa/qca/qca8k.h +++ b/drivers/net/dsa/qca/qca8k.h @@ -347,7 +347,6 @@ enum { struct qca8k_mgmt_eth_data { struct dsa_inband inband; - struct mutex mutex; /* Enforce one mdio read/write at time */ }; struct qca8k_mib_eth_data { diff --git a/include/net/dsa.h b/include/net/dsa.h index dad9e31d36ce..7a545b781e7d 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -1281,6 +1281,7 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port, * frames and expecting a response in a frame. 
*/ struct dsa_inband { + struct mutex lock; /* Serialise operations */ struct completion completion; u32 seqno; u32 seqno_mask; diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index 4fa0ab4ae58e..82c729d631eb 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -521,6 +521,7 @@ EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db); void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask) { init_completion(&inband->completion); + mutex_init(&inband->lock); mutex_init(&inband->resp_lock); inband->seqno_mask = seqno_mask; inband->seqno = 0; @@ -567,6 +568,11 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, reinit_completion(&inband->completion); inband->err = 0; + if (!skb->dev) + return -EOPNOTSUPP; + + mutex_lock(&inband->lock); + mutex_lock(&inband->resp_lock); inband->resp = resp; inband->resp_len = resp_len; @@ -585,6 +591,7 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb, inband->resp = NULL; inband->resp_len = 0; mutex_unlock(&inband->resp_lock); + mutex_unlock(&inband->lock); if (ret < 0) return ret;