From patchwork Thu Sep 22 17:58:12 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985627
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 01/10] net: dsa: qca8k: Fix inconsistent use of jiffies vs milliseconds
Date: Thu, 22 Sep 2022 19:58:12 +0200
Message-Id: <20220922175821.4184622-2-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

wait_for_completion_timeout() expects a timeout in jiffies. Within the
driver, some call sites converted QCA8K_ETHERNET_TIMEOUT to jiffies,
others did not. Make the code consistent by changing the #define to
include a call to msecs_to_jiffies, and remove all other calls to
msecs_to_jiffies.

Signed-off-by: Andrew Lunn
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 4 ++--
 drivers/net/dsa/qca/qca8k.h      | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index c181346388a4..1c9a8764d1d9 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -258,7 +258,7 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	dev_queue_xmit(skb);
 
 	ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-					  msecs_to_jiffies(QCA8K_ETHERNET_TIMEOUT));
+					  QCA8K_ETHERNET_TIMEOUT);
 
 	*val = mgmt_eth_data->data[0];
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
@@ -310,7 +310,7 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	dev_queue_xmit(skb);
 
 	ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-					  msecs_to_jiffies(QCA8K_ETHERNET_TIMEOUT));
+					  QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index 0b7a5cb12321..900382aa8c96 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -15,7 +15,7 @@
 
 #define QCA8K_ETHERNET_MDIO_PRIORITY	7
 #define QCA8K_ETHERNET_PHY_PRIORITY	6
-#define QCA8K_ETHERNET_TIMEOUT		5
+#define QCA8K_ETHERNET_TIMEOUT		msecs_to_jiffies(5)
 
 #define QCA8K_NUM_PORTS			7
 #define QCA8K_NUM_CPU_PORTS		2
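For readers skimming the series, the bug class here is passing milliseconds
where jiffies are expected. A minimal sketch of the pattern the patch settles
on; the names are illustrative, not from the driver:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/* Fold the ms->jiffies conversion into the #define once, so no call
 * site can accidentally hand raw milliseconds to the scheduler.
 */
#define EXAMPLE_TIMEOUT		msecs_to_jiffies(5)

static int example_wait(struct completion *done)
{
	/* wait_for_completion_timeout() takes jiffies, not milliseconds */
	if (!wait_for_completion_timeout(done, EXAMPLE_TIMEOUT))
		return -ETIMEDOUT;

	return 0;
}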
From patchwork Thu Sep 22 17:58:13 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985619
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 02/10] net: dsa: qca8k: Move completion into DSA core
Date: Thu, 22 Sep 2022 19:58:13 +0200
Message-Id: <20220922175821.4184622-3-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

When performing operations on a remote switch using Ethernet frames, a
completion is used between the sender of the request and the code which
receives the reply. Move this completion into the DSA core, simplifying
the driver. The initialisation and reinitialisation of the completion is
now performed in the core, as is the conversion of milliseconds to
jiffies.

Signed-off-by: Andrew Lunn
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 49 ++++++++++++--------------------
 drivers/net/dsa/qca/qca8k.h      |  6 ++--
 include/net/dsa.h                | 12 ++++++++
 net/dsa/dsa.c                    | 22 ++++++++++++++
 4 files changed, 55 insertions(+), 34 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 1c9a8764d1d9..f4e92156bd32 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -160,7 +160,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 			       QCA_HDR_MGMT_DATA2_LEN);
 	}
 
-	complete(&mgmt_eth_data->rw_done);
+	dsa_inband_complete(&mgmt_eth_data->inband);
 }
 
 static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *val,
@@ -248,8 +248,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	skb->dev = priv->mgmt_master;
 
-	reinit_completion(&mgmt_eth_data->rw_done);
-
 	/* Increment seq_num and set it in the mdio pkt */
 	mgmt_eth_data->seq++;
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
@@ -257,8 +255,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	dev_queue_xmit(skb);
 
-	ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-					  QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
+					     QCA8K_ETHERNET_TIMEOUT);
 
 	*val = mgmt_eth_data->data[0];
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
@@ -300,8 +298,6 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	skb->dev = priv->mgmt_master;
 
-	reinit_completion(&mgmt_eth_data->rw_done);
-
 	/* Increment seq_num and set it in the mdio pkt */
 	mgmt_eth_data->seq++;
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
@@ -309,8 +305,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	dev_queue_xmit(skb);
 
-	ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-					  QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
+					     QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -448,8 +444,6 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 	bool ack;
 	int ret;
 
-	reinit_completion(&mgmt_eth_data->rw_done);
-
 	/* Increment seq_num and set it in the copy pkt */
 	mgmt_eth_data->seq++;
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
@@ -457,8 +451,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 
 	dev_queue_xmit(skb);
 
-	ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-					  QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
+					     QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -540,8 +534,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	clear_skb->dev = mgmt_master;
 	write_skb->dev = mgmt_master;
 
-	reinit_completion(&mgmt_eth_data->rw_done);
-
 	/* Increment seq_num and set it in the write pkt */
 	mgmt_eth_data->seq++;
 	qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq);
@@ -549,8 +541,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 
 	dev_queue_xmit(write_skb);
 
-	ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-					  QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
+					     QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -577,8 +569,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	}
 
 	if (read) {
-		reinit_completion(&mgmt_eth_data->rw_done);
-
 		/* Increment seq_num and set it in the read pkt */
 		mgmt_eth_data->seq++;
 		qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq);
@@ -586,8 +576,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 
 		dev_queue_xmit(read_skb);
 
-		ret = wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-						  QCA8K_ETHERNET_TIMEOUT);
+		ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
+						     QCA8K_ETHERNET_TIMEOUT);
 
 		ack = mgmt_eth_data->ack;
 
@@ -606,8 +596,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 		kfree_skb(read_skb);
 	}
 exit:
-	reinit_completion(&mgmt_eth_data->rw_done);
-
 	/* Increment seq_num and set it in the clear pkt */
 	mgmt_eth_data->seq++;
 	qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq);
@@ -615,8 +603,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 
 	dev_queue_xmit(clear_skb);
 
-	wait_for_completion_timeout(&mgmt_eth_data->rw_done,
-				    QCA8K_ETHERNET_TIMEOUT);
+	dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
+				       QCA8K_ETHERNET_TIMEOUT);
 
 	mutex_unlock(&mgmt_eth_data->mutex);
 
@@ -1528,7 +1516,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
 exit:
 	/* Complete on receiving all the mib packet */
 	if (refcount_dec_and_test(&mib_eth_data->port_parsed))
-		complete(&mib_eth_data->rw_done);
+		dsa_inband_complete(&mib_eth_data->inband);
 }
 
 static int
@@ -1543,8 +1531,6 @@ qca8k_get_ethtool_stats_eth(struct dsa_switch *ds, int port, u64 *data)
 
 	mutex_lock(&mib_eth_data->mutex);
 
-	reinit_completion(&mib_eth_data->rw_done);
-
 	mib_eth_data->req_port = dp->index;
 	mib_eth_data->data = data;
 	refcount_set(&mib_eth_data->port_parsed, QCA8K_NUM_PORTS);
@@ -1562,7 +1548,8 @@ qca8k_get_ethtool_stats_eth(struct dsa_switch *ds, int port, u64 *data)
 	if (ret)
 		goto exit;
 
-	ret = wait_for_completion_timeout(&mib_eth_data->rw_done, QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_wait_for_completion(&mib_eth_data->inband,
+					     QCA8K_ETHERNET_TIMEOUT);
 
 exit:
 	mutex_unlock(&mib_eth_data->mutex);
@@ -1929,10 +1916,10 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
 		return -ENOMEM;
 
 	mutex_init(&priv->mgmt_eth_data.mutex);
-	init_completion(&priv->mgmt_eth_data.rw_done);
+	dsa_inband_init(&priv->mgmt_eth_data.inband);
 
 	mutex_init(&priv->mib_eth_data.mutex);
-	init_completion(&priv->mib_eth_data.rw_done);
+	dsa_inband_init(&priv->mib_eth_data.inband);
 
 	priv->ds->dev = &mdiodev->dev;
 	priv->ds->num_ports = QCA8K_NUM_PORTS;
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index 900382aa8c96..84d6b02d75fb 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -15,7 +15,7 @@
 
 #define QCA8K_ETHERNET_MDIO_PRIORITY	7
 #define QCA8K_ETHERNET_PHY_PRIORITY	6
-#define QCA8K_ETHERNET_TIMEOUT		msecs_to_jiffies(5)
+#define QCA8K_ETHERNET_TIMEOUT		5
 
 #define QCA8K_NUM_PORTS			7
 #define QCA8K_NUM_CPU_PORTS		2
@@ -346,7 +346,7 @@ enum {
 };
 
 struct qca8k_mgmt_eth_data {
-	struct completion rw_done;
+	struct dsa_inband inband;
 	struct mutex mutex; /* Enforce one mdio read/write at time */
 	bool ack;
 	u32 seq;
@@ -354,7 +354,7 @@ struct qca8k_mgmt_eth_data {
 };
 
 struct qca8k_mib_eth_data {
-	struct completion rw_done;
+	struct dsa_inband inband;
 	struct mutex mutex; /* Process one command at time */
 	refcount_t port_parsed; /* Counter to track parsed port */
 	u8 req_port;
diff --git a/include/net/dsa.h b/include/net/dsa.h
index d777eac5694f..59dd5855dcbd 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -7,6 +7,7 @@
 #ifndef __LINUX_NET_DSA_H
 #define __LINUX_NET_DSA_H
 
+#include <linux/completion.h>
 #include
 #include
 #include
@@ -1303,6 +1304,17 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
 				 const struct switchdev_obj_port_mdb *mdb,
 				 struct dsa_db db);
 
+/* Perform operations on a switch by sending it request in Ethernet
+ * frames and expecting a response in a frame.
+ */
+struct dsa_inband {
+	struct completion completion;
+};
+
+void dsa_inband_init(struct dsa_inband *inband);
+void dsa_inband_complete(struct dsa_inband *inband);
+int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms);
+
 /* Keep inline for faster access in hot path */
 static inline bool netdev_uses_dsa(const struct net_device *dev)
 {
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 64b14f655b23..fc031e9693ee 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -518,6 +518,28 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
 }
 EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db);
 
+void dsa_inband_init(struct dsa_inband *inband)
+{
+	init_completion(&inband->completion);
+}
+EXPORT_SYMBOL_GPL(dsa_inband_init);
+
+void dsa_inband_complete(struct dsa_inband *inband)
+{
+	complete(&inband->completion);
+}
+EXPORT_SYMBOL_GPL(dsa_inband_complete);
+
+int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms)
+{
+	unsigned long jiffies = msecs_to_jiffies(timeout_ms);
+
+	reinit_completion(&inband->completion);
+
+	return wait_for_completion_timeout(&inband->completion, jiffies);
+}
+EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion);
+
 static int __init dsa_init_module(void)
 {
 	int rc;
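To see how the two halves of the new core API pair up, here is a rough
sketch of a driver using it at this point in the series. The dsa_inband_*
calls are the ones added above; the surrounding driver types and functions
are invented for illustration:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/dsa.h>

struct example_priv {
	struct dsa_inband inband;
	u32 data;		/* filled in by the reply handler */
};

/* Rx path: the handler decodes the reply and wakes the waiter */
static void example_reply_handler(struct example_priv *priv, u32 val)
{
	priv->data = val;
	dsa_inband_complete(&priv->inband);
}

/* Tx path: send the request frame, then sleep until the reply arrives.
 * At this stage the helper still returns wait_for_completion_timeout()
 * semantics: 0 means timeout.
 */
static int example_read(struct example_priv *priv, struct sk_buff *skb,
			u32 *val)
{
	dev_queue_xmit(skb);

	if (!dsa_inband_wait_for_completion(&priv->inband, 5 /* ms */))
		return -ETIMEDOUT;

	*val = priv->data;
	return 0;
}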
From patchwork Thu Sep 22 17:58:14 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985624
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 03/10] net: dsa: qca8k: Move queuing for request frame into the core
Date: Thu, 22 Sep 2022 19:58:14 +0200
Message-Id: <20220922175821.4184622-4-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

Combine the queuing of the request and waiting for the completion into
one core helper. Add the function dsa_inband_request() to perform this.

Access to statistics is not a strict request/reply, so
dsa_inband_wait_for_completion() needs to be kept.

It is also not possible to build dsa_inband_request() on top of
dsa_inband_wait_for_completion(), since we need to avoid the race where
the request is sent and the reply arrives before the completion has
been reinitialised, because the scheduler decided to do other things.

Signed-off-by: Andrew Lunn
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 32 ++++++++++----------------------
 include/net/dsa.h                |  2 ++
 net/dsa/dsa.c                    | 16 ++++++++++++++++
 3 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index f4e92156bd32..9c44a09590a6 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -253,10 +253,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	*val = mgmt_eth_data->data[0];
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
@@ -303,10 +301,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -449,10 +445,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -539,10 +533,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(write_skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -574,10 +566,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 		qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq);
 		mgmt_eth_data->ack = false;
 
-		dev_queue_xmit(read_skb);
-
-		ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-						     QCA8K_ETHERNET_TIMEOUT);
+		ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb,
+					 QCA8K_ETHERNET_TIMEOUT);
 
 		ack = mgmt_eth_data->ack;
 
@@ -601,8 +591,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(clear_skb);
-
 	dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
 				       QCA8K_ETHERNET_TIMEOUT);
 
diff --git a/include/net/dsa.h b/include/net/dsa.h
index 59dd5855dcbd..a5bcd9f021d4 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -1313,6 +1313,8 @@ struct dsa_inband {
 
 void dsa_inband_init(struct dsa_inband *inband);
 void dsa_inband_complete(struct dsa_inband *inband);
+int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
+		       int timeout_ms);
 int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms);
 
 /* Keep inline for faster access in hot path */
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index fc031e9693ee..4e89b483d3e5 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -540,6 +540,22 @@ int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms)
 }
 EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion);
 
+/* Cannot use dsa_inband_wait_completion since the completion needs to be
+ * reinitialized before the skb is queue to avoid races.
+ */
+int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
+		       int timeout_ms)
+{
+	unsigned long jiffies = msecs_to_jiffies(timeout_ms);
+
+	reinit_completion(&inband->completion);
+
+	dev_queue_xmit(skb);
+
+	return wait_for_completion_timeout(&inband->completion, jiffies);
+}
+EXPORT_SYMBOL_GPL(dsa_inband_request);
+
 static int __init dsa_init_module(void)
 {
 	int rc;
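The race the commit message describes is easiest to see at a call site.
A sketch of what every converted transaction now boils down to (driver
context invented; return-value conventions are tidied up in the next
patch):

#include <net/dsa.h>

/* Illustrative only: dsa_inband_request() reinitialises the completion
 * *before* queuing the skb, so a fast reply cannot race with a stale
 * completion, and no caller can get the ordering wrong.
 */
static int example_transaction(struct dsa_inband *inband,
			       struct sk_buff *skb)
{
	return dsa_inband_request(inband, skb, 5 /* ms */);
}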
From patchwork Thu Sep 22 17:58:15 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985620
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 04/10] net: dsa: qca8k: dsa_inband_request: More normal return values
Date: Thu, 22 Sep 2022 19:58:15 +0200
Message-Id: <20220922175821.4184622-5-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

wait_for_completion_timeout() has unusual return values. If it times
out, it returns 0, and on success it returns the number of jiffies
remaining of the timeout. For the use case here, the remaining time is
not needed. All that is really interesting is whether the wait
succeeded or timed out. Massage the return value to fit this, and
modify the callers to the more usual pattern where ret < 0 indicates an
error.

Sending the clear message is expected to sometimes fail, so don't check
the return value there, and add a comment about this.

Signed-off-by: Andrew Lunn
---
v2:
 - Remove check on the clear message; wait_for_completion_timeout()
   does not return negative values
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 24 +++++++++++-------------
 net/dsa/dsa.c                    |  6 +++++-
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 9c44a09590a6..2f92cabe21af 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -264,8 +264,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	mutex_unlock(&mgmt_eth_data->mutex);
 
-	if (ret <= 0)
-		return -ETIMEDOUT;
+	if (ret)
+		return ret;
 
 	if (!ack)
 		return -EINVAL;
@@ -308,8 +308,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	mutex_unlock(&mgmt_eth_data->mutex);
 
-	if (ret <= 0)
-		return -ETIMEDOUT;
+	if (ret)
+		return ret;
 
 	if (!ack)
 		return -EINVAL;
@@ -450,8 +450,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 
 	ack = mgmt_eth_data->ack;
 
-	if (ret <= 0)
-		return -ETIMEDOUT;
+	if (ret)
+		return ret;
 
 	if (!ack)
 		return -EINVAL;
@@ -538,8 +538,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 
 	ack = mgmt_eth_data->ack;
 
-	if (ret <= 0) {
-		ret = -ETIMEDOUT;
+	if (ret) {
 		kfree_skb(read_skb);
 		goto exit;
 	}
@@ -571,10 +570,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 
 		ack = mgmt_eth_data->ack;
 
-		if (ret <= 0) {
-			ret = -ETIMEDOUT;
+		if (ret)
 			goto exit;
-		}
 
 		if (!ack) {
 			ret = -EINVAL;
@@ -591,8 +588,9 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-				       QCA8K_ETHERNET_TIMEOUT);
+	/* This is expected to fail sometimes, so don't check return value. */
+	dsa_inband_request(&mgmt_eth_data->inband, clear_skb,
+			   QCA8K_ETHERNET_TIMEOUT);
 
 	mutex_unlock(&mgmt_eth_data->mutex);
 
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 4e89b483d3e5..8c40bc4c5944 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -547,12 +547,16 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 		       int timeout_ms)
 {
 	unsigned long jiffies = msecs_to_jiffies(timeout_ms);
+	int ret;
 
 	reinit_completion(&inband->completion);
 
 	dev_queue_xmit(skb);
 
-	return wait_for_completion_timeout(&inband->completion, jiffies);
+	ret = wait_for_completion_timeout(&inband->completion, jiffies);
+	if (ret == 0)
+		return -ETIMEDOUT;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(dsa_inband_request);
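With the return value massaged in the core, every call site collapses to
the usual kernel idiom. A sketch (the surrounding driver function is
invented):

#include <net/dsa.h>

/* Callers no longer need to know that wait_for_completion_timeout()
 * returns remaining jiffies: 0 now means success, < 0 means error.
 */
static int example_write(struct dsa_inband *inband, struct sk_buff *skb)
{
	int ret;

	ret = dsa_inband_request(inband, skb, 5 /* ms */);
	if (ret)
		return ret;	/* e.g. -ETIMEDOUT */

	return 0;
}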
From patchwork Thu Sep 22 17:58:16 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985622
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 05/10] net: dsa: qca8k: Drop replies with wrong sequence numbers
Date: Thu, 22 Sep 2022 19:58:16 +0200
Message-Id: <20220922175821.4184622-6-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

A response with the wrong sequence number is likely to be a late
arriving response, which the driver has already given up waiting for.
Drop it rather than signalling the completion. If the completion was
signalled, this late response could take the place of the genuine
reply which is soon to follow.

Signed-off-by: Andrew Lunn
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 2f92cabe21af..fdf27306d169 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -145,9 +145,9 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 	cmd = FIELD_GET(QCA_HDR_MGMT_CMD, mgmt_ethhdr->command);
 	len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command);
 
-	/* Make sure the seq match the requested packet */
+	/* Make sure the seq match the requested packet. If not, drop. */
 	if (mgmt_ethhdr->seq == mgmt_eth_data->seq)
-		mgmt_eth_data->ack = true;
+		return;
 
 	if (cmd == MDIO_READ) {
 		mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data;
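The intent stated by the commit message, in handler form, is roughly the
following; the struct and field names here are abbreviated placeholders,
not the driver's:

struct example_data {
	u32 seq;	/* seqno of the outstanding request */
};

/* Sketch of the intended rx-side check: a reply whose sequence number
 * does not match the outstanding request is a late arrival; drop it so
 * it cannot complete the *next* transaction by mistake.
 */
static void example_handler(struct example_data *data, u32 reply_seq)
{
	if (reply_seq != data->seq)
		return;		/* late reply, drop it */

	/* ... decode the reply and signal the completion ... */
}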
From patchwork Thu Sep 22 17:58:17 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985621
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 06/10] net: dsa: qca8k: Move request sequence number handling into core
Date: Thu, 22 Sep 2022 19:58:17 +0200
Message-Id: <20220922175821.4184622-7-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

Each request/reply frame is likely to have a sequence number so that
the request and the reply can be matched together. Move this sequence
number into the inband structure. The driver must provide a helper to
insert the sequence number into the skb, and the core will perform the
increment.

To allow different devices to have different size sequence numbers, a
mask is provided. This can be used, for example, to reduce the u32
sequence number down to a u8.

Signed-off-by: Andrew Lunn
---
v2:
 - Fix wrong indentation of parameters
 - Move seqno increment before reinitializing completion to close race
 - Add a READ_ONCE() to stop compiler mischief
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 33 +++++++++----------------------
 drivers/net/dsa/qca/qca8k.h      |  1 -
 include/net/dsa.h                |  6 +++++-
 net/dsa/dsa.c                    | 16 +++++++++++++++-
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index fdf27306d169..35dbb16f9d62 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -146,7 +146,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 	len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command);
 
 	/* Make sure the seq match the requested packet. If not, drop. */
-	if (mgmt_ethhdr->seq == mgmt_eth_data->seq)
+	if (mgmt_ethhdr->seq == dsa_inband_seqno(&mgmt_eth_data->inband))
 		return;
 
 	if (cmd == MDIO_READ) {
@@ -247,13 +247,10 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	}
 
 	skb->dev = priv->mgmt_master;
-
-	/* Increment seq_num and set it in the mdio pkt */
-	mgmt_eth_data->seq++;
-	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
 	*val = mgmt_eth_data->data[0];
@@ -295,13 +292,10 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	}
 
 	skb->dev = priv->mgmt_master;
-
-	/* Increment seq_num and set it in the mdio pkt */
-	mgmt_eth_data->seq++;
-	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
@@ -440,12 +434,10 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 	bool ack;
 	int ret;
 
-	/* Increment seq_num and set it in the copy pkt */
-	mgmt_eth_data->seq++;
-	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
@@ -527,13 +519,10 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	read_skb->dev = mgmt_master;
 	clear_skb->dev = mgmt_master;
 	write_skb->dev = mgmt_master;
-
-	/* Increment seq_num and set it in the write pkt */
-	mgmt_eth_data->seq++;
-	qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb,
+				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
@@ -560,12 +549,10 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	}
 
 	if (read) {
-		/* Increment seq_num and set it in the read pkt */
-		mgmt_eth_data->seq++;
-		qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq);
 		mgmt_eth_data->ack = false;
 
 		ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb,
+					 qca8k_mdio_header_fill_seq_num,
 					 QCA8K_ETHERNET_TIMEOUT);
 
 		ack = mgmt_eth_data->ack;
@@ -583,13 +570,11 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 		kfree_skb(read_skb);
 	}
 exit:
-	/* Increment seq_num and set it in the clear pkt */
-	mgmt_eth_data->seq++;
-	qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
 	/* This is expected to fail sometimes, so don't check return value. */
 	dsa_inband_request(&mgmt_eth_data->inband, clear_skb,
+			   qca8k_mdio_header_fill_seq_num,
 			   QCA8K_ETHERNET_TIMEOUT);
 
 	mutex_unlock(&mgmt_eth_data->mutex);
@@ -1902,10 +1887,10 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
 		return -ENOMEM;
 
 	mutex_init(&priv->mgmt_eth_data.mutex);
-	dsa_inband_init(&priv->mgmt_eth_data.inband);
+	dsa_inband_init(&priv->mgmt_eth_data.inband, U32_MAX);
 
 	mutex_init(&priv->mib_eth_data.mutex);
-	dsa_inband_init(&priv->mib_eth_data.inband);
+	dsa_inband_init(&priv->mib_eth_data.inband, U32_MAX);
 
 	priv->ds->dev = &mdiodev->dev;
 	priv->ds->num_ports = QCA8K_NUM_PORTS;
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index 84d6b02d75fb..ae5815365a86 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -349,7 +349,6 @@ struct qca8k_mgmt_eth_data {
 	struct dsa_inband inband;
 	struct mutex mutex; /* Enforce one mdio read/write at time */
 	bool ack;
-	u32 seq;
 	u32 data[4];
 };
 
diff --git a/include/net/dsa.h b/include/net/dsa.h
index a5bcd9f021d4..e8cbd67279ea 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -1309,13 +1309,17 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
  */
 struct dsa_inband {
 	struct completion completion;
+	u32 seqno;
+	u32 seqno_mask;
 };
 
-void dsa_inband_init(struct dsa_inband *inband);
+void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask);
 void dsa_inband_complete(struct dsa_inband *inband);
 int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
+		       void (*insert_seqno)(struct sk_buff *skb, u32 seqno),
 		       int timeout_ms);
 int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms);
+u32 dsa_inband_seqno(struct dsa_inband *inband);
 
 /* Keep inline for faster access in hot path */
 static inline bool netdev_uses_dsa(const struct net_device *dev)
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 8c40bc4c5944..cddf62916932 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -518,9 +518,11 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
 }
 EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db);
 
-void dsa_inband_init(struct dsa_inband *inband)
+void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask)
 {
 	init_completion(&inband->completion);
+	inband->seqno_mask = seqno_mask;
+	inband->seqno = 0;
 }
 EXPORT_SYMBOL_GPL(dsa_inband_init);
 
@@ -544,11 +546,17 @@ EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion);
  * reinitialized before the skb is queue to avoid races.
  */
 int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
+		       void (*insert_seqno)(struct sk_buff *skb, u32 seqno),
 		       int timeout_ms)
 {
 	unsigned long jiffies = msecs_to_jiffies(timeout_ms);
 	int ret;
 
+	if (insert_seqno) {
+		inband->seqno++;
+		insert_seqno(skb, inband->seqno & inband->seqno_mask);
+	}
+
 	reinit_completion(&inband->completion);
 
 	dev_queue_xmit(skb);
@@ -560,6 +568,12 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 }
 EXPORT_SYMBOL_GPL(dsa_inband_request);
 
+u32 dsa_inband_seqno(struct dsa_inband *inband)
+{
+	return READ_ONCE(inband->seqno) & inband->seqno_mask;
+}
+EXPORT_SYMBOL_GPL(dsa_inband_seqno);
+
 static int __init dsa_init_module(void)
 {
 	int rc;
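How a driver plugs into the seqno handling: it supplies the insert helper
and a mask at init time, e.g. dsa_inband_init(&priv->inband, U8_MAX) for
a device with only an 8-bit sequence field. A sketch; the frame header
struct and function names here are invented, not from qca8k:

#include <linux/skbuff.h>
#include <net/dsa.h>

struct example_hdr {
	u32 seq;
};

/* Driver-supplied callback: stamp the already-masked seqno into the frame */
static void example_fill_seq_num(struct sk_buff *skb, u32 seqno)
{
	struct example_hdr *hdr = (struct example_hdr *)skb_mac_header(skb);

	hdr->seq = seqno;
}

static int example_transaction(struct dsa_inband *inband,
			       struct sk_buff *skb)
{
	/* The core increments the seqno, applies the mask and calls the
	 * helper before queuing the skb; the rx path compares replies
	 * against dsa_inband_seqno().
	 */
	return dsa_inband_request(inband, skb, example_fill_seq_num, 5);
}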
From patchwork Thu Sep 22 17:58:18 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985623
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 07/10] net: dsa: qca8k: Refactor sequence number mismatch to use error code
Date: Thu, 22 Sep 2022 19:58:18 +0200
Message-Id: <20220922175821.4184622-8-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

Replace the boolean indicating that the sequence numbers match with an
error code. Now that responses with the wrong sequence number are
dropped, this is actually unused in this driver. However, other devices
can perform additional validation of a response with the correct
sequence number and potentially return -EPROTO to indicate some other
sort of error.

The value is only safe to use if the completion happens. Ensure the
return value from the completion is always considered, and if it
fails, a timeout error is returned.

This is a preparation step for moving the error tracking into the DSA
core. This intermediate step is a bit ugly, but that all gets cleaned
up in the next patch.

Signed-off-by: Andrew Lunn
---
v2:
 - ret -> err
 - Extended commit message warning the code is ugly
 - Point out it is not actually used by this driver
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 46 +++++++++++++-------------------
 drivers/net/dsa/qca/qca8k.h      |  2 +-
 2 files changed, 20 insertions(+), 28 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 35dbb16f9d62..1127fa138960 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -229,7 +229,7 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 {
 	struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data;
 	struct sk_buff *skb;
-	bool ack;
+	int err;
 	int ret;
 
 	skb = qca8k_alloc_mdio_header(MDIO_READ, reg, NULL,
@@ -247,7 +247,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	}
 
 	skb->dev = priv->mgmt_master;
-	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
 				 qca8k_mdio_header_fill_seq_num,
@@ -257,15 +256,15 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
 		memcpy(val + 1, mgmt_eth_data->data + 1, len - QCA_HDR_MGMT_DATA1_LEN);
 
-	ack = mgmt_eth_data->ack;
+	err = mgmt_eth_data->err;
 
 	mutex_unlock(&mgmt_eth_data->mutex);
 
 	if (ret)
 		return ret;
 
-	if (!ack)
-		return -EINVAL;
+	if (err)
+		return err;
 
 	return 0;
 }
@@ -274,7 +273,7 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 {
 	struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data;
 	struct sk_buff *skb;
-	bool ack;
+	int err;
 	int ret;
 
 	skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val,
@@ -292,21 +291,20 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	}
 
 	skb->dev = priv->mgmt_master;
-	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
 				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
-	ack = mgmt_eth_data->ack;
+	err = mgmt_eth_data->err;
 
 	mutex_unlock(&mgmt_eth_data->mutex);
 
 	if (ret)
 		return ret;
 
-	if (!ack)
-		return -EINVAL;
+	if (err)
+		return err;
 
 	return 0;
 }
@@ -431,22 +429,20 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 			struct sk_buff *read_skb, u32 *val)
 {
 	struct sk_buff *skb = skb_copy(read_skb, GFP_KERNEL);
-	bool ack;
+	int err;
 	int ret;
 
-	mgmt_eth_data->ack = false;
-
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
 				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
-	ack = mgmt_eth_data->ack;
+	err = mgmt_eth_data->err;
 
 	if (ret)
 		return ret;
 
-	if (!ack)
-		return -EINVAL;
+	if (err)
+		return err;
 
 	*val = mgmt_eth_data->data[0];
 
@@ -462,7 +458,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	u32 write_val, clear_val = 0, val;
 	struct net_device *mgmt_master;
 	int ret, ret1;
-	bool ack;
+	int err;
 
 	if (regnum >= QCA8K_MDIO_MASTER_MAX_REG)
 		return -EINVAL;
@@ -519,21 +515,20 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	read_skb->dev = mgmt_master;
 	clear_skb->dev = mgmt_master;
 	write_skb->dev = mgmt_master;
-	mgmt_eth_data->ack = false;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb,
 				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
-	ack = mgmt_eth_data->ack;
+	err = mgmt_eth_data->err;
 
 	if (ret) {
 		kfree_skb(read_skb);
 		goto exit;
 	}
 
-	if (!ack) {
-		ret = -EINVAL;
+	if (err) {
+		ret = err;
 		kfree_skb(read_skb);
 		goto exit;
 	}
@@ -549,19 +544,17 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	}
 
 	if (read) {
-		mgmt_eth_data->ack = false;
-
 		ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb,
 					 qca8k_mdio_header_fill_seq_num,
 					 QCA8K_ETHERNET_TIMEOUT);
 
-		ack = mgmt_eth_data->ack;
+		err = mgmt_eth_data->err;
 
 		if (ret)
 			goto exit;
 
-		if (!ack) {
-			ret = -EINVAL;
+		if (err) {
+			ret = err;
 			goto exit;
 		}
 
@@ -570,7 +563,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 		kfree_skb(read_skb);
 	}
 exit:
-	mgmt_eth_data->ack = false;
 
 	/* This is expected to fail sometimes, so don't check return value. */
 	dsa_inband_request(&mgmt_eth_data->inband, clear_skb,
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index ae5815365a86..7b928c160c0e 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -348,7 +348,7 @@ enum {
 struct qca8k_mgmt_eth_data {
 	struct dsa_inband inband;
 	struct mutex mutex; /* Enforce one mdio read/write at time */
-	bool ack;
+	int err;
 	u32 data[4];
 };
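The shape of a transaction after this patch, with only the err plumbing
shown. This is a sketch built around the real qca8k names from the diff;
the two wrapper functions themselves are invented:

#include <net/dsa.h>

/* rx side: record a decode result instead of a bare 'ack' boolean */
static void example_handler(struct qca8k_mgmt_eth_data *data)
{
	data->err = 0;		/* or e.g. -EPROTO on a malformed reply */
	dsa_inband_complete(&data->inband);
}

/* tx side: only trust 'err' when the completion actually fired */
static int example_read(struct qca8k_mgmt_eth_data *data,
			struct sk_buff *skb)
{
	int ret, err;

	ret = dsa_inband_request(&data->inband, skb,
				 qca8k_mdio_header_fill_seq_num,
				 QCA8K_ETHERNET_TIMEOUT);
	err = data->err;

	if (ret)		/* timeout: err is not valid */
		return ret;

	return err;
}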
From patchwork Thu Sep 22 17:58:19 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985628
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 08/10] net: dsa: qca8k: Pass error code from reply decoder to requester
Date: Thu, 22 Sep 2022 19:58:19 +0200
Message-Id: <20220922175821.4184622-9-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>

The code which decodes the frame and signals the completion can detect
errors within the reply, such as fields with unexpected values. Pass an
error code between the completer and the function waiting on the
completion. This simplifies the error handling, since all errors are
combined in one place.

Signed-off-by: Andrew Lunn
---
v2:
 - Remove EPROTO; if the sequence numbers don't match, drop the reply
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 54 ++++++--------------------------
 drivers/net/dsa/qca/qca8k.h      |  1 -
 include/net/dsa.h                |  3 +-
 net/dsa/dsa.c                    |  8 +++--
 4 files changed, 17 insertions(+), 49 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 1127fa138960..096b43f89869 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -138,6 +138,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 	struct qca8k_priv *priv = ds->priv;
 	struct qca_mgmt_ethhdr *mgmt_ethhdr;
 	u8 len, cmd;
+	int err = 0;
 
 	mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb_mac_header(skb);
 	mgmt_eth_data = &priv->mgmt_eth_data;
@@ -160,7 +161,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 			       QCA_HDR_MGMT_DATA2_LEN);
 	}
 
-	dsa_inband_complete(&mgmt_eth_data->inband);
+	dsa_inband_complete(&mgmt_eth_data->inband, err);
 }
 
 static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *val,
@@ -229,7 +230,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 {
 	struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data;
 	struct sk_buff *skb;
-	int err;
 	int ret;
 
 	skb = qca8k_alloc_mdio_header(MDIO_READ, reg, NULL,
@@ -256,24 +256,15 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
 		memcpy(val + 1, mgmt_eth_data->data + 1, len - QCA_HDR_MGMT_DATA1_LEN);
 
-	err = mgmt_eth_data->err;
-
 	mutex_unlock(&mgmt_eth_data->mutex);
 
-	if (ret)
-		return ret;
-
-	if (err)
-		return err;
-
-	return 0;
+	return ret;
 }
 
 static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 {
 	struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data;
 	struct sk_buff *skb;
-	int err;
 	int ret;
 
 	skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val,
@@ -296,17 +287,9 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
-	err = mgmt_eth_data->err;
-
 	mutex_unlock(&mgmt_eth_data->mutex);
 
-	if (ret)
-		return ret;
-
-	if (err)
-		return err;
-
-	return 0;
+	return ret;
 }
 
 static int
@@ -429,21 +412,15 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 			struct sk_buff *read_skb, u32 *val)
 {
 	struct sk_buff *skb = skb_copy(read_skb, GFP_KERNEL);
-	int err;
 	int ret;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
 				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
-	err = mgmt_eth_data->err;
-
 	if (ret)
 		return ret;
 
-	if (err)
-		return err;
-
 	*val = mgmt_eth_data->data[0];
 
 	return 0;
@@ -458,7 +435,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	u32 write_val, clear_val = 0, val;
 	struct net_device *mgmt_master;
 	int ret, ret1;
-	int err;
 
 	if (regnum >= QCA8K_MDIO_MASTER_MAX_REG)
 		return -EINVAL;
@@ -520,19 +496,11 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 				 qca8k_mdio_header_fill_seq_num,
 				 QCA8K_ETHERNET_TIMEOUT);
 
-	err = mgmt_eth_data->err;
-
 	if (ret) {
 		kfree_skb(read_skb);
 		goto exit;
 	}
 
-	if (err) {
-		ret = err;
-		kfree_skb(read_skb);
-		goto exit;
-	}
-
 	ret = read_poll_timeout(qca8k_phy_eth_busy_wait, ret1,
 				!(val & QCA8K_MDIO_MASTER_BUSY), 0,
 				QCA8K_BUSY_WAIT_TIMEOUT * USEC_PER_MSEC, false,
@@ -548,16 +516,9 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 					 qca8k_mdio_header_fill_seq_num,
 					 QCA8K_ETHERNET_TIMEOUT);
 
-		err = mgmt_eth_data->err;
-
 		if (ret)
 			goto exit;
 
-		if (err) {
-			ret = err;
-			goto exit;
-		}
-
 		ret = mgmt_eth_data->data[0] & QCA8K_MDIO_MASTER_DATA_MASK;
 	} else {
 		kfree_skb(read_skb);
@@ -1439,6 +1400,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
 	const struct qca8k_mib_desc *mib;
 	struct mib_ethhdr *mib_ethhdr;
 	int i, mib_len, offset = 0;
+	int err = 0;
 	u64 *data;
 	u8 port;
 
@@ -1449,8 +1411,10 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
 	 * parse only the requested one.
 	 */
 	port = FIELD_GET(QCA_HDR_RECV_SOURCE_PORT, ntohs(mib_ethhdr->hdr));
-	if (port != mib_eth_data->req_port)
+	if (port != mib_eth_data->req_port) {
+		err = -EPROTO;
 		goto exit;
+	}
 
 	data = mib_eth_data->data;
 
@@ -1479,7 +1443,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
 exit:
 	/* Complete on receiving all the mib packet */
 	if (refcount_dec_and_test(&mib_eth_data->port_parsed))
-		dsa_inband_complete(&mib_eth_data->inband);
+		dsa_inband_complete(&mib_eth_data->inband, err);
 }
 
 static int
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index 7b928c160c0e..ce27a732dba0 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -348,7 +348,6 @@ enum {
 struct qca8k_mgmt_eth_data {
 	struct dsa_inband inband;
 	struct mutex mutex; /* Enforce one mdio read/write at time */
-	int err;
 	u32 data[4];
 };
 
diff --git a/include/net/dsa.h b/include/net/dsa.h
index e8cbd67279ea..497d0eb85cf1 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -1311,10 +1311,11 @@ struct dsa_inband {
 	struct completion completion;
 	u32 seqno;
 	u32 seqno_mask;
+	int err;
 };
 
 void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask);
-void dsa_inband_complete(struct dsa_inband *inband);
+void dsa_inband_complete(struct dsa_inband *inband, int err);
 int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 		       void (*insert_seqno)(struct sk_buff *skb, u32 seqno),
 		       int timeout_ms);
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index cddf62916932..a426459c74c5 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -526,8 +526,9 @@ void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask)
 }
 EXPORT_SYMBOL_GPL(dsa_inband_init);
 
-void dsa_inband_complete(struct dsa_inband *inband)
+void dsa_inband_complete(struct dsa_inband *inband, int err)
 {
+	inband->err = err;
 	complete(&inband->completion);
 }
 EXPORT_SYMBOL_GPL(dsa_inband_complete);
@@ -552,6 +553,8 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 	unsigned long jiffies = msecs_to_jiffies(timeout_ms);
 	int ret;
 
+	inband->err = 0;
+
 	if (insert_seqno) {
 		inband->seqno++;
 		insert_seqno(skb, inband->seqno & inband->seqno_mask);
@@ -564,7 +567,8 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 	ret = wait_for_completion_timeout(&inband->completion, jiffies);
 	if (ret == 0)
 		return -ETIMEDOUT;
-	return 0;
+
+	return inband->err;
 }
 EXPORT_SYMBOL_GPL(dsa_inband_request);
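After this patch the err round-trip lives entirely in the core. End to
end, it looks roughly like this (decode details elided; the two example
functions are invented, the dsa_inband_* calls are real):

#include <linux/errno.h>
#include <net/dsa.h>

/* rx: the decoder reports its verdict to the core ... */
static void example_handler(struct dsa_inband *inband, bool reply_ok)
{
	dsa_inband_complete(inband, reply_ok ? 0 : -EPROTO);
}

/* ... and the core hands it back to the sleeping requester, folding
 * timeouts and decode errors into a single return value. NULL means
 * no seqno insertion is needed for this frame.
 */
static int example_transaction(struct dsa_inband *inband,
			       struct sk_buff *skb)
{
	return dsa_inband_request(inband, skb, NULL, 5 /* ms */);
}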
From patchwork Thu Sep 22 17:58:20 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985626
X-Patchwork-Delegate: kuba@kernel.org
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
    Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 09/10] net: dsa: qca8k: Pass response buffer via dsa_rmu_request
Date: Thu, 22 Sep 2022 19:58:20 +0200
Message-Id: <20220922175821.4184622-10-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>
X-Patchwork-State: RFC

Make the calling of operations on the switch more like a
request/response API by passing the address of a response buffer,
rather than making use of global state.

To avoid race conditions between the completion timeout and
late-arriving responses, protect the resp members with a spinlock.

The qca8k response frame has an odd layout: the reply is not
contiguous. Use a small intermediate buffer to convert the reply into
something which can be memcpy'ed.

Signed-off-by: Andrew Lunn
---
v2: Replace mutex with spinlock, since processing the response cannot
    block.
    Add more documentation about what the lock protects.
    Add a return value check
---
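The "odd layout" means the reply cannot be handed over as one pointer
into the skb: the first word lives in the management header, the rest
at the start of the payload. A hypothetical distillation of the gather
step, using the field and constant names from the diff below
(example_ack_handler() itself is invented):

static void example_ack_handler(struct qca8k_mgmt_eth_data *mgmt_eth_data,
				struct qca_mgmt_ethhdr *mgmt_ethhdr,
				struct sk_buff *skb, u8 len, int err)
{
	u32 data[4];

	data[0] = mgmt_ethhdr->mdio_data;	/* word 0: from the header */
	if (len > QCA_HDR_MGMT_DATA1_LEN)	/* words 1..3: from the payload */
		memcpy(&data[1], skb->data, QCA_HDR_MGMT_DATA2_LEN);

	/* hand the now-contiguous reply to the core, which memcpy's it
	 * into the waiting thread's buffer under resp_lock
	 */
	dsa_inband_complete(&mgmt_eth_data->inband, &data, sizeof(data), err);
}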
 drivers/net/dsa/qca/qca8k-8xxx.c | 30 +++++++++++++++++++++---------
 drivers/net/dsa/qca/qca8k.h      |  1 -
 include/net/dsa.h                | 12 +++++++++++-
 net/dsa/dsa.c                    | 24 +++++++++++++++++++++++-
 4 files changed, 55 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 096b43f89869..e0dc6dbafefe 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -138,6 +138,7 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 	struct qca8k_priv *priv = ds->priv;
 	struct qca_mgmt_ethhdr *mgmt_ethhdr;
 	u8 len, cmd;
+	u32 data[4];
 	int err = 0;
 
 	mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb_mac_header(skb);
@@ -151,17 +152,16 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
 		return;
 
 	if (cmd == MDIO_READ) {
-		mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data;
+		data[0] = mgmt_ethhdr->mdio_data;
 
 		/* Get the rest of the 12 byte of data.
 		 * The read/write function will extract the requested data.
 		 */
 		if (len > QCA_HDR_MGMT_DATA1_LEN)
-			memcpy(mgmt_eth_data->data + 1, skb->data,
-			       QCA_HDR_MGMT_DATA2_LEN);
+			memcpy(&data[1], skb->data, QCA_HDR_MGMT_DATA2_LEN);
 	}
 
-	dsa_inband_complete(&mgmt_eth_data->inband, err);
+	dsa_inband_complete(&mgmt_eth_data->inband, &data, sizeof(data), err);
 }
 
@@ -230,6 +230,7 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 {
 	struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data;
 	struct sk_buff *skb;
+	u32 data[4];
 	int ret;
 
 	skb = qca8k_alloc_mdio_header(MDIO_READ, reg, NULL,
@@ -250,12 +251,16 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
				 qca8k_mdio_header_fill_seq_num,
+				 &data, sizeof(data),
				 QCA8K_ETHERNET_TIMEOUT);
+	if (ret < 0)
+		goto out;
 
-	*val = mgmt_eth_data->data[0];
+	*val = data[0];
 
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
-		memcpy(val + 1, mgmt_eth_data->data + 1, len - QCA_HDR_MGMT_DATA1_LEN);
+		memcpy(val + 1, &data[1], len - QCA_HDR_MGMT_DATA1_LEN);
 
+out:
 	mutex_unlock(&mgmt_eth_data->mutex);
 
 	return ret;
@@ -285,6 +290,7 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
				 qca8k_mdio_header_fill_seq_num,
+				 NULL, 0,
				 QCA8K_ETHERNET_TIMEOUT);
 
 	mutex_unlock(&mgmt_eth_data->mutex);
@@ -412,16 +418,18 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 			struct sk_buff *read_skb, u32 *val)
 {
 	struct sk_buff *skb = skb_copy(read_skb, GFP_KERNEL);
+	u32 data[4];
 	int ret;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
				 qca8k_mdio_header_fill_seq_num,
+				 &data, sizeof(data),
				 QCA8K_ETHERNET_TIMEOUT);
 
 	if (ret)
 		return ret;
 
-	*val = mgmt_eth_data->data[0];
+	*val = data[0];
 
 	return 0;
 }
@@ -434,6 +442,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	struct qca8k_mgmt_eth_data *mgmt_eth_data;
 	u32 write_val, clear_val = 0, val;
 	struct net_device *mgmt_master;
+	u32 resp_data[4];
 	int ret, ret1;
 
 	if (regnum >= QCA8K_MDIO_MASTER_MAX_REG)
 		return -EINVAL;
@@ -494,6 +503,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb,
				 qca8k_mdio_header_fill_seq_num,
+				 NULL, 0,
				 QCA8K_ETHERNET_TIMEOUT);
 
 	if (ret) {
 		kfree_skb(read_skb);
 		goto exit;
 	}
@@ -514,12 +524,13 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	if (read) {
 		ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb,
					 qca8k_mdio_header_fill_seq_num,
+					 &resp_data, sizeof(resp_data),
					 QCA8K_ETHERNET_TIMEOUT);
 
 		if (ret)
 			goto exit;
 
-		ret = mgmt_eth_data->data[0] & QCA8K_MDIO_MASTER_DATA_MASK;
+		ret = resp_data[0] & QCA8K_MDIO_MASTER_DATA_MASK;
 	} else {
 		kfree_skb(read_skb);
 	}
@@ -528,6 +539,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	/* This is expected to fail sometimes, so don't check return value. */
 	dsa_inband_request(&mgmt_eth_data->inband, clear_skb,
			   qca8k_mdio_header_fill_seq_num,
+			   NULL, 0,
			   QCA8K_ETHERNET_TIMEOUT);
 
 	mutex_unlock(&mgmt_eth_data->mutex);
@@ -1443,7 +1455,7 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
 exit:
 	/* Complete on receiving all the mib packet */
 	if (refcount_dec_and_test(&mib_eth_data->port_parsed))
-		dsa_inband_complete(&mib_eth_data->inband, err);
+		dsa_inband_complete(&mib_eth_data->inband, NULL, 0, err);
 }
 
 static int
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index ce27a732dba0..e58cd7a5f91a 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -348,7 +348,6 @@ enum {
 struct qca8k_mgmt_eth_data {
 	struct dsa_inband inband;
 	struct mutex mutex; /* Enforce one mdio read/write at time */
-	u32 data[4];
 };
 
 struct qca8k_mib_eth_data {
diff --git a/include/net/dsa.h b/include/net/dsa.h
index 497d0eb85cf1..7450f413b709 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -1306,18 +1306,28 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
 
 /* Perform operations on a switch by sending it request in Ethernet
  * frames and expecting a response in a frame.
+ *
+ * resp_lock protects resp and resp_len to ensure they are consistent.
+ * If there is a thread waiting for the response, resp will point to a
+ * buffer to copy the response to. If the thread has given up waiting,
+ * resp will be a NULL pointer.
  */
 struct dsa_inband {
 	struct completion completion;
 	u32 seqno;
 	u32 seqno_mask;
 	int err;
+	spinlock_t resp_lock;
+	void *resp;
+	unsigned int resp_len;
 };
 
 void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask);
-void dsa_inband_complete(struct dsa_inband *inband, int err);
+void dsa_inband_complete(struct dsa_inband *inband,
+			 void *resp, unsigned int resp_len, int err);
 int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
		       void (*insert_seqno)(struct sk_buff *skb, u32 seqno),
+		       void *resp, unsigned int resp_len,
		       int timeout_ms);
 int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms);
 u32 dsa_inband_seqno(struct dsa_inband *inband);
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index a426459c74c5..fc7be84dbafb 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -521,14 +521,24 @@ EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db);
 void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask)
 {
 	init_completion(&inband->completion);
+	spin_lock_init(&inband->resp_lock);
 	inband->seqno_mask = seqno_mask;
 	inband->seqno = 0;
 }
 EXPORT_SYMBOL_GPL(dsa_inband_init);
 
-void dsa_inband_complete(struct dsa_inband *inband, int err)
+void dsa_inband_complete(struct dsa_inband *inband,
+			 void *resp, unsigned int resp_len,
+			 int err)
 {
 	inband->err = err;
+
+	spin_lock(&inband->resp_lock);
+	resp_len = min(inband->resp_len, resp_len);
+	if (inband->resp && resp)
+		memcpy(inband->resp, resp, resp_len);
+	spin_unlock(&inband->resp_lock);
+
 	complete(&inband->completion);
 }
 EXPORT_SYMBOL_GPL(dsa_inband_complete);
@@ -548,6 +558,7 @@ EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion);
 */
 int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
		       void (*insert_seqno)(struct sk_buff *skb, u32 seqno),
+		       void *resp, unsigned int resp_len,
		       int timeout_ms)
 {
 	unsigned long jiffies = msecs_to_jiffies(timeout_ms);
@@ -555,6 +566,11 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 
 	inband->err = 0;
 
+	spin_lock(&inband->resp_lock);
+	inband->resp = resp;
+	inband->resp_len = resp_len;
+	spin_unlock(&inband->resp_lock);
+
 	if (insert_seqno) {
 		inband->seqno++;
 		insert_seqno(skb, inband->seqno & inband->seqno_mask);
@@ -565,6 +581,12 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 	dev_queue_xmit(skb);
 
 	ret = wait_for_completion_timeout(&inband->completion, jiffies);
+
+	spin_lock(&inband->resp_lock);
+	inband->resp = NULL;
+	inband->resp_len = 0;
+	spin_unlock(&inband->resp_lock);
+
 	if (ret == 0)
 		return -ETIMEDOUT;
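Because the response buffer is owned by the requester and retracted
under resp_lock on the way out of dsa_inband_request(), a caller can
safely pass a stack buffer. A hypothetical call site (the driver's own
serialising mutex is elided for brevity):

static int example_read_word(struct qca8k_priv *priv,
			     struct sk_buff *skb, u32 *val)
{
	u32 data[4];	/* lives on this stack frame only */
	int ret;

	skb->dev = priv->mgmt_master;
	ret = dsa_inband_request(&priv->mgmt_eth_data.inband, skb,
				 qca8k_mdio_header_fill_seq_num,
				 &data, sizeof(data),
				 QCA8K_ETHERNET_TIMEOUT);
	if (ret < 0)
		return ret;

	*val = data[0];
	return 0;
}

On -ETIMEDOUT the core has already set inband->resp back to NULL under
resp_lock, so a late reply finds no buffer and cannot scribble on a
stack frame that has since been unwound.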
From patchwork Thu Sep 22 17:58:21 2022
X-Patchwork-Submitter: Andrew Lunn
X-Patchwork-Id: 12985625
X-Patchwork-Delegate: kuba@kernel.org
From: Andrew Lunn
To: netdev
Cc: mattias.forsblad@gmail.com, Florian Fainelli, Vladimir Oltean,
    Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v2 10/10] net: dsa: qca8: Move inband mutex into DSA core
Date: Thu, 22 Sep 2022 19:58:21 +0200
Message-Id: <20220922175821.4184622-11-andrew@lunn.ch>
In-Reply-To: <20220922175821.4184622-1-andrew@lunn.ch>
References: <20220922175821.4184622-1-andrew@lunn.ch>
X-Patchwork-State: RFC

The mutex serves two purposes: it serialises operations on the switch,
so that only one request/response can be in flight at once, and it
protects priv->mgmt_master, which itself has two roles. If the hardware
is wrongly wired and the wrong switch port is connected to the CPU,
inband cannot be used; in this case mgmt_master is NULL. Additionally,
if the master is down, it is set to NULL. Otherwise it points to the
netdev used to send frames to the switch.

Protecting priv->mgmt_master is not actually required: it is a single
pointer which is updated atomically, and the interface is not expected
to disappear, only to go down. Hence mgmt_master is always either valid
or NULL.

Move the check for the master device being NULL into the core, and move
the mutex used for serialisation into the core as well. The MIB
operations do not follow request/response semantics, so their mutex is
left untouched.

Signed-off-by: Andrew Lunn
---
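With both checks in the core, the driver-side helpers collapse to
building the frame and handing it off. A condensed, hypothetical
version of what qca8k_write_eth() looks like after the diff below:

static int example_write(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
{
	struct sk_buff *skb;

	skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val,
				      QCA8K_ETHERNET_MDIO_PRIORITY, len);
	if (!skb)
		return -ENOMEM;

	/* may be NULL: the core then rejects the request with -EOPNOTSUPP */
	skb->dev = priv->mgmt_master;

	return dsa_inband_request(&priv->mgmt_eth_data.inband, skb,
				  qca8k_mdio_header_fill_seq_num,
				  NULL, 0,
				  QCA8K_ETHERNET_TIMEOUT);
}

No locking and no master check remain in the driver; the core takes
inband->lock around the whole send/wait cycle and keys the "is there a
master?" test off skb->dev.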
 drivers/net/dsa/qca/qca8k-8xxx.c | 65 +++++---------------------------
 drivers/net/dsa/qca/qca8k.h      |  1 -
 include/net/dsa.h                |  4 ++
 net/dsa/dsa.c                    |  7 ++++
 4 files changed, 21 insertions(+), 56 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index e0dc6dbafefe..d35f813e0024 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -238,15 +238,6 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	if (!skb)
 		return -ENOMEM;
 
-	mutex_lock(&mgmt_eth_data->mutex);
-
-	/* Check mgmt_master if is operational */
-	if (!priv->mgmt_master) {
-		kfree_skb(skb);
-		mutex_unlock(&mgmt_eth_data->mutex);
-		return -EINVAL;
-	}
-
 	skb->dev = priv->mgmt_master;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
@@ -254,48 +245,31 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
				 &data, sizeof(data),
				 QCA8K_ETHERNET_TIMEOUT);
 	if (ret < 0)
-		goto out;
+		return ret;
 
 	*val = data[0];
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
 		memcpy(val + 1, &data[1], len - QCA_HDR_MGMT_DATA1_LEN);
 
-out:
-	mutex_unlock(&mgmt_eth_data->mutex);
-
-	return ret;
+	return 0;
 }
 
 static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 {
 	struct qca8k_mgmt_eth_data *mgmt_eth_data = &priv->mgmt_eth_data;
 	struct sk_buff *skb;
-	int ret;
 
 	skb = qca8k_alloc_mdio_header(MDIO_WRITE, reg, val,
				      QCA8K_ETHERNET_MDIO_PRIORITY, len);
 	if (!skb)
 		return -ENOMEM;
 
-	mutex_lock(&mgmt_eth_data->mutex);
-
-	/* Check mgmt_master if is operational */
-	if (!priv->mgmt_master) {
-		kfree_skb(skb);
-		mutex_unlock(&mgmt_eth_data->mutex);
-		return -EINVAL;
-	}
-
 	skb->dev = priv->mgmt_master;
 
-	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
-				 qca8k_mdio_header_fill_seq_num,
-				 NULL, 0,
-				 QCA8K_ETHERNET_TIMEOUT);
-
-	mutex_unlock(&mgmt_eth_data->mutex);
-
-	return ret;
+	return dsa_inband_request(&mgmt_eth_data->inband, skb,
+				  qca8k_mdio_header_fill_seq_num,
+				  NULL, 0,
+				  QCA8K_ETHERNET_TIMEOUT);
 }
 
 static int
@@ -441,7 +415,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	struct sk_buff *write_skb, *clear_skb, *read_skb;
 	struct qca8k_mgmt_eth_data *mgmt_eth_data;
 	u32 write_val, clear_val = 0, val;
-	struct net_device *mgmt_master;
 	u32 resp_data[4];
 	int ret, ret1;
 
@@ -487,19 +460,9 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	 * 3. Get the data if we are reading
 	 * 4. Reset the mdio master (even with error)
 	 */
-	mutex_lock(&mgmt_eth_data->mutex);
-
-	/* Check if mgmt_master is operational */
-	mgmt_master = priv->mgmt_master;
-	if (!mgmt_master) {
-		mutex_unlock(&mgmt_eth_data->mutex);
-		ret = -EINVAL;
-		goto err_mgmt_master;
-	}
-
-	read_skb->dev = mgmt_master;
-	clear_skb->dev = mgmt_master;
-	write_skb->dev = mgmt_master;
+	read_skb->dev = priv->mgmt_master;
+	clear_skb->dev = priv->mgmt_master;
+	write_skb->dev = priv->mgmt_master;
 
 	ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb,
				 qca8k_mdio_header_fill_seq_num,
@@ -542,13 +505,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
			   NULL, 0,
			   QCA8K_ETHERNET_TIMEOUT);
 
-	mutex_unlock(&mgmt_eth_data->mutex);
-
-	return ret;
+	return 0;
 
-	/* Error handling before lock */
-err_mgmt_master:
-	kfree_skb(read_skb);
 err_read_skb:
 	kfree_skb(clear_skb);
 err_clear_skb:
@@ -1530,13 +1488,11 @@ qca8k_master_change(struct dsa_switch *ds, const struct net_device *master,
 	if (dp->index != 0)
 		return;
 
-	mutex_lock(&priv->mgmt_eth_data.mutex);
 	mutex_lock(&priv->mib_eth_data.mutex);
 
 	priv->mgmt_master = operational ? (struct net_device *)master : NULL;
 
 	mutex_unlock(&priv->mib_eth_data.mutex);
-	mutex_unlock(&priv->mgmt_eth_data.mutex);
 }
 
 static int qca8k_connect_tag_protocol(struct dsa_switch *ds,
@@ -1854,7 +1810,6 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
 	if (!priv->ds)
 		return -ENOMEM;
 
-	mutex_init(&priv->mgmt_eth_data.mutex);
 	dsa_inband_init(&priv->mgmt_eth_data.inband, U32_MAX);
 
 	mutex_init(&priv->mib_eth_data.mutex);
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index e58cd7a5f91a..fe7928bccbf0 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -347,7 +347,6 @@ enum {
 
 struct qca8k_mgmt_eth_data {
 	struct dsa_inband inband;
-	struct mutex mutex; /* Enforce one mdio read/write at time */
 };
 
 struct qca8k_mib_eth_data {
diff --git a/include/net/dsa.h b/include/net/dsa.h
index 7450f413b709..81cec156564d 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -1311,8 +1311,12 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
 * If there is a thread waiting for the response, resp will point to a
 * buffer to copy the response to. If the thread has given up waiting,
 * resp will be a NULL pointer.
+ *
+ * The mutex is used to serialise all inband operations. It also protects
+ * the seqno, which is incremented while holding the lock.
 */
 struct dsa_inband {
+	struct mutex lock; /* Serialise operations */
 	struct completion completion;
 	u32 seqno;
 	u32 seqno_mask;
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index fc7be84dbafb..5596cdcd9a5a 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -521,6 +521,7 @@ EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db);
 void dsa_inband_init(struct dsa_inband *inband, u32 seqno_mask)
 {
 	init_completion(&inband->completion);
+	mutex_init(&inband->lock);
 	spin_lock_init(&inband->resp_lock);
 	inband->seqno_mask = seqno_mask;
 	inband->seqno = 0;
@@ -566,6 +567,11 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 
 	inband->err = 0;
 
+	if (!skb->dev)
+		return -EOPNOTSUPP;
+
+	mutex_lock(&inband->lock);
+
 	spin_lock(&inband->resp_lock);
 	inband->resp = resp;
 	inband->resp_len = resp_len;
@@ -586,6 +592,7 @@ int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
 	inband->resp = NULL;
 	inband->resp_len = 0;
 	spin_unlock(&inband->resp_lock);
+	mutex_unlock(&inband->lock);
 
 	if (ret == 0)
 		return -ETIMEDOUT;
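Taken together, the series leaves dsa_inband with the following shape
of usage. The driver below is hypothetical: it assumes
dsa_inband_init(&priv->inband, U32_MAX) ran at probe time, and frame
construction and seqno placement are driver-specific stubs:

#include <linux/skbuff.h>
#include <net/dsa.h>

struct my_priv {
	struct dsa_inband inband;
	struct net_device *mgmt_master;	/* NULL when inband is unusable */
};

static void my_insert_seqno(struct sk_buff *skb, u32 seqno)
{
	/* write seqno into the driver's management header in skb */
}

/* RX path: decode the reply, then wake the requester with data + verdict */
static void my_rx_handler(struct my_priv *priv, struct sk_buff *skb, int err)
{
	u32 reply[4] = {};

	/* ... gather reply words from skb, set err on malformed fields ... */
	dsa_inband_complete(&priv->inband, &reply, sizeof(reply), err);
}

/* TX path: one call does seqno insertion, xmit, wait, and response copy */
static int my_read(struct my_priv *priv, struct sk_buff *skb, u32 *val)
{
	u32 resp[4];
	int ret;

	skb->dev = priv->mgmt_master;
	ret = dsa_inband_request(&priv->inband, skb, my_insert_seqno,
				 &resp, sizeof(resp), 5 /* ms */);
	if (ret)
		return ret;	/* -EOPNOTSUPP, -ETIMEDOUT, or decoder error */

	*val = resp[0];
	return 0;
}

Serialisation, the master check, sequence numbering, the completion,
and the response hand-off all live in the core; the driver supplies
only the frame format and the decoder.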