From patchwork Fri Sep 8 11:20:45 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377370
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 1/7] net: phy: always call phy_process_state_change() under lock
Date: Fri, 08 Sep 2023 12:20:45 +0100

phy_stop() calls phy_process_state_change() while holding the phydev
lock, so arrange for phy_state_machine() to do the same, ensuring that
phy_process_state_change() is always called with consistent locking.

Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index df54c137c5f5..1e5218935eb3 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1506,6 +1506,7 @@ void phy_state_machine(struct work_struct *work)
 	if (err < 0)
 		phy_error_precise(phydev, func, err);
 
+	mutex_lock(&phydev->lock);
 	phy_process_state_change(phydev, old_state);
 
 	/* Only re-schedule a PHY state machine change if we are polling the
@@ -1516,7 +1517,6 @@ void phy_state_machine(struct work_struct *work)
 	 * state machine would be pointless and possibly error prone when
 	 * called from phy_disconnect() synchronously.
 	 */
-	mutex_lock(&phydev->lock);
 	if (phy_polling_mode(phydev) && phy_is_started(phydev))
 		phy_queue_state_machine(phydev, PHY_STATE_TIME);
 	mutex_unlock(&phydev->lock);

From patchwork Fri Sep 8 11:20:50 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377371
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 2/7] net: phy: call phy_error_precise() while holding the lock
Date: Fri, 08 Sep 2023 12:20:50 +0100

Move the locking out of phy_error_precise() into its only call site,
merging it with the locked region that has already been taken there.
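The shape of this refactor (hoisting a lock out of a small helper so its only caller can fold the call into an existing critical section) can be sketched outside the kernel. Below is a minimal pthread-based analogue; the names (struct dev, dev_record_error, dev_handle) are invented for illustration and are not the PHYLIB code:

```c
#include <assert.h>
#include <pthread.h>

struct dev {
	pthread_mutex_t lock;
	int error_count;
	int state;
};

/* Before the refactor, the helper took and released the lock itself,
 * so the caller had to drop its own critical section around the call,
 * producing two adjacent locked regions with a gap between them.
 *
 * After the refactor, the helper expects to be called with the lock
 * already held, and the caller keeps one contiguous critical section. */
static void dev_record_error(struct dev *d)
{
	/* caller must hold d->lock */
	d->error_count++;
	d->state = -1;
}

static void dev_handle(struct dev *d, int err)
{
	pthread_mutex_lock(&d->lock);
	if (err < 0)
		dev_record_error(d);	/* error path stays inside... */
	d->state++;			/* ...the same region as the
					 * follow-up state change */
	pthread_mutex_unlock(&d->lock);
}
```

Keeping the error path inside the caller's single locked region, as this patch does with phy_error_precise(), removes a drop-and-retake of the mutex between the error handling and the subsequent state change.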
Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 1e5218935eb3..990d387b31bd 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1231,9 +1231,7 @@ static void phy_error_precise(struct phy_device *phydev,
 			      const void *func, int err)
 {
 	WARN(1, "%pS: returned: %d\n", func, err);
-	mutex_lock(&phydev->lock);
 	phy_process_error(phydev);
-	mutex_unlock(&phydev->lock);
 }
 
 /**
@@ -1503,10 +1501,10 @@ void phy_state_machine(struct work_struct *work)
 	if (err == -ENODEV)
 		return;
 
+	mutex_lock(&phydev->lock);
 	if (err < 0)
 		phy_error_precise(phydev, func, err);
 
-	mutex_lock(&phydev->lock);
 	phy_process_state_change(phydev, old_state);
 
 	/* Only re-schedule a PHY state machine change if we are polling the

From patchwork Fri Sep 8 11:20:55 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377373
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 3/7] net: phy: move call to start aneg
Date: Fri, 08 Sep 2023 12:20:55 +0100

Move the call to start auto-negotiation inside the locked region of the
PHYLIB state machine, calling the locked variant _phy_start_aneg().
This avoids unnecessarily releasing and re-acquiring the lock.

Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 990d387b31bd..5bb33af2a4cb 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1489,14 +1489,15 @@ void phy_state_machine(struct work_struct *work)
 		break;
 	}
 
+	if (needs_aneg) {
+		err = _phy_start_aneg(phydev);
+		func = &_phy_start_aneg;
+	}
+
 	mutex_unlock(&phydev->lock);
 
-	if (needs_aneg) {
-		err = phy_start_aneg(phydev);
-		func = &phy_start_aneg;
-	} else if (do_suspend) {
+	if (do_suspend)
 		phy_suspend(phydev);
-	}
 
 	if (err == -ENODEV)
 		return;

From patchwork Fri Sep 8 11:21:00 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377372
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 4/7] net: phy: move phy_suspend() to end of phy_state_machine()
Date: Fri, 08 Sep 2023 12:21:00 +0100

Move the call to phy_suspend() to the end of phy_state_machine(), after
the lock has been released, so that the locked areas can be combined.
phy_suspend() cannot be called while holding phydev->lock, as doing so
has caused deadlocks in the past.

Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 5bb33af2a4cb..756326f38b14 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1494,15 +1494,11 @@ void phy_state_machine(struct work_struct *work)
 		func = &_phy_start_aneg;
 	}
 
-	mutex_unlock(&phydev->lock);
-
-	if (do_suspend)
-		phy_suspend(phydev);
-
-	if (err == -ENODEV)
+	if (err == -ENODEV) {
+		mutex_unlock(&phydev->lock);
 		return;
+	}
 
-	mutex_lock(&phydev->lock);
 	if (err < 0)
 		phy_error_precise(phydev, func, err);
 
@@ -1519,6 +1515,9 @@ void phy_state_machine(struct work_struct *work)
 	if (phy_polling_mode(phydev) && phy_is_started(phydev))
 		phy_queue_state_machine(phydev, PHY_STATE_TIME);
 	mutex_unlock(&phydev->lock);
+
+	if (do_suspend)
+		phy_suspend(phydev);
 }
 
 /**

From patchwork Fri Sep 8 11:21:05 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377374
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 5/7] net: phy: move phy_state_machine()
Date: Fri, 08 Sep 2023 12:21:05 +0100

Move phy_state_machine() before phy_stop() to avoid subsequent patches
introducing forward references.

Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 152 +++++++++++++++++++++---------------------
 1 file changed, 76 insertions(+), 76 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 756326f38b14..20e23fa9cf96 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1353,82 +1353,6 @@ void phy_free_interrupt(struct phy_device *phydev)
 }
 EXPORT_SYMBOL(phy_free_interrupt);
 
-/**
- * phy_stop - Bring down the PHY link, and stop checking the status
- * @phydev: target phy_device struct
- */
-void phy_stop(struct phy_device *phydev)
-{
-	struct net_device *dev = phydev->attached_dev;
-	enum phy_state old_state;
-
-	if (!phy_is_started(phydev) && phydev->state != PHY_DOWN &&
-	    phydev->state != PHY_ERROR) {
-		WARN(1, "called from state %s\n",
-		     phy_state_to_str(phydev->state));
-		return;
-	}
-
-	mutex_lock(&phydev->lock);
-	old_state = phydev->state;
-
-	if (phydev->state == PHY_CABLETEST) {
-		phy_abort_cable_test(phydev);
-		netif_testing_off(dev);
-	}
-
-	if (phydev->sfp_bus)
-		sfp_upstream_stop(phydev->sfp_bus);
-
-	phydev->state = PHY_HALTED;
-	phy_process_state_change(phydev, old_state);
-
-	mutex_unlock(&phydev->lock);
-
-	phy_state_machine(&phydev->state_queue.work);
-	phy_stop_machine(phydev);
-
-	/* Cannot call flush_scheduled_work() here as desired because
-	 * of rtnl_lock(), but PHY_HALTED shall guarantee irq handler
-	 * will not reenable interrupts.
-	 */
-}
-EXPORT_SYMBOL(phy_stop);
-
-/**
- * phy_start - start or restart a PHY device
- * @phydev: target phy_device struct
- *
- * Description: Indicates the attached device's readiness to
- * handle PHY-related work. Used during startup to start the
- * PHY, and after a call to phy_stop() to resume operation.
- * Also used to indicate the MDIO bus has cleared an error
- * condition.
- */
-void phy_start(struct phy_device *phydev)
-{
-	mutex_lock(&phydev->lock);
-
-	if (phydev->state != PHY_READY && phydev->state != PHY_HALTED) {
-		WARN(1, "called from state %s\n",
-		     phy_state_to_str(phydev->state));
-		goto out;
-	}
-
-	if (phydev->sfp_bus)
-		sfp_upstream_start(phydev->sfp_bus);
-
-	/* if phy was suspended, bring the physical link up again */
-	__phy_resume(phydev);
-
-	phydev->state = PHY_UP;
-
-	phy_start_machine(phydev);
-out:
-	mutex_unlock(&phydev->lock);
-}
-EXPORT_SYMBOL(phy_start);
-
 /**
  * phy_state_machine - Handle the state machine
  * @work: work_struct that describes the work to be done
@@ -1520,6 +1444,82 @@ void phy_state_machine(struct work_struct *work)
 		phy_suspend(phydev);
 }
 
+/**
+ * phy_stop - Bring down the PHY link, and stop checking the status
+ * @phydev: target phy_device struct
+ */
+void phy_stop(struct phy_device *phydev)
+{
+	struct net_device *dev = phydev->attached_dev;
+	enum phy_state old_state;
+
+	if (!phy_is_started(phydev) && phydev->state != PHY_DOWN &&
+	    phydev->state != PHY_ERROR) {
+		WARN(1, "called from state %s\n",
+		     phy_state_to_str(phydev->state));
+		return;
+	}
+
+	mutex_lock(&phydev->lock);
+	old_state = phydev->state;
+
+	if (phydev->state == PHY_CABLETEST) {
+		phy_abort_cable_test(phydev);
+		netif_testing_off(dev);
+	}
+
+	if (phydev->sfp_bus)
+		sfp_upstream_stop(phydev->sfp_bus);
+
+	phydev->state = PHY_HALTED;
+	phy_process_state_change(phydev, old_state);
+
+	mutex_unlock(&phydev->lock);
+
+	phy_state_machine(&phydev->state_queue.work);
+	phy_stop_machine(phydev);
+
+	/* Cannot call flush_scheduled_work() here as desired because
+	 * of rtnl_lock(), but PHY_HALTED shall guarantee irq handler
+	 * will not reenable interrupts.
+	 */
+}
+EXPORT_SYMBOL(phy_stop);
+
+/**
+ * phy_start - start or restart a PHY device
+ * @phydev: target phy_device struct
+ *
+ * Description: Indicates the attached device's readiness to
+ * handle PHY-related work. Used during startup to start the
+ * PHY, and after a call to phy_stop() to resume operation.
+ * Also used to indicate the MDIO bus has cleared an error
+ * condition.
+ */
+void phy_start(struct phy_device *phydev)
+{
+	mutex_lock(&phydev->lock);
+
+	if (phydev->state != PHY_READY && phydev->state != PHY_HALTED) {
+		WARN(1, "called from state %s\n",
+		     phy_state_to_str(phydev->state));
+		goto out;
+	}
+
+	if (phydev->sfp_bus)
+		sfp_upstream_start(phydev->sfp_bus);
+
+	/* if phy was suspended, bring the physical link up again */
+	__phy_resume(phydev);
+
+	phydev->state = PHY_UP;
+
+	phy_start_machine(phydev);
+out:
+	mutex_unlock(&phydev->lock);
+}
+EXPORT_SYMBOL(phy_start);
+
 /**
  * phy_mac_interrupt - MAC says the link has changed
  * @phydev: phy_device struct with changed link

From patchwork Fri Sep 8 11:21:10 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377376
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 6/7] net: phy: split locked and unlocked section of phy_state_machine()
Date: Fri, 08 Sep 2023 12:21:10 +0100

Split the locked and unlocked sections of phy_state_machine() into two
separate functions that can be called inside and outside the phydev
lock as appropriate. This allows a caller to combine its own locked
region with the locked region inside phy_state_machine(), avoiding an
unnecessary drop-and-reacquire of the phydev lock that may allow races
to occur.
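The split introduced here follows a common concurrency pattern: a locked core computes state changes and returns a token describing work that must happen outside the lock, and a separate unlocked helper consumes that token. A self-contained pthread-based sketch of the pattern, using hypothetical names (struct machine, machine_run_locked, machine_post_work) rather than the kernel functions:

```c
#include <assert.h>
#include <pthread.h>

enum work { WORK_NONE, WORK_SUSPEND };

struct machine {
	pthread_mutex_t lock;
	int halted;
	int suspended;
};

/* Locked core: inspect and update state, but defer anything that
 * must not run under the lock by returning a work token. */
static enum work machine_run_locked(struct machine *m)
{
	/* caller must hold m->lock */
	return m->halted ? WORK_SUSPEND : WORK_NONE;
}

/* Unlocked tail: perform the deferred work outside the lock, e.g.
 * an operation that could deadlock against another lock holder. */
static void machine_post_work(struct machine *m, enum work w)
{
	if (w == WORK_SUSPEND)
		m->suspended = 1;
}

/* The ordinary entry point just brackets the core with the lock;
 * other callers can instead invoke the core from within their own
 * critical section and run the tail after unlocking. */
static void machine_run(struct machine *m)
{
	enum work w;

	pthread_mutex_lock(&m->lock);
	w = machine_run_locked(m);
	pthread_mutex_unlock(&m->lock);

	machine_post_work(m, w);
}
```

A caller that already holds the lock can call the locked core directly inside its existing critical section and run the post-work helper after unlocking, keeping one contiguous locked region instead of two.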
Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 68 ++++++++++++++++++++++++++-----------------
 1 file changed, 42 insertions(+), 26 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 20e23fa9cf96..d78c2cc003ce 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1353,33 +1353,27 @@ void phy_free_interrupt(struct phy_device *phydev)
 }
 EXPORT_SYMBOL(phy_free_interrupt);
 
-/**
- * phy_state_machine - Handle the state machine
- * @work: work_struct that describes the work to be done
- */
-void phy_state_machine(struct work_struct *work)
+enum phy_state_work {
+	PHY_STATE_WORK_NONE,
+	PHY_STATE_WORK_ANEG,
+	PHY_STATE_WORK_SUSPEND,
+};
+
+static enum phy_state_work _phy_state_machine(struct phy_device *phydev)
 {
-	struct delayed_work *dwork = to_delayed_work(work);
-	struct phy_device *phydev =
-		container_of(dwork, struct phy_device, state_queue);
+	enum phy_state_work state_work = PHY_STATE_WORK_NONE;
 	struct net_device *dev = phydev->attached_dev;
-	bool needs_aneg = false, do_suspend = false;
-	enum phy_state old_state;
+	enum phy_state old_state = phydev->state;
 	const void *func = NULL;
 	bool finished = false;
 	int err = 0;
 
-	mutex_lock(&phydev->lock);
-
-	old_state = phydev->state;
-
 	switch (phydev->state) {
 	case PHY_DOWN:
 	case PHY_READY:
 		break;
 	case PHY_UP:
-		needs_aneg = true;
-
+		state_work = PHY_STATE_WORK_ANEG;
 		break;
 	case PHY_NOLINK:
 	case PHY_RUNNING:
@@ -1391,7 +1385,7 @@ void phy_state_machine(struct work_struct *work)
 			if (err) {
 				phy_abort_cable_test(phydev);
 				netif_testing_off(dev);
-				needs_aneg = true;
+				state_work = PHY_STATE_WORK_ANEG;
 				phydev->state = PHY_UP;
 				break;
 			}
@@ -1399,7 +1393,7 @@ void phy_state_machine(struct work_struct *work)
 		if (finished) {
 			ethnl_cable_test_finished(phydev);
 			netif_testing_off(dev);
-			needs_aneg = true;
+			state_work = PHY_STATE_WORK_ANEG;
 			phydev->state = PHY_UP;
 		}
 		break;
@@ -1409,19 +1403,17 @@ void phy_state_machine(struct work_struct *work)
 			phydev->link = 0;
 			phy_link_down(phydev);
 		}
-		do_suspend = true;
+		state_work = PHY_STATE_WORK_SUSPEND;
 		break;
 	}
 
-	if (needs_aneg) {
+	if (state_work == PHY_STATE_WORK_ANEG) {
 		err = _phy_start_aneg(phydev);
 		func = &_phy_start_aneg;
 	}
 
-	if (err == -ENODEV) {
-		mutex_unlock(&phydev->lock);
-		return;
-	}
+	if (err == -ENODEV)
+		return state_work;
 
 	if (err < 0)
 		phy_error_precise(phydev, func, err);
@@ -1438,12 +1430,36 @@ void phy_state_machine(struct work_struct *work)
 	 */
 	if (phy_polling_mode(phydev) && phy_is_started(phydev))
 		phy_queue_state_machine(phydev, PHY_STATE_TIME);
-	mutex_unlock(&phydev->lock);
 
-	if (do_suspend)
+	return state_work;
+}
+
+/* unlocked part of the PHY state machine */
+static void _phy_state_machine_post_work(struct phy_device *phydev,
+					 enum phy_state_work state_work)
+{
+	if (state_work == PHY_STATE_WORK_SUSPEND)
 		phy_suspend(phydev);
 }
 
+/**
+ * phy_state_machine - Handle the state machine
+ * @work: work_struct that describes the work to be done
+ */
+void phy_state_machine(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct phy_device *phydev =
+		container_of(dwork, struct phy_device, state_queue);
+	enum phy_state_work state_work;
+
+	mutex_lock(&phydev->lock);
+	state_work = _phy_state_machine(phydev);
+	mutex_unlock(&phydev->lock);
+
+	_phy_state_machine_post_work(phydev, state_work);
+}
+
 /**
  * phy_stop - Bring down the PHY link, and stop checking the status
  * @phydev: target phy_device struct

From patchwork Fri Sep 8 11:21:16 2023
X-Patchwork-Submitter: "Russell King (Oracle)"
X-Patchwork-Id: 13377375
X-Patchwork-Delegate: kuba@kernel.org
From: "Russell King (Oracle)"
To: Andrew Lunn , Heiner Kallweit
Cc: Jijie Shao , shaojijie@huawei.com, shenjian15@huawei.com, liuyonglong@huawei.com, wangjie125@huawei.com, chenhao418@huawei.com, lanhao@huawei.com, wangpeiyang1@huawei.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org
Subject: [PATCH RFC net-next 7/7] net: phy: convert phy_stop() to use split state machine
Date: Fri, 08 Sep 2023 12:21:16 +0100

Convert phy_stop() to use the new locked-section and unlocked-section
parts of the PHY state machine.

Signed-off-by: Russell King (Oracle)
---
 drivers/net/phy/phy.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index d78c2cc003ce..93a8676dd8d8 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1467,6 +1467,7 @@ void phy_state_machine(struct work_struct *work)
 void phy_stop(struct phy_device *phydev)
 {
 	struct net_device *dev = phydev->attached_dev;
+	enum phy_state_work state_work;
 	enum phy_state old_state;
 
 	if (!phy_is_started(phydev) && phydev->state != PHY_DOWN &&
@@ -1490,9 +1491,10 @@ void phy_stop(struct phy_device *phydev)
 	phydev->state = PHY_HALTED;
 	phy_process_state_change(phydev, old_state);
 
+	state_work = _phy_state_machine(phydev);
 	mutex_unlock(&phydev->lock);
 
-	phy_state_machine(&phydev->state_queue.work);
+	_phy_state_machine_post_work(phydev, state_work);
 	phy_stop_machine(phydev);
 
 	/* Cannot call flush_scheduled_work() here as desired because