From patchwork Wed Nov 17 03:29:21 2021
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 12623591
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski
Cc: netdev, Eric Dumazet, Eric Dumazet, david decotigny
Subject: [PATCH net-next 1/4] net: use an atomic_long_t for queue->trans_timeout
Date: Tue, 16 Nov 2021 19:29:21 -0800
Message-Id: <20211117032924.1740327-2-eric.dumazet@gmail.com>
In-Reply-To: <20211117032924.1740327-1-eric.dumazet@gmail.com>
References: <20211117032924.1740327-1-eric.dumazet@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

tx_timeout_show() assumed dev_watchdog() would stop all the queues, so that
queue->trans_timeout could be fetched under the protection of queue->_xmit_lock.

As we no longer want to disrupt transmits, use an atomic_long_t instead.

Signed-off-by: Eric Dumazet
Cc: david decotigny
---
 include/linux/netdevice.h | 2 +-
 net/core/net-sysfs.c      | 6 +-----
 net/sched/sch_generic.c   | 2 +-
 3 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 31a7e6b2768123021690a3dc8572c5e8cb0e0027..143ac02c7f1cc90cf6704574fb0012e1ba830c70 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -592,7 +592,7 @@ struct netdev_queue {
 	 * Number of TX timeouts for this queue
 	 * (/sys/class/net/DEV/Q/trans_timeout)
 	 */
-	unsigned long		trans_timeout;
+	atomic_long_t		trans_timeout;
 
 	/* Subordinate device that the queue has been assigned to */
 	struct net_device	*sb_dev;
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 9c01c642cf9ef384fe54e56243b102ef838d0a62..addbef5419fbb62ce83f5132ae21c9d2872e95f5 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1201,11 +1201,7 @@ static const struct sysfs_ops netdev_queue_sysfs_ops = {
 static ssize_t tx_timeout_show(struct netdev_queue *queue, char *buf)
 {
-	unsigned long trans_timeout;
-
-	spin_lock_irq(&queue->_xmit_lock);
-	trans_timeout = queue->trans_timeout;
-	spin_unlock_irq(&queue->_xmit_lock);
+	unsigned long trans_timeout = atomic_long_read(&queue->trans_timeout);
 
 	return sprintf(buf, fmt_ulong, trans_timeout);
 }
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 3b0f620958037eb46e395f172c2315fdd98be914..1b4328bd495d54d44a9d51b53c8e8bc18b9cc294 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -467,7 +467,7 @@ static void dev_watchdog(struct timer_list *t)
 				    time_after(jiffies, (trans_start +
 							 dev->watchdog_timeo))) {
 					some_queue_timedout = 1;
-					txq->trans_timeout++;
+					atomic_long_inc(&txq->trans_timeout);
 					break;
 				}
 			}
From patchwork Wed Nov 17 03:29:22 2021
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 12623593
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski
Cc: netdev, Eric Dumazet, Eric Dumazet
Subject: [PATCH net-next 2/4] net: annotate accesses to queue->trans_start
Date: Tue, 16 Nov 2021 19:29:22 -0800
Message-Id: <20211117032924.1740327-3-eric.dumazet@gmail.com>
In-Reply-To: <20211117032924.1740327-1-eric.dumazet@gmail.com>
References: <20211117032924.1740327-1-eric.dumazet@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

In the following patches, dev_watchdog() will no longer stop all queues;
it will read queue->trans_start locklessly.

Signed-off-by: Eric Dumazet
---
 drivers/net/ethernet/apm/xgene/xgene_enet_main.c  |  2 +-
 drivers/net/ethernet/atheros/ag71xx.c             |  2 +-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c    |  4 ++--
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c   |  2 +-
 drivers/net/ethernet/ibm/ibmvnic.c                |  2 +-
 drivers/net/ethernet/intel/igb/igb_main.c         |  4 ++--
 .../ethernet/mellanox/mlx5/core/en/reporter_tx.c  |  2 +-
 .../net/ethernet/stmicro/stmmac/stmmac_main.c     |  6 +++---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c          |  2 +-
 drivers/net/virtio_net.c                          |  2 +-
 drivers/net/wireless/marvell/mwifiex/init.c       |  2 +-
 drivers/staging/rtl8192e/rtllib_softmac.c         |  2 +-
 include/linux/netdevice.h                         | 16 +++++++++++++---
 net/sched/sch_generic.c                           |  8 ++++----
 14 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
index 220dc42af31ae1200ca05441000bb2a1abd0fd89..ff2d099aab218b30783266d5f905e3f0846f0951 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
@@ -869,7 +869,7 @@ static void xgene_enet_timeout(struct net_device *ndev, unsigned int txqueue)
 
 	for (i = 0; i < pdata->txq_cnt; i++) {
 		txq = netdev_get_tx_queue(ndev, i);
-		txq->trans_start = jiffies;
+		txq_trans_cond_update(txq);
 		netif_tx_start_queue(txq);
 	}
 }
diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
index 88d2ab7483994a850efcea1228d9718e0f6bc2ae..e4f30bb7498fec02742376a37a22495cba38a9ea 100644
--- a/drivers/net/ethernet/atheros/ag71xx.c
+++ b/drivers/net/ethernet/atheros/ag71xx.c
@@ -766,7 +766,7 @@ static bool ag71xx_check_dma_stuck(struct ag71xx *ag)
 	unsigned long timestamp;
 	u32 rx_sm, tx_sm, rx_fd;
 
-	timestamp = netdev_get_tx_queue(ag->ndev, 0)->trans_start;
+	timestamp = READ_ONCE(netdev_get_tx_queue(ag->ndev, 0)->trans_start);
 	if (likely(time_before(jiffies, timestamp + HZ / 10)))
 		return false;
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 6b2927d863e2cc7569dadd4cd1e974dcc347274e..d6871437d9515df18c7819817799f9ade8bcb57e 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2325,7 +2325,7 @@ dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
 	txq = netdev_get_tx_queue(net_dev, queue_mapping);
 
 	/* LLTX requires to do our own update of trans_start */
-	txq->trans_start = jiffies;
+	txq_trans_cond_update(txq);
 
 	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
 		fd.cmd |= cpu_to_be32(FM_FD_CMD_UPD);
@@ -2531,7 +2531,7 @@ static int dpaa_xdp_xmit_frame(struct net_device *net_dev,
 
 	/* Bump the trans_start */
 	txq = netdev_get_tx_queue(net_dev, smp_processor_id());
-	txq->trans_start = jiffies;
+	txq_trans_cond_update(txq);
 
 	err = dpaa_xmit(priv, percpu_stats, smp_processor_id(), &fd);
 	if (err) {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 13835a37b3a2fd13d557750fb3943172fc2eb700..d5100179f8d589dc4ea517f5bafca0612a5fcc38 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2679,7 +2679,7 @@ static bool hns3_get_tx_timeo_queue_info(struct net_device *ndev)
 		unsigned long trans_start;
 
 		q = netdev_get_tx_queue(ndev, i);
-		trans_start = q->trans_start;
+		trans_start = READ_ONCE(q->trans_start);
 		if (netif_xmit_stopped(q) &&
 		    time_after(jiffies,
 			       (trans_start + ndev->watchdog_timeo))) {
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 3cca51735421a7435f4c7f32fa3f5af9003f2d37..c327fc8860da20e16b6801adc071bfb90dd05c36 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2058,7 +2058,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 
 	tx_packets++;
 	tx_bytes += skb->len;
-	txq->trans_start = jiffies;
+	txq_trans_cond_update(txq);
 	ret = NETDEV_TX_OK;
 	goto out;
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 836be0d3b29105d48530e2ce6b3f8db13c730e71..18a019a47182218ff85a83a00e75c99005e22a34 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2927,7 +2927,7 @@ static int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
 	nq = txring_txq(tx_ring);
 	__netif_tx_lock(nq, cpu);
 	/* Avoid transmit queue timeout since we share it with the slow path */
-	nq->trans_start = jiffies;
+	txq_trans_cond_update(nq);
 	ret = igb_xmit_xdp_ring(adapter, tx_ring, xdpf);
 	__netif_tx_unlock(nq);
 
@@ -2961,7 +2961,7 @@ static int igb_xdp_xmit(struct net_device *dev, int n,
 	__netif_tx_lock(nq, cpu);
 	/* Avoid transmit queue timeout since we share it with the slow path */
-	nq->trans_start = jiffies;
+	txq_trans_cond_update(nq);
 
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 4f4bc8726ec4fa2b20a32edb25780ad177f6860a..86060513328739ecd6702525a6902bd82c7d2db6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -565,7 +565,7 @@ int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq)
 	snprintf(err_str, sizeof(err_str),
 		 "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x, usecs since last trans: %u",
 		 sq->ch_ix, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc,
-		 jiffies_to_usecs(jiffies - sq->txq->trans_start));
+		 jiffies_to_usecs(jiffies - READ_ONCE(sq->txq->trans_start)));
 
 	mlx5e_health_report(priv, priv->tx_reporter, err_str, &err_ctx);
 	return to_ctx.status;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 033c35c09a54876eeb87e30aad5ec8e8613f13b9..389d125310c151e54b428d19616fd07531051f54 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2356,7 +2356,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
 	bool work_done = true;
 
 	/* Avoids TX time-out as we are sharing with slow path */
-	nq->trans_start = jiffies;
+	txq_trans_cond_update(nq);
 
 	budget = min(budget, stmmac_tx_avail(priv, queue));
 
@@ -4657,7 +4657,7 @@ static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
 	__netif_tx_lock(nq, cpu);
 	/* Avoids TX time-out as we are sharing with slow path */
-	nq->trans_start = jiffies;
+	txq_trans_cond_update(nq);
 
 	res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, false);
 	if (res == STMMAC_XDP_TX)
@@ -6293,7 +6293,7 @@ static int stmmac_xdp_xmit(struct net_device *dev, int num_frames,
 	__netif_tx_lock(nq, cpu);
 	/* Avoids TX time-out as we are sharing with slow path */
-	nq->trans_start = jiffies;
+	txq_trans_cond_update(nq);
 
 	for (i = 0; i < num_frames; i++) {
 		int res;
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index c092cb61416a180a5ce1d0d28bd163e4a1dab302..750cea23e9cd02bba139a58553c4b1753956ad10 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -345,7 +345,7 @@ static void am65_cpsw_nuss_ndo_host_tx_timeout(struct net_device *ndev,
 
 	netif_txq = netdev_get_tx_queue(ndev, txqueue);
 	tx_chn = &common->tx_chns[txqueue];
-	trans_start = netif_txq->trans_start;
+	trans_start = READ_ONCE(netif_txq->trans_start);
 
 	netdev_err(ndev, "txq:%d DRV_XOFF:%d tmo:%u dql_avail:%d free_desc:%zu\n",
 		   txqueue,
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1771d6e5224fd834a4dfca4ba578134439d4d201..03e38e38ee4b5a97567eb692cd84a55722a1a8b2 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2694,7 +2694,7 @@ static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
 
 	netdev_err(dev, "TX timeout on queue: %u, sq: %s, vq: 0x%x, name: %s, %u usecs ago\n",
 		   txqueue, sq->name, sq->vq->index, sq->vq->name,
-		   jiffies_to_usecs(jiffies - txq->trans_start));
+		   jiffies_to_usecs(jiffies - READ_ONCE(txq->trans_start)));
 }
 
 static const struct net_device_ops virtnet_netdev = {
diff --git a/drivers/net/wireless/marvell/mwifiex/init.c b/drivers/net/wireless/marvell/mwifiex/init.c
index f006a3d72b4046f435739e9218e3be6bf7001adc..88c72d1827a00d608e90e03beb20202228cd8699 100644
--- a/drivers/net/wireless/marvell/mwifiex/init.c
+++ b/drivers/net/wireless/marvell/mwifiex/init.c
@@ -332,7 +332,7 @@ void mwifiex_set_trans_start(struct net_device *dev)
 	int i;
 
 	for (i = 0; i < dev->num_tx_queues; i++)
-		netdev_get_tx_queue(dev, i)->trans_start = jiffies;
+		txq_trans_cond_update(netdev_get_tx_queue(dev, i));
 
 	netif_trans_update(dev);
 }
diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
index d2726d01c7573fe8230e0a7e0f7f811c1ff8cffc..aabbea48223d2f7915285c883e2ae94111bd91b6 100644
--- a/drivers/staging/rtl8192e/rtllib_softmac.c
+++ b/drivers/staging/rtl8192e/rtllib_softmac.c
@@ -2515,7 +2515,7 @@ void rtllib_stop_all_queues(struct rtllib_device *ieee)
 	unsigned int i;
 
 	for (i = 0; i < ieee->dev->num_tx_queues; i++)
-		netdev_get_tx_queue(ieee->dev, i)->trans_start = jiffies;
+		txq_trans_cond_update(netdev_get_tx_queue(ieee->dev, i));
 
 	netif_tx_stop_all_queues(ieee->dev);
 }
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 143ac02c7f1cc90cf6704574fb0012e1ba830c70..83e6204c0ba3491b56eec5c7f94e55eab7159223 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -4095,10 +4095,21 @@ static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
 	spin_unlock_bh(&txq->_xmit_lock);
 }
 
+/*
+ * txq->trans_start can be read locklessly from dev_watchdog()
+ */
 static inline void txq_trans_update(struct netdev_queue *txq)
 {
 	if (txq->xmit_lock_owner != -1)
-		txq->trans_start = jiffies;
+		WRITE_ONCE(txq->trans_start, jiffies);
+}
+
+static inline void txq_trans_cond_update(struct netdev_queue *txq)
+{
+	unsigned long now = jiffies;
+
+	if (READ_ONCE(txq->trans_start) != now)
+		WRITE_ONCE(txq->trans_start, now);
 }
 
 /* legacy drivers only, netdev_start_xmit() sets txq->trans_start */
@@ -4106,8 +4117,7 @@ static inline void netif_trans_update(struct net_device *dev)
 {
 	struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
 
-	if (txq->trans_start != jiffies)
-		txq->trans_start = jiffies;
+	txq_trans_cond_update(txq);
 }
 
 /**
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 1b4328bd495d54d44a9d51b53c8e8bc18b9cc294..02c46041f76e85571fd2862e02fb409bfd8e6611 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -434,9 +434,9 @@ unsigned long dev_trans_start(struct net_device *dev)
 		dev = vlan_dev_real_dev(dev);
 	else if (netif_is_macvlan(dev))
 		dev = macvlan_dev_real_dev(dev);
-	res = netdev_get_tx_queue(dev, 0)->trans_start;
+	res = READ_ONCE(netdev_get_tx_queue(dev, 0)->trans_start);
 	for (i = 1; i < dev->num_tx_queues; i++) {
-		val = netdev_get_tx_queue(dev, i)->trans_start;
+		val = READ_ONCE(netdev_get_tx_queue(dev, i)->trans_start);
 		if (val && time_after(val, res))
 			res = val;
 	}
@@ -462,7 +462,7 @@ static void dev_watchdog(struct timer_list *t)
 				struct netdev_queue *txq;
 
 				txq = netdev_get_tx_queue(dev, i);
-				trans_start = txq->trans_start;
+				trans_start = READ_ONCE(txq->trans_start);
 				if (netif_xmit_stopped(txq) &&
 				    time_after(jiffies, (trans_start +
 							 dev->watchdog_timeo))) {
@@ -1148,7 +1148,7 @@ static void transition_one_qdisc(struct net_device *dev,
 	rcu_assign_pointer(dev_queue->qdisc, new_qdisc);
 	if (need_watchdog_p) {
-		dev_queue->trans_start = 0;
+		WRITE_ONCE(dev_queue->trans_start, 0);
 		*need_watchdog_p = 1;
 	}
 }
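The scheme this patch puts in place has two halves: the transmit path publishes trans_start with WRITE_ONCE(), and only when the value would actually change (txq_trans_cond_update()), while the watchdog samples it with READ_ONCE() and no lock. A rough userspace C11 analogue of both halves is sketched below; fake_jiffies(), the struct and the timeout check are stand-ins for illustration, not the kernel API.

/* Userspace C11 analogue (illustrative stand-ins, not the kernel API):
 * the hot path refreshes a timestamp only when the value would change,
 * so an unchanged cache line is not dirtied, and a watchdog thread
 * reads the timestamp with a relaxed load instead of taking a lock.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

struct txq {
	atomic_ulong trans_start;	/* "jiffies" of the last transmit */
};

static unsigned long fake_jiffies(void)
{
	return (unsigned long)time(NULL);	/* stand-in for jiffies */
}

/* hot path: analogous to txq_trans_cond_update() */
static void txq_trans_cond_update(struct txq *q)
{
	unsigned long now = fake_jiffies();

	if (atomic_load_explicit(&q->trans_start, memory_order_relaxed) != now)
		atomic_store_explicit(&q->trans_start, now, memory_order_relaxed);
}

/* watchdog side: analogous to READ_ONCE(txq->trans_start) */
static int txq_timed_out(struct txq *q, unsigned long timeout)
{
	unsigned long start = atomic_load_explicit(&q->trans_start,
						   memory_order_relaxed);

	return fake_jiffies() - start > timeout;
}

int main(void)
{
	struct txq q = { .trans_start = 0 };

	txq_trans_cond_update(&q);
	printf("timed out: %d\n", txq_timed_out(&q, 5));
	return 0;
}

The conditional store matters on the hot path: when trans_start already equals the current tick, the cache line is not dirtied again, so busy transmitters do not keep bouncing it to the watchdog's CPU.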
From patchwork Wed Nov 17 03:29:23 2021
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 12623595
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski
Cc: netdev, Eric Dumazet, Eric Dumazet
Subject: [PATCH net-next 3/4] net: do not inline netif_tx_lock()/netif_tx_unlock()
Date: Tue, 16 Nov 2021 19:29:23 -0800
Message-Id: <20211117032924.1740327-4-eric.dumazet@gmail.com>
In-Reply-To: <20211117032924.1740327-1-eric.dumazet@gmail.com>
References: <20211117032924.1740327-1-eric.dumazet@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

These are not on the fast path; there is no point in inlining them.

Also provide netif_freeze_queues()/netif_unfreeze_queues() so that
dev_watchdog() can use them in the following patch.

Signed-off-by: Eric Dumazet
---
 include/linux/netdevice.h | 39 ++----------------------------
 net/sched/sch_generic.c   | 51 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+), 37 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 83e6204c0ba3491b56eec5c7f94e55eab7159223..28e79ef5ca06f66a788ce3e3f59d158be9150332 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -4126,27 +4126,7 @@ static inline void netif_trans_update(struct net_device *dev)
  *
  *	Get network device transmit lock
  */
-static inline void netif_tx_lock(struct net_device *dev)
-{
-	unsigned int i;
-	int cpu;
-
-	spin_lock(&dev->tx_global_lock);
-	cpu = smp_processor_id();
-	for (i = 0; i < dev->num_tx_queues; i++) {
-		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
-
-		/* We are the only thread of execution doing a
-		 * freeze, but we have to grab the _xmit_lock in
-		 * order to synchronize with threads which are in
-		 * the ->hard_start_xmit() handler and already
-		 * checked the frozen bit.
-		 */
-		__netif_tx_lock(txq, cpu);
-		set_bit(__QUEUE_STATE_FROZEN, &txq->state);
-		__netif_tx_unlock(txq);
-	}
-}
+void netif_tx_lock(struct net_device *dev);
 
 static inline void netif_tx_lock_bh(struct net_device *dev)
 {
@@ -4154,22 +4134,7 @@ static inline void netif_tx_lock_bh(struct net_device *dev)
 	netif_tx_lock(dev);
 }
 
-static inline void netif_tx_unlock(struct net_device *dev)
-{
-	unsigned int i;
-
-	for (i = 0; i < dev->num_tx_queues; i++) {
-		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
-
-		/* No need to grab the _xmit_lock here.  If the
-		 * queue is not stopped for another reason, we
-		 * force a schedule.
-		 */
-		clear_bit(__QUEUE_STATE_FROZEN, &txq->state);
-		netif_schedule_queue(txq);
-	}
-	spin_unlock(&dev->tx_global_lock);
-}
+void netif_tx_unlock(struct net_device *dev);
 
 static inline void netif_tx_unlock_bh(struct net_device *dev)
 {
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 02c46041f76e85571fd2862e02fb409bfd8e6611..389e0d8fc68d12cf092a975511729a8dae1b29fb 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -445,6 +445,57 @@ unsigned long dev_trans_start(struct net_device *dev)
 }
 EXPORT_SYMBOL(dev_trans_start);
 
+static void netif_freeze_queues(struct net_device *dev)
+{
+	unsigned int i;
+	int cpu;
+
+	cpu = smp_processor_id();
+	for (i = 0; i < dev->num_tx_queues; i++) {
+		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+
+		/* We are the only thread of execution doing a
+		 * freeze, but we have to grab the _xmit_lock in
+		 * order to synchronize with threads which are in
+		 * the ->hard_start_xmit() handler and already
+		 * checked the frozen bit.
+		 */
+		__netif_tx_lock(txq, cpu);
+		set_bit(__QUEUE_STATE_FROZEN, &txq->state);
+		__netif_tx_unlock(txq);
+	}
+}
+
+void netif_tx_lock(struct net_device *dev)
+{
+	spin_lock(&dev->tx_global_lock);
+	netif_freeze_queues(dev);
+}
+EXPORT_SYMBOL(netif_tx_lock);
+
+static void netif_unfreeze_queues(struct net_device *dev)
+{
+	unsigned int i;
+
+	for (i = 0; i < dev->num_tx_queues; i++) {
+		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+
+		/* No need to grab the _xmit_lock here.  If the
+		 * queue is not stopped for another reason, we
+		 * force a schedule.
+		 */
+		clear_bit(__QUEUE_STATE_FROZEN, &txq->state);
+		netif_schedule_queue(txq);
+	}
+}
+
+void netif_tx_unlock(struct net_device *dev)
+{
+	netif_unfreeze_queues(dev);
+	spin_unlock(&dev->tx_global_lock);
+}
+EXPORT_SYMBOL(netif_tx_unlock);
+
 static void dev_watchdog(struct timer_list *t)
 {
 	struct net_device *dev = from_timer(dev, t, watchdog_timer);
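The reshuffle above splits the old inline bodies into netif_freeze_queues()/netif_unfreeze_queues(), with netif_tx_lock()/netif_tx_unlock() reduced to global-lock wrappers around them. A small pthread/C11 sketch of that decomposition follows; the types and the frozen flag are illustrative stand-ins, not the real netdev_queue state bits.

/* Userspace pthread/C11 sketch (illustrative types, not the kernel
 * implementation): the per-queue freeze step is its own helper, and
 * the tx-lock helpers are thin wrappers that add the global lock.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NUM_TX_QUEUES 4

struct txq {
	pthread_mutex_t xmit_lock;
	atomic_bool frozen;
};

struct netdev {
	pthread_mutex_t tx_global_lock;
	struct txq txq[NUM_TX_QUEUES];
};

static void netif_freeze_queues(struct netdev *dev)
{
	for (int i = 0; i < NUM_TX_QUEUES; i++) {
		/* take each queue lock so transmitters that already
		 * checked the frozen flag have drained before we return */
		pthread_mutex_lock(&dev->txq[i].xmit_lock);
		atomic_store(&dev->txq[i].frozen, true);
		pthread_mutex_unlock(&dev->txq[i].xmit_lock);
	}
}

static void netif_unfreeze_queues(struct netdev *dev)
{
	for (int i = 0; i < NUM_TX_QUEUES; i++)
		atomic_store(&dev->txq[i].frozen, false);
}

static void netif_tx_lock(struct netdev *dev)
{
	pthread_mutex_lock(&dev->tx_global_lock);
	netif_freeze_queues(dev);
}

static void netif_tx_unlock(struct netdev *dev)
{
	netif_unfreeze_queues(dev);
	pthread_mutex_unlock(&dev->tx_global_lock);
}

int main(void)
{
	struct netdev dev;

	pthread_mutex_init(&dev.tx_global_lock, NULL);
	for (int i = 0; i < NUM_TX_QUEUES; i++) {
		pthread_mutex_init(&dev.txq[i].xmit_lock, NULL);
		atomic_init(&dev.txq[i].frozen, false);
	}

	netif_tx_lock(&dev);	/* global lock + freeze every queue */
	netif_tx_unlock(&dev);	/* unfreeze + drop the global lock */
	return 0;
}

The point of the split shows up in the next patch: a caller that already holds the global lock can invoke the freeze/unfreeze pair on its own, only when it actually needs to quiesce the queues.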
From patchwork Wed Nov 17 03:29:24 2021
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 12623597
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski
Cc: netdev, Eric Dumazet, Eric Dumazet
Subject: [PATCH net-next 4/4] net: no longer stop all TX queues in dev_watchdog()
Date: Tue, 16 Nov 2021 19:29:24 -0800
Message-Id: <20211117032924.1740327-5-eric.dumazet@gmail.com>
In-Reply-To: <20211117032924.1740327-1-eric.dumazet@gmail.com>
References: <20211117032924.1740327-1-eric.dumazet@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

There is no reason to stop all TX queues from dev_watchdog().

Not only does this stop feeding the NIC, it also migrates all qdiscs
to be serviced on the cpu calling netif_tx_unlock(), causing a
potential latency artifact.

Signed-off-by: Eric Dumazet
---
 net/sched/sch_generic.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 389e0d8fc68d12cf092a975511729a8dae1b29fb..d33804d41c5c5a9047c808fd37ba65ae8875fc79 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -500,7 +500,7 @@ static void dev_watchdog(struct timer_list *t)
 {
 	struct net_device *dev = from_timer(dev, t, watchdog_timer);
 
-	netif_tx_lock(dev);
+	spin_lock(&dev->tx_global_lock);
 	if (!qdisc_tx_is_noop(dev)) {
 		if (netif_device_present(dev) &&
 		    netif_running(dev) &&
@@ -523,11 +523,13 @@ static void dev_watchdog(struct timer_list *t)
 			}
 		}
 
-		if (some_queue_timedout) {
+		if (unlikely(some_queue_timedout)) {
 			trace_net_dev_xmit_timeout(dev, i);
 			WARN_ONCE(1, KERN_INFO "NETDEV WATCHDOG: %s (%s): transmit queue %u timed out\n",
 				  dev->name, netdev_drivername(dev), i);
+			netif_freeze_queues(dev);
 			dev->netdev_ops->ndo_tx_timeout(dev, i);
+			netif_unfreeze_queues(dev);
 		}
 		if (!mod_timer(&dev->watchdog_timer,
 			       round_jiffies(jiffies +
@@ -535,7 +537,7 @@ static void dev_watchdog(struct timer_list *t)
 			dev_hold(dev);
 		}
 	}
-	netif_tx_unlock(dev);
+	spin_unlock(&dev->tx_global_lock);
 
 	dev_put(dev);
 }
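Combined with the helpers from patch 3, the watchdog now holds only tx_global_lock, scans per-queue timestamps locklessly, and freezes the queues just around the driver's ndo_tx_timeout() callback. A simplified userspace sketch of that control flow follows; the names, the use of seconds instead of jiffies, and the stubbed freeze/unfreeze helpers are assumptions for illustration only.

/* Userspace sketch (assumed names, seconds instead of jiffies, stubbed
 * freeze helpers): the watchdog scans timestamps locklessly and only
 * quiesces the queues around the driver's timeout handler.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define NUM_TX_QUEUES	4
#define WATCHDOG_TIMEO	5UL	/* stand-in for dev->watchdog_timeo */

struct netdev {
	pthread_mutex_t tx_global_lock;
	atomic_ulong trans_start[NUM_TX_QUEUES];
};

static unsigned long now_seconds(void)
{
	return (unsigned long)time(NULL);	/* stand-in for jiffies */
}

/* stand-ins for netif_freeze_queues()/netif_unfreeze_queues() */
static void freeze_queues(struct netdev *dev)   { (void)dev; puts("frozen"); }
static void unfreeze_queues(struct netdev *dev) { (void)dev; puts("unfrozen"); }

static void ndo_tx_timeout(struct netdev *dev, unsigned int txqueue)
{
	(void)dev;
	printf("driver handles timeout on queue %u\n", txqueue);
}

static void dev_watchdog(struct netdev *dev)
{
	bool some_queue_timedout = false;
	unsigned int i;

	pthread_mutex_lock(&dev->tx_global_lock);
	for (i = 0; i < NUM_TX_QUEUES; i++) {
		/* lockless read; transmitters keep running meanwhile */
		unsigned long start =
			atomic_load_explicit(&dev->trans_start[i],
					     memory_order_relaxed);

		if (now_seconds() - start > WATCHDOG_TIMEO) {
			some_queue_timedout = true;
			break;
		}
	}

	if (some_queue_timedout) {
		/* queues are quiesced only around the driver callback */
		freeze_queues(dev);
		ndo_tx_timeout(dev, i);
		unfreeze_queues(dev);
	}
	pthread_mutex_unlock(&dev->tx_global_lock);
}

int main(void)
{
	struct netdev dev = { .tx_global_lock = PTHREAD_MUTEX_INITIALIZER };

	/* pretend queue 0 last transmitted a minute ago */
	atomic_store(&dev.trans_start[0], now_seconds() - 60);
	dev_watchdog(&dev);
	return 0;
}

Only the timed-out case pays the cost of freezing the queues; the common case is a lockless scan followed by re-arming the timer.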