From patchwork Wed Apr 12 01:50:36 2023
From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
    Jakub Kicinski, Jesse Brandeburg, corbet@lwn.net,
    linux-doc@vger.kernel.org
Subject: [PATCH net-next v2 1/3] net: docs: update the sample code in driver.rst
Date: Tue, 11 Apr 2023 18:50:36 -0700
Message-Id: <20230412015038.674023-2-kuba@kernel.org>
In-Reply-To: <20230412015038.674023-1-kuba@kernel.org>
References: <20230412015038.674023-1-kuba@kernel.org>

The sample code talks about single-queue devices and uses locks.
Update it to something resembling more modern code.
Make sure we mention use of READ_ONCE() / WRITE_ONCE().

Change the comment which talked about a consumer on the xmit
side. AFAIU xmit is the producer and completions are the consumer.
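To make the producer/consumer split concrete, here is a minimal sketch
of the completion path which pairs with the new xmit sample in the diff
below. All drv_* names, the bufs/hw_cons fields and the cmpl_* counters
are illustrative placeholders; only napi_consume_skb() and
netif_txq_completed_wake() are real kernel helpers:

	/* Hypothetical completion handler, called from NAPI poll.  This
	 * is the consumer side which pairs with drv_hard_start_xmit();
	 * dr->hw_cons stands in for however the device reports
	 * completed descriptors.
	 */
	static void drv_tx_complete(struct drv_ring *dr,
				    struct netdev_queue *txq)
	{
		unsigned int cmpl_pkts = 0, cmpl_bytes = 0;
		u32 cons = dr->cons;

		while (cons != dr->hw_cons) {
			struct sk_buff *skb;

			skb = dr->bufs[cons & dr->tx_ring_mask].skb;
			cmpl_pkts++;
			cmpl_bytes += skb->len;
			napi_consume_skb(skb, 1);
			cons++;
		}

		/* Paired with the READ_ONCE() in drv_tx_avail(); no lock
		 * is held, so the store must stay whole.
		 */
		WRITE_ONCE(dr->cons, cons);

		/* Report completed work and wake the queue if it was
		 * stopped and enough descriptors have been freed up.
		 */
		netif_txq_completed_wake(txq, cmpl_pkts, cmpl_bytes,
					 drv_tx_avail(dr),
					 2 * MAX_SKB_FRAGS);
	}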
Reviewed-by: Eric Dumazet
Reviewed-by: Jesse Brandeburg
Signed-off-by: Jakub Kicinski
---
CC: corbet@lwn.net
CC: linux-doc@vger.kernel.org
---
 Documentation/networking/driver.rst | 61 +++++++++++++----------------
 1 file changed, 27 insertions(+), 34 deletions(-)

diff --git a/Documentation/networking/driver.rst b/Documentation/networking/driver.rst
index 4071f2c00f8b..4f5dfa9c022e 100644
--- a/Documentation/networking/driver.rst
+++ b/Documentation/networking/driver.rst
@@ -47,30 +47,43 @@ Instead it must maintain the queue properly. For example,
 
 .. code-block:: c
 
+	static u32 drv_tx_avail(struct drv_ring *dr)
+	{
+		u32 used = READ_ONCE(dr->prod) - READ_ONCE(dr->cons);
+
+		return dr->tx_ring_size - (used & dr->tx_ring_mask);
+	}
+
 	static netdev_tx_t drv_hard_start_xmit(struct sk_buff *skb,
 					       struct net_device *dev)
 	{
 		struct drv *dp = netdev_priv(dev);
+		struct netdev_queue *txq;
+		struct drv_ring *dr;
+		int idx;
+
+		idx = skb_get_queue_mapping(skb);
+		dr = dp->tx_rings[idx];
+		txq = netdev_get_tx_queue(dev, idx);
 
-		lock_tx(dp);
 		//...
-		/* This is a hard error log it. */
-		if (TX_BUFFS_AVAIL(dp) <= (skb_shinfo(skb)->nr_frags + 1)) {
+		/* This should be a very rare race - log it. */
+		if (drv_tx_avail(dr) <= skb_shinfo(skb)->nr_frags + 1) {
 			netif_stop_queue(dev);
-			unlock_tx(dp);
-			printk(KERN_ERR PFX "%s: BUG! Tx Ring full when queue awake!\n",
-			       dev->name);
+			netdev_warn(dev, "Tx Ring full when queue awake!\n");
 			return NETDEV_TX_BUSY;
 		}
 
 		//... queue packet to card ...
-		//... update tx consumer index ...
 
-		if (TX_BUFFS_AVAIL(dp) <= (MAX_SKB_FRAGS + 1))
-			netif_stop_queue(dev);
+		netdev_tx_sent_queue(txq, skb->len);
+
+		//... update tx producer index using WRITE_ONCE() ...
+
+		if (!netif_txq_maybe_stop(txq, drv_tx_avail(dr),
+					  MAX_SKB_FRAGS + 1, 2 * MAX_SKB_FRAGS))
+			dr->stats.stopped++;
 
-		//...
-		unlock_tx(dp);
 		//...
 		return NETDEV_TX_OK;
 	}
@@ -79,30 +92,10 @@ Instead it must maintain the queue properly. For example,
 
 .. code-block:: c
 
-	if (netif_queue_stopped(dp->dev) &&
-	    TX_BUFFS_AVAIL(dp) > (MAX_SKB_FRAGS + 1))
-		netif_wake_queue(dp->dev);
-
-For a non-scatter-gather supporting card, the three tests simply become:
-
-.. code-block:: c
-
-	/* This is a hard error log it. */
-	if (TX_BUFFS_AVAIL(dp) <= 0)
-
-and:
-
-.. code-block:: c
-
-	if (TX_BUFFS_AVAIL(dp) == 0)
-
-and:
-
-.. code-block:: c
+	//... update tx consumer index using WRITE_ONCE() ...
 
-	if (netif_queue_stopped(dp->dev) &&
-	    TX_BUFFS_AVAIL(dp) > 0)
-		netif_wake_queue(dp->dev);
+	netif_txq_completed_wake(txq, cmpl_pkts, cmpl_bytes,
+				 drv_tx_avail(dr), 2 * MAX_SKB_FRAGS);
 
 Lockless queue stop / wake helper macros
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Wed Apr 12 01:50:37 2023
From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
    Jakub Kicinski, Jesse Brandeburg, michael.chan@broadcom.com
Subject: [PATCH net-next v2 2/3] bnxt: use READ_ONCE/WRITE_ONCE for ring indexes
Date: Tue, 11 Apr 2023 18:50:37 -0700
Message-Id: <20230412015038.674023-3-kuba@kernel.org>
In-Reply-To: <20230412015038.674023-1-kuba@kernel.org>
References: <20230412015038.674023-1-kuba@kernel.org>

Eric points out that we should make sure that ring index updates
are wrapped in the appropriate READ_ONCE/WRITE_ONCE macros.
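To sketch why this matters (illustrative code, not bnxt's): the Tx
producer and consumer indexes are read and written by different CPUs
with no lock held, so the annotations are what guarantee each access
stays a single, whole load or store:

	#include <linux/compiler.h>	/* READ_ONCE() / WRITE_ONCE() */
	#include <linux/types.h>	/* u32 */

	/* Illustrative ring: prod is written only by the xmit path,
	 * cons only by the completion path, and each side reads the
	 * other's index locklessly.
	 */
	struct ring {
		u32 prod;
		u32 cons;
		u32 size;
		u32 mask;
	};

	static u32 ring_avail(const struct ring *r)
	{
		/* READ_ONCE() makes each index a single, untorn load, so
		 * we compute on one coherent snapshot even while the
		 * other CPU is advancing its index.
		 */
		u32 used = READ_ONCE(r->prod) - READ_ONCE(r->cons);

		return r->size - (used & r->mask);
	}

	static void ring_produce(struct ring *r, u32 n)
	{
		/* A plain read of r->prod is fine, only this path ever
		 * writes it; WRITE_ONCE() keeps the store whole for the
		 * lockless readers on the completion side.
		 */
		WRITE_ONCE(r->prod, r->prod + n);
	}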
Suggested-by: Eric Dumazet
Reviewed-by: Eric Dumazet
Reviewed-by: Jesse Brandeburg
Signed-off-by: Jakub Kicinski
Reviewed-by: Michael Chan
---
v2:
 - cover writes in bnxt_xdp.c
v1: https://lore.kernel.org/all/20230411013323.513688-3-kuba@kernel.org/
---
CC: michael.chan@broadcom.com
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 6 +++---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     | 9 ++++-----
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 6 +++---
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index f7602d8d79e3..92289ab2f34a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -472,7 +472,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		prod = NEXT_TX(prod);
 		tx_push->doorbell =
 			cpu_to_le32(DB_KEY_TX_PUSH | DB_LONG_TX_PUSH | prod);
-		txr->tx_prod = prod;
+		WRITE_ONCE(txr->tx_prod, prod);
 
 		tx_buf->is_push = 1;
 		netdev_tx_sent_queue(txq, skb->len);
@@ -583,7 +583,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	wmb();
 
 	prod = NEXT_TX(prod);
-	txr->tx_prod = prod;
+	WRITE_ONCE(txr->tx_prod, prod);
 
 	if (!netdev_xmit_more() || netif_xmit_stopped(txq))
 		bnxt_txr_db_kick(bp, txr, prod);
@@ -688,7 +688,7 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
 		dev_kfree_skb_any(skb);
 	}
 
-	txr->tx_cons = cons;
+	WRITE_ONCE(txr->tx_cons, cons);
 
 	__netif_txq_completed_wake(txq, nr_pkts, tx_bytes,
 				   bnxt_tx_avail(bp, txr), bp->tx_wake_thresh,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 18cac98ba58e..080e73496066 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -2231,13 +2231,12 @@ struct bnxt {
 #define SFF_MODULE_ID_QSFP28			0x11
 #define BNXT_MAX_PHY_I2C_RESP_SIZE	64
 
-static inline u32 bnxt_tx_avail(struct bnxt *bp, struct bnxt_tx_ring_info *txr)
+static inline u32 bnxt_tx_avail(struct bnxt *bp,
+				const struct bnxt_tx_ring_info *txr)
 {
-	/* Tell compiler to fetch tx indices from memory. */
-	barrier();
+	u32 used = READ_ONCE(txr->tx_prod) - READ_ONCE(txr->tx_cons);
 
-	return bp->tx_ring_size -
-		((txr->tx_prod - txr->tx_cons) & bp->tx_ring_mask);
+	return bp->tx_ring_size - (used & bp->tx_ring_mask);
 }
 
 static inline void bnxt_writeq(struct bnxt *bp, u64 val,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 5843c93b1711..4efa5fe6972b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -64,7 +64,7 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
 		int frag_len;
 
 		prod = NEXT_TX(prod);
-		txr->tx_prod = prod;
+		WRITE_ONCE(txr->tx_prod, prod);
 
 		/* first fill up the first buffer */
 		frag_tx_buf = &txr->tx_buf_ring[prod];
@@ -94,7 +94,7 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
 	/* Sync TX BD */
 	wmb();
 	prod = NEXT_TX(prod);
-	txr->tx_prod = prod;
+	WRITE_ONCE(txr->tx_prod, prod);
 
 	return tx_buf;
 }
@@ -161,7 +161,7 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
 		}
 		tx_cons = NEXT_TX(tx_cons);
 	}
-	txr->tx_cons = tx_cons;
+	WRITE_ONCE(txr->tx_cons, tx_cons);
 	if (rx_doorbell_needed) {
 		tx_buf = &txr->tx_buf_ring[last_tx_cons];
 		bnxt_db_write(bp, &rxr->rx_db, tx_buf->rx_prod);
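For contrast, a rough before/after of the bnxt_tx_avail() rewrite above,
reusing the illustrative struct ring from the sketch in the previous
patch (neither function is driver code; barrier() and READ_ONCE() come
from linux/compiler.h):

	/* Old style: a full compiler barrier forces every value cached
	 * in registers to be refetched, but the index loads themselves
	 * remain plain, unannotated accesses.
	 */
	static u32 avail_with_barrier(const struct ring *r)
	{
		barrier();
		return r->size - ((r->prod - r->cons) & r->mask);
	}

	/* New style: annotate exactly the two racy loads.  The intent
	 * is visible in the code, unrelated cached values stay cached,
	 * and tools like KCSAN can reason about the data race.
	 */
	static u32 avail_with_read_once(const struct ring *r)
	{
		u32 used = READ_ONCE(r->prod) - READ_ONCE(r->cons);

		return r->size - (used & r->mask);
	}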
From patchwork Wed Apr 12 01:50:38 2023
From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
    Jakub Kicinski, Jesse Brandeburg, tariqt@nvidia.com,
    linux-rdma@vger.kernel.org
Subject: [PATCH net-next v2 3/3] mlx4: use READ_ONCE/WRITE_ONCE for ring indexes
Date: Tue, 11 Apr 2023 18:50:38 -0700
Message-Id: <20230412015038.674023-4-kuba@kernel.org>
In-Reply-To: <20230412015038.674023-1-kuba@kernel.org>
References: <20230412015038.674023-1-kuba@kernel.org>

Eric points out that we should make sure that ring index updates
are wrapped in the appropriate READ_ONCE/WRITE_ONCE macros.

Suggested-by: Eric Dumazet
Reviewed-by: Eric Dumazet
Reviewed-by: Jesse Brandeburg
Signed-off-by: Jakub Kicinski
Reviewed-by: Tariq Toukan
---
CC: tariqt@nvidia.com
CC: linux-rdma@vger.kernel.org
---
 drivers/net/ethernet/mellanox/mlx4/en_tx.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index 2f79378fbf6e..65cb63f6c465 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -228,7 +228,9 @@ void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
 
 static inline bool mlx4_en_is_tx_ring_full(struct mlx4_en_tx_ring *ring)
 {
-	return ring->prod - ring->cons > ring->full_size;
+	u32 used = READ_ONCE(ring->prod) - READ_ONCE(ring->cons);
+
+	return used > ring->full_size;
 }
 
 static void mlx4_en_stamp_wqe(struct mlx4_en_priv *priv,
@@ -1083,7 +1085,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
 		op_own |= cpu_to_be32(MLX4_WQE_CTRL_IIP);
 	}
 
-	ring->prod += nr_txbb;
+	WRITE_ONCE(ring->prod, ring->prod + nr_txbb);
 
 	/* If we used a bounce buffer then copy descriptor back into place */
 	if (unlikely(bounce))
@@ -1214,7 +1216,7 @@ netdev_tx_t mlx4_en_xmit_frame(struct mlx4_en_rx_ring *rx_ring,
 
 	rx_ring->xdp_tx++;
 
-	ring->prod += MLX4_EN_XDP_TX_NRTXBB;
+	WRITE_ONCE(ring->prod, ring->prod + MLX4_EN_XDP_TX_NRTXBB);
 
 	/* Ensure new descriptor hits memory
 	 * before setting ownership of this descriptor to HW