From patchwork Thu Dec 24 14:24:16 2020
X-Patchwork-Submitter: Sieng-Piaw Liew
X-Patchwork-Id: 11989655
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, netdev@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 1/6] bcm63xx_enet: batch process rx path
Date: Thu, 24 Dec 2020 22:24:16 +0800
Message-Id: <20201224142421.32350-2-liew.s.piaw@gmail.com>
In-Reply-To: <20201224142421.32350-1-liew.s.piaw@gmail.com>

Use netif_receive_skb_list to batch process RX skbs.

Tested on BCM6328 320 MHz using iperf3 -M 512, increasing performance
by 12.5%.

Before:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00 sec   120 MBytes   33.7 Mbits/sec  277  sender
[  4]   0.00-30.00 sec   120 MBytes   33.5 Mbits/sec       receiver

After:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00 sec   136 MBytes   37.9 Mbits/sec  203  sender
[  4]   0.00-30.00 sec   135 MBytes   37.7 Mbits/sec       receiver

Signed-off-by: Sieng Piaw Liew
Acked-by: Florian Fainelli
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 916824cca3fd..b82b7805c36a 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -297,10 +297,12 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
 static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 {
         struct bcm_enet_priv *priv;
+        struct list_head rx_list;
         struct device *kdev;
         int processed;
 
         priv = netdev_priv(dev);
+        INIT_LIST_HEAD(&rx_list);
         kdev = &priv->pdev->dev;
         processed = 0;
 
@@ -391,10 +393,12 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
                 skb->protocol = eth_type_trans(skb, dev);
                 dev->stats.rx_packets++;
                 dev->stats.rx_bytes += len;
-                netif_receive_skb(skb);
+                list_add_tail(&skb->list, &rx_list);
 
         } while (--budget > 0);
 
+        netif_receive_skb_list(&rx_list);
+
         if (processed || !priv->rx_desc_count) {
                 bcm_enet_refill_rx(dev);

From patchwork Thu Dec 24 14:24:17 2020
X-Patchwork-Submitter: Sieng-Piaw Liew
X-Patchwork-Id: 11989659
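An aside on the rx batching technique used by the first patch in this series: instead of handing each packet to the stack as it is reaped from the ring, the poll loop links packets onto a local list and delivers them in one call, which amortizes the per-packet cost of entering the network stack. The following is a userspace C sketch of that pattern, not the kernel API: `struct pkt` and `struct pkt_list` are illustrative stand-ins for `sk_buff` and `list_head`, and `deliver_list()` plays the role of `netif_receive_skb_list()`.

```c
#include <stddef.h>

/* Stand-in for struct sk_buff: only the list linkage and length matter. */
struct pkt {
    struct pkt *next;
    int len;
};

/* Minimal tail-linked list, mimicking the driver's local rx_list. */
struct pkt_list {
    struct pkt *head;
    struct pkt **tail;
};

static inline void pkt_list_init(struct pkt_list *l)
{
    l->head = NULL;
    l->tail = &l->head;
}

/* Poll-loop side: queue each received packet instead of delivering it. */
static inline void pkt_list_add_tail(struct pkt_list *l, struct pkt *p)
{
    p->next = NULL;
    *l->tail = p;
    l->tail = &p->next;
}

/* After the budget loop: one hand-off for the whole batch, the way
 * netif_receive_skb_list() amortizes per-packet stack-entry costs.
 * Returns the packet count; total bytes are reported through *bytes. */
static inline int deliver_list(struct pkt_list *l, int *bytes)
{
    int n = 0;

    *bytes = 0;
    for (struct pkt *p = l->head; p; p = p->next) {
        n++;
        *bytes += p->len;
    }
    pkt_list_init(l); /* the list is consumed by delivery */
    return n;
}
```

The point of the tail pointer (`struct pkt **tail`) is that appending stays O(1) no matter how large the batch grows within one NAPI budget.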
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, netdev@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 2/6] bcm63xx_enet: add BQL support
Date: Thu, 24 Dec 2020 22:24:17 +0800
Message-Id: <20201224142421.32350-3-liew.s.piaw@gmail.com>
In-Reply-To: <20201224142421.32350-1-liew.s.piaw@gmail.com>

Add Byte Queue Limits support to reduce/remove bufferbloat in
bcm63xx_enet.
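Byte Queue Limits reduce bufferbloat by tracking the bytes in flight between `netdev_sent_queue()` and `netdev_completed_queue()` and stopping the queue when too much data sits in the hardware ring, so excess packets wait in the qdisc where they can still be scheduled or dropped. A simplified C model of that accounting follows; it is a sketch, not the kernel implementation: the real BQL limit adapts dynamically, while this one is fixed, and all names here are illustrative.

```c
/* Simplified BQL-style accounting with a fixed byte limit. */
struct byte_queue {
    unsigned int inflight; /* bytes handed to hardware, not yet completed */
    unsigned int limit;    /* stop queueing once inflight exceeds this */
    int stopped;           /* 1 while the queue is throttled */
};

/* Analogue of netdev_sent_queue(): called for every transmitted skb. */
static inline void bq_sent(struct byte_queue *q, unsigned int bytes)
{
    q->inflight += bytes;
    if (q->inflight > q->limit)
        q->stopped = 1; /* back-pressure: let the qdisc hold packets */
}

/* Analogue of netdev_completed_queue(): called from TX reclaim. */
static inline void bq_completed(struct byte_queue *q, unsigned int bytes)
{
    q->inflight -= bytes;
    if (q->stopped && q->inflight <= q->limit)
        q->stopped = 0; /* enough has drained, wake the queue */
}
```

This is why the patch accumulates `bytes += skb->len` in the reclaim loop: completions must be reported in bytes, not just packets, for the limit to be meaningful across mixed frame sizes.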
Signed-off-by: Sieng Piaw Liew
Acked-by: Florian Fainelli
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index b82b7805c36a..90f8214b4d22 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -417,9 +417,11 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 static int bcm_enet_tx_reclaim(struct net_device *dev, int force)
 {
         struct bcm_enet_priv *priv;
+        unsigned int bytes;
         int released;
 
         priv = netdev_priv(dev);
+        bytes = 0;
         released = 0;
 
         while (priv->tx_desc_count < priv->tx_ring_size) {
@@ -456,10 +458,13 @@ static int bcm_enet_tx_reclaim(struct net_device *dev, int force)
                 if (desc->len_stat & DMADESC_UNDER_MASK)
                         dev->stats.tx_errors++;
 
+                bytes += skb->len;
                 dev_kfree_skb(skb);
                 released++;
         }
 
+        netdev_completed_queue(dev, released, bytes);
+
         if (netif_queue_stopped(dev) && released)
                 netif_wake_queue(dev);
 
@@ -626,6 +631,8 @@ bcm_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
         desc->len_stat = len_stat;
         wmb();
 
+        netdev_sent_queue(dev, skb->len);
+
         /* kick tx dma */
         enet_dmac_writel(priv, priv->dma_chan_en_mask,
                          ENETDMAC_CHANCFG, priv->tx_chan);
@@ -1169,6 +1176,7 @@ static int bcm_enet_stop(struct net_device *dev)
         kdev = &priv->pdev->dev;
 
         netif_stop_queue(dev);
+        netdev_reset_queue(dev);
         napi_disable(&priv->napi);
         if (priv->has_phy)
                 phy_stop(dev->phydev);
@@ -2338,6 +2346,7 @@ static int bcm_enetsw_stop(struct net_device *dev)
         del_timer_sync(&priv->swphy_poll);
 
         netif_stop_queue(dev);
+        netdev_reset_queue(dev);
         napi_disable(&priv->napi);
         del_timer_sync(&priv->rx_timeout);

From patchwork Thu Dec 24 14:24:18 2020
X-Patchwork-Submitter: Sieng-Piaw Liew
X-Patchwork-Id: 11989657
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, netdev@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 3/6] bcm63xx_enet: add xmit_more support
Date: Thu, 24 Dec 2020 22:24:18 +0800
Message-Id: <20201224142421.32350-4-liew.s.piaw@gmail.com>
In-Reply-To: <20201224142421.32350-1-liew.s.piaw@gmail.com>

Support bulking the hardware TX queue by using netdev_xmit_more().
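The xmit_more optimization defers the doorbell: the MMIO kick that starts TX DMA is skipped while the stack indicates more packets are about to be queued, unless the ring just ran out of descriptors and must drain immediately. A small C model of that decision logic follows; it is a sketch with illustrative names, where `ring_doorbell()` stands in for the driver's `enet_dmac_writel()` call and `kicks` simply counts doorbell writes.

```c
/* Model of the doorbell policy: ring the hardware only when the stack
 * has no more packets queued for us (xmit_more == 0), or when the TX
 * ring is exhausted and must start draining now. */
struct tx_ring {
    int desc_free; /* free descriptors remaining */
    int kicks;     /* doorbell (MMIO) writes issued */
};

static inline void ring_doorbell(struct tx_ring *r)
{
    r->kicks++; /* stands in for the enet_dmac_writel() kick */
}

static inline void xmit_one(struct tx_ring *r, int xmit_more)
{
    r->desc_free--; /* one descriptor consumed by this packet */

    if (!xmit_more || r->desc_free == 0)
        ring_doorbell(r);
}
```

The win is that a burst of N packets costs one expensive uncached register write instead of N, while the `desc_free == 0` escape hatch guarantees the hardware is always kicked before the driver stops the queue to wait for completions.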
Signed-off-by: Sieng Piaw Liew
Acked-by: Florian Fainelli
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 90f8214b4d22..452968f168ed 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -633,14 +633,17 @@ bcm_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
         netdev_sent_queue(dev, skb->len);
 
-        /* kick tx dma */
-        enet_dmac_writel(priv, priv->dma_chan_en_mask,
-                         ENETDMAC_CHANCFG, priv->tx_chan);
-
         /* stop queue if no more desc available */
         if (!priv->tx_desc_count)
                 netif_stop_queue(dev);
 
+        /* kick tx dma */
+        if (!netdev_xmit_more() || !priv->tx_desc_count)
+                enet_dmac_writel(priv, priv->dma_chan_en_mask,
+                                 ENETDMAC_CHANCFG, priv->tx_chan);
+
+
+
         dev->stats.tx_bytes += skb->len;
         dev->stats.tx_packets++;
         ret = NETDEV_TX_OK;

From patchwork Thu Dec 24 14:24:19 2020
X-Patchwork-Submitter: Sieng-Piaw Liew
X-Patchwork-Id: 11989637
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, netdev@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 4/6] bcm63xx_enet: alloc rx skb with NET_IP_ALIGN
Date: Thu, 24 Dec 2020 22:24:19 +0800
Message-Id: <20201224142421.32350-5-liew.s.piaw@gmail.com>
In-Reply-To: <20201224142421.32350-1-liew.s.piaw@gmail.com>

Use netdev_alloc_skb_ip_align on newer SoCs with an integrated switch
(enetsw) when refilling RX. This increases packet processing performance
by 30% (together with netif_receive_skb_list).

Non-enetsw SoCs cannot function with the extra pad, so they continue to
use the regular netdev_alloc_skb.

Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
performance.
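Why the two-byte pad helps: receive buffers start on a 4-byte boundary and the Ethernet header is 14 bytes, so without padding the IP header begins at an odd word offset. Reserving NET_IP_ALIGN (2) bytes shifts the frame so the IP header lands 4-byte aligned, which matters on CPUs that take a penalty (or a fault) on unaligned loads. A C sketch of the arithmetic; the constants mirror the kernel's, while the helper names are illustrative:

```c
#define ETH_HLEN     14 /* Ethernet header: dst MAC + src MAC + ethertype */
#define NET_IP_ALIGN  2 /* pad reserved before the received frame */

/* Offset of the IP header inside a buffer that itself starts on a
 * 4-byte boundary, given `pad` bytes reserved before the frame. */
static inline unsigned int ip_header_offset(unsigned int pad)
{
    return pad + ETH_HLEN;
}

static inline int ip_header_is_4byte_aligned(unsigned int pad)
{
    return ip_header_offset(pad) % 4 == 0;
}
```

The flip side, which this patch respects, is that the DMA engine on non-enetsw SoCs cannot cope with a buffer that does not start at the burst-aligned address, so the pad is applied only where the hardware tolerates it.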
Before:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00 sec   120 MBytes   33.7 Mbits/sec  277  sender
[  4]   0.00-30.00 sec   120 MBytes   33.5 Mbits/sec       receiver

After (+netif_receive_skb_list):
[  4]   0.00-30.00 sec   155 MBytes   43.3 Mbits/sec  354  sender
[  4]   0.00-30.00 sec   154 MBytes   43.1 Mbits/sec       receiver

Signed-off-by: Sieng Piaw Liew
Acked-by: Florian Fainelli
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 452968f168ed..51976ed87d2d 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -237,7 +237,10 @@ static int bcm_enet_refill_rx(struct net_device *dev)
                 desc = &priv->rx_desc_cpu[desc_idx];
 
                 if (!priv->rx_skb[desc_idx]) {
-                        skb = netdev_alloc_skb(dev, priv->rx_skb_size);
+                        if (priv->enet_is_sw)
+                                skb = netdev_alloc_skb_ip_align(dev, priv->rx_skb_size);
+                        else
+                                skb = netdev_alloc_skb(dev, priv->rx_skb_size);
                         if (!skb)
                                 break;
                         priv->rx_skb[desc_idx] = skb;

From patchwork Thu Dec 24 14:24:20 2020
X-Patchwork-Submitter: Sieng-Piaw Liew
X-Patchwork-Id: 11989663
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, netdev@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 5/6] bcm63xx_enet: convert to build_skb
Date: Thu, 24 Dec 2020 22:24:20 +0800
Message-Id: <20201224142421.32350-6-liew.s.piaw@gmail.com>
In-Reply-To: <20201224142421.32350-1-liew.s.piaw@gmail.com>

We can increase the efficiency of the RX path by using raw buffers to
receive packets, then building SKBs around them just before passing
them into the network stack. In contrast, preallocating SKBs too early
reduces CPU cache efficiency.

Check whether we are in NAPI context when refilling RX. Normally we are
almost always running in NAPI context, so dispatch to napi_alloc_frag
directly instead of relying on netdev_alloc_frag, which does the same
but with the overhead of local_bh_disable/enable.

Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
performance. Includes the netif_receive_skb_list and NET_IP_ALIGN
optimizations.
Before:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00 sec   49.9 MBytes  41.9 Mbits/sec  197  sender
[  4]   0.00-10.00 sec   49.3 MBytes  41.3 Mbits/sec       receiver

After:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00 sec   171 MBytes   47.8 Mbits/sec  272  sender
[  4]   0.00-30.00 sec   170 MBytes   47.6 Mbits/sec       receiver

Signed-off-by: Sieng Piaw Liew
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 157 +++++++++----------
 drivers/net/ethernet/broadcom/bcm63xx_enet.h |  14 +-
 2 files changed, 80 insertions(+), 91 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 51976ed87d2d..8c2e97311a2c 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -220,7 +220,7 @@ static void bcm_enet_mdio_write_mii(struct net_device *dev, int mii_id,
 /*
  * refill rx queue
  */
-static int bcm_enet_refill_rx(struct net_device *dev)
+static int bcm_enet_refill_rx(struct net_device *dev, bool napi_mode)
 {
         struct bcm_enet_priv *priv;
 
@@ -228,29 +228,29 @@ static int bcm_enet_refill_rx(struct net_device *dev)
         while (priv->rx_desc_count < priv->rx_ring_size) {
                 struct bcm_enet_desc *desc;
-                struct sk_buff *skb;
-                dma_addr_t p;
                 int desc_idx;
                 u32 len_stat;
 
                 desc_idx = priv->rx_dirty_desc;
                 desc = &priv->rx_desc_cpu[desc_idx];
 
-                if (!priv->rx_skb[desc_idx]) {
-                        if (priv->enet_is_sw)
-                                skb = netdev_alloc_skb_ip_align(dev, priv->rx_skb_size);
+                if (!priv->rx_buf[desc_idx]) {
+                        void *buf;
+
+                        if (likely(napi_mode))
+                                buf = napi_alloc_frag(priv->rx_frag_size);
                         else
-                                skb = netdev_alloc_skb(dev, priv->rx_skb_size);
-                        if (!skb)
+                                buf = netdev_alloc_frag(priv->rx_frag_size);
+                        if (unlikely(!buf))
                                 break;
-                        priv->rx_skb[desc_idx] = skb;
-                        p = dma_map_single(&priv->pdev->dev, skb->data,
-                                           priv->rx_skb_size,
-                                           DMA_FROM_DEVICE);
-                        desc->address = p;
+                        priv->rx_buf[desc_idx] = buf;
+                        desc->address = dma_map_single(&priv->pdev->dev,
+                                                       buf + priv->rx_buf_offset,
+                                                       priv->rx_buf_size,
+                                                       DMA_FROM_DEVICE);
                 }
 
-                len_stat = priv->rx_skb_size << DMADESC_LENGTH_SHIFT;
+                len_stat = priv->rx_buf_size << DMADESC_LENGTH_SHIFT;
                 len_stat |= DMADESC_OWNER_MASK;
                 if (priv->rx_dirty_desc == priv->rx_ring_size - 1) {
                         len_stat |= (DMADESC_WRAP_MASK >> priv->dma_desc_shift);
@@ -290,7 +290,7 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
         struct net_device *dev = priv->net_dev;
 
         spin_lock(&priv->rx_lock);
-        bcm_enet_refill_rx(dev);
+        bcm_enet_refill_rx(dev, false);
         spin_unlock(&priv->rx_lock);
 }
 
@@ -320,6 +320,7 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
                 int desc_idx;
                 u32 len_stat;
                 unsigned int len;
+                void *buf;
 
                 desc_idx = priv->rx_curr_desc;
                 desc = &priv->rx_desc_cpu[desc_idx];
@@ -365,16 +366,14 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
                 }
 
                 /* valid packet */
-                skb = priv->rx_skb[desc_idx];
+                buf = priv->rx_buf[desc_idx];
                 len = (len_stat & DMADESC_LENGTH_MASK) >> DMADESC_LENGTH_SHIFT;
                 /* don't include FCS */
                 len -= 4;
 
                 if (len < copybreak) {
-                        struct sk_buff *nskb;
-
-                        nskb = napi_alloc_skb(&priv->napi, len);
-                        if (!nskb) {
+                        skb = napi_alloc_skb(&priv->napi, len);
+                        if (unlikely(!skb)) {
                                 /* forget packet, just rearm desc */
                                 dev->stats.rx_dropped++;
                                 continue;
@@ -382,14 +381,21 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
                         dma_sync_single_for_cpu(kdev, desc->address,
                                                 len, DMA_FROM_DEVICE);
-                        memcpy(nskb->data, skb->data, len);
+                        memcpy(skb->data, buf + priv->rx_buf_offset, len);
                         dma_sync_single_for_device(kdev, desc->address,
                                                    len, DMA_FROM_DEVICE);
-                        skb = nskb;
                 } else {
-                        dma_unmap_single(&priv->pdev->dev, desc->address,
-                                         priv->rx_skb_size, DMA_FROM_DEVICE);
-                        priv->rx_skb[desc_idx] = NULL;
+                        dma_unmap_single(kdev, desc->address,
+                                         priv->rx_buf_size, DMA_FROM_DEVICE);
+                        priv->rx_buf[desc_idx] = NULL;
+
+                        skb = build_skb(buf, priv->rx_frag_size);
+                        if (unlikely(!skb)) {
+                                skb_free_frag(buf);
+                                dev->stats.rx_dropped++;
+                                continue;
+                        }
+                        skb_reserve(skb, priv->rx_buf_offset);
                 }
 
                 skb_put(skb, len);
@@ -403,7 +409,7 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
         netif_receive_skb_list(&rx_list);
 
         if (processed || !priv->rx_desc_count) {
-                bcm_enet_refill_rx(dev);
+                bcm_enet_refill_rx(dev, true);
 
                 /* kick rx dma */
                 enet_dmac_writel(priv, priv->dma_chan_en_mask,
@@ -862,6 +868,24 @@ static void bcm_enet_adjust_link(struct net_device *dev)
                 priv->pause_tx ? "tx" : "off");
 }
 
+static void bcm_enet_free_rx_buf_ring(struct device *kdev, struct bcm_enet_priv *priv)
+{
+        int i;
+
+        for (i = 0; i < priv->rx_ring_size; i++) {
+                struct bcm_enet_desc *desc;
+
+                if (!priv->rx_buf[i])
+                        continue;
+
+                desc = &priv->rx_desc_cpu[i];
+                dma_unmap_single(kdev, desc->address, priv->rx_buf_size,
+                                 DMA_FROM_DEVICE);
+                skb_free_frag(priv->rx_buf[i]);
+        }
+        kfree(priv->rx_buf);
+}
+
 /*
  * open callback, allocate dma rings & buffers and start rx operation
  */
@@ -971,10 +995,10 @@ static int bcm_enet_open(struct net_device *dev)
         priv->tx_curr_desc = 0;
         spin_lock_init(&priv->tx_lock);
 
-        /* init & fill rx ring with skbs */
-        priv->rx_skb = kcalloc(priv->rx_ring_size, sizeof(struct sk_buff *),
+        /* init & fill rx ring with buffers */
+        priv->rx_buf = kcalloc(priv->rx_ring_size, sizeof(void *),
                                GFP_KERNEL);
-        if (!priv->rx_skb) {
+        if (!priv->rx_buf) {
                 ret = -ENOMEM;
                 goto out_free_tx_skb;
         }
@@ -991,7 +1015,7 @@ static int bcm_enet_open(struct net_device *dev)
         enet_dmac_writel(priv, ENETDMA_BUFALLOC_FORCE_MASK | 0,
                          ENETDMAC_BUFALLOC, priv->rx_chan);
 
-        if (bcm_enet_refill_rx(dev)) {
+        if (bcm_enet_refill_rx(dev, false)) {
                 dev_err(kdev, "cannot allocate rx skb queue\n");
                 ret = -ENOMEM;
                 goto out;
@@ -1086,18 +1110,7 @@ static int bcm_enet_open(struct net_device *dev)
         return 0;
 
 out:
-        for (i = 0; i < priv->rx_ring_size; i++) {
-                struct bcm_enet_desc *desc;
-
-                if (!priv->rx_skb[i])
-                        continue;
-
-                desc = &priv->rx_desc_cpu[i];
-                dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-                                 DMA_FROM_DEVICE);
-                kfree_skb(priv->rx_skb[i]);
-        }
-        kfree(priv->rx_skb);
+        bcm_enet_free_rx_buf_ring(kdev, priv);
 
 out_free_tx_skb:
         kfree(priv->tx_skb);
@@ -1176,7 +1189,6 @@ static int bcm_enet_stop(struct net_device *dev)
 {
         struct bcm_enet_priv *priv;
         struct device *kdev;
-        int i;
 
         priv = netdev_priv(dev);
         kdev = &priv->pdev->dev;
@@ -1204,21 +1216,10 @@ static int bcm_enet_stop(struct net_device *dev)
         /* force reclaim of all tx buffers */
         bcm_enet_tx_reclaim(dev, 1);
 
-        /* free the rx skb ring */
-        for (i = 0; i < priv->rx_ring_size; i++) {
-                struct bcm_enet_desc *desc;
-
-                if (!priv->rx_skb[i])
-                        continue;
-
-                desc = &priv->rx_desc_cpu[i];
-                dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-                                 DMA_FROM_DEVICE);
-                kfree_skb(priv->rx_skb[i]);
-        }
+        /* free the rx buffer ring */
+        bcm_enet_free_rx_buf_ring(kdev, priv);
 
         /* free remaining allocated memory */
-        kfree(priv->rx_skb);
         kfree(priv->tx_skb);
         dma_free_coherent(kdev, priv->rx_desc_alloc_size,
                           priv->rx_desc_cpu, priv->rx_desc_dma);
@@ -1640,9 +1641,12 @@ static int bcm_enet_change_mtu(struct net_device *dev, int new_mtu)
          * align rx buffer size to dma burst len, account FCS since
          * it's appended
          */
-        priv->rx_skb_size = ALIGN(actual_mtu + ETH_FCS_LEN,
+        priv->rx_buf_size = ALIGN(actual_mtu + ETH_FCS_LEN,
                                   priv->dma_maxburst * 4);
 
+        priv->rx_frag_size = SKB_DATA_ALIGN(priv->rx_buf_offset + priv->rx_buf_size) +
+                             SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
         dev->mtu = new_mtu;
         return 0;
 }
@@ -1727,6 +1731,7 @@ static int bcm_enet_probe(struct platform_device *pdev)
 
         priv->enet_is_sw = false;
         priv->dma_maxburst = BCMENET_DMA_MAXBURST;
+        priv->rx_buf_offset = NET_SKB_PAD;
 
         ret = bcm_enet_change_mtu(dev, dev->mtu);
         if (ret)
@@ -2154,10 +2159,10 @@ static int bcm_enetsw_open(struct net_device *dev)
         priv->tx_curr_desc = 0;
         spin_lock_init(&priv->tx_lock);
 
-        /* init & fill rx ring with skbs */
-        priv->rx_skb = kcalloc(priv->rx_ring_size, sizeof(struct sk_buff *),
+        /* init & fill rx ring with buffers */
+        priv->rx_buf = kcalloc(priv->rx_ring_size, sizeof(void *),
                                GFP_KERNEL);
-        if (!priv->rx_skb) {
+        if (!priv->rx_buf) {
                 dev_err(kdev, "cannot allocate rx skb queue\n");
                 ret = -ENOMEM;
                 goto out_free_tx_skb;
@@ -2205,7 +2210,7 @@ static int bcm_enetsw_open(struct net_device *dev)
         enet_dma_writel(priv, ENETDMA_BUFALLOC_FORCE_MASK | 0,
                         ENETDMA_BUFALLOC_REG(priv->rx_chan));
 
-        if (bcm_enet_refill_rx(dev)) {
+        if (bcm_enet_refill_rx(dev, false)) {
                 dev_err(kdev, "cannot allocate rx skb queue\n");
                 ret = -ENOMEM;
                 goto out;
@@ -2305,18 +2310,7 @@ static int bcm_enetsw_open(struct net_device *dev)
         return 0;
 
 out:
-        for (i = 0; i < priv->rx_ring_size; i++) {
-                struct bcm_enet_desc *desc;
-
-                if (!priv->rx_skb[i])
-                        continue;
-
-                desc = &priv->rx_desc_cpu[i];
-                dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-                                 DMA_FROM_DEVICE);
-                kfree_skb(priv->rx_skb[i]);
-        }
-        kfree(priv->rx_skb);
+        bcm_enet_free_rx_buf_ring(kdev, priv);
 
 out_free_tx_skb:
         kfree(priv->tx_skb);
@@ -2345,7 +2339,6 @@ static int bcm_enetsw_stop(struct net_device *dev)
 {
         struct bcm_enet_priv *priv;
         struct device *kdev;
-        int i;
 
         priv = netdev_priv(dev);
         kdev = &priv->pdev->dev;
@@ -2367,21 +2360,10 @@ static int bcm_enetsw_stop(struct net_device *dev)
         /* force reclaim of all tx buffers */
         bcm_enet_tx_reclaim(dev, 1);
 
-        /* free the rx skb ring */
-        for (i = 0; i < priv->rx_ring_size; i++) {
-                struct bcm_enet_desc *desc;
-
-                if (!priv->rx_skb[i])
-                        continue;
-
-                desc = &priv->rx_desc_cpu[i];
-                dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-                                 DMA_FROM_DEVICE);
-                kfree_skb(priv->rx_skb[i]);
-        }
+        /* free the rx buffer ring */
+        bcm_enet_free_rx_buf_ring(kdev, priv);
 
         /* free remaining allocated memory */
-        kfree(priv->rx_skb);
         kfree(priv->tx_skb);
         dma_free_coherent(kdev, priv->rx_desc_alloc_size,
                           priv->rx_desc_cpu, priv->rx_desc_dma);
@@ -2678,6 +2660,7 @@ static int bcm_enetsw_probe(struct platform_device *pdev)
         priv->rx_ring_size = BCMENET_DEF_RX_DESC;
         priv->tx_ring_size = BCMENET_DEF_TX_DESC;
         priv->dma_maxburst =
BCMENETSW_DMA_MAXBURST; + priv->rx_buf_offset = NET_SKB_PAD + NET_IP_ALIGN; pd = dev_get_platdata(&pdev->dev); if (pd) { diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.h b/drivers/net/ethernet/broadcom/bcm63xx_enet.h index 1d3c917eb830..78f1830fb3cb 100644 --- a/drivers/net/ethernet/broadcom/bcm63xx_enet.h +++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.h @@ -230,11 +230,17 @@ struct bcm_enet_priv { /* next dirty rx descriptor to refill */ int rx_dirty_desc; - /* size of allocated rx skbs */ - unsigned int rx_skb_size; + /* size of allocated rx buffers */ + unsigned int rx_buf_size; - /* list of skb given to hw for rx */ - struct sk_buff **rx_skb; + /* allocated rx buffer offset */ + unsigned int rx_buf_offset; + + /* size of allocated rx frag */ + unsigned int rx_frag_size; + + /* list of buffer given to hw for rx */ + void **rx_buf; /* used when rx skb allocation failed, so we defer rx queue * refill */ From patchwork Thu Dec 24 14:24:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sieng-Piaw Liew X-Patchwork-Id: 11989661 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7281DC4332E for ; Thu, 24 Dec 2020 14:27:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5D39322AAA for ; Thu, 24 Dec 2020 14:27:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, Sieng Piaw Liew,
    netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 6/6] bcm63xx_enet: improve rx loop
Date: Thu, 24 Dec 2020 22:24:21 +0800
Message-Id: <20201224142421.32350-7-liew.s.piaw@gmail.com>
In-Reply-To: <20201224142421.32350-1-liew.s.piaw@gmail.com>
References: <20201224142421.32350-1-liew.s.piaw@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Use the existing rx processed count to track against the budget, making
the budget decrement operation redundant. rx_desc_count can be updated
outside the rx loop, making the loop a bit smaller.
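[Editor's note: the accounting change described above can be sketched in plain, userspace-compilable C. This is an illustrative model only, not driver code; the function name and parameters are hypothetical stand-ins for the NAPI poll loop's budget, the number of filled rx descriptors, and priv->rx_desc_count.]

```c
/* Sketch of the rx-loop accounting: advance a "processed" counter and
 * compare it against the budget (instead of decrementing the budget),
 * then update the descriptor count once, after the loop. */
static int receive_queue_sketch(int budget, int ring_filled, int *desc_count)
{
	int processed = 0;

	do {
		if (processed >= ring_filled)
			break;          /* no more filled descriptors */
		processed++;            /* one packet handed to the stack */
	} while (processed < budget);   /* was: while (--budget > 0) */

	*desc_count -= processed;       /* single update outside the loop */
	return processed;
}
```

With 10 filled descriptors and a budget of 16, the sketch processes all 10; with a budget of 4 it stops at 4, matching the patched loop's behavior.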
Signed-off-by: Sieng Piaw Liew
Acked-by: Florian Fainelli
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 8c2e97311a2c..5ff0d39be2b2 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -339,7 +339,6 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		priv->rx_curr_desc++;
 		if (priv->rx_curr_desc == priv->rx_ring_size)
 			priv->rx_curr_desc = 0;
-		priv->rx_desc_count--;

 		/* if the packet does not have start of packet _and_
 		 * end of packet flag set, then just recycle it */
@@ -404,9 +403,10 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		dev->stats.rx_bytes += len;
 		list_add_tail(&skb->list, &rx_list);
-	} while (--budget > 0);
+	} while (processed < budget);

 	netif_receive_skb_list(&rx_list);
+	priv->rx_desc_count -= processed;

 	if (processed || !priv->rx_desc_count) {
 		bcm_enet_refill_rx(dev, true);
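[Editor's note: patch 1/6 of this series sizes each rx fragment as SKB_DATA_ALIGN(rx_buf_offset + rx_buf_size) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), so build_skb() finds the shared-info footer at an aligned offset past the packet data. The formula can be mirrored in userspace C; the alignment constant below is a hypothetical stand-in for the kernel's cacheline-based SKB_DATA_ALIGN, not the real definition.]

```c
#include <stddef.h>

/* Stand-in for the kernel's SKB_DATA_ALIGN: round up to a 64-byte
 * boundary (the real macro aligns to SMP_CACHE_BYTES). */
#define CACHE_BYTES_SKETCH 64
#define DATA_ALIGN_SKETCH(x) \
	(((x) + (CACHE_BYTES_SKETCH - 1)) & ~(size_t)(CACHE_BYTES_SKETCH - 1))

/* Mirror of the rx_frag_size formula added in bcm_enet_change_mtu():
 * aligned headroom + packet buffer, plus aligned room for the
 * skb_shared_info footer that build_skb() expects at the end. */
static size_t rx_frag_size_sketch(size_t rx_buf_offset, size_t rx_buf_size,
				  size_t shared_info_size)
{
	return DATA_ALIGN_SKETCH(rx_buf_offset + rx_buf_size) +
	       DATA_ALIGN_SKETCH(shared_info_size);
}
```

For example, a 32-byte headroom with a 1600-byte buffer and a 320-byte shared-info struct rounds 1632 up to 1664 and adds 320, giving a 1984-byte fragment.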