From patchwork Tue Jun 28 01:33:38 2022
X-Patchwork-Submitter: Christian Marangi
X-Patchwork-Id: 12897511
From: Christian Marangi
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Maxime Coquelin, Russell King,
    netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Christian Marangi
Subject: [net-next PATCH RFC 1/5] net: ethernet: stmicro: stmmac: move queue reset to dedicated functions
Date: Tue, 28 Jun 2022 03:33:38 +0200
Message-Id: <20220628013342.13581-2-ansuelsmth@gmail.com>
In-Reply-To: <20220628013342.13581-1-ansuelsmth@gmail.com>
References: <20220628013342.13581-1-ansuelsmth@gmail.com>

Move the queue reset logic to dedicated functions. Aside from being a
simple cleanup, this is also required to allocate a dma_conf without
resetting the tx queue while the device is temporarily detached, since
the reset is no longer part of the dma init function and can be done
later in the code flow.

Signed-off-by: Christian Marangi
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 59 ++++++++++---------
 1 file changed, 31 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index d1a7cf4567bc..f861246de2e5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -130,6 +130,9 @@ static irqreturn_t stmmac_mac_interrupt(int irq, void *dev_id);
 static irqreturn_t stmmac_safety_interrupt(int irq, void *dev_id);
 static irqreturn_t stmmac_msi_intr_tx(int irq, void *data);
 static irqreturn_t stmmac_msi_intr_rx(int irq, void *data);
+static void stmmac_reset_rx_queue(struct stmmac_priv *priv, u32 queue);
+static void stmmac_reset_tx_queue(struct stmmac_priv *priv, u32 queue);
+static void stmmac_reset_queues_param(struct stmmac_priv *priv);
 static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue);
 static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue);
 static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,
@@ -1646,9 +1649,6 @@ static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t f
 		return -ENOMEM;
 	}
 
-	rx_q->cur_rx = 0;
-	rx_q->dirty_rx = 0;
-
 	/* Setup the chained descriptor addresses */
 	if (priv->mode == STMMAC_CHAIN_MODE) {
 		if (priv->extend_desc)
@@ -1751,12 +1751,6 @@ static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue)
 		tx_q->tx_skbuff[i] = NULL;
 	}
 
-	tx_q->dirty_tx = 0;
-	tx_q->cur_tx = 0;
-	tx_q->mss = 0;
-
-	netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
-
 	return 0;
 }
 
@@ -2642,10 +2636,7 @@ static void stmmac_tx_err(struct stmmac_priv *priv, u32 chan)
 	stmmac_stop_tx_dma(priv, chan);
 	dma_free_tx_skbufs(priv, chan);
 	stmmac_clear_tx_descriptors(priv, chan);
-	tx_q->dirty_tx = 0;
-	tx_q->cur_tx = 0;
-	tx_q->mss = 0;
-	netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, chan));
+	stmmac_reset_tx_queue(priv, chan);
 	stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
 			    tx_q->dma_tx_phy, chan);
 	stmmac_start_tx_dma(priv, chan);
@@ -3704,6 +3695,8 @@ static int stmmac_open(struct net_device *dev)
 		goto init_error;
 	}
 
+	stmmac_reset_queues_param(priv);
+
 	ret = stmmac_hw_setup(dev, true);
 	if (ret < 0) {
 		netdev_err(priv->dev, "%s: Hw setup failed\n", __func__);
@@ -6330,6 +6323,7 @@ void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue)
 		return;
 	}
 
+	stmmac_reset_rx_queue(priv, queue);
 	stmmac_clear_rx_descriptors(priv, queue);
 	stmmac_init_rx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
@@ -6391,6 +6385,7 @@ void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue)
 		return;
 	}
 
+	stmmac_reset_tx_queue(priv, queue);
 	stmmac_clear_tx_descriptors(priv, queue);
 	stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
@@ -7317,6 +7312,25 @@ int stmmac_suspend(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(stmmac_suspend);
 
+static void stmmac_reset_rx_queue(struct stmmac_priv *priv, u32 queue)
+{
+	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+
+	rx_q->cur_rx = 0;
+	rx_q->dirty_rx = 0;
+}
+
+static void stmmac_reset_tx_queue(struct stmmac_priv *priv, u32 queue)
+{
+	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+
+	tx_q->cur_tx = 0;
+	tx_q->dirty_tx = 0;
+	tx_q->mss = 0;
+
+	netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
+}
+
 /**
  * stmmac_reset_queues_param - reset queue parameters
  * @priv: device pointer
@@ -7327,22 +7341,11 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
 	u32 tx_cnt = priv->plat->tx_queues_to_use;
 	u32 queue;
 
-	for (queue = 0; queue < rx_cnt; queue++) {
-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
-
-		rx_q->cur_rx = 0;
-		rx_q->dirty_rx = 0;
-	}
-
-	for (queue = 0; queue < tx_cnt; queue++) {
-		struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+	for (queue = 0; queue < rx_cnt; queue++)
+		stmmac_reset_rx_queue(priv, queue);
 
-		tx_q->cur_tx = 0;
-		tx_q->dirty_tx = 0;
-		tx_q->mss = 0;
-
-		netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
-	}
+	for (queue = 0; queue < tx_cnt; queue++)
+		stmmac_reset_tx_queue(priv, queue);
 }
 
 /**
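The two helpers introduced above also make the intended open() ordering easier to
see: descriptor-ring initialisation no longer resets the ring indices, so the
reset becomes an explicit step before the hardware setup. Below is a simplified
sketch of that flow; the function name stmmac_open_sketch marks it as an
illustration rather than the driver code, and allocation and error paths are
omitted.

/* Simplified illustration of the call order after this patch (sketch only). */
static int stmmac_open_sketch(struct net_device *dev)
{
	struct stmmac_priv *priv = netdev_priv(dev);
	int ret;

	/* Ring init no longer touches cur_rx/dirty_rx, cur_tx/dirty_tx,
	 * mss or the BQL state of the queues.
	 */
	ret = init_dma_desc_rings(dev, GFP_KERNEL);
	if (ret < 0)
		return ret;

	/* The index/BQL reset is now an explicit, separate step. */
	stmmac_reset_queues_param(priv);

	return stmmac_hw_setup(dev, true);
}

The same helpers are reused in stmmac_tx_err() and in the per-queue enable paths,
so a later patch in the series can allocate a new dma_conf while the device is
temporarily detached without clearing a still-active tx queue.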
From patchwork Tue Jun 28 01:33:39 2022
X-Patchwork-Submitter: Christian Marangi
X-Patchwork-Id: 12897512
From: Christian Marangi
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Maxime Coquelin, Russell King,
    netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Christian Marangi
Subject: [net-next PATCH RFC 2/5] net: ethernet: stmicro: stmmac: first disable all queues in release
Date: Tue, 28 Jun 2022 03:33:39 +0200
Message-Id: <20220628013342.13581-3-ansuelsmth@gmail.com>
In-Reply-To: <20220628013342.13581-1-ansuelsmth@gmail.com>
References: <20220628013342.13581-1-ansuelsmth@gmail.com>

Disable all queues before calling tx_disable in stmmac_release to
prevent a corner case where a packet may still be queued at the same
time tx_disable is called, resulting in a kernel panic if that packet
still has to be processed.

Signed-off-by: Christian Marangi
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index f861246de2e5..f4ba27c1c7e0 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3756,6 +3756,11 @@ static int stmmac_release(struct net_device *dev)
 	struct stmmac_priv *priv = netdev_priv(dev);
 	u32 chan;
 
+	stmmac_disable_all_queues(priv);
+
+	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
+		hrtimer_cancel(&priv->tx_queue[chan].txtimer);
+
 	netif_tx_disable(dev);
 
 	if (device_may_wakeup(priv->device))
@@ -3764,11 +3769,6 @@ static int stmmac_release(struct net_device *dev)
 	phylink_stop(priv->phylink);
 	phylink_disconnect_phy(priv->phylink);
 
-	stmmac_disable_all_queues(priv);
-
-	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
-		hrtimer_cancel(&priv->tx_queue[chan].txtimer);
-
 	/* Free the IRQ lines */
 	stmmac_free_irq(dev, REQ_IRQ_ERR_ALL, 0);
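With this change, stmmac_release() stops NAPI processing and cancels the
per-queue tx coalescing timers before the queues are stopped at the netdev
level, so no timer or poll can still be running against a queue that is being
torn down. A simplified sketch of the resulting order follows; it is not the
literal function body (the name stmmac_release_sketch marks it as an
illustration), and PM, PTP and DMA teardown details are omitted.

/* Simplified illustration of the release order after this patch (sketch only). */
static int stmmac_release_sketch(struct net_device *dev)
{
	struct stmmac_priv *priv = netdev_priv(dev);
	u32 chan;

	stmmac_disable_all_queues(priv);	/* stop NAPI first */
	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
		hrtimer_cancel(&priv->tx_queue[chan].txtimer);

	netif_tx_disable(dev);			/* then the stack-facing queues */
	phylink_stop(priv->phylink);
	phylink_disconnect_phy(priv->phylink);

	stmmac_free_irq(dev, REQ_IRQ_ERR_ALL, 0);
	/* ... remaining DMA stop and descriptor freeing unchanged ... */
	return 0;
}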
From patchwork Tue Jun 28 01:33:40 2022
X-Patchwork-Submitter: Christian Marangi
X-Patchwork-Id: 12897513
From: Christian Marangi
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Maxime Coquelin, Russell King,
    netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Christian Marangi
Subject: [net-next PATCH RFC 3/5] net: ethernet: stmicro: stmmac: move dma conf to dedicated struct
Date: Tue, 28 Jun 2022 03:33:40 +0200
Message-Id: <20220628013342.13581-4-ansuelsmth@gmail.com>
In-Reply-To: <20220628013342.13581-1-ansuelsmth@gmail.com>
References: <20220628013342.13581-1-ansuelsmth@gmail.com>

Move the dma buffer configuration to a dedicated struct. This is in
preparation for a code rework that will permit allocating a separate
dma_conf without affecting the priv struct.

Signed-off-by: Christian Marangi
---
 .../net/ethernet/stmicro/stmmac/chain_mode.c  |   6 +-
 .../net/ethernet/stmicro/stmmac/ring_mode.c   |   4 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  21 +-
 .../ethernet/stmicro/stmmac/stmmac_ethtool.c  |   4 +-
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 286 +++++++++---------
 5 files changed, 165 insertions(+), 156 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
index d2cdc02d9f94..2e8744ac6b91 100644
--- a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
+++ b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
@@ -46,7 +46,7 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
 	while (len != 0) {
 		tx_q->tx_skbuff[entry] = NULL;
-		entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size);
+		entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size);
 		desc = tx_q->dma_tx + entry;
 
 		if (len > bmax) {
@@ -137,7 +137,7 @@ static void refill_desc3(void *priv_ptr, struct dma_desc *p)
 	 */
 	p->des3 = cpu_to_le32((unsigned int)(rx_q->dma_rx_phy +
				      (((rx_q->dirty_rx) + 1) %
-				       priv->dma_rx_size) *
+				       priv->dma_conf.dma_rx_size) *
				      sizeof(struct dma_desc)));
 }
 
@@ -155,7 +155,7 @@ static void clean_desc3(void *priv_ptr, struct dma_desc *p)
 	 */
 	p->des3 = cpu_to_le32((unsigned int)((tx_q->dma_tx_phy +
					      ((tx_q->dirty_tx + 1) %
-					       priv->dma_tx_size))
+					       priv->dma_conf.dma_tx_size))
					      * sizeof(struct dma_desc)));
 }
 
diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
index 8ad900949dc8..2b5b17d8b8a0 100644
--- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
@@ -51,7 +51,7 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
 		stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum,
				       STMMAC_RING_MODE, 0, false, skb->len);
 		tx_q->tx_skbuff[entry] = NULL;
-		entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size);
+		entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size);
 
 		if (priv->extend_desc)
 			desc = (struct dma_desc *)(tx_q->dma_etx + entry);
@@ -107,7 +107,7 @@ static void refill_desc3(void *priv_ptr, struct dma_desc *p)
 	struct stmmac_priv *priv = rx_q->priv_data;
 
 	/* Fill DES3 in case of RING mode */
-	if (priv->dma_buf_sz == BUF_SIZE_16KiB)
+	if (priv->dma_conf.dma_buf_sz == BUF_SIZE_16KiB)
p->des3 = cpu_to_le32(le32_to_cpu(p->des2) + BUF_SIZE_8KiB); } diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h index 57970ae2178d..8ef44c9d84f4 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h @@ -188,6 +188,18 @@ struct stmmac_rfs_entry { int tc; }; +struct stmmac_dma_conf { + unsigned int dma_buf_sz; + + /* RX Queue */ + struct stmmac_rx_queue rx_queue[MTL_MAX_RX_QUEUES]; + unsigned int dma_rx_size; + + /* TX Queue */ + struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES]; + unsigned int dma_tx_size; +}; + struct stmmac_priv { /* Frequently used values are kept adjacent for cache effect */ u32 tx_coal_frames[MTL_MAX_TX_QUEUES]; @@ -201,7 +213,6 @@ struct stmmac_priv { int sph_cap; u32 sarc_type; - unsigned int dma_buf_sz; unsigned int rx_copybreak; u32 rx_riwt[MTL_MAX_TX_QUEUES]; int hwts_rx_en; @@ -213,13 +224,7 @@ struct stmmac_priv { int (*hwif_quirks)(struct stmmac_priv *priv); struct mutex lock; - /* RX Queue */ - struct stmmac_rx_queue rx_queue[MTL_MAX_RX_QUEUES]; - unsigned int dma_rx_size; - - /* TX Queue */ - struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES]; - unsigned int dma_tx_size; + struct stmmac_dma_conf dma_conf; /* Generic channel for NAPI */ struct stmmac_channel channel[STMMAC_CH_MAX]; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c index abfb3cd5958d..fdf5575aedb8 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c @@ -485,8 +485,8 @@ static void stmmac_get_ringparam(struct net_device *netdev, ring->rx_max_pending = DMA_MAX_RX_SIZE; ring->tx_max_pending = DMA_MAX_TX_SIZE; - ring->rx_pending = priv->dma_rx_size; - ring->tx_pending = priv->dma_tx_size; + ring->rx_pending = priv->dma_conf.dma_rx_size; + ring->tx_pending = priv->dma_conf.dma_tx_size; } static int stmmac_set_ringparam(struct net_device *netdev, diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index f4ba27c1c7e0..c211d0274bba 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -74,8 +74,8 @@ static int phyaddr = -1; module_param(phyaddr, int, 0444); MODULE_PARM_DESC(phyaddr, "Physical device address"); -#define STMMAC_TX_THRESH(x) ((x)->dma_tx_size / 4) -#define STMMAC_RX_THRESH(x) ((x)->dma_rx_size / 4) +#define STMMAC_TX_THRESH(x) ((x)->dma_conf.dma_tx_size / 4) +#define STMMAC_RX_THRESH(x) ((x)->dma_conf.dma_rx_size / 4) /* Limit to make sure XDP TX and slow path can coexist */ #define STMMAC_XSK_TX_BUDGET_MAX 256 @@ -234,7 +234,7 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv) /* synchronize_rcu() needed for pending XDP buffers to drain */ for (queue = 0; queue < rx_queues_cnt; queue++) { - rx_q = &priv->rx_queue[queue]; + rx_q = &priv->dma_conf.rx_queue[queue]; if (rx_q->xsk_pool) { synchronize_rcu(); break; @@ -360,13 +360,13 @@ static void print_pkt(unsigned char *buf, int len) static inline u32 stmmac_tx_avail(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; u32 avail; if (tx_q->dirty_tx > tx_q->cur_tx) avail = tx_q->dirty_tx - tx_q->cur_tx - 1; else - avail = priv->dma_tx_size - tx_q->cur_tx + tx_q->dirty_tx - 1; + avail = priv->dma_conf.dma_tx_size - tx_q->cur_tx + 
tx_q->dirty_tx - 1; return avail; } @@ -378,13 +378,13 @@ static inline u32 stmmac_tx_avail(struct stmmac_priv *priv, u32 queue) */ static inline u32 stmmac_rx_dirty(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; u32 dirty; if (rx_q->dirty_rx <= rx_q->cur_rx) dirty = rx_q->cur_rx - rx_q->dirty_rx; else - dirty = priv->dma_rx_size - rx_q->dirty_rx + rx_q->cur_rx; + dirty = priv->dma_conf.dma_rx_size - rx_q->dirty_rx + rx_q->cur_rx; return dirty; } @@ -412,7 +412,7 @@ static int stmmac_enable_eee_mode(struct stmmac_priv *priv) /* check if all TX queues have the work finished */ for (queue = 0; queue < tx_cnt; queue++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; if (tx_q->dirty_tx != tx_q->cur_tx) return -EBUSY; /* still unfinished work */ @@ -1239,7 +1239,7 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv) /* Display RX rings */ for (queue = 0; queue < rx_cnt; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; pr_info("\tRX Queue %u rings\n", queue); @@ -1252,7 +1252,7 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv) } /* Display RX ring */ - stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true, + stmmac_display_ring(priv, head_rx, priv->dma_conf.dma_rx_size, true, rx_q->dma_rx_phy, desc_size); } } @@ -1266,7 +1266,7 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv) /* Display TX rings */ for (queue = 0; queue < tx_cnt; queue++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; pr_info("\tTX Queue %d rings\n", queue); @@ -1281,7 +1281,7 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv) desc_size = sizeof(struct dma_desc); } - stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false, + stmmac_display_ring(priv, head_tx, priv->dma_conf.dma_tx_size, false, tx_q->dma_tx_phy, desc_size); } } @@ -1322,21 +1322,21 @@ static int stmmac_set_bfsize(int mtu, int bufsize) */ static void stmmac_clear_rx_descriptors(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; int i; /* Clear the RX descriptors */ - for (i = 0; i < priv->dma_rx_size; i++) + for (i = 0; i < priv->dma_conf.dma_rx_size; i++) if (priv->extend_desc) stmmac_init_rx_desc(priv, &rx_q->dma_erx[i].basic, priv->use_riwt, priv->mode, - (i == priv->dma_rx_size - 1), - priv->dma_buf_sz); + (i == priv->dma_conf.dma_rx_size - 1), + priv->dma_conf.dma_buf_sz); else stmmac_init_rx_desc(priv, &rx_q->dma_rx[i], priv->use_riwt, priv->mode, - (i == priv->dma_rx_size - 1), - priv->dma_buf_sz); + (i == priv->dma_conf.dma_rx_size - 1), + priv->dma_conf.dma_buf_sz); } /** @@ -1348,12 +1348,12 @@ static void stmmac_clear_rx_descriptors(struct stmmac_priv *priv, u32 queue) */ static void stmmac_clear_tx_descriptors(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; int i; /* Clear the TX descriptors */ - for (i = 0; i < priv->dma_tx_size; i++) { - int last = (i == (priv->dma_tx_size - 1)); + for (i = 0; i < priv->dma_conf.dma_tx_size; i++) { + int last = (i == (priv->dma_conf.dma_tx_size - 1)); struct dma_desc *p; if 
(priv->extend_desc) @@ -1401,7 +1401,7 @@ static void stmmac_clear_descriptors(struct stmmac_priv *priv) static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, int i, gfp_t flags, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); @@ -1430,7 +1430,7 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, buf->addr = page_pool_get_dma_addr(buf->page) + buf->page_offset; stmmac_set_desc_addr(priv, p, buf->addr); - if (priv->dma_buf_sz == BUF_SIZE_16KiB) + if (priv->dma_conf.dma_buf_sz == BUF_SIZE_16KiB) stmmac_init_desc3(priv, p); return 0; @@ -1444,7 +1444,7 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, */ static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; if (buf->page) @@ -1464,7 +1464,7 @@ static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i) */ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; if (tx_q->tx_skbuff_dma[i].buf && tx_q->tx_skbuff_dma[i].buf_type != STMMAC_TXBUF_T_XDP_TX) { @@ -1509,17 +1509,17 @@ static void dma_free_rx_skbufs(struct stmmac_priv *priv, u32 queue) { int i; - for (i = 0; i < priv->dma_rx_size; i++) + for (i = 0; i < priv->dma_conf.dma_rx_size; i++) stmmac_free_rx_buffer(priv, queue, i); } static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, gfp_t flags) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; int i; - for (i = 0; i < priv->dma_rx_size; i++) { + for (i = 0; i < priv->dma_conf.dma_rx_size; i++) { struct dma_desc *p; int ret; @@ -1546,10 +1546,10 @@ static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, */ static void dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; int i; - for (i = 0; i < priv->dma_rx_size; i++) { + for (i = 0; i < priv->dma_conf.dma_rx_size; i++) { struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; if (!buf->xdp) @@ -1562,10 +1562,10 @@ static void dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue) static int stmmac_alloc_rx_buffers_zc(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; int i; - for (i = 0; i < priv->dma_rx_size; i++) { + for (i = 0; i < priv->dma_conf.dma_rx_size; i++) { struct stmmac_rx_buffer *buf; dma_addr_t dma_addr; struct dma_desc *p; @@ -1608,7 +1608,7 @@ static struct xsk_buff_pool *stmmac_get_xsk_pool(struct stmmac_priv *priv, u32 q */ static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t flags) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; int ret; netif_dbg(priv, probe, priv->dev, @@ -1654,11 +1654,11 @@ static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t f if (priv->extend_desc) 
stmmac_mode_init(priv, rx_q->dma_erx, rx_q->dma_rx_phy, - priv->dma_rx_size, 1); + priv->dma_conf.dma_rx_size, 1); else stmmac_mode_init(priv, rx_q->dma_rx, rx_q->dma_rx_phy, - priv->dma_rx_size, 0); + priv->dma_conf.dma_rx_size, 0); } return 0; @@ -1685,7 +1685,7 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) err_init_rx_buffers: while (queue >= 0) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; if (rx_q->xsk_pool) dma_free_rx_xskbufs(priv, queue); @@ -1711,7 +1711,7 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) */ static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; int i; netif_dbg(priv, probe, priv->dev, @@ -1723,16 +1723,16 @@ static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) if (priv->extend_desc) stmmac_mode_init(priv, tx_q->dma_etx, tx_q->dma_tx_phy, - priv->dma_tx_size, 1); + priv->dma_conf.dma_tx_size, 1); else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) stmmac_mode_init(priv, tx_q->dma_tx, tx_q->dma_tx_phy, - priv->dma_tx_size, 0); + priv->dma_conf.dma_tx_size, 0); } tx_q->xsk_pool = stmmac_get_xsk_pool(priv, queue); - for (i = 0; i < priv->dma_tx_size; i++) { + for (i = 0; i < priv->dma_conf.dma_tx_size; i++) { struct dma_desc *p; if (priv->extend_desc) @@ -1802,12 +1802,12 @@ static int init_dma_desc_rings(struct net_device *dev, gfp_t flags) */ static void dma_free_tx_skbufs(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; int i; tx_q->xsk_frames_done = 0; - for (i = 0; i < priv->dma_tx_size; i++) + for (i = 0; i < priv->dma_conf.dma_tx_size; i++) stmmac_free_tx_buffer(priv, queue, i); if (tx_q->xsk_pool && tx_q->xsk_frames_done) { @@ -1837,7 +1837,7 @@ static void stmmac_free_tx_skbufs(struct stmmac_priv *priv) */ static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; /* Release the DMA RX socket buffers */ if (rx_q->xsk_pool) @@ -1850,11 +1850,11 @@ static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) /* Free DMA regions of consistent memory previously allocated */ if (!priv->extend_desc) - dma_free_coherent(priv->device, priv->dma_rx_size * + dma_free_coherent(priv->device, priv->dma_conf.dma_rx_size * sizeof(struct dma_desc), rx_q->dma_rx, rx_q->dma_rx_phy); else - dma_free_coherent(priv->device, priv->dma_rx_size * + dma_free_coherent(priv->device, priv->dma_conf.dma_rx_size * sizeof(struct dma_extended_desc), rx_q->dma_erx, rx_q->dma_rx_phy); @@ -1883,7 +1883,7 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv) */ static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; size_t size; void *addr; @@ -1901,7 +1901,7 @@ static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) addr = tx_q->dma_tx; } - size *= priv->dma_tx_size; + size *= priv->dma_conf.dma_tx_size; dma_free_coherent(priv->device, size, addr, tx_q->dma_tx_phy); @@ -1930,7 +1930,7 @@ static void free_dma_tx_desc_resources(struct stmmac_priv *priv) */ static int 
__alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; struct stmmac_channel *ch = &priv->channel[queue]; bool xdp_prog = stmmac_xdp_is_enabled(priv); struct page_pool_params pp_params = { 0 }; @@ -1942,8 +1942,8 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) rx_q->priv_data = priv; pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; - pp_params.pool_size = priv->dma_rx_size; - num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE); + pp_params.pool_size = priv->dma_conf.dma_rx_size; + num_pages = DIV_ROUND_UP(priv->dma_conf.dma_buf_sz, PAGE_SIZE); pp_params.order = ilog2(num_pages); pp_params.nid = dev_to_node(priv->device); pp_params.dev = priv->device; @@ -1958,7 +1958,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) return ret; } - rx_q->buf_pool = kcalloc(priv->dma_rx_size, + rx_q->buf_pool = kcalloc(priv->dma_conf.dma_rx_size, sizeof(*rx_q->buf_pool), GFP_KERNEL); if (!rx_q->buf_pool) @@ -1966,7 +1966,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) if (priv->extend_desc) { rx_q->dma_erx = dma_alloc_coherent(priv->device, - priv->dma_rx_size * + priv->dma_conf.dma_rx_size * sizeof(struct dma_extended_desc), &rx_q->dma_rx_phy, GFP_KERNEL); @@ -1975,7 +1975,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) } else { rx_q->dma_rx = dma_alloc_coherent(priv->device, - priv->dma_rx_size * + priv->dma_conf.dma_rx_size * sizeof(struct dma_desc), &rx_q->dma_rx_phy, GFP_KERNEL); @@ -2032,20 +2032,20 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) */ static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; size_t size; void *addr; tx_q->queue_index = queue; tx_q->priv_data = priv; - tx_q->tx_skbuff_dma = kcalloc(priv->dma_tx_size, + tx_q->tx_skbuff_dma = kcalloc(priv->dma_conf.dma_tx_size, sizeof(*tx_q->tx_skbuff_dma), GFP_KERNEL); if (!tx_q->tx_skbuff_dma) return -ENOMEM; - tx_q->tx_skbuff = kcalloc(priv->dma_tx_size, + tx_q->tx_skbuff = kcalloc(priv->dma_conf.dma_tx_size, sizeof(struct sk_buff *), GFP_KERNEL); if (!tx_q->tx_skbuff) @@ -2058,7 +2058,7 @@ static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) else size = sizeof(struct dma_desc); - size *= priv->dma_tx_size; + size *= priv->dma_conf.dma_tx_size; addr = dma_alloc_coherent(priv->device, size, &tx_q->dma_tx_phy, GFP_KERNEL); @@ -2302,7 +2302,7 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv) /* configure all channels */ for (chan = 0; chan < rx_channels_count; chan++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan]; u32 buf_size; qmode = priv->plat->rx_queues_cfg[chan].mode_to_use; @@ -2317,7 +2317,7 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv) chan); } else { stmmac_set_dma_bfsize(priv, priv->ioaddr, - priv->dma_buf_sz, + priv->dma_conf.dma_buf_sz, chan); } } @@ -2333,7 +2333,7 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv) static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget) { struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue); - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct 
stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; struct xsk_buff_pool *pool = tx_q->xsk_pool; unsigned int entry = tx_q->cur_tx; struct dma_desc *tx_desc = NULL; @@ -2408,7 +2408,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget) stmmac_enable_dma_transmission(priv, priv->ioaddr); - tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_tx_size); + tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size); entry = tx_q->cur_tx; } @@ -2449,7 +2449,7 @@ static void stmmac_bump_dma_threshold(struct stmmac_priv *priv, u32 chan) */ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; unsigned int bytes_compl = 0, pkts_compl = 0; unsigned int entry, xmits = 0, count = 0; @@ -2462,7 +2462,7 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) entry = tx_q->dirty_tx; /* Try to clean all TX complete frame in 1 shot */ - while ((entry != tx_q->cur_tx) && count < priv->dma_tx_size) { + while ((entry != tx_q->cur_tx) && count < priv->dma_conf.dma_tx_size) { struct xdp_frame *xdpf; struct sk_buff *skb; struct dma_desc *p; @@ -2564,7 +2564,7 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) stmmac_release_tx_desc(priv, p, priv->mode); - entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size); + entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size); } tx_q->dirty_tx = entry; @@ -2629,7 +2629,7 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) */ static void stmmac_tx_err(struct stmmac_priv *priv, u32 chan) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, chan)); @@ -2696,8 +2696,8 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan, u32 dir) { int status = stmmac_dma_interrupt_status(priv, priv->ioaddr, &priv->xstats, chan, dir); - struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan]; - struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; struct stmmac_channel *ch = &priv->channel[chan]; struct napi_struct *rx_napi; struct napi_struct *tx_napi; @@ -2863,7 +2863,7 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv) /* DMA RX Channel Configuration */ for (chan = 0; chan < rx_channels_count; chan++) { - rx_q = &priv->rx_queue[chan]; + rx_q = &priv->dma_conf.rx_queue[chan]; stmmac_init_rx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, rx_q->dma_rx_phy, chan); @@ -2877,7 +2877,7 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv) /* DMA TX Channel Configuration */ for (chan = 0; chan < tx_channels_count; chan++) { - tx_q = &priv->tx_queue[chan]; + tx_q = &priv->dma_conf.tx_queue[chan]; stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, tx_q->dma_tx_phy, chan); @@ -2892,7 +2892,7 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv) static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; hrtimer_start(&tx_q->txtimer, STMMAC_COAL_TIMER(priv->tx_coal_timer[queue]), @@ -2942,7 +2942,7 @@ static void stmmac_init_coalesce(struct stmmac_priv *priv) u32 chan; for (chan = 0; chan < 
tx_channel_count; chan++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; priv->tx_coal_frames[chan] = STMMAC_TX_FRAMES; priv->tx_coal_timer[chan] = STMMAC_COAL_TX_TIMER; @@ -2964,12 +2964,12 @@ static void stmmac_set_rings_length(struct stmmac_priv *priv) /* set TX ring length */ for (chan = 0; chan < tx_channels_count; chan++) stmmac_set_tx_ring_len(priv, priv->ioaddr, - (priv->dma_tx_size - 1), chan); + (priv->dma_conf.dma_tx_size - 1), chan); /* set RX ring length */ for (chan = 0; chan < rx_channels_count; chan++) stmmac_set_rx_ring_len(priv, priv->ioaddr, - (priv->dma_rx_size - 1), chan); + (priv->dma_conf.dma_rx_size - 1), chan); } /** @@ -3296,7 +3296,7 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register) /* Enable TSO */ if (priv->tso) { for (chan = 0; chan < tx_cnt; chan++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; /* TSO and TBS cannot co-exist */ if (tx_q->tbs & STMMAC_TBS_AVAIL) @@ -3318,7 +3318,7 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register) /* TBS */ for (chan = 0; chan < tx_cnt; chan++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; int enable = tx_q->tbs & STMMAC_TBS_AVAIL; stmmac_enable_tbs(priv, priv->ioaddr, enable, chan); @@ -3362,7 +3362,7 @@ static void stmmac_free_irq(struct net_device *dev, for (j = irq_idx - 1; j >= 0; j--) { if (priv->tx_irq[j] > 0) { irq_set_affinity_hint(priv->tx_irq[j], NULL); - free_irq(priv->tx_irq[j], &priv->tx_queue[j]); + free_irq(priv->tx_irq[j], &priv->dma_conf.tx_queue[j]); } } irq_idx = priv->plat->rx_queues_to_use; @@ -3371,7 +3371,7 @@ static void stmmac_free_irq(struct net_device *dev, for (j = irq_idx - 1; j >= 0; j--) { if (priv->rx_irq[j] > 0) { irq_set_affinity_hint(priv->rx_irq[j], NULL); - free_irq(priv->rx_irq[j], &priv->rx_queue[j]); + free_irq(priv->rx_irq[j], &priv->dma_conf.rx_queue[j]); } } @@ -3506,7 +3506,7 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev) sprintf(int_name, "%s:%s-%d", dev->name, "rx", i); ret = request_irq(priv->rx_irq[i], stmmac_msi_intr_rx, - 0, int_name, &priv->rx_queue[i]); + 0, int_name, &priv->dma_conf.rx_queue[i]); if (unlikely(ret < 0)) { netdev_err(priv->dev, "%s: alloc rx-%d MSI %d (error: %d)\n", @@ -3531,7 +3531,7 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev) sprintf(int_name, "%s:%s-%d", dev->name, "tx", i); ret = request_irq(priv->tx_irq[i], stmmac_msi_intr_tx, - 0, int_name, &priv->tx_queue[i]); + 0, int_name, &priv->dma_conf.tx_queue[i]); if (unlikely(ret < 0)) { netdev_err(priv->dev, "%s: alloc tx-%d MSI %d (error: %d)\n", @@ -3660,21 +3660,21 @@ static int stmmac_open(struct net_device *dev) bfsize = 0; if (bfsize < BUF_SIZE_16KiB) - bfsize = stmmac_set_bfsize(dev->mtu, priv->dma_buf_sz); + bfsize = stmmac_set_bfsize(dev->mtu, priv->dma_conf.dma_buf_sz); - priv->dma_buf_sz = bfsize; + priv->dma_conf.dma_buf_sz = bfsize; buf_sz = bfsize; priv->rx_copybreak = STMMAC_RX_COPYBREAK; - if (!priv->dma_tx_size) - priv->dma_tx_size = DMA_DEFAULT_TX_SIZE; - if (!priv->dma_rx_size) - priv->dma_rx_size = DMA_DEFAULT_RX_SIZE; + if (!priv->dma_conf.dma_tx_size) + priv->dma_conf.dma_tx_size = DMA_DEFAULT_TX_SIZE; + if (!priv->dma_conf.dma_rx_size) + priv->dma_conf.dma_rx_size = DMA_DEFAULT_RX_SIZE; /* Earlier check for TBS */ for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) { 
- struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en; /* Setup per-TXQ tbs flag before TX descriptor alloc */ @@ -3723,7 +3723,7 @@ static int stmmac_open(struct net_device *dev) phylink_stop(priv->phylink); for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) - hrtimer_cancel(&priv->tx_queue[chan].txtimer); + hrtimer_cancel(&priv->dma_conf.tx_queue[chan].txtimer); stmmac_hw_teardown(dev); init_error: @@ -3759,7 +3759,7 @@ static int stmmac_release(struct net_device *dev) stmmac_disable_all_queues(priv); for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) - hrtimer_cancel(&priv->tx_queue[chan].txtimer); + hrtimer_cancel(&priv->dma_conf.tx_queue[chan].txtimer); netif_tx_disable(dev); @@ -3825,7 +3825,7 @@ static bool stmmac_vlan_insert(struct stmmac_priv *priv, struct sk_buff *skb, return false; stmmac_set_tx_owner(priv, p); - tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_tx_size); + tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size); return true; } @@ -3843,7 +3843,7 @@ static bool stmmac_vlan_insert(struct stmmac_priv *priv, struct sk_buff *skb, static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des, int total_len, bool last_segment, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; struct dma_desc *desc; u32 buff_size; int tmp_len; @@ -3854,7 +3854,7 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des, dma_addr_t curr_addr; tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, - priv->dma_tx_size); + priv->dma_conf.dma_tx_size); WARN_ON(tx_q->tx_skbuff[tx_q->cur_tx]); if (tx_q->tbs & STMMAC_TBS_AVAIL) @@ -3882,7 +3882,7 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des, static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; int desc_size; if (likely(priv->extend_desc)) @@ -3944,7 +3944,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) dma_addr_t des; int i; - tx_q = &priv->tx_queue[queue]; + tx_q = &priv->dma_conf.tx_queue[queue]; first_tx = tx_q->cur_tx; /* Compute header lengths */ @@ -3984,7 +3984,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) stmmac_set_mss(priv, mss_desc, mss); tx_q->mss = mss; tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, - priv->dma_tx_size); + priv->dma_conf.dma_tx_size); WARN_ON(tx_q->tx_skbuff[tx_q->cur_tx]); } @@ -4096,7 +4096,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) * ndo_start_xmit will fill this descriptor the next time it's * called and stmmac_tx_clean may clean up to this descriptor. 
*/ - tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_tx_size); + tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size); if (unlikely(stmmac_tx_avail(priv, queue) <= (MAX_SKB_FRAGS + 1))) { netif_dbg(priv, hw, priv->dev, "%s: stop transmitted packets\n", @@ -4184,7 +4184,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) int entry, first_tx; dma_addr_t des; - tx_q = &priv->tx_queue[queue]; + tx_q = &priv->dma_conf.tx_queue[queue]; first_tx = tx_q->cur_tx; if (priv->tx_path_in_lpi_mode && priv->eee_sw_timer_en) @@ -4247,7 +4247,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) int len = skb_frag_size(frag); bool last_segment = (i == (nfrags - 1)); - entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size); + entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size); WARN_ON(tx_q->tx_skbuff[entry]); if (likely(priv->extend_desc)) @@ -4318,7 +4318,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) * ndo_start_xmit will fill this descriptor the next time it's * called and stmmac_tx_clean may clean up to this descriptor. */ - entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size); + entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size); tx_q->cur_tx = entry; if (netif_msg_pktdata(priv)) { @@ -4433,7 +4433,7 @@ static void stmmac_rx_vlan(struct net_device *dev, struct sk_buff *skb) */ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; int dirty = stmmac_rx_dirty(priv, queue); unsigned int entry = rx_q->dirty_rx; gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); @@ -4487,7 +4487,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue) dma_wmb(); stmmac_set_rx_owner(priv, p, use_rx_wd); - entry = STMMAC_GET_ENTRY(entry, priv->dma_rx_size); + entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_rx_size); } rx_q->dirty_rx = entry; rx_q->rx_tail_addr = rx_q->dma_rx_phy + @@ -4515,12 +4515,12 @@ static unsigned int stmmac_rx_buf1_len(struct stmmac_priv *priv, /* First descriptor, not last descriptor and not split header */ if (status & rx_not_ls) - return priv->dma_buf_sz; + return priv->dma_conf.dma_buf_sz; plen = stmmac_get_rx_frame_len(priv, p, coe); /* First descriptor and last descriptor and not split header */ - return min_t(unsigned int, priv->dma_buf_sz, plen); + return min_t(unsigned int, priv->dma_conf.dma_buf_sz, plen); } static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv, @@ -4536,7 +4536,7 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv, /* Not last descriptor */ if (status & rx_not_ls) - return priv->dma_buf_sz; + return priv->dma_conf.dma_buf_sz; plen = stmmac_get_rx_frame_len(priv, p, coe); @@ -4547,7 +4547,7 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv, static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue, struct xdp_frame *xdpf, bool dma_map) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; unsigned int entry = tx_q->cur_tx; struct dma_desc *tx_desc; dma_addr_t dma_addr; @@ -4610,7 +4610,7 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue, stmmac_enable_dma_transmission(priv, priv->ioaddr); - entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size); + entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size); tx_q->cur_tx = entry; return STMMAC_XDP_TX; @@ 
-4784,7 +4784,7 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue, static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; unsigned int entry = rx_q->dirty_rx; struct dma_desc *rx_desc = NULL; bool ret = true; @@ -4827,7 +4827,7 @@ static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget) dma_wmb(); stmmac_set_rx_owner(priv, rx_desc, use_rx_wd); - entry = STMMAC_GET_ENTRY(entry, priv->dma_rx_size); + entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_rx_size); } if (rx_desc) { @@ -4842,7 +4842,7 @@ static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget) static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; unsigned int count = 0, error = 0, len = 0; int dirty = stmmac_rx_dirty(priv, queue); unsigned int next_entry = rx_q->cur_rx; @@ -4864,7 +4864,7 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue) desc_size = sizeof(struct dma_desc); } - stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true, + stmmac_display_ring(priv, rx_head, priv->dma_conf.dma_rx_size, true, rx_q->dma_rx_phy, desc_size); } while (count < limit) { @@ -4911,7 +4911,7 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue) /* Prefetch the next RX descriptor */ rx_q->cur_rx = STMMAC_GET_ENTRY(rx_q->cur_rx, - priv->dma_rx_size); + priv->dma_conf.dma_rx_size); next_entry = rx_q->cur_rx; if (priv->extend_desc) @@ -5032,7 +5032,7 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue) */ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; struct stmmac_channel *ch = &priv->channel[queue]; unsigned int count = 0, error = 0, len = 0; int status = 0, coe = priv->hw->rx_csum; @@ -5045,7 +5045,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) int buf_sz; dma_dir = page_pool_get_dma_dir(rx_q->page_pool); - buf_sz = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE) * PAGE_SIZE; + buf_sz = DIV_ROUND_UP(priv->dma_conf.dma_buf_sz, PAGE_SIZE) * PAGE_SIZE; if (netif_msg_rx_status(priv)) { void *rx_head; @@ -5059,7 +5059,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) desc_size = sizeof(struct dma_desc); } - stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true, + stmmac_display_ring(priv, rx_head, priv->dma_conf.dma_rx_size, true, rx_q->dma_rx_phy, desc_size); } while (count < limit) { @@ -5103,7 +5103,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) break; rx_q->cur_rx = STMMAC_GET_ENTRY(rx_q->cur_rx, - priv->dma_rx_size); + priv->dma_conf.dma_rx_size); next_entry = rx_q->cur_rx; if (priv->extend_desc) @@ -5238,7 +5238,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) buf1_len, dma_dir); skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, buf->page, buf->page_offset, buf1_len, - priv->dma_buf_sz); + priv->dma_conf.dma_buf_sz); /* Data payload appended into SKB */ page_pool_release_page(rx_q->page_pool, buf->page); @@ -5250,7 +5250,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) buf2_len, dma_dir); skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, buf->sec_page, 
0, buf2_len, - priv->dma_buf_sz); + priv->dma_conf.dma_buf_sz); /* Data payload appended into SKB */ page_pool_release_page(rx_q->page_pool, buf->sec_page); @@ -5692,11 +5692,13 @@ static irqreturn_t stmmac_safety_interrupt(int irq, void *dev_id) static irqreturn_t stmmac_msi_intr_tx(int irq, void *data) { struct stmmac_tx_queue *tx_q = (struct stmmac_tx_queue *)data; + struct stmmac_dma_conf *dma_conf; int chan = tx_q->queue_index; struct stmmac_priv *priv; int status; - priv = container_of(tx_q, struct stmmac_priv, tx_queue[chan]); + dma_conf = container_of(tx_q, struct stmmac_dma_conf, tx_queue[chan]); + priv = container_of(dma_conf, struct stmmac_priv, dma_conf); if (unlikely(!data)) { netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); @@ -5722,10 +5724,12 @@ static irqreturn_t stmmac_msi_intr_tx(int irq, void *data) static irqreturn_t stmmac_msi_intr_rx(int irq, void *data) { struct stmmac_rx_queue *rx_q = (struct stmmac_rx_queue *)data; + struct stmmac_dma_conf *dma_conf; int chan = rx_q->queue_index; struct stmmac_priv *priv; - priv = container_of(rx_q, struct stmmac_priv, rx_queue[chan]); + dma_conf = container_of(rx_q, struct stmmac_dma_conf, rx_queue[chan]); + priv = container_of(dma_conf, struct stmmac_priv, dma_conf); if (unlikely(!data)) { netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); @@ -5756,10 +5760,10 @@ static void stmmac_poll_controller(struct net_device *dev) if (priv->plat->multi_msi_en) { for (i = 0; i < priv->plat->rx_queues_to_use; i++) - stmmac_msi_intr_rx(0, &priv->rx_queue[i]); + stmmac_msi_intr_rx(0, &priv->dma_conf.rx_queue[i]); for (i = 0; i < priv->plat->tx_queues_to_use; i++) - stmmac_msi_intr_tx(0, &priv->tx_queue[i]); + stmmac_msi_intr_tx(0, &priv->dma_conf.tx_queue[i]); } else { disable_irq(dev->irq); stmmac_interrupt(dev->irq, dev); @@ -5938,34 +5942,34 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v) return 0; for (queue = 0; queue < rx_count; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; seq_printf(seq, "RX Queue %d:\n", queue); if (priv->extend_desc) { seq_printf(seq, "Extended descriptor ring:\n"); sysfs_display_ring((void *)rx_q->dma_erx, - priv->dma_rx_size, 1, seq, rx_q->dma_rx_phy); + priv->dma_conf.dma_rx_size, 1, seq, rx_q->dma_rx_phy); } else { seq_printf(seq, "Descriptor ring:\n"); sysfs_display_ring((void *)rx_q->dma_rx, - priv->dma_rx_size, 0, seq, rx_q->dma_rx_phy); + priv->dma_conf.dma_rx_size, 0, seq, rx_q->dma_rx_phy); } } for (queue = 0; queue < tx_count; queue++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; seq_printf(seq, "TX Queue %d:\n", queue); if (priv->extend_desc) { seq_printf(seq, "Extended descriptor ring:\n"); sysfs_display_ring((void *)tx_q->dma_etx, - priv->dma_tx_size, 1, seq, tx_q->dma_tx_phy); + priv->dma_conf.dma_tx_size, 1, seq, tx_q->dma_tx_phy); } else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) { seq_printf(seq, "Descriptor ring:\n"); sysfs_display_ring((void *)tx_q->dma_tx, - priv->dma_tx_size, 0, seq, tx_q->dma_tx_phy); + priv->dma_conf.dma_tx_size, 0, seq, tx_q->dma_tx_phy); } } @@ -6304,7 +6308,7 @@ void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue) void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; struct stmmac_channel *ch = &priv->channel[queue]; 
unsigned long flags; u32 buf_size; @@ -6341,7 +6345,7 @@ void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue) rx_q->queue_index); } else { stmmac_set_dma_bfsize(priv, priv->ioaddr, - priv->dma_buf_sz, + priv->dma_conf.dma_buf_sz, rx_q->queue_index); } @@ -6367,7 +6371,7 @@ void stmmac_disable_tx_queue(struct stmmac_priv *priv, u32 queue) void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; struct stmmac_channel *ch = &priv->channel[queue]; unsigned long flags; int ret; @@ -6414,7 +6418,7 @@ void stmmac_xdp_release(struct net_device *dev) stmmac_disable_all_queues(priv); for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) - hrtimer_cancel(&priv->tx_queue[chan].txtimer); + hrtimer_cancel(&priv->dma_conf.tx_queue[chan].txtimer); /* Free the IRQ lines */ stmmac_free_irq(dev, REQ_IRQ_ERR_ALL, 0); @@ -6473,7 +6477,7 @@ int stmmac_xdp_open(struct net_device *dev) /* DMA RX Channel Configuration */ for (chan = 0; chan < rx_cnt; chan++) { - rx_q = &priv->rx_queue[chan]; + rx_q = &priv->dma_conf.rx_queue[chan]; stmmac_init_rx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, rx_q->dma_rx_phy, chan); @@ -6491,7 +6495,7 @@ int stmmac_xdp_open(struct net_device *dev) rx_q->queue_index); } else { stmmac_set_dma_bfsize(priv, priv->ioaddr, - priv->dma_buf_sz, + priv->dma_conf.dma_buf_sz, rx_q->queue_index); } @@ -6500,7 +6504,7 @@ int stmmac_xdp_open(struct net_device *dev) /* DMA TX Channel Configuration */ for (chan = 0; chan < tx_cnt; chan++) { - tx_q = &priv->tx_queue[chan]; + tx_q = &priv->dma_conf.tx_queue[chan]; stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, tx_q->dma_tx_phy, chan); @@ -6533,7 +6537,7 @@ int stmmac_xdp_open(struct net_device *dev) irq_error: for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) - hrtimer_cancel(&priv->tx_queue[chan].txtimer); + hrtimer_cancel(&priv->dma_conf.tx_queue[chan].txtimer); stmmac_hw_teardown(dev); init_error: @@ -6560,8 +6564,8 @@ int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags) queue >= priv->plat->tx_queues_to_use) return -EINVAL; - rx_q = &priv->rx_queue[queue]; - tx_q = &priv->tx_queue[queue]; + rx_q = &priv->dma_conf.rx_queue[queue]; + tx_q = &priv->dma_conf.tx_queue[queue]; ch = &priv->channel[queue]; if (!rx_q->xsk_pool && !tx_q->xsk_pool) @@ -6816,8 +6820,8 @@ int stmmac_reinit_ringparam(struct net_device *dev, u32 rx_size, u32 tx_size) if (netif_running(dev)) stmmac_release(dev); - priv->dma_rx_size = rx_size; - priv->dma_tx_size = tx_size; + priv->dma_conf.dma_rx_size = rx_size; + priv->dma_conf.dma_tx_size = tx_size; if (netif_running(dev)) ret = stmmac_open(dev); @@ -7263,7 +7267,7 @@ int stmmac_suspend(struct device *dev) stmmac_disable_all_queues(priv); for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) - hrtimer_cancel(&priv->tx_queue[chan].txtimer); + hrtimer_cancel(&priv->dma_conf.tx_queue[chan].txtimer); if (priv->eee_enabled) { priv->tx_path_in_lpi_mode = false; @@ -7314,7 +7318,7 @@ EXPORT_SYMBOL_GPL(stmmac_suspend); static void stmmac_reset_rx_queue(struct stmmac_priv *priv, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; rx_q->cur_rx = 0; rx_q->dirty_rx = 0; @@ -7322,7 +7326,7 @@ static void stmmac_reset_rx_queue(struct stmmac_priv *priv, u32 queue) static void stmmac_reset_tx_queue(struct stmmac_priv *priv, u32 queue) { - struct stmmac_tx_queue 
*tx_q = &priv->tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; tx_q->cur_tx = 0; tx_q->dirty_tx = 0;
From patchwork Tue Jun 28 01:33:41 2022
X-Patchwork-Submitter: Christian Marangi
X-Patchwork-Id: 12897515
From: Christian Marangi
To: Giuseppe Cavallaro , Alexandre Torgue , Jose Abreu , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Maxime Coquelin , Russell King , netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Christian Marangi
Subject: [net-next PATCH RFC 4/5] net: ethernet: stmicro: stmmac: generate stmmac dma conf before open
Date: Tue, 28 Jun 2022 03:33:41 +0200
Message-Id: <20220628013342.13581-5-ansuelsmth@gmail.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220628013342.13581-1-ansuelsmth@gmail.com>
References: <20220628013342.13581-1-ansuelsmth@gmail.com>

Rework the driver to generate the stmmac dma_conf before stmmac_open. This permits a caller to first check whether it is possible to allocate a new dma_conf and then pass it directly to __stmmac_open to "open" the interface with the new configuration.
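For readers who want the shape of the change before reading the diff: the patch splits the old open path into stmmac_setup_dma_desc() (allocate and initialize a complete dma_conf for a given MTU) and __stmmac_open() (apply an already-built dma_conf), with stmmac_open() reduced to a thin wrapper around the two. The following is a minimal, self-contained C sketch of that allocate-then-commit pattern; every type and function name in it (dma_conf, priv, setup_dma_conf, do_open, open_iface) is a simplified stand-in for illustration, not the driver's real API.

/* Illustrative sketch only: simplified stand-ins, not stmmac code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dma_conf {
	unsigned int rx_size;
	unsigned int tx_size;
	unsigned int buf_sz;
};

struct priv {
	struct dma_conf dma_conf;	/* configuration currently in use */
};

/* Build a complete candidate configuration for the given MTU. */
static struct dma_conf *setup_dma_conf(unsigned int mtu)
{
	struct dma_conf *conf = calloc(1, sizeof(*conf));

	if (!conf)
		return NULL;
	conf->rx_size = 512;
	conf->tx_size = 512;
	conf->buf_sz = mtu < 1536 ? 1536 : mtu;
	return conf;
}

/* "Open" with an already allocated configuration: copy it into the
 * private state only once it is known to be valid.
 */
static int do_open(struct priv *priv, const struct dma_conf *conf)
{
	memcpy(&priv->dma_conf, conf, sizeof(priv->dma_conf));
	printf("opened with buf_sz=%u rx=%u tx=%u\n",
	       priv->dma_conf.buf_sz, priv->dma_conf.rx_size,
	       priv->dma_conf.tx_size);
	return 0;
}

static int open_iface(struct priv *priv, unsigned int mtu)
{
	struct dma_conf *conf = setup_dma_conf(mtu);
	int ret;

	if (!conf)
		return -1;	/* nothing has been touched on failure */
	ret = do_open(priv, conf);
	free(conf);		/* the content now lives in priv->dma_conf */
	return ret;
}

int main(void)
{
	struct priv priv = { { 0 } };

	return open_iface(&priv, 1500);
}

The value of the split is that any allocation failure is detected before the currently running configuration is touched, which is what the follow-up MTU-change patch relies on.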
Signed-off-by: Christian Marangi --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 459 +++++++++++------- 1 file changed, 287 insertions(+), 172 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index c211d0274bba..330aac10a6e7 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1230,7 +1230,8 @@ static int stmmac_phy_setup(struct stmmac_priv *priv) return 0; } -static void stmmac_display_rx_rings(struct stmmac_priv *priv) +static void stmmac_display_rx_rings(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 rx_cnt = priv->plat->rx_queues_to_use; unsigned int desc_size; @@ -1239,7 +1240,7 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv) /* Display RX rings */ for (queue = 0; queue < rx_cnt; queue++) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; pr_info("\tRX Queue %u rings\n", queue); @@ -1252,12 +1253,13 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv) } /* Display RX ring */ - stmmac_display_ring(priv, head_rx, priv->dma_conf.dma_rx_size, true, + stmmac_display_ring(priv, head_rx, dma_conf->dma_rx_size, true, rx_q->dma_rx_phy, desc_size); } } -static void stmmac_display_tx_rings(struct stmmac_priv *priv) +static void stmmac_display_tx_rings(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 tx_cnt = priv->plat->tx_queues_to_use; unsigned int desc_size; @@ -1266,7 +1268,7 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv) /* Display TX rings */ for (queue = 0; queue < tx_cnt; queue++) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; pr_info("\tTX Queue %d rings\n", queue); @@ -1281,18 +1283,19 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv) desc_size = sizeof(struct dma_desc); } - stmmac_display_ring(priv, head_tx, priv->dma_conf.dma_tx_size, false, + stmmac_display_ring(priv, head_tx, dma_conf->dma_tx_size, false, tx_q->dma_tx_phy, desc_size); } } -static void stmmac_display_rings(struct stmmac_priv *priv) +static void stmmac_display_rings(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { /* Display RX ring */ - stmmac_display_rx_rings(priv); + stmmac_display_rx_rings(priv, dma_conf); /* Display TX ring */ - stmmac_display_tx_rings(priv); + stmmac_display_tx_rings(priv, dma_conf); } static int stmmac_set_bfsize(int mtu, int bufsize) @@ -1316,44 +1319,50 @@ static int stmmac_set_bfsize(int mtu, int bufsize) /** * stmmac_clear_rx_descriptors - clear RX descriptors * @priv: driver private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index * Description: this function is called to clear the RX descriptors * in case of both basic and extended descriptors are used. 
*/ -static void stmmac_clear_rx_descriptors(struct stmmac_priv *priv, u32 queue) +static void stmmac_clear_rx_descriptors(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; int i; /* Clear the RX descriptors */ - for (i = 0; i < priv->dma_conf.dma_rx_size; i++) + for (i = 0; i < dma_conf->dma_rx_size; i++) if (priv->extend_desc) stmmac_init_rx_desc(priv, &rx_q->dma_erx[i].basic, priv->use_riwt, priv->mode, - (i == priv->dma_conf.dma_rx_size - 1), - priv->dma_conf.dma_buf_sz); + (i == dma_conf->dma_rx_size - 1), + dma_conf->dma_buf_sz); else stmmac_init_rx_desc(priv, &rx_q->dma_rx[i], priv->use_riwt, priv->mode, - (i == priv->dma_conf.dma_rx_size - 1), - priv->dma_conf.dma_buf_sz); + (i == dma_conf->dma_rx_size - 1), + dma_conf->dma_buf_sz); } /** * stmmac_clear_tx_descriptors - clear tx descriptors * @priv: driver private structure + * @dma_conf: structure to take the dma data * @queue: TX queue index. * Description: this function is called to clear the TX descriptors * in case of both basic and extended descriptors are used. */ -static void stmmac_clear_tx_descriptors(struct stmmac_priv *priv, u32 queue) +static void stmmac_clear_tx_descriptors(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; int i; /* Clear the TX descriptors */ - for (i = 0; i < priv->dma_conf.dma_tx_size; i++) { - int last = (i == (priv->dma_conf.dma_tx_size - 1)); + for (i = 0; i < dma_conf->dma_tx_size; i++) { + int last = (i == (dma_conf->dma_tx_size - 1)); struct dma_desc *p; if (priv->extend_desc) @@ -1370,10 +1379,12 @@ static void stmmac_clear_tx_descriptors(struct stmmac_priv *priv, u32 queue) /** * stmmac_clear_descriptors - clear descriptors * @priv: driver private structure + * @dma_conf: structure to take the dma data * Description: this function is called to clear the TX and RX descriptors * in case of both basic and extended descriptors are used. */ -static void stmmac_clear_descriptors(struct stmmac_priv *priv) +static void stmmac_clear_descriptors(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 rx_queue_cnt = priv->plat->rx_queues_to_use; u32 tx_queue_cnt = priv->plat->tx_queues_to_use; @@ -1381,16 +1392,17 @@ static void stmmac_clear_descriptors(struct stmmac_priv *priv) /* Clear the RX descriptors */ for (queue = 0; queue < rx_queue_cnt; queue++) - stmmac_clear_rx_descriptors(priv, queue); + stmmac_clear_rx_descriptors(priv, dma_conf, queue); /* Clear the TX descriptors */ for (queue = 0; queue < tx_queue_cnt; queue++) - stmmac_clear_tx_descriptors(priv, queue); + stmmac_clear_tx_descriptors(priv, dma_conf, queue); } /** * stmmac_init_rx_buffers - init the RX descriptor buffer. * @priv: driver private structure + * @dma_conf: structure to take the dma data * @p: descriptor pointer * @i: descriptor index * @flags: gfp flag @@ -1398,10 +1410,12 @@ static void stmmac_clear_descriptors(struct stmmac_priv *priv) * Description: this function is called to allocate a receive buffer, perform * the DMA mapping and init the descriptor. 
*/ -static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, +static int stmmac_init_rx_buffers(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + struct dma_desc *p, int i, gfp_t flags, u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); @@ -1430,7 +1444,7 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, buf->addr = page_pool_get_dma_addr(buf->page) + buf->page_offset; stmmac_set_desc_addr(priv, p, buf->addr); - if (priv->dma_conf.dma_buf_sz == BUF_SIZE_16KiB) + if (dma_conf->dma_buf_sz == BUF_SIZE_16KiB) stmmac_init_desc3(priv, p); return 0; @@ -1439,12 +1453,13 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, /** * stmmac_free_rx_buffer - free RX dma buffers * @priv: private structure - * @queue: RX queue index + * @rx_q: RX queue * @i: buffer index. */ -static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i) +static void stmmac_free_rx_buffer(struct stmmac_priv *priv, + struct stmmac_rx_queue *rx_q, + int i) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; if (buf->page) @@ -1459,12 +1474,15 @@ static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i) /** * stmmac_free_tx_buffer - free RX dma buffers * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index * @i: buffer index. */ -static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i) +static void stmmac_free_tx_buffer(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue, int i) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; if (tx_q->tx_skbuff_dma[i].buf && tx_q->tx_skbuff_dma[i].buf_type != STMMAC_TXBUF_T_XDP_TX) { @@ -1503,23 +1521,28 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i) /** * dma_free_rx_skbufs - free RX dma buffers * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index */ -static void dma_free_rx_skbufs(struct stmmac_priv *priv, u32 queue) +static void dma_free_rx_skbufs(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; int i; - for (i = 0; i < priv->dma_conf.dma_rx_size; i++) - stmmac_free_rx_buffer(priv, queue, i); + for (i = 0; i < dma_conf->dma_rx_size; i++) + stmmac_free_rx_buffer(priv, rx_q, i); } -static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, - gfp_t flags) +static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue, gfp_t flags) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; int i; - for (i = 0; i < priv->dma_conf.dma_rx_size; i++) { + for (i = 0; i < dma_conf->dma_rx_size; i++) { struct dma_desc *p; int ret; @@ -1528,7 +1551,7 @@ static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, else p = rx_q->dma_rx + i; - ret = stmmac_init_rx_buffers(priv, p, i, flags, + ret = stmmac_init_rx_buffers(priv, dma_conf, p, i, flags, queue); if (ret) return ret; @@ -1542,14 +1565,17 @@ static int stmmac_alloc_rx_buffers(struct stmmac_priv 
*priv, u32 queue, /** * dma_free_rx_xskbufs - free RX dma buffers from XSK pool * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index */ -static void dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue) +static void dma_free_rx_xskbufs(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; int i; - for (i = 0; i < priv->dma_conf.dma_rx_size; i++) { + for (i = 0; i < dma_conf->dma_rx_size; i++) { struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; if (!buf->xdp) @@ -1560,12 +1586,14 @@ static void dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue) } } -static int stmmac_alloc_rx_buffers_zc(struct stmmac_priv *priv, u32 queue) +static int stmmac_alloc_rx_buffers_zc(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; int i; - for (i = 0; i < priv->dma_conf.dma_rx_size; i++) { + for (i = 0; i < dma_conf->dma_rx_size; i++) { struct stmmac_rx_buffer *buf; dma_addr_t dma_addr; struct dma_desc *p; @@ -1600,22 +1628,25 @@ static struct xsk_buff_pool *stmmac_get_xsk_pool(struct stmmac_priv *priv, u32 q /** * __init_dma_rx_desc_rings - init the RX descriptor ring (per queue) * @priv: driver private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index * @flags: gfp flag. * Description: this function initializes the DMA RX descriptors * and allocates the socket buffers. It supports the chained and ring * modes. */ -static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t flags) +static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue, gfp_t flags) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; int ret; netif_dbg(priv, probe, priv->dev, "(%s) dma_rx_phy=0x%08x\n", __func__, (u32)rx_q->dma_rx_phy); - stmmac_clear_rx_descriptors(priv, queue); + stmmac_clear_rx_descriptors(priv, dma_conf, queue); xdp_rxq_info_unreg_mem_model(&rx_q->xdp_rxq); @@ -1642,9 +1673,9 @@ static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t f /* RX XDP ZC buffer pool may not be populated, e.g. * xdpsock TX-only. 
*/ - stmmac_alloc_rx_buffers_zc(priv, queue); + stmmac_alloc_rx_buffers_zc(priv, dma_conf, queue); } else { - ret = stmmac_alloc_rx_buffers(priv, queue, flags); + ret = stmmac_alloc_rx_buffers(priv, dma_conf, queue, flags); if (ret < 0) return -ENOMEM; } @@ -1654,17 +1685,19 @@ static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t f if (priv->extend_desc) stmmac_mode_init(priv, rx_q->dma_erx, rx_q->dma_rx_phy, - priv->dma_conf.dma_rx_size, 1); + dma_conf->dma_rx_size, 1); else stmmac_mode_init(priv, rx_q->dma_rx, rx_q->dma_rx_phy, - priv->dma_conf.dma_rx_size, 0); + dma_conf->dma_rx_size, 0); } return 0; } -static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) +static int init_dma_rx_desc_rings(struct net_device *dev, + struct stmmac_dma_conf *dma_conf, + gfp_t flags) { struct stmmac_priv *priv = netdev_priv(dev); u32 rx_count = priv->plat->rx_queues_to_use; @@ -1676,7 +1709,7 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) "SKB addresses:\nskb\t\tskb data\tdma data\n"); for (queue = 0; queue < rx_count; queue++) { - ret = __init_dma_rx_desc_rings(priv, queue, flags); + ret = __init_dma_rx_desc_rings(priv, dma_conf, queue, flags); if (ret) goto err_init_rx_buffers; } @@ -1685,12 +1718,12 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) err_init_rx_buffers: while (queue >= 0) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; if (rx_q->xsk_pool) - dma_free_rx_xskbufs(priv, queue); + dma_free_rx_xskbufs(priv, dma_conf, queue); else - dma_free_rx_skbufs(priv, queue); + dma_free_rx_skbufs(priv, dma_conf, queue); rx_q->buf_alloc_num = 0; rx_q->xsk_pool = NULL; @@ -1709,9 +1742,11 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) * and allocates the socket buffers. It supports the chained and ring * modes. 
*/ -static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) +static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; int i; netif_dbg(priv, probe, priv->dev, @@ -1723,16 +1758,16 @@ static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) if (priv->extend_desc) stmmac_mode_init(priv, tx_q->dma_etx, tx_q->dma_tx_phy, - priv->dma_conf.dma_tx_size, 1); + dma_conf->dma_tx_size, 1); else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) stmmac_mode_init(priv, tx_q->dma_tx, tx_q->dma_tx_phy, - priv->dma_conf.dma_tx_size, 0); + dma_conf->dma_tx_size, 0); } tx_q->xsk_pool = stmmac_get_xsk_pool(priv, queue); - for (i = 0; i < priv->dma_conf.dma_tx_size; i++) { + for (i = 0; i < dma_conf->dma_tx_size; i++) { struct dma_desc *p; if (priv->extend_desc) @@ -1754,7 +1789,8 @@ static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) return 0; } -static int init_dma_tx_desc_rings(struct net_device *dev) +static int init_dma_tx_desc_rings(struct net_device *dev, + struct stmmac_dma_conf *dma_conf) { struct stmmac_priv *priv = netdev_priv(dev); u32 tx_queue_cnt; @@ -1763,7 +1799,7 @@ static int init_dma_tx_desc_rings(struct net_device *dev) tx_queue_cnt = priv->plat->tx_queues_to_use; for (queue = 0; queue < tx_queue_cnt; queue++) - __init_dma_tx_desc_rings(priv, queue); + __init_dma_tx_desc_rings(priv, dma_conf, queue); return 0; } @@ -1771,26 +1807,29 @@ static int init_dma_tx_desc_rings(struct net_device *dev) /** * init_dma_desc_rings - init the RX/TX descriptor rings * @dev: net device structure + * @dma_conf: structure to take the dma data * @flags: gfp flag. * Description: this function initializes the DMA RX/TX descriptors * and allocates the socket buffers. It supports the chained and ring * modes. 
*/ -static int init_dma_desc_rings(struct net_device *dev, gfp_t flags) +static int init_dma_desc_rings(struct net_device *dev, + struct stmmac_dma_conf *dma_conf, + gfp_t flags) { struct stmmac_priv *priv = netdev_priv(dev); int ret; - ret = init_dma_rx_desc_rings(dev, flags); + ret = init_dma_rx_desc_rings(dev, dma_conf, flags); if (ret) return ret; - ret = init_dma_tx_desc_rings(dev); + ret = init_dma_tx_desc_rings(dev, dma_conf); - stmmac_clear_descriptors(priv); + stmmac_clear_descriptors(priv, dma_conf); if (netif_msg_hw(priv)) - stmmac_display_rings(priv); + stmmac_display_rings(priv, dma_conf); return ret; } @@ -1798,17 +1837,20 @@ static int init_dma_desc_rings(struct net_device *dev, gfp_t flags) /** * dma_free_tx_skbufs - free TX dma buffers * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: TX queue index */ -static void dma_free_tx_skbufs(struct stmmac_priv *priv, u32 queue) +static void dma_free_tx_skbufs(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; int i; tx_q->xsk_frames_done = 0; - for (i = 0; i < priv->dma_conf.dma_tx_size; i++) - stmmac_free_tx_buffer(priv, queue, i); + for (i = 0; i < dma_conf->dma_tx_size; i++) + stmmac_free_tx_buffer(priv, dma_conf, queue, i); if (tx_q->xsk_pool && tx_q->xsk_frames_done) { xsk_tx_completed(tx_q->xsk_pool, tx_q->xsk_frames_done); @@ -1827,34 +1869,37 @@ static void stmmac_free_tx_skbufs(struct stmmac_priv *priv) u32 queue; for (queue = 0; queue < tx_queue_cnt; queue++) - dma_free_tx_skbufs(priv, queue); + dma_free_tx_skbufs(priv, &priv->dma_conf, queue); } /** * __free_dma_rx_desc_resources - free RX dma desc resources (per queue) * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index */ -static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) +static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; /* Release the DMA RX socket buffers */ if (rx_q->xsk_pool) - dma_free_rx_xskbufs(priv, queue); + dma_free_rx_xskbufs(priv, dma_conf, queue); else - dma_free_rx_skbufs(priv, queue); + dma_free_rx_skbufs(priv, dma_conf, queue); rx_q->buf_alloc_num = 0; rx_q->xsk_pool = NULL; /* Free DMA regions of consistent memory previously allocated */ if (!priv->extend_desc) - dma_free_coherent(priv->device, priv->dma_conf.dma_rx_size * + dma_free_coherent(priv->device, dma_conf->dma_rx_size * sizeof(struct dma_desc), rx_q->dma_rx, rx_q->dma_rx_phy); else - dma_free_coherent(priv->device, priv->dma_conf.dma_rx_size * + dma_free_coherent(priv->device, dma_conf->dma_rx_size * sizeof(struct dma_extended_desc), rx_q->dma_erx, rx_q->dma_rx_phy); @@ -1866,29 +1911,33 @@ static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) page_pool_destroy(rx_q->page_pool); } -static void free_dma_rx_desc_resources(struct stmmac_priv *priv) +static void free_dma_rx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 rx_count = priv->plat->rx_queues_to_use; u32 queue; /* Free RX queue resources */ for (queue = 0; queue < rx_count; queue++) - __free_dma_rx_desc_resources(priv, queue); + __free_dma_rx_desc_resources(priv, dma_conf, queue); } /** * __free_dma_tx_desc_resources - free TX 
dma desc resources (per queue) * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: TX queue index */ -static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) +static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; size_t size; void *addr; /* Release the DMA TX socket buffers */ - dma_free_tx_skbufs(priv, queue); + dma_free_tx_skbufs(priv, dma_conf, queue); if (priv->extend_desc) { size = sizeof(struct dma_extended_desc); @@ -1901,7 +1950,7 @@ static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) addr = tx_q->dma_tx; } - size *= priv->dma_conf.dma_tx_size; + size *= dma_conf->dma_tx_size; dma_free_coherent(priv->device, size, addr, tx_q->dma_tx_phy); @@ -1909,28 +1958,32 @@ static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) kfree(tx_q->tx_skbuff); } -static void free_dma_tx_desc_resources(struct stmmac_priv *priv) +static void free_dma_tx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 tx_count = priv->plat->tx_queues_to_use; u32 queue; /* Free TX queue resources */ for (queue = 0; queue < tx_count; queue++) - __free_dma_tx_desc_resources(priv, queue); + __free_dma_tx_desc_resources(priv, dma_conf, queue); } /** * __alloc_dma_rx_desc_resources - alloc RX resources (per queue). * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: RX queue index * Description: according to which descriptor can be used (extend or basic) * this function allocates the resources for TX and RX paths. In case of * reception, for example, it pre-allocated the RX socket buffer in order to * allow zero-copy mechanism. 
*/ -static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) +static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue]; + struct stmmac_rx_queue *rx_q = &dma_conf->rx_queue[queue]; struct stmmac_channel *ch = &priv->channel[queue]; bool xdp_prog = stmmac_xdp_is_enabled(priv); struct page_pool_params pp_params = { 0 }; @@ -1942,8 +1995,8 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) rx_q->priv_data = priv; pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; - pp_params.pool_size = priv->dma_conf.dma_rx_size; - num_pages = DIV_ROUND_UP(priv->dma_conf.dma_buf_sz, PAGE_SIZE); + pp_params.pool_size = dma_conf->dma_rx_size; + num_pages = DIV_ROUND_UP(dma_conf->dma_buf_sz, PAGE_SIZE); pp_params.order = ilog2(num_pages); pp_params.nid = dev_to_node(priv->device); pp_params.dev = priv->device; @@ -1958,7 +2011,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) return ret; } - rx_q->buf_pool = kcalloc(priv->dma_conf.dma_rx_size, + rx_q->buf_pool = kcalloc(dma_conf->dma_rx_size, sizeof(*rx_q->buf_pool), GFP_KERNEL); if (!rx_q->buf_pool) @@ -1966,7 +2019,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) if (priv->extend_desc) { rx_q->dma_erx = dma_alloc_coherent(priv->device, - priv->dma_conf.dma_rx_size * + dma_conf->dma_rx_size * sizeof(struct dma_extended_desc), &rx_q->dma_rx_phy, GFP_KERNEL); @@ -1975,7 +2028,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) } else { rx_q->dma_rx = dma_alloc_coherent(priv->device, - priv->dma_conf.dma_rx_size * + dma_conf->dma_rx_size * sizeof(struct dma_desc), &rx_q->dma_rx_phy, GFP_KERNEL); @@ -2000,7 +2053,8 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) return 0; } -static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) +static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 rx_count = priv->plat->rx_queues_to_use; u32 queue; @@ -2008,7 +2062,7 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) /* RX queues buffers and DMA */ for (queue = 0; queue < rx_count; queue++) { - ret = __alloc_dma_rx_desc_resources(priv, queue); + ret = __alloc_dma_rx_desc_resources(priv, dma_conf, queue); if (ret) goto err_dma; } @@ -2016,7 +2070,7 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) return 0; err_dma: - free_dma_rx_desc_resources(priv); + free_dma_rx_desc_resources(priv, dma_conf); return ret; } @@ -2024,28 +2078,31 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) /** * __alloc_dma_tx_desc_resources - alloc TX resources (per queue). * @priv: private structure + * @dma_conf: structure to take the dma data * @queue: TX queue index * Description: according to which descriptor can be used (extend or basic) * this function allocates the resources for TX and RX paths. In case of * reception, for example, it pre-allocated the RX socket buffer in order to * allow zero-copy mechanism. 
*/ -static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) +static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf, + u32 queue) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[queue]; size_t size; void *addr; tx_q->queue_index = queue; tx_q->priv_data = priv; - tx_q->tx_skbuff_dma = kcalloc(priv->dma_conf.dma_tx_size, + tx_q->tx_skbuff_dma = kcalloc(dma_conf->dma_tx_size, sizeof(*tx_q->tx_skbuff_dma), GFP_KERNEL); if (!tx_q->tx_skbuff_dma) return -ENOMEM; - tx_q->tx_skbuff = kcalloc(priv->dma_conf.dma_tx_size, + tx_q->tx_skbuff = kcalloc(dma_conf->dma_tx_size, sizeof(struct sk_buff *), GFP_KERNEL); if (!tx_q->tx_skbuff) @@ -2058,7 +2115,7 @@ static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) else size = sizeof(struct dma_desc); - size *= priv->dma_conf.dma_tx_size; + size *= dma_conf->dma_tx_size; addr = dma_alloc_coherent(priv->device, size, &tx_q->dma_tx_phy, GFP_KERNEL); @@ -2075,7 +2132,8 @@ static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) return 0; } -static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) +static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { u32 tx_count = priv->plat->tx_queues_to_use; u32 queue; @@ -2083,7 +2141,7 @@ static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) /* TX queues buffers and DMA */ for (queue = 0; queue < tx_count; queue++) { - ret = __alloc_dma_tx_desc_resources(priv, queue); + ret = __alloc_dma_tx_desc_resources(priv, dma_conf, queue); if (ret) goto err_dma; } @@ -2091,27 +2149,29 @@ static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) return 0; err_dma: - free_dma_tx_desc_resources(priv); + free_dma_tx_desc_resources(priv, dma_conf); return ret; } /** * alloc_dma_desc_resources - alloc TX/RX resources. * @priv: private structure + * @dma_conf: structure to take the dma data * Description: according to which descriptor can be used (extend or basic) * this function allocates the resources for TX and RX paths. In case of * reception, for example, it pre-allocated the RX socket buffer in order to * allow zero-copy mechanism. */ -static int alloc_dma_desc_resources(struct stmmac_priv *priv) +static int alloc_dma_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { /* RX Allocation */ - int ret = alloc_dma_rx_desc_resources(priv); + int ret = alloc_dma_rx_desc_resources(priv, dma_conf); if (ret) return ret; - ret = alloc_dma_tx_desc_resources(priv); + ret = alloc_dma_tx_desc_resources(priv, dma_conf); return ret; } @@ -2119,16 +2179,18 @@ static int alloc_dma_desc_resources(struct stmmac_priv *priv) /** * free_dma_desc_resources - free dma desc resources * @priv: private structure + * @dma_conf: structure to take the dma data */ -static void free_dma_desc_resources(struct stmmac_priv *priv) +static void free_dma_desc_resources(struct stmmac_priv *priv, + struct stmmac_dma_conf *dma_conf) { /* Release the DMA TX socket buffers */ - free_dma_tx_desc_resources(priv); + free_dma_tx_desc_resources(priv, dma_conf); /* Release the DMA RX socket buffers later * to ensure all pending XDP_TX buffers are returned. 
*/ - free_dma_rx_desc_resources(priv); + free_dma_rx_desc_resources(priv, dma_conf); } /** @@ -2634,8 +2696,8 @@ static void stmmac_tx_err(struct stmmac_priv *priv, u32 chan) netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, chan)); stmmac_stop_tx_dma(priv, chan); - dma_free_tx_skbufs(priv, chan); - stmmac_clear_tx_descriptors(priv, chan); + dma_free_tx_skbufs(priv, &priv->dma_conf, chan); + stmmac_clear_tx_descriptors(priv, &priv->dma_conf, chan); stmmac_reset_tx_queue(priv, chan); stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, tx_q->dma_tx_phy, chan); @@ -3618,19 +3680,93 @@ static int stmmac_request_irq(struct net_device *dev) } /** - * stmmac_open - open entry point of the driver + * stmmac_setup_dma_desc - Generate a dma_conf and allocate DMA queue + * @priv: driver private structure + * @mtu: MTU to setup the dma queue and buf with + * Description: Allocate and generate a dma_conf based on the provided MTU. + * Allocate the Tx/Rx DMA queue and init them. + * Return value: + * the dma_conf allocated struct on success and an appropriate ERR_PTR on failure. + */ +static struct stmmac_dma_conf * +stmmac_setup_dma_desc(struct stmmac_priv *priv, unsigned int mtu) +{ + struct stmmac_dma_conf *dma_conf; + int chan, bfsize, ret; + + dma_conf = kzalloc(sizeof(*dma_conf), GFP_KERNEL); + if (!dma_conf) { + netdev_err(priv->dev, "%s: DMA conf allocation failed\n", + __func__); + return ERR_PTR(-ENOMEM); + } + + bfsize = stmmac_set_16kib_bfsize(priv, mtu); + if (bfsize < 0) + bfsize = 0; + + if (bfsize < BUF_SIZE_16KiB) + bfsize = stmmac_set_bfsize(mtu, 0); + + dma_conf->dma_buf_sz = bfsize; + /* Chose the tx/rx size from the already defined one in the + * priv struct. (if defined) + */ + dma_conf->dma_tx_size = priv->dma_conf.dma_tx_size; + dma_conf->dma_rx_size = priv->dma_conf.dma_rx_size; + + if (!dma_conf->dma_tx_size) + dma_conf->dma_tx_size = DMA_DEFAULT_TX_SIZE; + if (!dma_conf->dma_rx_size) + dma_conf->dma_rx_size = DMA_DEFAULT_RX_SIZE; + + /* Earlier check for TBS */ + for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) { + struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[chan]; + int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en; + + /* Setup per-TXQ tbs flag before TX descriptor alloc */ + tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0; + } + + ret = alloc_dma_desc_resources(priv, dma_conf); + if (ret < 0) { + netdev_err(priv->dev, "%s: DMA descriptors allocation failed\n", + __func__); + goto alloc_error; + } + + ret = init_dma_desc_rings(priv->dev, dma_conf, GFP_KERNEL); + if (ret < 0) { + netdev_err(priv->dev, "%s: DMA descriptors initialization failed\n", + __func__); + goto init_error; + } + + return dma_conf; + +init_error: + free_dma_desc_resources(priv, dma_conf); +alloc_error: + kfree(dma_conf); + return ERR_PTR(ret); +} + +/** + * __stmmac_open - open entry point of the driver * @dev : pointer to the device structure. + * @dma_conf : structure to take the dma data * Description: * This function is the open entry point of the driver. * Return value: * 0 on success and an appropriate (-)ve integer as defined in errno.h * file on failure. 
*/ -static int stmmac_open(struct net_device *dev) +static int __stmmac_open(struct net_device *dev, + struct stmmac_dma_conf *dma_conf) { struct stmmac_priv *priv = netdev_priv(dev); int mode = priv->plat->phy_interface; - int bfsize = 0; u32 chan; int ret; @@ -3655,45 +3791,10 @@ static int stmmac_open(struct net_device *dev) memset(&priv->xstats, 0, sizeof(struct stmmac_extra_stats)); priv->xstats.threshold = tc; - bfsize = stmmac_set_16kib_bfsize(priv, dev->mtu); - if (bfsize < 0) - bfsize = 0; - - if (bfsize < BUF_SIZE_16KiB) - bfsize = stmmac_set_bfsize(dev->mtu, priv->dma_conf.dma_buf_sz); - - priv->dma_conf.dma_buf_sz = bfsize; - buf_sz = bfsize; - priv->rx_copybreak = STMMAC_RX_COPYBREAK; - if (!priv->dma_conf.dma_tx_size) - priv->dma_conf.dma_tx_size = DMA_DEFAULT_TX_SIZE; - if (!priv->dma_conf.dma_rx_size) - priv->dma_conf.dma_rx_size = DMA_DEFAULT_RX_SIZE; - - /* Earlier check for TBS */ - for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) { - struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan]; - int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en; - - /* Setup per-TXQ tbs flag before TX descriptor alloc */ - tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0; - } - - ret = alloc_dma_desc_resources(priv); - if (ret < 0) { - netdev_err(priv->dev, "%s: DMA descriptors allocation failed\n", - __func__); - goto dma_desc_error; - } - - ret = init_dma_desc_rings(dev, GFP_KERNEL); - if (ret < 0) { - netdev_err(priv->dev, "%s: DMA descriptors initialization failed\n", - __func__); - goto init_error; - } + buf_sz = dma_conf->dma_buf_sz; + memcpy(&priv->dma_conf, dma_conf, sizeof(*dma_conf)); stmmac_reset_queues_param(priv); @@ -3727,14 +3828,28 @@ static int stmmac_open(struct net_device *dev) stmmac_hw_teardown(dev); init_error: - free_dma_desc_resources(priv); -dma_desc_error: + free_dma_desc_resources(priv, &priv->dma_conf); phylink_disconnect_phy(priv->phylink); init_phy_error: pm_runtime_put(priv->device); return ret; } +static int stmmac_open(struct net_device *dev) +{ + struct stmmac_priv *priv = netdev_priv(dev); + struct stmmac_dma_conf *dma_conf; + int ret; + + dma_conf = stmmac_setup_dma_desc(priv, dev->mtu); + if (IS_ERR(dma_conf)) + return PTR_ERR(dma_conf); + + ret = __stmmac_open(dev, dma_conf); + kfree(dma_conf); + return ret; +} + static void stmmac_fpe_stop_wq(struct stmmac_priv *priv) { set_bit(__FPE_REMOVING, &priv->fpe_task_state); @@ -3781,7 +3896,7 @@ static int stmmac_release(struct net_device *dev) stmmac_stop_all_dma(priv); /* Release and free the Rx/Tx resources */ - free_dma_desc_resources(priv); + free_dma_desc_resources(priv, &priv->dma_conf); /* Disable the MAC Rx/Tx */ stmmac_mac_set(priv, priv->ioaddr, false); @@ -6303,7 +6418,7 @@ void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue) spin_unlock_irqrestore(&ch->lock, flags); stmmac_stop_rx_dma(priv, queue); - __free_dma_rx_desc_resources(priv, queue); + __free_dma_rx_desc_resources(priv, &priv->dma_conf, queue); } void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue) @@ -6314,21 +6429,21 @@ void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue) u32 buf_size; int ret; - ret = __alloc_dma_rx_desc_resources(priv, queue); + ret = __alloc_dma_rx_desc_resources(priv, &priv->dma_conf, queue); if (ret) { netdev_err(priv->dev, "Failed to alloc RX desc.\n"); return; } - ret = __init_dma_rx_desc_rings(priv, queue, GFP_KERNEL); + ret = __init_dma_rx_desc_rings(priv, &priv->dma_conf, queue, GFP_KERNEL); if (ret) { - __free_dma_rx_desc_resources(priv, queue); + 
__free_dma_rx_desc_resources(priv, &priv->dma_conf, queue); netdev_err(priv->dev, "Failed to init RX desc.\n"); return; } stmmac_reset_rx_queue(priv, queue); - stmmac_clear_rx_descriptors(priv, queue); + stmmac_clear_rx_descriptors(priv, &priv->dma_conf, queue); stmmac_init_rx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, rx_q->dma_rx_phy, rx_q->queue_index); @@ -6366,7 +6481,7 @@ void stmmac_disable_tx_queue(struct stmmac_priv *priv, u32 queue) spin_unlock_irqrestore(&ch->lock, flags); stmmac_stop_tx_dma(priv, queue); - __free_dma_tx_desc_resources(priv, queue); + __free_dma_tx_desc_resources(priv, &priv->dma_conf, queue); } void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue) @@ -6376,21 +6491,21 @@ void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue) unsigned long flags; int ret; - ret = __alloc_dma_tx_desc_resources(priv, queue); + ret = __alloc_dma_tx_desc_resources(priv, &priv->dma_conf, queue); if (ret) { netdev_err(priv->dev, "Failed to alloc TX desc.\n"); return; } - ret = __init_dma_tx_desc_rings(priv, queue); + ret = __init_dma_tx_desc_rings(priv, &priv->dma_conf, queue); if (ret) { - __free_dma_tx_desc_resources(priv, queue); + __free_dma_tx_desc_resources(priv, &priv->dma_conf, queue); netdev_err(priv->dev, "Failed to init TX desc.\n"); return; } stmmac_reset_tx_queue(priv, queue); - stmmac_clear_tx_descriptors(priv, queue); + stmmac_clear_tx_descriptors(priv, &priv->dma_conf, queue); stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, tx_q->dma_tx_phy, tx_q->queue_index); @@ -6427,7 +6542,7 @@ void stmmac_xdp_release(struct net_device *dev) stmmac_stop_all_dma(priv); /* Release and free the Rx/Tx resources */ - free_dma_desc_resources(priv); + free_dma_desc_resources(priv, &priv->dma_conf); /* Disable the MAC Rx/Tx */ stmmac_mac_set(priv, priv->ioaddr, false); @@ -6452,14 +6567,14 @@ int stmmac_xdp_open(struct net_device *dev) u32 chan; int ret; - ret = alloc_dma_desc_resources(priv); + ret = alloc_dma_desc_resources(priv, &priv->dma_conf); if (ret < 0) { netdev_err(dev, "%s: DMA descriptors allocation failed\n", __func__); goto dma_desc_error; } - ret = init_dma_desc_rings(dev, GFP_KERNEL); + ret = init_dma_desc_rings(dev, &priv->dma_conf, GFP_KERNEL); if (ret < 0) { netdev_err(dev, "%s: DMA descriptors initialization failed\n", __func__); @@ -6541,7 +6656,7 @@ int stmmac_xdp_open(struct net_device *dev) stmmac_hw_teardown(dev); init_error: - free_dma_desc_resources(priv); + free_dma_desc_resources(priv, &priv->dma_conf); dma_desc_error: return ret; } @@ -7409,7 +7524,7 @@ int stmmac_resume(struct device *dev) stmmac_reset_queues_param(priv); stmmac_free_tx_skbufs(priv); - stmmac_clear_descriptors(priv); + stmmac_clear_descriptors(priv, &priv->dma_conf); stmmac_hw_setup(ndev, false); stmmac_init_coalesce(priv);
From patchwork Tue Jun 28 01:33:42 2022
X-Patchwork-Submitter: Christian Marangi
X-Patchwork-Id: 12897514
From: Christian Marangi
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Maxime Coquelin , Russell King , netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Cc: Christian Marangi Subject: [net-next PATCH RFC 5/5] net: ethernet: stmicro: stmmac: permit MTU change with interface up Date: Tue, 28 Jun 2022 03:33:42 +0200 Message-Id: <20220628013342.13581-6-ansuelsmth@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220628013342.13581-1-ansuelsmth@gmail.com> References: <20220628013342.13581-1-ansuelsmth@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220627_183415_285004_22BD3607 X-CRM114-Status: GOOD ( 19.74 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Remove the limitation where the interface needs to be down to change MTU by releasing and opening the stmmac driver to set the new MTU. Also call the set_filter function to correctly init the port. This permits to remove the EBUSY error while the ethernet port is running permitting a correct MTU change if for example a DSA request a MTU change for a switch CPU port. Signed-off-by: Christian Marangi --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 28 +++++++++++++++---- 1 file changed, 22 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 330aac10a6e7..2e08be895cde 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -5549,18 +5549,15 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu) { struct stmmac_priv *priv = netdev_priv(dev); int txfifosz = priv->plat->tx_fifo_size; + struct stmmac_dma_conf *dma_conf; const int mtu = new_mtu; + int ret; if (txfifosz == 0) txfifosz = priv->dma_cap.tx_fifo_size; txfifosz /= priv->plat->tx_queues_to_use; - if (netif_running(dev)) { - netdev_err(priv->dev, "must be stopped to change its MTU\n"); - return -EBUSY; - } - if (stmmac_xdp_is_enabled(priv) && new_mtu > ETH_DATA_LEN) { netdev_dbg(priv->dev, "Jumbo frames not supported for XDP\n"); return -EINVAL; @@ -5572,8 +5569,27 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu) if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB)) return -EINVAL; - dev->mtu = mtu; + if (netif_running(dev)) { + netdev_dbg(priv->dev, "restarting interface to change its MTU\n"); + /* Try to allocate the new DMA conf with the new mtu */ + dma_conf = stmmac_setup_dma_desc(priv, mtu); + if (IS_ERR(dma_conf)) { + netdev_err(priv->dev, "failed allocating new dma conf for new MTU %d\n", + mtu); + return PTR_ERR(dma_conf); + } + stmmac_release(dev); + + ret = __stmmac_open(dev, dma_conf); + kfree(dma_conf); + if (ret) { + netdev_err(priv->dev, "failed reopening the interface after MTU change\n"); + return ret; + } + } + + dev->mtu = mtu; netdev_update_features(dev); return 0;