From patchwork Tue Apr 13 09:36:20 2021
From: Ong Boon Leong
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, David S. Miller, Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend
Cc: alexandre.torgue@foss.st.com, Maxime Coquelin, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Ong Boon Leong
Subject: [PATCH net-next v2 1/7] net: stmmac: rearrange RX buffer allocation and free functions
Date: Tue, 13 Apr 2021 17:36:20 +0800
Message-Id: <20210413093626.3447-2-boon.leong.ong@intel.com>
In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com>

This patch moves the per-RX-queue buffer allocation from page_pool into a new helper, stmmac_alloc_rx_buffers(). dma_free_rx_skbufs() is also rearranged so that init_dma_rx_desc_rings() can use it to free RX buffers when page_pool allocation fails, replacing the slightly more efficient open-coded unwind used before. The replacement is needed so that the RX buffer alloc and free paths can later scale to XDP ZC xsk_pool allocation and freeing.
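As an illustration of the alloc/free pairing described above, here is a minimal, stand-alone user-space C analog (not driver code; all demo_* names are hypothetical): a per-queue bulk allocator plus a per-queue bulk free that the error path reuses, instead of freeing only the buffers allocated so far.

/* Illustrative analog of stmmac_alloc_rx_buffers()/dma_free_rx_skbufs(). */
#include <stdio.h>
#include <stdlib.h>

#define DEMO_QUEUES   4
#define DEMO_RING_LEN 8

static void *demo_ring[DEMO_QUEUES][DEMO_RING_LEN];

/* Fill one queue's ring, like stmmac_alloc_rx_buffers(). */
static int demo_alloc_queue(int queue)
{
	for (int i = 0; i < DEMO_RING_LEN; i++) {
		demo_ring[queue][i] = malloc(2048);
		if (!demo_ring[queue][i])
			return -1;
	}
	return 0;
}

/* Release a whole queue (NULL slots included), like dma_free_rx_skbufs(). */
static void demo_free_queue(int queue)
{
	for (int i = 0; i < DEMO_RING_LEN; i++) {
		free(demo_ring[queue][i]);
		demo_ring[queue][i] = NULL;
	}
}

int main(void)
{
	int queue;

	for (queue = 0; queue < DEMO_QUEUES; queue++) {
		if (demo_alloc_queue(queue))
			goto err;
	}
	printf("all queues populated\n");
	return 0;

err:
	/* Like the new err_init_rx_buffers path: free whole queues,
	 * walking back from the one that failed down to queue 0. */
	while (queue >= 0)
		demo_free_queue(queue--);
	return 1;
}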
Signed-off-by: Ong Boon Leong --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 84 +++++++++++-------- 1 file changed, 47 insertions(+), 37 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 77285646c5fc..f6d3d26ce45a 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1475,6 +1475,43 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i) tx_q->tx_skbuff_dma[i].map_as_page = false; } +/** + * dma_free_rx_skbufs - free RX dma buffers + * @priv: private structure + * @queue: RX queue index + */ +static void dma_free_rx_skbufs(struct stmmac_priv *priv, u32 queue) +{ + int i; + + for (i = 0; i < priv->dma_rx_size; i++) + stmmac_free_rx_buffer(priv, queue, i); +} + +static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, + gfp_t flags) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + int i; + + for (i = 0; i < priv->dma_rx_size; i++) { + struct dma_desc *p; + int ret; + + if (priv->extend_desc) + p = &((rx_q->dma_erx + i)->basic); + else + p = rx_q->dma_rx + i; + + ret = stmmac_init_rx_buffers(priv, p, i, flags, + queue); + if (ret) + return ret; + } + + return 0; +} + /** * stmmac_reinit_rx_buffers - reinit the RX descriptor buffer. * @priv: driver private structure @@ -1547,15 +1584,14 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) return; err_reinit_rx_buffers: - do { - while (--i >= 0) - stmmac_free_rx_buffer(priv, queue, i); + while (queue >= 0) { + dma_free_rx_skbufs(priv, queue); if (queue == 0) break; - i = priv->dma_rx_size; - } while (queue-- > 0); + queue--; + } } /** @@ -1572,7 +1608,6 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) u32 rx_count = priv->plat->rx_queues_to_use; int ret = -ENOMEM; int queue; - int i; /* RX INITIALIZATION */ netif_dbg(priv, probe, priv->dev, @@ -1580,7 +1615,7 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) for (queue = 0; queue < rx_count; queue++) { struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; - int ret; + netif_dbg(priv, probe, priv->dev, "(%s) dma_rx_phy=0x%08x\n", __func__, @@ -1596,22 +1631,12 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) "Register MEM_TYPE_PAGE_POOL RxQ-%d\n", rx_q->queue_index); - for (i = 0; i < priv->dma_rx_size; i++) { - struct dma_desc *p; - - if (priv->extend_desc) - p = &((rx_q->dma_erx + i)->basic); - else - p = rx_q->dma_rx + i; - - ret = stmmac_init_rx_buffers(priv, p, i, flags, - queue); - if (ret) - goto err_init_rx_buffers; - } + ret = stmmac_alloc_rx_buffers(priv, queue, flags); + if (ret < 0) + goto err_init_rx_buffers; rx_q->cur_rx = 0; - rx_q->dirty_rx = (unsigned int)(i - priv->dma_rx_size); + rx_q->dirty_rx = 0; /* Setup the chained descriptor addresses */ if (priv->mode == STMMAC_CHAIN_MODE) { @@ -1630,13 +1655,11 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) err_init_rx_buffers: while (queue >= 0) { - while (--i >= 0) - stmmac_free_rx_buffer(priv, queue, i); + dma_free_rx_skbufs(priv, queue); if (queue == 0) break; - i = priv->dma_rx_size; queue--; } @@ -1731,19 +1754,6 @@ static int init_dma_desc_rings(struct net_device *dev, gfp_t flags) return ret; } -/** - * dma_free_rx_skbufs - free RX dma buffers - * @priv: private structure - * @queue: RX queue index - */ -static void dma_free_rx_skbufs(struct stmmac_priv *priv, u32 queue) -{ - int i; - - for (i = 0; i < 
priv->dma_rx_size; i++) - stmmac_free_rx_buffer(priv, queue, i); -} - /** * dma_free_tx_skbufs - free TX dma buffers * @priv: private structure
From patchwork Tue Apr 13 09:36:21 2021
From: Ong Boon Leong
Subject: [PATCH net-next v2 2/7] net: stmmac: introduce dma_recycle_rx_skbufs for stmmac_reinit_rx_buffers
Date: Tue, 13 Apr 2021 17:36:21 +0800
Message-Id: <20210413093626.3447-3-boon.leong.ong@intel.com>
In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com>

Move the RX buffer page_pool recycling logic into a new helper, dma_recycle_rx_skbufs(), to prepare stmmac_reinit_rx_buffers() for the upcoming XSK pool support.

Signed-off-by: Ong Boon Leong --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 44 ++++++++++++------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index f6d3d26ce45a..a6c3414fd231 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1512,6 +1512,31 @@ static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, return 0; } +/** + * dma_recycle_rx_skbufs - recycle RX dma buffers + * @priv: private structure + * @queue: RX queue index + */ +static void dma_recycle_rx_skbufs(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + int i; + + for (i = 0; i < priv->dma_rx_size; i++) { + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; + + if (buf->page) { + page_pool_recycle_direct(rx_q->page_pool, buf->page); + buf->page = NULL; + } + + if (priv->sph && buf->sec_page) { + page_pool_recycle_direct(rx_q->page_pool, buf->sec_page); + buf->sec_page = NULL; + } + } +} + /** * stmmac_reinit_rx_buffers - reinit the RX descriptor buffer.
* @priv: driver private structure @@ -1524,23 +1549,8 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) u32 queue; int i; - for (queue = 0; queue < rx_count; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; - - for (i = 0; i < priv->dma_rx_size; i++) { - struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; - - if (buf->page) { - page_pool_recycle_direct(rx_q->page_pool, buf->page); - buf->page = NULL; - } - - if (priv->sph && buf->sec_page) { - page_pool_recycle_direct(rx_q->page_pool, buf->sec_page); - buf->sec_page = NULL; - } - } - } + for (queue = 0; queue < rx_count; queue++) + dma_recycle_rx_skbufs(priv, queue); for (queue = 0; queue < rx_count; queue++) { struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; From patchwork Tue Apr 13 09:36:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ong Boon Leong X-Patchwork-Id: 12199805 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8BD4C433B4 for ; Tue, 13 Apr 2021 09:35:36 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0B8F5611F2 for ; Tue, 13 Apr 2021 09:35:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0B8F5611F2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:Cc:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=YKSYBfZqI+XV+hTohHVyEELshe18ozPQT0f7la70ejU=; b=Co9sVAO+dq8rMt1EDb/CWFw7W 4zhIA65GWl7zSBuMyrR+z+SaxPx6YFvkGONBDmTOjxGBqsouVJXk5xHhAeEt4cnj7px8fKqNIjX7l jsfXGLVBpk15PiB2UBlmNw6Nlku779Ygw16WdS8TR9X+jEEvKTCR7yyGNp0PoxZJjYVGdIVsv3ETo tAlIhB2cE8y7y3l+m/aqRlJK2DUaHaiGwGc0XH4GjYf+eb/YPuylV8TedI6UJt3LzQREWNET2Ugx8 TV8/j9GZPeQTrQlFsTFQ/g9UFgKvcOtE1NuBHk1gzOIcWvLbZ9k0LwTJJl7gKvxckmaGXTteGEl+z pYdtE4DdA==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWFQM-008inY-RH; Tue, 13 Apr 2021 09:33:35 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWFPP-008if7-Dm for linux-arm-kernel@desiato.infradead.org; Tue, 13 Apr 2021 09:32:45 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: 
From: Ong Boon Leong
Subject: [PATCH net-next v2 3/7] net: stmmac: refactor stmmac_init_rx_buffers for stmmac_reinit_rx_buffers
Date: Tue, 13 Apr 2021 17:36:22 +0800
Message-Id: <20210413093626.3447-4-boon.leong.ong@intel.com>
In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com>

The per-queue RX buffer allocation in stmmac_reinit_rx_buffers() can be made to use stmmac_alloc_rx_buffers() by merging the "allocate only if not already set" page_pool checks for buf->page and buf->sec_page into stmmac_init_rx_buffers(). This is in preparation for XSK pool allocation later.
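To illustrate the idempotent-allocation pattern this patch merges into stmmac_init_rx_buffers(), here is a minimal stand-alone user-space C sketch (hypothetical demo_* names, not driver code): the init routine only allocates fields that are still NULL, so the same function serves both a fresh ring and a re-init where some buffers were kept.

/* Illustrative sketch of the "allocate only if missing" check. */
#include <stdio.h>
#include <stdlib.h>

struct demo_buf {
	void *page;     /* primary buffer, like buf->page          */
	void *sec_page; /* split-header buffer, like buf->sec_page */
};

static int demo_init_buf(struct demo_buf *buf, int use_split_header)
{
	if (!buf->page) {                 /* keep an existing buffer */
		buf->page = malloc(2048);
		if (!buf->page)
			return -1;
	}
	if (use_split_header && !buf->sec_page) {
		buf->sec_page = malloc(2048);
		if (!buf->sec_page)
			return -1;
	}
	return 0;
}

int main(void)
{
	struct demo_buf buf = { 0 };
	void *kept;

	demo_init_buf(&buf, 1);   /* first init: allocates both       */
	kept = buf.page;
	demo_init_buf(&buf, 1);   /* re-init: existing pages are kept */
	printf("page kept across re-init: %s\n",
	       buf.page == kept ? "yes" : "no");
	free(buf.page);
	free(buf.sec_page);
	return 0;
}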
Signed-off-by: Ong Boon Leong --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 378 +++++++++--------- 1 file changed, 189 insertions(+), 189 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index a6c3414fd231..7e889ef0c7b5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1388,12 +1388,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; - buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); - if (!buf->page) - return -ENOMEM; - buf->page_offset = stmmac_rx_offset(priv); + if (!buf->page) { + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); + if (!buf->page) + return -ENOMEM; + buf->page_offset = stmmac_rx_offset(priv); + } - if (priv->sph) { + if (priv->sph && !buf->sec_page) { buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); if (!buf->sec_page) return -ENOMEM; @@ -1547,48 +1549,16 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) { u32 rx_count = priv->plat->rx_queues_to_use; u32 queue; - int i; for (queue = 0; queue < rx_count; queue++) dma_recycle_rx_skbufs(priv, queue); for (queue = 0; queue < rx_count; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; - - for (i = 0; i < priv->dma_rx_size; i++) { - struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; - struct dma_desc *p; - - if (priv->extend_desc) - p = &((rx_q->dma_erx + i)->basic); - else - p = rx_q->dma_rx + i; - - if (!buf->page) { - buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); - if (!buf->page) - goto err_reinit_rx_buffers; - - buf->addr = page_pool_get_dma_addr(buf->page) + - buf->page_offset; - } - - if (priv->sph && !buf->sec_page) { - buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); - if (!buf->sec_page) - goto err_reinit_rx_buffers; - - buf->sec_addr = page_pool_get_dma_addr(buf->sec_page); - } + int ret; - stmmac_set_desc_addr(priv, p, buf->addr); - if (priv->sph) - stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true); - else - stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false); - if (priv->dma_buf_sz == BUF_SIZE_16KiB) - stmmac_init_desc3(priv, p); - } + ret = stmmac_alloc_rx_buffers(priv, queue, GFP_KERNEL); + if (ret < 0) + goto err_reinit_rx_buffers; } return; @@ -1791,153 +1761,173 @@ static void stmmac_free_tx_skbufs(struct stmmac_priv *priv) } /** - * free_dma_rx_desc_resources - free RX dma desc resources + * __free_dma_rx_desc_resources - free RX dma desc resources (per queue) * @priv: private structure + * @queue: RX queue index */ -static void free_dma_rx_desc_resources(struct stmmac_priv *priv) +static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) { - u32 rx_count = priv->plat->rx_queues_to_use; - u32 queue; + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; - /* Free RX queue resources */ - for (queue = 0; queue < rx_count; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + /* Release the DMA RX socket buffers */ + dma_free_rx_skbufs(priv, queue); - /* Release the DMA RX socket buffers */ - dma_free_rx_skbufs(priv, queue); + /* Free DMA regions of consistent memory previously allocated */ + if (!priv->extend_desc) + dma_free_coherent(priv->device, priv->dma_rx_size * + sizeof(struct dma_desc), + rx_q->dma_rx, rx_q->dma_rx_phy); + else + dma_free_coherent(priv->device, priv->dma_rx_size * + 
sizeof(struct dma_extended_desc), + rx_q->dma_erx, rx_q->dma_rx_phy); - /* Free DMA regions of consistent memory previously allocated */ - if (!priv->extend_desc) - dma_free_coherent(priv->device, priv->dma_rx_size * - sizeof(struct dma_desc), - rx_q->dma_rx, rx_q->dma_rx_phy); - else - dma_free_coherent(priv->device, priv->dma_rx_size * - sizeof(struct dma_extended_desc), - rx_q->dma_erx, rx_q->dma_rx_phy); + if (xdp_rxq_info_is_reg(&rx_q->xdp_rxq)) + xdp_rxq_info_unreg(&rx_q->xdp_rxq); - if (xdp_rxq_info_is_reg(&rx_q->xdp_rxq)) - xdp_rxq_info_unreg(&rx_q->xdp_rxq); + kfree(rx_q->buf_pool); + if (rx_q->page_pool) + page_pool_destroy(rx_q->page_pool); +} - kfree(rx_q->buf_pool); - if (rx_q->page_pool) - page_pool_destroy(rx_q->page_pool); - } +static void free_dma_rx_desc_resources(struct stmmac_priv *priv) +{ + u32 rx_count = priv->plat->rx_queues_to_use; + u32 queue; + + /* Free RX queue resources */ + for (queue = 0; queue < rx_count; queue++) + __free_dma_rx_desc_resources(priv, queue); } /** - * free_dma_tx_desc_resources - free TX dma desc resources + * __free_dma_tx_desc_resources - free TX dma desc resources (per queue) * @priv: private structure + * @queue: TX queue index */ -static void free_dma_tx_desc_resources(struct stmmac_priv *priv) +static void __free_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) { - u32 tx_count = priv->plat->tx_queues_to_use; - u32 queue; + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + size_t size; + void *addr; - /* Free TX queue resources */ - for (queue = 0; queue < tx_count; queue++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; - size_t size; - void *addr; + /* Release the DMA TX socket buffers */ + dma_free_tx_skbufs(priv, queue); + + if (priv->extend_desc) { + size = sizeof(struct dma_extended_desc); + addr = tx_q->dma_etx; + } else if (tx_q->tbs & STMMAC_TBS_AVAIL) { + size = sizeof(struct dma_edesc); + addr = tx_q->dma_entx; + } else { + size = sizeof(struct dma_desc); + addr = tx_q->dma_tx; + } - /* Release the DMA TX socket buffers */ - dma_free_tx_skbufs(priv, queue); + size *= priv->dma_tx_size; - if (priv->extend_desc) { - size = sizeof(struct dma_extended_desc); - addr = tx_q->dma_etx; - } else if (tx_q->tbs & STMMAC_TBS_AVAIL) { - size = sizeof(struct dma_edesc); - addr = tx_q->dma_entx; - } else { - size = sizeof(struct dma_desc); - addr = tx_q->dma_tx; - } + dma_free_coherent(priv->device, size, addr, tx_q->dma_tx_phy); - size *= priv->dma_tx_size; + kfree(tx_q->tx_skbuff_dma); + kfree(tx_q->tx_skbuff); +} - dma_free_coherent(priv->device, size, addr, tx_q->dma_tx_phy); +static void free_dma_tx_desc_resources(struct stmmac_priv *priv) +{ + u32 tx_count = priv->plat->tx_queues_to_use; + u32 queue; - kfree(tx_q->tx_skbuff_dma); - kfree(tx_q->tx_skbuff); - } + /* Free TX queue resources */ + for (queue = 0; queue < tx_count; queue++) + __free_dma_tx_desc_resources(priv, queue); } /** - * alloc_dma_rx_desc_resources - alloc RX resources. + * __alloc_dma_rx_desc_resources - alloc RX resources (per queue). * @priv: private structure + * @queue: RX queue index * Description: according to which descriptor can be used (extend or basic) * this function allocates the resources for TX and RX paths. In case of * reception, for example, it pre-allocated the RX socket buffer in order to * allow zero-copy mechanism. 
*/ -static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) +static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) { + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_channel *ch = &priv->channel[queue]; bool xdp_prog = stmmac_xdp_is_enabled(priv); - u32 rx_count = priv->plat->rx_queues_to_use; - int ret = -ENOMEM; - u32 queue; + struct page_pool_params pp_params = { 0 }; + unsigned int num_pages; + int ret; - /* RX queues buffers and DMA */ - for (queue = 0; queue < rx_count; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; - struct stmmac_channel *ch = &priv->channel[queue]; - struct page_pool_params pp_params = { 0 }; - unsigned int num_pages; - int ret; + rx_q->queue_index = queue; + rx_q->priv_data = priv; + + pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; + pp_params.pool_size = priv->dma_rx_size; + num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE); + pp_params.order = ilog2(num_pages); + pp_params.nid = dev_to_node(priv->device); + pp_params.dev = priv->device; + pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; + pp_params.offset = stmmac_rx_offset(priv); + pp_params.max_len = STMMAC_MAX_RX_BUF_SIZE(num_pages); + + rx_q->page_pool = page_pool_create(&pp_params); + if (IS_ERR(rx_q->page_pool)) { + ret = PTR_ERR(rx_q->page_pool); + rx_q->page_pool = NULL; + return ret; + } - rx_q->queue_index = queue; - rx_q->priv_data = priv; - - pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; - pp_params.pool_size = priv->dma_rx_size; - num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE); - pp_params.order = ilog2(num_pages); - pp_params.nid = dev_to_node(priv->device); - pp_params.dev = priv->device; - pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; - pp_params.offset = stmmac_rx_offset(priv); - pp_params.max_len = STMMAC_MAX_RX_BUF_SIZE(num_pages); - - rx_q->page_pool = page_pool_create(&pp_params); - if (IS_ERR(rx_q->page_pool)) { - ret = PTR_ERR(rx_q->page_pool); - rx_q->page_pool = NULL; - goto err_dma; - } + rx_q->buf_pool = kcalloc(priv->dma_rx_size, + sizeof(*rx_q->buf_pool), + GFP_KERNEL); + if (!rx_q->buf_pool) + return -ENOMEM; - rx_q->buf_pool = kcalloc(priv->dma_rx_size, - sizeof(*rx_q->buf_pool), - GFP_KERNEL); - if (!rx_q->buf_pool) - goto err_dma; + if (priv->extend_desc) { + rx_q->dma_erx = dma_alloc_coherent(priv->device, + priv->dma_rx_size * + sizeof(struct dma_extended_desc), + &rx_q->dma_rx_phy, + GFP_KERNEL); + if (!rx_q->dma_erx) + return -ENOMEM; - if (priv->extend_desc) { - rx_q->dma_erx = dma_alloc_coherent(priv->device, - priv->dma_rx_size * - sizeof(struct dma_extended_desc), - &rx_q->dma_rx_phy, - GFP_KERNEL); - if (!rx_q->dma_erx) - goto err_dma; + } else { + rx_q->dma_rx = dma_alloc_coherent(priv->device, + priv->dma_rx_size * + sizeof(struct dma_desc), + &rx_q->dma_rx_phy, + GFP_KERNEL); + if (!rx_q->dma_rx) + return -ENOMEM; + } - } else { - rx_q->dma_rx = dma_alloc_coherent(priv->device, - priv->dma_rx_size * - sizeof(struct dma_desc), - &rx_q->dma_rx_phy, - GFP_KERNEL); - if (!rx_q->dma_rx) - goto err_dma; - } + ret = xdp_rxq_info_reg(&rx_q->xdp_rxq, priv->dev, + rx_q->queue_index, + ch->rx_napi.napi_id); + if (ret) { + netdev_err(priv->dev, "Failed to register xdp rxq info\n"); + return -EINVAL; + } - ret = xdp_rxq_info_reg(&rx_q->xdp_rxq, priv->dev, - rx_q->queue_index, - ch->rx_napi.napi_id); - if (ret) { - netdev_err(priv->dev, "Failed to register xdp rxq info\n"); + return 0; +} + +static int 
alloc_dma_rx_desc_resources(struct stmmac_priv *priv) +{ + u32 rx_count = priv->plat->rx_queues_to_use; + u32 queue; + int ret; + + /* RX queues buffers and DMA */ + for (queue = 0; queue < rx_count; queue++) { + ret = __alloc_dma_rx_desc_resources(priv, queue); + if (ret) goto err_dma; - } } return 0; @@ -1949,60 +1939,70 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) } /** - * alloc_dma_tx_desc_resources - alloc TX resources. + * __alloc_dma_tx_desc_resources - alloc TX resources (per queue). * @priv: private structure + * @queue: TX queue index * Description: according to which descriptor can be used (extend or basic) * this function allocates the resources for TX and RX paths. In case of * reception, for example, it pre-allocated the RX socket buffer in order to * allow zero-copy mechanism. */ -static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) +static int __alloc_dma_tx_desc_resources(struct stmmac_priv *priv, u32 queue) { - u32 tx_count = priv->plat->tx_queues_to_use; - int ret = -ENOMEM; - u32 queue; + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + size_t size; + void *addr; - /* TX queues buffers and DMA */ - for (queue = 0; queue < tx_count; queue++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; - size_t size; - void *addr; + tx_q->queue_index = queue; + tx_q->priv_data = priv; - tx_q->queue_index = queue; - tx_q->priv_data = priv; + tx_q->tx_skbuff_dma = kcalloc(priv->dma_tx_size, + sizeof(*tx_q->tx_skbuff_dma), + GFP_KERNEL); + if (!tx_q->tx_skbuff_dma) + return -ENOMEM; - tx_q->tx_skbuff_dma = kcalloc(priv->dma_tx_size, - sizeof(*tx_q->tx_skbuff_dma), - GFP_KERNEL); - if (!tx_q->tx_skbuff_dma) - goto err_dma; + tx_q->tx_skbuff = kcalloc(priv->dma_tx_size, + sizeof(struct sk_buff *), + GFP_KERNEL); + if (!tx_q->tx_skbuff) + return -ENOMEM; - tx_q->tx_skbuff = kcalloc(priv->dma_tx_size, - sizeof(struct sk_buff *), - GFP_KERNEL); - if (!tx_q->tx_skbuff) - goto err_dma; + if (priv->extend_desc) + size = sizeof(struct dma_extended_desc); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + size = sizeof(struct dma_edesc); + else + size = sizeof(struct dma_desc); - if (priv->extend_desc) - size = sizeof(struct dma_extended_desc); - else if (tx_q->tbs & STMMAC_TBS_AVAIL) - size = sizeof(struct dma_edesc); - else - size = sizeof(struct dma_desc); + size *= priv->dma_tx_size; - size *= priv->dma_tx_size; + addr = dma_alloc_coherent(priv->device, size, + &tx_q->dma_tx_phy, GFP_KERNEL); + if (!addr) + return -ENOMEM; - addr = dma_alloc_coherent(priv->device, size, - &tx_q->dma_tx_phy, GFP_KERNEL); - if (!addr) - goto err_dma; + if (priv->extend_desc) + tx_q->dma_etx = addr; + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + tx_q->dma_entx = addr; + else + tx_q->dma_tx = addr; - if (priv->extend_desc) - tx_q->dma_etx = addr; - else if (tx_q->tbs & STMMAC_TBS_AVAIL) - tx_q->dma_entx = addr; - else - tx_q->dma_tx = addr; + return 0; +} + +static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) +{ + u32 tx_count = priv->plat->tx_queues_to_use; + u32 queue; + int ret; + + /* TX queues buffers and DMA */ + for (queue = 0; queue < tx_count; queue++) { + ret = __alloc_dma_tx_desc_resources(priv, queue); + if (ret) + goto err_dma; } return 0; From patchwork Tue Apr 13 09:36:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ong Boon Leong X-Patchwork-Id: 12199807 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org 
From: Ong Boon Leong
Subject: [PATCH net-next v2 4/7] net: stmmac: rearrange RX and TX desc init into per-queue basis
Date: Tue, 13 Apr 2021 17:36:23 +0800
Message-Id: <20210413093626.3447-5-boon.leong.ong@intel.com>
In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com>

The functions below are made per-queue in preparation for XDP ZC:

  __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t flags)
  __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue)

The original functions below are kept for all-queue usage:

  init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
  init_dma_tx_desc_rings(struct net_device *dev)

Signed-off-by: Ong Boon Leong --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 180 ++++++++++-------- 1 file changed, 100 insertions(+), 80 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 7e889ef0c7b5..0804674e628e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1575,60 +1575,70 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) } /** - * init_dma_rx_desc_rings - init the RX descriptor rings - * @dev: net device structure + * __init_dma_rx_desc_rings - init the RX descriptor ring (per queue) + * @priv: driver private structure + * @queue: RX queue index * @flags: gfp flag. * Description: this function initializes the DMA RX descriptors * and allocates the socket buffers. It supports the chained and ring * modes.
*/ -static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) +static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t flags) { - struct stmmac_priv *priv = netdev_priv(dev); - u32 rx_count = priv->plat->rx_queues_to_use; - int ret = -ENOMEM; - int queue; + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + int ret; - /* RX INITIALIZATION */ netif_dbg(priv, probe, priv->dev, - "SKB addresses:\nskb\t\tskb data\tdma data\n"); + "(%s) dma_rx_phy=0x%08x\n", __func__, + (u32)rx_q->dma_rx_phy); - for (queue = 0; queue < rx_count; queue++) { - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + stmmac_clear_rx_descriptors(priv, queue); + WARN_ON(xdp_rxq_info_reg_mem_model(&rx_q->xdp_rxq, + MEM_TYPE_PAGE_POOL, + rx_q->page_pool)); - netif_dbg(priv, probe, priv->dev, - "(%s) dma_rx_phy=0x%08x\n", __func__, - (u32)rx_q->dma_rx_phy); + netdev_info(priv->dev, + "Register MEM_TYPE_PAGE_POOL RxQ-%d\n", + rx_q->queue_index); - stmmac_clear_rx_descriptors(priv, queue); + ret = stmmac_alloc_rx_buffers(priv, queue, flags); + if (ret < 0) + return -ENOMEM; - WARN_ON(xdp_rxq_info_reg_mem_model(&rx_q->xdp_rxq, - MEM_TYPE_PAGE_POOL, - rx_q->page_pool)); + rx_q->cur_rx = 0; + rx_q->dirty_rx = 0; - netdev_info(priv->dev, - "Register MEM_TYPE_PAGE_POOL RxQ-%d\n", - rx_q->queue_index); + /* Setup the chained descriptor addresses */ + if (priv->mode == STMMAC_CHAIN_MODE) { + if (priv->extend_desc) + stmmac_mode_init(priv, rx_q->dma_erx, + rx_q->dma_rx_phy, + priv->dma_rx_size, 1); + else + stmmac_mode_init(priv, rx_q->dma_rx, + rx_q->dma_rx_phy, + priv->dma_rx_size, 0); + } - ret = stmmac_alloc_rx_buffers(priv, queue, flags); - if (ret < 0) - goto err_init_rx_buffers; + return 0; +} - rx_q->cur_rx = 0; - rx_q->dirty_rx = 0; +static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) +{ + struct stmmac_priv *priv = netdev_priv(dev); + u32 rx_count = priv->plat->rx_queues_to_use; + u32 queue; + int ret; - /* Setup the chained descriptor addresses */ - if (priv->mode == STMMAC_CHAIN_MODE) { - if (priv->extend_desc) - stmmac_mode_init(priv, rx_q->dma_erx, - rx_q->dma_rx_phy, - priv->dma_rx_size, 1); - else - stmmac_mode_init(priv, rx_q->dma_rx, - rx_q->dma_rx_phy, - priv->dma_rx_size, 0); - } + /* RX INITIALIZATION */ + netif_dbg(priv, probe, priv->dev, + "SKB addresses:\nskb\t\tskb data\tdma data\n"); + + for (queue = 0; queue < rx_count; queue++) { + ret = __init_dma_rx_desc_rings(priv, queue, flags); + if (ret) + goto err_init_rx_buffers; } return 0; @@ -1647,63 +1657,73 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) } /** - * init_dma_tx_desc_rings - init the TX descriptor rings - * @dev: net device structure. + * __init_dma_tx_desc_rings - init the TX descriptor ring (per queue) + * @priv: driver private structure + * @queue : TX queue index * Description: this function initializes the DMA TX descriptors * and allocates the socket buffers. It supports the chained and ring * modes. 
*/ -static int init_dma_tx_desc_rings(struct net_device *dev) +static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) { - struct stmmac_priv *priv = netdev_priv(dev); - u32 tx_queue_cnt = priv->plat->tx_queues_to_use; - u32 queue; + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; int i; - for (queue = 0; queue < tx_queue_cnt; queue++) { - struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; - - netif_dbg(priv, probe, priv->dev, - "(%s) dma_tx_phy=0x%08x\n", __func__, - (u32)tx_q->dma_tx_phy); - - /* Setup the chained descriptor addresses */ - if (priv->mode == STMMAC_CHAIN_MODE) { - if (priv->extend_desc) - stmmac_mode_init(priv, tx_q->dma_etx, - tx_q->dma_tx_phy, - priv->dma_tx_size, 1); - else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) - stmmac_mode_init(priv, tx_q->dma_tx, - tx_q->dma_tx_phy, - priv->dma_tx_size, 0); - } + netif_dbg(priv, probe, priv->dev, + "(%s) dma_tx_phy=0x%08x\n", __func__, + (u32)tx_q->dma_tx_phy); - for (i = 0; i < priv->dma_tx_size; i++) { - struct dma_desc *p; - if (priv->extend_desc) - p = &((tx_q->dma_etx + i)->basic); - else if (tx_q->tbs & STMMAC_TBS_AVAIL) - p = &((tx_q->dma_entx + i)->basic); - else - p = tx_q->dma_tx + i; + /* Setup the chained descriptor addresses */ + if (priv->mode == STMMAC_CHAIN_MODE) { + if (priv->extend_desc) + stmmac_mode_init(priv, tx_q->dma_etx, + tx_q->dma_tx_phy, + priv->dma_tx_size, 1); + else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) + stmmac_mode_init(priv, tx_q->dma_tx, + tx_q->dma_tx_phy, + priv->dma_tx_size, 0); + } - stmmac_clear_desc(priv, p); + for (i = 0; i < priv->dma_tx_size; i++) { + struct dma_desc *p; - tx_q->tx_skbuff_dma[i].buf = 0; - tx_q->tx_skbuff_dma[i].map_as_page = false; - tx_q->tx_skbuff_dma[i].len = 0; - tx_q->tx_skbuff_dma[i].last_segment = false; - tx_q->tx_skbuff[i] = NULL; - } + if (priv->extend_desc) + p = &((tx_q->dma_etx + i)->basic); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + p = &((tx_q->dma_entx + i)->basic); + else + p = tx_q->dma_tx + i; - tx_q->dirty_tx = 0; - tx_q->cur_tx = 0; - tx_q->mss = 0; + stmmac_clear_desc(priv, p); - netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue)); + tx_q->tx_skbuff_dma[i].buf = 0; + tx_q->tx_skbuff_dma[i].map_as_page = false; + tx_q->tx_skbuff_dma[i].len = 0; + tx_q->tx_skbuff_dma[i].last_segment = false; + tx_q->tx_skbuff[i] = NULL; } + tx_q->dirty_tx = 0; + tx_q->cur_tx = 0; + tx_q->mss = 0; + + netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue)); + + return 0; +} + +static int init_dma_tx_desc_rings(struct net_device *dev) +{ + struct stmmac_priv *priv = netdev_priv(dev); + u32 tx_queue_cnt; + u32 queue; + + tx_queue_cnt = priv->plat->tx_queues_to_use; + + for (queue = 0; queue < tx_queue_cnt; queue++) + __init_dma_tx_desc_rings(priv, queue); + return 0; } From patchwork Tue Apr 13 09:36:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ong Boon Leong X-Patchwork-Id: 12199809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CD87C433B4 for ; Tue, 13 Apr 2021 09:37:22 +0000 (UTC) 
From: Ong Boon Leong
Subject: [PATCH net-next v2 5/7] net: stmmac: Refactor __stmmac_xdp_run_prog for XDP ZC
Date: Tue, 13 Apr 2021 17:36:24 +0800
Message-Id: <20210413093626.3447-6-boon.leong.ong@intel.com>
In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com>

Prepare stmmac_xdp_run_prog() for the AF_XDP zero-copy support that upcoming patches add: split the XDP verdict processing out into __stmmac_xdp_run_prog(), which the XDP ZC path can call directly because it does not need to check that bpf_prog is non-NULL. stmmac_xdp_run_prog() remains in use for the regular XDP RX path, which does require that check.

Signed-off-by: Ong Boon Leong --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 35 ++++++++++++------- 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 0804674e628e..329a3abbac76 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -4408,20 +4408,13 @@ static int stmmac_xdp_xmit_back(struct stmmac_priv *priv, return res; } -static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv, - struct xdp_buff *xdp) +/* This function assumes rcu_read_lock() is held by the caller.
*/ +static int __stmmac_xdp_run_prog(struct stmmac_priv *priv, + struct bpf_prog *prog, + struct xdp_buff *xdp) { - struct bpf_prog *prog; - int res; u32 act; - - rcu_read_lock(); - - prog = READ_ONCE(priv->xdp_prog); - if (!prog) { - res = STMMAC_XDP_PASS; - goto unlock; - } + int res; act = bpf_prog_run_xdp(prog, xdp); switch (act) { @@ -4448,6 +4441,24 @@ static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv, break; } + return res; +} + +static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv, + struct xdp_buff *xdp) +{ + struct bpf_prog *prog; + int res; + + rcu_read_lock(); + + prog = READ_ONCE(priv->xdp_prog); + if (!prog) { + res = STMMAC_XDP_PASS; + goto unlock; + } + + res = __stmmac_xdp_run_prog(priv, prog, xdp); unlock: rcu_read_unlock(); return ERR_PTR(-res); From patchwork Tue Apr 13 09:36:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ong Boon Leong X-Patchwork-Id: 12199827 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03ED0C433B4 for ; Tue, 13 Apr 2021 09:37:27 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 61DDB61222 for ; Tue, 13 Apr 2021 09:37:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 61DDB61222 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:Cc:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=xCyFAyEg1HRTZgwe8qf9eF2VxpDpfio639WP321cGH4=; b=aM/Qqu0CK9DIqqgarKf7PsT+8 P4CGV76PGmwbzuStuEw66ixL9jnaIMMWpv8D7TaL6GIQBeBAeCemASDJLINkGWB9lC2h8/WrrF1LE 0teAE825n95oBpS4ZDGmd/fu61AhB2Ud2XdFwpieNIbG5ZjZBl50L3c5PY3xou73qvc17WJevp+Ti dkcbeGOBWqIQi61jovP3taSJXvln4JUwqy498T6oAD+t2iKb+f5R+L8s8eTeNWipacrc5jjgXgQZt eADmyyvXyIFZS54x3a+VVx3f9gyvS6WNHDok3SkW6BKLDXKQPL8p+wuaQJAZPSH48SqXcc2kosCMS N2umfG+GA==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWFRZ-008j46-2s; Tue, 13 Apr 2021 09:34:49 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWFPg-008ih9-Pf for linux-arm-kernel@desiato.infradead.org; Tue, 13 Apr 2021 09:32:53 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: 
MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=hQiI9qLGEQ7fZw8FUicXaAzJLZj4L753xxdnQPvIybI=; b=nkxVt0/rTFihuFh03FjAI8Y7+/ hAKyQY3lQdNhCgi43tgy9quv+TOXPteyYBUSTzGvF3gMID6Rm4eHCFq8wDq/p5MzmOsuWIWAO3O/7 kYZQYr5CtvZpv4/q92mwyN+IQvBVglngvTVBlM/KL/TyBr8Jli9NJVabByWrXoPZMznldAT7IuP2y RgPi6E3oOmUjZAazEe30NndaVItl26U6m21XYS4krbOxlNxUnvFOPEW8M+MjjIqAS+y6T0U4LPHxT lGS7lf+QMFzpb6s1K98QQ0IOn5R2bNuQ3FrdbOweFEdMYhwURIrt9i7TgJ74cpnSqy0CnyrtELB1v d7JUowCQ==; Received: from mga14.intel.com ([192.55.52.115]) by bombadil.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWFPc-006siH-66 for linux-arm-kernel@lists.infradead.org; Tue, 13 Apr 2021 09:32:51 +0000 IronPort-SDR: fNIXYEYuSfJccJAdHd1Km7oApnWY6cfHC8P/p57zk8liDc+oOesQPeXGgi1iqYeGuX0OIOm+no 7ddTftNGmdTw== X-IronPort-AV: E=McAfee;i="6200,9189,9952"; a="193937739" X-IronPort-AV: E=Sophos;i="5.82,219,1613462400"; d="scan'208";a="193937739" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2021 02:32:47 -0700 IronPort-SDR: EQYk7wZoUkdPz9Or2nMTKf79q6bHfUpfeFRoqNL2/e2nLNknwwRDBaF83P5LGlImrxzyXH3Vup fkVkptV816rQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.82,219,1613462400"; d="scan'208";a="424178242" Received: from glass.png.intel.com ([10.158.65.59]) by orsmga008.jf.intel.com with ESMTP; 13 Apr 2021 02:32:41 -0700 From: Ong Boon Leong To: Giuseppe Cavallaro , Alexandre Torgue , Jose Abreu , "David S . Miller" , Jakub Kicinski , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: alexandre.torgue@foss.st.com, Maxime Coquelin , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , KP Singh , netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Ong Boon Leong Subject: [PATCH net-next v2 6/7] net: stmmac: Enable RX via AF_XDP zero-copy Date: Tue, 13 Apr 2021 17:36:25 +0800 Message-Id: <20210413093626.3447-7-boon.leong.ong@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com> References: <20210413093626.3447-1-boon.leong.ong@intel.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210413_023248_297946_3BCDE05C X-CRM114-Status: GOOD ( 28.58 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds the support for receiving packet via AF_XDP zero-copy mechanism. XDP ZC uses 1:1 mapping of XDP buffer to receive packet, therefore the use of split header is not used currently. The 'xdp_buff' is declared as union together with a struct that contains 'page', 'addr' and 'page_offset' that are associated with primary buffer. RX buffers are now allocated either via page_pool or xsk pool. 
Buffers that come from an xsk_pool are allocated and freed using the following new functions:

 * stmmac_alloc_rx_buffers_zc(struct stmmac_priv *priv, u32 queue)
 * dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue)

With these functions available, the following driver functions are extended to support XDP ZC:

 * stmmac_reinit_rx_buffers()
 * __init_dma_rx_desc_rings()
 * init_dma_rx_desc_rings()
 * __free_dma_rx_desc_resources()

Note: stmmac_alloc_rx_buffers_zc() may return -ENOMEM because the RX XDP buffer pool has not been populated (e.g. samples/bpf/xdpsock in TX-only mode). It is still fine to let TX XDP ZC continue in that case, so the -ENOMEM is silently ignored and the driver still transitions successfully into XDP ZC mode for the affected RX and TX queue.

Since the XDP ZC buffer size differs, the DMA buffer size must be reprogrammed accordingly for any RX DMA channel/queue that is populated with XDP buffers from the XSK pool.

Next, to add or remove a per-queue XSK pool, stmmac_xdp_setup_pool() calls stmmac_xdp_enable_pool() or stmmac_xdp_disable_pool(), which in turn coordinate tearing down and setting up the RX ring by removing and reallocating RX buffers and descriptors through stmmac_disable_rx_queue() and stmmac_enable_rx_queue().

In addition, stmmac_xsk_wakeup() is added to initiate XDP RX buffer replenishment by signalling the user application to add available XDP frames back to the FILL queue.

For RX processing of XDP zero-copy buffers, stmmac_rx_zc() is introduced; it is implemented under the assumption that RX split header is disabled. When the XDP verdict is XDP_PASS, the XDP buffer is copied into an sk_buff allocated through stmmac_construct_skb_zc() and handed to Linux network GRO inside stmmac_dispatch_skb_zc(). Free RX buffers are then replenished using stmmac_rx_refill_zc().

v2: introduce __stmmac_disable_all_queues() to contain the original code that does napi_disable(), and make stmmac_setup_tc_block_cb() use it. Move synchronize_rcu() into stmmac_disable_all_queues(), which eventually calls __stmmac_disable_all_queues(). Then make both stmmac_release() and stmmac_suspend() use stmmac_disable_all_queues(). Thanks to David Miller for spotting the synchronize_rcu() issue in the v1 patch.
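For context, not part of this patch: the RX zero-copy path above is exercised from user space by an AF_XDP application such as samples/bpf/xdpsock. A rough, illustrative sketch follows; it assumes libbpf's <bpf/xsk.h> helpers (newer stacks use <xdp/xsk.h>), and the function name setup_rx_zc, the ring variables and NUM_FRAMES are invented for the example. The application registers a UMEM, binds the socket to the chosen queue with XDP_ZEROCOPY, and pre-fills the FILL ring so that __init_dma_rx_desc_rings()/stmmac_rx_refill_zc() have XSK buffers to post:

#include <bpf/xsk.h>        /* libbpf XSK helpers; newer stacks ship <xdp/xsk.h> */
#include <linux/if_link.h>  /* XDP_FLAGS_DRV_MODE */
#include <linux/if_xdp.h>   /* XDP_ZEROCOPY, XDP_USE_NEED_WAKEUP */
#include <stdlib.h>
#include <unistd.h>

#define NUM_FRAMES 4096     /* illustrative UMEM size, not a driver requirement */

static struct xsk_umem *umem;
static struct xsk_ring_prod fq; /* FILL ring: hands RX buffers to the driver */
static struct xsk_ring_cons cq; /* COMPLETION ring: returns finished TX buffers */
static struct xsk_ring_cons rx;
static struct xsk_ring_prod tx;
static struct xsk_socket *xsk;

static int setup_rx_zc(const char *ifname, __u32 queue_id)
{
    __u64 size = NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;
    struct xsk_socket_config cfg = {
        .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
        .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
        .xdp_flags = XDP_FLAGS_DRV_MODE,                  /* native XDP */
        .bind_flags = XDP_ZEROCOPY | XDP_USE_NEED_WAKEUP,
    };
    void *bufs;
    __u32 idx;
    int i;

    if (posix_memalign(&bufs, getpagesize(), size))
        return -1;
    if (xsk_umem__create(&umem, bufs, size, &fq, &cq, NULL))
        return -1;
    /* Binding with XDP_ZEROCOPY on this queue is what flips the driver
     * into the xsk_pool RX path added by this patch. */
    if (xsk_socket__create(&xsk, ifname, queue_id, umem, &rx, &tx, &cfg))
        return -1;

    /* Pre-fill the FILL ring; without this the driver sees an empty pool
     * (the TX-only xdpsock case mentioned above) and simply skips RX
     * buffer population until frames are added back. */
    if (xsk_ring_prod__reserve(&fq, XSK_RING_PROD__DEFAULT_NUM_DESCS, &idx) !=
        XSK_RING_PROD__DEFAULT_NUM_DESCS)
        return -1;
    for (i = 0; i < XSK_RING_PROD__DEFAULT_NUM_DESCS; i++)
        *xsk_ring_prod__fill_addr(&fq, idx++) =
            (__u64)i * XSK_UMEM__DEFAULT_FRAME_SIZE;
    xsk_ring_prod__submit(&fq, XSK_RING_PROD__DEFAULT_NUM_DESCS);

    return 0;
}

Received frames are then consumed with xsk_ring_cons__peek() on the RX ring and their addresses recycled back into the FILL ring, which the driver requests via the need_wakeup mechanism described above.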
Signed-off-by: Ong Boon Leong --- drivers/net/ethernet/stmicro/stmmac/stmmac.h | 18 +- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 600 +++++++++++++++++- .../net/ethernet/stmicro/stmmac/stmmac_xdp.c | 87 +++ .../net/ethernet/stmicro/stmmac/stmmac_xdp.h | 3 + 4 files changed, 679 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h index c49debb62b05..aa0db622ee69 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h @@ -77,9 +77,14 @@ struct stmmac_tx_queue { }; struct stmmac_rx_buffer { - struct page *page; - dma_addr_t addr; - __u32 page_offset; + union { + struct { + struct page *page; + dma_addr_t addr; + __u32 page_offset; + }; + struct xdp_buff *xdp; + }; struct page *sec_page; dma_addr_t sec_addr; }; @@ -88,6 +93,7 @@ struct stmmac_rx_queue { u32 rx_count_frames; u32 queue_index; struct xdp_rxq_info xdp_rxq; + struct xsk_buff_pool *xsk_pool; struct page_pool *page_pool; struct stmmac_rx_buffer *buf_pool; struct stmmac_priv *priv_data; @@ -95,6 +101,7 @@ struct stmmac_rx_queue { struct dma_desc *dma_rx ____cacheline_aligned_in_smp; unsigned int cur_rx; unsigned int dirty_rx; + unsigned int buf_alloc_num; u32 rx_zeroc_thresh; dma_addr_t dma_rx_phy; u32 rx_tail_addr; @@ -283,6 +290,7 @@ struct stmmac_priv { struct stmmac_rss rss; /* XDP BPF Program */ + unsigned long *af_xdp_zc_qps; struct bpf_prog *xdp_prog; }; @@ -328,6 +336,10 @@ static inline unsigned int stmmac_rx_offset(struct stmmac_priv *priv) return 0; } +void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue); +void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue); +int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags); + #if IS_ENABLED(CONFIG_STMMAC_SELFTESTS) void stmmac_selftest_run(struct net_device *dev, struct ethtool_test *etest, u64 *buf); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 329a3abbac76..48e755ebcf2b 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -40,6 +40,7 @@ #include #include #include +#include #include "stmmac_ptp.h" #include "stmmac.h" #include "stmmac_xdp.h" @@ -69,6 +70,8 @@ MODULE_PARM_DESC(phyaddr, "Physical device address"); #define STMMAC_TX_THRESH(x) ((x)->dma_tx_size / 4) #define STMMAC_RX_THRESH(x) ((x)->dma_rx_size / 4) +#define STMMAC_RX_FILL_BATCH 16 + #define STMMAC_XDP_PASS 0 #define STMMAC_XDP_CONSUMED BIT(0) #define STMMAC_XDP_TX BIT(1) @@ -179,11 +182,7 @@ static void stmmac_verify_args(void) eee_timer = STMMAC_DEFAULT_LPI_TIMER; } -/** - * stmmac_disable_all_queues - Disable all queues - * @priv: driver private structure - */ -static void stmmac_disable_all_queues(struct stmmac_priv *priv) +static void __stmmac_disable_all_queues(struct stmmac_priv *priv) { u32 rx_queues_cnt = priv->plat->rx_queues_to_use; u32 tx_queues_cnt = priv->plat->tx_queues_to_use; @@ -200,6 +199,28 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv) } } +/** + * stmmac_disable_all_queues - Disable all queues + * @priv: driver private structure + */ +static void stmmac_disable_all_queues(struct stmmac_priv *priv) +{ + u32 rx_queues_cnt = priv->plat->rx_queues_to_use; + struct stmmac_rx_queue *rx_q; + u32 queue; + + /* synchronize_rcu() needed for pending XDP buffers to drain */ + for (queue = 0; queue < rx_queues_cnt; queue++) { + rx_q = &priv->rx_queue[queue]; + if 
(rx_q->xsk_pool) { + synchronize_rcu(); + break; + } + } + + __stmmac_disable_all_queues(priv); +} + /** * stmmac_enable_all_queues - Enable all queues * @priv: driver private structure @@ -1509,6 +1530,8 @@ static int stmmac_alloc_rx_buffers(struct stmmac_priv *priv, u32 queue, queue); if (ret) return ret; + + rx_q->buf_alloc_num++; } return 0; @@ -1539,6 +1562,56 @@ static void dma_recycle_rx_skbufs(struct stmmac_priv *priv, u32 queue) } } +/** + * dma_free_rx_xskbufs - free RX dma buffers from XSK pool + * @priv: private structure + * @queue: RX queue index + */ +static void dma_free_rx_xskbufs(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + int i; + + for (i = 0; i < priv->dma_rx_size; i++) { + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; + + if (!buf->xdp) + continue; + + xsk_buff_free(buf->xdp); + buf->xdp = NULL; + } +} + +static int stmmac_alloc_rx_buffers_zc(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + int i; + + for (i = 0; i < priv->dma_rx_size; i++) { + struct stmmac_rx_buffer *buf; + dma_addr_t dma_addr; + struct dma_desc *p; + + if (priv->extend_desc) + p = (struct dma_desc *)(rx_q->dma_erx + i); + else + p = rx_q->dma_rx + i; + + buf = &rx_q->buf_pool[i]; + + buf->xdp = xsk_buff_alloc(rx_q->xsk_pool); + if (!buf->xdp) + return -ENOMEM; + + dma_addr = xsk_buff_xdp_get_dma(buf->xdp); + stmmac_set_desc_addr(priv, p, dma_addr); + rx_q->buf_alloc_num++; + } + + return 0; +} + /** * stmmac_reinit_rx_buffers - reinit the RX descriptor buffer. * @priv: driver private structure @@ -1550,15 +1623,31 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) u32 rx_count = priv->plat->rx_queues_to_use; u32 queue; - for (queue = 0; queue < rx_count; queue++) - dma_recycle_rx_skbufs(priv, queue); + for (queue = 0; queue < rx_count; queue++) { + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + + if (rx_q->xsk_pool) + dma_free_rx_xskbufs(priv, queue); + else + dma_recycle_rx_skbufs(priv, queue); + + rx_q->buf_alloc_num = 0; + } for (queue = 0; queue < rx_count; queue++) { + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; int ret; - ret = stmmac_alloc_rx_buffers(priv, queue, GFP_KERNEL); - if (ret < 0) - goto err_reinit_rx_buffers; + if (rx_q->xsk_pool) { + /* RX XDP ZC buffer pool may not be populated, e.g. + * xdpsock TX-only. 
+ */ + stmmac_alloc_rx_buffers_zc(priv, queue); + } else { + ret = stmmac_alloc_rx_buffers(priv, queue, GFP_KERNEL); + if (ret < 0) + goto err_reinit_rx_buffers; + } } return; @@ -1574,6 +1663,14 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) } } +static struct xsk_buff_pool *stmmac_get_xsk_pool(struct stmmac_priv *priv, u32 queue) +{ + if (!stmmac_xdp_is_enabled(priv) || !test_bit(queue, priv->af_xdp_zc_qps)) + return NULL; + + return xsk_get_pool_from_qid(priv->dev, queue); +} + /** * __init_dma_rx_desc_rings - init the RX descriptor ring (per queue) * @priv: driver private structure @@ -1594,17 +1691,37 @@ static int __init_dma_rx_desc_rings(struct stmmac_priv *priv, u32 queue, gfp_t f stmmac_clear_rx_descriptors(priv, queue); - WARN_ON(xdp_rxq_info_reg_mem_model(&rx_q->xdp_rxq, - MEM_TYPE_PAGE_POOL, - rx_q->page_pool)); + xdp_rxq_info_unreg_mem_model(&rx_q->xdp_rxq); - netdev_info(priv->dev, - "Register MEM_TYPE_PAGE_POOL RxQ-%d\n", - rx_q->queue_index); + rx_q->xsk_pool = stmmac_get_xsk_pool(priv, queue); - ret = stmmac_alloc_rx_buffers(priv, queue, flags); - if (ret < 0) - return -ENOMEM; + if (rx_q->xsk_pool) { + WARN_ON(xdp_rxq_info_reg_mem_model(&rx_q->xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, + NULL)); + netdev_info(priv->dev, + "Register MEM_TYPE_XSK_BUFF_POOL RxQ-%d\n", + rx_q->queue_index); + xsk_pool_set_rxq_info(rx_q->xsk_pool, &rx_q->xdp_rxq); + } else { + WARN_ON(xdp_rxq_info_reg_mem_model(&rx_q->xdp_rxq, + MEM_TYPE_PAGE_POOL, + rx_q->page_pool)); + netdev_info(priv->dev, + "Register MEM_TYPE_PAGE_POOL RxQ-%d\n", + rx_q->queue_index); + } + + if (rx_q->xsk_pool) { + /* RX XDP ZC buffer pool may not be populated, e.g. + * xdpsock TX-only. + */ + stmmac_alloc_rx_buffers_zc(priv, queue); + } else { + ret = stmmac_alloc_rx_buffers(priv, queue, flags); + if (ret < 0) + return -ENOMEM; + } rx_q->cur_rx = 0; rx_q->dirty_rx = 0; @@ -1645,7 +1762,15 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) err_init_rx_buffers: while (queue >= 0) { - dma_free_rx_skbufs(priv, queue); + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + + if (rx_q->xsk_pool) + dma_free_rx_xskbufs(priv, queue); + else + dma_free_rx_skbufs(priv, queue); + + rx_q->buf_alloc_num = 0; + rx_q->xsk_pool = NULL; if (queue == 0) break; @@ -1790,7 +1915,13 @@ static void __free_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; /* Release the DMA RX socket buffers */ - dma_free_rx_skbufs(priv, queue); + if (rx_q->xsk_pool) + dma_free_rx_xskbufs(priv, queue); + else + dma_free_rx_skbufs(priv, queue); + + rx_q->buf_alloc_num = 0; + rx_q->xsk_pool = NULL; /* Free DMA regions of consistent memory previously allocated */ if (!priv->extend_desc) @@ -2222,12 +2353,24 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv) /* configure all channels */ for (chan = 0; chan < rx_channels_count; chan++) { + struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan]; + u32 buf_size; + qmode = priv->plat->rx_queues_cfg[chan].mode_to_use; stmmac_dma_rx_mode(priv, priv->ioaddr, rxmode, chan, rxfifosz, qmode); - stmmac_set_dma_bfsize(priv, priv->ioaddr, priv->dma_buf_sz, - chan); + + if (rx_q->xsk_pool) { + buf_size = xsk_pool_get_rx_frame_size(rx_q->xsk_pool); + stmmac_set_dma_bfsize(priv, priv->ioaddr, + buf_size, + chan); + } else { + stmmac_set_dma_bfsize(priv, priv->ioaddr, + priv->dma_buf_sz, + chan); + } } for (chan = 0; chan < tx_channels_count; chan++) { @@ -2638,7 +2781,7 @@ static int stmmac_init_dma_engine(struct 
stmmac_priv *priv) rx_q->dma_rx_phy, chan); rx_q->rx_tail_addr = rx_q->dma_rx_phy + - (priv->dma_rx_size * + (rx_q->buf_alloc_num * sizeof(struct dma_desc)); stmmac_set_rx_tail_ptr(priv, priv->ioaddr, rx_q->rx_tail_addr, chan); @@ -4479,6 +4622,302 @@ static void stmmac_finalize_xdp_rx(struct stmmac_priv *priv, xdp_do_flush(); } +static struct sk_buff *stmmac_construct_skb_zc(struct stmmac_channel *ch, + struct xdp_buff *xdp) +{ + unsigned int metasize = xdp->data - xdp->data_meta; + unsigned int datasize = xdp->data_end - xdp->data; + struct sk_buff *skb; + + skb = __napi_alloc_skb(&ch->rx_napi, + xdp->data_end - xdp->data_hard_start, + GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!skb)) + return NULL; + + skb_reserve(skb, xdp->data - xdp->data_hard_start); + memcpy(__skb_put(skb, datasize), xdp->data, datasize); + if (metasize) + skb_metadata_set(skb, metasize); + + return skb; +} + +static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue, + struct dma_desc *p, struct dma_desc *np, + struct xdp_buff *xdp) +{ + struct stmmac_channel *ch = &priv->channel[queue]; + unsigned int len = xdp->data_end - xdp->data; + enum pkt_hash_types hash_type; + int coe = priv->hw->rx_csum; + struct sk_buff *skb; + u32 hash; + + skb = stmmac_construct_skb_zc(ch, xdp); + if (!skb) { + priv->dev->stats.rx_dropped++; + return; + } + + stmmac_get_rx_hwtstamp(priv, p, np, skb); + stmmac_rx_vlan(priv->dev, skb); + skb->protocol = eth_type_trans(skb, priv->dev); + + if (unlikely(!coe)) + skb_checksum_none_assert(skb); + else + skb->ip_summed = CHECKSUM_UNNECESSARY; + + if (!stmmac_get_rx_hash(priv, p, &hash, &hash_type)) + skb_set_hash(skb, hash, hash_type); + + skb_record_rx_queue(skb, queue); + napi_gro_receive(&ch->rx_napi, skb); + + priv->dev->stats.rx_packets++; + priv->dev->stats.rx_bytes += len; +} + +static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + unsigned int entry = rx_q->dirty_rx; + struct dma_desc *rx_desc = NULL; + bool ret = true; + + budget = min(budget, stmmac_rx_dirty(priv, queue)); + + while (budget-- > 0 && entry != rx_q->cur_rx) { + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry]; + dma_addr_t dma_addr; + bool use_rx_wd; + + if (!buf->xdp) { + buf->xdp = xsk_buff_alloc(rx_q->xsk_pool); + if (!buf->xdp) { + ret = false; + break; + } + } + + if (priv->extend_desc) + rx_desc = (struct dma_desc *)(rx_q->dma_erx + entry); + else + rx_desc = rx_q->dma_rx + entry; + + dma_addr = xsk_buff_xdp_get_dma(buf->xdp); + stmmac_set_desc_addr(priv, rx_desc, dma_addr); + stmmac_set_desc_sec_addr(priv, rx_desc, 0, false); + stmmac_refill_desc3(priv, rx_q, rx_desc); + + rx_q->rx_count_frames++; + rx_q->rx_count_frames += priv->rx_coal_frames[queue]; + if (rx_q->rx_count_frames > priv->rx_coal_frames[queue]) + rx_q->rx_count_frames = 0; + + use_rx_wd = !priv->rx_coal_frames[queue]; + use_rx_wd |= rx_q->rx_count_frames > 0; + if (!priv->use_riwt) + use_rx_wd = false; + + dma_wmb(); + stmmac_set_rx_owner(priv, rx_desc, use_rx_wd); + + entry = STMMAC_GET_ENTRY(entry, priv->dma_rx_size); + } + + if (rx_desc) { + rx_q->dirty_rx = entry; + rx_q->rx_tail_addr = rx_q->dma_rx_phy + + (rx_q->dirty_rx * sizeof(struct dma_desc)); + stmmac_set_rx_tail_ptr(priv, priv->ioaddr, rx_q->rx_tail_addr, queue); + } + + return ret; +} + +static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + unsigned int count = 0, error = 0, len = 0; + int dirty = 
stmmac_rx_dirty(priv, queue); + unsigned int next_entry = rx_q->cur_rx; + unsigned int desc_size; + struct bpf_prog *prog; + bool failure = false; + int xdp_status = 0; + int status = 0; + + if (netif_msg_rx_status(priv)) { + void *rx_head; + + netdev_dbg(priv->dev, "%s: descriptor ring:\n", __func__); + if (priv->extend_desc) { + rx_head = (void *)rx_q->dma_erx; + desc_size = sizeof(struct dma_extended_desc); + } else { + rx_head = (void *)rx_q->dma_rx; + desc_size = sizeof(struct dma_desc); + } + + stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true, + rx_q->dma_rx_phy, desc_size); + } + while (count < limit) { + struct stmmac_rx_buffer *buf; + unsigned int buf1_len = 0; + struct dma_desc *np, *p; + int entry; + int res; + + if (!count && rx_q->state_saved) { + error = rx_q->state.error; + len = rx_q->state.len; + } else { + rx_q->state_saved = false; + error = 0; + len = 0; + } + + if (count >= limit) + break; + +read_again: + buf1_len = 0; + entry = next_entry; + buf = &rx_q->buf_pool[entry]; + + if (dirty >= STMMAC_RX_FILL_BATCH) { + failure = failure || + !stmmac_rx_refill_zc(priv, queue, dirty); + dirty = 0; + } + + if (priv->extend_desc) + p = (struct dma_desc *)(rx_q->dma_erx + entry); + else + p = rx_q->dma_rx + entry; + + /* read the status of the incoming frame */ + status = stmmac_rx_status(priv, &priv->dev->stats, + &priv->xstats, p); + /* check if managed by the DMA otherwise go ahead */ + if (unlikely(status & dma_own)) + break; + + /* Prefetch the next RX descriptor */ + rx_q->cur_rx = STMMAC_GET_ENTRY(rx_q->cur_rx, + priv->dma_rx_size); + next_entry = rx_q->cur_rx; + + if (priv->extend_desc) + np = (struct dma_desc *)(rx_q->dma_erx + next_entry); + else + np = rx_q->dma_rx + next_entry; + + prefetch(np); + + if (priv->extend_desc) + stmmac_rx_extended_status(priv, &priv->dev->stats, + &priv->xstats, + rx_q->dma_erx + entry); + if (unlikely(status == discard_frame)) { + xsk_buff_free(buf->xdp); + buf->xdp = NULL; + dirty++; + error = 1; + if (!priv->hwts_rx_en) + priv->dev->stats.rx_errors++; + } + + if (unlikely(error && (status & rx_not_ls))) + goto read_again; + if (unlikely(error)) { + count++; + continue; + } + + /* Ensure a valid XSK buffer before proceed */ + if (!buf->xdp) + break; + + /* XSK pool expects RX frame 1:1 mapped to XSK buffer */ + if (likely(status & rx_not_ls)) { + xsk_buff_free(buf->xdp); + buf->xdp = NULL; + dirty++; + count++; + goto read_again; + } + + /* XDP ZC Frame only support primary buffers for now */ + buf1_len = stmmac_rx_buf1_len(priv, p, status, len); + len += buf1_len; + + /* ACS is set; GMAC core strips PAD/FCS for IEEE 802.3 + * Type frames (LLC/LLC-SNAP) + * + * llc_snap is never checked in GMAC >= 4, so this ACS + * feature is always disabled and packets need to be + * stripped manually. 
+ */ + if (likely(!(status & rx_not_ls)) && + (likely(priv->synopsys_id >= DWMAC_CORE_4_00) || + unlikely(status != llc_snap))) { + buf1_len -= ETH_FCS_LEN; + len -= ETH_FCS_LEN; + } + + /* RX buffer is good and fit into a XSK pool buffer */ + buf->xdp->data_end = buf->xdp->data + buf1_len; + xsk_buff_dma_sync_for_cpu(buf->xdp, rx_q->xsk_pool); + + rcu_read_lock(); + prog = READ_ONCE(priv->xdp_prog); + res = __stmmac_xdp_run_prog(priv, prog, buf->xdp); + rcu_read_unlock(); + + switch (res) { + case STMMAC_XDP_PASS: + stmmac_dispatch_skb_zc(priv, queue, p, np, buf->xdp); + xsk_buff_free(buf->xdp); + break; + case STMMAC_XDP_CONSUMED: + xsk_buff_free(buf->xdp); + priv->dev->stats.rx_dropped++; + break; + case STMMAC_XDP_TX: + case STMMAC_XDP_REDIRECT: + xdp_status |= res; + break; + } + + buf->xdp = NULL; + dirty++; + count++; + } + + if (status & rx_not_ls) { + rx_q->state_saved = true; + rx_q->state.error = error; + rx_q->state.len = len; + } + + stmmac_finalize_xdp_rx(priv, xdp_status); + + if (xsk_uses_need_wakeup(rx_q->xsk_pool)) { + if (failure || stmmac_rx_dirty(priv, queue) > 0) + xsk_set_rx_need_wakeup(rx_q->xsk_pool); + else + xsk_clear_rx_need_wakeup(rx_q->xsk_pool); + + return (int)count; + } + + return failure ? limit : (int)count; +} + /** * stmmac_rx - manage the receive process * @priv: driver private structure @@ -4766,12 +5205,17 @@ static int stmmac_napi_poll_rx(struct napi_struct *napi, int budget) struct stmmac_channel *ch = container_of(napi, struct stmmac_channel, rx_napi); struct stmmac_priv *priv = ch->priv_data; + struct stmmac_rx_queue *rx_q; u32 chan = ch->index; int work_done; priv->xstats.napi_poll++; - work_done = stmmac_rx(priv, budget, chan); + rx_q = &priv->rx_queue[chan]; + + work_done = rx_q->xsk_pool ? + stmmac_rx_zc(priv, budget, chan) : + stmmac_rx(priv, budget, chan); if (work_done < budget && napi_complete_done(napi, work_done)) { unsigned long flags; @@ -5254,7 +5698,7 @@ static int stmmac_setup_tc_block_cb(enum tc_setup_type type, void *type_data, if (!tc_cls_can_offload_and_chain0(priv->dev, type_data)) return ret; - stmmac_disable_all_queues(priv); + __stmmac_disable_all_queues(priv); switch (type) { case TC_SETUP_CLSU32: @@ -5675,6 +6119,9 @@ static int stmmac_bpf(struct net_device *dev, struct netdev_bpf *bpf) switch (bpf->command) { case XDP_SETUP_PROG: return stmmac_xdp_set_prog(priv, bpf->prog, bpf->extack); + case XDP_SETUP_XSK_POOL: + return stmmac_xdp_setup_pool(priv, bpf->xsk.pool, + bpf->xsk.queue_id); default: return -EOPNOTSUPP; } @@ -5722,6 +6169,102 @@ static int stmmac_xdp_xmit(struct net_device *dev, int num_frames, return nxmit; } +void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_channel *ch = &priv->channel[queue]; + unsigned long flags; + + spin_lock_irqsave(&ch->lock, flags); + stmmac_disable_dma_irq(priv, priv->ioaddr, queue, 1, 0); + spin_unlock_irqrestore(&ch->lock, flags); + + stmmac_stop_rx_dma(priv, queue); + __free_dma_rx_desc_resources(priv, queue); +} + +void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_channel *ch = &priv->channel[queue]; + unsigned long flags; + u32 buf_size; + int ret; + + ret = __alloc_dma_rx_desc_resources(priv, queue); + if (ret) { + netdev_err(priv->dev, "Failed to alloc RX desc.\n"); + return; + } + + ret = __init_dma_rx_desc_rings(priv, queue, GFP_KERNEL); + if (ret) { + __free_dma_rx_desc_resources(priv, queue); + netdev_err(priv->dev, "Failed to init RX desc.\n"); + 
return; + } + + stmmac_clear_rx_descriptors(priv, queue); + + stmmac_init_rx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, + rx_q->dma_rx_phy, rx_q->queue_index); + + rx_q->rx_tail_addr = rx_q->dma_rx_phy + (rx_q->buf_alloc_num * + sizeof(struct dma_desc)); + stmmac_set_rx_tail_ptr(priv, priv->ioaddr, + rx_q->rx_tail_addr, rx_q->queue_index); + + if (rx_q->xsk_pool && rx_q->buf_alloc_num) { + buf_size = xsk_pool_get_rx_frame_size(rx_q->xsk_pool); + stmmac_set_dma_bfsize(priv, priv->ioaddr, + buf_size, + rx_q->queue_index); + } else { + stmmac_set_dma_bfsize(priv, priv->ioaddr, + priv->dma_buf_sz, + rx_q->queue_index); + } + + stmmac_start_rx_dma(priv, queue); + + spin_lock_irqsave(&ch->lock, flags); + stmmac_enable_dma_irq(priv, priv->ioaddr, queue, 1, 0); + spin_unlock_irqrestore(&ch->lock, flags); +} + +int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags) +{ + struct stmmac_priv *priv = netdev_priv(dev); + struct stmmac_rx_queue *rx_q; + struct stmmac_channel *ch; + + if (test_bit(STMMAC_DOWN, &priv->state) || + !netif_carrier_ok(priv->dev)) + return -ENETDOWN; + + if (!stmmac_xdp_is_enabled(priv)) + return -ENXIO; + + if (queue >= priv->plat->rx_queues_to_use) + return -EINVAL; + + rx_q = &priv->rx_queue[queue]; + ch = &priv->channel[queue]; + + if (!rx_q->xsk_pool) + return -ENXIO; + + if (flags & XDP_WAKEUP_RX && + !napi_if_scheduled_mark_missed(&ch->rx_napi)) { + /* EQoS does not have per-DMA channel SW interrupt, + * so we schedule RX Napi straight-away. + */ + if (likely(napi_schedule_prep(&ch->rx_napi))) + __napi_schedule(&ch->rx_napi); + } + + return 0; +} + static const struct net_device_ops stmmac_netdev_ops = { .ndo_open = stmmac_open, .ndo_start_xmit = stmmac_xmit, @@ -5742,6 +6285,7 @@ static const struct net_device_ops stmmac_netdev_ops = { .ndo_vlan_rx_kill_vid = stmmac_vlan_rx_kill_vid, .ndo_bpf = stmmac_bpf, .ndo_xdp_xmit = stmmac_xdp_xmit, + .ndo_xsk_wakeup = stmmac_xsk_wakeup, }; static void stmmac_reset_subtask(struct stmmac_priv *priv) @@ -6075,6 +6619,10 @@ int stmmac_dvr_probe(struct device *device, /* Verify driver arguments */ stmmac_verify_args(); + priv->af_xdp_zc_qps = bitmap_zalloc(MTL_MAX_TX_QUEUES, GFP_KERNEL); + if (!priv->af_xdp_zc_qps) + return -ENOMEM; + /* Allocate workqueue */ priv->wq = create_singlethread_workqueue("stmmac_wq"); if (!priv->wq) { diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c index bf38d231860b..caff0dfc6f4b 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c @@ -1,9 +1,96 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2021, Intel Corporation. */ +#include + #include "stmmac.h" #include "stmmac_xdp.h" +static int stmmac_xdp_enable_pool(struct stmmac_priv *priv, + struct xsk_buff_pool *pool, u16 queue) +{ + struct stmmac_channel *ch = &priv->channel[queue]; + bool need_update; + u32 frame_size; + int err; + + if (queue >= priv->plat->rx_queues_to_use) + return -EINVAL; + + frame_size = xsk_pool_get_rx_frame_size(pool); + /* XDP ZC does not span multiple frame, make sure XSK pool buffer + * size can at least store Q-in-Q frame. 
+ */ + if (frame_size < ETH_FRAME_LEN + VLAN_HLEN * 2) + return -EOPNOTSUPP; + + err = xsk_pool_dma_map(pool, priv->device, STMMAC_RX_DMA_ATTR); + if (err) { + netdev_err(priv->dev, "Failed to map xsk pool\n"); + return err; + } + + need_update = netif_running(priv->dev) && stmmac_xdp_is_enabled(priv); + + if (need_update) { + stmmac_disable_rx_queue(priv, queue); + napi_disable(&ch->rx_napi); + } + + set_bit(queue, priv->af_xdp_zc_qps); + + if (need_update) { + napi_enable(&ch->rx_napi); + stmmac_enable_rx_queue(priv, queue); + + err = stmmac_xsk_wakeup(priv->dev, queue, XDP_WAKEUP_RX); + if (err) + return err; + } + + return 0; +} + +static int stmmac_xdp_disable_pool(struct stmmac_priv *priv, u16 queue) +{ + struct stmmac_channel *ch = &priv->channel[queue]; + struct xsk_buff_pool *pool; + bool need_update; + + if (queue >= priv->plat->rx_queues_to_use) + return -EINVAL; + + pool = xsk_get_pool_from_qid(priv->dev, queue); + if (!pool) + return -EINVAL; + + need_update = netif_running(priv->dev) && stmmac_xdp_is_enabled(priv); + + if (need_update) { + stmmac_disable_rx_queue(priv, queue); + synchronize_rcu(); + napi_disable(&ch->rx_napi); + } + + xsk_pool_dma_unmap(pool, STMMAC_RX_DMA_ATTR); + + clear_bit(queue, priv->af_xdp_zc_qps); + + if (need_update) { + napi_enable(&ch->rx_napi); + stmmac_enable_rx_queue(priv, queue); + } + + return 0; +} + +int stmmac_xdp_setup_pool(struct stmmac_priv *priv, struct xsk_buff_pool *pool, + u16 queue) +{ + return pool ? stmmac_xdp_enable_pool(priv, pool, queue) : + stmmac_xdp_disable_pool(priv, queue); +} + int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog, struct netlink_ext_ack *extack) { diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h index 93948569d92a..896dc987d4ef 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h @@ -5,7 +5,10 @@ #define _STMMAC_XDP_H_ #define STMMAC_MAX_RX_BUF_SIZE(num) (((num) * PAGE_SIZE) - XDP_PACKET_HEADROOM) +#define STMMAC_RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) +int stmmac_xdp_setup_pool(struct stmmac_priv *priv, struct xsk_buff_pool *pool, + u16 queue); int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog, struct netlink_ext_ack *extack); From patchwork Tue Apr 13 09:36:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ong Boon Leong X-Patchwork-Id: 12199829 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14E38C433ED for ; Tue, 13 Apr 2021 09:37:43 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6D1806121E for ; Tue, 13 Apr 2021 09:37:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6D1806121E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) 
header.from=intel.com
From: Ong Boon Leong To: Giuseppe Cavallaro , Alexandre Torgue , Jose Abreu , "David S . 
Miller" , Jakub Kicinski , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: alexandre.torgue@foss.st.com, Maxime Coquelin , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , KP Singh , netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Ong Boon Leong Subject: [PATCH net-next v2 7/7] net: stmmac: Add TX via XDP zero-copy socket Date: Tue, 13 Apr 2021 17:36:26 +0800 Message-Id: <20210413093626.3447-8-boon.leong.ong@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210413093626.3447-1-boon.leong.ong@intel.com> References: <20210413093626.3447-1-boon.leong.ong@intel.com> MIME-Version: 1.0

This patch adds support for XDP ZC TX submission and for cleaning XSK TX frames in stmmac_tx_clean(). The function is made to clean as many completed TX frames as possible, i.e. limited by priv->dma_tx_size instead of the NAPI budget. For a TX ring that is associated with an XSK pool, the new function stmmac_xdp_xmit_zc() transmits frame buffers from the XSK pool by using xsk_tx_peek_desc(). To let stmmac_tx_clean() recognize and clean XSK TX frames, the STMMAC_TXBUF_T_XSK_TX buffer type is introduced.

Since stmmac_tx_clean() uses its return value to signal whether the NAPI function should continue polling, its callers now pass the NAPI budget instead of priv->dma_tx_size through the 'budget' argument, while stmmac_tx_clean() itself always cleans up to the TX ring size. This lets the boolean return status of stmmac_xdp_xmit_zc() decide whether XSK TX work is done: if true, 'xmits' is set to 'budget - 1' so that the NAPI poll may exit; otherwise 'xmits' is set to 'budget' so that NAPI keeps polling, since XSK TX work is not done. Finally, at the end of stmmac_tx_clean(), the function returns the maximum of 'count' and 'xmits' so that the status of both TX cleaning and XSK TX (XDP ZC only) is taken into account.

This patch also adds a new NAPI poll function, stmmac_napi_poll_rxtx(), which is enabled/disabled for RX and TX rings that are bound to an XSK pool. It starts by cleaning the TX ring, then submits XSK TX frames to the TX ring, and only then performs the RX operations, i.e. receiving RX frames and replenishing the RX ring with free buffers obtained from the XSK pool. Accordingly, during XSK RX and TX setup the driver enables stmmac_napi_poll_rxtx() for both RX and TX operations, and during XSK pool tear-down it re-enables the existing independent NAPI poll functions: stmmac_napi_poll_rx() and stmmac_napi_poll_tx().
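For context, not part of this patch: the TX zero-copy flow above is driven from user space by posting descriptors to the XSK TX ring and, because the driver uses the need_wakeup mechanism, kicking it with sendto() so that stmmac_xsk_wakeup() schedules the rxtx NAPI; buffers come back on the COMPLETION ring once stmmac_tx_clean() calls xsk_tx_completed(). A rough sketch, reusing libbpf's <bpf/xsk.h> helpers and the socket/ring variables from the earlier RX sketch; the helper name xsk_tx_one is invented for illustration:

#include <bpf/xsk.h>
#include <errno.h>
#include <sys/socket.h>

/* Send one frame that has already been written at UMEM offset 'addr'
 * ('len' bytes), then reclaim any completed TX buffers. */
static int xsk_tx_one(struct xsk_socket *xsk_sock, struct xsk_ring_prod *txr,
                      struct xsk_ring_cons *cqr, __u64 addr, __u32 len)
{
    __u32 idx, done;

    if (xsk_ring_prod__reserve(txr, 1, &idx) != 1)
        return -EAGAIN;                 /* TX ring full, try again later */

    xsk_ring_prod__tx_desc(txr, idx)->addr = addr;
    xsk_ring_prod__tx_desc(txr, idx)->len = len;
    xsk_ring_prod__submit(txr, 1);

    /* With XDP_USE_NEED_WAKEUP the driver asks to be kicked; the sendto()
     * lands in stmmac_xsk_wakeup(), which schedules the rxtx NAPI so that
     * stmmac_xdp_xmit_zc() moves these descriptors onto the DMA ring. */
    if (xsk_ring_prod__needs_wakeup(txr))
        sendto(xsk_socket__fd(xsk_sock), NULL, 0, MSG_DONTWAIT, NULL, 0);

    /* Buffers reported via xsk_tx_completed() in stmmac_tx_clean() show up
     * here on the COMPLETION ring and can be reused for new frames. */
    done = xsk_ring_cons__peek(cqr, 64, &idx);
    if (done)
        xsk_ring_cons__release(cqr, done);

    return 0;
}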
Signed-off-by: Ong Boon Leong --- drivers/net/ethernet/stmicro/stmmac/stmmac.h | 6 + .../net/ethernet/stmicro/stmmac/stmmac_main.c | 317 ++++++++++++++++-- .../net/ethernet/stmicro/stmmac/stmmac_xdp.c | 16 +- 3 files changed, 310 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h index aa0db622ee69..3d9ceb6234eb 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h @@ -40,6 +40,7 @@ enum stmmac_txbuf_type { STMMAC_TXBUF_T_SKB, STMMAC_TXBUF_T_XDP_TX, STMMAC_TXBUF_T_XDP_NDO, + STMMAC_TXBUF_T_XSK_TX, }; struct stmmac_tx_info { @@ -69,6 +70,8 @@ struct stmmac_tx_queue { struct xdp_frame **xdpf; }; struct stmmac_tx_info *tx_skbuff_dma; + struct xsk_buff_pool *xsk_pool; + u32 xsk_frames_done; unsigned int cur_tx; unsigned int dirty_tx; dma_addr_t dma_tx_phy; @@ -116,6 +119,7 @@ struct stmmac_rx_queue { struct stmmac_channel { struct napi_struct rx_napi ____cacheline_aligned_in_smp; struct napi_struct tx_napi ____cacheline_aligned_in_smp; + struct napi_struct rxtx_napi ____cacheline_aligned_in_smp; struct stmmac_priv *priv_data; spinlock_t lock; u32 index; @@ -338,6 +342,8 @@ static inline unsigned int stmmac_rx_offset(struct stmmac_priv *priv) void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue); void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue); +void stmmac_disable_tx_queue(struct stmmac_priv *priv, u32 queue); +void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue); int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags); #if IS_ENABLED(CONFIG_STMMAC_SELFTESTS) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 48e755ebcf2b..d10a67f0b21c 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -70,6 +70,9 @@ MODULE_PARM_DESC(phyaddr, "Physical device address"); #define STMMAC_TX_THRESH(x) ((x)->dma_tx_size / 4) #define STMMAC_RX_THRESH(x) ((x)->dma_rx_size / 4) +/* Limit to make sure XDP TX and slow path can coexist */ +#define STMMAC_XSK_TX_BUDGET_MAX 256 +#define STMMAC_TX_XSK_AVAIL 16 #define STMMAC_RX_FILL_BATCH 16 #define STMMAC_XDP_PASS 0 @@ -120,6 +123,8 @@ static irqreturn_t stmmac_mac_interrupt(int irq, void *dev_id); static irqreturn_t stmmac_safety_interrupt(int irq, void *dev_id); static irqreturn_t stmmac_msi_intr_tx(int irq, void *data); static irqreturn_t stmmac_msi_intr_rx(int irq, void *data); +static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue); +static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue); #ifdef CONFIG_DEBUG_FS static const struct net_device_ops stmmac_netdev_ops; @@ -192,6 +197,12 @@ static void __stmmac_disable_all_queues(struct stmmac_priv *priv) for (queue = 0; queue < maxq; queue++) { struct stmmac_channel *ch = &priv->channel[queue]; + if (stmmac_xdp_is_enabled(priv) && + test_bit(queue, priv->af_xdp_zc_qps)) { + napi_disable(&ch->rxtx_napi); + continue; + } + if (queue < rx_queues_cnt) napi_disable(&ch->rx_napi); if (queue < tx_queues_cnt) @@ -235,6 +246,12 @@ static void stmmac_enable_all_queues(struct stmmac_priv *priv) for (queue = 0; queue < maxq; queue++) { struct stmmac_channel *ch = &priv->channel[queue]; + if (stmmac_xdp_is_enabled(priv) && + test_bit(queue, priv->af_xdp_zc_qps)) { + napi_enable(&ch->rxtx_napi); + continue; + } + if (queue < rx_queues_cnt) napi_enable(&ch->rx_napi); if 
(queue < tx_queues_cnt) @@ -1488,6 +1505,9 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i) tx_q->xdpf[i] = NULL; } + if (tx_q->tx_skbuff_dma[i].buf_type == STMMAC_TXBUF_T_XSK_TX) + tx_q->xsk_frames_done++; + if (tx_q->tx_skbuff[i] && tx_q->tx_skbuff_dma[i].buf_type == STMMAC_TXBUF_T_SKB) { dev_kfree_skb_any(tx_q->tx_skbuff[i]); @@ -1810,6 +1830,8 @@ static int __init_dma_tx_desc_rings(struct stmmac_priv *priv, u32 queue) priv->dma_tx_size, 0); } + tx_q->xsk_pool = stmmac_get_xsk_pool(priv, queue); + for (i = 0; i < priv->dma_tx_size; i++) { struct dma_desc *p; @@ -1886,10 +1908,19 @@ static int init_dma_desc_rings(struct net_device *dev, gfp_t flags) */ static void dma_free_tx_skbufs(struct stmmac_priv *priv, u32 queue) { + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; int i; + tx_q->xsk_frames_done = 0; + for (i = 0; i < priv->dma_tx_size; i++) stmmac_free_tx_buffer(priv, queue, i); + + if (tx_q->xsk_pool && tx_q->xsk_frames_done) { + xsk_tx_completed(tx_q->xsk_pool, tx_q->xsk_frames_done); + tx_q->xsk_frames_done = 0; + tx_q->xsk_pool = NULL; + } } /** @@ -2010,6 +2041,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) bool xdp_prog = stmmac_xdp_is_enabled(priv); struct page_pool_params pp_params = { 0 }; unsigned int num_pages; + unsigned int napi_id; int ret; rx_q->queue_index = queue; @@ -2057,9 +2089,15 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv, u32 queue) return -ENOMEM; } + if (stmmac_xdp_is_enabled(priv) && + test_bit(queue, priv->af_xdp_zc_qps)) + napi_id = ch->rxtx_napi.napi_id; + else + napi_id = ch->rx_napi.napi_id; + ret = xdp_rxq_info_reg(&rx_q->xdp_rxq, priv->dev, rx_q->queue_index, - ch->rx_napi.napi_id); + napi_id); if (ret) { netdev_err(priv->dev, "Failed to register xdp rxq info\n"); return -EINVAL; @@ -2381,6 +2419,101 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv) } } +static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget) +{ + struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue); + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct xsk_buff_pool *pool = tx_q->xsk_pool; + unsigned int entry = tx_q->cur_tx; + struct dma_desc *tx_desc = NULL; + struct xdp_desc xdp_desc; + bool work_done = true; + + /* Avoids TX time-out as we are sharing with slow path */ + nq->trans_start = jiffies; + + budget = min(budget, stmmac_tx_avail(priv, queue)); + + while (budget-- > 0) { + dma_addr_t dma_addr; + bool set_ic; + + /* We are sharing with slow path and stop XSK TX desc submission when + * available TX ring is less than threshold. + */ + if (unlikely(stmmac_tx_avail(priv, queue) < STMMAC_TX_XSK_AVAIL) || + !netif_carrier_ok(priv->dev)) { + work_done = false; + break; + } + + if (!xsk_tx_peek_desc(pool, &xdp_desc)) + break; + + if (likely(priv->extend_desc)) + tx_desc = (struct dma_desc *)(tx_q->dma_etx + entry); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + tx_desc = &tx_q->dma_entx[entry].basic; + else + tx_desc = tx_q->dma_tx + entry; + + dma_addr = xsk_buff_raw_get_dma(pool, xdp_desc.addr); + xsk_buff_raw_dma_sync_for_device(pool, dma_addr, xdp_desc.len); + + tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_XSK_TX; + + /* To return XDP buffer to XSK pool, we simple call + * xsk_tx_completed(), so we don't need to fill up + * 'buf' and 'xdpf'. 
+ */ + tx_q->tx_skbuff_dma[entry].buf = 0; + tx_q->xdpf[entry] = NULL; + + tx_q->tx_skbuff_dma[entry].map_as_page = false; + tx_q->tx_skbuff_dma[entry].len = xdp_desc.len; + tx_q->tx_skbuff_dma[entry].last_segment = true; + tx_q->tx_skbuff_dma[entry].is_jumbo = false; + + stmmac_set_desc_addr(priv, tx_desc, dma_addr); + + tx_q->tx_count_frames++; + + if (!priv->tx_coal_frames[queue]) + set_ic = false; + else if (tx_q->tx_count_frames % priv->tx_coal_frames[queue] == 0) + set_ic = true; + else + set_ic = false; + + if (set_ic) { + tx_q->tx_count_frames = 0; + stmmac_set_tx_ic(priv, tx_desc); + priv->xstats.tx_set_ic_bit++; + } + + stmmac_prepare_tx_desc(priv, tx_desc, 1, xdp_desc.len, + true, priv->mode, true, true, + xdp_desc.len); + + stmmac_enable_dma_transmission(priv, priv->ioaddr); + + tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_tx_size); + entry = tx_q->cur_tx; + } + + if (tx_desc) { + stmmac_flush_tx_descriptors(priv, queue); + xsk_tx_release(pool); + } + + /* Return true if all of the 3 conditions are met + * a) TX Budget is still available + * b) work_done = true when XSK TX desc peek is empty (no more + * pending XSK TX for transmission) + */ + return !!budget && work_done; +} + /** * stmmac_tx_clean - to manage the transmission completion * @priv: driver private structure @@ -2392,14 +2525,18 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) { struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; unsigned int bytes_compl = 0, pkts_compl = 0; - unsigned int entry, count = 0; + unsigned int entry, xmits = 0, count = 0; __netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue)); priv->xstats.tx_clean++; + tx_q->xsk_frames_done = 0; + entry = tx_q->dirty_tx; - while ((entry != tx_q->cur_tx) && (count < budget)) { + + /* Try to clean all TX complete frame in 1 shot */ + while ((entry != tx_q->cur_tx) && count < priv->dma_tx_size) { struct xdp_frame *xdpf; struct sk_buff *skb; struct dma_desc *p; @@ -2484,6 +2621,9 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) tx_q->xdpf[entry] = NULL; } + if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XSK_TX) + tx_q->xsk_frames_done++; + if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_SKB) { if (likely(skb)) { pkts_compl++; @@ -2511,6 +2651,28 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) netif_tx_wake_queue(netdev_get_tx_queue(priv->dev, queue)); } + if (tx_q->xsk_pool) { + bool work_done; + + if (tx_q->xsk_frames_done) + xsk_tx_completed(tx_q->xsk_pool, tx_q->xsk_frames_done); + + if (xsk_uses_need_wakeup(tx_q->xsk_pool)) + xsk_set_tx_need_wakeup(tx_q->xsk_pool); + + /* For XSK TX, we try to send as many as possible. + * If XSK work done (XSK TX desc empty and budget still + * available), return "budget - 1" to reenable TX IRQ. + * Else, return "budget" to make NAPI continue polling. 
+ */ + work_done = stmmac_xdp_xmit_zc(priv, queue, + STMMAC_XSK_TX_BUDGET_MAX); + if (work_done) + xmits = budget - 1; + else + xmits = budget; + } + if (priv->eee_enabled && !priv->tx_path_in_lpi_mode && priv->eee_sw_timer_en) { stmmac_enable_eee_mode(priv); @@ -2525,7 +2687,8 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) __netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue)); - return count; + /* Combine decisions from TX clean and XSK TX */ + return max(count, xmits); } /** @@ -2607,24 +2770,31 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan, u32 dir) { int status = stmmac_dma_interrupt_status(priv, priv->ioaddr, &priv->xstats, chan, dir); + struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan]; + struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; struct stmmac_channel *ch = &priv->channel[chan]; + struct napi_struct *rx_napi; + struct napi_struct *tx_napi; unsigned long flags; + rx_napi = rx_q->xsk_pool ? &ch->rxtx_napi : &ch->rx_napi; + tx_napi = tx_q->xsk_pool ? &ch->rxtx_napi : &ch->tx_napi; + if ((status & handle_rx) && (chan < priv->plat->rx_queues_to_use)) { - if (napi_schedule_prep(&ch->rx_napi)) { + if (napi_schedule_prep(rx_napi)) { spin_lock_irqsave(&ch->lock, flags); stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0); spin_unlock_irqrestore(&ch->lock, flags); - __napi_schedule(&ch->rx_napi); + __napi_schedule(rx_napi); } } if ((status & handle_tx) && (chan < priv->plat->tx_queues_to_use)) { - if (napi_schedule_prep(&ch->tx_napi)) { + if (napi_schedule_prep(tx_napi)) { spin_lock_irqsave(&ch->lock, flags); stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1); spin_unlock_irqrestore(&ch->lock, flags); - __napi_schedule(&ch->tx_napi); + __napi_schedule(tx_napi); } } @@ -2822,16 +2992,18 @@ static enum hrtimer_restart stmmac_tx_timer(struct hrtimer *t) struct stmmac_tx_queue *tx_q = container_of(t, struct stmmac_tx_queue, txtimer); struct stmmac_priv *priv = tx_q->priv_data; struct stmmac_channel *ch; + struct napi_struct *napi; ch = &priv->channel[tx_q->queue_index]; + napi = tx_q->xsk_pool ? &ch->rxtx_napi : &ch->tx_napi; - if (likely(napi_schedule_prep(&ch->tx_napi))) { + if (likely(napi_schedule_prep(napi))) { unsigned long flags; spin_lock_irqsave(&ch->lock, flags); stmmac_disable_dma_irq(priv, priv->ioaddr, ch->index, 0, 1); spin_unlock_irqrestore(&ch->lock, flags); - __napi_schedule(&ch->tx_napi); + __napi_schedule(napi); } return HRTIMER_NORESTART; @@ -4629,7 +4801,7 @@ static struct sk_buff *stmmac_construct_skb_zc(struct stmmac_channel *ch, unsigned int datasize = xdp->data_end - xdp->data; struct sk_buff *skb; - skb = __napi_alloc_skb(&ch->rx_napi, + skb = __napi_alloc_skb(&ch->rxtx_napi, xdp->data_end - xdp->data_hard_start, GFP_ATOMIC | __GFP_NOWARN); if (unlikely(!skb)) @@ -4673,7 +4845,7 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue, skb_set_hash(skb, hash, hash_type); skb_record_rx_queue(skb, queue); - napi_gro_receive(&ch->rx_napi, skb); + napi_gro_receive(&ch->rxtx_napi, skb); priv->dev->stats.rx_packets++; priv->dev->stats.rx_bytes += len; @@ -5205,17 +5377,12 @@ static int stmmac_napi_poll_rx(struct napi_struct *napi, int budget) struct stmmac_channel *ch = container_of(napi, struct stmmac_channel, rx_napi); struct stmmac_priv *priv = ch->priv_data; - struct stmmac_rx_queue *rx_q; u32 chan = ch->index; int work_done; priv->xstats.napi_poll++; - rx_q = &priv->rx_queue[chan]; - - work_done = rx_q->xsk_pool ? 
- stmmac_rx_zc(priv, budget, chan) : - stmmac_rx(priv, budget, chan); + work_done = stmmac_rx(priv, budget, chan); if (work_done < budget && napi_complete_done(napi, work_done)) { unsigned long flags; @@ -5237,7 +5404,7 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget) priv->xstats.napi_poll++; - work_done = stmmac_tx_clean(priv, priv->dma_tx_size, chan); + work_done = stmmac_tx_clean(priv, budget, chan); work_done = min(work_done, budget); if (work_done < budget && napi_complete_done(napi, work_done)) { @@ -5251,6 +5418,42 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget) return work_done; } +static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget) +{ + struct stmmac_channel *ch = + container_of(napi, struct stmmac_channel, rxtx_napi); + struct stmmac_priv *priv = ch->priv_data; + int rx_done, tx_done; + u32 chan = ch->index; + + priv->xstats.napi_poll++; + + tx_done = stmmac_tx_clean(priv, budget, chan); + tx_done = min(tx_done, budget); + + rx_done = stmmac_rx_zc(priv, budget, chan); + + /* If either TX or RX work is not complete, return budget + * and keep pooling + */ + if (tx_done >= budget || rx_done >= budget) + return budget; + + /* all work done, exit the polling mode */ + if (napi_complete_done(napi, rx_done)) { + unsigned long flags; + + spin_lock_irqsave(&ch->lock, flags); + /* Both RX and TX work done are compelte, + * so enable both RX & TX IRQs. + */ + stmmac_enable_dma_irq(priv, priv->ioaddr, chan, 1, 1); + spin_unlock_irqrestore(&ch->lock, flags); + } + + return min(rx_done, budget - 1); +} + /** * stmmac_tx_timeout * @dev : Pointer to net device structure @@ -6231,10 +6434,63 @@ void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue) spin_unlock_irqrestore(&ch->lock, flags); } +void stmmac_disable_tx_queue(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_channel *ch = &priv->channel[queue]; + unsigned long flags; + + spin_lock_irqsave(&ch->lock, flags); + stmmac_disable_dma_irq(priv, priv->ioaddr, queue, 0, 1); + spin_unlock_irqrestore(&ch->lock, flags); + + stmmac_stop_tx_dma(priv, queue); + __free_dma_tx_desc_resources(priv, queue); +} + +void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue) +{ + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct stmmac_channel *ch = &priv->channel[queue]; + unsigned long flags; + int ret; + + ret = __alloc_dma_tx_desc_resources(priv, queue); + if (ret) { + netdev_err(priv->dev, "Failed to alloc TX desc.\n"); + return; + } + + ret = __init_dma_tx_desc_rings(priv, queue); + if (ret) { + __free_dma_tx_desc_resources(priv, queue); + netdev_err(priv->dev, "Failed to init TX desc.\n"); + return; + } + + stmmac_clear_tx_descriptors(priv, queue); + + stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, + tx_q->dma_tx_phy, tx_q->queue_index); + + if (tx_q->tbs & STMMAC_TBS_AVAIL) + stmmac_enable_tbs(priv, priv->ioaddr, 1, tx_q->queue_index); + + tx_q->tx_tail_addr = tx_q->dma_tx_phy; + stmmac_set_tx_tail_ptr(priv, priv->ioaddr, + tx_q->tx_tail_addr, tx_q->queue_index); + + stmmac_start_tx_dma(priv, queue); + + spin_lock_irqsave(&ch->lock, flags); + stmmac_enable_dma_irq(priv, priv->ioaddr, queue, 0, 1); + spin_unlock_irqrestore(&ch->lock, flags); +} + int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags) { struct stmmac_priv *priv = netdev_priv(dev); struct stmmac_rx_queue *rx_q; + struct stmmac_tx_queue *tx_q; struct stmmac_channel *ch; if (test_bit(STMMAC_DOWN, &priv->state) || @@ -6244,22 +6500,23 @@ int 
stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags) if (!stmmac_xdp_is_enabled(priv)) return -ENXIO; - if (queue >= priv->plat->rx_queues_to_use) + if (queue >= priv->plat->rx_queues_to_use || + queue >= priv->plat->tx_queues_to_use) return -EINVAL; rx_q = &priv->rx_queue[queue]; + tx_q = &priv->tx_queue[queue]; ch = &priv->channel[queue]; - if (!rx_q->xsk_pool) + if (!rx_q->xsk_pool && !tx_q->xsk_pool) return -ENXIO; - if (flags & XDP_WAKEUP_RX && - !napi_if_scheduled_mark_missed(&ch->rx_napi)) { + if (!napi_if_scheduled_mark_missed(&ch->rxtx_napi)) { /* EQoS does not have per-DMA channel SW interrupt, * so we schedule RX Napi straight-away. */ - if (likely(napi_schedule_prep(&ch->rx_napi))) - __napi_schedule(&ch->rx_napi); + if (likely(napi_schedule_prep(&ch->rxtx_napi))) + __napi_schedule(&ch->rxtx_napi); } return 0; @@ -6444,6 +6701,12 @@ static void stmmac_napi_add(struct net_device *dev) stmmac_napi_poll_tx, NAPI_POLL_WEIGHT); } + if (queue < priv->plat->rx_queues_to_use && + queue < priv->plat->tx_queues_to_use) { + netif_napi_add(dev, &ch->rxtx_napi, + stmmac_napi_poll_rxtx, + NAPI_POLL_WEIGHT); + } } } @@ -6461,6 +6724,10 @@ static void stmmac_napi_del(struct net_device *dev) netif_napi_del(&ch->rx_napi); if (queue < priv->plat->tx_queues_to_use) netif_napi_del(&ch->tx_napi); + if (queue < priv->plat->rx_queues_to_use && + queue < priv->plat->tx_queues_to_use) { + netif_napi_del(&ch->rxtx_napi); + } } } diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c index caff0dfc6f4b..105821b53020 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c @@ -14,7 +14,8 @@ static int stmmac_xdp_enable_pool(struct stmmac_priv *priv, u32 frame_size; int err; - if (queue >= priv->plat->rx_queues_to_use) + if (queue >= priv->plat->rx_queues_to_use || + queue >= priv->plat->tx_queues_to_use) return -EINVAL; frame_size = xsk_pool_get_rx_frame_size(pool); @@ -34,14 +35,17 @@ static int stmmac_xdp_enable_pool(struct stmmac_priv *priv, if (need_update) { stmmac_disable_rx_queue(priv, queue); + stmmac_disable_tx_queue(priv, queue); napi_disable(&ch->rx_napi); + napi_disable(&ch->tx_napi); } set_bit(queue, priv->af_xdp_zc_qps); if (need_update) { - napi_enable(&ch->rx_napi); + napi_enable(&ch->rxtx_napi); stmmac_enable_rx_queue(priv, queue); + stmmac_enable_tx_queue(priv, queue); err = stmmac_xsk_wakeup(priv->dev, queue, XDP_WAKEUP_RX); if (err) @@ -57,7 +61,8 @@ static int stmmac_xdp_disable_pool(struct stmmac_priv *priv, u16 queue) struct xsk_buff_pool *pool; bool need_update; - if (queue >= priv->plat->rx_queues_to_use) + if (queue >= priv->plat->rx_queues_to_use || + queue >= priv->plat->tx_queues_to_use) return -EINVAL; pool = xsk_get_pool_from_qid(priv->dev, queue); @@ -68,8 +73,9 @@ static int stmmac_xdp_disable_pool(struct stmmac_priv *priv, u16 queue) if (need_update) { stmmac_disable_rx_queue(priv, queue); + stmmac_disable_tx_queue(priv, queue); synchronize_rcu(); - napi_disable(&ch->rx_napi); + napi_disable(&ch->rxtx_napi); } xsk_pool_dma_unmap(pool, STMMAC_RX_DMA_ATTR); @@ -78,7 +84,9 @@ static int stmmac_xdp_disable_pool(struct stmmac_priv *priv, u16 queue) if (need_update) { napi_enable(&ch->rx_napi); + napi_enable(&ch->tx_napi); stmmac_enable_rx_queue(priv, queue); + stmmac_enable_tx_queue(priv, queue); } return 0;