From patchwork Wed Jul 3 10:18:59 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 11029303
From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v6 net-next 1/5] xdp: allow same allocator usage
Date: Wed, 3 Jul 2019 13:18:59 +0300
Message-Id: <20190703101903.8411-2-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>

First of all, it is an absolute requirement that each RX-queue has its
own page_pool object/allocator. This change is intended to handle the
special case where a single RX-queue can receive packets from two
different net_devices.

In order to protect against using the same allocator for two different
RX queues, add queue_index to xdp_mem_allocator to catch the obvious
mistake where the queue_index mismatches, as proposed by Jesper
Dangaard Brouer.

Adding this at the xdp allocator level allows drivers with such a
dependency to change allocators without further modifications.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 include/net/xdp_priv.h |  2 ++
 net/core/xdp.c         | 55 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/net/xdp_priv.h b/include/net/xdp_priv.h
index 6a8cba6ea79a..9858a4057842 100644
--- a/include/net/xdp_priv.h
+++ b/include/net/xdp_priv.h
@@ -18,6 +18,8 @@ struct xdp_mem_allocator {
 	struct rcu_head rcu;
 	struct delayed_work defer_wq;
 	unsigned long defer_warn;
+	unsigned long refcnt;
+	u32 queue_index;
 };
 
 #endif /* __LINUX_NET_XDP_PRIV_H__ */
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 829377cc83db..4f0ddbb3717a 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -98,6 +98,18 @@ static bool __mem_id_disconnect(int id, bool force)
 		WARN(1, "Request remove non-existing id(%d), driver bug?", id);
 		return true;
 	}
+
+	/* to avoid calling hash lookup twice, decrement refcnt here till it
+	 * reaches zero, then it can be called from workqueue afterwards.
+	 */
+	if (xa->refcnt)
+		xa->refcnt--;
+
+	if (xa->refcnt) {
+		mutex_unlock(&mem_id_lock);
+		return true;
+	}
+
 	xa->disconnect_cnt++;
 
 	/* Detects in-flight packet-pages for page_pool */
@@ -312,6 +324,33 @@ static bool __is_supported_mem_type(enum xdp_mem_type type)
 	return true;
 }
 
+static struct xdp_mem_allocator *xdp_allocator_find(void *allocator)
+{
+	struct xdp_mem_allocator *xae, *xa = NULL;
+	struct rhashtable_iter iter;
+
+	if (!allocator)
+		return xa;
+
+	rhashtable_walk_enter(mem_id_ht, &iter);
+	do {
+		rhashtable_walk_start(&iter);
+
+		while ((xae = rhashtable_walk_next(&iter)) && !IS_ERR(xae)) {
+			if (xae->allocator == allocator) {
+				xa = xae;
+				break;
+			}
+		}
+
+		rhashtable_walk_stop(&iter);
+
+	} while (xae == ERR_PTR(-EAGAIN));
+	rhashtable_walk_exit(&iter);
+
+	return xa;
+}
+
 int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 			       enum xdp_mem_type type, void *allocator)
 {
@@ -347,6 +386,20 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 		}
 	}
 
+	mutex_lock(&mem_id_lock);
+	xdp_alloc = xdp_allocator_find(allocator);
+	if (xdp_alloc) {
+		/* One allocator per queue is supposed only */
+		if (xdp_alloc->queue_index != xdp_rxq->queue_index)
+			return -EINVAL;
+
+		xdp_rxq->mem.id = xdp_alloc->mem.id;
+		xdp_alloc->refcnt++;
+		mutex_unlock(&mem_id_lock);
+		return 0;
+	}
+	mutex_unlock(&mem_id_lock);
+
 	xdp_alloc = kzalloc(sizeof(*xdp_alloc), gfp);
 	if (!xdp_alloc)
 		return -ENOMEM;
@@ -360,6 +413,8 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 	xdp_rxq->mem.id = id;
 	xdp_alloc->mem = xdp_rxq->mem;
 	xdp_alloc->allocator = allocator;
+	xdp_alloc->refcnt = 1;
+	xdp_alloc->queue_index = xdp_rxq->queue_index;
 
 	/* Insert allocator into ID lookup table */
 	ptr = rhashtable_insert_slow(mem_id_ht, &id, &xdp_alloc->node);
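[Editor's note: a minimal sketch of the driver-side usage this patch enables.
The function and variable names here are hypothetical; the cpsw driver in
patch 5/5 is the real consumer. Registering the same allocator twice for the
same queue_index now succeeds and bumps the refcnt; a different queue_index
would get -EINVAL.]

	#include <net/xdp.h>
	#include <net/page_pool.h>

	/* Two netdevs (e.g. the two cpsw slaves) feed one hardware RX
	 * queue backed by a single page_pool.
	 */
	static int register_shared_pool(struct net_device *ndev_a,
					struct net_device *ndev_b,
					struct xdp_rxq_info *rxq_a,
					struct xdp_rxq_info *rxq_b,
					struct page_pool *pool)
	{
		int err;

		err = xdp_rxq_info_reg(rxq_a, ndev_a, 0);
		if (err)
			return err;
		/* first registration: allocates mem.id, refcnt = 1 */
		err = xdp_rxq_info_reg_mem_model(rxq_a, MEM_TYPE_PAGE_POOL, pool);
		if (err)
			return err;

		err = xdp_rxq_info_reg(rxq_b, ndev_b, 0);
		if (err)
			return err;
		/* same allocator, same queue_index: xdp_allocator_find()
		 * hits, mem.id is shared and refcnt becomes 2
		 */
		return xdp_rxq_info_reg_mem_model(rxq_b, MEM_TYPE_PAGE_POOL, pool);
	}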
From patchwork Wed Jul 3 10:19:00 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 11029311

From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v6 net-next 2/5] net: ethernet: ti: davinci_cpdma: add dma mapped submit
Date: Wed, 3 Jul 2019 13:19:00 +0300
Message-Id: <20190703101903.8411-3-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>

In case a DMA-mapped packet needs to be sent, as with the XDP page
pool, the "mapped" submit can be used. This patch adds a DMA-mapped
submit variant based on the regular one.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 89 ++++++++++++++++++++++---
 drivers/net/ethernet/ti/davinci_cpdma.h |  4 ++
 2 files changed, 83 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 5cf1758d425b..8da46394c0e7 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -139,6 +139,7 @@ struct submit_info {
 	int directed;
 	void *token;
 	void *data;
+	int flags;
 	int len;
 };
 
@@ -184,6 +185,8 @@ static struct cpdma_control_info controls[] = {
 		 (directed << CPDMA_TO_PORT_SHIFT));	\
 	} while (0)
 
+#define CPDMA_DMA_EXT_MAP		BIT(16)
+
 static void cpdma_desc_pool_destroy(struct cpdma_ctlr *ctlr)
 {
 	struct cpdma_desc_pool *pool = ctlr->pool;
@@ -1015,6 +1018,7 @@ static int cpdma_chan_submit_si(struct submit_info *si)
 	struct cpdma_chan *chan = si->chan;
 	struct cpdma_ctlr *ctlr = chan->ctlr;
 	int len = si->len;
+	int swlen = len;
 	struct cpdma_desc __iomem *desc;
 	dma_addr_t buffer;
 	u32 mode;
@@ -1036,16 +1040,22 @@ static int cpdma_chan_submit_si(struct submit_info *si)
 		chan->stats.runt_transmit_buff++;
 	}
 
-	buffer = dma_map_single(ctlr->dev, si->data, len, chan->dir);
-	ret = dma_mapping_error(ctlr->dev, buffer);
-	if (ret) {
-		cpdma_desc_free(ctlr->pool, desc, 1);
-		return -EINVAL;
-	}
-
 	mode = CPDMA_DESC_OWNER | CPDMA_DESC_SOP | CPDMA_DESC_EOP;
 	cpdma_desc_to_port(chan, mode, si->directed);
 
+	if (si->flags & CPDMA_DMA_EXT_MAP) {
+		buffer = (u32)si->data;
+		dma_sync_single_for_device(ctlr->dev, buffer, len, chan->dir);
+		swlen |= CPDMA_DMA_EXT_MAP;
+	} else {
+		buffer = dma_map_single(ctlr->dev, si->data, len, chan->dir);
+		ret = dma_mapping_error(ctlr->dev, buffer);
+		if (ret) {
+			cpdma_desc_free(ctlr->pool, desc, 1);
+			return -EINVAL;
+		}
+	}
+
 	/* Relaxed IO accessors can be used here as there is read barrier
 	 * at the end of write sequence.
 	 */
@@ -1055,7 +1065,7 @@ static int cpdma_chan_submit_si(struct submit_info *si)
 	writel_relaxed(mode | len, &desc->hw_mode);
 	writel_relaxed((uintptr_t)si->token, &desc->sw_token);
 	writel_relaxed(buffer, &desc->sw_buffer);
-	writel_relaxed(len, &desc->sw_len);
+	writel_relaxed(swlen, &desc->sw_len);
 	desc_read(desc, sw_len);
 
 	__cpdma_chan_submit(chan, desc);
@@ -1079,6 +1089,32 @@ int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
 	si.data = data;
 	si.len = len;
 	si.directed = directed;
+	si.flags = 0;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (chan->state == CPDMA_STATE_TEARDOWN) {
+		spin_unlock_irqrestore(&chan->lock, flags);
+		return -EINVAL;
+	}
+
+	ret = cpdma_chan_submit_si(&si);
+	spin_unlock_irqrestore(&chan->lock, flags);
+	return ret;
+}
+
+int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
+				  dma_addr_t data, int len, int directed)
+{
+	struct submit_info si;
+	unsigned long flags;
+	int ret;
+
+	si.chan = chan;
+	si.token = token;
+	si.data = (void *)(u32)data;
+	si.len = len;
+	si.directed = directed;
+	si.flags = CPDMA_DMA_EXT_MAP;
 
 	spin_lock_irqsave(&chan->lock, flags);
 	if (chan->state == CPDMA_STATE_TEARDOWN) {
@@ -1103,6 +1139,32 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 	si.data = data;
 	si.len = len;
 	si.directed = directed;
+	si.flags = 0;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (chan->state != CPDMA_STATE_ACTIVE) {
+		spin_unlock_irqrestore(&chan->lock, flags);
+		return -EINVAL;
+	}
+
+	ret = cpdma_chan_submit_si(&si);
+	spin_unlock_irqrestore(&chan->lock, flags);
+	return ret;
+}
+
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+			     dma_addr_t data, int len, int directed)
+{
+	struct submit_info si;
+	unsigned long flags;
+	int ret;
+
+	si.chan = chan;
+	si.token = token;
+	si.data = (void *)(u32)data;
+	si.len = len;
+	si.directed = directed;
+	si.flags = CPDMA_DMA_EXT_MAP;
 
 	spin_lock_irqsave(&chan->lock, flags);
 	if (chan->state != CPDMA_STATE_ACTIVE) {
@@ -1140,10 +1202,17 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
 	uintptr_t token;
 
 	token = desc_read(desc, sw_token);
-	buff_dma = desc_read(desc, sw_buffer);
 	origlen = desc_read(desc, sw_len);
-	dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
+	buff_dma = desc_read(desc, sw_buffer);
 
+	if (origlen & CPDMA_DMA_EXT_MAP) {
+		origlen &= ~CPDMA_DMA_EXT_MAP;
+		dma_sync_single_for_cpu(ctlr->dev, buff_dma, origlen,
+					chan->dir);
+	} else {
+		dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
+	}
+
 	cpdma_desc_free(pool, desc, 1);
 	(*chan->handler)((void *)token, outlen, status);
 }
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 9343c8c73c1b..0271a20c2e09 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -77,8 +77,12 @@ int cpdma_chan_stop(struct cpdma_chan *chan);
 int cpdma_chan_get_stats(struct cpdma_chan *chan,
 			 struct cpdma_chan_stats *stats);
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+			     dma_addr_t data, int len, int directed);
 int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		      int len, int directed);
+int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
+				  dma_addr_t data, int len, int directed);
 int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
 			   int len, int directed);
 int cpdma_chan_process(struct cpdma_chan *chan, int quota);
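[Editor's note: a short sketch of the distinction the two submit paths now
make. The caller function is hypothetical; the DMA-address arithmetic mirrors
what patch 5/5 does with its page_pool buffers, where the headroom offset
comes from that patch.]

	#include <linux/dma-mapping.h>
	#include <net/page_pool.h>
	#include "davinci_cpdma.h"

	/* Submit one RX buffer either pre-mapped or by virtual address.
	 * In the pre-mapped case cpdma only syncs the buffer for the
	 * device and tags the descriptor sw_len with CPDMA_DMA_EXT_MAP,
	 * so the completion path does a sync-for-cpu instead of a
	 * dma_unmap_single().
	 */
	static int submit_rx_buf(struct cpdma_chan *chan, struct page *page,
				 void *va, int headroom, int len,
				 bool premapped)
	{
		if (premapped) {
			dma_addr_t dma;

			dma = page_pool_get_dma_addr(page) + headroom;
			return cpdma_chan_idle_submit_mapped(chan, page, dma,
							     len, 0);
		}

		/* regular path: cpdma maps (and later unmaps) it itself */
		return cpdma_chan_idle_submit(chan, page, va, len, 0);
	}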
From patchwork Wed Jul 3 10:19:01 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 11029309
From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v6 net-next 3/5] net: ethernet: ti: davinci_cpdma: allow desc split while down
Date: Wed, 3 Jul 2019 13:19:01 +0300
Message-Id: <20190703101903.8411-4-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>

It is possible to set ring params while the interfaces are down. When
an interface comes up, it uses the number of descriptors to fill the
RX queue and, on later changes, to create RX pools. Usually this
resplit can happen after the PHY is up, but it can be needed earlier,
so allow it to happen while setting the number of RX descriptors, when
the interfaces are down. Also, since it has no dependency on interface
state, move the resplit to the cpdma layer, where it belongs.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 drivers/net/ethernet/ti/cpsw_ethtool.c  |  9 ++++-----
 drivers/net/ethernet/ti/davinci_cpdma.c | 10 +++++++++-
 drivers/net/ethernet/ti/davinci_cpdma.h |  3 +--
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index f60dc1dfc443..08d7aaee8299 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -664,15 +664,14 @@ int cpsw_set_ringparam(struct net_device *ndev,
 
 	cpsw_suspend_data_pass(ndev);
 
-	cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
-
-	if (cpsw->usage_count)
-		cpdma_chan_split_pool(cpsw->dma);
+	ret = cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
+	if (ret)
+		goto err;
 
 	ret = cpsw_resume_data_pass(ndev);
 	if (!ret)
 		return 0;
-
+err:
 	dev_err(cpsw->dev, "cannot set ring params, closing device\n");
 	dev_close(ndev);
 	return ret;
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 8da46394c0e7..4167b0b77c8e 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -1423,8 +1423,16 @@ int cpdma_get_num_tx_descs(struct cpdma_ctlr *ctlr)
 	return ctlr->num_tx_desc;
 }
 
-void cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc)
+int cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc)
 {
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&ctlr->lock, flags);
 	ctlr->num_rx_desc = num_rx_desc;
 	ctlr->num_tx_desc = ctlr->pool->num_desc - ctlr->num_rx_desc;
+	ret = cpdma_chan_split_pool(ctlr);
+	spin_unlock_irqrestore(&ctlr->lock, flags);
+
+	return ret;
 }
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 0271a20c2e09..d3cfe234d16a 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -116,8 +116,7 @@ enum cpdma_control {
 int cpdma_control_get(struct cpdma_ctlr *ctlr, int control);
 int cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value);
 int cpdma_get_num_rx_descs(struct cpdma_ctlr *ctlr);
-void cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc);
+int cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc);
 int cpdma_get_num_tx_descs(struct cpdma_ctlr *ctlr);
-int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr);
 
 #endif
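[Editor's note: in caller terms, the change collapses the old two-step
sequence into a single call that is safe whether or not any interface is up;
a sketch, with the error label purely illustrative.]

	/* before: store the count, then split only if a device was open */
	cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
	if (cpsw->usage_count)
		cpdma_chan_split_pool(cpsw->dma);

	/* after: one call; the split is done internally under ctlr->lock */
	ret = cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
	if (ret)
		goto err;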
From patchwork Wed Jul 3 10:19:02 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 11029307

From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v6 net-next 4/5] net: ethernet: ti: cpsw_ethtool: allow res split while down
Date: Wed, 3 Jul 2019 13:19:02 +0300
Message-Id: <20190703101903.8411-5-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>

It is possible to set the number of channels while the interfaces are
down. When an interface comes up it should resplit the budget. This
resplit can happen after the PHY is up, but only if the speed has
changed, so the budget should be set before that. To allow this, let
the resplit happen while changing the number of channels, when the
interfaces are down.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 drivers/net/ethernet/ti/cpsw_ethtool.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index 08d7aaee8299..fa4d75f5548e 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -620,8 +620,7 @@ int cpsw_set_channels_common(struct net_device *ndev,
 		}
 	}
 
-	if (cpsw->usage_count)
-		cpsw_split_res(cpsw);
+	cpsw_split_res(cpsw);
 
 	ret = cpsw_resume_data_pass(ndev);
 	if (!ret)
From patchwork Wed Jul 3 10:19:03 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 11029305

From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v6 net-next 5/5] net: ethernet: ti: cpsw: add XDP support
Date: Wed, 3 Jul 2019 13:19:03 +0300
Message-Id: <20190703101903.8411-6-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>

Add XDP support based on the RX page_pool allocator, one frame per
page. The page pool allocator is used under the assumption that only
one rx_handler runs at a time. DMA map/unmap is reused from the page
pool even though there is no need to map the whole page.

Due to the specifics of cpsw, the same TX/RX handler can be used by
two network devices, so special fields are added to the buffer to
identify the interface a frame is destined to. Thus XDP works for both
interfaces, which makes it easy to test xdp redirect between the two
interfaces. Also, each RX queue has its own page pool, common to both
netdevs.

The XDP prog is common to all channels until the appropriate changes
are added to the XDP infrastructure. Once page_pool recycling becomes
part of the skb netstack, some simplifications can be made, such as
removing page_pool_release_page() before handing the skb to the stack.

In order to keep rx_dev stable across a redirect (which may be of use
in the future), do the flush in the rx_handler; this keeps the RX
device the same during a redirect and conforms with the rx_dev tracing
pointed out by Jesper.
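[Editor's note: before the diff, it may help to sketch the per-page buffer
layout the patch relies on; offsets are from page_address(page) and all names
come from the diff below.]

	/*
	 * RX page layout, one frame per page:
	 *
	 *   page_address(page) + 0
	 *       area where an xdp_frame can be built in place for XDP_TX
	 *   page_address(page) + CPSW_XMETA_OFFSET
	 *       struct cpsw_meta_xdp { ndev, ch } -- identifies the netdev
	 *       and channel a buffer belongs to, since both slave netdevs
	 *       share the same TX/RX handlers
	 *   page_address(page) + CPSW_HEADROOM
	 *       packet data (headroom sized for both skb and xdp users)
	 */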
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 drivers/net/ethernet/ti/Kconfig        |   1 +
 drivers/net/ethernet/ti/cpsw.c         | 485 ++++++++++++++++++++++---
 drivers/net/ethernet/ti/cpsw_ethtool.c |  66 +++-
 drivers/net/ethernet/ti/cpsw_priv.h    |   7 +
 4 files changed, 502 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index a800d3417411..834afca3a019 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -50,6 +50,7 @@ config TI_CPSW
 	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
 	select TI_DAVINCI_MDIO
 	select MFD_SYSCON
+	select PAGE_POOL
 	select REGMAP
 	---help---
 	  This driver supports TI's CPSW Ethernet Switch.
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 32b7b3b74a6b..6e9be22035a9 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -31,6 +31,10 @@
 #include <linux/if_vlan.h>
 #include <linux/kmemleak.h>
 #include <linux/sys_soc.h>
+#include <net/page_pool.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
+#include <linux/filter.h>
 
 #include <linux/pinctrl/consumer.h>
 #include <net/pkt_cls.h>
@@ -60,6 +64,10 @@
 static int descs_pool_size = CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT;
 module_param(descs_pool_size, int, 0444);
 MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
 
+/* The buf includes headroom compatible with both skb and xdpf */
+#define CPSW_HEADROOM_NA (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN)
+#define CPSW_HEADROOM  ALIGN(CPSW_HEADROOM_NA, sizeof(long))
+
 #define for_each_slave(priv, func, arg...)				\
 	do {								\
 		struct cpsw_slave *slave;				\
@@ -74,6 +82,11 @@ MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
 		(func)(slave++, ##arg);					\
 	} while (0)
 
+#define CPSW_XMETA_OFFSET	ALIGN(sizeof(struct xdp_frame), sizeof(long))
+
+#define CPSW_XDP_CONSUMED		1
+#define CPSW_XDP_PASS			0
+
 static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev, __be16 proto,
 				    u16 vid);
 
@@ -337,24 +350,58 @@ void cpsw_intr_disable(struct cpsw_common *cpsw)
 	return;
 }
 
+static int cpsw_is_xdpf_handle(void *handle)
+{
+	return (unsigned long)handle & BIT(0);
+}
+
+static void *cpsw_xdpf_to_handle(struct xdp_frame *xdpf)
+{
+	return (void *)((unsigned long)xdpf | BIT(0));
+}
+
+static struct xdp_frame *cpsw_handle_to_xdpf(void *handle)
+{
+	return (struct xdp_frame *)((unsigned long)handle & ~BIT(0));
+}
+
+struct __aligned(sizeof(long)) cpsw_meta_xdp {
+	struct net_device *ndev;
+	int ch;
+};
+
 void cpsw_tx_handler(void *token, int len, int status)
 {
+	struct cpsw_meta_xdp *xmeta;
+	struct xdp_frame *xdpf;
+	struct net_device *ndev;
 	struct netdev_queue *txq;
-	struct sk_buff *skb = token;
-	struct net_device *ndev = skb->dev;
-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+	struct sk_buff *skb;
+	int ch;
+
+	if (cpsw_is_xdpf_handle(token)) {
+		xdpf = cpsw_handle_to_xdpf(token);
+		xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+		ndev = xmeta->ndev;
+		ch = xmeta->ch;
+		xdp_return_frame(xdpf);
+	} else {
+		skb = token;
+		ndev = skb->dev;
+		ch = skb_get_queue_mapping(skb);
+		cpts_tx_timestamp(ndev_to_cpsw(ndev)->cpts, skb);
+		dev_kfree_skb_any(skb);
+	}
 
 	/* Check whether the queue is stopped due to stalled tx dma, if the
 	 * queue is stopped then start the queue as we have free desc for tx
 	 */
-	txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));
+	txq = netdev_get_tx_queue(ndev, ch);
 	if (unlikely(netif_tx_queue_stopped(txq)))
 		netif_tx_wake_queue(txq);
 
-	cpts_tx_timestamp(cpsw->cpts, skb);
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
-	dev_kfree_skb_any(skb);
 }
 
 static void cpsw_rx_vlan_encap(struct sk_buff *skb)
@@ -400,24 +447,236 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb)
 	}
 }
 
+static int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf,
+			     struct page *page)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_meta_xdp *xmeta;
+	struct cpdma_chan *txch;
+	dma_addr_t dma;
+	int ret, port;
+
+	xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+	xmeta->ndev = priv->ndev;
+	xmeta->ch = 0;
+	txch = cpsw->txv[0].ch;
+
+	port = priv->emac_port + cpsw->data.dual_emac;
+	if (page) {
+		dma = page_pool_get_dma_addr(page);
+		dma += xdpf->data - (void *)xdpf;
+		ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf),
+					       dma, xdpf->len, port);
+	} else {
+		if (sizeof(*xmeta) > xdpf->headroom) {
+			xdp_return_frame_rx_napi(xdpf);
+			return -EINVAL;
+		}
+
+		ret = cpdma_chan_submit(txch, cpsw_xdpf_to_handle(xdpf),
+					xdpf->data, xdpf->len, port);
+	}
+
+	if (ret) {
+		priv->ndev->stats.tx_dropped++;
+		xdp_return_frame_rx_napi(xdpf);
+	}
+
+	return ret;
+}
+
+static int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
+			struct page *page)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct net_device *ndev = priv->ndev;
+	int ret = CPSW_XDP_CONSUMED;
+	struct xdp_frame *xdpf;
+	struct bpf_prog *prog;
+	u32 act;
+
+	rcu_read_lock();
+
+	prog = READ_ONCE(priv->xdp_prog);
+	if (!prog) {
+		ret = CPSW_XDP_PASS;
+		goto out;
+	}
+
+	act = bpf_prog_run_xdp(prog, xdp);
+	switch (act) {
+	case XDP_PASS:
+		ret = CPSW_XDP_PASS;
+		break;
+	case XDP_TX:
+		xdpf = convert_to_xdp_frame(xdp);
+		if (unlikely(!xdpf))
+			goto drop;
+
+		cpsw_xdp_tx_frame(priv, xdpf, page);
+		break;
+	case XDP_REDIRECT:
+		if (xdp_do_redirect(ndev, xdp, prog))
+			goto drop;
+
+		/* as flush requires rx_dev to be per NAPI handle and there
+		 * is can be two devices putting packets on bulk queue,
+		 * do flush here avoid this just for sure.
+		 */
+		xdp_do_flush_map();
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		/* fall through */
+	case XDP_ABORTED:
+		trace_xdp_exception(ndev, prog, act);
+		/* fall through -- handle aborts by dropping packet */
+	case XDP_DROP:
+		goto drop;
+	}
+out:
+	rcu_read_unlock();
+	return ret;
+drop:
+	rcu_read_unlock();
+	page_pool_recycle_direct(cpsw->page_pool[ch], page);
+	return ret;
+}
+
+static unsigned int cpsw_rxbuf_total_len(unsigned int len)
+{
+	len += CPSW_HEADROOM;
+	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	return SKB_DATA_ALIGN(len);
+}
+
+static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
+					       int size)
+{
+	struct page_pool_params pp_params;
+	struct page_pool *pool;
+
+	pp_params.order = 0;
+	pp_params.flags = PP_FLAG_DMA_MAP;
+	pp_params.pool_size = size;
+	pp_params.nid = NUMA_NO_NODE;
+	pp_params.dma_dir = DMA_BIDIRECTIONAL;
+	pp_params.dev = cpsw->dev;
+
+	pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool))
+		dev_err(cpsw->dev, "cannot create rx page pool\n");
+
+	return pool;
+}
+
+static int cpsw_create_rx_pool(struct cpsw_common *cpsw, int ch)
+{
+	struct page_pool *pool;
+	int ret = 0, pool_size;
+
+	pool_size = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
+	pool = cpsw_create_page_pool(cpsw, pool_size);
+	if (IS_ERR(pool))
+		ret = PTR_ERR(pool);
+	else
+		cpsw->page_pool[ch] = pool;
+
+	return ret;
+}
+
+static int cpsw_ndev_create_xdp_rxq(struct cpsw_priv *priv, int ch)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	int ret, new_pool = false;
+	struct xdp_rxq_info *rxq;
+
+	rxq = &priv->xdp_rxq[ch];
+
+	ret = xdp_rxq_info_reg(rxq, priv->ndev, ch);
+	if (ret)
+		return ret;
+
+	if (!cpsw->page_pool[ch]) {
+		ret = cpsw_create_rx_pool(cpsw, ch);
+		if (ret)
+			goto err_rxq;
+
+		new_pool = true;
+	}
+
+	ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL,
+					 cpsw->page_pool[ch]);
+	if (!ret)
+		return 0;
+
+	if (new_pool) {
+		page_pool_free(cpsw->page_pool[ch]);
+		cpsw->page_pool[ch] = NULL;
+	}
+
+err_rxq:
+	xdp_rxq_info_unreg(rxq);
+	return ret;
+}
+
+void cpsw_ndev_destroy_xdp_rxqs(struct cpsw_priv *priv)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct xdp_rxq_info *rxq;
+	int i;
+
+	for (i = 0; i < cpsw->rx_ch_num; i++) {
+		rxq = &priv->xdp_rxq[i];
+		if (xdp_rxq_info_is_reg(rxq))
+			xdp_rxq_info_unreg(rxq);
+	}
+}
+
+int cpsw_ndev_create_xdp_rxqs(struct cpsw_priv *priv)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	int i, ret;
+
+	for (i = 0; i < cpsw->rx_ch_num; i++) {
+		ret = cpsw_ndev_create_xdp_rxq(priv, i);
+		if (ret)
+			goto err_cleanup;
+	}
+
+	return 0;
+
+err_cleanup:
+	cpsw_ndev_destroy_xdp_rxqs(priv);
+
+	return ret;
+}
+
 static void cpsw_rx_handler(void *token, int len, int status)
 {
-	struct cpdma_chan *ch;
-	struct sk_buff *skb = token;
-	struct sk_buff *new_skb;
-	struct net_device *ndev = skb->dev;
-	int ret = 0, port;
-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+	struct page *new_page, *page = token;
+	void *pa = page_address(page);
+	struct cpsw_meta_xdp *xmeta = pa + CPSW_XMETA_OFFSET;
+	struct cpsw_common *cpsw = ndev_to_cpsw(xmeta->ndev);
+	int pkt_size = cpsw->rx_packet_max;
+	int ret = 0, port, ch = xmeta->ch;
+	int headroom = CPSW_HEADROOM;
+	struct net_device *ndev = xmeta->ndev;
 	struct cpsw_priv *priv;
+	struct page_pool *pool;
+	struct sk_buff *skb;
+	struct xdp_buff xdp;
+	dma_addr_t dma;
 
-	if (cpsw->data.dual_emac) {
+	if (cpsw->data.dual_emac && status >= 0) {
 		port = CPDMA_RX_SOURCE_PORT(status);
-		if (port) {
+		if (port)
 			ndev = cpsw->slaves[--port].ndev;
-			skb->dev = ndev;
-		}
 	}
 
+	priv = netdev_priv(ndev);
+	pool = cpsw->page_pool[ch];
 
 	if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
 		/* In dual emac mode check for all interfaces */
 		if (cpsw->data.dual_emac && cpsw->usage_count &&
@@ -426,43 +685,87 @@ static void cpsw_rx_handler(void *token, int len, int status)
 			 * is already down and the other interface is up
 			 * and running, instead of freeing which results
 			 * in reducing of the number of rx descriptor in
-			 * DMA engine, requeue skb back to cpdma.
+			 * DMA engine, requeue page back to cpdma.
 			 */
-			new_skb = skb;
+			new_page = page;
 			goto requeue;
 		}
 
-		/* the interface is going down, skbs are purged */
-		dev_kfree_skb_any(skb);
+		/* the interface is going down, pages are purged */
+		page_pool_recycle_direct(pool, page);
 		return;
 	}
 
-	new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max);
-	if (new_skb) {
-		skb_copy_queue_mapping(new_skb, skb);
-		skb_put(skb, len);
-		if (status & CPDMA_RX_VLAN_ENCAP)
-			cpsw_rx_vlan_encap(skb);
-		priv = netdev_priv(ndev);
-		if (priv->rx_ts_enabled)
-			cpts_rx_timestamp(cpsw->cpts, skb);
-		skb->protocol = eth_type_trans(skb, ndev);
-		netif_receive_skb(skb);
-		ndev->stats.rx_bytes += len;
-		ndev->stats.rx_packets++;
-		kmemleak_not_leak(new_skb);
-	} else {
+	new_page = page_pool_dev_alloc_pages(pool);
+	if (unlikely(!new_page)) {
+		new_page = page;
 		ndev->stats.rx_dropped++;
-		new_skb = skb;
+		goto requeue;
 	}
 
+	if (priv->xdp_prog) {
+		if (status & CPDMA_RX_VLAN_ENCAP) {
+			xdp.data = pa + CPSW_HEADROOM +
+				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+			xdp.data_end = xdp.data + len -
+				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+		} else {
+			xdp.data = pa + CPSW_HEADROOM;
+			xdp.data_end = xdp.data + len;
+		}
+
+		xdp_set_data_meta_invalid(&xdp);
+
+		xdp.data_hard_start = pa;
+		xdp.rxq = &priv->xdp_rxq[ch];
+
+		ret = cpsw_run_xdp(priv, ch, &xdp, page);
+		if (ret != CPSW_XDP_PASS)
+			goto requeue;
+
+		/* XDP prog might have changed packet data and boundaries */
+		len = xdp.data_end - xdp.data;
+		headroom = xdp.data - xdp.data_hard_start;
+
+		/* XDP prog can modify vlan tag, so can't use encap header */
+		status &= ~CPDMA_RX_VLAN_ENCAP;
+	}
+
+	/* pass skb to netstack if no XDP prog or returned XDP_PASS */
+	skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size));
+	if (!skb) {
+		ndev->stats.rx_dropped++;
+		page_pool_recycle_direct(pool, page);
+		goto requeue;
+	}
+
+	skb_reserve(skb, headroom);
+	skb_put(skb, len);
+	skb->dev = ndev;
+	if (status & CPDMA_RX_VLAN_ENCAP)
+		cpsw_rx_vlan_encap(skb);
+	if (priv->rx_ts_enabled)
+		cpts_rx_timestamp(cpsw->cpts, skb);
+	skb->protocol = eth_type_trans(skb, ndev);
+
+	/* unmap page as no netstack skb page recycling */
+	page_pool_release_page(pool, page);
+	netif_receive_skb(skb);
+
+	ndev->stats.rx_bytes += len;
+	ndev->stats.rx_packets++;
+
 requeue:
-	ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch;
-	ret = cpdma_chan_submit(ch, new_skb, new_skb->data,
-				skb_tailroom(new_skb), 0);
+	xmeta = page_address(new_page) + CPSW_XMETA_OFFSET;
+	xmeta->ndev = ndev;
+	xmeta->ch = ch;
+
+	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
+	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
+				       pkt_size, 0);
 	if (ret < 0) {
 		WARN_ON(ret == -ENOMEM);
-		dev_kfree_skb_any(new_skb);
+		page_pool_recycle_direct(pool, new_page);
 	}
 }
 
@@ -1032,33 +1335,39 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
 int cpsw_fill_rx_channels(struct cpsw_priv *priv)
 {
 	struct cpsw_common *cpsw = priv->cpsw;
-	struct sk_buff *skb;
+	struct cpsw_meta_xdp *xmeta;
+	struct page_pool *pool;
+	struct page *page;
 	int ch_buf_num;
 	int ch, i, ret;
+	dma_addr_t dma;
 
 	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+		pool = cpsw->page_pool[ch];
 		ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
 		for (i = 0; i < ch_buf_num; i++) {
-			skb = __netdev_alloc_skb_ip_align(priv->ndev,
-							  cpsw->rx_packet_max,
-							  GFP_KERNEL);
-			if (!skb) {
-				cpsw_err(priv, ifup, "cannot allocate skb\n");
+			page = page_pool_dev_alloc_pages(pool);
+			if (!page) {
+				cpsw_err(priv, ifup, "allocate rx page err\n");
 				return -ENOMEM;
 			}
 
-			skb_set_queue_mapping(skb, ch);
-			ret = cpdma_chan_idle_submit(cpsw->rxv[ch].ch, skb,
-						     skb->data,
-						     skb_tailroom(skb), 0);
+			xmeta = page_address(page) + CPSW_XMETA_OFFSET;
+			xmeta->ndev = priv->ndev;
+			xmeta->ch = ch;
+
+			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
+			ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
+							    page, dma,
+							    cpsw->rx_packet_max,
+							    0);
 			if (ret < 0) {
 				cpsw_err(priv, ifup,
-					 "cannot submit skb to channel %d rx, error %d\n",
+					 "cannot submit page to channel %d rx, error %d\n",
 					 ch, ret);
-				kfree_skb(skb);
+				page_pool_recycle_direct(pool, page);
 				return ret;
 			}
-			kmemleak_not_leak(skb);
 		}
 
 		cpsw_info(priv, ifup, "ch %d rx, submitted %d descriptors\n",
@@ -1370,6 +1679,10 @@ static int cpsw_ndo_open(struct net_device *ndev)
 	cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
 			  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
+	ret = cpsw_ndev_create_xdp_rxqs(priv);
+	if (ret)
+		goto err_cleanup;
+
 	/* initialize shared resources for every ndev */
 	if (!cpsw->usage_count) {
 		/* disable priority elevation */
@@ -1422,9 +1735,10 @@ static int cpsw_ndo_open(struct net_device *ndev)
 err_cleanup:
 	if (!cpsw->usage_count) {
 		cpdma_ctlr_stop(cpsw->dma);
-		for_each_slave(priv, cpsw_slave_stop, cpsw);
+		memset(cpsw->page_pool, 0, sizeof(cpsw->page_pool));
 	}
 
+	for_each_slave(priv, cpsw_slave_stop, cpsw);
 	pm_runtime_put_sync(cpsw->dev);
 	netif_carrier_off(priv->ndev);
 	return ret;
@@ -1447,9 +1761,12 @@ static int cpsw_ndo_stop(struct net_device *ndev)
 		cpsw_intr_disable(cpsw);
 		cpdma_ctlr_stop(cpsw->dma);
 		cpsw_ale_stop(cpsw->ale);
+		memset(cpsw->page_pool, 0, sizeof(cpsw->page_pool));
 	}
 	for_each_slave(priv, cpsw_slave_stop, cpsw);
 
+	cpsw_ndev_destroy_xdp_rxqs(priv);
+
 	if (cpsw_need_resplit(cpsw))
 		cpsw_split_res(cpsw);
 
@@ -2004,6 +2321,64 @@ static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 	}
 }
 
+static int cpsw_xdp_prog_setup(struct cpsw_priv *priv, struct netdev_bpf *bpf)
+{
+	struct bpf_prog *prog = bpf->prog;
+
+	if (!priv->xdpi.prog && !prog)
+		return 0;
+
+	if (!xdp_attachment_flags_ok(&priv->xdpi, bpf))
+		return -EBUSY;
+
+	WRITE_ONCE(priv->xdp_prog, prog);
+
+	xdp_attachment_setup(&priv->xdpi, bpf);
+
+	return 0;
+}
+
+static int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+
+	switch (bpf->command) {
+	case XDP_SETUP_PROG:
+		return cpsw_xdp_prog_setup(priv, bpf);
+
+	case XDP_QUERY_PROG:
+		return xdp_attachment_query(&priv->xdpi, bpf);
+
+	default:
+		return -EINVAL;
+	}
+}
+
+static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
+			     struct xdp_frame **frames, u32 flags)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	struct xdp_frame *xdpf;
+	int i, drops = 0;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	for (i = 0; i < n; i++) {
+		xdpf = frames[i];
+		if (xdpf->len < CPSW_MIN_PACKET_SIZE) {
+			xdp_return_frame_rx_napi(xdpf);
+			drops++;
+			continue;
+		}
+
+		if (cpsw_xdp_tx_frame(priv, xdpf, NULL))
+			drops++;
+	}
+
+	return n - drops;
+}
+
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void cpsw_ndo_poll_controller(struct net_device *ndev)
 {
@@ -2032,6 +2407,8 @@ static const struct net_device_ops cpsw_netdev_ops = {
 	.ndo_vlan_rx_add_vid	= cpsw_ndo_vlan_rx_add_vid,
 	.ndo_vlan_rx_kill_vid	= cpsw_ndo_vlan_rx_kill_vid,
 	.ndo_setup_tc		= cpsw_ndo_setup_tc,
+	.ndo_bpf		= cpsw_ndo_bpf,
+	.ndo_xdp_xmit		= cpsw_ndo_xdp_xmit,
 };
 
 static void cpsw_get_drvinfo(struct net_device *ndev,
diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index fa4d75f5548e..b39a598cb094 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -578,6 +578,48 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx,
 	return 0;
 }
 
+static void cpsw_destroy_xdp_rxqs(struct cpsw_common *cpsw)
+{
+	struct net_device *ndev;
+	struct cpsw_priv *priv;
+	int i;
+
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (!ndev || !netif_running(ndev))
+			continue;
+
+		priv = netdev_priv(ndev);
+		cpsw_ndev_destroy_xdp_rxqs(priv);
+	}
+
+	memset(cpsw->page_pool, 0, sizeof(cpsw->page_pool));
+}
+
+static int cpsw_create_xdp_rxqs(struct cpsw_common *cpsw)
+{
+	struct net_device *ndev;
+	struct cpsw_priv *priv;
+	int i, ret;
+
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (!ndev || !netif_running(ndev))
+			continue;
+
+		priv = netdev_priv(ndev);
+		ret = cpsw_ndev_create_xdp_rxqs(priv);
+		if (ret)
+			goto err_cleanup;
+	}
+
+	return 0;
+
+err_cleanup:
+	cpsw_destroy_xdp_rxqs(cpsw);
+	return ret;
+}
+
 int cpsw_set_channels_common(struct net_device *ndev,
 			     struct ethtool_channels *chs,
 			     cpdma_handler_fn rx_handler)
@@ -585,7 +627,7 @@ int cpsw_set_channels_common(struct net_device *ndev,
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
 	struct net_device *sl_ndev;
-	int i, ret;
+	int i, new_pools, ret;
 
 	ret = cpsw_check_ch_settings(cpsw, chs);
 	if (ret < 0)
@@ -593,6 +635,10 @@ int cpsw_set_channels_common(struct net_device *ndev,
 
 	cpsw_suspend_data_pass(ndev);
 
+	new_pools = (chs->rx_count != cpsw->rx_ch_num) && cpsw->usage_count;
+	if (new_pools)
+		cpsw_destroy_xdp_rxqs(cpsw);
+
 	ret = cpsw_update_channels_res(priv, chs->rx_count, 1, rx_handler);
 	if (ret)
 		goto err;
@@ -622,6 +668,12 @@ int cpsw_set_channels_common(struct net_device *ndev,
 
 	cpsw_split_res(cpsw);
 
+	if (new_pools) {
+		ret = cpsw_create_xdp_rxqs(cpsw);
+		if (ret)
+			goto err;
+	}
+
 	ret = cpsw_resume_data_pass(ndev);
 	if (!ret)
 		return 0;
@@ -647,8 +699,7 @@ void cpsw_get_ringparam(struct net_device *ndev,
 int cpsw_set_ringparam(struct net_device *ndev,
 		       struct ethtool_ringparam *ering)
 {
-	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
 	int ret;
 
 	/* ignore ering->tx_pending - only rx_pending adjustment is supported */
@@ -663,10 +714,19 @@ int cpsw_set_ringparam(struct net_device *ndev,
 
 	cpsw_suspend_data_pass(ndev);
 
+	if (cpsw->usage_count)
+		cpsw_destroy_xdp_rxqs(cpsw);
+
 	ret = cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
 	if (ret)
 		goto err;
 
+	if (cpsw->usage_count) {
+		ret = cpsw_create_xdp_rxqs(cpsw);
+		if (ret)
+			goto err;
+	}
+
 	ret = cpsw_resume_data_pass(ndev);
 	if (!ret)
 		return 0;
diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
index 04795b97ee71..da68764e7f87 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.h
+++ b/drivers/net/ethernet/ti/cpsw_priv.h
@@ -346,6 +346,7 @@ struct cpsw_common {
 	int rx_ch_num, tx_ch_num;
 	int speed;
 	int usage_count;
+	struct page_pool *page_pool[CPSW_MAX_QUEUES];
 };
 
 struct cpsw_priv {
@@ -360,6 +361,10 @@ struct cpsw_priv {
 	int shp_cfg_speed;
 	int tx_ts_enabled;
 	int rx_ts_enabled;
+	struct bpf_prog *xdp_prog;
+	struct xdp_rxq_info xdp_rxq[CPSW_MAX_QUEUES];
+	struct xdp_attachment_info xdpi;
+
 	u32 emac_port;
 	struct cpsw_common *cpsw;
 };
@@ -391,6 +396,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv);
 void cpsw_intr_enable(struct cpsw_common *cpsw);
 void cpsw_intr_disable(struct cpsw_common *cpsw);
 void cpsw_tx_handler(void *token, int len, int status);
+int cpsw_ndev_create_xdp_rxqs(struct cpsw_priv *priv);
+void cpsw_ndev_destroy_xdp_rxqs(struct cpsw_priv *priv);
 
 /* ethtool */
 u32 cpsw_get_msglevel(struct net_device *ndev);
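[Editor's note: to close, a user-space sketch of exercising the new .ndo_bpf
path with the libbpf API of this era; "xdp_prog.o" and "eth0" are
placeholders, not files or devices named by the series.]

	#include <bpf/bpf.h>
	#include <bpf/libbpf.h>
	#include <net/if.h>
	#include <stdio.h>

	int main(void)
	{
		struct bpf_object *obj;
		int prog_fd, ifindex;

		if (bpf_prog_load("xdp_prog.o", BPF_PROG_TYPE_XDP, &obj,
				  &prog_fd)) {
			fprintf(stderr, "failed to load xdp_prog.o\n");
			return 1;
		}

		ifindex = if_nametoindex("eth0");
		if (!ifindex) {
			perror("if_nametoindex");
			return 1;
		}

		/* Lands in cpsw_ndo_bpf() -> cpsw_xdp_prog_setup(); the
		 * program then runs in cpsw_rx_handler() for every frame
		 * received on this netdev.
		 */
		if (bpf_set_link_xdp_fd(ifindex, prog_fd, 0) < 0) {
			perror("bpf_set_link_xdp_fd");
			return 1;
		}

		return 0;
	}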