From patchwork Wed Jun 5 13:20:03 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976975
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Jesper Dangaard Brouer, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 1/7] net: page_pool: add helper function to
 retrieve dma addresses
Date: Wed, 5 Jun 2019 16:20:03 +0300
Message-Id: <20190605132009.10734-2-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

From: Ilias Apalodimas

In a previous patch the DMA address was stored in 'struct page'. Use it
to retrieve the DMA addresses used by network drivers.

Signed-off-by: Ilias Apalodimas
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Ivan Khoronzhuk
---
 include/net/page_pool.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 694d055e01ef..b885d86cb7a1 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -132,6 +132,11 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	__page_pool_put_page(pool, page, true);
 }
 
+static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+{
+	return page->dma_addr;
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
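For illustration, a minimal sketch of how a page_pool-backed driver could
use the new helper on its RX refill path. This is not part of the patch;
refill_rx_slot and struct my_rx_desc are hypothetical names, and the
sketch assumes the pool was created with PP_FLAG_DMA_MAP so pages arrive
already mapped:

/* Hypothetical refill path: the pool maps pages at allocation time, and
 * the new helper hands back that mapping for the hardware descriptor.
 */
static int refill_rx_slot(struct page_pool *pool, struct my_rx_desc *desc)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return -ENOMEM;

	desc->buf_addr = page_pool_get_dma_addr(page);
	return 0;
}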
From patchwork Wed Jun 5 13:20:04 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976977
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Jesper Dangaard Brouer, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 2/7] net: page_pool: add helper function to
 unmap dma addresses
Date: Wed, 5 Jun 2019 16:20:04 +0300
Message-Id: <20190605132009.10734-3-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

From: Ilias Apalodimas

In a previous patch the DMA address was stored in 'struct page'. Use it
to unmap the DMA addresses used by network drivers.

Signed-off-by: Ilias Apalodimas
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Ivan Khoronzhuk
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b885d86cb7a1..ad218cef88c5 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -110,6 +110,7 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 void page_pool_destroy(struct page_pool *pool);
+void page_pool_unmap_page(struct page_pool *pool, struct page *page);
 
 /* Never call this directly, use helpers below */
 void __page_pool_put_page(struct page_pool *pool,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 5b2252c6d49b..205af7bd6d09 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -190,6 +190,13 @@ static void __page_pool_clean_page(struct page_pool *pool,
 	page->dma_addr = 0;
 }
 
+/* unmap the page and clean our state */
+void page_pool_unmap_page(struct page_pool *pool, struct page *page)
+{
+	__page_pool_clean_page(pool, page);
+}
+EXPORT_SYMBOL(page_pool_unmap_page);
+
 /* Return a page to the page allocator, cleaning up our state */
 static void __page_pool_return_page(struct page_pool *pool, struct page *page)
 {
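As a usage sketch (not part of the patch; hand_page_to_stack is a
hypothetical name): a driver that wraps a pool page into an skb and hands
it to the netstack must drop the pool's DMA mapping first, because the
stack will free the page outside the pool. Patch 7 of this series does
the equivalent inside cpsw_rx_handler():

/* Hypothetical handoff: the netstack, not the pool, will free this page,
 * so release the pool's DMA mapping before netif_receive_skb().
 */
static void hand_page_to_stack(struct page_pool *pool, struct page *page,
			       struct net_device *ndev, int len)
{
	struct sk_buff *skb = build_skb(page_address(page), PAGE_SIZE);

	if (!skb) {
		page_pool_recycle_direct(pool, page);
		return;
	}

	skb_put(skb, len);
	skb->protocol = eth_type_trans(skb, ndev);

	page_pool_unmap_page(pool, page); /* added by this patch */
	netif_receive_skb(skb);
}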
From patchwork Wed Jun 5 13:20:05 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976973
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 3/7] net: ethernet: ti: cpsw: use cpsw as drv data
Date: Wed, 5 Jun 2019 16:20:05 +0300
Message-Id: <20190605132009.10734-4-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

There is no need to store ndev as drvdata when it is mostly the cpsw
reference that is needed, so correct this legacy decision.

Reviewed-by: Grygorii Strashko
Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 6d3f1f3f90cb..3430503e1053 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -2265,8 +2265,7 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
 
 static void cpsw_remove_dt(struct platform_device *pdev)
 {
-	struct net_device *ndev = platform_get_drvdata(pdev);
-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+	struct cpsw_common *cpsw = platform_get_drvdata(pdev);
 	struct cpsw_platform_data *data = &cpsw->data;
 	struct device_node *node = pdev->dev.of_node;
 	struct device_node *slave_node;
@@ -2477,7 +2476,7 @@ static int cpsw_probe(struct platform_device *pdev)
 		goto clean_cpts;
 	}
 
-	platform_set_drvdata(pdev, ndev);
+	platform_set_drvdata(pdev, cpsw);
 	priv = netdev_priv(ndev);
 	priv->cpsw = cpsw;
 	priv->ndev = ndev;
@@ -2570,9 +2569,8 @@ static int cpsw_probe(struct platform_device *pdev)
 
 static int cpsw_remove(struct platform_device *pdev)
 {
-	struct net_device *ndev = platform_get_drvdata(pdev);
-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
-	int ret;
+	struct cpsw_common *cpsw = platform_get_drvdata(pdev);
+	int i, ret;
 
 	ret = pm_runtime_get_sync(&pdev->dev);
 	if (ret < 0) {
@@ -2580,9 +2578,9 @@ static int cpsw_remove(struct platform_device *pdev)
 		return ret;
 	}
 
-	if (cpsw->data.dual_emac)
-		unregister_netdev(cpsw->slaves[1].ndev);
-	unregister_netdev(ndev);
+	for (i = 0; i < cpsw->data.slaves; i++)
+		if (cpsw->slaves[i].ndev)
+			unregister_netdev(cpsw->slaves[i].ndev);
 
 	cpts_release(cpsw->cpts);
 	cpdma_ctlr_destroy(cpsw->dma);
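A small sketch of the two lookup directions after this change (not from
the diff; lookup_sketch is a hypothetical name, the cpsw symbols are the
ones used above): drvdata now yields the common struct directly, while
any slave ndev still reaches it through its private data, so nothing
depends on ndev being the drvdata anymore:

static struct cpsw_common *lookup_sketch(struct platform_device *pdev,
					 struct net_device *ndev)
{
	struct cpsw_common *cpsw = platform_get_drvdata(pdev);

	/* both paths land on the same object */
	WARN_ON(ndev_to_cpsw(ndev) != cpsw);
	return cpsw;
}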
From patchwork Wed Jun 5 13:20:06 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976971
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 4/7] net: ethernet: ti: cpsw_ethtool: simplify
 slave loops
Date: Wed, 5 Jun 2019 16:20:06 +0300
Message-Id: <20190605132009.10734-5-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

Purely for consistency, iterate over slaves the way the main cpsw.c
module does: use the ndev reference directly rather than reaching it
through the slave pointer.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw_ethtool.c | 40 ++++++++++++++------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index a4a7ec0d2531..3d5ae3fa5a8f 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -458,7 +458,6 @@ int cpsw_nway_reset(struct net_device *ndev)
 static void cpsw_suspend_data_pass(struct net_device *ndev)
 {
 	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
-	struct cpsw_slave *slave;
 	int i;
 
 	/* Disable NAPI scheduling */
@@ -467,12 +466,13 @@ static void cpsw_suspend_data_pass(struct net_device *ndev)
 	/* Stop all transmit queues for every network device.
 	 * Disable re-using rx descriptors with dormant_on.
 	 */
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
-		if (!(slave->ndev && netif_running(slave->ndev)))
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (!(ndev && netif_running(ndev)))
 			continue;
 
-		netif_tx_stop_all_queues(slave->ndev);
-		netif_dormant_on(slave->ndev);
+		netif_tx_stop_all_queues(ndev);
+		netif_dormant_on(ndev);
 	}
 
 	/* Handle rest of tx packets and stop cpdma channels */
@@ -483,13 +483,14 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
 	int i, ret;
 
 	/* Allow rx packets handling */
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
-		if (slave->ndev && netif_running(slave->ndev))
-			netif_dormant_off(slave->ndev);
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (ndev && netif_running(ndev))
+			netif_dormant_off(ndev);
+	}
 
 	/* After this receive is started */
 	if (cpsw->usage_count) {
@@ -502,9 +503,11 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
 	}
 
 	/* Resume transmit for every affected interface */
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
-		if (slave->ndev && netif_running(slave->ndev))
-			netif_tx_start_all_queues(slave->ndev);
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (ndev && netif_running(ndev))
+			netif_tx_start_all_queues(ndev);
+	}
 
 	return 0;
 }
@@ -587,7 +590,7 @@ int cpsw_set_channels_common(struct net_device *ndev,
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
+	struct net_device *sl_ndev;
 	int i, ret;
 
 	ret = cpsw_check_ch_settings(cpsw, chs);
@@ -604,20 +607,19 @@ int cpsw_set_channels_common(struct net_device *ndev,
 	if (ret)
 		goto err;
 
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
-		if (!(slave->ndev && netif_running(slave->ndev)))
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		sl_ndev = cpsw->slaves[i].ndev;
+		if (!(sl_ndev && netif_running(sl_ndev)))
 			continue;
 
 		/* Inform stack about new count of queues */
-		ret = netif_set_real_num_tx_queues(slave->ndev,
-						   cpsw->tx_ch_num);
+		ret = netif_set_real_num_tx_queues(sl_ndev, cpsw->tx_ch_num);
 		if (ret) {
 			dev_err(priv->dev, "cannot set real number of tx queues\n");
 			goto err;
 		}
 
-		ret = netif_set_real_num_rx_queues(slave->ndev,
-						   cpsw->rx_ch_num);
+		ret = netif_set_real_num_rx_queues(sl_ndev, cpsw->rx_ch_num);
 		if (ret) {
 			dev_err(priv->dev, "cannot set real number of rx queues\n");
 			goto err;
From patchwork Wed Jun 5 13:20:07 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976969
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 5/7] net: ethernet: ti: davinci_cpdma: add dma
 mapped submit
Date: Wed, 5 Jun 2019 16:20:07 +0300
Message-Id: <20190605132009.10734-6-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

When an already DMA-mapped packet needs to be sent, as with the XDP page
pool, a "mapped" submit can be used. This patch adds a DMA-mapped submit
based on the regular one.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 88 ++++++++++++++++++++-----
 drivers/net/ethernet/ti/davinci_cpdma.h |  2 +
 2 files changed, 75 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 35bf14d8e7af..7f89b2299f05 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -125,6 +125,15 @@ struct cpdma_chan {
 	u32 rate;
 };
 
+struct submit_info {
+	struct cpdma_chan *chan;
+	int directed;
+	void *token;
+	void *data;
+	int flags;
+	int len;
+};
+
 struct cpdma_control_info {
 	u32 reg;
 	u32 shift, mask;
@@ -176,6 +185,8 @@ static struct cpdma_control_info controls[] = {
 				 (directed << CPDMA_TO_PORT_SHIFT));	\
 	} while (0)
 
+#define CPDMA_DMA_EXT_MAP		BIT(16)
+
 static void cpdma_desc_pool_destroy(struct cpdma_ctlr *ctlr)
 {
 	struct cpdma_desc_pool *pool = ctlr->pool;
@@ -1002,10 +1013,12 @@ static void __cpdma_chan_submit(struct cpdma_chan *chan,
 	}
 }
 
-int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
-		      int len, int directed)
+static int cpdma_chan_submit_si(struct submit_info *si)
 {
+	struct cpdma_chan *chan = si->chan;
 	struct cpdma_ctlr *ctlr = chan->ctlr;
+	int len = si->len;
+	int swlen = len;
 	struct cpdma_desc __iomem *desc;
 	dma_addr_t buffer;
 	unsigned long flags;
@@ -1037,16 +1050,22 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		chan->stats.runt_transmit_buff++;
 	}
 
-	buffer = dma_map_single(ctlr->dev, data, len, chan->dir);
-	ret = dma_mapping_error(ctlr->dev, buffer);
-	if (ret) {
-		cpdma_desc_free(ctlr->pool, desc, 1);
-		ret = -EINVAL;
-		goto unlock_ret;
-	}
-
 	mode = CPDMA_DESC_OWNER | CPDMA_DESC_SOP | CPDMA_DESC_EOP;
-	cpdma_desc_to_port(chan, mode, directed);
+	cpdma_desc_to_port(chan, mode, si->directed);
+
+	if (si->flags & CPDMA_DMA_EXT_MAP) {
+		buffer = (dma_addr_t)si->data;
+		dma_sync_single_for_device(ctlr->dev, buffer, len, chan->dir);
+		swlen |= CPDMA_DMA_EXT_MAP;
+	} else {
+		buffer = dma_map_single(ctlr->dev, si->data, len, chan->dir);
+		ret = dma_mapping_error(ctlr->dev, buffer);
+		if (ret) {
+			cpdma_desc_free(ctlr->pool, desc, 1);
+			ret = -EINVAL;
+			goto unlock_ret;
+		}
+	}
 
 	/* Relaxed IO accessors can be used here as there is read barrier
 	 * at the end of write sequence.
@@ -1055,9 +1074,9 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 	writel_relaxed(buffer, &desc->hw_buffer);
 	writel_relaxed(len, &desc->hw_len);
 	writel_relaxed(mode | len, &desc->hw_mode);
-	writel_relaxed((uintptr_t)token, &desc->sw_token);
+	writel_relaxed((uintptr_t)si->token, &desc->sw_token);
 	writel_relaxed(buffer, &desc->sw_buffer);
-	writel_relaxed(len, &desc->sw_len);
+	writel_relaxed(swlen, &desc->sw_len);
 	desc_read(desc, sw_len);
 
 	__cpdma_chan_submit(chan, desc);
@@ -1072,6 +1091,38 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 	return ret;
 }
 
+int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data, int len,
+		      int directed)
+{
+	struct submit_info si;
+
+	si.chan = chan;
+	si.token = token;
+	si.data = data;
+	si.len = len;
+	si.directed = directed;
+	si.flags = 0;
+
+	return cpdma_chan_submit_si(&si);
+}
+EXPORT_SYMBOL_GPL(cpdma_chan_submit);
+
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+			     dma_addr_t data, int len, int directed)
+{
+	struct submit_info si;
+
+	si.chan = chan;
+	si.token = token;
+	si.data = (void *)data;
+	si.len = len;
+	si.directed = directed;
+	si.flags = CPDMA_DMA_EXT_MAP;
+
+	return cpdma_chan_submit_si(&si);
+}
+EXPORT_SYMBOL_GPL(cpdma_chan_submit_mapped);
+
 bool cpdma_check_free_tx_desc(struct cpdma_chan *chan)
 {
 	struct cpdma_ctlr *ctlr = chan->ctlr;
@@ -1097,10 +1148,17 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
 	uintptr_t token;
 
 	token = desc_read(desc, sw_token);
-	buff_dma = desc_read(desc, sw_buffer);
 	origlen = desc_read(desc, sw_len);
-	dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
 
+	buff_dma = desc_read(desc, sw_buffer);
+	if (origlen & CPDMA_DMA_EXT_MAP) {
+		origlen &= ~CPDMA_DMA_EXT_MAP;
+		dma_sync_single_for_cpu(ctlr->dev, buff_dma, origlen,
+					chan->dir);
+	} else {
+		dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
+	}
+
 	cpdma_desc_free(pool, desc, 1);
 	(*chan->handler)((void *)token, outlen, status);
 }
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 10376062dafa..8f6f27185c63 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -77,6 +77,8 @@ int cpdma_chan_stop(struct cpdma_chan *chan);
 int cpdma_chan_get_stats(struct cpdma_chan *chan,
 			 struct cpdma_chan_stats *stats);
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+			     dma_addr_t data, int len, int directed);
 int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		      int len, int directed);
 int cpdma_chan_process(struct cpdma_chan *chan, int quota);
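As a usage sketch (assumptions: a caller that owns a page_pool-mapped RX
page, as patch 7's XDP_TX path does; xmit_pool_page is a hypothetical
name): the caller passes the bus address it already holds, and cpdma only
syncs it for the device instead of mapping and later unmapping it:

/* The page is already DMA-mapped by the page pool, so submit the
 * existing mapping; CPDMA_DMA_EXT_MAP is flagged in sw_len so the
 * completion path does dma_sync_single_for_cpu() instead of unmap.
 */
static int xmit_pool_page(struct cpdma_chan *txch, void *token,
			  struct page *page, int offset, int len, int port)
{
	dma_addr_t dma = page_pool_get_dma_addr(page) + offset;

	return cpdma_chan_submit_mapped(txch, token, dma, len, port);
}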
From patchwork Wed Jun 5 13:20:08 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976967
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 6/7] net: ethernet: ti: davinci_cpdma: return
 handler status
Date: Wed, 5 Jun 2019 16:20:08 +0300
Message-Id: <20190605132009.10734-7-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

This change is needed to return the flush status of the rx handler, so
that redirected XDP frames can be flushed after a channel's packets have
been processed. It is split out as a separate patch for simplicity.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw.c          | 23 +++++++++++------
 drivers/net/ethernet/ti/cpsw_ethtool.c  |  2 +-
 drivers/net/ethernet/ti/cpsw_priv.h     |  2 +-
 drivers/net/ethernet/ti/davinci_cpdma.c | 34 +++++++++++++++----------
 drivers/net/ethernet/ti/davinci_cpdma.h |  4 +--
 drivers/net/ethernet/ti/davinci_emac.c  | 18 ++++++++-----
 6 files changed, 50 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 3430503e1053..d89ad428315c 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -337,7 +337,7 @@ void cpsw_intr_disable(struct cpsw_common *cpsw)
 	return;
 }
 
-void cpsw_tx_handler(void *token, int len, int status)
+int cpsw_tx_handler(void *token, int len, int status)
 {
 	struct netdev_queue *txq;
 	struct sk_buff *skb = token;
@@ -355,6 +355,7 @@ void cpsw_tx_handler(void *token, int len, int status)
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
 	dev_kfree_skb_any(skb);
+	return 0;
 }
 
 static void cpsw_rx_vlan_encap(struct sk_buff *skb)
@@ -400,7 +401,7 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb)
 	}
 }
 
-static void cpsw_rx_handler(void *token, int len, int status)
+static int cpsw_rx_handler(void *token, int len, int status)
 {
 	struct cpdma_chan *ch;
 	struct sk_buff *skb = token;
@@ -434,7 +435,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 
 		/* the interface is going down, skbs are purged */
 		dev_kfree_skb_any(skb);
-		return;
+		return 0;
 	}
 
 	new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max);
@@ -459,7 +460,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 requeue:
 	if (netif_dormant(ndev)) {
 		dev_kfree_skb_any(new_skb);
-		return;
+		return 0;
 	}
 
 	ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch;
@@ -467,6 +468,8 @@ static void cpsw_rx_handler(void *token, int len, int status)
 				skb_tailroom(new_skb), 0);
 	if (WARN_ON(ret < 0))
 		dev_kfree_skb_any(new_skb);
+
+	return 0;
 }
 
 void cpsw_split_res(struct cpsw_common *cpsw)
@@ -605,7 +608,8 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
 		else
 			cur_budget = txv->budget;
 
-		num_tx += cpdma_chan_process(txv->ch, cur_budget);
+		cpdma_chan_process(txv->ch, &cur_budget);
+		num_tx += cur_budget;
 		if (num_tx >= budget)
 			break;
 	}
@@ -623,7 +627,8 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_tx);
 	int num_tx;
 
-	num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget);
+	num_tx = budget;
+	cpdma_chan_process(cpsw->txv[0].ch, &num_tx);
 	if (num_tx < budget) {
 		napi_complete(napi_tx);
 		writel(0xff, &cpsw->wr_regs->tx_en);
@@ -655,7 +660,8 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 		else
 			cur_budget = rxv->budget;
 
-		num_rx += cpdma_chan_process(rxv->ch, cur_budget);
+		cpdma_chan_process(rxv->ch, &cur_budget);
+		num_rx += cur_budget;
 		if (num_rx >= budget)
 			break;
 	}
@@ -673,7 +679,8 @@ static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
 	int num_rx;
 
-	num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget);
+	num_rx = budget;
+	cpdma_chan_process(cpsw->rxv[0].ch, &num_rx);
 	if (num_rx < budget) {
 		napi_complete_done(napi_rx, num_rx);
 		writel(0xff, &cpsw->wr_regs->rx_en);
diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index 3d5ae3fa5a8f..94f8f5ab46a5 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -538,8 +538,8 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx,
 				    cpdma_handler_fn rx_handler)
 {
 	struct cpsw_common *cpsw = priv->cpsw;
-	void (*handler)(void *, int, int);
 	struct netdev_queue *queue;
+	cpdma_handler_fn handler;
 	struct cpsw_vector *vec;
 	int ret, *ch, vch;
diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
index 04795b97ee71..2ecb3af59fe9 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.h
+++ b/drivers/net/ethernet/ti/cpsw_priv.h
@@ -390,7 +390,7 @@ void cpsw_split_res(struct cpsw_common *cpsw);
 int cpsw_fill_rx_channels(struct cpsw_priv *priv);
 void cpsw_intr_enable(struct cpsw_common *cpsw);
 void cpsw_intr_disable(struct cpsw_common *cpsw);
-void cpsw_tx_handler(void *token, int len, int status);
+int cpsw_tx_handler(void *token, int len, int status);
 
 /* ethtool */
 u32 cpsw_get_msglevel(struct net_device *ndev);
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 7f89b2299f05..a59011d315d5 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -1137,15 +1137,16 @@ bool cpdma_check_free_tx_desc(struct cpdma_chan *chan)
 	return free_tx_desc;
 }
 
-static void __cpdma_chan_free(struct cpdma_chan *chan,
-			      struct cpdma_desc __iomem *desc,
-			      int outlen, int status)
+static int __cpdma_chan_free(struct cpdma_chan *chan,
+			     struct cpdma_desc __iomem *desc, int outlen,
+			     int status)
 {
 	struct cpdma_ctlr *ctlr = chan->ctlr;
 	struct cpdma_desc_pool *pool = ctlr->pool;
 	dma_addr_t buff_dma;
 	int origlen;
 	uintptr_t token;
+	int ret;
 
 	token = desc_read(desc, sw_token);
 	origlen = desc_read(desc, sw_len);
@@ -1160,14 +1161,16 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
 	}
 
 	cpdma_desc_free(pool, desc, 1);
-	(*chan->handler)((void *)token, outlen, status);
+	ret = (*chan->handler)((void *)token, outlen, status);
+
+	return ret;
 }
 
 static int __cpdma_chan_process(struct cpdma_chan *chan)
 {
+	int status, outlen, ret;
 	struct cpdma_ctlr *ctlr = chan->ctlr;
 	struct cpdma_desc __iomem *desc;
-	int status, outlen;
 	int cb_status = 0;
 	struct cpdma_desc_pool *pool = ctlr->pool;
 	dma_addr_t desc_dma;
@@ -1178,7 +1181,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	desc = chan->head;
 	if (!desc) {
 		chan->stats.empty_dequeue++;
-		status = -ENOENT;
+		ret = -ENOENT;
 		goto unlock_ret;
 	}
 	desc_dma = desc_phys(pool, desc);
@@ -1187,7 +1190,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	outlen	= status & 0x7ff;
 	if (status & CPDMA_DESC_OWNER) {
 		chan->stats.busy_dequeue++;
-		status = -EBUSY;
+		ret = -EBUSY;
 		goto unlock_ret;
 	}
 
@@ -1213,28 +1216,31 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	else
 		cb_status = status;
 
-	__cpdma_chan_free(chan, desc, outlen, cb_status);
-	return status;
+	ret = __cpdma_chan_free(chan, desc, outlen, cb_status);
+	return ret;
 
 unlock_ret:
 	spin_unlock_irqrestore(&chan->lock, flags);
-	return status;
+	return ret;
 }
 
-int cpdma_chan_process(struct cpdma_chan *chan, int quota)
+int cpdma_chan_process(struct cpdma_chan *chan, int *quota)
 {
-	int used = 0, ret = 0;
+	int used = 0, ret = 0, res = 0;
 
 	if (chan->state != CPDMA_STATE_ACTIVE)
 		return -EINVAL;
 
-	while (used < quota) {
+	while (used < *quota) {
 		ret = __cpdma_chan_process(chan);
 		if (ret < 0)
 			break;
 
+		res |= ret;
 		used++;
 	}
-	return used;
+
+	*quota = used;
+	return res;
 }
 
 int cpdma_chan_start(struct cpdma_chan *chan)
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 8f6f27185c63..56543d375923 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -61,7 +61,7 @@ struct cpdma_chan_stats {
 struct cpdma_ctlr;
 struct cpdma_chan;
 
-typedef void (*cpdma_handler_fn)(void *token, int len, int status);
+typedef int (*cpdma_handler_fn)(void *token, int len, int status);
 
 struct cpdma_ctlr *cpdma_ctlr_create(struct cpdma_params *params);
 int cpdma_ctlr_destroy(struct cpdma_ctlr *ctlr);
@@ -81,7 +81,7 @@ int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
 			     dma_addr_t data, int len, int directed);
 int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		      int len, int directed);
-int cpdma_chan_process(struct cpdma_chan *chan, int quota);
+int cpdma_chan_process(struct cpdma_chan *chan, int *quota);
 
 int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable);
 void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr, u32 value);
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 4bf65cab79e6..3592690b8dd8 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -860,7 +860,7 @@ static struct sk_buff *emac_rx_alloc(struct emac_priv *priv)
 	return skb;
 }
 
-static void emac_rx_handler(void *token, int len, int status)
+static int emac_rx_handler(void *token, int len, int status)
 {
 	struct sk_buff *skb = token;
 	struct net_device *ndev = skb->dev;
@@ -871,7 +871,7 @@ static void emac_rx_handler(void *token, int len, int status)
 	/* free and bail if we are shutting down */
 	if (unlikely(!netif_running(ndev))) {
 		dev_kfree_skb_any(skb);
-		return;
+		return 0;
 	}
 
 	/* recycle on receive error */
@@ -892,7 +892,7 @@ static void emac_rx_handler(void *token, int len, int status)
 		if (!skb) {
 			if (netif_msg_rx_err(priv) && net_ratelimit())
 				dev_err(emac_dev, "failed rx buffer alloc\n");
-			return;
+			return 0;
 		}
 
 recycle:
@@ -902,9 +902,11 @@ static void emac_rx_handler(void *token, int len, int status)
 	WARN_ON(ret == -ENOMEM);
 	if (unlikely(ret < 0))
 		dev_kfree_skb_any(skb);
+
+	return 0;
 }
 
-static void emac_tx_handler(void *token, int len, int status)
+static int emac_tx_handler(void *token, int len, int status)
 {
 	struct sk_buff *skb = token;
 	struct net_device *ndev = skb->dev;
@@ -917,6 +919,7 @@ static void emac_tx_handler(void *token, int len, int status)
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
 	dev_kfree_skb_any(skb);
+	return 0;
 }
 
 /**
@@ -1237,8 +1240,8 @@ static int emac_poll(struct napi_struct *napi, int budget)
 		mask = EMAC_DM646X_MAC_IN_VECTOR_TX_INT_VEC;
 
 	if (status & mask) {
-		num_tx_pkts = cpdma_chan_process(priv->txchan,
-						 EMAC_DEF_TX_MAX_SERVICE);
+		num_tx_pkts = EMAC_DEF_TX_MAX_SERVICE;
+		cpdma_chan_process(priv->txchan, &num_tx_pkts);
 	} /* TX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_RX_INT_VEC;
@@ -1247,7 +1250,8 @@ static int emac_poll(struct napi_struct *napi, int budget)
 		mask = EMAC_DM646X_MAC_IN_VECTOR_RX_INT_VEC;
 
 	if (status & mask) {
-		num_rx_pkts = cpdma_chan_process(priv->rxchan, budget);
+		num_rx_pkts = budget;
+		cpdma_chan_process(priv->rxchan, &num_rx_pkts);
 	} /* RX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_HOST_INT;
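A sketch of a poll loop on top of the changed API (the surrounding driver
glue, struct my_ctx and example_rx_poll, is hypothetical; CPSW_FLUSH_XDP_MAP
is introduced in patch 7): the packet count now comes back through the
quota pointer, while the return value carries the accumulated handler
status:

static int example_rx_poll(struct napi_struct *napi, int budget)
{
	struct my_ctx *ctx = container_of(napi, struct my_ctx, napi_rx);
	int num_rx = budget;
	int flags;

	/* num_rx is in/out: quota in, packets processed out */
	flags = cpdma_chan_process(ctx->rxch, &num_rx);
	if (flags > 0 && (flags & CPSW_FLUSH_XDP_MAP))
		xdp_do_flush_map();

	if (num_rx < budget)
		napi_complete_done(napi, num_rx);

	return num_rx;
}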
From patchwork Wed Jun 5 13:20:09 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 10976965
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 7/7] net: ethernet: ti: cpsw: add XDP support
Date: Wed, 5 Jun 2019 16:20:09 +0300
Message-Id: <20190605132009.10734-8-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
X-Mailing-List: linux-omap@vger.kernel.org

Add XDP support based on an rx page_pool allocator, one frame per page.
The page pool allocator is used under the assumption that only one
rx_handler runs at a time. DMA map/unmap is reused from the page pool
even though there is no need to map the whole page.

Due to the specifics of cpsw, the same TX/RX handler can be used by two
network devices, so special fields are added to the buffer to identify
the interface a frame is destined to. Thus XDP works for both
interfaces, which makes it easy to test xdp redirect between the two.
Also, each ndev and each of its rx queues has its own page pool.

The XDP prog is common for all channels until appropriate changes are
added to the XDP infrastructure. Also, once page_pool recycling becomes
part of the skb netstack, some simplifications can be made, marked with
comments.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/Kconfig        |   1 +
 drivers/net/ethernet/ti/cpsw.c         | 524 ++++++++++++++++++++++---
 drivers/net/ethernet/ti/cpsw_ethtool.c |  58 ++-
 drivers/net/ethernet/ti/cpsw_priv.h    |   7 +
 4 files changed, 523 insertions(+), 67 deletions(-)

diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index bd05a977ee7e..3cb8c5214835 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -50,6 +50,7 @@ config TI_CPSW
 	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
 	select TI_DAVINCI_MDIO
 	select MFD_SYSCON
+	select PAGE_POOL
 	select REGMAP
 	---help---
 	  This driver supports TI's CPSW Ethernet Switch.
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index d89ad428315c..391f2378a0c3 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -31,6 +31,10 @@
 #include
 #include
 #include
+#include <net/page_pool.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
+#include <linux/filter.h>
 #include
 #include
@@ -60,6 +64,10 @@ static int descs_pool_size = CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT;
 module_param(descs_pool_size, int, 0444);
 MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
 
+/* The buf includes headroom compatible with both skb and xdpf */
+#define CPSW_HEADROOM_NA (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN)
+#define CPSW_HEADROOM  ALIGN(CPSW_HEADROOM_NA, sizeof(long))
+
 #define for_each_slave(priv, func, arg...)				\
\ do { \ struct cpsw_slave *slave; \ @@ -74,6 +82,13 @@ MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool"); (func)(slave++, ##arg); \ } while (0) +#define CPSW_XMETA_OFFSET ALIGN(sizeof(struct xdp_frame), sizeof(long)) + +#define CPSW_XDP_CONSUMED 1 +#define CPSW_XDP_CONSUMED_FLUSH 2 +#define CPSW_XDP_PASS 0 +#define CPSW_FLUSH_XDP_MAP 1 + static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid); @@ -337,24 +352,58 @@ void cpsw_intr_disable(struct cpsw_common *cpsw) return; } +static int cpsw_is_xdpf_handle(void *handle) +{ + return (unsigned long)handle & BIT(0); +} + +static void *cpsw_xdpf_to_handle(struct xdp_frame *xdpf) +{ + return (void *)((unsigned long)xdpf | BIT(0)); +} + +static struct xdp_frame *cpsw_handle_to_xdpf(void *handle) +{ + return (struct xdp_frame *)((unsigned long)handle & ~BIT(0)); +} + +struct __aligned(sizeof(long)) cpsw_meta_xdp { + struct net_device *ndev; + int ch; +}; + int cpsw_tx_handler(void *token, int len, int status) { + struct cpsw_meta_xdp *xmeta; + struct xdp_frame *xdpf; + struct net_device *ndev; struct netdev_queue *txq; - struct sk_buff *skb = token; - struct net_device *ndev = skb->dev; - struct cpsw_common *cpsw = ndev_to_cpsw(ndev); + struct sk_buff *skb; + int ch; + + if (cpsw_is_xdpf_handle(token)) { + xdpf = cpsw_handle_to_xdpf(token); + xmeta = (void *)xdpf + CPSW_XMETA_OFFSET; + ndev = xmeta->ndev; + ch = xmeta->ch; + xdp_return_frame_rx_napi(xdpf); + } else { + skb = token; + ndev = skb->dev; + ch = skb_get_queue_mapping(skb); + cpts_tx_timestamp(ndev_to_cpsw(ndev)->cpts, skb); + dev_kfree_skb_any(skb); + } /* Check whether the queue is stopped due to stalled tx dma, if the * queue is stopped then start the queue as we have free desc for tx */ - txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb)); + txq = netdev_get_tx_queue(ndev, ch); if (unlikely(netif_tx_queue_stopped(txq))) netif_tx_wake_queue(txq); - cpts_tx_timestamp(cpsw->cpts, skb); ndev->stats.tx_packets++; ndev->stats.tx_bytes += len; - dev_kfree_skb_any(skb); return 0; } @@ -401,25 +450,246 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb) } } +static int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf, + struct page *page) +{ + struct cpsw_common *cpsw = priv->cpsw; + struct cpsw_meta_xdp *xmeta; + struct netdev_queue *txq; + struct cpdma_chan *txch; + dma_addr_t dma; + int ret, port; + + xmeta = (void *)xdpf + CPSW_XMETA_OFFSET; + xmeta->ndev = priv->ndev; + xmeta->ch = 0; + txch = cpsw->txv[0].ch; + + port = priv->emac_port + cpsw->data.dual_emac; + if (page) { + dma = page_pool_get_dma_addr(page); + dma += xdpf->data - (void *)xdpf; + ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf), + dma, xdpf->len, port); + } else { + if (sizeof(*xmeta) > xdpf->headroom) { + xdp_return_frame_rx_napi(xdpf); + return -EINVAL; + } + + ret = cpdma_chan_submit(txch, cpsw_xdpf_to_handle(xdpf), + xdpf->data, xdpf->len, port); + } + + if (ret) { + xdp_return_frame_rx_napi(xdpf); + goto stop; + } + + /* no tx desc - stop sending us tx frames */ + if (unlikely(!cpdma_check_free_tx_desc(txch))) + goto stop; + + return ret; +stop: + txq = netdev_get_tx_queue(priv->ndev, 0); + netif_tx_stop_queue(txq); + + /* Barrier, so that stop_queue visible to other cpus */ + smp_mb__after_atomic(); + + if (cpdma_check_free_tx_desc(txch)) + netif_tx_wake_queue(txq); + + return ret; +} + +static int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp, + struct page *page) +{ + struct net_device *ndev = 
priv->ndev; + int ret = CPSW_XDP_CONSUMED; + struct xdp_frame *xdpf; + struct bpf_prog *prog; + u32 act; + + rcu_read_lock(); + + prog = READ_ONCE(priv->xdp_prog); + if (!prog) { + ret = CPSW_XDP_PASS; + goto out; + } + + act = bpf_prog_run_xdp(prog, xdp); + switch (act) { + case XDP_PASS: + ret = CPSW_XDP_PASS; + break; + case XDP_TX: + xdpf = convert_to_xdp_frame(xdp); + if (unlikely(!xdpf)) + goto drop; + + cpsw_xdp_tx_frame(priv, xdpf, page); + break; + case XDP_REDIRECT: + if (xdp_do_redirect(ndev, xdp, prog)) + goto drop; + + ret = CPSW_XDP_CONSUMED_FLUSH; + break; + default: + bpf_warn_invalid_xdp_action(act); + /* fall through */ + case XDP_ABORTED: + trace_xdp_exception(ndev, prog, act); + /* fall through -- handle aborts by dropping packet */ + case XDP_DROP: + goto drop; + } +out: + rcu_read_unlock(); + return ret; +drop: + rcu_read_unlock(); + page_pool_recycle_direct(priv->page_pool[ch], page); + return ret; +} + +static unsigned int cpsw_rxbuf_total_len(unsigned int len) +{ + len += CPSW_HEADROOM; + len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + + return SKB_DATA_ALIGN(len); +} + +static void cpsw_destroy_rx_pool(struct cpsw_priv *priv, int ch) +{ + if (!xdp_rxq_info_is_reg(&priv->xdp_rxq[ch])) + return; + + xdp_rxq_info_unreg(&priv->xdp_rxq[ch]); + page_pool_destroy(priv->page_pool[ch]); + priv->page_pool[ch] = NULL; +} + +struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw, int size) +{ + struct page_pool_params pp_params; + struct page_pool *pool; + + pp_params.order = 0; + pp_params.flags = PP_FLAG_DMA_MAP; + pp_params.pool_size = size; + pp_params.nid = NUMA_NO_NODE; + pp_params.dma_dir = DMA_BIDIRECTIONAL; + pp_params.dev = cpsw->dev; + + pool = page_pool_create(&pp_params); + if (IS_ERR(pool)) + dev_err(cpsw->dev, "cannot create rx page pool\n"); + + return pool; +} + +static int cpsw_create_rx_pool(struct cpsw_priv *priv, int ch) +{ + struct xdp_rxq_info *xdp_rxq = &priv->xdp_rxq[ch]; + struct cpsw_common *cpsw = priv->cpsw; + struct page_pool *pool; + int ret, pool_size; + + ret = xdp_rxq_info_reg(xdp_rxq, priv->ndev, ch); + if (ret) + return ret; + + pool_size = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch); + pool = cpsw_create_page_pool(cpsw, pool_size); + if (IS_ERR(pool)) { + ret = PTR_ERR(pool); + xdp_rxq_info_unreg(xdp_rxq); + return ret; + } + + priv->page_pool[ch] = pool; + ret = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pool); + if (ret) + cpsw_destroy_rx_pool(priv, ch); + + return ret; +} + +void cpsw_ndev_destroy_rx_pools(struct cpsw_priv *priv) +{ + struct cpsw_common *cpsw = priv->cpsw; + int i; + + for (i = 0; i < cpsw->rx_ch_num; i++) + cpsw_destroy_rx_pool(priv, i); +} + +int cpsw_ndev_create_rx_pools(struct cpsw_priv *priv) +{ + struct cpsw_common *cpsw = priv->cpsw; + int i, ret; + + for (i = 0; i < cpsw->rx_ch_num; i++) { + ret = cpsw_create_rx_pool(priv, i); + if (ret) + goto err_cleanup; + } + + return 0; + +err_cleanup: + cpsw_ndev_destroy_rx_pools(priv); + + return ret; +} + static int cpsw_rx_handler(void *token, int len, int status) { - struct cpdma_chan *ch; - struct sk_buff *skb = token; - struct sk_buff *new_skb; - struct net_device *ndev = skb->dev; - int ret = 0, port; - struct cpsw_common *cpsw = ndev_to_cpsw(ndev); + struct page *new_page, *page = token; + void *pa = page_address(page); + struct cpsw_meta_xdp *xmeta = pa + CPSW_XMETA_OFFSET; + struct cpsw_common *cpsw = ndev_to_cpsw(xmeta->ndev); + int pkt_size = cpsw->rx_packet_max; + int ret = 0, port, ch = xmeta->ch; + int headroom = 
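cpsw_run_xdp() above folds the four BPF verdicts into three driver outcomes: pass the page to the network stack, consumed (dropped back into the pool or queued for tx), or consumed with a redirect flush still pending. The shape of that mapping as a standalone sketch; all identifiers here are local stand-ins for the kernel enums and helpers:

  #include <stdio.h>

  enum xdp_act { ACT_ABORTED, ACT_DROP, ACT_PASS, ACT_TX, ACT_REDIRECT };
  enum { RES_PASS = 0, RES_CONSUMED = 1, RES_CONSUMED_FLUSH = 2 };

  static int run_xdp(enum xdp_act act)
  {
          switch (act) {
          case ACT_PASS:
                  return RES_PASS;          /* hand buffer to the stack */
          case ACT_TX:
                  /* queue the frame back out; tx completion frees it */
                  return RES_CONSUMED;
          case ACT_REDIRECT:
                  /* redirect queued; flushed later in the NAPI poll */
                  return RES_CONSUMED_FLUSH;
          case ACT_ABORTED:
          case ACT_DROP:
          default:
                  /* recycle the page straight back into the pool */
                  return RES_CONSUMED;
          }
  }

  int main(void)
  {
          printf("redirect -> %d\n", run_xdp(ACT_REDIRECT));
          return 0;
  }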
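cpsw_rxbuf_total_len() sizes the buffer handed to build_skb(): headroom plus the packet area plus the skb_shared_info tail, rounded for cache-line alignment, and the sum must still fit an order-0 page since the pool is created with order 0. Illustrative arithmetic only, with hypothetical values standing in for the kernel's alignment and skb_shared_info size:

  #include <stdio.h>

  #define SMP_CACHE_BYTES  64                        /* assumed */
  #define ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))
  #define HEADROOM         256                       /* ~XDP headroom, assumed */
  #define SHINFO_SIZE      320                       /* assumed shared-info size */

  static unsigned int rxbuf_total_len(unsigned int len)
  {
          len += HEADROOM;
          len += ALIGN_UP(SHINFO_SIZE, SMP_CACHE_BYTES);
          return ALIGN_UP(len, SMP_CACHE_BYTES);
  }

  int main(void)
  {
          /* rx_packet_max is ~1522 for VLAN-tagged Ethernet */
          printf("total = %u (must be <= 4096 for an order-0 page)\n",
                 rxbuf_total_len(1522));
          return 0;
  }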
CPSW_HEADROOM; + struct net_device *ndev = xmeta->ndev; + int res = 0; struct cpsw_priv *priv; + struct page_pool *pool; + struct sk_buff *skb; + struct xdp_buff xdp; + dma_addr_t dma; - if (cpsw->data.dual_emac) { + if (cpsw->data.dual_emac && status >= 0) { port = CPDMA_RX_SOURCE_PORT(status); - if (port) { + if (port) ndev = cpsw->slaves[--port].ndev; - skb->dev = ndev; - } } + priv = netdev_priv(ndev); + pool = priv->page_pool[ch]; if (unlikely(status < 0) || unlikely(!netif_running(ndev))) { + if (cpsw->data.dual_emac && !pool) { + /* In dual mac mode while going down the descriptors + * can have pointer on netdev that has been down, so + * find active device and its page pool. + */ + for (port = 0; port < cpsw->data.slaves; port++) { + ndev = cpsw->slaves[port].ndev; + priv = netdev_priv(ndev); + if (priv->page_pool[ch]) { + pool = priv->page_pool[ch]; + break; + } + } + } + /* In dual emac mode check for all interfaces */ if (cpsw->data.dual_emac && cpsw->usage_count && (status >= 0)) { @@ -427,49 +697,97 @@ static int cpsw_rx_handler(void *token, int len, int status) * is already down and the other interface is up * and running, instead of freeing which results * in reducing of the number of rx descriptor in - * DMA engine, requeue skb back to cpdma. + * DMA engine, requeue page back to cpdma. */ - new_skb = skb; + new_page = page; goto requeue; } - /* the interface is going down, skbs are purged */ - dev_kfree_skb_any(skb); + /* the interface is going down, pages are purged */ + page_pool_recycle_direct(pool, page); return 0; } - new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max); - if (new_skb) { - skb_copy_queue_mapping(new_skb, skb); - skb_put(skb, len); - if (status & CPDMA_RX_VLAN_ENCAP) - cpsw_rx_vlan_encap(skb); - priv = netdev_priv(ndev); - if (priv->rx_ts_enabled) - cpts_rx_timestamp(cpsw->cpts, skb); - skb->protocol = eth_type_trans(skb, ndev); - netif_receive_skb(skb); - ndev->stats.rx_bytes += len; - ndev->stats.rx_packets++; - kmemleak_not_leak(new_skb); - } else { + new_page = page_pool_dev_alloc_pages(pool); + if (unlikely(!new_page)) { + new_page = page; ndev->stats.rx_dropped++; - new_skb = skb; + goto requeue; } + if (priv->xdp_prog) { + if (status & CPDMA_RX_VLAN_ENCAP) { + xdp.data = pa + CPSW_HEADROOM + + CPSW_RX_VLAN_ENCAP_HDR_SIZE; + xdp.data_end = xdp.data + len - + CPSW_RX_VLAN_ENCAP_HDR_SIZE; + } else { + xdp.data = pa + CPSW_HEADROOM; + xdp.data_end = xdp.data + len; + } + + xdp_set_data_meta_invalid(&xdp); + + xdp.data_hard_start = pa; + xdp.rxq = &priv->xdp_rxq[ch]; + + ret = cpsw_run_xdp(priv, ch, &xdp, page); + if (ret != CPSW_XDP_PASS) { + if (ret == CPSW_XDP_CONSUMED_FLUSH) + res = CPSW_FLUSH_XDP_MAP; + + goto requeue; + } + + /* XDP prog might have changed packet data and boundaries */ + len = xdp.data_end - xdp.data; + headroom = xdp.data - xdp.data_hard_start; + + /* XDP prog can modify vlan tag, so can't use encap header */ + status &= ~CPDMA_RX_VLAN_ENCAP; + } + + /* pass skb to netstack if no XDP prog or returned XDP_PASS */ + skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size)); + if (!skb) { + ndev->stats.rx_dropped++; + page_pool_recycle_direct(pool, page); + goto requeue; + } + + skb_reserve(skb, headroom); + skb_put(skb, len); + skb->dev = ndev; + if (status & CPDMA_RX_VLAN_ENCAP) + cpsw_rx_vlan_encap(skb); + if (priv->rx_ts_enabled) + cpts_rx_timestamp(cpsw->cpts, skb); + skb->protocol = eth_type_trans(skb, ndev); + + /* unmap page as no netstack skb page recycling */ + page_pool_unmap_page(pool, page); + 
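One subtlety in cpsw_rx_handler() above: when hardware VLAN encapsulation is active, the descriptor carries an extra CPSW_RX_VLAN_ENCAP_HDR_SIZE word in front of the Ethernet frame, and the XDP program must not see it, so both data and data_end are advanced past it. A sketch of the window arithmetic; constants and the struct are stand-ins, not the kernel's:

  #include <stdio.h>

  #define HEADROOM             256  /* stand-in for CPSW_HEADROOM */
  #define VLAN_ENCAP_HDR_SIZE  4    /* stand-in for the encap word */

  struct fake_xdp_buff {
          void *data_hard_start;
          void *data;
          void *data_end;
  };

  static void setup_window(struct fake_xdp_buff *xdp, void *pa, int len,
                           int vlan_encap)
  {
          xdp->data_hard_start = pa;
          if (vlan_encap) {
                  /* skip the encap word so XDP sees a plain frame */
                  xdp->data = (char *)pa + HEADROOM + VLAN_ENCAP_HDR_SIZE;
                  xdp->data_end = (char *)xdp->data + len - VLAN_ENCAP_HDR_SIZE;
          } else {
                  xdp->data = (char *)pa + HEADROOM;
                  xdp->data_end = (char *)xdp->data + len;
          }
  }

  int main(void)
  {
          char page[4096];
          struct fake_xdp_buff xdp;

          setup_window(&xdp, page, 64, 1);
          printf("payload bytes visible to XDP: %td\n",
                 (char *)xdp.data_end - (char *)xdp.data);
          return 0;
  }

After the program returns, len and headroom are recomputed from the possibly-moved pointers, and the encap status bit is cleared because the program may have rewritten the VLAN tag.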
netif_receive_skb(skb); + + ndev->stats.rx_bytes += len; + ndev->stats.rx_packets++; + requeue: if (netif_dormant(ndev)) { - dev_kfree_skb_any(new_skb); - return 0; + page_pool_recycle_direct(pool, new_page); + return res; } - ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch; - ret = cpdma_chan_submit(ch, new_skb, new_skb->data, - skb_tailroom(new_skb), 0); + xmeta = page_address(new_page) + CPSW_XMETA_OFFSET; + xmeta->ndev = ndev; + xmeta->ch = ch; + + dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM; + ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, + pkt_size, 0); if (WARN_ON(ret < 0)) - dev_kfree_skb_any(new_skb); + page_pool_recycle_direct(pool, new_page); - return 0; + return res; } void cpsw_split_res(struct cpsw_common *cpsw) @@ -644,8 +962,8 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget) static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget) { u32 ch_map; - int num_rx, cur_budget, ch; struct cpsw_common *cpsw = napi_to_cpsw(napi_rx); + int num_rx, cur_budget, ch, res; struct cpsw_vector *rxv; /* process every unprocessed channel */ @@ -660,8 +978,12 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget) else cur_budget = rxv->budget; - cpdma_chan_process(rxv->ch, &cur_budget); + res = cpdma_chan_process(rxv->ch, &cur_budget); num_rx += cur_budget; + + if (res & CPSW_FLUSH_XDP_MAP) + xdp_do_flush_map(); + if (num_rx >= budget) break; } @@ -677,10 +999,15 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget) static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget) { struct cpsw_common *cpsw = napi_to_cpsw(napi_rx); - int num_rx; + struct cpsw_vector *rxv; + int num_rx, res; num_rx = budget; - cpdma_chan_process(cpsw->rxv[0].ch, &num_rx); + rxv = &cpsw->rxv[0]; + res = cpdma_chan_process(rxv->ch, &num_rx); + if (res & CPSW_FLUSH_XDP_MAP) + xdp_do_flush_map(); + if (num_rx < budget) { napi_complete_done(napi_rx, num_rx); writel(0xff, &cpsw->wr_regs->rx_en); @@ -1042,33 +1369,38 @@ static void cpsw_init_host_port(struct cpsw_priv *priv) int cpsw_fill_rx_channels(struct cpsw_priv *priv) { struct cpsw_common *cpsw = priv->cpsw; - struct sk_buff *skb; + struct cpsw_meta_xdp *xmeta; + struct page_pool *pool; + struct page *page; int ch_buf_num; int ch, i, ret; + dma_addr_t dma; for (ch = 0; ch < cpsw->rx_ch_num; ch++) { + pool = priv->page_pool[ch]; ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch); for (i = 0; i < ch_buf_num; i++) { - skb = __netdev_alloc_skb_ip_align(priv->ndev, - cpsw->rx_packet_max, - GFP_KERNEL); - if (!skb) { - cpsw_err(priv, ifup, "cannot allocate skb\n"); + page = page_pool_dev_alloc_pages(pool); + if (!page) { + cpsw_err(priv, ifup, "allocate rx page err\n"); return -ENOMEM; } - skb_set_queue_mapping(skb, ch); - ret = cpdma_chan_submit(cpsw->rxv[ch].ch, skb, - skb->data, skb_tailroom(skb), - 0); + xmeta = page_address(page) + CPSW_XMETA_OFFSET; + xmeta->ndev = priv->ndev; + xmeta->ch = ch; + + dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM; + ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, page, + dma, cpsw->rx_packet_max, + 0); if (ret < 0) { cpsw_err(priv, ifup, - "cannot submit skb to channel %d rx, error %d\n", + "cannot submit page to channel %d rx, error %d\n", ch, ret); - kfree_skb(skb); + page_pool_recycle_direct(pool, page); return ret; } - kmemleak_not_leak(skb); } cpsw_info(priv, ifup, "ch %d rx, submitted %d descriptors\n", @@ -1380,6 +1712,10 @@ static int cpsw_ndo_open(struct net_device *ndev) cpsw_ale_add_vlan(cpsw->ale, 
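The poll functions above batch redirect flushing: cpsw_rx_handler() only reports CPSW_FLUSH_XDP_MAP upward through cpdma_chan_process(), and the NAPI loop calls xdp_do_flush_map() once per processed channel batch instead of once per packet. The control flow, sketched with stand-in names:

  #include <stdio.h>

  #define FLUSH_XDP_MAP 1

  static int process_channel(int ch, int *budget)
  {
          (void)ch;
          *budget -= 1;           /* pretend one packet was consumed */
          return FLUSH_XDP_MAP;   /* pretend it was redirected */
  }

  static void flush_redirects(void)
  {
          printf("flushing queued redirects for this batch\n");
  }

  int main(void)
  {
          int budget = 64;

          for (int ch = 0; ch < 4 && budget > 0; ch++) {
                  int res = process_channel(ch, &budget);

                  if (res & FLUSH_XDP_MAP)
                          flush_redirects();
          }
          return 0;
  }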
cpsw->data.default_vlan, ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0); + ret = cpsw_ndev_create_rx_pools(priv); + if (ret) + goto err_cleanup; + /* initialize shared resources for every ndev */ if (!cpsw->usage_count) { /* disable priority elevation */ @@ -1430,11 +1766,11 @@ static int cpsw_ndo_open(struct net_device *ndev) return 0; err_cleanup: - if (!cpsw->usage_count) { + if (!cpsw->usage_count) cpdma_ctlr_stop(cpsw->dma); - for_each_slave(priv, cpsw_slave_stop, cpsw); - } + cpsw_ndev_destroy_rx_pools(priv); + for_each_slave(priv, cpsw_slave_stop, cpsw); pm_runtime_put_sync(cpsw->dev); netif_carrier_off(priv->ndev); return ret; @@ -1463,6 +1799,8 @@ static int cpsw_ndo_stop(struct net_device *ndev) if (cpsw_need_resplit(cpsw)) cpsw_split_res(cpsw); + cpsw_ndev_destroy_rx_pools(priv); + cpsw->usage_count--; pm_runtime_put_sync(cpsw->dev); return 0; @@ -2014,6 +2352,64 @@ static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type, } } +static int cpsw_xdp_prog_setup(struct cpsw_priv *priv, struct netdev_bpf *bpf) +{ + struct bpf_prog *prog = bpf->prog; + + if (!priv->xdpi.prog && !prog) + return 0; + + if (!xdp_attachment_flags_ok(&priv->xdpi, bpf)) + return -EBUSY; + + WRITE_ONCE(priv->xdp_prog, prog); + + xdp_attachment_setup(&priv->xdpi, bpf); + + return 0; +} + +static int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf) +{ + struct cpsw_priv *priv = netdev_priv(ndev); + + switch (bpf->command) { + case XDP_SETUP_PROG: + return cpsw_xdp_prog_setup(priv, bpf); + + case XDP_QUERY_PROG: + return xdp_attachment_query(&priv->xdpi, bpf); + + default: + return -EINVAL; + } +} + +static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n, + struct xdp_frame **frames, u32 flags) +{ + struct cpsw_priv *priv = netdev_priv(ndev); + struct xdp_frame *xdpf; + int i, drops = 0; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + for (i = 0; i < n; i++) { + xdpf = frames[i]; + if (xdpf->len < CPSW_MIN_PACKET_SIZE) { + xdp_return_frame_rx_napi(xdpf); + drops++; + continue; + } + + if (cpsw_xdp_tx_frame(priv, xdpf, NULL)) + drops++; + } + + return n - drops; +} + #ifdef CONFIG_NET_POLL_CONTROLLER static void cpsw_ndo_poll_controller(struct net_device *ndev) { @@ -2042,6 +2438,8 @@ static const struct net_device_ops cpsw_netdev_ops = { .ndo_vlan_rx_add_vid = cpsw_ndo_vlan_rx_add_vid, .ndo_vlan_rx_kill_vid = cpsw_ndo_vlan_rx_kill_vid, .ndo_setup_tc = cpsw_ndo_setup_tc, + .ndo_bpf = cpsw_ndo_bpf, + .ndo_xdp_xmit = cpsw_ndo_xdp_xmit, }; static void cpsw_get_drvinfo(struct net_device *ndev, diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c index 94f8f5ab46a5..71ccef9d1984 100644 --- a/drivers/net/ethernet/ti/cpsw_ethtool.c +++ b/drivers/net/ethernet/ti/cpsw_ethtool.c @@ -584,6 +584,41 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx, return 0; } +static void cpsw_destroy_rx_pools(struct cpsw_common *cpsw) +{ + struct cpsw_priv *priv; + int i; + + for (i = 0; i < cpsw->data.slaves; i++) { + priv = netdev_priv(cpsw->slaves[i].ndev); + if (priv->ndev && netif_running(priv->ndev)) + cpsw_ndev_destroy_rx_pools(priv); + } +} + +static int cpsw_create_rx_pools(struct cpsw_common *cpsw) +{ + struct cpsw_priv *priv; + int i, ret; + + for (i = 0; i < cpsw->data.slaves; i++) { + priv = netdev_priv(cpsw->slaves[i].ndev); + if (!(priv->ndev && netif_running(priv->ndev))) + continue; + + ret = cpsw_ndev_create_rx_pools(priv); + if (ret) + goto err_cleanup; + } + + return 0; + +err_cleanup: + 
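cpsw_ndo_xdp_xmit() above follows the usual .ndo_xdp_xmit contract: it returns how many of the n frames were actually queued, freeing and counting as drops any frame that is undersized or fails submission. A standalone sketch of that accounting (types and the minimum size are stand-ins):

  #include <stdio.h>

  #define MIN_PACKET_SIZE 64      /* stand-in for CPSW_MIN_PACKET_SIZE */

  struct fake_frame { int len; };

  static int tx_frame(struct fake_frame *f)
  {
          (void)f;
          return 0;               /* 0 = queued successfully */
  }

  static int xdp_xmit(struct fake_frame **frames, int n)
  {
          int i, drops = 0;

          for (i = 0; i < n; i++) {
                  /* undersized or unsubmittable frames count as drops */
                  if (frames[i]->len < MIN_PACKET_SIZE || tx_frame(frames[i]))
                          drops++;
          }
          return n - drops;       /* frames actually queued */
  }

  int main(void)
  {
          struct fake_frame a = { 1514 }, b = { 32 };
          struct fake_frame *batch[] = { &a, &b };

          printf("sent %d of 2\n", xdp_xmit(batch, 2));
          return 0;
  }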
cpsw_destroy_rx_pools(cpsw); + + return ret; +} + int cpsw_set_channels_common(struct net_device *ndev, struct ethtool_channels *chs, cpdma_handler_fn rx_handler) @@ -591,7 +626,7 @@ int cpsw_set_channels_common(struct net_device *ndev, struct cpsw_priv *priv = netdev_priv(ndev); struct cpsw_common *cpsw = priv->cpsw; struct net_device *sl_ndev; - int i, ret; + int i, new_pools, ret; ret = cpsw_check_ch_settings(cpsw, chs); if (ret < 0) @@ -599,6 +634,10 @@ int cpsw_set_channels_common(struct net_device *ndev, cpsw_suspend_data_pass(ndev); + new_pools = (chs->rx_count != cpsw->rx_ch_num) && cpsw->usage_count; + if (new_pools) + cpsw_destroy_rx_pools(cpsw); + ret = cpsw_update_channels_res(priv, chs->rx_count, 1, rx_handler); if (ret) goto err; @@ -629,6 +668,12 @@ int cpsw_set_channels_common(struct net_device *ndev, if (cpsw->usage_count) cpsw_split_res(cpsw); + if (new_pools) { + ret = cpsw_create_rx_pools(cpsw); + if (ret) + goto err; + } + ret = cpsw_resume_data_pass(ndev); if (!ret) return 0; @@ -654,8 +699,7 @@ void cpsw_get_ringparam(struct net_device *ndev, int cpsw_set_ringparam(struct net_device *ndev, struct ethtool_ringparam *ering) { - struct cpsw_priv *priv = netdev_priv(ndev); - struct cpsw_common *cpsw = priv->cpsw; + struct cpsw_common *cpsw = ndev_to_cpsw(ndev); int ret; /* ignore ering->tx_pending - only rx_pending adjustment is supported */ @@ -670,15 +714,21 @@ int cpsw_set_ringparam(struct net_device *ndev, cpsw_suspend_data_pass(ndev); + cpsw_destroy_rx_pools(cpsw); + cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending); if (cpsw->usage_count) cpdma_chan_split_pool(cpsw->dma); + ret = cpsw_create_rx_pools(cpsw); + if (ret) + goto err; + ret = cpsw_resume_data_pass(ndev); if (!ret) return 0; - +err: dev_err(cpsw->dev, "cannot set ring params, closing device\n"); dev_close(ndev); return ret; diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h index 2ecb3af59fe9..b428875fedfe 100644 --- a/drivers/net/ethernet/ti/cpsw_priv.h +++ b/drivers/net/ethernet/ti/cpsw_priv.h @@ -360,6 +360,11 @@ struct cpsw_priv { int shp_cfg_speed; int tx_ts_enabled; int rx_ts_enabled; + struct bpf_prog *xdp_prog; + struct xdp_rxq_info xdp_rxq[CPSW_MAX_QUEUES]; + struct page_pool *page_pool[CPSW_MAX_QUEUES]; + struct xdp_attachment_info xdpi; + u32 emac_port; struct cpsw_common *cpsw; }; @@ -391,6 +396,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv); void cpsw_intr_enable(struct cpsw_common *cpsw); void cpsw_intr_disable(struct cpsw_common *cpsw); int cpsw_tx_handler(void *token, int len, int status); +int cpsw_ndev_create_rx_pools(struct cpsw_priv *priv); +void cpsw_ndev_destroy_rx_pools(struct cpsw_priv *priv); /* ethtool */ u32 cpsw_get_msglevel(struct net_device *ndev);
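Both cpsw_ndev_create_rx_pools() and the ethtool-side cpsw_create_rx_pools() above share one error-handling shape: build per-channel state in a loop and, on the first failure, unwind everything already built so callers never observe a half-initialized device. The same pattern in miniature (stand-in code, not the driver's):

  #include <stdio.h>
  #include <stdlib.h>

  #define NUM_CH 4

  static void *pool[NUM_CH];

  static void destroy_pools(void)
  {
          for (int i = 0; i < NUM_CH; i++) {
                  free(pool[i]);
                  pool[i] = NULL;
          }
  }

  static int create_pools(void)
  {
          for (int i = 0; i < NUM_CH; i++) {
                  pool[i] = malloc(64);
                  if (!pool[i]) {
                          destroy_pools();   /* unwind partial setup */
                          return -1;
                  }
          }
          return 0;
  }

  int main(void)
  {
          if (create_pools())
                  return 1;
          printf("pools ready; destroy before resizing channels\n");
          destroy_pools();
          return 0;
  }

The same destroy-then-recreate sequence brackets every reconfiguration that changes the channel count or descriptor ring size, since each pool is sized from cpdma_chan_get_rx_buf_num().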