From patchwork Thu Nov 19 16:29:30 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918231
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 1/7] dpaa_eth: add struct for software backpointers
Date: Thu, 19 Nov 2020 18:29:30 +0200

We maintain an skb backpointer in the software annotations area of Tx
frames. Introduce a structure for explicit handling.
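For illustration, here is a minimal, self-contained userspace sketch of
the backpointer pattern this struct formalizes; the buffer, sizes, and
sk_buff stand-in are hypothetical, and only the dpaa_eth_swbp layout
mirrors the patch:

#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for kernel types; only the swbp layout mirrors the patch. */
struct sk_buff { int id; };
struct dpaa_eth_swbp { struct sk_buff *skb; };

int main(void)
{
	/* The Tx buffer's software annotations area (headroom start). */
	unsigned char *buff_start = calloc(1, 64);
	struct sk_buff skb = { .id = 42 };

	/* Tx path: store the backpointer through the explicit struct
	 * instead of a raw (struct sk_buff **) cast. */
	struct dpaa_eth_swbp *swbp = (struct dpaa_eth_swbp *)buff_start;
	swbp->skb = &skb;

	/* Tx confirmation path: recover the skb from the buffer alone. */
	swbp = (struct dpaa_eth_swbp *)buff_start;
	printf("recovered skb id=%d\n", swbp->skb->id);

	free(buff_start);
	return 0;
}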
Acked-by: Madalin Bucur Signed-off-by: Camelia Groza --- drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 16 +++++++++------- drivers/net/ethernet/freescale/dpaa/dpaa_eth.h | 8 ++++++++ 2 files changed, 17 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 8867693..88533a2 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -1633,6 +1633,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv, dma_addr_t addr = qm_fd_addr(fd); void *vaddr = phys_to_virt(addr); const struct qm_sg_entry *sgt; + struct dpaa_eth_swbp *swbp; struct sk_buff *skb; u64 ns; int i; @@ -1665,7 +1666,8 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv, dma_dir); } - skb = *(struct sk_buff **)vaddr; + swbp = (struct dpaa_eth_swbp *)vaddr; + skb = swbp->skb; /* DMA unmapping is required before accessing the HW provided info */ if (ts && priv->tx_tstamp && @@ -1879,8 +1881,8 @@ static int skb_to_contig_fd(struct dpaa_priv *priv, { struct net_device *net_dev = priv->net_dev; enum dma_data_direction dma_dir; + struct dpaa_eth_swbp *swbp; unsigned char *buff_start; - struct sk_buff **skbh; dma_addr_t addr; int err; @@ -1891,8 +1893,8 @@ static int skb_to_contig_fd(struct dpaa_priv *priv, buff_start = skb->data - priv->tx_headroom; dma_dir = DMA_TO_DEVICE; - skbh = (struct sk_buff **)buff_start; - *skbh = skb; + swbp = (struct dpaa_eth_swbp *)buff_start; + swbp->skb = skb; /* Enable L3/L4 hardware checksum computation. * @@ -1931,8 +1933,8 @@ static int skb_to_sg_fd(struct dpaa_priv *priv, const enum dma_data_direction dma_dir = DMA_TO_DEVICE; const int nr_frags = skb_shinfo(skb)->nr_frags; struct net_device *net_dev = priv->net_dev; + struct dpaa_eth_swbp *swbp; struct qm_sg_entry *sgt; - struct sk_buff **skbh; void *buff_start; skb_frag_t *frag; dma_addr_t addr; @@ -2005,8 +2007,8 @@ static int skb_to_sg_fd(struct dpaa_priv *priv, qm_fd_set_sg(fd, priv->tx_headroom, skb->len); /* DMA map the SGT page */ - skbh = (struct sk_buff **)buff_start; - *skbh = skb; + swbp = (struct dpaa_eth_swbp *)buff_start; + swbp->skb = skb; addr = dma_map_page(priv->tx_dma_dev, p, 0, priv->tx_headroom + DPAA_SGT_SIZE, dma_dir); diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h index fc2cc4c..da30e5d 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h @@ -144,6 +144,14 @@ struct dpaa_buffer_layout { u16 priv_data_size; }; +/* Information to be used on the Tx confirmation path. Stored just + * before the start of the transmit buffer. Maximum size allowed + * is DPAA_TX_PRIV_DATA_SIZE bytes. 
+ */ +struct dpaa_eth_swbp { + struct sk_buff *skb; +}; + struct dpaa_priv { struct dpaa_percpu_priv __percpu *percpu_priv; struct dpaa_bp *dpaa_bp;

From patchwork Thu Nov 19 16:29:31 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918239
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 2/7] dpaa_eth: add basic XDP support
Date: Thu, 19 Nov 2020 18:29:31 +0200
Message-Id: <257fc3a02512bb4d2fc5eccec1796011ec9f0fbb.1605802951.git.camelia.groza@nxp.com>

Implement the XDP_DROP and XDP_PASS actions.

Avoid draining and reconfiguring the buffer pool at each XDP
setup/teardown by increasing the frame headroom and reserving
XDP_PACKET_HEADROOM bytes from the start. Since we always reserve an
entire page per buffer, this change only impacts Jumbo frame scenarios
where the maximum linear frame size is reduced by 256 bytes.
Multi-buffer Scatter/Gather frames are now used instead in these
scenarios.

Allow XDP programs to access the entire buffer. The data in the
received frame's headroom can be overwritten by the XDP program.
Extract the relevant fields from the headroom while they are still
available, before running the XDP program. Since the headroom might be
resized before the frame is passed up to the stack, remove the check
for a fixed headroom value when building an skb.

Allow the metadata to be updated and pass the information up the stack.

Scatter/Gather frames are dropped when XDP is enabled.
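A minimal XDP program along these lines can be used to exercise the new
XDP_PASS/XDP_DROP paths. This is an illustrative sketch, not part of
the series; the program and section names are arbitrary, and it assumes
a clang/libbpf build (clang -O2 -target bpf -c drop_udp.c -o drop_udp.o):

// SPDX-License-Identifier: GPL-2.0
/* Drop IPv4 UDP packets, pass everything else. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_udp(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;

	/* Bounds checks keep the verifier happy. */
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	return iph->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
}

char _license[] SEC("license") = "GPL";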
Acked-by: Madalin Bucur
Signed-off-by: Camelia Groza
---
Changes in v2:
- warn only once if extracting the timestamp from a received frame fails

Changes in v3:
- drop received S/G frames when XDP is enabled

 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 158 ++++++++++++++++++++++---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h | 2 +
 2 files changed, 144 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 88533a2..102023c 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -53,6 +53,8 @@ #include <linux/dma-mapping.h> #include <linux/sort.h> #include <linux/phy_fixed.h> +#include <linux/bpf.h> +#include <linux/bpf_trace.h> #include <soc/fsl/bman.h> #include <soc/fsl/qman.h> #include "fman.h" @@ -177,7 +179,7 @@ #define DPAA_HWA_SIZE (DPAA_PARSE_RESULTS_SIZE + DPAA_TIME_STAMP_SIZE \ + DPAA_HASH_RESULTS_SIZE) #define DPAA_RX_PRIV_DATA_DEFAULT_SIZE (DPAA_TX_PRIV_DATA_SIZE + \ - dpaa_rx_extra_headroom) + XDP_PACKET_HEADROOM - DPAA_HWA_SIZE) #ifdef CONFIG_DPAA_ERRATUM_A050385 #define DPAA_RX_PRIV_DATA_A050385_SIZE (DPAA_A050385_ALIGN - DPAA_HWA_SIZE) #define DPAA_RX_PRIV_DATA_SIZE (fman_has_errata_a050385() ? \ @@ -1733,7 +1735,6 @@ static struct sk_buff *contig_fd_to_skb(const struct dpaa_priv *priv, SKB_DATA_ALIGN(sizeof(struct skb_shared_info))); if (WARN_ONCE(!skb, "Build skb failure on Rx\n")) goto free_buffer; - WARN_ON(fd_off != priv->rx_headroom); skb_reserve(skb, fd_off); skb_put(skb, qm_fd_get_length(fd)); @@ -2349,12 +2350,62 @@ static enum qman_cb_dqrr_result rx_error_dqrr(struct qman_portal *portal, return qman_cb_dqrr_consume; } +static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, + unsigned int *xdp_meta_len) +{ + ssize_t fd_off = qm_fd_get_offset(fd); + struct bpf_prog *xdp_prog; + struct xdp_buff xdp; + u32 xdp_act; + + rcu_read_lock(); + + xdp_prog = READ_ONCE(priv->xdp_prog); + if (!xdp_prog) { + rcu_read_unlock(); + return XDP_PASS; + } + + xdp.data = vaddr + fd_off; + xdp.data_meta = xdp.data; + xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM; + xdp.data_end = xdp.data + qm_fd_get_length(fd); + xdp.frame_sz = DPAA_BP_RAW_SIZE; + + xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); + + /* Update the length and the offset of the FD */ + qm_fd_set_contig(fd, xdp.data - vaddr, xdp.data_end - xdp.data); + + switch (xdp_act) { + case XDP_PASS: + *xdp_meta_len = xdp.data - xdp.data_meta; + break; + default: + bpf_warn_invalid_xdp_action(xdp_act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(priv->net_dev, xdp_prog, xdp_act); + fallthrough; + case XDP_DROP: + /* Free the buffer */ + free_pages((unsigned long)vaddr, 0); + break; + } + + rcu_read_unlock(); + + return xdp_act; +} + static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, struct qman_fq *fq, const struct qm_dqrr_entry *dq, bool sched_napi) { + bool ts_valid = false, hash_valid = false; struct skb_shared_hwtstamps *shhwtstamps; + unsigned int skb_len, xdp_meta_len = 0; struct rtnl_link_stats64 *percpu_stats; struct dpaa_percpu_priv *percpu_priv; const struct qm_fd *fd = &dq->fd; @@ -2362,12 +2413,14 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, enum qm_fd_format fd_format; struct net_device *net_dev; u32 fd_status, hash_offset; + struct qm_sg_entry *sgt; struct dpaa_bp *dpaa_bp; struct dpaa_priv *priv; - unsigned int skb_len; struct sk_buff *skb; int *count_ptr; + u32 xdp_act; void *vaddr; + u32 hash; u64 ns; fd_status = be32_to_cpu(fd->status); @@ -2423,35 +2476,67 @@ static enum
qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, count_ptr = this_cpu_ptr(dpaa_bp->percpu_count); (*count_ptr)--; - if (likely(fd_format == qm_fd_contig)) + /* Extract the timestamp stored in the headroom before running XDP */ + if (priv->rx_tstamp) { + if (!fman_port_get_tstamp(priv->mac_dev->port[RX], vaddr, &ns)) + ts_valid = true; + else + WARN_ONCE(1, "fman_port_get_tstamp failed!\n"); + } + + /* Extract the hash stored in the headroom before running XDP */ + if (net_dev->features & NETIF_F_RXHASH && priv->keygen_in_use && + !fman_port_get_hash_result_offset(priv->mac_dev->port[RX], + &hash_offset)) { + hash = be32_to_cpu(*(u32 *)(vaddr + hash_offset)); + hash_valid = true; + } + + if (likely(fd_format == qm_fd_contig)) { + xdp_act = dpaa_run_xdp(priv, (struct qm_fd *)fd, vaddr, + &xdp_meta_len); + if (xdp_act != XDP_PASS) { + percpu_stats->rx_packets++; + percpu_stats->rx_bytes += qm_fd_get_length(fd); + return qman_cb_dqrr_consume; + } skb = contig_fd_to_skb(priv, fd); - else + } else { + /* XDP doesn't support S/G frames. Return the fragments to the + * buffer pool and release the SGT. + */ + if (READ_ONCE(priv->xdp_prog)) { + WARN_ONCE(1, "S/G frames not supported under XDP\n"); + sgt = vaddr + qm_fd_get_offset(fd); + dpaa_release_sgt_members(sgt); + free_pages((unsigned long)vaddr, 0); + return qman_cb_dqrr_consume; + } skb = sg_fd_to_skb(priv, fd); + } if (!skb) return qman_cb_dqrr_consume; - if (priv->rx_tstamp) { + if (xdp_meta_len) + skb_metadata_set(skb, xdp_meta_len); + + /* Set the previously extracted timestamp */ + if (ts_valid) { shhwtstamps = skb_hwtstamps(skb); memset(shhwtstamps, 0, sizeof(*shhwtstamps)); - - if (!fman_port_get_tstamp(priv->mac_dev->port[RX], vaddr, &ns)) - shhwtstamps->hwtstamp = ns_to_ktime(ns); - else - dev_warn(net_dev->dev.parent, "fman_port_get_tstamp failed!\n"); + shhwtstamps->hwtstamp = ns_to_ktime(ns); } skb->protocol = eth_type_trans(skb, net_dev); - if (net_dev->features & NETIF_F_RXHASH && priv->keygen_in_use && - !fman_port_get_hash_result_offset(priv->mac_dev->port[RX], - &hash_offset)) { + /* Set the previously extracted hash */ + if (hash_valid) { enum pkt_hash_types type; /* if L4 exists, it was used in the hash generation */ type = be32_to_cpu(fd->status) & FM_FD_STAT_L4CV ? 
PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3; - skb_set_hash(skb, be32_to_cpu(*(u32 *)(vaddr + hash_offset)), - type); + skb_set_hash(skb, hash, type); } skb_len = skb->len; @@ -2671,6 +2756,46 @@ static int dpaa_eth_stop(struct net_device *net_dev) return err; } +static int dpaa_setup_xdp(struct net_device *net_dev, struct bpf_prog *prog) +{ + struct dpaa_priv *priv = netdev_priv(net_dev); + struct bpf_prog *old_prog; + bool up; + int err; + + /* S/G fragments are not supported in XDP-mode */ + if (prog && (priv->dpaa_bp->raw_size < + net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN)) + return -EINVAL; + + up = netif_running(net_dev); + + if (up) + dpaa_eth_stop(net_dev); + + old_prog = xchg(&priv->xdp_prog, prog); + if (old_prog) + bpf_prog_put(old_prog); + + if (up) { + err = dpaa_open(net_dev); + if (err) + return err; + } + + return 0; +} + +static int dpaa_xdp(struct net_device *net_dev, struct netdev_bpf *xdp) +{ + switch (xdp->command) { + case XDP_SETUP_PROG: + return dpaa_setup_xdp(net_dev, xdp->prog); + default: + return -EINVAL; + } +} + static int dpaa_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) { struct dpaa_priv *priv = netdev_priv(dev); @@ -2737,6 +2862,7 @@ static int dpaa_ioctl(struct net_device *net_dev, struct ifreq *rq, int cmd) .ndo_set_rx_mode = dpaa_set_rx_mode, .ndo_do_ioctl = dpaa_ioctl, .ndo_setup_tc = dpaa_setup_tc, + .ndo_bpf = dpaa_xdp, }; static int dpaa_napi_add(struct net_device *net_dev) diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h index da30e5d..94e8613 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h @@ -196,6 +196,8 @@ struct dpaa_priv { bool tx_tstamp; /* Tx timestamping enabled */ bool rx_tstamp; /* Rx timestamping enabled */ + + struct bpf_prog *xdp_prog; }; /* from dpaa_ethtool.c */

From patchwork Thu Nov 19 16:29:32 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918233
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 3/7] dpaa_eth: limit the possible MTU range when XDP is enabled
Date: Thu, 19 Nov 2020 18:29:32 +0200
Message-Id: <859f2a69342d32d7c4a5885bfb4cb0097f246ac4.1605802951.git.camelia.groza@nxp.com>

Implement the ndo_change_mtu callback to prevent users from setting an
MTU that would permit processing of S/G frames. The maximum MTU size
depends on the buffer size.

Acked-by: Madalin Bucur
Signed-off-by: Camelia Groza
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 32 ++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 102023c..242ed45 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2756,6 +2756,34 @@ static int dpaa_eth_stop(struct net_device *net_dev) return err; } +static bool xdp_validate_mtu(struct dpaa_priv *priv, int mtu) +{ + int max_contig_data = priv->dpaa_bp->size - priv->rx_headroom; + + /* We do not support S/G fragments when XDP is enabled. + * Limit the MTU in relation to the buffer size. + */ + if (mtu + VLAN_ETH_HLEN + ETH_FCS_LEN > max_contig_data) { + dev_warn(priv->net_dev->dev.parent, + "The maximum MTU for XDP is %d\n", + max_contig_data - VLAN_ETH_HLEN - ETH_FCS_LEN); + return false; + } + + return true; +} + +static int dpaa_change_mtu(struct net_device *net_dev, int new_mtu) +{ + struct dpaa_priv *priv = netdev_priv(net_dev); + + if (priv->xdp_prog && !xdp_validate_mtu(priv, new_mtu)) + return -EINVAL; + + net_dev->mtu = new_mtu; + return 0; +} + static int dpaa_setup_xdp(struct net_device *net_dev, struct bpf_prog *prog) { struct dpaa_priv *priv = netdev_priv(net_dev); @@ -2764,8 +2792,7 @@ static int dpaa_setup_xdp(struct net_device *net_dev, struct bpf_prog *prog) int err; /* S/G fragments are not supported in XDP-mode */ - if (prog && (priv->dpaa_bp->raw_size < - net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN)) + if (prog && !xdp_validate_mtu(priv, net_dev->mtu)) return -EINVAL; up = netif_running(net_dev); @@ -2862,6 +2889,7 @@ static int dpaa_ioctl(struct net_device *net_dev, struct ifreq *rq, int cmd) .ndo_set_rx_mode = dpaa_set_rx_mode, .ndo_do_ioctl = dpaa_ioctl, .ndo_setup_tc = dpaa_setup_tc, + .ndo_change_mtu = dpaa_change_mtu, .ndo_bpf = dpaa_xdp, };
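To make the limit concrete, here is a small sketch of the arithmetic
with assumed values (a one-page buffer and a 256-byte XDP headroom);
the real numbers come from the buffer pool and the Rx headroom computed
at probe time:

#include <stdio.h>

/* Assumed values for illustration only. */
#define BUF_SIZE      4096	/* one page per buffer */
#define RX_HEADROOM    256	/* XDP_PACKET_HEADROOM */
#define VLAN_ETH_HLEN   18
#define ETH_FCS_LEN      4

int main(void)
{
	int max_contig_data = BUF_SIZE - RX_HEADROOM;
	int max_mtu = max_contig_data - VLAN_ETH_HLEN - ETH_FCS_LEN;

	/* Any MTU above this would force S/G frames, which XDP rejects. */
	printf("max XDP MTU under these assumptions: %d\n", max_mtu);
	return 0;
}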
From patchwork Thu Nov 19 16:29:33 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918241
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 4/7] dpaa_eth: add XDP_TX support
Date: Thu, 19 Nov 2020 18:29:33 +0200

Use an xdp_frame structure for managing the frame. Store a backpointer
to the structure at the start of the buffer before enqueueing. Use the
XDP API for freeing the buffer when it returns to the driver on the TX
confirmation path.

This approach will be reused for XDP REDIRECT.

Acked-by: Madalin Bucur
Signed-off-by: Camelia Groza
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 129 ++++++++++++++++++++++++-
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h | 2 +
 2 files changed, 126 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 242ed45..cd5f4f6 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -1130,6 +1130,24 @@ static int dpaa_fq_init(struct dpaa_fq *dpaa_fq, bool td_enable) dpaa_fq->fqid = qman_fq_fqid(fq); + if (dpaa_fq->fq_type == FQ_TYPE_RX_DEFAULT || + dpaa_fq->fq_type == FQ_TYPE_RX_PCD) { + err = xdp_rxq_info_reg(&dpaa_fq->xdp_rxq, dpaa_fq->net_dev, + dpaa_fq->fqid); + if (err) { + dev_err(dev, "xdp_rxq_info_reg failed\n"); + return err; + } + + err = xdp_rxq_info_reg_mem_model(&dpaa_fq->xdp_rxq, + MEM_TYPE_PAGE_ORDER0, NULL); + if (err) { + dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n"); + xdp_rxq_info_unreg(&dpaa_fq->xdp_rxq); + return err; + } + } + return 0; } @@ -1159,6 +1177,10 @@ static int dpaa_fq_free_entry(struct device *dev, struct qman_fq *fq) } } + if (dpaa_fq->fq_type == FQ_TYPE_RX_DEFAULT || + dpaa_fq->fq_type == FQ_TYPE_RX_PCD) + xdp_rxq_info_unreg(&dpaa_fq->xdp_rxq); + qman_destroy_fq(fq); list_del(&dpaa_fq->list); @@ -1625,6 +1647,9 @@ static int dpaa_eth_refill_bpools(struct dpaa_priv *priv) * * Return the skb backpointer, since for S/G frames the buffer containing it * gets freed here.
+ * + * No skb backpointer is set when transmitting XDP frames. Cleanup the buffer + * and return NULL in this case. */ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv, const struct qm_fd *fd, bool ts) @@ -1636,6 +1661,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv, void *vaddr = phys_to_virt(addr); const struct qm_sg_entry *sgt; struct dpaa_eth_swbp *swbp; + struct xdp_frame *xdpf; struct sk_buff *skb; u64 ns; int i; @@ -1664,13 +1690,22 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv, } } else { dma_unmap_single(priv->tx_dma_dev, addr, - priv->tx_headroom + qm_fd_get_length(fd), + qm_fd_get_offset(fd) + qm_fd_get_length(fd), dma_dir); } swbp = (struct dpaa_eth_swbp *)vaddr; skb = swbp->skb; + /* No skb backpointer is set when running XDP. An xdp_frame + * backpointer is saved instead. + */ + if (!skb) { + xdpf = swbp->xdpf; + xdp_return_frame(xdpf); + return NULL; + } + /* DMA unmapping is required before accessing the HW provided info */ if (ts && priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) { @@ -2350,11 +2385,76 @@ static enum qman_cb_dqrr_result rx_error_dqrr(struct qman_portal *portal, return qman_cb_dqrr_consume; } +static int dpaa_xdp_xmit_frame(struct net_device *net_dev, + struct xdp_frame *xdpf) +{ + struct dpaa_priv *priv = netdev_priv(net_dev); + struct rtnl_link_stats64 *percpu_stats; + struct dpaa_percpu_priv *percpu_priv; + struct dpaa_eth_swbp *swbp; + struct netdev_queue *txq; + void *buff_start; + struct qm_fd fd; + dma_addr_t addr; + int err; + + percpu_priv = this_cpu_ptr(priv->percpu_priv); + percpu_stats = &percpu_priv->stats; + + if (xdpf->headroom < DPAA_TX_PRIV_DATA_SIZE) { + err = -EINVAL; + goto out_error; + } + + buff_start = xdpf->data - xdpf->headroom; + + /* Leave empty the skb backpointer at the start of the buffer. + * Save the XDP frame for easy cleanup on confirmation. 
+ */ + swbp = (struct dpaa_eth_swbp *)buff_start; + swbp->skb = NULL; + swbp->xdpf = xdpf; + + qm_fd_clear_fd(&fd); + fd.bpid = FSL_DPAA_BPID_INV; + fd.cmd |= cpu_to_be32(FM_FD_CMD_FCO); + qm_fd_set_contig(&fd, xdpf->headroom, xdpf->len); + + addr = dma_map_single(priv->tx_dma_dev, buff_start, + xdpf->headroom + xdpf->len, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) { + err = -EINVAL; + goto out_error; + } + + qm_fd_addr_set64(&fd, addr); + + /* Bump the trans_start */ + txq = netdev_get_tx_queue(net_dev, smp_processor_id()); + txq->trans_start = jiffies; + + err = dpaa_xmit(priv, percpu_stats, smp_processor_id(), &fd); + if (err) { + dma_unmap_single(priv->tx_dma_dev, addr, + qm_fd_get_offset(&fd) + qm_fd_get_length(&fd), + DMA_TO_DEVICE); + goto out_error; + } + + return 0; + +out_error: + percpu_stats->tx_errors++; + return err; +} + static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, - unsigned int *xdp_meta_len) + struct dpaa_fq *dpaa_fq, unsigned int *xdp_meta_len) { ssize_t fd_off = qm_fd_get_offset(fd); struct bpf_prog *xdp_prog; + struct xdp_frame *xdpf; struct xdp_buff xdp; u32 xdp_act; @@ -2370,7 +2470,8 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, xdp.data_meta = xdp.data; xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM; xdp.data_end = xdp.data + qm_fd_get_length(fd); - xdp.frame_sz = DPAA_BP_RAW_SIZE; + xdp.frame_sz = DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE; + xdp.rxq = &dpaa_fq->xdp_rxq; xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); @@ -2381,6 +2482,22 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, case XDP_PASS: *xdp_meta_len = xdp.data - xdp.data_meta; break; + case XDP_TX: + /* We can access the full headroom when sending the frame + * back out + */ + xdp.data_hard_start = vaddr; + xdp.frame_sz = DPAA_BP_RAW_SIZE; + xdpf = xdp_convert_buff_to_frame(&xdp); + if (unlikely(!xdpf)) { + free_pages((unsigned long)vaddr, 0); + break; + } + + if (dpaa_xdp_xmit_frame(priv->net_dev, xdpf)) + xdp_return_frame_rx_napi(xdpf); + + break; default: bpf_warn_invalid_xdp_action(xdp_act); fallthrough; @@ -2415,6 +2532,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, u32 fd_status, hash_offset; struct qm_sg_entry *sgt; struct dpaa_bp *dpaa_bp; + struct dpaa_fq *dpaa_fq; struct dpaa_priv *priv; struct sk_buff *skb; int *count_ptr; @@ -2423,9 +2541,10 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, u32 hash; u64 ns; + dpaa_fq = container_of(fq, struct dpaa_fq, fq_base); fd_status = be32_to_cpu(fd->status); fd_format = qm_fd_get_format(fd); - net_dev = ((struct dpaa_fq *)fq)->net_dev; + net_dev = dpaa_fq->net_dev; priv = netdev_priv(net_dev); dpaa_bp = dpaa_bpid2pool(dq->fd.bpid); if (!dpaa_bp) @@ -2494,7 +2613,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, if (likely(fd_format == qm_fd_contig)) { xdp_act = dpaa_run_xdp(priv, (struct qm_fd *)fd, vaddr, - &xdp_meta_len); + dpaa_fq, &xdp_meta_len); if (xdp_act != XDP_PASS) { percpu_stats->rx_packets++; percpu_stats->rx_bytes += qm_fd_get_length(fd); diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h index 94e8613..5c8d52a 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h @@ -68,6 +68,7 @@ struct dpaa_fq { u16 channel; u8 wq; enum dpaa_fq_type fq_type; + struct xdp_rxq_info xdp_rxq; }; struct 
dpaa_fq_cbs { @@ -150,6 +151,7 @@ struct dpaa_buffer_layout { */ struct dpaa_eth_swbp { struct sk_buff *skb; + struct xdp_frame *xdpf; }; struct dpaa_priv {

From patchwork Thu Nov 19 16:29:34 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918237
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 5/7] dpaa_eth: add XDP_REDIRECT support
Date: Thu, 19 Nov 2020 18:29:34 +0200

After transmission, the frame is returned on confirmation queues for
cleanup. For this, store a backpointer to the xdp_frame in the private
reserved area at the start of the TX buffer.

No TX batching support is implemented at this time.
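The redirect path can be exercised with a devmap-based program along
these lines. This is an illustrative sketch, not part of the series;
the tx_ports map and its contents are assumptions (a target ifindex
must be populated into slot 0 from user space), and on kernels with the
lookup-in-helper semantics the third bpf_redirect_map() argument acts
as the fallback action:

// SPDX-License-Identifier: GPL-2.0
/* Redirect every frame to the ifindex stored in slot 0 of a devmap;
 * fall back to XDP_PASS if the slot is empty.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, __u32);
} tx_ports SEC(".maps");

SEC("xdp")
int redirect_all(struct xdp_md *ctx)
{
	return bpf_redirect_map(&tx_ports, 0, XDP_PASS);
}

char _license[] SEC("license") = "GPL";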
Acked-by: Madalin Bucur Signed-off-by: Camelia Groza --- drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 48 +++++++++++++++++++++++++- drivers/net/ethernet/freescale/dpaa/dpaa_eth.h | 1 + 2 files changed, 48 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index cd5f4f6..3214ea0 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2305,8 +2305,11 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget) { struct dpaa_napi_portal *np = container_of(napi, struct dpaa_napi_portal, napi); + int cleaned; - int cleaned = qman_p_poll_dqrr(np->p, budget); + np->xdp_act = 0; + + cleaned = qman_p_poll_dqrr(np->p, budget); if (cleaned < budget) { napi_complete_done(napi, cleaned); @@ -2315,6 +2318,9 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget) qman_p_irqsource_add(np->p, QM_PIRQ_DQRI); } + if (np->xdp_act & XDP_REDIRECT) + xdp_do_flush(); + return cleaned; } @@ -2457,6 +2463,7 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, struct xdp_frame *xdpf; struct xdp_buff xdp; u32 xdp_act; + int err; rcu_read_lock(); @@ -2498,6 +2505,17 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, xdp_return_frame_rx_napi(xdpf); break; + case XDP_REDIRECT: + /* Allow redirect to use the full headroom */ + xdp.data_hard_start = vaddr; + xdp.frame_sz = DPAA_BP_RAW_SIZE; + + err = xdp_do_redirect(priv->net_dev, &xdp, xdp_prog); + if (err) { + trace_xdp_exception(priv->net_dev, xdp_prog, xdp_act); + free_pages((unsigned long)vaddr, 0); + } + break; default: bpf_warn_invalid_xdp_action(xdp_act); fallthrough; @@ -2527,6 +2545,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, struct dpaa_percpu_priv *percpu_priv; const struct qm_fd *fd = &dq->fd; dma_addr_t addr = qm_fd_addr(fd); + struct dpaa_napi_portal *np; enum qm_fd_format fd_format; struct net_device *net_dev; u32 fd_status, hash_offset; @@ -2541,6 +2560,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, u32 hash; u64 ns; + np = container_of(&portal, struct dpaa_napi_portal, p); dpaa_fq = container_of(fq, struct dpaa_fq, fq_base); fd_status = be32_to_cpu(fd->status); fd_format = qm_fd_get_format(fd); @@ -2614,6 +2634,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal, if (likely(fd_format == qm_fd_contig)) { xdp_act = dpaa_run_xdp(priv, (struct qm_fd *)fd, vaddr, dpaa_fq, &xdp_meta_len); + np->xdp_act |= xdp_act; if (xdp_act != XDP_PASS) { percpu_stats->rx_packets++; percpu_stats->rx_bytes += qm_fd_get_length(fd); @@ -2942,6 +2963,30 @@ static int dpaa_xdp(struct net_device *net_dev, struct netdev_bpf *xdp) } } +static int dpaa_xdp_xmit(struct net_device *net_dev, int n, + struct xdp_frame **frames, u32 flags) +{ + struct xdp_frame *xdpf; + int i, err, drops = 0; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + if (!netif_running(net_dev)) + return -ENETDOWN; + + for (i = 0; i < n; i++) { + xdpf = frames[i]; + err = dpaa_xdp_xmit_frame(net_dev, xdpf); + if (err) { + xdp_return_frame_rx_napi(xdpf); + drops++; + } + } + + return n - drops; +} + static int dpaa_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) { struct dpaa_priv *priv = netdev_priv(dev); @@ -3010,6 +3055,7 @@ static int dpaa_ioctl(struct net_device *net_dev, struct ifreq *rq, int cmd) .ndo_setup_tc = dpaa_setup_tc, .ndo_change_mtu = 
dpaa_change_mtu, .ndo_bpf = dpaa_xdp, + .ndo_xdp_xmit = dpaa_xdp_xmit, }; static int dpaa_napi_add(struct net_device *net_dev) diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h index 5c8d52a..daf894a 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h @@ -127,6 +127,7 @@ struct dpaa_napi_portal { struct napi_struct napi; struct qman_portal *p; bool down; + int xdp_act; }; struct dpaa_percpu_priv {

From patchwork Thu Nov 19 16:29:35 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918243
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 6/7] dpaa_eth: rename current skb A050385 erratum workaround
Date: Thu, 19 Nov 2020 18:29:35 +0200
Message-Id: <454f789a87f44af29db4d83277fb7e81e6be0b76.1605802951.git.camelia.groza@nxp.com>

Explicitly point out that the current workaround addresses skbs. This
change is in preparation for adding a workaround for XDP scenarios.
Acked-by: Madalin Bucur
Signed-off-by: Camelia Groza
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 3214ea0..b9d46e6 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2105,7 +2105,7 @@ static inline int dpaa_xmit(struct dpaa_priv *priv, } #ifdef CONFIG_DPAA_ERRATUM_A050385 -static int dpaa_a050385_wa(struct net_device *net_dev, struct sk_buff **s) +static int dpaa_a050385_wa_skb(struct net_device *net_dev, struct sk_buff **s) { struct dpaa_priv *priv = netdev_priv(net_dev); struct sk_buff *new_skb, *skb = *s; @@ -2221,7 +2221,7 @@ static int dpaa_a050385_wa(struct net_device *net_dev, struct sk_buff **s) #ifdef CONFIG_DPAA_ERRATUM_A050385 if (unlikely(fman_has_errata_a050385())) { - if (dpaa_a050385_wa(net_dev, &skb)) + if (dpaa_a050385_wa_skb(net_dev, &skb)) goto enomem; nonlinear = skb_is_nonlinear(skb); }

From patchwork Thu Nov 19 16:29:36 2020
X-Patchwork-Submitter: Camelia Alexandra Groza
X-Patchwork-Id: 11918235
From: Camelia Groza
To: kuba@kernel.org, brouer@redhat.com, saeed@kernel.org, davem@davemloft.net
Cc: madalin.bucur@oss.nxp.com, ioana.ciornei@nxp.com, netdev@vger.kernel.org, Camelia Groza
Subject: [PATCH net-next v3 7/7] dpaa_eth: implement the A050385 erratum workaround for XDP
Date: Thu, 19 Nov 2020 18:29:36 +0200
Message-Id: <42fd1691b072a856827436378792ae54183d17ba.1605802951.git.camelia.groza@nxp.com>
For XDP TX, even though we start out with correctly aligned buffers,
the XDP program might change the data's alignment. For REDIRECT, we
have no control over the alignment either.

Create a new workaround for xdp_frame structures to verify the erratum
conditions and move the data to a fresh buffer if necessary. Create a
new xdp_frame for managing the new buffer and free the old one using
the XDP API.

Due to alignment constraints, all frames have a 256-byte headroom that
is offered fully to XDP under the erratum. If the XDP program uses all
of it, the data needs to be moved to make room for the xdpf
backpointer. Disable metadata support since the information can be
lost.

Acked-by: Madalin Bucur
Signed-off-by: Camelia Groza
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 82 +++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index b9d46e6..aaf9112 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2171,6 +2171,52 @@ static int dpaa_a050385_wa_skb(struct net_device *net_dev, struct sk_buff **s) return 0; } + +static int dpaa_a050385_wa_xdpf(struct dpaa_priv *priv, + struct xdp_frame **init_xdpf) +{ + struct xdp_frame *new_xdpf, *xdpf = *init_xdpf; + void *new_buff; + struct page *p; + + /* Check the data alignment and make sure the headroom is large + * enough to store the xdpf backpointer. Use an aligned headroom + * value. + * + * Due to alignment constraints, we give XDP access to the full 256 + * byte frame headroom. If the XDP program uses all of it, copy the + * data to a new buffer and make room for storing the backpointer. + */ + if (PTR_IS_ALIGNED(xdpf->data, DPAA_A050385_ALIGN) && + xdpf->headroom >= priv->tx_headroom) { + xdpf->headroom = priv->tx_headroom; + return 0; + } + + p = dev_alloc_pages(0); + if (unlikely(!p)) + return -ENOMEM; + + /* Copy the data to the new buffer at a properly aligned offset */ + new_buff = page_address(p); + memcpy(new_buff + priv->tx_headroom, xdpf->data, xdpf->len); + + /* Create an XDP frame around the new buffer in a similar fashion + * to xdp_convert_buff_to_frame. + */ + new_xdpf = new_buff; + new_xdpf->data = new_buff + priv->tx_headroom; + new_xdpf->len = xdpf->len; + new_xdpf->headroom = priv->tx_headroom; + new_xdpf->frame_sz = DPAA_BP_RAW_SIZE; + new_xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; + + /* Release the initial buffer */ + xdp_return_frame_rx_napi(xdpf); + + *init_xdpf = new_xdpf; + return 0; +} #endif static netdev_tx_t @@ -2407,6 +2453,15 @@ static int dpaa_xdp_xmit_frame(struct net_device *net_dev, percpu_priv = this_cpu_ptr(priv->percpu_priv); percpu_stats = &percpu_priv->stats; +#ifdef CONFIG_DPAA_ERRATUM_A050385 + if (unlikely(fman_has_errata_a050385())) { + if (dpaa_a050385_wa_xdpf(priv, &xdpf)) { + err = -ENOMEM; + goto out_error; + } + } +#endif + if (xdpf->headroom < DPAA_TX_PRIV_DATA_SIZE) { err = -EINVAL; goto out_error; @@ -2480,6 +2535,20 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, xdp.frame_sz = DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE; xdp.rxq = &dpaa_fq->xdp_rxq; + /* We reserve a fixed headroom of 256 bytes under the erratum and we + * offer it all to XDP programs to use. If no room is left for the + * xdpf backpointer on TX, we will need to copy the data. + * Disable metadata support since data realignments might be required + * and the information can be lost.
+ */ +#ifdef CONFIG_DPAA_ERRATUM_A050385 + if (unlikely(fman_has_errata_a050385())) { + xdp_set_data_meta_invalid(&xdp); + xdp.data_hard_start = vaddr; + xdp.frame_sz = DPAA_BP_RAW_SIZE; + } +#endif + xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); /* Update the length and the offset of the FD */ @@ -2487,7 +2556,8 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr, switch (xdp_act) { case XDP_PASS: - *xdp_meta_len = xdp.data - xdp.data_meta; + *xdp_meta_len = xdp_data_meta_unsupported(&xdp) ? 0 : + xdp.data - xdp.data_meta; break; case XDP_TX: /* We can access the full headroom when sending the frame @@ -3187,10 +3257,16 @@ static u16 dpaa_get_headroom(struct dpaa_buffer_layout *bl, */ headroom = (u16)(bl[port].priv_data_size + DPAA_HWA_SIZE); - if (port == RX) + if (port == RX) { +#ifdef CONFIG_DPAA_ERRATUM_A050385 + if (unlikely(fman_has_errata_a050385())) + headroom = XDP_PACKET_HEADROOM; +#endif + return ALIGN(headroom, DPAA_FD_RX_DATA_ALIGNMENT); - else + } else { return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT); + } } static int dpaa_eth_probe(struct platform_device *pdev)
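As a rough userspace illustration of the realignment step the
workaround performs (the alignment and headroom values are assumptions
for the sketch; the driver's actual logic is dpaa_a050385_wa_xdpf()
above):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ALIGN_REQ 256	/* assumed erratum alignment requirement */
#define HEADROOM  256	/* assumed Tx headroom in the fresh buffer */

int main(void)
{
	unsigned char scratch[1024];
	unsigned char *data = scratch + 3;	/* deliberately misaligned */
	size_t len = 64;

	memset(data, 0xab, len);

	if ((uintptr_t)data % ALIGN_REQ) {
		/* Misaligned: copy into a fresh, aligned buffer while
		 * keeping headroom for the software backpointer area. */
		unsigned char *page = aligned_alloc(ALIGN_REQ, 4096);

		if (!page)
			return 1;
		memcpy(page + HEADROOM, data, len);
		data = page + HEADROOM;
		printf("data realigned to %p\n", (void *)data);
		free(page);
	}

	return 0;
}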