From patchwork Mon Dec 13 15:31:06 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, davem@davemloft.net, magnus.karlsson@intel.com,
    elza.mathew@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 intel-net 1/6] ice: xsk: return xsk buffers back to pool when cleaning the ring
Date: Mon, 13 Dec 2021 16:31:06 +0100
Message-Id: <20211213153111.110877-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20211213153111.110877-1-maciej.fijalkowski@intel.com>

Currently we only set the xdp_buff pointer to NULL in the internal SW ring,
but we never give the buffer back to the xsk buffer pool. This means that
buffers can be leaked out of the pool and never used again.

Add the missing xsk_buff_free() call to the routine that cleans the entries
left in the ring, so that these buffers in the umem can be used by other
sockets.

Also, only walk the part of the ring that is actually left to be cleaned
instead of the whole ring.
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski
Tested-by: Kiran Bhandare (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index bb9a80847298..8593717a755e 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -811,14 +811,14 @@ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi)
  */
 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
 {
-	u16 i;
-
-	for (i = 0; i < rx_ring->count; i++) {
-		struct xdp_buff **xdp = &rx_ring->xdp_buf[i];
+	u16 count_mask = rx_ring->count - 1;
+	u16 ntc = rx_ring->next_to_clean;
+	u16 ntu = rx_ring->next_to_use;
 
-		if (!xdp)
-			continue;
+	for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {
+		struct xdp_buff **xdp = &rx_ring->xdp_buf[ntc];
 
+		xsk_buff_free(*xdp);
 		*xdp = NULL;
 	}
 }
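As an aside for reviewers less familiar with the ring layout: the masked
ntc -> ntu walk introduced above can be modelled in a few lines of
stand-alone C. This is only an illustrative sketch, not driver code; the
ring struct, free_buf() and RING_SIZE are invented for the example, and the
only assumption carried over is that the ring length is a power of two so
that (ntc + 1) & (count - 1) wraps correctly.

#include <stdio.h>

#define RING_SIZE 8	/* must be a power of two for the mask trick */

struct ring {
	void *buf[RING_SIZE];
	unsigned int next_to_clean;
	unsigned int next_to_use;
};

static void free_buf(void *buf)
{
	printf("returning %p to the pool\n", buf);
}

/* Walk only the entries that still hold buffers, i.e. [ntc, ntu). */
static void clean_ring(struct ring *r)
{
	unsigned int mask = RING_SIZE - 1;
	unsigned int ntc = r->next_to_clean;

	for ( ; ntc != r->next_to_use; ntc = (ntc + 1) & mask) {
		free_buf(r->buf[ntc]);
		r->buf[ntc] = NULL;
	}
}

int main(void)
{
	struct ring r = { .next_to_clean = 6, .next_to_use = 2 };
	int data[4];

	/* entries 6, 7, 0, 1 are "filled"; the walk wraps past the end */
	r.buf[6] = &data[0];
	r.buf[7] = &data[1];
	r.buf[0] = &data[2];
	r.buf[1] = &data[3];
	clean_ring(&r);
	return 0;
}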
From patchwork Mon Dec 13 15:31:07 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, davem@davemloft.net, magnus.karlsson@intel.com,
    elza.mathew@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 intel-net 2/6] ice: xsk: allocate separate memory for XDP SW ring
Date: Mon, 13 Dec 2021 16:31:07 +0100
Message-Id: <20211213153111.110877-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20211213153111.110877-1-maciej.fijalkowski@intel.com>

Currently, the zero-copy data path reuses, for its own purposes, the memory
region that was initially allocated for an array of struct ice_rx_buf. This
is error-prone, as it relies on the ice_rx_buf struct always being the same
size or bigger than what the zero-copy path needs. Stale values can also be
present in that array, giving rise to errors when the zero-copy path uses
it.

Fix this by freeing the ice_rx_buf region and allocating a new array for
the zero-copy path that has the right length and is initialized to zero.

Fixes: 57f7f8b6bc0b ("ice: Use xdp_buf instead of rx_buf for xsk zero-copy")
Signed-off-by: Maciej Fijalkowski
Tested-by: Kiran Bhandare (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_base.c | 17 ++++++++++++
 drivers/net/ethernet/intel/ice/ice_txrx.c | 19 ++++++++-----
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 33 ++++++++++++-----------
 3 files changed, 47 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 1efc635cc0f5..fafe020e46ee 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -6,6 +6,18 @@
 #include "ice_lib.h"
 #include "ice_dcb_lib.h"
 
+static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring)
+{
+	rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL);
+	return !!rx_ring->xdp_buf;
+}
+
+static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring)
+{
+	rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
+	return !!rx_ring->rx_buf;
+}
+
 /**
  * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI
  * @qs_cfg: gathered variables needed for PF->VSI queues assignment
@@ -492,8 +504,11 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 			xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, ring->q_index, ring->q_vector->napi.napi_id);
 
+	kfree(ring->rx_buf);
 	ring->xsk_pool = ice_xsk_pool(ring);
 	if (ring->xsk_pool) {
+		if (!ice_alloc_rx_buf_zc(ring))
+			return -ENOMEM;
 		xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
 
 		ring->rx_buf_len =
@@ -508,6 +523,8 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 		dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n", ring->q_index);
 	} else {
+		if (!ice_alloc_rx_buf(ring))
+			return -ENOMEM;
 		if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
 			/* coverity[check_return] */
 			xdp_rxq_info_reg(&ring->xdp_rxq,
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index bc3ba19dc88f..dccf09eefc75 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -419,7 +419,10 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
 	}
 
 rx_skip_free:
-	memset(rx_ring->rx_buf, 0, sizeof(*rx_ring->rx_buf) * rx_ring->count);
+	if (rx_ring->xsk_pool)
+		memset(rx_ring->xdp_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->xdp_buf)));
+	else
+		memset(rx_ring->rx_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->rx_buf)));
 
 	/* Zero out the descriptor ring */
 	size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
@@ -446,8 +449,13 @@ void ice_free_rx_ring(struct ice_rx_ring *rx_ring)
 		if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
 			xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
 	rx_ring->xdp_prog = NULL;
-	devm_kfree(rx_ring->dev, rx_ring->rx_buf);
-	rx_ring->rx_buf = NULL;
+	if (rx_ring->xsk_pool) {
+		kfree(rx_ring->xdp_buf);
+		rx_ring->xdp_buf = NULL;
+	} else {
+		kfree(rx_ring->rx_buf);
+		rx_ring->rx_buf = NULL;
+	}
 
 	if (rx_ring->desc) {
 		size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
@@ -475,8 +483,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
 	/* warn if we are about to overwrite the pointer */
 	WARN_ON(rx_ring->rx_buf);
 	rx_ring->rx_buf =
-		devm_kcalloc(dev, sizeof(*rx_ring->rx_buf), rx_ring->count,
-			     GFP_KERNEL);
+		kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
 	if (!rx_ring->rx_buf)
 		return -ENOMEM;
 
@@ -505,7 +512,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
 	return 0;
 
 err:
-	devm_kfree(dev, rx_ring->rx_buf);
+	kfree(rx_ring->rx_buf);
 	rx_ring->rx_buf = NULL;
 	return -ENOMEM;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 8593717a755e..c124229d98fe 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -12,6 +12,11 @@
 #include "ice_txrx_lib.h"
 #include "ice_lib.h"
 
+static struct xdp_buff **ice_xdp_buf(struct ice_rx_ring *rx_ring, u32 idx)
+{
+	return &rx_ring->xdp_buf[idx];
+}
+
 /**
  * ice_qp_reset_stats - Resets all stats for rings of given index
  * @vsi: VSI that contains rings of interest
@@ -372,7 +377,7 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 	dma_addr_t dma;
 
 	rx_desc = ICE_RX_DESC(rx_ring, ntu);
-	xdp = &rx_ring->xdp_buf[ntu];
+	xdp = ice_xdp_buf(rx_ring, ntu);
 
 	nb_buffs = min_t(u16, count, rx_ring->count - ntu);
 	nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
@@ -419,19 +424,18 @@ static void ice_bump_ntc(struct ice_rx_ring *rx_ring)
 /**
  * ice_construct_skb_zc - Create an sk_buff from zero-copy buffer
  * @rx_ring: Rx ring
- * @xdp_arr: Pointer to the SW ring of xdp_buff pointers
+ * @xdp: Pointer to XDP buffer
  *
  * This function allocates a new skb from a zero-copy Rx buffer.
  *
  * Returns the skb on success, NULL on failure.
  */
 static struct sk_buff *
-ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)
+ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
 {
-	struct xdp_buff *xdp = *xdp_arr;
+	unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
 	unsigned int metasize = xdp->data - xdp->data_meta;
 	unsigned int datasize = xdp->data_end - xdp->data;
-	unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
 	struct sk_buff *skb;
 
 	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
@@ -445,7 +449,6 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)
 	skb_metadata_set(skb, metasize);
 
 	xsk_buff_free(xdp);
-	*xdp_arr = NULL;
 
 	return skb;
 }
@@ -522,7 +525,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
 		unsigned int size, xdp_res = 0;
-		struct xdp_buff **xdp;
+		struct xdp_buff *xdp;
 		struct sk_buff *skb;
 		u16 stat_err_bits;
 		u16 vlan_tag = 0;
@@ -545,18 +548,17 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		if (!size)
 			break;
 
-		xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean];
-		xsk_buff_set_size(*xdp, size);
-		xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool);
+		xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);
+		xsk_buff_set_size(xdp, size);
+		xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);
 
-		xdp_res = ice_run_xdp_zc(rx_ring, *xdp, xdp_prog, xdp_ring);
+		xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring);
 		if (xdp_res) {
 			if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
 				xdp_xmit |= xdp_res;
 			else
-				xsk_buff_free(*xdp);
+				xsk_buff_free(xdp);
 
-			*xdp = NULL;
 			total_rx_bytes += size;
 			total_rx_packets++;
 			cleaned_count++;
@@ -816,10 +818,9 @@ void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
 	u16 ntu = rx_ring->next_to_use;
 
 	for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {
-		struct xdp_buff **xdp = &rx_ring->xdp_buf[ntc];
+		struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);
 
-		xsk_buff_free(*xdp);
-		*xdp = NULL;
+		xsk_buff_free(xdp);
 	}
 }
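For illustration, the allocation scheme this patch moves to (a dedicated,
correctly sized and zeroed array per data path) can be sketched in user
space roughly as below. The struct and function names are invented for the
sketch and calloc() stands in for kcalloc(); only the shape of the logic
mirrors the patch.

#include <stdbool.h>
#include <stdlib.h>

struct sw_ring {
	unsigned int count;
	bool zero_copy;		/* stands in for ring->xsk_pool being set */
	void **xdp_buf;		/* zero-copy path: array of buffer pointers */
	struct { void *page; unsigned int offset; } *rx_buf; /* regular path */
};

static bool alloc_zc_ring(struct sw_ring *r)
{
	r->xdp_buf = calloc(r->count, sizeof(*r->xdp_buf));
	return r->xdp_buf != NULL;
}

static bool alloc_regular_ring(struct sw_ring *r)
{
	r->rx_buf = calloc(r->count, sizeof(*r->rx_buf));
	return r->rx_buf != NULL;
}

/* Each path gets an array sized for its own element type instead of
 * reinterpreting memory that was sized for the other one.
 */
static int configure_ring(struct sw_ring *r)
{
	if (r->zero_copy)
		return alloc_zc_ring(r) ? 0 : -1;
	return alloc_regular_ring(r) ? 0 : -1;
}

int main(void)
{
	struct sw_ring r = { .count = 512, .zero_copy = true };

	return configure_ring(&r);
}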
From patchwork Mon Dec 13 15:31:08 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, davem@davemloft.net, magnus.karlsson@intel.com,
    elza.mathew@intel.com, Alexander Lobakin, Maciej Fijalkowski
Subject: [PATCH v2 intel-net 3/6] ice: remove dead store on XSK hotpath
Date: Mon, 13 Dec 2021 16:31:08 +0100
Message-Id: <20211213153111.110877-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20211213153111.110877-1-maciej.fijalkowski@intel.com>

From: Alexander Lobakin

The 'if (ntu == rx_ring->count)' block in ice_alloc_rx_bufs_zc() used to
reside inside the loop, but after the introduction of the batched interface
it is used only to wrap around the NTU descriptor, so there is no longer
any need to assign 'xdp'.

Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
Signed-off-by: Alexander Lobakin
Acked-by: Maciej Fijalkowski
Tested-by: Kiran Bhandare (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index c124229d98fe..27f5f64dcbd6 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -397,7 +397,6 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 	ntu += nb_buffs;
 	if (ntu == rx_ring->count) {
 		rx_desc = ICE_RX_DESC(rx_ring, 0);
-		xdp = rx_ring->xdp_buf;
 		ntu = 0;
 	}
From patchwork Mon Dec 13 15:31:09 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, davem@davemloft.net, magnus.karlsson@intel.com,
    elza.mathew@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 intel-net 4/6] ice: xsk: do not clear status_error0 for ntu + nb_buffs descriptor
Date: Mon, 13 Dec 2021 16:31:09 +0100
Message-Id: <20211213153111.110877-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20211213153111.110877-1-maciej.fijalkowski@intel.com>

The descriptor that ntu points at when we exit ice_alloc_rx_bufs_zc()
should not have its corresponding DD bit cleared, as that descriptor has
not been allocated there and is not valid for HW usage. The allocation
routine will, on entry, fill the descriptor that ntu points to, after ntu
was set to ntu + nb_buffs on the previous call.

The spec even says: "The tail pointer should be set to one descriptor
beyond the last empty descriptor in host descriptor ring."

Therefore, stop clearing status_error0 on the ntu + nb_buffs descriptor.
Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
Reported-by: Elza Mathew
Signed-off-by: Maciej Fijalkowski
Tested-by: Kiran Bhandare (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 27f5f64dcbd6..ffa9a160766a 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -395,13 +395,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 	}
 
 	ntu += nb_buffs;
-	if (ntu == rx_ring->count) {
-		rx_desc = ICE_RX_DESC(rx_ring, 0);
+	if (ntu == rx_ring->count)
 		ntu = 0;
-	}
 
-	/* clear the status bits for the next_to_use descriptor */
-	rx_desc->wb.status_error0 = 0;
 	ice_release_rx_desc(rx_ring, ntu);
 
 	return count == nb_buffs;
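To make the tail-pointer rule from the spec concrete, here is a small
stand-alone model of a batched refill that touches only the descriptors it
actually filled and then writes the tail as "one past the last filled
slot". All names and the DD-bit encoding are invented for the sketch; this
is not the driver's allocation routine.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8
#define DD_BIT    0x1	/* "descriptor done", written back by HW */

struct desc {
	uint16_t status;
};

static struct desc ring[RING_SIZE];
static unsigned int ntu;	/* next_to_use */

static void release_tail(unsigned int tail)
{
	printf("tail register <- %u\n", tail);
}

/* Fill nb_buffs descriptors starting at ntu and advance the tail.  The
 * descriptor that ntu ends up pointing at is left untouched: it has not
 * been given a buffer yet, so clearing its status would be wrong.
 */
static void alloc_batch(unsigned int nb_buffs)
{
	unsigned int i;

	for (i = 0; i < nb_buffs; i++)
		ring[(ntu + i) & (RING_SIZE - 1)].status = 0;

	ntu = (ntu + nb_buffs) & (RING_SIZE - 1);
	release_tail(ntu);
}

int main(void)
{
	ring[2].status = DD_BIT;	/* pretend slot 2 holds an earlier completion */
	alloc_batch(2);			/* fills slots 0 and 1 only */
	printf("slot 2 status still 0x%x\n", ring[2].status);
	return 0;
}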
From patchwork Mon Dec 13 15:31:10 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, davem@davemloft.net, magnus.karlsson@intel.com,
    elza.mathew@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 intel-net 5/6] ice: xsk: allow empty Rx descriptors on XSK ZC data path
Date: Mon, 13 Dec 2021 16:31:10 +0100
Message-Id: <20211213153111.110877-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20211213153111.110877-1-maciej.fijalkowski@intel.com>

Commit ac6f733a7bd5 ("ice: allow empty Rx descriptors") stated that ice HW
can produce empty descriptors that are valid and should be processed. Add
this support to the xsk ZC path to avoid potential processing problems.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Kiran Bhandare (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index ffa9a160766a..c1491dc0675d 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -538,12 +538,18 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		 */
 		dma_rmb();
 
+		xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);
+
 		size = le16_to_cpu(rx_desc->wb.pkt_len) & ICE_RX_FLX_DESC_PKT_LEN_M;
-		if (!size)
-			break;
+		if (!size) {
+			xdp->data = NULL;
+			xdp->data_end = NULL;
+			xdp->data_hard_start = NULL;
+			xdp->data_meta = NULL;
+			goto construct_skb;
+		}
 
-		xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);
 		xsk_buff_set_size(xdp, size);
 		xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);
 
@@ -561,7 +567,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 			ice_bump_ntc(rx_ring);
 			continue;
 		}
-
+construct_skb:
 		/* XDP_PASS path */
 		skb = ice_construct_skb_zc(rx_ring, xdp);
 		if (!skb) {
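The control-flow change can be pictured with a tiny stand-alone model: a
zero-length completion is no longer a reason to stop the loop, it is
consumed and handed to the skb path as an empty frame. The types and
helpers below are made up for the sketch and ignore the XDP verdict
handling entirely.

#include <stdio.h>

struct buf {
	const char *data;
	unsigned int len;
};

static void build_skb_from(const struct buf *b)
{
	printf("skb with %u byte(s) of data\n", b->len);
}

static void process(struct buf *completions, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		struct buf *b = &completions[i];

		if (b->len == 0) {
			/* empty but valid descriptor: clear the payload
			 * pointer and still go through the skb path
			 */
			b->data = NULL;
			build_skb_from(b);
			continue;
		}
		build_skb_from(b);
	}
}

int main(void)
{
	struct buf c[3] = {
		{ "abc", 3 },
		{ "",    0 },	/* empty descriptor, previously dropped */
		{ "de",  2 },
	};

	process(c, 3);
	return 0;
}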
From patchwork Mon Dec 13 15:31:11 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, davem@davemloft.net, magnus.karlsson@intel.com,
    elza.mathew@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 intel-net 6/6] ice: xsk: fix cleaned_count setting
Date: Mon, 13 Dec 2021 16:31:11 +0100
Message-Id: <20211213153111.110877-7-maciej.fijalkowski@intel.com>
In-Reply-To: <20211213153111.110877-1-maciej.fijalkowski@intel.com>

Currently cleaned_count is initialized to ICE_DESC_UNUSED(rx_ring) and is
then incremented during Rx processing for each frame the driver consumes.
This can result in requesting an excessive number of buffers from the xsk
pool based on that value. To address this, drop cleaned_count and pass
ICE_DESC_UNUSED(rx_ring) directly as a function argument to
ice_alloc_rx_bufs_zc(). The idea is to ask for only as many buffers as were
consumed.

Also call ice_alloc_rx_bufs_zc() unconditionally at the end of
ice_clean_rx_irq_zc(); the corresponding ice_clean_rx_irq() was already
changed this way, but this function was not.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Kiran Bhandare (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_txrx.h | 1 -
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 6 +-----
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index c56dd1749903..b7b3bd4816f0 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -24,7 +24,6 @@
 #define ICE_MAX_DATA_PER_TXD_ALIGNED \
 	(~(ICE_MAX_READ_REQ_SIZE - 1) & ICE_MAX_DATA_PER_TXD)
 
-#define ICE_RX_BUF_WRITE	16	/* Must be power of 2 */
 #define ICE_MAX_TXQ_PER_TXQG	128
 
 /* Attempt to maximize the headroom available for incoming frames. We use a 2K
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index c1491dc0675d..c895351b25e0 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -505,7 +505,6 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
 	struct ice_tx_ring *xdp_ring;
 	unsigned int xdp_xmit = 0;
 	struct bpf_prog *xdp_prog;
@@ -562,7 +561,6 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 
 			total_rx_bytes += size;
 			total_rx_packets++;
-			cleaned_count++;
 
 			ice_bump_ntc(rx_ring);
 			continue;
@@ -575,7 +573,6 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 			break;
 		}
 
-		cleaned_count++;
 		ice_bump_ntc(rx_ring);
 
 		if (eth_skb_pad(skb)) {
@@ -597,8 +594,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		ice_receive_skb(rx_ring, skb, vlan_tag);
 	}
 
-	if (cleaned_count >= ICE_RX_BUF_WRITE)
-		failure = !ice_alloc_rx_bufs_zc(rx_ring, cleaned_count);
+	failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));
 
 	ice_finalize_xdp_rx(xdp_ring, xdp_xmit);
 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
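A stand-alone sketch of the "refill exactly what was consumed" idea
follows. desc_unused() mirrors what ICE_DESC_UNUSED() computes from
next_to_clean and next_to_use, but the struct, refill() and the numbers are
invented for the example; it is not driver code.

#include <stdio.h>

struct ring {
	unsigned int count;
	unsigned int next_to_clean;
	unsigned int next_to_use;
};

/* Descriptors the driver may hand back to the pool/hardware right now. */
static unsigned int desc_unused(const struct ring *r)
{
	return ((r->next_to_clean > r->next_to_use) ? 0 : r->count) +
	       r->next_to_clean - r->next_to_use - 1;
}

static void refill(struct ring *r, unsigned int n)
{
	printf("requesting %u buffers from the pool\n", n);
	r->next_to_use = (r->next_to_use + n) % r->count;
}

int main(void)
{
	struct ring r = { .count = 512, .next_to_clean = 40, .next_to_use = 10 };

	/* refill exactly what has been consumed, no separate counter */
	refill(&r, desc_unused(&r));
	return 0;
}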