From patchwork Mon Dec 14 15:13:01 2020
X-Patchwork-Id: 11972347
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 1/8] i40e: drop redundant check when setting xdp prog
Date: Mon, 14 Dec 2020 16:13:01 +0100
Message-Id: <20201214151308.15275-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

Net core already handles the case where a netdev has no XDP prog
attached and the incoming prog is NULL. Therefore, remove this
redundant check from i40e_xdp_setup.

Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_main.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 1337686bd099..5d494dd66d31 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -12452,9 +12452,6 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi,
 	if (frame_size > vsi->rx_buf_len)
 		return -EINVAL;
 
-	if (!i40e_enabled_xdp_vsi(vsi) && !prog)
-		return 0;
-
 	/* When turning XDP on->off/off->on we reset and rebuild the rings. */
 	need_reset = (i40e_enabled_xdp_vsi(vsi) != !!prog);

From patchwork Mon Dec 14 15:13:02 2020
X-Patchwork-Id: 11972331
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 2/8] i40e: drop misleading function comments
Date: Mon, 14 Dec 2020 16:13:02 +0100
Message-Id: <20201214151308.15275-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

The comment on i40e_cleanup_headers refers to a check against the skb
being linear, which is no longer performed there, so remove that part.
The same goes for i40e_can_reuse_rx_page: its comment describes logic
that is no longer present, so rewrite it to match the current code.
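To make the shortened comment concrete: the reuse decision boils down
to a recycling check along the lines of the sketch below. This is a
condensed illustration, not a quote of the full function body; the
real code also handles the large-page (PAGE_SIZE >= 8192) layout and
refreshes pagecnt_bias after a successful check.

	static bool can_reuse_rx_page_sketch(struct i40e_rx_buffer *rx_buffer)
	{
		unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
		struct page *page = rx_buffer->page;

		/* page must be usable at all (e.g. not a reserved page) */
		if (unlikely(!i40e_page_is_reusable(page)))
			return false;

		/* if anyone other than the driver still holds a reference,
		 * the other half of the page is busy and cannot be recycled
		 */
		if (unlikely((page_count(page) - pagecnt_bias) > 1))
			return false;

		return true;
	}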
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 33 ++++-----------------
 1 file changed, 6 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 9f73cd7aee09..1dbb8efa50ad 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1809,9 +1809,6 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring,
  * @skb: pointer to current skb being fixed
  * @rx_desc: pointer to the EOP Rx descriptor
  *
- * Also address the case where we are pulling data in on pages only
- * and as such no data is present in the skb header.
- *
  * In addition if skb is not at least 60 bytes we need to pad it so that
  * it is large enough to qualify as a valid Ethernet frame.
  *
@@ -1857,32 +1854,14 @@ static inline bool i40e_page_is_reusable(struct page *page)
 }
 
 /**
- * i40e_can_reuse_rx_page - Determine if this page can be reused by
- * the adapter for another receive
- *
+ * i40e_can_reuse_rx_page - Determine if page can be reused for another Rx
  * @rx_buffer: buffer containing the page
  *
- * If page is reusable, rx_buffer->page_offset is adjusted to point to
- * an unused region in the page.
- *
- * For small pages, @truesize will be a constant value, half the size
- * of the memory at page. We'll attempt to alternate between high and
- * low halves of the page, with one half ready for use by the hardware
- * and the other half being consumed by the stack. We use the page
- * ref count to determine whether the stack has finished consuming the
- * portion of this page that was passed up with a previous packet. If
- * the page ref count is >1, we'll assume the "other" half page is
- * still busy, and this page cannot be reused.
- *
- * For larger pages, @truesize will be the actual space used by the
- * received packet (adjusted upward to an even multiple of the cache
- * line size). This will advance through the page by the amount
- * actually consumed by the received packets while there is still
- * space for a buffer. Each region of larger pages will be used at
- * most once, after which the page will not be reused.
- *
- * In either case, if the page is reusable its refcount is increased.
- **/
+ * If page is reusable, we have a green light for calling i40e_reuse_rx_page,
+ * which will assign the current buffer to the buffer that next_to_alloc is
+ * pointing to; otherwise, the DMA mapping needs to be destroyed and
+ * page freed
+ */
 static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer)
 {
 	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;

From patchwork Mon Dec 14 15:13:03 2020
X-Patchwork-Id: 11972333
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 3/8] i40e: adjust i40e_is_non_eop
Date: Mon, 14 Dec 2020 16:13:03 +0100
Message-Id: <20201214151308.15275-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

i40e_is_non_eop had a leftover comment and an unused skb argument that
was once used for placing the skb onto rx_buf when the current buffer
was a non-EOP one. This is no longer relevant since commit e72e56597ba1
("i40e/i40evf: Moves skb from i40e_rx_buffer to i40e_ring") pulled the
handling of incomplete skbs out of rx_bufs and up to the rx_ring.
Therefore, adjust the arguments that i40e_is_non_eop takes; the
descriptor-advance logic being dropped from it is sketched below.
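For reference, that next_to_clean bump reduces to a helper like this
(reconstructed from the lines removed in the hunk below; illustrative,
not a verbatim quote of the driver's i40e_inc_ntc):

	static void i40e_inc_ntc(struct i40e_ring *rx_ring)
	{
		u32 ntc = rx_ring->next_to_clean + 1;

		/* fetch, update, and store next to clean, wrapping at ring end */
		ntc = (ntc < rx_ring->count) ? ntc : 0;
		rx_ring->next_to_clean = ntc;

		/* warm up the next descriptor before it is processed */
		prefetch(I40E_RX_DESC(rx_ring, ntc));
	}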
Furthermore, since there is already a function responsible for bumping
the ntc, make use of it and drop that logic from i40e_is_non_eop, so
that the scope of the function is limited to what its name actually
states.

Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 23 ++++++---------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 1dbb8efa50ad..205dc2124e96 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2118,25 +2118,13 @@ static void i40e_put_rx_buffer(struct i40e_ring *rx_ring,
  * i40e_is_non_eop - process handling of non-EOP buffers
  * @rx_ring: Rx ring being processed
  * @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
  *
- * This function updates next to clean. If the buffer is an EOP buffer
- * this function exits returning false, otherwise it will place the
- * sk_buff in the next buffer to be chained and return true indicating
- * that this is in fact a non-EOP buffer.
- **/
+ * If the buffer is an EOP buffer, this function exits returning false,
+ * otherwise return true indicating that this is in fact a non-EOP buffer.
+ */
 static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
-			    union i40e_rx_desc *rx_desc,
-			    struct sk_buff *skb)
+			    union i40e_rx_desc *rx_desc)
 {
-	u32 ntc = rx_ring->next_to_clean + 1;
-
-	/* fetch, update, and store next to clean */
-	ntc = (ntc < rx_ring->count) ? ntc : 0;
-	rx_ring->next_to_clean = ntc;
-
-	prefetch(I40E_RX_DESC(rx_ring, ntc));
-
 	/* if we are the last buffer then there is nothing else to do */
 #define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
 	if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
@@ -2414,7 +2402,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 		i40e_put_rx_buffer(rx_ring, rx_buffer);
 		cleaned_count++;
 
-		if (i40e_is_non_eop(rx_ring, rx_desc, skb))
+		i40e_inc_ntc(rx_ring);
+		if (i40e_is_non_eop(rx_ring, rx_desc))
 			continue;
 
 		if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {

From patchwork Mon Dec 14 15:13:04 2020
X-Patchwork-Id: 11972335
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 4/8] ice: simplify ice_run_xdp
Date: Mon, 14 Dec 2020 16:13:04 +0100
Message-Id: <20201214151308.15275-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

There's no need for the 'result' variable; we can directly return the
internal status based on the action returned by the XDP prog.

Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 77d5eae6b4c2..8b5d23436904 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -537,22 +537,20 @@ static int
 ice_run_xdp(struct ice_ring *rx_ring, struct xdp_buff *xdp,
 	    struct bpf_prog *xdp_prog)
 {
-	int err, result = ICE_XDP_PASS;
 	struct ice_ring *xdp_ring;
+	int err;
 	u32 act;
 
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 	switch (act) {
 	case XDP_PASS:
-		break;
+		return ICE_XDP_PASS;
 	case XDP_TX:
 		xdp_ring = rx_ring->vsi->xdp_rings[smp_processor_id()];
-		result = ice_xmit_xdp_buff(xdp, xdp_ring);
-		break;
+		return ice_xmit_xdp_buff(xdp, xdp_ring);
 	case XDP_REDIRECT:
 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
-		break;
+		return !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		fallthrough;
@@ -560,11 +558,8 @@ ice_run_xdp(struct ice_ring *rx_ring, struct xdp_buff *xdp,
 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
-		result = ICE_XDP_CONSUMED;
-		break;
+		return ICE_XDP_CONSUMED;
 	}
-
-	return result;
 }
 
 /**

From patchwork Mon Dec 14 15:13:05 2020
X-Patchwork-Id: 11972345
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 5/8] ice: move skb pointer from rx_buf to rx_ring
Date: Mon, 14 Dec 2020 16:13:05 +0100
Message-Id: <20201214151308.15275-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

The same thing has already been done in i40e: there is no real need to
keep an sk_buff pointer in each rx_buf, since only one frame can be in
progress on a ring at a time. Non-EOP frames can be handled just as
well through a single pointer moved up to the rx_ring.
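As a rough sketch of the resulting flow (condensed and illustrative;
names follow the diff below, while XDP and error handling are left
out), the single ring-level pointer is enough because the in-progress
skb only has to survive across NAPI poll boundaries:

	static int ice_clean_rx_irq_sketch(struct ice_ring *rx_ring, int budget)
	{
		struct sk_buff *skb = rx_ring->skb;	/* resume a partial frame */
		int total_rx_pkts = 0;

		while (total_rx_pkts < budget) {
			/* ... build or extend skb from the next descriptor ... */

			if (ice_is_non_eop(rx_ring, rx_desc))
				continue;	/* keep accumulating fragments */

			/* frame completed: hand it to the stack and forget it */
			ice_receive_skb(rx_ring, skb, vlan_tag);
			skb = NULL;
			total_rx_pkts++;
		}

		/* park a possibly incomplete frame until the next poll */
		rx_ring->skb = skb;
		return total_rx_pkts;
	}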
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 29 ++++++++++-------------
 drivers/net/ethernet/intel/ice/ice_txrx.h |  2 +-
 2 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8b5d23436904..0ad89ee48c09 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -375,6 +375,11 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)
 	if (!rx_ring->rx_buf)
 		return;
 
+	if (rx_ring->skb) {
+		dev_kfree_skb(rx_ring->skb);
+		rx_ring->skb = NULL;
+	}
+
 	if (rx_ring->xsk_pool) {
 		ice_xsk_clean_rx_ring(rx_ring);
 		goto rx_skip_free;
@@ -384,10 +389,6 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)
 	for (i = 0; i < rx_ring->count; i++) {
 		struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i];
 
-		if (rx_buf->skb) {
-			dev_kfree_skb(rx_buf->skb);
-			rx_buf->skb = NULL;
-		}
 		if (!rx_buf->page)
 			continue;
 
@@ -857,21 +858,18 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
 /**
  * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
  * @rx_ring: Rx descriptor ring to transact packets on
- * @skb: skb to be used
  * @size: size of buffer to add to skb
  *
  * This function will pull an Rx buffer from the ring and synchronize it
  * for use by the CPU.
  */
 static struct ice_rx_buf *
-ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,
-	       const unsigned int size)
+ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size)
 {
 	struct ice_rx_buf *rx_buf;
 
 	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
 	prefetchw(rx_buf->page);
-	*skb = rx_buf->skb;
 
 	if (!size)
 		return rx_buf;
@@ -1030,29 +1028,24 @@ static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
 
 	/* clear contents of buffer_info */
 	rx_buf->page = NULL;
-	rx_buf->skb = NULL;
 }
 
 /**
  * ice_is_non_eop - process handling of non-EOP buffers
  * @rx_ring: Rx ring being processed
  * @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
  *
  * If the buffer is an EOP buffer, this function exits returning false,
  * otherwise return true indicating that this is in fact a non-EOP buffer.
  */
 static bool
-ice_is_non_eop(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
-	       struct sk_buff *skb)
+ice_is_non_eop(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
 {
 	/* if we are the last buffer then there is nothing else to do */
 #define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)
 	if (likely(ice_test_staterr(rx_desc, ICE_RXD_EOF)))
 		return false;
 
-	/* place skb in next buffer to be received */
-	rx_ring->rx_buf[rx_ring->next_to_clean].skb = skb;
 	rx_ring->rx_stats.non_eop_descs++;
 
 	return true;
@@ -1075,6 +1068,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
 	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
 	unsigned int xdp_res, xdp_xmit = 0;
+	struct sk_buff *skb = rx_ring->skb;
 	struct bpf_prog *xdp_prog = NULL;
 	struct xdp_buff xdp;
 	bool failure;
@@ -1089,7 +1083,6 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
 		struct ice_rx_buf *rx_buf;
-		struct sk_buff *skb;
 		unsigned int size;
 		u16 stat_err_bits;
 		u16 vlan_tag = 0;
@@ -1123,7 +1116,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 			ICE_RX_FLX_DESC_PKT_LEN_M;
 
 		/* retrieve a buffer from the ring */
-		rx_buf = ice_get_rx_buf(rx_ring, &skb, size);
+		rx_buf = ice_get_rx_buf(rx_ring, size);
 
 		if (!size) {
 			xdp.data = NULL;
@@ -1186,7 +1179,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		cleaned_count++;
 
 		/* skip if it is NOP desc */
-		if (ice_is_non_eop(rx_ring, rx_desc, skb))
+		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
 		stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
@@ -1216,6 +1209,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 
 		/* send completed skb up the stack */
 		ice_receive_skb(rx_ring, skb, vlan_tag);
+		skb = NULL;
 
 		/* update budget accounting */
 		total_rx_pkts++;
@@ -1226,6 +1220,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	if (xdp_prog)
 		ice_finalize_xdp_rx(rx_ring, xdp_xmit);
 
+	rx_ring->skb = skb;
 	ice_update_rx_ring_stats(rx_ring, total_rx_pkts, total_rx_bytes);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index ff1a1cbd078e..c77dbbb760cd 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -165,7 +165,6 @@ struct ice_tx_offload_params {
 struct ice_rx_buf {
 	union {
 		struct {
-			struct sk_buff *skb;
 			dma_addr_t dma;
 			struct page *page;
 			unsigned int page_offset;
@@ -298,6 +297,7 @@ struct ice_ring {
 	struct xsk_buff_pool *xsk_pool;
 	/* CL3 - 3rd cacheline starts here */
 	struct xdp_rxq_info xdp_rxq;
+	struct sk_buff *skb;
 	/* CLX - the below items are only accessed infrequently and should be
 	 * in their own cache line if possible
 	 */

From patchwork Mon Dec 14 15:13:06 2020
X-Patchwork-Id: 11972341
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 6/8] ice: remove redundant checks in ice_change_mtu
Date: Mon, 14 Dec 2020 16:13:06 +0100
Message-Id: <20201214151308.15275-7-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

dev_validate_mtu checks that the MTU value specified by the user is
not less than the netdev's minimum MTU and not greater than its
maximum allowed MTU. This is done in the net core before the driver's
ndo_change_mtu is called, so the equivalent checks in ice_change_mtu
are redundant; remove them.

Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_main.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 2dea4d0e9415..476e20af7309 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -6126,15 +6126,6 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
 		}
 	}
 
-	if (new_mtu < (int)netdev->min_mtu) {
-		netdev_err(netdev, "new MTU invalid. min_mtu is %d\n",
-			   netdev->min_mtu);
-		return -EINVAL;
-	} else if (new_mtu > (int)netdev->max_mtu) {
-		netdev_err(netdev, "new MTU invalid. max_mtu is %d\n",
-			   netdev->min_mtu);
-		return -EINVAL;
-	}
 	/* if a reset is in progress, wait for some time for it to complete */
 	do {
 		if (ice_is_reset_in_progress(pf->state)) {

From patchwork Mon Dec 14 15:13:07 2020
X-Patchwork-Id: 11972337
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v2 net-next 7/8] ice: skip NULL check against XDP prog in ZC path
Date: Mon, 14 Dec 2020 16:13:07 +0100
Message-Id: <20201214151308.15275-8-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

The zero-copy variant of the clean Rx IRQ routine is executed only
when an xsk_pool is attached to the rx_ring, which in turn can happen
only when an XDP program is present on the interface. It is therefore
safe to assume that the program is never NULL, and there is no need to
check it in ice_run_xdp_zc.
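Purely as an illustration of the invariant the change relies on (this
is not part of the patch, and the assertion below is hypothetical), a
debug-only guard in ice_run_xdp_zc would look like this rather than a
per-packet early return:

	rcu_read_lock();
	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
	/* pool attached implies prog loaded, so this should never fire */
	WARN_ON_ONCE(!xdp_prog);
	act = bpf_prog_run_xdp(xdp_prog, xdp);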
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 797886524054..9aea97ca4a04 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -514,11 +514,10 @@ ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
 	u32 act;
 
 	rcu_read_lock();
+	/* ZC path is enabled only when XDP program is set,
+	 * so here it can not be NULL
+	 */
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
-	if (!xdp_prog) {
-		rcu_read_unlock();
-		return ICE_XDP_PASS;
-	}
 
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 	switch (act) {

From patchwork Mon Dec 14 15:13:08 2020
X-Patchwork-Id: 11972339
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com
Subject: [PATCH v2 net-next 8/8] i40e, xsk: Simplify the do-while allocation loop
Date: Mon, 14 Dec 2020 16:13:08 +0100
Message-Id: <20201214151308.15275-9-maciej.fijalkowski@intel.com>
In-Reply-To: <20201214151308.15275-1-maciej.fijalkowski@intel.com>
References: <20201214151308.15275-1-maciej.fijalkowski@intel.com>

From: Björn Töpel

Fold the count decrement into the while-statement.
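The two forms are equivalent (generic C below, not driver code): the
pre-decrement in the loop condition happens after the body has run,
exactly like the standalone decrement did, and both variants share the
do-while property of executing the body at least once.

	#include <stdio.h>

	int main(void)
	{
		unsigned int count = 5, i = 0;

		/* original shape: decrement as a separate statement */
		do {
			printf("a:%u\n", i++);
			count--;
		} while (count);

		/* folded shape: same iteration count, one statement less */
		count = 5;
		i = 0;
		do {
			printf("b:%u\n", i++);
		} while (--count);

		return 0;
	}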
Signed-off-by: Björn Töpel
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index bfa84bfb0488..679200d94ef8 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -215,9 +215,7 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
 			bi = i40e_rx_bi(rx_ring, 0);
 			ntu = 0;
 		}
-
-		count--;
-	} while (count);
+	} while (--count);
 
 no_buffers:
 	if (rx_ring->next_to_use != ntu)