From patchwork Wed Aug 7 10:53:24 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13756142
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
    magnus.karlsson@intel.com, bjorn@kernel.org, luizcap@redhat.com,
    Maciej Fijalkowski
Subject: [PATCH iwl-net 1/3] ice: fix page reuse when PAGE_SIZE is over 8k
Date: Wed, 7 Aug 2024 12:53:24 +0200
Message-Id: <20240807105326.86665-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20240807105326.86665-1-maciej.fijalkowski@intel.com>
References: <20240807105326.86665-1-maciej.fijalkowski@intel.com>

Architectures that have PAGE_SIZE >= 8192, such as arm64, should act the
same as x86 currently does, meaning reuse of a page should
only take place when no one else is busy with it.

Do two things independently of underlying PAGE_SIZE:
- store the page count under ice_rx_buf::pgcnt
- then act upon its value vs ice_rx_buf::pagecnt_bias when making the
  decision regarding page reuse

Fixes: 2b245cb29421 ("ice: Implement transmit and NAPI support")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8d25b6981269..50211188c1a7 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -837,16 +837,15 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
 	if (!dev_page_is_reusable(page))
 		return false;
 
-#if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
 	if (unlikely(rx_buf->pgcnt - pagecnt_bias > 1))
 		return false;
-#else
+#if (PAGE_SIZE >= 8192)
 #define ICE_LAST_OFFSET \
 	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_2048)
 	if (rx_buf->page_offset > ICE_LAST_OFFSET)
 		return false;
-#endif /* PAGE_SIZE < 8192) */
+#endif /* PAGE_SIZE >= 8192) */
 
 	/* If we have drained the page fragment pool we need to update
 	 * the pagecnt_bias and page count so that we fully restock the
@@ -949,12 +948,7 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 	struct ice_rx_buf *rx_buf;
 
 	rx_buf = &rx_ring->rx_buf[ntc];
-	rx_buf->pgcnt =
-#if (PAGE_SIZE < 8192)
-		page_count(rx_buf->page);
-#else
-		0;
-#endif
+	rx_buf->pgcnt = page_count(rx_buf->page);
 	prefetchw(rx_buf->page);
 
 	if (!size)
From patchwork Wed Aug 7 10:53:25 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13756143
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
    magnus.karlsson@intel.com, bjorn@kernel.org, luizcap@redhat.com,
    Maciej Fijalkowski
Subject: [PATCH iwl-net 2/3] ice: fix ICE_LAST_OFFSET formula
Date: Wed, 7 Aug 2024 12:53:25 +0200
Message-Id: <20240807105326.86665-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20240807105326.86665-1-maciej.fijalkowski@intel.com>
References: <20240807105326.86665-1-maciej.fijalkowski@intel.com>

For bigger PAGE_SIZE archs, the ice driver works on 3k Rx buffers.
Therefore, ICE_LAST_OFFSET should take into account ICE_RXBUF_3072, not
ICE_RXBUF_2048.
Fixes: 7237f5b0dba4 ("ice: introduce legacy Rx flag")
Suggested-by: Luiz Capitulino
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 50211188c1a7..4b690952bb40 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -842,7 +842,7 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
 		return false;
 #if (PAGE_SIZE >= 8192)
 #define ICE_LAST_OFFSET \
-	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_2048)
+	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_3072)
 	if (rx_buf->page_offset > ICE_LAST_OFFSET)
 		return false;
 #endif /* PAGE_SIZE >= 8192) */

From patchwork Wed Aug 7 10:53:26 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13756144
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
    magnus.karlsson@intel.com, bjorn@kernel.org, luizcap@redhat.com,
    Maciej Fijalkowski
Subject: [PATCH iwl-net 3/3] ice: fix truesize operations for PAGE_SIZE >= 8192
Date: Wed, 7 Aug 2024 12:53:26 +0200
Message-Id: <20240807105326.86665-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20240807105326.86665-1-maciej.fijalkowski@intel.com>
References: <20240807105326.86665-1-maciej.fijalkowski@intel.com>

When working on a multi-buffer packet on an arch that has PAGE_SIZE >=
8192, truesize is calculated and stored in xdp_buff::frame_sz for each
processed Rx buffer. This means that frame_sz will contain the truesize
based on the last received buffer, but commit 1dc1a7e7f410 ("ice:
Centrallize Rx buffer recycling") assumed this value would be constant
for each buffer, which breaks the page recycling scheme and messes up
the way we update page::page_offset.

To fix this, let us work on a constant truesize when PAGE_SIZE >= 8192
instead of basing it on the size of a packet read from the Rx
descriptor. This way we can simplify the code and avoid calculating
truesize for each received frame; on top of that, when using
xdp_update_skb_shared_info(), the current formula for the truesize
update will be valid. This means ice_rx_frame_truesize() can be removed
altogether.

Furthermore, the first call to it within ice_clean_rx_irq() for 4k
PAGE_SIZE was redundant, as xdp_buff::frame_sz is initialized via
xdp_init_buff() in ice_vsi_cfg_rxq(). This should have been removed at
the point where the xdp_buff struct became a member of ice_rx_ring and
was no longer a stack-based variable.
There are two Fixes tags, as my understanding is that the first one
exposed us to broken truesize and page_offset handling, and the second
one introduced the broken skb_shared_info update in
ice_{construct,build}_skb().

Reported-and-tested-by: Luiz Capitulino
Closes: https://lore.kernel.org/netdev/8f9e2a5c-fd30-4206-9311-946a06d031bb@redhat.com/
Fixes: 1dc1a7e7f410 ("ice: Centrallize Rx buffer recycling")
Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_base.c | 21 ++++++++++++++-
 drivers/net/ethernet/intel/ice/ice_txrx.c | 33 -----------------------
 2 files changed, 20 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 1facf179a96f..f448d3a84564 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -512,6 +512,25 @@ static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
 	xsk_pool_fill_cb(ring->xsk_pool, &desc);
 }
 
+/**
+ * ice_get_frame_sz - calculate xdp_buff::frame_sz
+ * @rx_ring: the ring being configured
+ *
+ * Return frame size based on underlying PAGE_SIZE
+ */
+static unsigned int ice_get_frame_sz(struct ice_rx_ring *rx_ring)
+{
+	unsigned int frame_sz;
+
+#if (PAGE_SIZE >= 8192)
+	frame_sz = rx_ring->rx_buf_len;
+#else
+	frame_sz = ice_rx_pg_size(rx_ring) / 2;
+#endif
+
+	return frame_sz;
+}
+
 /**
  * ice_vsi_cfg_rxq - Configure an Rx queue
  * @ring: the ring being configured
@@ -576,7 +595,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 		}
 	}
 
-	xdp_init_buff(&ring->xdp, ice_rx_pg_size(ring) / 2, &ring->xdp_rxq);
+	xdp_init_buff(&ring->xdp, ice_get_frame_sz(ring), &ring->xdp_rxq);
 	ring->xdp.data = NULL;
 	ring->xdp_ext.pkt_ctx = &ring->pkt_ctx;
 	err = ice_setup_rx_ctx(ring);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 4b690952bb40..c9bc3f1add5d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -521,30 +521,6 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
 	return -ENOMEM;
 }
 
-/**
- * ice_rx_frame_truesize
- * @rx_ring: ptr to Rx ring
- * @size: size
- *
- * calculate the truesize with taking into the account PAGE_SIZE of
- * underlying arch
- */
-static unsigned int
-ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, const unsigned int size)
-{
-	unsigned int truesize;
-
-#if (PAGE_SIZE < 8192)
-	truesize = ice_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
-#else
-	truesize = rx_ring->rx_offset ?
-		SKB_DATA_ALIGN(rx_ring->rx_offset + size) +
-		SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
-		SKB_DATA_ALIGN(size);
-#endif
-	return truesize;
-}
-
 /**
  * ice_run_xdp - Executes an XDP program on initialized xdp_buff
  * @rx_ring: Rx ring
@@ -1154,11 +1130,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 	bool failure;
 	u32 first;
 
-	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
-#if (PAGE_SIZE < 8192)
-	xdp->frame_sz = ice_rx_frame_truesize(rx_ring, 0);
-#endif
-
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
 	if (xdp_prog) {
 		xdp_ring = rx_ring->xdp_ring;
@@ -1217,10 +1188,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			hard_start = page_address(rx_buf->page) +
 				     rx_buf->page_offset - offset;
 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
-#if (PAGE_SIZE > 4096)
-			/* At larger PAGE_SIZE, frame_sz depend on len size */
-			xdp->frame_sz = ice_rx_frame_truesize(rx_ring, size);
-#endif
 			xdp_buff_clear_frags_flag(xdp);
 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
 			break;