Message ID:   20250122151046.574061-4-maciej.fijalkowski@intel.com
State:        Superseded
Delegated to: Netdev Maintainers
Series:       ice: fix Rx data path for heavy 9k MTU traffic
Hi Maciej,
kernel test robot noticed the following build warnings:
[auto build test WARNING on tnguy-net-queue/dev-queue]
url: https://github.com/intel-lab-lkp/linux/commits/Maciej-Fijalkowski/ice-put-Rx-buffers-after-being-done-with-current-frame/20250122-231406
base: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue.git dev-queue
patch link: https://lore.kernel.org/r/20250122151046.574061-4-maciej.fijalkowski%40intel.com
patch subject: [Intel-wired-lan] [PATCH v4 iwl-net 3/3] ice: stop storing XDP verdict within ice_rx_buf
config: s390-allyesconfig (https://download.01.org/0day-ci/archive/20250123/202501230352.vJJRWiRa-lkp@intel.com/config)
compiler: s390-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250123/202501230352.vJJRWiRa-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501230352.vJJRWiRa-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/intel/ice/ice_txrx.c:1140: warning: Function parameter or struct member 'verdict' not described in 'ice_put_rx_mbuf'
vim +1140 drivers/net/ethernet/intel/ice/ice_txrx.c
2b245cb29421ab Anirudh Venkataramanan 2018-03-20 1126
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1127 /**
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1128 * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1129 * @rx_ring: Rx ring with all the auxiliary data
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1130 * @xdp: XDP buffer carrying linear + frags part
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1131 * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1132 * @ntc: a current next_to_clean value to be stored at rx_ring
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1133 *
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1134 * Walk through gathered fragments and satisfy internal page
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1135 * recycle mechanism; we take here an action related to verdict
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1136 * returned by XDP program;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1137 */
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1138 static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1139 u32 *xdp_xmit, u32 ntc, u32 verdict)
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 @1140 {
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1141 u32 nr_frags = rx_ring->nr_frags + 1;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1142 u32 idx = rx_ring->first_desc;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1143 u32 cnt = rx_ring->count;
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1144 u32 post_xdp_frags = 1;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1145 struct ice_rx_buf *buf;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1146 int i;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1147
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1148 if (unlikely(xdp_buff_has_frags(xdp)))
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1149 post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1150
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1151 for (i = 0; i < post_xdp_frags; i++) {
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1152 buf = &rx_ring->rx_buf[idx];
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1153
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1154 if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1155 ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1156 *xdp_xmit |= verdict;
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1157 } else if (verdict & ICE_XDP_CONSUMED) {
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1158 buf->pagecnt_bias++;
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1159 } else if (verdict == ICE_XDP_PASS) {
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1160 ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1161 }
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1162
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1163 ice_put_rx_buf(rx_ring, buf);
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1164
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1165 if (++idx == cnt)
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1166 idx = 0;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1167 }
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1168 /* handle buffers that represented frags released by XDP prog;
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1169 * for these we keep pagecnt_bias as-is; refcount from struct page
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1170 * has been decremented within XDP prog and we do not have to increase
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1171 * the biased refcnt
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1172 */
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1173 for (; i < nr_frags; i++) {
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1174 buf = &rx_ring->rx_buf[idx];
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1175 ice_put_rx_buf(rx_ring, buf);
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1176 if (++idx == cnt)
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1177 idx = 0;
3dab6a1cb7698e Maciej Fijalkowski 2025-01-22 1178 }
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1179
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1180 xdp->data = NULL;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1181 rx_ring->first_desc = ntc;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1182 rx_ring->nr_frags = 0;
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1183 }
bee8fb85c01733 Maciej Fijalkowski 2025-01-22 1184
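To make the two-phase walk in ice_put_rx_mbuf() above easier to follow, here is a hedged Python sketch of the same control flow: the first loop covers buffers still referenced by the xdp_buff after the XDP program ran and acts on the verdict, the second covers frags the program itself released. The `ICE_XDP_*` values and the dict-based ring layout are illustrative stand-ins, not the driver's actual definitions.

```python
# Illustrative model of ice_put_rx_mbuf(); not the driver's real types.
ICE_XDP_PASS = 0
ICE_XDP_CONSUMED = 1 << 0
ICE_XDP_TX = 1 << 1
ICE_XDP_REDIR = 1 << 2

def put_rx_mbuf(ring, frags_after_xdp, verdict):
    """Walk the Rx ring buffers belonging to one frame.

    ring: dict with 'first_desc', 'nr_frags', 'count', 'bufs'
    frags_after_xdp: nr_frags the xdp_buff reports after the XDP
        program ran (0 if the frags flag is not set).
    Returns the list of buffer indices that were put back.
    """
    nr_frags = ring['nr_frags'] + 1        # linear part + frags at entry
    post_xdp_frags = 1 + frags_after_xdp   # buffers still owned post-XDP
    idx = ring['first_desc']
    cnt = ring['count']
    put = []

    # Phase 1: buffers still referenced by the xdp_buff; act on verdict.
    for _ in range(post_xdp_frags):
        buf = ring['bufs'][idx]
        if verdict & (ICE_XDP_TX | ICE_XDP_REDIR):
            buf['page_offset'] += buf['frame_sz']  # flip to other page half
        elif verdict & ICE_XDP_CONSUMED:
            buf['pagecnt_bias'] += 1               # give back our reference
        elif verdict == ICE_XDP_PASS:
            buf['page_offset'] += buf['frame_sz']
        put.append(idx)
        idx = (idx + 1) % cnt                      # ring wraparound

    # Phase 2: frags the XDP program itself released; pagecnt_bias is
    # left as-is because the prog already dropped the page refcount.
    for _ in range(nr_frags - post_xdp_frags):
        put.append(idx)
        idx = (idx + 1) % cnt

    return put
```

Run against a 4-entry ring with a 3-buffer frame starting at index 2 where the program released one frag, the walk visits indices 2, 3, 0 and bumps the bias only on the first two.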
On Wed, Jan 22, 2025 at 04:10:46PM +0100, Maciej Fijalkowski wrote:
> Idea behind having ice_rx_buf::act was to simplify and speed up the Rx
> data path by walking through buffers that were representing cleaned HW
> Rx descriptors. Since it caused us a major headache recently and we
> rolled back to old approach that 'puts' Rx buffers right after running
> XDP prog/creating skb, this is useless now and should be removed.
>
> Get rid of ice_rx_buf::act and related logic. We still need to take care
> of a corner case where XDP program releases a particular fragment.
>
> Make ice_run_xdp() return its result and use it within
> ice_put_rx_mbuf().
>
> Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c     | 61 +++++++++++--------
>  drivers/net/ethernet/intel/ice/ice_txrx.h     |  1 -
>  drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 43 -------------
>  3 files changed, 35 insertions(+), 70 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c

...

> @@ -1139,23 +1136,27 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
>   * returned by XDP program;
>   */
>  static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
> -			    u32 *xdp_xmit, u32 ntc)
> +			    u32 *xdp_xmit, u32 ntc, u32 verdict)

Hi Maciej,

Sorry, there is one more Kernel doc nit. As reported by the Kernel Test
Robot, verdict should be added to the Kernel doc for this function.

With that addressed feel free to add:

Reviewed-by: Simon Horman <horms@kernel.org>

...
On Thu, Jan 23, 2025 at 10:45:36AM +0000, Simon Horman wrote:
> On Wed, Jan 22, 2025 at 04:10:46PM +0100, Maciej Fijalkowski wrote:
> > Idea behind having ice_rx_buf::act was to simplify and speed up the Rx
> > data path by walking through buffers that were representing cleaned HW
> > Rx descriptors. Since it caused us a major headache recently and we
> > rolled back to old approach that 'puts' Rx buffers right after running
> > XDP prog/creating skb, this is useless now and should be removed.
> >
> > Get rid of ice_rx_buf::act and related logic. We still need to take care
> > of a corner case where XDP program releases a particular fragment.
> >
> > Make ice_run_xdp() return its result and use it within
> > ice_put_rx_mbuf().
> >
> > Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
> > Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_txrx.c     | 61 +++++++++++--------
> >  drivers/net/ethernet/intel/ice/ice_txrx.h     |  1 -
> >  drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 43 -------------
> >  3 files changed, 35 insertions(+), 70 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
>
> ...
>
> > @@ -1139,23 +1136,27 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
> >   * returned by XDP program;
> >   */
> >  static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
> > -			    u32 *xdp_xmit, u32 ntc)
> > +			    u32 *xdp_xmit, u32 ntc, u32 verdict)
>
> Hi Maciej,
>
> Sorry, there is one more Kernel doc nit. As reported by the Kernel Test
> Robot, verdict should be added to the Kernel doc for this function.

Yeah that is embarrassing. I have now included

./scripts/kernel-doc -none $FILE

to my pre-upstreaming checks so that it won't be happening again...
(or is there a way to run the kernel-doc against patch itself?)

> With that addressed feel free to add:
>
> Reviewed-by: Simon Horman <horms@kernel.org>

Thanks! Will include them in v5.

> ...
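As a rough illustration of the check behind the warning, here is a small Python approximation of what `scripts/kernel-doc -none` flags: comparing the `@param:` lines in a kernel-doc comment against the function's parameter list. This helper is hypothetical; the real kernel-doc script parses the C prototype itself and does far more.

```python
import re

def kerneldoc_param_check(comment, params):
    """Return the params that lack a '@name:' line in a kernel-doc comment.

    A loose approximation of one check scripts/kernel-doc performs;
    'comment' is the raw /** ... */ block, 'params' the parameter names
    taken from the function signature.
    """
    # Collect every '@name:' documented in the comment block.
    documented = set(re.findall(r'^\s*\*\s*@(\w+):', comment, re.M))
    return [p for p in params if p not in documented]
```

Applied to the ice_put_rx_mbuf() comment from the patch (which documents rx_ring, xdp, xdp_xmit, and ntc but not verdict), it reports exactly the parameter the robot complained about.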
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index cf46bcf143b4..6e99cdd0bd7a 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -527,15 +527,14 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
  * @xdp: xdp_buff used as input to the XDP program
  * @xdp_prog: XDP program to run
  * @xdp_ring: ring to be used for XDP_TX action
- * @rx_buf: Rx buffer to store the XDP action
  * @eop_desc: Last descriptor in packet to read metadata from
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */
-static void
+static u32
 ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	    struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
-	    struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *eop_desc)
+	    union ice_32b_rx_flex_desc *eop_desc)
 {
 	unsigned int ret = ICE_XDP_PASS;
 	u32 act;
@@ -574,7 +573,7 @@ ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		ret = ICE_XDP_CONSUMED;
 	}
 exit:
-	ice_set_rx_bufs_act(xdp, rx_ring, ret);
+	return ret;
 }
 
 /**
@@ -860,10 +859,8 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		xdp_buff_set_frags_flag(xdp);
 	}
 
-	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
-		ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
+	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS))
 		return -ENOMEM;
-	}
 
 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
 				   rx_buf->page_offset, size);
@@ -1075,12 +1072,12 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
 				rx_buf->page_offset + headlen, size,
 				xdp->frame_sz);
 	} else {
-		/* buffer is unused, change the act that should be taken later
-		 * on; data was copied onto skb's linear part so there's no
+		/* buffer is unused, restore biased page count in Rx buffer;
+		 * data was copied onto skb's linear part so there's no
 		 * need for adjusting page offset and we can reuse this buffer
 		 * as-is
 		 */
-		rx_buf->act = ICE_SKB_CONSUMED;
+		rx_buf->pagecnt_bias++;
 	}
 
 	if (unlikely(xdp_buff_has_frags(xdp))) {
@@ -1139,23 +1136,27 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
  * returned by XDP program;
  */
 static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
-			    u32 *xdp_xmit, u32 ntc)
+			    u32 *xdp_xmit, u32 ntc, u32 verdict)
 {
 	u32 nr_frags = rx_ring->nr_frags + 1;
 	u32 idx = rx_ring->first_desc;
 	u32 cnt = rx_ring->count;
+	u32 post_xdp_frags = 1;
 	struct ice_rx_buf *buf;
 	int i;
 
-	for (i = 0; i < nr_frags; i++) {
+	if (unlikely(xdp_buff_has_frags(xdp)))
+		post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
+
+	for (i = 0; i < post_xdp_frags; i++) {
 		buf = &rx_ring->rx_buf[idx];
 
-		if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+		if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
 			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-			*xdp_xmit |= buf->act;
-		} else if (buf->act & ICE_XDP_CONSUMED) {
+			*xdp_xmit |= verdict;
+		} else if (verdict & ICE_XDP_CONSUMED) {
 			buf->pagecnt_bias++;
-		} else if (buf->act == ICE_XDP_PASS) {
+		} else if (verdict == ICE_XDP_PASS) {
 			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
 		}
 
@@ -1164,6 +1165,17 @@ static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		if (++idx == cnt)
 			idx = 0;
 	}
+	/* handle buffers that represented frags released by XDP prog;
+	 * for these we keep pagecnt_bias as-is; refcount from struct page
+	 * has been decremented within XDP prog and we do not have to increase
+	 * the biased refcnt
+	 */
+	for (; i < nr_frags; i++) {
+		buf = &rx_ring->rx_buf[idx];
+		ice_put_rx_buf(rx_ring, buf);
+		if (++idx == cnt)
+			idx = 0;
+	}
 
 	xdp->data = NULL;
 	rx_ring->first_desc = ntc;
@@ -1190,9 +1202,9 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 	struct ice_tx_ring *xdp_ring = NULL;
 	struct bpf_prog *xdp_prog = NULL;
 	u32 ntc = rx_ring->next_to_clean;
+	u32 cached_ntu, xdp_verdict;
 	u32 cnt = rx_ring->count;
 	u32 xdp_xmit = 0;
-	u32 cached_ntu;
 	bool failure;
 
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
@@ -1255,7 +1267,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
 			xdp_buff_clear_frags_flag(xdp);
 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
-			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc);
+			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
 			break;
 		}
 		if (++ntc == cnt)
@@ -1266,13 +1278,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			continue;
 
 		ice_get_pgcnts(rx_ring);
-		ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);
-		if (rx_buf->act == ICE_XDP_PASS)
+		xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
+		if (xdp_verdict == ICE_XDP_PASS)
 			goto construct_skb;
 		total_rx_bytes += xdp_get_buff_len(xdp);
 		total_rx_pkts++;
 
-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc);
+		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
 
 		continue;
 construct_skb:
@@ -1283,12 +1295,9 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			rx_ring->ring_stats->rx_stats.alloc_page_failed++;
-			rx_buf->act = ICE_XDP_CONSUMED;
-			if (unlikely(xdp_buff_has_frags(xdp)))
-				ice_set_rx_bufs_act(xdp, rx_ring,
-						    ICE_XDP_CONSUMED);
+			xdp_verdict = ICE_XDP_CONSUMED;
 		}
-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc);
+		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
 
 		if (!skb)
 			break;
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index cb347c852ba9..806bce701df3 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -201,7 +201,6 @@ struct ice_rx_buf {
 	struct page *page;
 	unsigned int page_offset;
 	unsigned int pgcnt;
-	unsigned int act;
 	unsigned int pagecnt_bias;
 };
 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
index 79f960c6680d..6cf32b404127 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
@@ -5,49 +5,6 @@
 #define _ICE_TXRX_LIB_H_
 
 #include "ice.h"
 
-/**
- * ice_set_rx_bufs_act - propagate Rx buffer action to frags
- * @xdp: XDP buffer representing frame (linear and frags part)
- * @rx_ring: Rx ring struct
- * act: action to store onto Rx buffers related to XDP buffer parts
- *
- * Set action that should be taken before putting Rx buffer from first frag
- * to the last.
- */
-static inline void
-ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
-		    const unsigned int act)
-{
-	u32 sinfo_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
-	u32 nr_frags = rx_ring->nr_frags + 1;
-	u32 idx = rx_ring->first_desc;
-	u32 cnt = rx_ring->count;
-	struct ice_rx_buf *buf;
-
-	for (int i = 0; i < nr_frags; i++) {
-		buf = &rx_ring->rx_buf[idx];
-		buf->act = act;
-
-		if (++idx == cnt)
-			idx = 0;
-	}
-
-	/* adjust pagecnt_bias on frags freed by XDP prog */
-	if (sinfo_frags < rx_ring->nr_frags && act == ICE_XDP_CONSUMED) {
-		u32 delta = rx_ring->nr_frags - sinfo_frags;
-
-		while (delta) {
-			if (idx == 0)
-				idx = cnt - 1;
-			else
-				idx--;
-			buf = &rx_ring->rx_buf[idx];
-			buf->pagecnt_bias--;
-			delta--;
-		}
-	}
-}
-
 /**
  * ice_test_staterr - tests bits in Rx descriptor status and error fields
  * @status_err_n: Rx descriptor status_error0 or status_error1 bits