Message ID | 20241013154415.20262-10-mateusz.polchlopek@intel.com (mailing list archive)
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | Add support for Rx timestamping for both ice and iavf drivers
Context | Check | Description |
---|---|---
netdev/tree_selection | success | Guessing tree name failed - patch did not apply |
Hi Mateusz,

kernel test robot noticed the following build errors:

[auto build test ERROR on a77c49f53be0af1efad5b4541a9a145505c81800]

url:    https://github.com/intel-lab-lkp/linux/commits/Mateusz-Polchlopek/virtchnl-add-support-for-enabling-PTP-on-iAVF/20241014-174710
base:   a77c49f53be0af1efad5b4541a9a145505c81800
patch link:    https://lore.kernel.org/r/20241013154415.20262-10-mateusz.polchlopek%40intel.com
patch subject: [Intel-wired-lan] [PATCH iwl-next v11 09/14] libeth: move idpf_rx_csum_decoded and idpf_rx_extracted
config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20241016/202410160029.BdxyUZWz-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 13.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241016/202410160029.BdxyUZWz-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202410160029.BdxyUZWz-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c: In function 'idpf_rx_singleq_clean':
>> drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c:1040:78: error: expected ';' before 'napi_gro_receive'
    1040 |                 idpf_rx_singleq_process_skb_fields(rx_q, skb, rx_desc, ptype)
         |                                                                              ^
         |                                                                              ;
   ......
    1043 |                 napi_gro_receive(rx_q->pp->p.napi, skb);
         |                 ~~~~~~~~~~~~~~~~


vim +1040 drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c

   954
   955	/**
   956	 * idpf_rx_singleq_clean - Reclaim resources after receive completes
   957	 * @rx_q: rx queue to clean
   958	 * @budget: Total limit on number of packets to process
   959	 *
   960	 * Returns true if there's any budget left (e.g. the clean is finished)
   961	 */
   962	static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
   963	{
   964		unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
   965		struct sk_buff *skb = rx_q->skb;
   966		u16 ntc = rx_q->next_to_clean;
   967		u16 cleaned_count = 0;
   968		bool failure = false;
   969
   970		/* Process Rx packets bounded by budget */
   971		while (likely(total_rx_pkts < (unsigned int)budget)) {
   972			struct libeth_rqe_info fields = { };
   973			union virtchnl2_rx_desc *rx_desc;
   974			struct idpf_rx_buf *rx_buf;
   975			u32 ptype;
   976
   977			/* get the Rx desc from Rx queue based on 'next_to_clean' */
   978			rx_desc = &rx_q->rx[ntc];
   979
   980			/* status_error_ptype_len will always be zero for unused
   981			 * descriptors because it's cleared in cleanup, and overlaps
   982			 * with hdr_addr which is always zero because packet split
   983			 * isn't used, if the hardware wrote DD then the length will be
   984			 * non-zero
   985			 */
   986	#define IDPF_RXD_DD VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M
   987			if (!idpf_rx_singleq_test_staterr(rx_desc,
   988							  IDPF_RXD_DD))
   989				break;
   990
   991			/* This memory barrier is needed to keep us from reading
   992			 * any other fields out of the rx_desc
   993			 */
   994			dma_rmb();
   995
   996			idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields, &ptype);
   997
   998			rx_buf = &rx_q->rx_buf[ntc];
   999			if (!libeth_rx_sync_for_cpu(rx_buf, fields.len))
  1000				goto skip_data;
  1001
  1002			if (skb)
  1003				idpf_rx_add_frag(rx_buf, skb, fields.len);
  1004			else
  1005				skb = idpf_rx_build_skb(rx_buf, fields.len);
  1006
  1007			/* exit if we failed to retrieve a buffer */
  1008			if (!skb)
  1009				break;
  1010
  1011	skip_data:
  1012			rx_buf->page = NULL;
  1013
  1014			IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc);
  1015			cleaned_count++;
  1016
  1017			/* skip if it is non EOP desc */
  1018			if (idpf_rx_singleq_is_non_eop(rx_desc) || unlikely(!skb))
  1019				continue;
  1020
  1021	#define IDPF_RXD_ERR_S FIELD_PREP(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, \
  1022			       VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_M)
  1023			if (unlikely(idpf_rx_singleq_test_staterr(rx_desc,
  1024								  IDPF_RXD_ERR_S))) {
  1025				dev_kfree_skb_any(skb);
  1026				skb = NULL;
  1027				continue;
  1028			}
  1029
  1030			/* pad skb if needed (to make valid ethernet frame) */
  1031			if (eth_skb_pad(skb)) {
  1032				skb = NULL;
  1033				continue;
  1034			}
  1035
  1036			/* probably a little skewed due to removing CRC */
  1037			total_rx_bytes += skb->len;
  1038
  1039			/* protocol */
> 1040			idpf_rx_singleq_process_skb_fields(rx_q, skb, rx_desc, ptype)
  1041
  1042			/* send completed skb up the stack */
  1043			napi_gro_receive(rx_q->pp->p.napi, skb);
  1044			skb = NULL;
  1045
  1046			/* update budget accounting */
  1047			total_rx_pkts++;
  1048		}
  1049
  1050		rx_q->skb = skb;
  1051
  1052		rx_q->next_to_clean = ntc;
  1053
  1054		page_pool_nid_changed(rx_q->pp, numa_mem_id());
  1055		if (cleaned_count)
  1056			failure = idpf_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count);
  1057
  1058		u64_stats_update_begin(&rx_q->stats_sync);
  1059		u64_stats_add(&rx_q->q_stats.packets, total_rx_pkts);
  1060		u64_stats_add(&rx_q->q_stats.bytes, total_rx_bytes);
  1061		u64_stats_update_end(&rx_q->stats_sync);
  1062
  1063		/* guarantee a trip back through this routine if there was a failure */
  1064		return failure ? budget : (int)total_rx_pkts;
  1065	}
  1066
Hi Mateusz,

kernel test robot noticed the following build errors:

[auto build test ERROR on a77c49f53be0af1efad5b4541a9a145505c81800]

url:    https://github.com/intel-lab-lkp/linux/commits/Mateusz-Polchlopek/virtchnl-add-support-for-enabling-PTP-on-iAVF/20241014-174710
base:   a77c49f53be0af1efad5b4541a9a145505c81800
patch link:    https://lore.kernel.org/r/20241013154415.20262-10-mateusz.polchlopek%40intel.com
patch subject: [Intel-wired-lan] [PATCH iwl-next v11 09/14] libeth: move idpf_rx_csum_decoded and idpf_rx_extracted
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20241016/202410160123.5dWnVGKr-lkp@intel.com/config)
compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241016/202410160123.5dWnVGKr-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202410160123.5dWnVGKr-lkp@intel.com/

All errors (new ones prefixed by >>):

>> drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c:1040:64: error: expected ';' after expression
    1040 |                 idpf_rx_singleq_process_skb_fields(rx_q, skb, rx_desc, ptype)
         |                                                                               ^
         |                                                                               ;
   1 error generated.
On 10/13/2024 8:44 AM, Mateusz Polchlopek wrote:
> Structs idpf_rx_csum_decoded and idpf_rx_extracted are used both in
> idpf and iavf Intel drivers. Change the prefix from idpf_* to libeth_*
> and move mentioned structs to libeth's rx.h header file.
>
> Adjust usage in idpf driver.
>
> Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com>
> ---

This fails to compile:

> drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c: In function ‘idpf_rx_singleq_clean’:
> drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c:1040:78: error: expected ‘;’ before ‘napi_gro_receive’
>  1040 |                 idpf_rx_singleq_process_skb_fields(rx_q, skb, rx_desc, ptype)
>       |                                                                              ^
>       |                                                                              ;
> ......
>  1043 |                 napi_gro_receive(rx_q->pp->p.napi, skb);
>       |                 ~~~~~~~~~~~~~~~~

Thanks,
Jake
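Both gcc and clang point at the same statement: the call rewritten by this patch at line 1040 of idpf_rx_singleq_clean() lost its terminating semicolon. Going by the compiler diagnostics, the corrected hunk would simply read (surrounding lines as in the patch below):

		/* protocol */
		idpf_rx_singleq_process_skb_fields(rx_q, skb, rx_desc, ptype);

		/* send completed skb up the stack */
		napi_gro_receive(rx_q->pp->p.napi, skb);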
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index dfd7cf1d9aa0..27797dfc95da 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -595,7 +595,7 @@ static bool idpf_rx_singleq_is_non_eop(const union virtchnl2_rx_desc *rx_desc)
  */
 static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq,
 				 struct sk_buff *skb,
-				 struct idpf_rx_csum_decoded csum_bits,
+				 struct libeth_rx_csum csum_bits,
 				 struct libeth_rx_pt decoded)
 {
 	bool ipv4, ipv6;
@@ -661,10 +661,10 @@ static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq,
  *
  * Return: parsed checksum status.
  **/
-static struct idpf_rx_csum_decoded
+static struct libeth_rx_csum
 idpf_rx_singleq_base_csum(const union virtchnl2_rx_desc *rx_desc)
 {
-	struct idpf_rx_csum_decoded csum_bits = { };
+	struct libeth_rx_csum csum_bits = { };
 	u32 rx_error, rx_status;
 	u64 qword;
 
@@ -696,10 +696,10 @@ idpf_rx_singleq_base_csum(const union virtchnl2_rx_desc *rx_desc)
  *
  * Return: parsed checksum status.
  **/
-static struct idpf_rx_csum_decoded
+static struct libeth_rx_csum
 idpf_rx_singleq_flex_csum(const union virtchnl2_rx_desc *rx_desc)
 {
-	struct idpf_rx_csum_decoded csum_bits = { };
+	struct libeth_rx_csum csum_bits = { };
 	u16 rx_status0, rx_status1;
 
 	rx_status0 = le16_to_cpu(rx_desc->flex_nic_wb.status_error0);
@@ -798,7 +798,7 @@ idpf_rx_singleq_process_skb_fields(struct idpf_rx_queue *rx_q,
 				   u16 ptype)
 {
 	struct libeth_rx_pt decoded = rx_q->rx_ptype_lkup[ptype];
-	struct idpf_rx_csum_decoded csum_bits;
+	struct libeth_rx_csum csum_bits;
 
 	/* modifies the skb - consumes the enet header */
 	skb->protocol = eth_type_trans(skb, rx_q->netdev);
@@ -891,6 +891,7 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rx_q,
  * idpf_rx_singleq_extract_base_fields - Extract fields from the Rx descriptor
  * @rx_desc: the descriptor to process
  * @fields: storage for extracted values
+ * @ptype: pointer that will store packet type
  *
  * Decode the Rx descriptor and extract relevant information including the
  * size and Rx packet type.
@@ -900,20 +901,21 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rx_q,
  */
 static void
 idpf_rx_singleq_extract_base_fields(const union virtchnl2_rx_desc *rx_desc,
-				    struct idpf_rx_extracted *fields)
+				    struct libeth_rqe_info *fields, u32 *ptype)
 {
 	u64 qword;
 
 	qword = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
 
-	fields->size = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M, qword);
-	fields->rx_ptype = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M, qword);
+	fields->len = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M, qword);
+	*ptype = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M, qword);
 }
 
 /**
  * idpf_rx_singleq_extract_flex_fields - Extract fields from the Rx descriptor
  * @rx_desc: the descriptor to process
  * @fields: storage for extracted values
+ * @ptype: pointer that will store packet type
  *
  * Decode the Rx descriptor and extract relevant information including the
  * size and Rx packet type.
@@ -923,12 +925,12 @@ idpf_rx_singleq_extract_base_fields(const union virtchnl2_rx_desc *rx_desc,
  */
 static void
 idpf_rx_singleq_extract_flex_fields(const union virtchnl2_rx_desc *rx_desc,
-				    struct idpf_rx_extracted *fields)
+				    struct libeth_rqe_info *fields, u32 *ptype)
 {
-	fields->size = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M,
-				 le16_to_cpu(rx_desc->flex_nic_wb.pkt_len));
-	fields->rx_ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PTYPE_M,
-				     le16_to_cpu(rx_desc->flex_nic_wb.ptype_flex_flags0));
+	fields->len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M,
+				le16_to_cpu(rx_desc->flex_nic_wb.pkt_len));
+	*ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PTYPE_M,
+			   le16_to_cpu(rx_desc->flex_nic_wb.ptype_flex_flags0));
 }
 
 /**
@@ -936,17 +938,18 @@ idpf_rx_singleq_extract_flex_fields(const union virtchnl2_rx_desc *rx_desc,
  * @rx_q: Rx descriptor queue
  * @rx_desc: the descriptor to process
  * @fields: storage for extracted values
+ * @ptype: pointer that will store packet type
  *
  */
 static void
 idpf_rx_singleq_extract_fields(const struct idpf_rx_queue *rx_q,
 			       const union virtchnl2_rx_desc *rx_desc,
-			       struct idpf_rx_extracted *fields)
+			       struct libeth_rqe_info *fields, u32 *ptype)
 {
 	if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M)
-		idpf_rx_singleq_extract_base_fields(rx_desc, fields);
+		idpf_rx_singleq_extract_base_fields(rx_desc, fields, ptype);
 	else
-		idpf_rx_singleq_extract_flex_fields(rx_desc, fields);
+		idpf_rx_singleq_extract_flex_fields(rx_desc, fields, ptype);
 }
 
 /**
@@ -966,9 +969,10 @@ static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
 
 	/* Process Rx packets bounded by budget */
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
-		struct idpf_rx_extracted fields = { };
+		struct libeth_rqe_info fields = { };
 		union virtchnl2_rx_desc *rx_desc;
 		struct idpf_rx_buf *rx_buf;
+		u32 ptype;
 
 		/* get the Rx desc from Rx queue based on 'next_to_clean' */
 		rx_desc = &rx_q->rx[ntc];
@@ -989,16 +993,16 @@ static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
 		 */
 		dma_rmb();
 
-		idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields);
+		idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields, &ptype);
 
 		rx_buf = &rx_q->rx_buf[ntc];
-		if (!libeth_rx_sync_for_cpu(rx_buf, fields.size))
+		if (!libeth_rx_sync_for_cpu(rx_buf, fields.len))
 			goto skip_data;
 
 		if (skb)
-			idpf_rx_add_frag(rx_buf, skb, fields.size);
+			idpf_rx_add_frag(rx_buf, skb, fields.len);
 		else
-			skb = idpf_rx_build_skb(rx_buf, fields.size);
+			skb = idpf_rx_build_skb(rx_buf, fields.len);
 
 		/* exit if we failed to retrieve a buffer */
 		if (!skb)
@@ -1033,8 +1037,7 @@ static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
 		total_rx_bytes += skb->len;
 
 		/* protocol */
-		idpf_rx_singleq_process_skb_fields(rx_q, skb,
-						   rx_desc, fields.rx_ptype);
+		idpf_rx_singleq_process_skb_fields(rx_q, skb, rx_desc, ptype)
 
 		/* send completed skb up the stack */
 		napi_gro_receive(rx_q->pp->p.napi, skb);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 60d15b3e6e2f..63a76e5b7f01 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -2895,7 +2895,7 @@ idpf_rx_hash(const struct idpf_rx_queue *rxq, struct sk_buff *skb,
  * skb->protocol must be set before this function is called
  */
 static void idpf_rx_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
-			 struct idpf_rx_csum_decoded csum_bits,
+			 struct libeth_rx_csum csum_bits,
 			 struct libeth_rx_pt decoded)
 {
 	bool ipv4, ipv6;
@@ -2923,7 +2923,7 @@ static void idpf_rx_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 	if (unlikely(csum_bits.l4e))
 		goto checksum_fail;
 
-	if (csum_bits.raw_csum_inv ||
+	if (!csum_bits.raw_csum_valid ||
 	    decoded.inner_prot == LIBETH_RX_PT_INNER_SCTP) {
 		skb->ip_summed = CHECKSUM_UNNECESSARY;
 		return;
@@ -2946,10 +2946,10 @@ static void idpf_rx_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
  *
  * Return: parsed checksum status.
  **/
-static struct idpf_rx_csum_decoded
+static struct libeth_rx_csum
 idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
 {
-	struct idpf_rx_csum_decoded csum = { };
+	struct libeth_rx_csum csum = { };
 	u8 qword0, qword1;
 
 	qword0 = rx_desc->status_err0_qw0;
@@ -2965,9 +2965,9 @@ idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 *
 					   qword1);
 	csum.ipv6exadd = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_M,
 				   qword0);
-	csum.raw_csum_inv =
-		le16_get_bits(rx_desc->ptype_err_fflags0,
-			      VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M);
+	csum.raw_csum_valid =
+		!le16_get_bits(rx_desc->ptype_err_fflags0,
+			       VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M);
 	csum.raw_csum = le16_to_cpu(rx_desc->misc.raw_cs);
 
 	return csum;
@@ -3060,7 +3060,7 @@ static int idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq,
 				      struct sk_buff *skb,
 				      const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
 {
-	struct idpf_rx_csum_decoded csum_bits;
+	struct libeth_rx_csum csum_bits;
 	struct libeth_rx_pt decoded;
 	u16 rx_ptype;
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 9c1fe84108ed..b59aa7d8de2c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -213,25 +213,6 @@ enum idpf_tx_ctx_desc_eipt_offload {
 	IDPF_TX_CTX_EXT_IP_IPV4 = 0x3
 };
 
-/* Checksum offload bits decoded from the receive descriptor. */
-struct idpf_rx_csum_decoded {
-	u32 l3l4p : 1;
-	u32 ipe : 1;
-	u32 eipe : 1;
-	u32 eudpe : 1;
-	u32 ipv6exadd : 1;
-	u32 l4e : 1;
-	u32 pprs : 1;
-	u32 nat : 1;
-	u32 raw_csum_inv : 1;
-	u32 raw_csum : 16;
-};
-
-struct idpf_rx_extracted {
-	unsigned int size;
-	u16 rx_ptype;
-};
-
 #define IDPF_TX_COMPLQ_CLEAN_BUDGET	256
 #define IDPF_TX_MIN_PKT_LEN		17
 #define IDPF_TX_DESCS_FOR_SKB_DATA_PTR	1
diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h
index 43574bd6612f..ab05024be518 100644
--- a/include/net/libeth/rx.h
+++ b/include/net/libeth/rx.h
@@ -198,6 +198,53 @@ struct libeth_rx_pt {
 	enum xdp_rss_hash_type hash_type:16;
 };
 
+/**
+ * struct libeth_rx_csum - checksum offload bits decoded from the Rx descriptor
+ * @l3l4p: detectable L3 and L4 integrity check is processed by the hardware
+ * @ipe: IP checksum error
+ * @eipe: external (outermost) IP header (only for tunels)
+ * @eudpe: external (outermost) UDP checksum error (only for tunels)
+ * @ipv6exadd: IPv6 header with extension headers
+ * @l4e: L4 integrity error
+ * @pprs: set for packets that skip checksum calculation in the HW pre parser
+ * @nat: the packet is a UDP tunneled packet
+ * @raw_csum_valid: set if raw checksum is valid
+ * @pad: padding to naturally align raw_csum field
+ * @raw_csum: raw checksum
+ */
+struct libeth_rx_csum {
+	u32 l3l4p:1;
+	u32 ipe:1;
+	u32 eipe:1;
+	u32 eudpe:1;
+	u32 ipv6exadd:1;
+	u32 l4e:1;
+	u32 pprs:1;
+	u32 nat:1;
+
+	u32 raw_csum_valid:1;
+	u32 pad:7;
+	u32 raw_csum:16;
+};
+
+/**
+ * struct libeth_rqe_info - receive queue element info
+ * @len: packet length
+ * @ptype: packet type based on types programmed into the device
+ * @eop: whether it's the last fragment of the packet
+ * @rxe: MAC errors: CRC, Alignment, Oversize, Undersizes, Length error
+ * @vlan: C-VLAN or S-VLAN tag depending on the VLAN offload configuration
+ */
+struct libeth_rqe_info {
+	u32 len;
+
+	u32 ptype:14;
+	u32 eop:1;
+	u32 rxe:1;
+
+	u32 vlan:16;
+};
+
 void libeth_rx_pt_gen_hash_type(struct libeth_rx_pt *pt);
 
 /**
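The hunk above adds struct libeth_rx_csum and struct libeth_rqe_info to include/net/libeth/rx.h. As a quick, hypothetical sketch of how a driver could populate the relocated types after parsing an Rx descriptor (the helper name and the particular field values below are illustrative only and are not part of this patch):

	#include <net/libeth/rx.h>

	/* Hypothetical example: report a parsed Rx element via the new
	 * libeth types. Values are made up purely for illustration.
	 */
	static void example_fill_rx_info(struct libeth_rqe_info *fields,
					 struct libeth_rx_csum *csum,
					 u32 pkt_len, u32 pkt_type)
	{
		fields->len = pkt_len;		/* buffer length from the descriptor */
		fields->ptype = pkt_type;	/* device packet type index */
		fields->eop = 1;		/* last fragment of the packet */

		csum->l3l4p = 1;		/* HW processed L3/L4 checksums */
		csum->raw_csum_valid = 0;	/* no raw checksum supplied here */
	}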
Structs idpf_rx_csum_decoded and idpf_rx_extracted are used both in
idpf and iavf Intel drivers. Change the prefix from idpf_* to libeth_*
and move mentioned structs to libeth's rx.h header file.

Adjust usage in idpf driver.

Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com>
---
 .../ethernet/intel/idpf/idpf_singleq_txrx.c | 51 ++++++++++---------
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 16 +++---
 drivers/net/ethernet/intel/idpf/idpf_txrx.h | 19 -------
 include/net/libeth/rx.h                     | 47 +++++++++++++++++
 4 files changed, 82 insertions(+), 51 deletions(-)