From patchwork Mon Mar 29 17:09:24 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Cc: Andre Guedes, netdev@vger.kernel.org, sassmann@redhat.com, bjorn.topel@intel.com, magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, sasha.neftin@intel.com, vitaly.lifshits@intel.com, Vedang Patel, Jithu Joseph, Dvora Fuxbrumer
Subject: [PATCH net-next 1/8] igc: Remove unused argument from igc_tx_cmd_type()
Date: Mon, 29 Mar 2021 10:09:24 -0700
Message-Id: <20210329170931.2356162-2-anthony.l.nguyen@intel.com>

From: Andre Guedes

The 'skb' argument from igc_tx_cmd_type() is not used, so remove it.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index baa45a1f3a65..e1e38d797f29 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -1029,7 +1029,7 @@ static inline int igc_maybe_stop_tx(struct igc_ring *tx_ring, const u16 size)
	 ((u32)((_input) & (_flag)) * ((_result) / (_flag))) : \
	 ((u32)((_input) & (_flag)) / ((_flag) / (_result))))

-static u32 igc_tx_cmd_type(struct sk_buff *skb, u32 tx_flags)
+static u32 igc_tx_cmd_type(u32 tx_flags)
 {
	/* set type for advanced descriptor with frame checksum insertion */
	u32 cmd_type = IGC_ADVTXD_DTYP_DATA |
@@ -1078,7 +1078,7 @@ static int igc_tx_map(struct igc_ring *tx_ring,
	u16 i = tx_ring->next_to_use;
	unsigned int data_len, size;
	dma_addr_t dma;
-	u32 cmd_type = igc_tx_cmd_type(skb, tx_flags);
+	u32 cmd_type = igc_tx_cmd_type(tx_flags);

	tx_desc = IGC_TX_DESC(tx_ring, i);

From patchwork Mon Mar 29 17:09:25 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Subject: [PATCH net-next 2/8] igc: Introduce igc_rx_buffer_flip() helper
Date: Mon, 29 Mar 2021 10:09:25 -0700
Message-Id: <20210329170931.2356162-3-anthony.l.nguyen@intel.com>

From: Andre Guedes

The igc driver implements the same page recycling scheme as other Intel
drivers, which reuses a page by flipping the buffer within it. The code
that handles buffer flips is duplicated in several locations, so
introduce the igc_rx_buffer_flip() helper and use it where applicable.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 43 +++++++++++------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index e1e38d797f29..1afbc903aaae 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -1499,6 +1499,16 @@ static struct igc_rx_buffer *igc_get_rx_buffer(struct igc_ring *rx_ring,
	return rx_buffer;
 }

+static void igc_rx_buffer_flip(struct igc_rx_buffer *buffer,
+			       unsigned int truesize)
+{
+#if (PAGE_SIZE < 8192)
+	buffer->page_offset ^= truesize;
+#else
+	buffer->page_offset += truesize;
+#endif
+}
+
 /**
  * igc_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_ring: rx descriptor ring to transact packets on
@@ -1513,20 +1523,19 @@ static void igc_add_rx_frag(struct igc_ring *rx_ring,
			    struct sk_buff *skb,
			    unsigned int size)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = igc_rx_pg_size(rx_ring) / 2;
+	unsigned int truesize;

-	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
-			rx_buffer->page_offset, size, truesize);
-	rx_buffer->page_offset ^= truesize;
+#if (PAGE_SIZE < 8192)
+	truesize = igc_rx_pg_size(rx_ring) / 2;
 #else
-	unsigned int truesize = ring_uses_build_skb(rx_ring) ?
-				SKB_DATA_ALIGN(IGC_SKB_PAD + size) :
-				SKB_DATA_ALIGN(size);
+	truesize = ring_uses_build_skb(rx_ring) ?
+		   SKB_DATA_ALIGN(IGC_SKB_PAD + size) :
+		   SKB_DATA_ALIGN(size);
+#endif
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
			rx_buffer->page_offset, size, truesize);
-	rx_buffer->page_offset += truesize;
-#endif
+
+	igc_rx_buffer_flip(rx_buffer, truesize);
 }

 static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring,
@@ -1555,13 +1564,7 @@ static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring,
	skb_reserve(skb, IGC_SKB_PAD);
	__skb_put(skb, size);

-	/* update buffer offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
-
+	igc_rx_buffer_flip(rx_buffer, truesize);
	return skb;
 }

@@ -1607,11 +1610,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring,
		skb_add_rx_frag(skb, 0, rx_buffer->page,
				(va + headlen) - page_address(rx_buffer->page),
				size, truesize);
-#if (PAGE_SIZE < 8192)
-		rx_buffer->page_offset ^= truesize;
-#else
-		rx_buffer->page_offset += truesize;
-#endif
+		igc_rx_buffer_flip(rx_buffer, truesize);
	} else {
		rx_buffer->pagecnt_bias++;
	}

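An aside on what the helper hides (this sketch is not part of the patch): on
systems with pages smaller than 8 KiB the ring carves each page into two
half-page buffers, so XOR-ing page_offset with truesize ping-pongs between
the two halves, while on larger pages the offset simply advances. A
standalone illustration of the XOR case, using a simplified stand-in for
struct igc_rx_buffer:

  #include <stdio.h>

  /* simplified stand-in for struct igc_rx_buffer; only the offset matters */
  struct buf { unsigned int page_offset; };

  /* PAGE_SIZE < 8192 case: truesize is half a page, XOR toggles halves */
  static void flip(struct buf *b, unsigned int truesize)
  {
	b->page_offset ^= truesize;
  }

  int main(void)
  {
	struct buf b = { .page_offset = 0 };
	unsigned int truesize = 2048;	/* igc_rx_pg_size() / 2 on 4 KiB pages */

	flip(&b, truesize);		/* 0    -> 2048: hand out second half */
	printf("%u\n", b.page_offset);
	flip(&b, truesize);		/* 2048 -> 0: back to first half */
	printf("%u\n", b.page_offset);
	return 0;
  }
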
From patchwork Mon Mar 29 17:09:26 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Subject: [PATCH net-next 3/8] igc: Introduce igc_get_rx_frame_truesize() helper
Date: Mon, 29 Mar 2021 10:09:26 -0700
Message-Id: <20210329170931.2356162-4-anthony.l.nguyen@intel.com>

From: Andre Guedes

The Rx frame truesize calculation is scattered throughout the Rx code.
This patch creates the helper function igc_get_rx_frame_truesize() and
uses it where applicable.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 29 ++++++++++++++---------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 1afbc903aaae..d6947c1d6f49 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -1509,6 +1509,22 @@ static void igc_rx_buffer_flip(struct igc_rx_buffer *buffer,
 #endif
 }

+static unsigned int igc_get_rx_frame_truesize(struct igc_ring *ring,
+					      unsigned int size)
+{
+	unsigned int truesize;
+
+#if (PAGE_SIZE < 8192)
+	truesize = igc_rx_pg_size(ring) / 2;
+#else
+	truesize = ring_uses_build_skb(ring) ?
+		   SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
+		   SKB_DATA_ALIGN(IGC_SKB_PAD + size) :
+		   SKB_DATA_ALIGN(size);
+#endif
+	return truesize;
+}
+
 /**
  * igc_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_ring: rx descriptor ring to transact packets on
@@ -1544,12 +1560,7 @@ static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring,
				     unsigned int size)
 {
	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = igc_rx_pg_size(rx_ring) / 2;
-#else
-	unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
-				SKB_DATA_ALIGN(IGC_SKB_PAD + size);
-#endif
+	unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size);
	struct sk_buff *skb;

	/* prefetch first cache line of first page */
@@ -1574,11 +1585,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring,
					 unsigned int size)
 {
	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = igc_rx_pg_size(rx_ring) / 2;
-#else
-	unsigned int truesize = SKB_DATA_ALIGN(size);
-#endif
+	unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size);
	unsigned int headlen;
	struct sk_buff *skb;

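For readers following the truesize math (an illustration, not part of the
patch): on 4 KiB pages the frame truesize is always half a page, 2048 bytes;
on 8 KiB and larger pages it is the frame size rounded up to the cache line,
plus shared-info and headroom overhead when build_skb is in use. A small
userspace sketch of the rounding, assuming a 64-byte cache line:

  #include <stdio.h>

  #define SMP_CACHE_BYTES		64	/* assumed cache-line size */
  #define SKB_DATA_ALIGN(x)	(((x) + (SMP_CACHE_BYTES - 1)) & \
				 ~(unsigned int)(SMP_CACHE_BYTES - 1))

  int main(void)
  {
	unsigned int size = 1518;	/* maximum standard Ethernet frame */

	/* 1518 rounds up to 1536, i.e. 24 cache lines */
	printf("truesize = %u\n", SKB_DATA_ALIGN(size));
	return 0;
  }
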
From patchwork Mon Mar 29 17:09:27 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Subject: [PATCH net-next 4/8] igc: Refactor Rx timestamp handling
Date: Mon, 29 Mar 2021 10:09:27 -0700
Message-Id: <20210329170931.2356162-5-anthony.l.nguyen@intel.com>

From: Andre Guedes

Refactor the Rx timestamp handling in preparation for landing XDP support.

Rx timestamps are put in the Rx buffer by hardware, before the packet
data. When creating the xdp buffer, we will need to check the Rx
descriptor to determine if the buffer contains timestamp information
and consider the offset when setting xdp.data.

The Rx descriptor check is already done in igc_construct_skb(). To avoid
code duplication, this patch moves the timestamp handling to
igc_clean_rx_irq() so both the skb and xdp paths can reuse it.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc.h      |  3 +--
 drivers/net/ethernet/intel/igc/igc_main.c | 30 +++++++++++++++--------
 drivers/net/ethernet/intel/igc/igc_ptp.c  | 25 ++++++++++---------
 3 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
index 1b08a7dc7bc4..c6d38c6a1a6a 100644
--- a/drivers/net/ethernet/intel/igc/igc.h
+++ b/drivers/net/ethernet/intel/igc/igc.h
@@ -547,8 +547,7 @@ void igc_ptp_init(struct igc_adapter *adapter);
 void igc_ptp_reset(struct igc_adapter *adapter);
 void igc_ptp_suspend(struct igc_adapter *adapter);
 void igc_ptp_stop(struct igc_adapter *adapter);
-void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,
-			 struct sk_buff *skb);
+ktime_t igc_ptp_rx_pktstamp(struct igc_adapter *adapter, __le32 *buf);
 int igc_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
 int igc_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
 void igc_ptp_tx_hang(struct igc_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index d6947c1d6f49..1eacb4ebd9f6 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -1581,10 +1581,11 @@ static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring,

 static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring,
					 struct igc_rx_buffer *rx_buffer,
-					 union igc_adv_rx_desc *rx_desc,
-					 unsigned int size)
+					 unsigned int size, int pkt_offset,
+					 ktime_t timestamp)
 {
-	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset +
+		   pkt_offset;
	unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size);
	unsigned int headlen;
	struct sk_buff *skb;
@@ -1597,11 +1598,8 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring,
	if (unlikely(!skb))
		return NULL;

-	if (unlikely(igc_test_staterr(rx_desc, IGC_RXDADV_STAT_TSIP))) {
-		igc_ptp_rx_pktstamp(rx_ring->q_vector, va, skb);
-		va += IGC_TS_HDR_LEN;
-		size -= IGC_TS_HDR_LEN;
-	}
+	if (timestamp)
+		skb_hwtstamps(skb)->hwtstamp = timestamp;

	/* Determine available headroom for copy */
	headlen = size;
@@ -1895,6 +1893,8 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
	while (likely(total_packets < budget)) {
		union igc_adv_rx_desc *rx_desc;
		struct igc_rx_buffer *rx_buffer;
+		ktime_t timestamp = 0;
+		int pkt_offset = 0;
		unsigned int size;

		/* return some buffers to hardware, one at a time is too slow */
@@ -1916,14 +1916,24 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)

		rx_buffer = igc_get_rx_buffer(rx_ring, size);

+		if (igc_test_staterr(rx_desc, IGC_RXDADV_STAT_TSIP)) {
+			void *pktbuf = page_address(rx_buffer->page) +
+				       rx_buffer->page_offset;
+
+			timestamp = igc_ptp_rx_pktstamp(q_vector->adapter,
+							pktbuf);
+			pkt_offset = IGC_TS_HDR_LEN;
+			size -= IGC_TS_HDR_LEN;
+		}
+
		/* retrieve a buffer from the ring */
		if (skb)
			igc_add_rx_frag(rx_ring, rx_buffer, skb, size);
		else if (ring_uses_build_skb(rx_ring))
			skb = igc_build_skb(rx_ring, rx_buffer, rx_desc, size);
		else
-			skb = igc_construct_skb(rx_ring, rx_buffer,
-						rx_desc, size);
+			skb = igc_construct_skb(rx_ring, rx_buffer, size,
+						pkt_offset, timestamp);

		/* exit if we failed to retrieve a buffer */
		if (!skb) {
diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
index 545f4d0e67cf..dfa3b247fcd8 100644
--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
+++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
@@ -153,20 +153,20 @@ static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,

 /**
  * igc_ptp_rx_pktstamp - Retrieve timestamp from Rx packet buffer
- * @q_vector: Pointer to interrupt specific structure
- * @va: Pointer to address containing Rx buffer
- * @skb: Buffer containing timestamp and packet
+ * @adapter: Pointer to adapter the packet buffer belongs to
+ * @buf: Pointer to packet buffer
  *
  * This function retrieves the timestamp saved in the beginning of packet
  * buffer. While two timestamps are available, one in timer0 reference and the
  * other in timer1 reference, this function considers only the timestamp in
  * timer0 reference.
+ *
+ * Returns timestamp value.
  */
-void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,
-			 struct sk_buff *skb)
+ktime_t igc_ptp_rx_pktstamp(struct igc_adapter *adapter, __le32 *buf)
 {
-	struct igc_adapter *adapter = q_vector->adapter;
-	u64 regval;
+	ktime_t timestamp;
+	u32 secs, nsecs;
	int adjust;

	/* Timestamps are saved in little endian at the beginning of the packet
@@ -178,9 +178,10 @@ void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,
	 * SYSTIML holds the nanoseconds part while SYSTIMH holds the seconds
	 * part of the timestamp.
	 */
-	regval = le32_to_cpu(va[2]);
-	regval |= (u64)le32_to_cpu(va[3]) << 32;
-	igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), regval);
+	nsecs = le32_to_cpu(buf[2]);
+	secs = le32_to_cpu(buf[3]);
+
+	timestamp = ktime_set(secs, nsecs);

	/* Adjust timestamp for the RX latency based on link speed */
	switch (adapter->link_speed) {
@@ -201,8 +202,8 @@ void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,
		netdev_warn_once(adapter->netdev, "Imprecise timestamp\n");
		break;
	}
-	skb_hwtstamps(skb)->hwtstamp =
-		ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
+
+	return ktime_sub_ns(timestamp, adjust);
 }

 static void igc_ptp_disable_rx_timestamp(struct igc_adapter *adapter)

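The buffer layout the function relies on, restated as a sketch (illustrative,
not part of the patch): when TSIP is enabled the hardware writes
IGC_TS_HDR_LEN bytes of timestamp data ahead of the frame, four little-endian
32-bit words; per the comment above, the first two words carry the other
(timer1) stamp, which the driver ignores, and words 2-3 carry the timer0
stamp. The core of the conversion, with the latency adjustment omitted:

  /* words 2 and 3 hold the timer0 stamp: SYSTIML (ns) then SYSTIMH (s) */
  static ktime_t sketch_pktstamp(__le32 *buf)
  {
	u32 nsecs = le32_to_cpu(buf[2]);
	u32 secs = le32_to_cpu(buf[3]);

	return ktime_set(secs, nsecs);
  }

The packet data then starts at buf + IGC_TS_HDR_LEN, which is why the caller
in igc_clean_rx_irq() sets pkt_offset and shrinks size by the same amount.
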
From patchwork Mon Mar 29 17:09:28 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Subject: [PATCH net-next 5/8] igc: Add set/clear large buffer helpers
Date: Mon, 29 Mar 2021 10:09:28 -0700
Message-Id: <20210329170931.2356162-6-anthony.l.nguyen@intel.com>

From: Andre Guedes

While commit 13b5b7fd6a4a ("igc: Add support for Tx/Rx rings") introduced
code to handle larger packet buffers, it missed the set/clear helpers
which enable/disable that feature. Introduce the missing helpers, which
will be used in the next patch.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
index c6d38c6a1a6a..b00cd8696b6d 100644
--- a/drivers/net/ethernet/intel/igc/igc.h
+++ b/drivers/net/ethernet/intel/igc/igc.h
@@ -504,6 +504,10 @@ enum igc_ring_flags_t {

 #define ring_uses_large_buffer(ring) \
	test_bit(IGC_RING_FLAG_RX_3K_BUFFER, &(ring)->flags)
+#define set_ring_uses_large_buffer(ring) \
+	set_bit(IGC_RING_FLAG_RX_3K_BUFFER, &(ring)->flags)
+#define clear_ring_uses_large_buffer(ring) \
+	clear_bit(IGC_RING_FLAG_RX_3K_BUFFER, &(ring)->flags)

 #define ring_uses_build_skb(ring) \
	test_bit(IGC_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)

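A sketch of how these helpers are meant to pair up (the real call sites
arrive in the next patch; the function below is illustrative only):

  /* enable 3K Rx buffers on a ring before it is configured, e.g. when an
   * XDP program needs extra head/tailroom; clear the flag otherwise
   */
  static void sketch_pick_buffer_size(struct igc_adapter *adapter,
				      struct igc_ring *ring)
  {
	if (adapter->xdp_prog)
		set_ring_uses_large_buffer(ring);
	else
		clear_ring_uses_large_buffer(ring);
  }
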
From patchwork Mon Mar 29 17:09:29 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Subject: [PATCH net-next 6/8] igc: Add initial XDP support
Date: Mon, 29 Mar 2021 10:09:29 -0700
Message-Id: <20210329170931.2356162-7-anthony.l.nguyen@intel.com>

From: Andre Guedes

Add the initial XDP support to the igc driver. For now, only the
XDP_PASS, XDP_DROP and XDP_ABORTED actions are supported. Upcoming
patches will add support for the remaining XDP actions.

XDP configuration helpers are defined in a new file, igc_xdp.c. These
helpers are utilized in igc_main.c to implement the ndo_bpf callback.
XDP-related code that belongs to the driver's hot path lands in
igc_main.c.

By default, the driver uses Rx buffers with 2 KB size. When XDP is
enabled, it uses larger buffers so we have enough space to accommodate
the headroom and tailroom required by the XDP infrastructure. Also, the
driver doesn't support XDP functionality with frames that span over
multiple buffers, so jumbo frames are not allowed for now.

The approach implemented follows the approach implemented in other Intel
drivers as much as possible, for the sake of consistency across the
drivers.

Quick comment regarding igc_build_skb(): this patch doesn't touch it
because the function is never called. It seems its support is
incomplete/in progress. The function was added by commit 0507ef8a0372b
("igc: Add transmit and receive fastpath and interrupt handlers") but
ring_uses_build_skb() always returns false, since
IGC_RING_FLAG_RX_BUILD_SKB_ENABLED isn't set anywhere in the driver code.

This patch has been tested with the sample app "xdp1" located in
samples/bpf/.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/Makefile   |   2 +-
 drivers/net/ethernet/intel/igc/igc.h      |   2 +
 drivers/net/ethernet/intel/igc/igc_main.c | 118 ++++++++++++++++++++--
 drivers/net/ethernet/intel/igc/igc_xdp.c  |  33 ++++++
 drivers/net/ethernet/intel/igc/igc_xdp.h  |  10 ++
 5 files changed, 153 insertions(+), 12 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/igc/igc_xdp.c
 create mode 100644 drivers/net/ethernet/intel/igc/igc_xdp.h

diff --git a/drivers/net/ethernet/intel/igc/Makefile b/drivers/net/ethernet/intel/igc/Makefile
index 1c3051db9085..95d1e8c490a4 100644
--- a/drivers/net/ethernet/intel/igc/Makefile
+++ b/drivers/net/ethernet/intel/igc/Makefile
@@ -8,4 +8,4 @@
 obj-$(CONFIG_IGC) += igc.o

 igc-objs := igc_main.o igc_mac.o igc_i225.o igc_base.o igc_nvm.o igc_phy.o \
-igc_diag.o igc_ethtool.o igc_ptp.o igc_dump.o igc_tsn.o
+igc_diag.o igc_ethtool.o igc_ptp.o igc_dump.o igc_tsn.o igc_xdp.o
diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
index b00cd8696b6d..3a1737227222 100644
--- a/drivers/net/ethernet/intel/igc/igc.h
+++ b/drivers/net/ethernet/intel/igc/igc.h
@@ -219,6 +219,8 @@ struct igc_adapter {
	ktime_t ptp_reset_start; /* Reset time in clock mono */

	char fw_version[32];
+
+	struct bpf_prog *xdp_prog;
 };

 void igc_up(struct igc_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 1eacb4ebd9f6..7da31da523c8 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -10,17 +10,22 @@
 #include <linux/ip.h>
 #include <linux/pm_runtime.h>
 #include <net/pkt_sched.h>
+#include <linux/bpf_trace.h>

 #include <net/ipv6.h>

 #include "igc.h"
 #include "igc_hw.h"
 #include "igc_tsn.h"
+#include "igc_xdp.h"

 #define DRV_SUMMARY	"Intel(R) 2.5G Ethernet Linux Driver"

 #define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)

+#define IGC_XDP_PASS		0
+#define IGC_XDP_CONSUMED	BIT(0)
+
 static int debug = -1;

 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
@@ -375,6 +380,8 @@ static void igc_clean_rx_ring(struct igc_ring *rx_ring)
		i = 0;
	}

+	clear_ring_uses_large_buffer(rx_ring);
+
	rx_ring->next_to_alloc = 0;
	rx_ring->next_to_clean = 0;
	rx_ring->next_to_use = 0;
@@ -497,6 +504,11 @@ static int igc_setup_all_rx_resources(struct igc_adapter *adapter)
	return err;
 }

+static bool igc_xdp_is_enabled(struct igc_adapter *adapter)
+{
+	return !!adapter->xdp_prog;
+}
+
 /**
  * igc_configure_rx_ring - Configure a receive ring after Reset
  * @adapter: board private structure
@@ -513,6 +525,9 @@ static void igc_configure_rx_ring(struct igc_adapter *adapter,
	u32 srrctl = 0, rxdctl = 0;
	u64 rdba = ring->dma;

+	if (igc_xdp_is_enabled(adapter))
+		set_ring_uses_large_buffer(ring);
+
	/* disable the queue */
	wr32(IGC_RXDCTL(reg_idx), 0);
@@ -1581,12 +1596,12 @@ static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring,

 static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring,
					 struct igc_rx_buffer *rx_buffer,
-					 unsigned int size, int pkt_offset,
+					 struct xdp_buff *xdp,
					 ktime_t timestamp)
 {
-	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset +
-		   pkt_offset;
+	unsigned int size = xdp->data_end - xdp->data;
	unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size);
+	void *va = xdp->data;
	unsigned int headlen;
	struct sk_buff *skb;
@@ -1730,6 +1745,10 @@ static bool igc_cleanup_headers(struct igc_ring *rx_ring,
			       union igc_adv_rx_desc *rx_desc,
			       struct sk_buff *skb)
 {
+	/* XDP packets use error pointer so abort at this point */
+	if (IS_ERR(skb))
+		return true;
+
	if (unlikely(igc_test_staterr(rx_desc, IGC_RXDEXT_STATERR_RXE))) {
		struct net_device *netdev = rx_ring->netdev;
@@ -1769,7 +1788,14 @@ static void igc_put_rx_buffer(struct igc_ring *rx_ring,

 static inline unsigned int igc_rx_offset(struct igc_ring *rx_ring)
 {
-	return ring_uses_build_skb(rx_ring) ? IGC_SKB_PAD : 0;
+	struct igc_adapter *adapter = rx_ring->q_vector->adapter;
+
+	if (ring_uses_build_skb(rx_ring))
+		return IGC_SKB_PAD;
+	if (igc_xdp_is_enabled(adapter))
+		return XDP_PACKET_HEADROOM;
+
+	return 0;
 }

 static bool igc_alloc_mapped_page(struct igc_ring *rx_ring,
@@ -1883,6 +1909,42 @@ static void igc_alloc_rx_buffers(struct igc_ring *rx_ring, u16 cleaned_count)
	}
 }

+static struct sk_buff *igc_xdp_run_prog(struct igc_adapter *adapter,
+					struct xdp_buff *xdp)
+{
+	struct bpf_prog *prog;
+	int res;
+	u32 act;
+
+	rcu_read_lock();
+
+	prog = READ_ONCE(adapter->xdp_prog);
+	if (!prog) {
+		res = IGC_XDP_PASS;
+		goto unlock;
+	}
+
+	act = bpf_prog_run_xdp(prog, xdp);
+	switch (act) {
+	case XDP_PASS:
+		res = IGC_XDP_PASS;
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(adapter->netdev, prog, act);
+		fallthrough;
+	case XDP_DROP:
+		res = IGC_XDP_CONSUMED;
+		break;
+	}
+
+unlock:
+	rcu_read_unlock();
+	return ERR_PTR(-res);
+}
+
 static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 {
	unsigned int total_bytes = 0, total_packets = 0;
@@ -1894,8 +1956,10 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
		union igc_adv_rx_desc *rx_desc;
		struct igc_rx_buffer *rx_buffer;
		ktime_t timestamp = 0;
+		struct xdp_buff xdp;
		int pkt_offset = 0;
		unsigned int size;
+		void *pktbuf;

		/* return some buffers to hardware, one at a time is too slow */
@@ -1916,24 +1980,38 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)

		rx_buffer = igc_get_rx_buffer(rx_ring, size);

-		if (igc_test_staterr(rx_desc, IGC_RXDADV_STAT_TSIP)) {
-			void *pktbuf = page_address(rx_buffer->page) +
-				       rx_buffer->page_offset;
+		pktbuf = page_address(rx_buffer->page) + rx_buffer->page_offset;

+		if (igc_test_staterr(rx_desc, IGC_RXDADV_STAT_TSIP)) {
			timestamp = igc_ptp_rx_pktstamp(q_vector->adapter,
							pktbuf);
			pkt_offset = IGC_TS_HDR_LEN;
			size -= IGC_TS_HDR_LEN;
		}

-		/* retrieve a buffer from the ring */
-		if (skb)
+		if (!skb) {
+			struct igc_adapter *adapter = q_vector->adapter;
+
+			xdp.data = pktbuf + pkt_offset;
+			xdp.data_end = xdp.data + size;
+			xdp.data_hard_start = pktbuf - igc_rx_offset(rx_ring);
+			xdp_set_data_meta_invalid(&xdp);
+			xdp.frame_sz = igc_get_rx_frame_truesize(rx_ring, size);
+
+			skb = igc_xdp_run_prog(adapter, &xdp);
+		}
+
+		if (IS_ERR(skb)) {
+			rx_buffer->pagecnt_bias++;
+			total_packets++;
+			total_bytes += size;
+		} else if (skb)
			igc_add_rx_frag(rx_ring, rx_buffer, skb, size);
		else if (ring_uses_build_skb(rx_ring))
			skb = igc_build_skb(rx_ring, rx_buffer, rx_desc, size);
		else
-			skb = igc_construct_skb(rx_ring, rx_buffer, size,
-						pkt_offset, timestamp);
+			skb = igc_construct_skb(rx_ring, rx_buffer, &xdp,
+						timestamp);

		/* exit if we failed to retrieve a buffer */
		if (!skb) {
@@ -3874,6 +3952,11 @@ static int igc_change_mtu(struct net_device *netdev, int new_mtu)
	int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
	struct igc_adapter *adapter = netdev_priv(netdev);

+	if (igc_xdp_is_enabled(adapter) && new_mtu > ETH_DATA_LEN) {
+		netdev_dbg(netdev, "Jumbo frames not supported with XDP");
+		return -EINVAL;
+	}
+
	/* adjust max frame to be at least the size of a standard frame */
	if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN))
		max_frame = ETH_FRAME_LEN + ETH_FCS_LEN;
@@ -4860,6 +4943,18 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
	}
 }

+static int igc_bpf(struct net_device *dev, struct netdev_bpf *bpf)
+{
+	struct igc_adapter *adapter = netdev_priv(dev);
+
+	switch (bpf->command) {
+	case XDP_SETUP_PROG:
+		return igc_xdp_set_prog(adapter, bpf->prog, bpf->extack);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static const struct net_device_ops igc_netdev_ops = {
	.ndo_open		= igc_open,
	.ndo_stop		= igc_close,
@@ -4873,6 +4968,7 @@ static const struct net_device_ops igc_netdev_ops = {
	.ndo_features_check	= igc_features_check,
	.ndo_do_ioctl		= igc_ioctl,
	.ndo_setup_tc		= igc_setup_tc,
+	.ndo_bpf		= igc_bpf,
 };

 /* PCIe configuration access */
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
new file mode 100644
index 000000000000..27c886a254f1
--- /dev/null
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020, Intel Corporation. */
+
+#include "igc.h"
+#include "igc_xdp.h"
+
+int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
+		     struct netlink_ext_ack *extack)
+{
+	struct net_device *dev = adapter->netdev;
+	bool if_running = netif_running(dev);
+	struct bpf_prog *old_prog;
+
+	if (dev->mtu > ETH_DATA_LEN) {
+		/* For now, the driver doesn't support XDP functionality with
+		 * jumbo frames so we return error.
+		 */
+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported");
+		return -EOPNOTSUPP;
+	}
+
+	if (if_running)
+		igc_close(dev);
+
+	old_prog = xchg(&adapter->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+	if (if_running)
+		igc_open(dev);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.h b/drivers/net/ethernet/intel/igc/igc_xdp.h
new file mode 100644
index 000000000000..8a410bcefe1a
--- /dev/null
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2020, Intel Corporation. */
+
+#ifndef _IGC_XDP_H_
+#define _IGC_XDP_H_
+
+int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
+		     struct netlink_ext_ack *extack);
+
+#endif /* _IGC_XDP_H_ */

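Since the commit message points at samples/bpf/xdp1, here is a minimal
equivalent for exercising the new path (illustrative; the source file name,
object name and interface name "eth0" are assumptions):

  // SPDX-License-Identifier: GPL-2.0
  /* xdp_drop.bpf.c - drop every frame, the smallest possible XDP program */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_drop_all(struct xdp_md *ctx)
  {
	return XDP_DROP;
  }

  char _license[] SEC("license") = "GPL";

Built and attached with clang/libbpf and iproute2:

  clang -O2 -g -target bpf -c xdp_drop.bpf.c -o xdp_drop.o
  ip link set dev eth0 xdp obj xdp_drop.o sec xdp

With the program attached, igc_xdp_set_prog() bounces the interface,
igc_configure_rx_ring() switches the rings to large buffers, and every frame
takes the IGC_XDP_CONSUMED path in igc_clean_rx_irq().
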
From patchwork Mon Mar 29 17:09:30 2021
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Subject: [PATCH net-next 7/8] igc: Add support for XDP_TX action
Date: Mon, 29 Mar 2021 10:09:30 -0700
Message-Id: <20210329170931.2356162-8-anthony.l.nguyen@intel.com>

From: Andre Guedes

Add support for the XDP_TX action, which enables XDP programs to transmit
back received frames.

The I225 controller has only 4 Tx hardware queues. Since XDP programs may
not even issue an XDP_TX action, this patch doesn't reserve dedicated
queues just for XDP like other Intel drivers do. Instead, the queues are
shared between the network stack and XDP. The netdev queue lock is used
to ensure mutual exclusion.

Since frames can now be transmitted via XDP_TX, the igc_tx_buffer
structure is modified so we are able to save a reference to the xdp frame
for later clean up once the packet is transmitted. The tx_buffer is
mapped to either an skb or an xdpf, so we use a union to save the skb or
xdpf pointer and have a bit in tx_flags to indicate which field to use.

This patch has been tested with the sample app "xdp2" located in
samples/bpf/.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc.h      |   9 +-
 drivers/net/ethernet/intel/igc/igc_main.c | 176 ++++++++++++++++++++--
 drivers/net/ethernet/intel/igc/igc_xdp.c  |  27 ++++
 drivers/net/ethernet/intel/igc/igc_xdp.h  |   3 +
 4 files changed, 204 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
index 3a1737227222..91493a73355d 100644
--- a/drivers/net/ethernet/intel/igc/igc.h
+++ b/drivers/net/ethernet/intel/igc/igc.h
@@ -111,6 +111,8 @@ struct igc_ring {
			struct sk_buff *skb;
		};
	};
+
+	struct xdp_rxq_info xdp_rxq;
 } ____cacheline_internodealigned_in_smp;

 /* Board specific private data structure */
@@ -375,6 +377,8 @@ enum igc_tx_flags {
	/* olinfo flags */
	IGC_TX_FLAGS_IPV4	= 0x10,
	IGC_TX_FLAGS_CSUM	= 0x20,
+
+	IGC_TX_FLAGS_XDP	= 0x100,
 };

 enum igc_boards {
@@ -397,7 +401,10 @@ enum igc_boards {
 struct igc_tx_buffer {
	union igc_adv_tx_desc *next_to_watch;
	unsigned long time_stamp;
-	struct sk_buff *skb;
+	union {
+		struct sk_buff *skb;
+		struct xdp_frame *xdpf;
+	};
	unsigned int bytecount;
	u16 gso_segs;
	__be16 protocol;
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 7da31da523c8..5ad360e52038 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -25,6 +25,7 @@

 #define IGC_XDP_PASS		0
 #define IGC_XDP_CONSUMED	BIT(0)
+#define IGC_XDP_TX		BIT(1)

 static int debug = -1;
@@ -181,8 +182,10 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
	while (i != tx_ring->next_to_use) {
		union igc_adv_tx_desc *eop_desc, *tx_desc;

-		/* Free all the Tx ring sk_buffs */
-		dev_kfree_skb_any(tx_buffer->skb);
+		if (tx_buffer->tx_flags & IGC_TX_FLAGS_XDP)
+			xdp_return_frame(tx_buffer->xdpf);
+		else
+			dev_kfree_skb_any(tx_buffer->skb);

		/* unmap skb header data */
		dma_unmap_single(tx_ring->dev,
@@ -410,6 +413,8 @@ void igc_free_rx_resources(struct igc_ring *rx_ring)
 {
	igc_clean_rx_ring(rx_ring);

+	igc_xdp_unregister_rxq_info(rx_ring);
+
	vfree(rx_ring->rx_buffer_info);
	rx_ring->rx_buffer_info = NULL;
@@ -447,7 +452,11 @@ int igc_setup_rx_resources(struct igc_ring *rx_ring)
 {
	struct net_device *ndev = rx_ring->netdev;
	struct device *dev = rx_ring->dev;
-	int size, desc_len;
+	int size, desc_len, res;
+
+	res = igc_xdp_register_rxq_info(rx_ring);
+	if (res < 0)
+		return res;

	size = sizeof(struct igc_rx_buffer) * rx_ring->count;
	rx_ring->rx_buffer_info = vzalloc(size);
@@ -473,6 +482,7 @@ int igc_setup_rx_resources(struct igc_ring *rx_ring)
	return 0;

 err:
+	igc_xdp_unregister_rxq_info(rx_ring);
	vfree(rx_ring->rx_buffer_info);
	rx_ring->rx_buffer_info = NULL;
	netdev_err(ndev, "Unable to allocate memory for Rx descriptor ring\n");
@@ -1909,6 +1919,101 @@ static void igc_alloc_rx_buffers(struct igc_ring *rx_ring, u16 cleaned_count)
	}
 }

+static int igc_xdp_init_tx_buffer(struct igc_tx_buffer *buffer,
+				  struct xdp_frame *xdpf,
+				  struct igc_ring *ring)
+{
+	dma_addr_t dma;
+
+	dma = dma_map_single(ring->dev, xdpf->data, xdpf->len, DMA_TO_DEVICE);
+	if (dma_mapping_error(ring->dev, dma)) {
+		netdev_err_once(ring->netdev, "Failed to map DMA for TX\n");
+		return -ENOMEM;
+	}
+
+	buffer->xdpf = xdpf;
+	buffer->tx_flags = IGC_TX_FLAGS_XDP;
+	buffer->protocol = 0;
+	buffer->bytecount = xdpf->len;
+	buffer->gso_segs = 1;
+	buffer->time_stamp = jiffies;
+	dma_unmap_len_set(buffer, len, xdpf->len);
+	dma_unmap_addr_set(buffer, dma, dma);
+	return 0;
+}
+
+/* This function requires __netif_tx_lock is held by the caller. */
+static int igc_xdp_init_tx_descriptor(struct igc_ring *ring,
+				      struct xdp_frame *xdpf)
+{
+	struct igc_tx_buffer *buffer;
+	union igc_adv_tx_desc *desc;
+	u32 cmd_type, olinfo_status;
+	int err;
+
+	if (!igc_desc_unused(ring))
+		return -EBUSY;
+
+	buffer = &ring->tx_buffer_info[ring->next_to_use];
+	err = igc_xdp_init_tx_buffer(buffer, xdpf, ring);
+	if (err)
+		return err;
+
+	cmd_type = IGC_ADVTXD_DTYP_DATA | IGC_ADVTXD_DCMD_DEXT |
+		   IGC_ADVTXD_DCMD_IFCS | IGC_TXD_DCMD |
+		   buffer->bytecount;
+	olinfo_status = buffer->bytecount << IGC_ADVTXD_PAYLEN_SHIFT;
+
+	desc = IGC_TX_DESC(ring, ring->next_to_use);
+	desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+	desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+	desc->read.buffer_addr = cpu_to_le64(dma_unmap_addr(buffer, dma));
+
+	netdev_tx_sent_queue(txring_txq(ring), buffer->bytecount);
+
+	buffer->next_to_watch = desc;
+
+	ring->next_to_use++;
+	if (ring->next_to_use == ring->count)
+		ring->next_to_use = 0;
+
+	return 0;
+}
+
+static struct igc_ring *igc_xdp_get_tx_ring(struct igc_adapter *adapter,
+					    int cpu)
+{
+	int index = cpu;
+
+	if (unlikely(index < 0))
+		index = 0;
+
+	while (index >= adapter->num_tx_queues)
+		index -= adapter->num_tx_queues;
+
+	return adapter->tx_ring[index];
+}
+
+static int igc_xdp_xmit_back(struct igc_adapter *adapter, struct xdp_buff *xdp)
+{
+	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
+	int cpu = smp_processor_id();
+	struct netdev_queue *nq;
+	struct igc_ring *ring;
+	int res;
+
+	if (unlikely(!xdpf))
+		return -EFAULT;
+
+	ring = igc_xdp_get_tx_ring(adapter, cpu);
+	nq = txring_txq(ring);
+
+	__netif_tx_lock(nq, cpu);
+	res = igc_xdp_init_tx_descriptor(ring, xdpf);
+	__netif_tx_unlock(nq);
+	return res;
+}
+
 static struct sk_buff *igc_xdp_run_prog(struct igc_adapter *adapter,
					struct xdp_buff *xdp)
 {
@@ -1929,6 +2034,12 @@ static struct sk_buff *igc_xdp_run_prog(struct igc_adapter *adapter,
	case XDP_PASS:
		res = IGC_XDP_PASS;
		break;
+	case XDP_TX:
+		if (igc_xdp_xmit_back(adapter, xdp) < 0)
+			res = IGC_XDP_CONSUMED;
+		else
+			res = IGC_XDP_TX;
+		break;
	default:
		bpf_warn_invalid_xdp_action(act);
		fallthrough;
@@ -1945,20 +2056,49 @@ static struct sk_buff *igc_xdp_run_prog(struct igc_adapter *adapter,
	return ERR_PTR(-res);
 }

+/* This function assumes __netif_tx_lock is held by the caller. */
+static void igc_flush_tx_descriptors(struct igc_ring *ring)
+{
+	/* Once tail pointer is updated, hardware can fetch the descriptors
+	 * any time so we issue a write membar here to ensure all memory
+	 * writes are complete before the tail pointer is updated.
+	 */
+	wmb();
+	writel(ring->next_to_use, ring->tail);
+}
+
+static void igc_finalize_xdp(struct igc_adapter *adapter, int status)
+{
+	int cpu = smp_processor_id();
+	struct netdev_queue *nq;
+	struct igc_ring *ring;
+
+	if (status & IGC_XDP_TX) {
+		ring = igc_xdp_get_tx_ring(adapter, cpu);
+		nq = txring_txq(ring);
+
+		__netif_tx_lock(nq, cpu);
+		igc_flush_tx_descriptors(ring);
+		__netif_tx_unlock(nq);
+	}
+}
+
 static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 {
	unsigned int total_bytes = 0, total_packets = 0;
+	struct igc_adapter *adapter = q_vector->adapter;
	struct igc_ring *rx_ring = q_vector->rx.ring;
	struct sk_buff *skb = rx_ring->skb;
	u16 cleaned_count = igc_desc_unused(rx_ring);
+	int xdp_status = 0;

	while (likely(total_packets < budget)) {
		union igc_adv_rx_desc *rx_desc;
		struct igc_rx_buffer *rx_buffer;
+		unsigned int size, truesize;
		ktime_t timestamp = 0;
		struct xdp_buff xdp;
		int pkt_offset = 0;
-		unsigned int size;
		void *pktbuf;

		/* return some buffers to hardware, one at a time is too slow */
@@ -1979,6 +2119,7 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
		dma_rmb();

		rx_buffer = igc_get_rx_buffer(rx_ring, size);
+		truesize = igc_get_rx_frame_truesize(rx_ring, size);

		pktbuf = page_address(rx_buffer->page) + rx_buffer->page_offset;

@@ -1990,19 +2131,29 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
		}

		if (!skb) {
-			struct igc_adapter *adapter = q_vector->adapter;
-
			xdp.data = pktbuf + pkt_offset;
			xdp.data_end = xdp.data + size;
			xdp.data_hard_start = pktbuf - igc_rx_offset(rx_ring);
			xdp_set_data_meta_invalid(&xdp);
-			xdp.frame_sz = igc_get_rx_frame_truesize(rx_ring, size);
+			xdp.frame_sz = truesize;
+			xdp.rxq = &rx_ring->xdp_rxq;

			skb = igc_xdp_run_prog(adapter, &xdp);
		}

		if (IS_ERR(skb)) {
-			rx_buffer->pagecnt_bias++;
+			unsigned int xdp_res = -PTR_ERR(skb);
+
+			switch (xdp_res) {
+			case IGC_XDP_CONSUMED:
+				rx_buffer->pagecnt_bias++;
+				break;
+			case IGC_XDP_TX:
+				igc_rx_buffer_flip(rx_buffer, truesize);
+				xdp_status |= xdp_res;
+				break;
+			}
+
			total_packets++;
			total_bytes += size;
		} else if (skb)
@@ -2048,6 +2199,9 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
		total_packets++;
	}

+	if (xdp_status)
+		igc_finalize_xdp(adapter, xdp_status);
+
	/* place incomplete frames back on ring for completion */
	rx_ring->skb = skb;

@@ -2109,8 +2263,10 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget)
		total_bytes += tx_buffer->bytecount;
		total_packets += tx_buffer->gso_segs;

-		/* free the skb */
-		napi_consume_skb(tx_buffer->skb, napi_budget);
+		if (tx_buffer->tx_flags & IGC_TX_FLAGS_XDP)
+			xdp_return_frame(tx_buffer->xdpf);
+		else
+			napi_consume_skb(tx_buffer->skb, napi_budget);

		/* unmap skb header data */
		dma_unmap_single(tx_ring->dev,
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
index 27c886a254f1..11133c4619bb 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.c
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
@@ -31,3 +31,30 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,

	return 0;
 }
+
+int igc_xdp_register_rxq_info(struct igc_ring *ring)
+{
+	struct net_device *dev = ring->netdev;
+	int err;
+
+	err = xdp_rxq_info_reg(&ring->xdp_rxq, dev, ring->queue_index, 0);
+	if (err) {
+		netdev_err(dev, "Failed to register xdp rxq info\n");
+		return err;
+	}
+
+	err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_SHARED,
+					 NULL);
+	if (err) {
"Failed to register xdp rxq mem model\n"); + xdp_rxq_info_unreg(&ring->xdp_rxq); + return err; + } + + return 0; +} + +void igc_xdp_unregister_rxq_info(struct igc_ring *ring) +{ + xdp_rxq_info_unreg(&ring->xdp_rxq); +} diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.h b/drivers/net/ethernet/intel/igc/igc_xdp.h index 8a410bcefe1a..cfecb515b718 100644 --- a/drivers/net/ethernet/intel/igc/igc_xdp.h +++ b/drivers/net/ethernet/intel/igc/igc_xdp.h @@ -7,4 +7,7 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog, struct netlink_ext_ack *extack); +int igc_xdp_register_rxq_info(struct igc_ring *ring); +void igc_xdp_unregister_rxq_info(struct igc_ring *ring); + #endif /* _IGC_XDP_H_ */ From patchwork Mon Mar 29 17:09:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 12170567 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 09704C433E6 for ; Mon, 29 Mar 2021 17:09:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C9A12619A9 for ; Mon, 29 Mar 2021 17:09:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231420AbhC2RIi (ORCPT ); Mon, 29 Mar 2021 13:08:38 -0400 Received: from mga02.intel.com ([134.134.136.20]:51883 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230298AbhC2RIE (ORCPT ); Mon, 29 Mar 2021 13:08:04 -0400 IronPort-SDR: 2qURFyaH5lqpPdZngIkHKNWhlfqiQAAGokm+BsIJo13FQdVzSW9vfyo3+Za/hY9n2rj+nATZeH rdYGmj/ChA1A== X-IronPort-AV: E=McAfee;i="6000,8403,9938"; a="178720137" X-IronPort-AV: E=Sophos;i="5.81,288,1610438400"; d="scan'208";a="178720137" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2021 10:08:01 -0700 IronPort-SDR: BP4e4Yj/vm1Y35jvZRfAB20loV4KRcklhMDHQt4lMC/NZWBFBeMS6qjMk0yDi1I1iFRodQRQWQ 9KwZobrrjEog== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.81,288,1610438400"; d="scan'208";a="606447283" Received: from anguy11-desk2.jf.intel.com ([10.166.244.147]) by fmsmga006.fm.intel.com with ESMTP; 29 Mar 2021 10:08:01 -0700 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Andre Guedes , netdev@vger.kernel.org, sassmann@redhat.com, anthony.l.nguyen@intel.com, bjorn.topel@intel.com, magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, sasha.neftin@intel.com, vitaly.lifshits@intel.com, Vedang Patel , Jithu Joseph , Dvora Fuxbrumer Subject: [PATCH net-next 8/8] igc: Add support for XDP_REDIRECT action Date: Mon, 29 Mar 2021 10:09:31 -0700 Message-Id: <20210329170931.2356162-9-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210329170931.2356162-1-anthony.l.nguyen@intel.com> References: <20210329170931.2356162-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andre Guedes Add support for the XDP_REDIRECT action which enables XDP programs to 
redirect packets arriving at the I225 NIC. It also implements the
ndo_xdp_xmit op, enabling the igc driver to transmit packets forwarded
to it by XDP programs running on other interfaces.

The patch tweaks the driver's page counting and recycling scheme, as
described in the following two commits and already implemented by other
Intel drivers, in order to properly support the XDP_REDIRECT action:

commit 8ce29c679a6e ("i40e: tweak page counting for XDP_REDIRECT")
commit 75aab4e10ae6 ("i40e: avoid premature Rx buffer reuse")

This patch has been tested with the sample apps "xdp_redirect_cpu" and
"xdp_redirect_map" located in samples/bpf/ (a minimal program in that
style is sketched after the patch).

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 84 ++++++++++++++++++++---
 1 file changed, 73 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 5ad360e52038..10765491e357 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -26,6 +26,7 @@
 #define IGC_XDP_PASS		0
 #define IGC_XDP_CONSUMED	BIT(0)
 #define IGC_XDP_TX		BIT(1)
+#define IGC_XDP_REDIRECT	BIT(2)
 
 static int debug = -1;
 
@@ -1505,11 +1506,18 @@ static void igc_process_skb_fields(struct igc_ring *rx_ring,
 }
 
 static struct igc_rx_buffer *igc_get_rx_buffer(struct igc_ring *rx_ring,
-					       const unsigned int size)
+					       const unsigned int size,
+					       int *rx_buffer_pgcnt)
 {
 	struct igc_rx_buffer *rx_buffer;
 
 	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+	*rx_buffer_pgcnt =
+#if (PAGE_SIZE < 8192)
+		page_count(rx_buffer->page);
+#else
+		0;
+#endif
 	prefetchw(rx_buffer->page);
 
 	/* we are reusing so sync this buffer for CPU use */
@@ -1677,7 +1685,8 @@ static void igc_reuse_rx_page(struct igc_ring *rx_ring,
 	new_buff->pagecnt_bias	= old_buff->pagecnt_bias;
 }
 
-static bool igc_can_reuse_rx_page(struct igc_rx_buffer *rx_buffer)
+static bool igc_can_reuse_rx_page(struct igc_rx_buffer *rx_buffer,
+				  int rx_buffer_pgcnt)
 {
 	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
 	struct page *page = rx_buffer->page;
@@ -1688,7 +1697,7 @@ static bool igc_can_reuse_rx_page(struct igc_rx_buffer *rx_buffer)
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_ref_count(page) - pagecnt_bias) > 1))
+	if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1))
 		return false;
 #else
 #define IGC_LAST_OFFSET \
@@ -1702,8 +1711,8 @@ static bool igc_can_reuse_rx_page(struct igc_rx_buffer *rx_buffer)
 	 * the pagecnt_bias and page count so that we fully restock the
 	 * number of references the driver holds.
	 */
-	if (unlikely(!pagecnt_bias)) {
-		page_ref_add(page, USHRT_MAX);
+	if (unlikely(pagecnt_bias == 1)) {
+		page_ref_add(page, USHRT_MAX - 1);
 		rx_buffer->pagecnt_bias = USHRT_MAX;
 	}
 
@@ -1776,9 +1785,10 @@ static bool igc_cleanup_headers(struct igc_ring *rx_ring,
 }
 
 static void igc_put_rx_buffer(struct igc_ring *rx_ring,
-			      struct igc_rx_buffer *rx_buffer)
+			      struct igc_rx_buffer *rx_buffer,
+			      int rx_buffer_pgcnt)
 {
-	if (igc_can_reuse_rx_page(rx_buffer)) {
+	if (igc_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) {
 		/* hand second half of page back to the ring */
 		igc_reuse_rx_page(rx_ring, rx_buffer);
 	} else {
@@ -1844,7 +1854,8 @@ static bool igc_alloc_mapped_page(struct igc_ring *rx_ring,
 	bi->dma = dma;
 	bi->page = page;
 	bi->page_offset = igc_rx_offset(rx_ring);
-	bi->pagecnt_bias = 1;
+	page_ref_add(page, USHRT_MAX - 1);
+	bi->pagecnt_bias = USHRT_MAX;
 
 	return true;
 }
@@ -2040,6 +2051,12 @@ static struct sk_buff *igc_xdp_run_prog(struct igc_adapter *adapter,
 		else
 			res = IGC_XDP_TX;
 		break;
+	case XDP_REDIRECT:
+		if (xdp_do_redirect(adapter->netdev, xdp, prog) < 0)
+			res = IGC_XDP_CONSUMED;
+		else
+			res = IGC_XDP_REDIRECT;
+		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		fallthrough;
@@ -2081,6 +2098,9 @@ static void igc_finalize_xdp(struct igc_adapter *adapter, int status)
 		igc_flush_tx_descriptors(ring);
 		__netif_tx_unlock(nq);
 	}
+
+	if (status & IGC_XDP_REDIRECT)
+		xdp_do_flush();
 }
 
 static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
@@ -2090,7 +2110,7 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 	struct igc_ring *rx_ring = q_vector->rx.ring;
 	struct sk_buff *skb = rx_ring->skb;
 	u16 cleaned_count = igc_desc_unused(rx_ring);
-	int xdp_status = 0;
+	int xdp_status = 0, rx_buffer_pgcnt;
 
 	while (likely(total_packets < budget)) {
 		union igc_adv_rx_desc *rx_desc;
@@ -2118,7 +2138,7 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 		 */
 		dma_rmb();
 
-		rx_buffer = igc_get_rx_buffer(rx_ring, size);
+		rx_buffer = igc_get_rx_buffer(rx_ring, size, &rx_buffer_pgcnt);
 		truesize = igc_get_rx_frame_truesize(rx_ring, size);
 		pktbuf = page_address(rx_buffer->page) + rx_buffer->page_offset;
 
@@ -2149,6 +2169,7 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 			rx_buffer->pagecnt_bias++;
 			break;
 		case IGC_XDP_TX:
+		case IGC_XDP_REDIRECT:
 			igc_rx_buffer_flip(rx_buffer, truesize);
 			xdp_status |= xdp_res;
 			break;
@@ -2171,7 +2192,7 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 			break;
 		}
 
-		igc_put_rx_buffer(rx_ring, rx_buffer);
+		igc_put_rx_buffer(rx_ring, rx_buffer, rx_buffer_pgcnt);
 		cleaned_count++;
 
 		/* fetch next buffer in frame if non-eop */
@@ -5111,6 +5132,46 @@ static int igc_bpf(struct net_device *dev, struct netdev_bpf *bpf)
 	}
 }
 
+static int igc_xdp_xmit(struct net_device *dev, int num_frames,
+			struct xdp_frame **frames, u32 flags)
+{
+	struct igc_adapter *adapter = netdev_priv(dev);
+	int cpu = smp_processor_id();
+	struct netdev_queue *nq;
+	struct igc_ring *ring;
+	int i, drops;
+
+	if (unlikely(test_bit(__IGC_DOWN, &adapter->state)))
+		return -ENETDOWN;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	ring = igc_xdp_get_tx_ring(adapter, cpu);
+	nq = txring_txq(ring);
+
+	__netif_tx_lock(nq, cpu);
+
+	drops = 0;
+	for (i = 0; i < num_frames; i++) {
+		int err;
+		struct xdp_frame *xdpf = frames[i];
+
+		err = igc_xdp_init_tx_descriptor(ring, xdpf);
+		if (err) {
+			xdp_return_frame_rx_napi(xdpf);
+			drops++;
+		}
+	}
+
+	if (flags & XDP_XMIT_FLUSH)
+		igc_flush_tx_descriptors(ring);
+
+	__netif_tx_unlock(nq);
+
+	return num_frames - drops;
+}
+
 static const struct net_device_ops igc_netdev_ops = {
 	.ndo_open		= igc_open,
 	.ndo_stop		= igc_close,
@@ -5125,6 +5186,7 @@ static const struct net_device_ops igc_netdev_ops = {
 	.ndo_do_ioctl		= igc_ioctl,
 	.ndo_setup_tc		= igc_setup_tc,
 	.ndo_bpf		= igc_bpf,
+	.ndo_xdp_xmit		= igc_xdp_xmit,
 };
 
 /* PCIe configuration access */
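
For illustration, here is a minimal XDP program in the spirit of the
samples/bpf "xdp_redirect_map" app named in the commit message. It is
not part of this series; the map name "tx_port" and the single-entry
devmap layout are illustrative assumptions:

// SPDX-License-Identifier: GPL-2.0
/* Minimal sketch: redirect every frame to the ifindex stored at key 0
 * of a devmap. Not part of the igc series above.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 1);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_dev(struct xdp_md *ctx)
{
	/* On a successful lookup the helper returns XDP_REDIRECT, which
	 * lands in the IGC_XDP_REDIRECT case of igc_xdp_run_prog()
	 * above; on failure it falls back to the action given in the
	 * flags argument (XDP_PASS here).
	 */
	return bpf_redirect_map(&tx_port, 0, XDP_PASS);
}

char _license[] SEC("license") = "GPL";

The per-NAPI batching visible in the patch is what makes this cheap:
each redirected frame only sets xdp_status, and igc_finalize_xdp()
issues a single xdp_do_flush() at the end of the poll cycle.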
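
And a hypothetical user-space loader for the sketch, assuming an object
file "xdp_redirect_dev.o" built from it and a libbpf new enough to
provide bpf_xdp_attach() (the 2021-era samples used the older
bpf_set_link_xdp_fd() helper instead):

// SPDX-License-Identifier: GPL-2.0
/* Attach the program to <in_ifindex> in native (driver) XDP mode and
 * point the devmap at <out_ifindex>. Object and map names are
 * assumptions carried over from the sketch above.
 */
#include <stdlib.h>
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <linux/if_link.h>

int main(int argc, char **argv)
{
	struct bpf_object *obj;
	int key = 0, in_ifindex, out_ifindex, prog_fd, map_fd;

	if (argc != 3)
		return 1;
	in_ifindex = atoi(argv[1]);
	out_ifindex = atoi(argv[2]);

	obj = bpf_object__open_file("xdp_redirect_dev.o", NULL);
	if (!obj || bpf_object__load(obj))
		return 1;

	prog_fd = bpf_program__fd(bpf_object__find_program_by_name(obj, "xdp_redirect_dev"));
	map_fd = bpf_map__fd(bpf_object__find_map_by_name(obj, "tx_port"));
	if (prog_fd < 0 || map_fd < 0)
		return 1;

	/* Egress device: igc's new ndo_xdp_xmit is what makes an I225
	 * port a valid target here.
	 */
	if (bpf_map_update_elem(map_fd, &key, &out_ifindex, 0))
		return 1;

	/* XDP_FLAGS_DRV_MODE requests native XDP, i.e. the hooks this
	 * series adds to igc, rather than the generic fallback.
	 */
	if (bpf_xdp_attach(in_ifindex, prog_fd, XDP_FLAGS_DRV_MODE, NULL))
		return 1;

	pause();	/* keep the program attached until interrupted */
	return 0;
}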