From patchwork Tue Feb 14 12:30:10 2023
From: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: jesse.brandeburg@intel.com, anthony.l.nguyen@intel.com,
    netdev@vger.kernel.org, bpf@vger.kernel.org, magnus.karlsson@intel.com,
    maciej.fijalkowski@intel.com, tirthendu.sarkar@intel.com
Subject: [PATCH intel-next v3 0/8] i40e: support XDP multi-buffer
Date: Tue, 14 Feb 2023 18:00:10 +0530
Message-Id: <20230214123018.54386-1-tirthendu.sarkar@intel.com>

This patchset adds multi-buffer support for XDP. The Tx side already
supports multi-buffer, so this patchset focuses on the Rx side. The
last patch contains the actual multi-buffer changes, while the
preceding ones are preparatory.

On receiving the first buffer of a packet, an xdp_buff is built and the
packet's subsequent buffers are added to it as frags. While
'next_to_clean' keeps pointing to the first descriptor of the packet,
the newly introduced 'next_to_process' tracks every descriptor
belonging to it. On receiving the EOP buffer, the XDP program is called
and the appropriate action is taken: building an skb for XDP_PASS,
reusing the page for XDP_DROP, or adjusting page offsets for
XDP_{REDIRECT,TX}. A toy model of this two-cursor scheme is sketched
below.
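For illustration only, here is a minimal, self-contained user-space
model of the two-cursor bookkeeping described above. It is not the
driver code: the ring layout, the process_next_desc() helper, and the
frag counting are assumptions made for the sketch, with names chosen to
mirror the driver; the real driver walks hardware descriptors and
attaches pages as xdp_buff frags.

/*
 * Toy model of the next_to_clean / next_to_process split.
 * Illustrative only, not the i40e driver code.
 */
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 64			/* power of two, as in HW rings */

struct desc {
	bool eop;			/* end-of-packet flag */
};

struct ring {
	struct desc descs[RING_SIZE];
	unsigned int next_to_clean;	/* first descriptor of the packet */
	unsigned int next_to_process;	/* descriptor being examined now  */
	unsigned int nr_frags;		/* buffers added after the first  */
};

/* Consume one descriptor; return true once EOP has been reached. */
static bool process_next_desc(struct ring *r)
{
	bool eop = r->descs[r->next_to_process].eop;

	/* Every non-first buffer of the packet becomes a frag. */
	if (r->next_to_process != r->next_to_clean)
		r->nr_frags++;

	r->next_to_process = (r->next_to_process + 1) & (RING_SIZE - 1);

	if (eop) {
		/* The XDP program would run here, then the span between
		 * next_to_clean and next_to_process is released at once. */
		r->next_to_clean = r->next_to_process;
		r->nr_frags = 0;
	}
	return eop;
}

int main(void)
{
	struct ring r = { .descs[2].eop = true };	/* 3-buffer packet */

	while (!process_next_desc(&r))
		;
	printf("packet consumed, next_to_clean = %u\n", r.next_to_clean);
	return 0;
}

The point of the split is visible in the cleanup step: nothing between
the two cursors is released until EOP, so all buffers of a multi-buffer
packet stay available to the XDP program as one unit.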
The patchset also streamlines the page offset adjustments for buffer
reuse, making it easier to post-process the rx_buffers after the XDP
program has run; a simplified model of the offset "flip" is sketched
after the diffstat below. With this patchset there does not seem to be
any performance degradation for XDP_PASS, and there is some improvement
(~1% for XDP_TX, ~5% for XDP_DROP) when measured using the xdp_rxq_info
program from samples/bpf/ with 64B packets.

Changelog:
v2 -> v3:
- Fixed buffer cleanup for single-buffer packets on skb allocation
  failure.
- Better naming of the cleanup function.
- Stop incrementing nr_frags for overflowing packets.

v1 -> v2:
- Instead of building the xdp_buff on EOP, it is now built
  incrementally.
- xdp_buff is now added to the i40e_ring struct so it is preserved
  across napi calls. [Alexander Duyck]
- Post-XDP-program rx_buffer processing has been simplified.
- The Rx buffer allocation pull-out is reverted to avoid performance
  issues for smaller ring sizes; allocation is now done once at least
  half of the ring has been cleaned. With v1 there was a ~75% drop for
  XDP_PASS at the smallest ring size of 64, which is mitigated by v2.
  [Alexander Duyck]
- Instead of retrying skb allocation after a previous failure, the
  packet is now dropped. [Maciej]
- Simplified page offset adjustments by using xdp->frame_sz instead of
  recalculating truesize. [Maciej]
- Changed i40e_trace() to use xdp instead of skb. [Maciej]
- Reserved tailroom for legacy-rx. [Maciej]
- Centralized the max frame size calculation.

Tirthendu Sarkar (8):
  i40e: consolidate maximum frame size calculation for vsi
  i40e: change Rx buffer size for legacy-rx to support XDP multi-buffer
  i40e: add pre-xdp page_count in rx_buffer
  i40e: Change size to truesize when using i40e_rx_buffer_flip()
  i40e: use frame_sz instead of recalculating truesize for building skb
  i40e: introduce next_to_process to i40e_ring
  i40e: add xdp_buff to i40e_ring struct
  i40e: add support for XDP multi-buffer Rx

 drivers/net/ethernet/intel/i40e/i40e_main.c  |  75 ++--
 drivers/net/ethernet/intel/i40e/i40e_trace.h |  20 +-
 drivers/net/ethernet/intel/i40e/i40e_txrx.c  | 420 +++++++++++--------
 drivers/net/ethernet/intel/i40e/i40e_txrx.h  |  21 +-
 4 files changed, 305 insertions(+), 231 deletions(-)
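As promised above, here is a simplified user-space sketch of the
half-page "flip" reuse that i40e_rx_buffer_flip() performs in the
patchset. It assumes a page split into two power-of-two halves, with
the truesize parameter standing in for xdp->frame_sz; the struct and
helper names are illustrative, not the driver's.

/*
 * Toy model of the half-page buffer "flip": after one half of a page
 * is handed to XDP_TX/XDP_REDIRECT, the offset is toggled so the other
 * half can be posted to hardware next. Assumes truesize is a power of
 * two (half of a 4K page here).
 */
#include <stdio.h>

#define PAGE_SZ 4096u

struct rx_buffer {
	unsigned int page_offset;	/* offset of the active half */
};

static void rx_buffer_flip(struct rx_buffer *buf, unsigned int truesize)
{
	buf->page_offset ^= truesize;	/* XOR toggles between the halves */
}

int main(void)
{
	struct rx_buffer buf = { .page_offset = 0 };
	unsigned int truesize = PAGE_SZ / 2;	/* stand-in for xdp->frame_sz */

	rx_buffer_flip(&buf, truesize);
	printf("after first flip:  %u\n", buf.page_offset);	/* 2048 */
	rx_buffer_flip(&buf, truesize);
	printf("after second flip: %u\n", buf.page_offset);	/* 0 */
	return 0;
}

Taking truesize directly from the frame size, rather than recomputing
it per buffer, is what lets the post-XDP rx_buffer processing stay a
single uniform offset adjustment.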