From patchwork Mon Jun  5 14:44:18 2023
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13267424
X-Patchwork-Delegate: bpf@iogearbox.net
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
	andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
	tirthendu.sarkar@intel.com, maciej.fijalkowski@intel.com,
	simon.horman@corigine.com
Subject: [PATCH v3 bpf-next 07/22] xsk: allow core/drivers to test EOP bit
Date: Mon,  5 Jun 2023 16:44:18 +0200
Message-Id: <20230605144433.290114-8-maciej.fijalkowski@intel.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230605144433.290114-1-maciej.fijalkowski@intel.com>
References: <20230605144433.290114-1-maciej.fijalkowski@intel.com>
X-Mailing-List: bpf@vger.kernel.org

Drivers are used to checking the EOP bit, whereas AF_XDP operates on
inverted logic - user space sets XDP_PKT_CONTD to indicate that the
current frag is not the last one and that the packet continues. For the
needs of the AF_XDP core, add xp_mb_desc(), which simply tests
XDP_PKT_CONTD in xdp_desc::options. In order to preserve the drivers'
default behavior, also introduce an interface for ZC drivers,
xsk_is_eop_desc(), that negates the xp_mb_desc() result and therefore
makes it easy to test the EOP bit during production of HW Tx
descriptors.
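
For illustration only (not part of this change), a ZC driver's Tx path
could consume the new helper roughly as in the sketch below. The HW
descriptor layout (struct my_tx_desc), the MY_TXD_EOP flag and
my_xsk_xmit() are made-up names for the example; only the xsk_* calls
are the existing XSK driver API plus the helper added here:

#include <linux/bits.h>
#include <net/xdp_sock_drv.h>

/* Hypothetical HW Tx descriptor - layout made up for the example. */
struct my_tx_desc {
	__le64 addr;
	__le32 len;
	__le32 flags;
#define MY_TXD_EOP	BIT(0)	/* hypothetical end-of-packet bit */
};

/* Simplified Tx loop: no ring-full or completion handling. */
static void my_xsk_xmit(struct xsk_buff_pool *pool, struct my_tx_desc *ring,
			u32 ring_size, u32 *next_to_use)
{
	struct xdp_desc desc;

	while (xsk_tx_peek_desc(pool, &desc)) {
		struct my_tx_desc *txd = &ring[*next_to_use];

		txd->addr = cpu_to_le64(xsk_buff_raw_get_dma(pool, desc.addr));
		txd->len = cpu_to_le32(desc.len);
		/* Driver keeps thinking in EOP terms: the bit is set only
		 * on the last frag of a frame, no manual negation of
		 * XDP_PKT_CONTD needed.
		 */
		txd->flags = xsk_is_eop_desc(&desc) ?
			     cpu_to_le32(MY_TXD_EOP) : 0;

		*next_to_use = (*next_to_use + 1) & (ring_size - 1);
	}
	xsk_tx_release(pool);
}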
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 include/net/xdp_sock_drv.h  | 10 ++++++++++
 include/net/xsk_buff_pool.h |  5 +++++
 2 files changed, 15 insertions(+)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 9c0d860609ba..a3e18b958d93 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -89,6 +89,11 @@ static inline struct xdp_buff *xsk_buff_alloc(struct xsk_buff_pool *pool)
 	return xp_alloc(pool);
 }
 
+static inline bool xsk_is_eop_desc(struct xdp_desc *desc)
+{
+	return !xp_mb_desc(desc);
+}
+
 /* Returns as many entries as possible up to max. 0 <= N <= max. */
 static inline u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
 {
@@ -241,6 +246,11 @@ static inline struct xdp_buff *xsk_buff_alloc(struct xsk_buff_pool *pool)
 	return NULL;
 }
 
+static inline bool xsk_is_eop_desc(struct xdp_desc *desc)
+{
+	return false;
+}
+
 static inline u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
 {
 	return 0;
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index a8d7b8a3688a..4dcca163e076 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -184,6 +184,11 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
 	       !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK);
 }
 
+static inline bool xp_mb_desc(struct xdp_desc *desc)
+{
+	return desc->options & XDP_PKT_CONTD;
+}
+
 static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
 {
 	return addr & pool->chunk_mask;