From patchwork Thu Nov 7 16:10:24 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13866710
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S.
Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, John Fastabend, Andrii Nakryiko,
    Maciej Fijalkowski, Stanislav Fomichev, Magnus Karlsson,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 17/19] xsk: add helper to get &xdp_desc's DMA and meta pointer in one go
Date: Thu, 7 Nov 2024 17:10:24 +0100
Message-ID: <20241107161026.2903044-18-aleksander.lobakin@intel.com>
In-Reply-To: <20241107161026.2903044-1-aleksander.lobakin@intel.com>
References: <20241107161026.2903044-1-aleksander.lobakin@intel.com>

Currently, when you send an XSk frame with metadata, you need to do
the following:

* call the external xsk_buff_raw_get_dma();
* call the inline xsk_buff_get_metadata(), which calls the external
  xsk_buff_raw_get_data() and then performs some inline checks.

This effectively means that the following piece:

	addr = pool->unaligned ? xp_unaligned_add_offset_to_addr(addr) : addr;

is done twice per frame, plus you have 2 external calls per frame,
plus this:

	meta = pool->addrs + addr - pool->tx_metadata_len;
	if (unlikely(!xsk_buff_valid_tx_metadata(meta)))

is always inlined, even if there's no metadata or it's invalid.

Add xsk_buff_raw_get_ctx() (xp_raw_get_ctx() to be precise) to do all
of that in one go. It returns a small structure with 2 fields: the DMA
address, filled unconditionally, and the metadata pointer, valid only
if the metadata is present and valid. The address correction is
performed only once, and you have only 1 external call per XSk frame,
which does all the calculations and checks outside of your hotpath.
You only need to check `if (ctx.meta)` for the metadata presence.
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/xdp_sock_drv.h  | 23 +++++++++++++++++++++
 include/net/xsk_buff_pool.h |  8 ++++++++
 net/xdp/xsk_buff_pool.c     | 40 +++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 6aae95b83645..324a4bb04431 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -205,6 +205,23 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr)
 	return xp_raw_get_data(pool, addr);
 }
 
+/**
+ * xsk_buff_raw_get_ctx - get &xdp_desc context
+ * @pool: XSk buff pool desc address belongs to
+ * @addr: desc address (from userspace)
+ *
+ * Wrapper for xp_raw_get_ctx() to be used in drivers, see its kdoc for
+ * details.
+ *
+ * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata
+ * pointer, if it is present and valid (initialized to %NULL otherwise).
+ */
+static inline struct xdp_desc_ctx
+xsk_buff_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr)
+{
+	return xp_raw_get_ctx(pool, addr);
+}
+
 #define XDP_TXMD_FLAGS_VALID ( \
 		XDP_TXMD_FLAGS_TIMESTAMP | \
 		XDP_TXMD_FLAGS_CHECKSUM | \
@@ -402,6 +419,12 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr)
 	return NULL;
 }
 
+static inline struct xdp_desc_ctx
+xsk_buff_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr)
+{
+	return (struct xdp_desc_ctx){ };
+}
+
 static inline bool xsk_buff_valid_tx_metadata(struct xsk_tx_metadata *meta)
 {
 	return false;
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 50779406bc2d..1dcd4d71468a 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -141,6 +141,14 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max);
 bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count);
 void *xp_raw_get_data(struct xsk_buff_pool *pool, u64 addr);
 dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr);
+
+struct xdp_desc_ctx {
+	dma_addr_t dma;
+	struct xsk_tx_metadata *meta;
+};
+
+struct xdp_desc_ctx xp_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr);
+
 static inline dma_addr_t xp_get_dma(struct xdp_buff_xsk *xskb)
 {
 	return xskb->dma;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index ae71da7d2cd6..02c42caec9f4 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -715,3 +715,43 @@ dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr)
 		(addr & ~PAGE_MASK);
 }
 EXPORT_SYMBOL(xp_raw_get_dma);
+
+/**
+ * xp_raw_get_ctx - get &xdp_desc context
+ * @pool: XSk buff pool desc address belongs to
+ * @addr: desc address (from userspace)
+ *
+ * Helper for getting desc's DMA address and metadata pointer, if present.
+ * Saves one call on hotpath, double calculation of the actual address,
+ * and inline checks for metadata presence and sanity.
+ * Please use xsk_buff_raw_get_ctx() in drivers instead.
+ *
+ * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata
+ * pointer, if it is present and valid (initialized to %NULL otherwise).
+ */
+struct xdp_desc_ctx xp_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr)
+{
+	struct xsk_tx_metadata *meta;
+	struct xdp_desc_ctx ret;
+
+	addr = pool->unaligned ? xp_unaligned_add_offset_to_addr(addr) : addr;
+	ret = (typeof(ret)){
+		/* Same logic as in xp_raw_get_dma() */
+		.dma = (pool->dma_pages[addr >> PAGE_SHIFT] &
+			~XSK_NEXT_PG_CONTIG_MASK) + (addr & ~PAGE_MASK),
+	};
+
+	if (!pool->tx_metadata_len)
+		goto out;
+
+	/* Same logic as in xp_raw_get_data() + xsk_buff_get_metadata() */
+	meta = pool->addrs + addr - pool->tx_metadata_len;
+	if (unlikely(!xsk_buff_valid_tx_metadata(meta)))
+		goto out;
+
+	ret.meta = meta;
+
+out:
+	return ret;
+}
+EXPORT_SYMBOL(xp_raw_get_ctx);