From patchwork Wed Oct 9 15:27:54 2024
X-Patchwork-Submitter: Alexander Lobakin <aleksander.lobakin@intel.com>
X-Patchwork-Id: 13828655
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , =?utf-8?q?Toke_H=C3=B8?= =?utf-8?q?iland-J=C3=B8rgensen?= , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Stanislav Fomichev , Magnus Karlsson , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 16/18] xsk: add helper to get &xdp_desc's DMA and meta pointer in one go Date: Wed, 9 Oct 2024 17:27:54 +0200 Message-ID: <20241009152756.3113697-17-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241009152756.3113697-1-aleksander.lobakin@intel.com> References: <20241009152756.3113697-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Currently, when you send an XSk frame without metadata, you need to do the following: * call external xsk_buff_raw_get_dma(); * call inline xsk_buff_get_metadata(), which calls external xsk_buff_raw_get_data() and then do some inline checks. This effectively means that the following piece: addr = pool->unaligned ? xp_unaligned_add_offset_to_addr(addr) : addr; is done twice per frame, plus you have 2 external calls per frame, plus this: meta = pool->addrs + addr - pool->tx_metadata_len; if (unlikely(!xsk_buff_valid_tx_metadata(meta))) is always inlined, even if there's no meta or it's invalid. Add xsk_buff_raw_get_ctx() (xp_raw_get_ctx() to be precise) to do that in one go. It returns a small structure with 2 fields: DMA address, filled unconditionally, and metadata pointer, valid only if it's present. The address correction is performed only once and you also have only 1 external call per XSk frame, which does all the calculations and checks outside of your hotpath. You only need to check `if (ctx.meta)` for the metadata presence. Signed-off-by: Alexander Lobakin --- include/net/xdp_sock_drv.h | 23 +++++++++++++++++++++ include/net/xsk_buff_pool.h | 8 ++++++++ net/xdp/xsk_buff_pool.c | 40 +++++++++++++++++++++++++++++++++++++ 3 files changed, 71 insertions(+) diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h index e6c08749c3d2..fbfe7bf658ba 100644 --- a/include/net/xdp_sock_drv.h +++ b/include/net/xdp_sock_drv.h @@ -205,6 +205,23 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr) return xp_raw_get_data(pool, addr); } +/** + * xsk_buff_raw_get_ctx - get &xdp_desc context + * @pool: XSk buff pool desc address belongs to + * @addr: desc address (from userspace) + * + * Wrapper for xp_raw_get_ctx() to be used in drivers, see its kdoc for + * details. + * + * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata + * pointer, if it is present and valid (initialized to %NULL otherwise). 

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/xdp_sock_drv.h  | 23 +++++++++++++++++++++
 include/net/xsk_buff_pool.h |  8 ++++++++
 net/xdp/xsk_buff_pool.c     | 40 +++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index e6c08749c3d2..fbfe7bf658ba 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -205,6 +205,23 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr)
 	return xp_raw_get_data(pool, addr);
 }
 
+/**
+ * xsk_buff_raw_get_ctx - get &xdp_desc context
+ * @pool: XSk buff pool desc address belongs to
+ * @addr: desc address (from userspace)
+ *
+ * Wrapper for xp_raw_get_ctx() to be used in drivers, see its kdoc for
+ * details.
+ *
+ * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata
+ * pointer, if it is present and valid (initialized to %NULL otherwise).
+ */
+static inline struct xdp_desc_ctx
+xsk_buff_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr)
+{
+	return xp_raw_get_ctx(pool, addr);
+}
+
 #define XDP_TXMD_FLAGS_VALID ( \
 	XDP_TXMD_FLAGS_TIMESTAMP | \
 	XDP_TXMD_FLAGS_CHECKSUM | \
@@ -402,6 +419,12 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr)
 	return NULL;
 }
 
+static inline struct xdp_desc_ctx
+xsk_buff_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr)
+{
+	return (struct xdp_desc_ctx){ };
+}
+
 static inline bool xsk_buff_valid_tx_metadata(struct xsk_tx_metadata *meta)
 {
 	return false;
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 0442ba8dafa4..e50918b6283e 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -143,6 +143,14 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max);
 bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count);
 void *xp_raw_get_data(struct xsk_buff_pool *pool, u64 addr);
 dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr);
+
+struct xdp_desc_ctx {
+	dma_addr_t dma;
+	struct xsk_tx_metadata *meta;
+};
+
+struct xdp_desc_ctx xp_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr);
+
 static inline dma_addr_t xp_get_dma(struct xdp_buff_xsk *xskb)
 {
 	return xskb->dma;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 521a2938e50a..a0bb67a92d02 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -711,3 +711,43 @@ dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr)
 	       (addr & ~PAGE_MASK);
 }
 EXPORT_SYMBOL(xp_raw_get_dma);
+
+/**
+ * xp_raw_get_ctx - get &xdp_desc context
+ * @pool: XSk buff pool desc address belongs to
+ * @addr: desc address (from userspace)
+ *
+ * Helper for getting desc's DMA address and metadata pointer, if present.
+ * Saves one call on hotpath, double calculation of the actual address,
+ * and inline checks for metadata presence and sanity.
+ * Please use xsk_buff_raw_get_ctx() in drivers instead.
+ *
+ * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata
+ * pointer, if it is present and valid (initialized to %NULL otherwise).
+ */
+struct xdp_desc_ctx xp_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr)
+{
+	struct xsk_tx_metadata *meta;
+	struct xdp_desc_ctx ret;
+
+	addr = pool->unaligned ? xp_unaligned_add_offset_to_addr(addr) : addr;
+	ret = (typeof(ret)){
+		/* Same logic as in xp_raw_get_dma() */
+		.dma = (pool->dma_pages[addr >> PAGE_SHIFT] &
+			~XSK_NEXT_PG_CONTIG_MASK) + (addr & ~PAGE_MASK),
+	};
+
+	if (!pool->tx_metadata_len)
+		goto out;
+
+	/* Same logic as in xp_raw_get_data() + xsk_buff_get_metadata() */
+	meta = pool->addrs + addr - pool->tx_metadata_len;
+	if (unlikely(!xsk_buff_valid_tx_metadata(meta)))
+		goto out;
+
+	ret.meta = meta;
+
+out:
+	return ret;
+}
+EXPORT_SYMBOL(xp_raw_get_ctx);
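
With xsk_buff_raw_get_ctx() in place, the same hypothetical Tx routine from
the sketch in the commit message collapses to a single external call. Again,
only the xsk_buff_raw_get_ctx() / struct xdp_desc_ctx API added by this patch
is real; the dummy_*() names are placeholders:

        #include <net/xdp_sock_drv.h>

        struct dummy_tx_ring;
        void dummy_handle_tx_metadata(struct dummy_tx_ring *ring,
                                      struct xsk_tx_metadata *meta);
        void dummy_fill_tx_desc(struct dummy_tx_ring *ring, dma_addr_t dma,
                                u32 len);

        /* Hypothetical driver Tx routine, with this patch applied */
        static void dummy_xsk_xmit_frame(struct dummy_tx_ring *ring,
                                         struct xsk_buff_pool *pool,
                                         const struct xdp_desc *desc)
        {
                struct xdp_desc_ctx ctx;

                /* One external call: corrected DMA address plus metadata
                 * pointer, which is NULL when metadata is absent or invalid.
                 */
                ctx = xsk_buff_raw_get_ctx(pool, desc->addr);
                if (ctx.meta)
                        dummy_handle_tx_metadata(ring, ctx.meta);

                dummy_fill_tx_desc(ring, ctx.dma, desc->len);
        }

Returning the small struct by value keeps the hotpath down to a single
`if (ctx.meta)` check, with the address correction and the metadata sanity
checks done once inside xp_raw_get_ctx().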