From patchwork Mon Oct 7 12:24:53 2024
X-Patchwork-Id: 13824556
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    maciej.fijalkowski@intel.com, vadfed@meta.com
Subject: [PATCH v2 bpf-next 1/6] xsk: get rid of xdp_buff_xsk::xskb_list_node
Date: Mon, 7 Oct 2024 14:24:53 +0200
Message-Id: <20241007122458.282590-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
References: <20241007122458.282590-1-maciej.fijalkowski@intel.com>

Bring xdp_buff_xsk back to occupying two cachelines by removing
xskb_list_node - for gathering the xskb frags, free_list_node can be
reused. The head of the list (xsk_buff_pool::xskb_list) stays as-is;
only the node pointer changes. This is safe because a single
xdp_buff_xsk can never reside in two of the pool's lists simultaneously.

Signed-off-by: Maciej Fijalkowski
---
 include/net/xdp_sock_drv.h  | 14 +++++++-------
 include/net/xsk_buff_pool.h |  1 -
 net/xdp/xsk.c               |  4 ++--
 net/xdp/xsk_buff_pool.c     |  1 -
 4 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 0a5dca2b2b3f..360bc1244c6a 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -126,8 +126,8 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
 	if (likely(!xdp_buff_has_frags(xdp)))
 		goto out;
 
-	list_for_each_entry_safe(pos, tmp, xskb_list, xskb_list_node) {
-		list_del(&pos->xskb_list_node);
+	list_for_each_entry_safe(pos, tmp, xskb_list, free_list_node) {
+		list_del(&pos->free_list_node);
 		xp_free(pos);
 	}
 
@@ -140,7 +140,7 @@ static inline void xsk_buff_add_frag(struct xdp_buff *xdp)
 {
 	struct xdp_buff_xsk *frag = container_of(xdp, struct xdp_buff_xsk, xdp);
 
-	list_add_tail(&frag->xskb_list_node, &frag->pool->xskb_list);
+	list_add_tail(&frag->free_list_node, &frag->pool->xskb_list);
 }
 
 static inline struct xdp_buff *xsk_buff_get_frag(struct xdp_buff *first)
@@ -150,9 +150,9 @@ static inline struct xdp_buff *xsk_buff_get_frag(struct xdp_buff *first)
 	struct xdp_buff_xsk *frag;
 
 	frag = list_first_entry_or_null(&xskb->pool->xskb_list,
-					struct xdp_buff_xsk, xskb_list_node);
+					struct xdp_buff_xsk, free_list_node);
 	if (frag) {
-		list_del(&frag->xskb_list_node);
+		list_del(&frag->free_list_node);
 		ret = &frag->xdp;
 	}
 
@@ -163,7 +163,7 @@ static inline void xsk_buff_del_tail(struct xdp_buff *tail)
 {
 	struct xdp_buff_xsk *xskb = container_of(tail, struct xdp_buff_xsk, xdp);
 
-	list_del(&xskb->xskb_list_node);
+	list_del(&xskb->free_list_node);
 }
 
 static inline struct xdp_buff *xsk_buff_get_tail(struct xdp_buff *first)
@@ -172,7 +172,7 @@ static inline struct xdp_buff *xsk_buff_get_tail(struct xdp_buff *first)
 	struct xdp_buff_xsk *frag;
 
 	frag = list_last_entry(&xskb->pool->xskb_list, struct xdp_buff_xsk,
-			       xskb_list_node);
+			       free_list_node);
 	return &frag->xdp;
 }
 
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index bacb33f1e3e5..aa7f1d0b3a5e 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -30,7 +30,6 @@ struct xdp_buff_xsk {
 	struct xsk_buff_pool *pool;
 	u64 orig_addr;
 	struct list_head free_list_node;
-	struct list_head xskb_list_node;
 };
 
 #define XSK_CHECK_PRIV_TYPE(t) BUILD_BUG_ON(sizeof(t) > offsetofend(struct xdp_buff_xsk, cb))
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 1140b2a120ca..9c93064349a8 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -171,14 +171,14 @@ static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
 		return 0;
 
 	xskb_list = &xskb->pool->xskb_list;
-	list_for_each_entry_safe(pos, tmp, xskb_list, xskb_list_node) {
+	list_for_each_entry_safe(pos, tmp, xskb_list, free_list_node) {
 		if (list_is_singular(xskb_list))
 			contd = 0;
 		len = pos->xdp.data_end - pos->xdp.data;
 		err = __xsk_rcv_zc(xs, pos, len, contd);
 		if (err)
 			goto err;
-		list_del(&pos->xskb_list_node);
+		list_del(&pos->free_list_node);
 	}
 
 	return 0;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 521a2938e50a..e5368db7d18e 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -102,7 +102,6 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 		xskb->pool = pool;
 		xskb->xdp.frame_sz = umem->chunk_size - umem->headroom;
 		INIT_LIST_HEAD(&xskb->free_list_node);
-		INIT_LIST_HEAD(&xskb->xskb_list_node);
 		if (pool->unaligned)
 			pool->free_heads[i] = xskb;
 		else
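A quick illustration of the invariant the patch relies on may help here.
Below is a minimal userspace sketch - simplified stand-ins for the
<linux/list.h> helpers, not the kernel code - showing one embedded node
serving two mutually exclusive lists:

/* Illustration only: an xskb sits on at most one of the two lists at
 * any moment, so a single embedded node suffices for both.
 */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void init_list_head(struct list_head *h) { h->prev = h->next = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del_node(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	init_list_head(n);
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct xdp_buff_xsk {
	int id;
	struct list_head free_list_node; /* shared by xskb_list and free_list */
};

int main(void)
{
	struct list_head xskb_list, free_list;
	struct xdp_buff_xsk b = { .id = 1 };

	init_list_head(&xskb_list);
	init_list_head(&free_list);
	init_list_head(&b.free_list_node);

	/* while frags of a frame are gathered, the node is on xskb_list */
	list_add_tail(&b.free_list_node, &xskb_list);

	/* freeing first unlinks it, so it can join free_list afterwards */
	list_del_node(&b.free_list_node);
	list_add_tail(&b.free_list_node, &free_list);

	printf("xskb %d now on free_list\n",
	       container_of(free_list.next, struct xdp_buff_xsk,
			    free_list_node)->id);
	return 0;
}

Reusing the node is therefore purely a space saving; no list operation
changes its behavior.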
From patchwork Mon Oct 7 12:24:54 2024
X-Patchwork-Id: 13824557
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    maciej.fijalkowski@intel.com, vadfed@meta.com
Subject: [PATCH v2 bpf-next 2/6] xsk: s/free_list_node/list_node
Date: Mon, 7 Oct 2024 14:24:54 +0200
Message-Id: <20241007122458.282590-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
References: <20241007122458.282590-1-maciej.fijalkowski@intel.com>

Now that free_list_node serves two purposes, rename it to plain
'list_node'.

Signed-off-by: Maciej Fijalkowski
---
 include/net/xdp_sock_drv.h  | 14 +++++++-------
 include/net/xsk_buff_pool.h |  2 +-
 net/xdp/xsk.c               |  4 ++--
 net/xdp/xsk_buff_pool.c     | 14 +++++++-------
 4 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 360bc1244c6a..40085afd9160 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -126,8 +126,8 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
 	if (likely(!xdp_buff_has_frags(xdp)))
 		goto out;
 
-	list_for_each_entry_safe(pos, tmp, xskb_list, free_list_node) {
-		list_del(&pos->free_list_node);
+	list_for_each_entry_safe(pos, tmp, xskb_list, list_node) {
+		list_del(&pos->list_node);
 		xp_free(pos);
 	}
 
@@ -140,7 +140,7 @@ static inline void xsk_buff_add_frag(struct xdp_buff *xdp)
 {
 	struct xdp_buff_xsk *frag = container_of(xdp, struct xdp_buff_xsk, xdp);
 
-	list_add_tail(&frag->free_list_node, &frag->pool->xskb_list);
+	list_add_tail(&frag->list_node, &frag->pool->xskb_list);
 }
 
 static inline struct xdp_buff *xsk_buff_get_frag(struct xdp_buff *first)
@@ -150,9 +150,9 @@ static inline struct xdp_buff *xsk_buff_get_frag(struct xdp_buff *first)
 	struct xdp_buff_xsk *frag;
 
 	frag = list_first_entry_or_null(&xskb->pool->xskb_list,
-					struct xdp_buff_xsk, free_list_node);
+					struct xdp_buff_xsk, list_node);
 	if (frag) {
-		list_del(&frag->free_list_node);
+		list_del(&frag->list_node);
 		ret = &frag->xdp;
 	}
 
@@ -163,7 +163,7 @@ static inline void xsk_buff_del_tail(struct xdp_buff *tail)
 {
 	struct xdp_buff_xsk *xskb = container_of(tail, struct xdp_buff_xsk, xdp);
 
-	list_del(&xskb->free_list_node);
+	list_del(&xskb->list_node);
 }
 
 static inline struct xdp_buff *xsk_buff_get_tail(struct xdp_buff *first)
@@ -172,7 +172,7 @@ static inline struct xdp_buff *xsk_buff_get_tail(struct xdp_buff *first)
 	struct xdp_buff_xsk *frag;
 
 	frag = list_last_entry(&xskb->pool->xskb_list, struct xdp_buff_xsk,
-			       free_list_node);
+			       list_node);
 	return &frag->xdp;
 }
 
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index aa7f1d0b3a5e..af8b6f776f86 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -29,7 +29,7 @@ struct xdp_buff_xsk {
 	dma_addr_t frame_dma;
 	struct xsk_buff_pool *pool;
 	u64 orig_addr;
-	struct list_head free_list_node;
+	struct list_head list_node;
 };
 
 #define XSK_CHECK_PRIV_TYPE(t) BUILD_BUG_ON(sizeof(t) > offsetofend(struct xdp_buff_xsk, cb))
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 9c93064349a8..520023405908 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -171,14 +171,14 @@ static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
 		return 0;
 
 	xskb_list = &xskb->pool->xskb_list;
-	list_for_each_entry_safe(pos, tmp, xskb_list, free_list_node) {
+	list_for_each_entry_safe(pos, tmp, xskb_list, list_node) {
 		if (list_is_singular(xskb_list))
 			contd = 0;
 		len = pos->xdp.data_end - pos->xdp.data;
 		err = __xsk_rcv_zc(xs, pos, len, contd);
 		if (err)
 			goto err;
-		list_del(&pos->free_list_node);
+		list_del(&pos->list_node);
 	}
 
 	return 0;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index e5368db7d18e..973557d5e4f7 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -101,7 +101,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 		xskb = &pool->heads[i];
 		xskb->pool = pool;
 		xskb->xdp.frame_sz = umem->chunk_size - umem->headroom;
-		INIT_LIST_HEAD(&xskb->free_list_node);
+		INIT_LIST_HEAD(&xskb->list_node);
 		if (pool->unaligned)
 			pool->free_heads[i] = xskb;
 		else
@@ -549,8 +549,8 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
 	} else {
 		pool->free_list_cnt--;
 		xskb = list_first_entry(&pool->free_list, struct xdp_buff_xsk,
-					free_list_node);
-		list_del_init(&xskb->free_list_node);
+					list_node);
+		list_del_init(&xskb->list_node);
 	}
 
 	xskb->xdp.data = xskb->xdp.data_hard_start + XDP_PACKET_HEADROOM;
@@ -616,8 +616,8 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
 
 	i = nb_entries;
 	while (i--) {
-		xskb = list_first_entry(&pool->free_list, struct xdp_buff_xsk, free_list_node);
-		list_del_init(&xskb->free_list_node);
+		xskb = list_first_entry(&pool->free_list, struct xdp_buff_xsk, list_node);
+		list_del_init(&xskb->list_node);
 
 		*xdp = &xskb->xdp;
 		xdp++;
@@ -687,11 +687,11 @@ EXPORT_SYMBOL(xp_can_alloc);
 
 void xp_free(struct xdp_buff_xsk *xskb)
 {
-	if (!list_empty(&xskb->free_list_node))
+	if (!list_empty(&xskb->list_node))
 		return;
 
 	xskb->pool->free_list_cnt++;
-	list_add(&xskb->free_list_node, &xskb->pool->free_list);
+	list_add(&xskb->list_node, &xskb->pool->free_list);
 }
 EXPORT_SYMBOL(xp_free);
From patchwork Mon Oct 7 12:24:55 2024
X-Patchwork-Id: 13824558
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    maciej.fijalkowski@intel.com, vadfed@meta.com
Subject: [PATCH v2 bpf-next 3/6] xsk: get rid of xdp_buff_xsk::orig_addr
Date: Mon, 7 Oct 2024 14:24:55 +0200
Message-Id: <20241007122458.282590-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
References: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
Continue slimming down xdp_buff_xsk by removing its orig_addr member.
Everywhere it was used, the address can be calculated from
xdp->data_hard_start, so there is no need to carry it around in a struct
that sits in the hot path. It was used for initializing
xdp_buff_xsk::frame_dma during pool setup and as a shortcut in
xp_get_handle() to retrieve the address provided to the xsk Rx queue.
Signed-off-by: Maciej Fijalkowski
---
 include/net/xsk_buff_pool.h | 19 +++++++++++--------
 net/xdp/xsk.c               |  2 +-
 net/xdp/xsk_buff_pool.c     |  4 +++-
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index af8b6f776f86..468a23b1b4c5 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -28,7 +28,6 @@ struct xdp_buff_xsk {
 	dma_addr_t dma;
 	dma_addr_t frame_dma;
 	struct xsk_buff_pool *pool;
-	u64 orig_addr;
 	struct list_head list_node;
 };
 
@@ -119,7 +118,6 @@ void xp_free(struct xdp_buff_xsk *xskb);
 
 static inline void xp_init_xskb_addr(struct xdp_buff_xsk *xskb, struct xsk_buff_pool *pool, u64 addr)
 {
-	xskb->orig_addr = addr;
 	xskb->xdp.data_hard_start = pool->addrs + addr + pool->headroom;
 }
 
@@ -221,14 +219,19 @@ static inline void xp_release(struct xdp_buff_xsk *xskb)
 		xskb->pool->free_heads[xskb->pool->free_heads_cnt++] = xskb;
 }
 
-static inline u64 xp_get_handle(struct xdp_buff_xsk *xskb)
+static inline u64 xp_get_handle(struct xdp_buff_xsk *xskb,
+				struct xsk_buff_pool *pool)
 {
-	u64 offset = xskb->xdp.data - xskb->xdp.data_hard_start;
+	u64 orig_addr = xskb->xdp.data - pool->addrs;
+	u64 offset;
 
-	offset += xskb->pool->headroom;
-	if (!xskb->pool->unaligned)
-		return xskb->orig_addr + offset;
-	return xskb->orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
+	if (!pool->unaligned)
+		return orig_addr;
+
+	offset = xskb->xdp.data - xskb->xdp.data_hard_start;
+	orig_addr -= offset;
+	offset += pool->headroom;
+	return orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
 }
 
 static inline bool xp_tx_metadata_enabled(const struct xsk_buff_pool *pool)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 520023405908..6c31c1de1619 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -141,7 +141,7 @@ static int __xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff_xsk *xskb, u32 len,
 	u64 addr;
 	int err;
 
-	addr = xp_get_handle(xskb);
+	addr = xp_get_handle(xskb, xskb->pool);
 	err = xskq_prod_reserve_desc(xs->rx, addr, len, flags);
 	if (err) {
 		xs->rx_queue_full++;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 973557d5e4f7..7ecd4ccd2473 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -416,8 +416,10 @@ static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_
 
 	for (i = 0; i < pool->heads_cnt; i++) {
 		struct xdp_buff_xsk *xskb = &pool->heads[i];
+		u64 orig_addr;
 
-		xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
+		orig_addr = xskb->xdp.data_hard_start - pool->addrs - pool->headroom;
+		xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, orig_addr);
 	}
 }
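To see why orig_addr is recoverable, recall that xp_init_xskb_addr()
stores data_hard_start = pool->addrs + addr + pool->headroom. Below is a
standalone sketch of the arithmetic (userspace C with hypothetical pool
values, not the kernel code):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical pool layout */
	uint64_t addrs = 0x100000;	/* UMEM base */
	uint64_t headroom = 256;
	uint64_t addr = 0x4000;		/* chunk address from the fill ring */

	/* what xp_init_xskb_addr() stores in the xdp_buff */
	uint64_t data_hard_start = addrs + addr + headroom;

	/* what xp_init_dma_info() now recomputes instead of reading orig_addr */
	uint64_t orig_addr = data_hard_start - addrs - headroom;
	assert(orig_addr == addr);

	/* aligned pools: the handle posted to the Rx ring is simply the
	 * payload position relative to the UMEM base (new xp_get_handle())
	 */
	uint64_t data = data_hard_start + 64;	/* payload starts 64 B in */
	printf("addr=%#llx handle=%#llx\n",
	       (unsigned long long)orig_addr,
	       (unsigned long long)(data - addrs));
	return 0;
}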
From patchwork Mon Oct 7 12:24:56 2024
X-Patchwork-Id: 13824559
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    maciej.fijalkowski@intel.com, vadfed@meta.com
Subject: [PATCH v2 bpf-next 4/6] xsk: carry a copy of xdp_zc_max_segs within xsk_buff_pool
Date: Mon, 7 Oct 2024 14:24:56 +0200
Message-Id: <20241007122458.282590-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
References: <20241007122458.282590-1-maciej.fijalkowski@intel.com>

Carry a copy of xdp_zc_max_segs in xsk_buff_pool so that we avoid
dereferencing struct net_device in the hot path.
Signed-off-by: Maciej Fijalkowski
---
 include/net/xsk_buff_pool.h | 1 +
 net/xdp/xsk_buff_pool.c     | 1 +
 net/xdp/xsk_queue.h         | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 468a23b1b4c5..bb03cee716b3 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -76,6 +76,7 @@ struct xsk_buff_pool {
 	u32 chunk_size;
 	u32 chunk_shift;
 	u32 frame_len;
+	u32 xdp_zc_max_segs;
 	u8 tx_metadata_len; /* inherited from umem */
 	u8 cached_need_wakeup;
 	bool uses_need_wakeup;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 7ecd4ccd2473..e946ba4a5ccf 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -229,6 +229,7 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
 		goto err_unreg_xsk;
 	}
 	pool->umem->zc = true;
+	pool->xdp_zc_max_segs = netdev->xdp_zc_max_segs;
 	return 0;
 
 err_unreg_xsk:
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 406b20dfee8d..46d87e961ad6 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -260,7 +260,7 @@ u32 xskq_cons_read_desc_batch(struct xsk_queue *q, struct xsk_buff_pool *pool,
 			nr_frags = 0;
 		} else {
 			nr_frags++;
-			if (nr_frags == pool->netdev->xdp_zc_max_segs) {
+			if (nr_frags == pool->xdp_zc_max_segs) {
 				nr_frags = 0;
 				break;
 			}
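For context, here is a loose userspace model of the loop that consumes
the cached value - simplified from xskq_cons_read_desc_batch(), with
assumed semantics and a made-up descriptor type:

#include <stdint.h>
#include <stdio.h>

struct desc { int is_frag; }; /* 1 = continuation of the current frame */

/* Scan descriptors, counting continuations of the current frame, and
 * stop once a frame's continuation count reaches the cap.
 */
static uint32_t cap_batch(const struct desc *d, uint32_t n, uint32_t zc_max_segs)
{
	uint32_t nr_frags = 0;
	uint32_t i;

	for (i = 0; i < n; i++) {
		if (!d[i].is_frag) {
			nr_frags = 0; /* a new frame starts here */
		} else if (++nr_frags == zc_max_segs) {
			break; /* current frame hit the frag cap */
		}
	}
	return i; /* descriptors scanned before the cap was hit */
}

int main(void)
{
	struct desc batch[] = { {0}, {1}, {1}, {1}, {0} };

	/* with a cap of 3, the fourth descriptor trips the limit */
	printf("scanned %u of 5 descriptors\n", cap_batch(batch, 5, 3));
	return 0;
}

Since the cap is consulted once per frag in this loop, caching it in the
pool removes a netdev pointer chase from every iteration.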
From patchwork Mon Oct 7 12:24:57 2024
X-Patchwork-Id: 13824560
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    maciej.fijalkowski@intel.com, vadfed@meta.com
Subject: [PATCH v2 bpf-next 5/6] xsk: wrap duplicated code to function
Date: Mon, 7 Oct 2024 14:24:57 +0200
Message-Id: <20241007122458.282590-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
References: <20241007122458.282590-1-maciej.fijalkowski@intel.com>

Both allocation paths contain exactly the same code for getting and
initializing an xskb. Pull it out into a common function.
Signed-off-by: Maciej Fijalkowski
---
 net/xdp/xsk_buff_pool.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index e946ba4a5ccf..ae71da7d2cd6 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -503,6 +503,22 @@ static bool xp_check_aligned(struct xsk_buff_pool *pool, u64 *addr)
 	return *addr < pool->addrs_cnt;
 }
 
+static struct xdp_buff_xsk *xp_get_xskb(struct xsk_buff_pool *pool, u64 addr)
+{
+	struct xdp_buff_xsk *xskb;
+
+	if (pool->unaligned) {
+		xskb = pool->free_heads[--pool->free_heads_cnt];
+		xp_init_xskb_addr(xskb, pool, addr);
+		if (pool->dma_pages)
+			xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
+	} else {
+		xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
+	}
+
+	return xskb;
+}
+
 static struct xdp_buff_xsk *__xp_alloc(struct xsk_buff_pool *pool)
 {
 	struct xdp_buff_xsk *xskb;
@@ -528,14 +544,7 @@ static struct xdp_buff_xsk *__xp_alloc(struct xsk_buff_pool *pool)
 		break;
 	}
 
-	if (pool->unaligned) {
-		xskb = pool->free_heads[--pool->free_heads_cnt];
-		xp_init_xskb_addr(xskb, pool, addr);
-		if (pool->dma_pages)
-			xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
-	} else {
-		xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
-	}
+	xskb = xp_get_xskb(pool, addr);
 
 	xskq_cons_release(pool->fq);
 	return xskb;
@@ -593,14 +602,7 @@ static u32 xp_alloc_new_from_fq(struct xsk_buff_pool *pool, struct xdp_buff **xd
 			continue;
 		}
 
-		if (pool->unaligned) {
-			xskb = pool->free_heads[--pool->free_heads_cnt];
-			xp_init_xskb_addr(xskb, pool, addr);
-			if (pool->dma_pages)
-				xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
-		} else {
-			xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
-		}
+		xskb = xp_get_xskb(pool, addr);
 
 		*xdp = &xskb->xdp;
 		xdp++;
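The helper unifies two lookup strategies. A simplified userspace sketch
of the idea follows, with hypothetical sizes and trimmed-down structs
rather than the kernel types:

#include <stdint.h>
#include <stdio.h>

#define CHUNK_SHIFT 12 /* hypothetical 4 KiB chunks */
#define NUM_CHUNKS  4

struct xdp_buff_xsk { uint64_t addr; };

struct xsk_buff_pool {
	int unaligned;
	struct xdp_buff_xsk heads[NUM_CHUNKS];
	struct xdp_buff_xsk *free_heads[NUM_CHUNKS];
	uint32_t free_heads_cnt;
};

static struct xdp_buff_xsk *get_xskb(struct xsk_buff_pool *pool, uint64_t addr)
{
	struct xdp_buff_xsk *xskb;

	if (pool->unaligned) {
		/* any head may describe any address: pop one and bind it */
		xskb = pool->free_heads[--pool->free_heads_cnt];
		xskb->addr = addr;
	} else {
		/* heads[] maps 1:1 to chunks: index by chunk number */
		xskb = &pool->heads[addr >> CHUNK_SHIFT];
	}
	return xskb;
}

int main(void)
{
	struct xsk_buff_pool pool = { .unaligned = 0 };

	printf("aligned addr 0x2000 -> heads[%td]\n",
	       get_xskb(&pool, 0x2000) - pool.heads);
	return 0;
}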
From patchwork Mon Oct 7 12:24:58 2024
X-Patchwork-Id: 13824561
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    maciej.fijalkowski@intel.com, vadfed@meta.com
Subject: [PATCH v2 bpf-next 6/6] xsk: use xsk_buff_pool directly for cq functions
Date: Mon, 7 Oct 2024 14:24:58 +0200
Message-Id: <20241007122458.282590-7-maciej.fijalkowski@intel.com>
In-Reply-To: <20241007122458.282590-1-maciej.fijalkowski@intel.com>
References: <20241007122458.282590-1-maciej.fijalkowski@intel.com>

Currently, xsk_cq_{reserve_addr,submit,cancel}_locked() take an xdp_sock
as input, but it is used only to pull out the xsk_buff_pool pointer.
Change these functions to take the pool pointer directly and avoid the
unnecessary dereferences.
Signed-off-by: Maciej Fijalkowski
---
 net/xdp/xsk.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 6c31c1de1619..7d7e37f53708 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -527,34 +527,34 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
 	return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
 }
 
-static int xsk_cq_reserve_addr_locked(struct xdp_sock *xs, u64 addr)
+static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, u64 addr)
 {
 	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(&xs->pool->cq_lock, flags);
-	ret = xskq_prod_reserve_addr(xs->pool->cq, addr);
-	spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+	spin_lock_irqsave(&pool->cq_lock, flags);
+	ret = xskq_prod_reserve_addr(pool->cq, addr);
+	spin_unlock_irqrestore(&pool->cq_lock, flags);
 
 	return ret;
 }
 
-static void xsk_cq_submit_locked(struct xdp_sock *xs, u32 n)
+static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, u32 n)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&xs->pool->cq_lock, flags);
-	xskq_prod_submit_n(xs->pool->cq, n);
-	spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+	spin_lock_irqsave(&pool->cq_lock, flags);
+	xskq_prod_submit_n(pool->cq, n);
+	spin_unlock_irqrestore(&pool->cq_lock, flags);
 }
 
-static void xsk_cq_cancel_locked(struct xdp_sock *xs, u32 n)
+static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&xs->pool->cq_lock, flags);
-	xskq_prod_cancel_n(xs->pool->cq, n);
-	spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+	spin_lock_irqsave(&pool->cq_lock, flags);
+	xskq_prod_cancel_n(pool->cq, n);
+	spin_unlock_irqrestore(&pool->cq_lock, flags);
 }
 
 static u32 xsk_get_num_desc(struct sk_buff *skb)
@@ -571,7 +571,7 @@ static void xsk_destruct_skb(struct sk_buff *skb)
 			*compl->tx_timestamp = ktime_get_tai_fast_ns();
 	}
 
-	xsk_cq_submit_locked(xdp_sk(skb->sk), xsk_get_num_desc(skb));
+	xsk_cq_submit_locked(xdp_sk(skb->sk)->pool, xsk_get_num_desc(skb));
 	sock_wfree(skb);
 }
 
@@ -587,7 +587,7 @@ static void xsk_consume_skb(struct sk_buff *skb)
 	struct xdp_sock *xs = xdp_sk(skb->sk);
 
 	skb->destructor = sock_wfree;
-	xsk_cq_cancel_locked(xs, xsk_get_num_desc(skb));
+	xsk_cq_cancel_locked(xs->pool, xsk_get_num_desc(skb));
 	/* Free skb without triggering the perf drop trace */
 	consume_skb(skb);
 	xs->skb = NULL;
@@ -765,7 +765,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 		xskq_cons_release(xs->tx);
 	} else {
 		/* Let application retry */
-		xsk_cq_cancel_locked(xs, 1);
+		xsk_cq_cancel_locked(xs->pool, 1);
 	}
 
 	return ERR_PTR(err);
@@ -802,7 +802,7 @@ static int __xsk_generic_xmit(struct sock *sk)
 		 * if there is space in it. This avoids having to implement
 		 * any buffering in the Tx path.
 		 */
-		if (xsk_cq_reserve_addr_locked(xs, desc.addr))
+		if (xsk_cq_reserve_addr_locked(xs->pool, desc.addr))
			goto out;

		skb = xsk_build_skb(xs, &desc);
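A rough userspace model of the resulting API shape follows: a
pthread_mutex stands in for the kernel spinlock and a trivial ring
replaces xskq, so the semantics are assumed and simplified, not the
kernel's.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define CQ_SLOTS 64

struct xsk_buff_pool {
	pthread_mutex_t cq_lock;
	uint64_t cq[CQ_SLOTS];
	uint32_t cached_prod; /* reserved, not yet visible to consumer */
	uint32_t prod;        /* visible to consumer */
};

/* Callers hand in the pool itself, so the helpers never chase xs->pool. */
static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, uint64_t addr)
{
	int ret = 0;

	pthread_mutex_lock(&pool->cq_lock);
	if (pool->cached_prod - pool->prod >= CQ_SLOTS)
		ret = -1; /* ring full */
	else
		pool->cq[pool->cached_prod++ % CQ_SLOTS] = addr;
	pthread_mutex_unlock(&pool->cq_lock);
	return ret;
}

static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, uint32_t n)
{
	pthread_mutex_lock(&pool->cq_lock);
	pool->prod += n; /* publish n previously reserved entries */
	pthread_mutex_unlock(&pool->cq_lock);
}

static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, uint32_t n)
{
	pthread_mutex_lock(&pool->cq_lock);
	pool->cached_prod -= n; /* give back n reservations */
	pthread_mutex_unlock(&pool->cq_lock);
}

int main(void)
{
	struct xsk_buff_pool pool = { .cq_lock = PTHREAD_MUTEX_INITIALIZER };

	if (!xsk_cq_reserve_addr_locked(&pool, 0x4000))
		xsk_cq_submit_locked(&pool, 1);
	xsk_cq_cancel_locked(&pool, 0); /* nothing left to cancel */
	printf("prod=%u cached=%u\n", pool.prod, pool.cached_prod);
	return 0;
}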