From patchwork Fri Jan 17 16:11:39 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943534
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Subject: [PATCH net-next v12 01/10] net: page_pool: don't cast mp param to devmem
Date: Fri, 17 Jan 2025 16:11:39 +0000

page_pool_check_memory_provider() is a generic path and shouldn't assume
anything about the actual type of the memory provider argument. It's fine
while devmem is the only provider, but cast away the devmem specific
binding types to avoid confusion.
Reviewed-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
---
 net/core/page_pool_user.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 48335766c1bf..8d31c71bea1a 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -353,7 +353,7 @@ void page_pool_unlist(struct page_pool *pool)
 int page_pool_check_memory_provider(struct net_device *dev,
                                     struct netdev_rx_queue *rxq)
 {
-        struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
+        void *binding = rxq->mp_params.mp_priv;
         struct page_pool *pool;
         struct hlist_node *n;
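The change above works because the generic page_pool path only ever stores and
compares the provider payload for identity; it never dereferences it, so an
opaque pointer is enough. A stand-alone user-space C sketch of that idea, with
made-up structure and function names rather than the kernel's:

#include <stdio.h>

/* Opaque provider payload: generic code stores and compares it,
 * only the owning provider knows the real type behind it. */
struct rx_queue_params {
        void *mp_priv;
};

struct pool {
        void *mp_priv;
        struct pool *next;
};

/* Generic check: walk the pools and test identity, no casts needed. */
static int queue_has_pool_with_provider(struct pool *pools,
                                        const struct rx_queue_params *params)
{
        for (struct pool *p = pools; p; p = p->next)
                if (p->mp_priv == params->mp_priv)
                        return 1;
        return 0;
}

int main(void)
{
        int dummy_binding;      /* stands in for a devmem binding */
        struct pool a = { .mp_priv = &dummy_binding, .next = NULL };
        struct rx_queue_params q = { .mp_priv = &dummy_binding };

        printf("bound: %d\n", queue_has_pool_with_provider(&a, &q));
        return 0;
}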

From patchwork Fri Jan 17 16:11:40 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943533
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Subject: [PATCH net-next v12 02/10] net: prefix devmem specific helpers
Date: Fri, 17 Jan 2025 16:11:40 +0000

Add prefixes to all helpers that are specific to devmem TCP, i.e.
net_iov_binding[_id].
Reviewed-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
---
 net/core/devmem.c | 2 +-
 net/core/devmem.h | 14 +++++++-------
 net/ipv4/tcp.c | 2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index c971b8aceac8..acd3e390a3da 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -94,7 +94,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 void net_devmem_free_dmabuf(struct net_iov *niov)
 {
-        struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+        struct net_devmem_dmabuf_binding *binding = net_devmem_iov_binding(niov);
         unsigned long dma_addr = net_devmem_get_dma_addr(niov);

         if (WARN_ON(!gen_pool_has_addr(binding->chunk_pool, dma_addr,
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..99782ddeca40 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -86,11 +86,16 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
 }

 static inline struct net_devmem_dmabuf_binding *
-net_iov_binding(const struct net_iov *niov)
+net_devmem_iov_binding(const struct net_iov *niov)
 {
         return net_iov_owner(niov)->binding;
 }

+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
+{
+        return net_devmem_iov_binding(niov)->id;
+}
+
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
         struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
@@ -99,11 +104,6 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
                ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
 }

-static inline u32 net_iov_binding_id(const struct net_iov *niov)
-{
-        return net_iov_owner(niov)->binding->id;
-}
-
 static inline void
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
@@ -171,7 +171,7 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
         return 0;
 }

-static inline u32 net_iov_binding_id(const struct net_iov *niov)
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
         return 0;
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..b872de9a8271 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2494,7 +2494,7 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
                         /* Will perform the exchange later */
                         dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx];
-                        dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov);
+                        dmabuf_cmsg.dmabuf_id = net_devmem_iov_binding_id(niov);

                         offset += copy;
                         remaining_len -= copy;

From patchwork Fri Jan 17 16:11:41 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943535
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Subject: [PATCH net-next v12 03/10] net: generalise net_iov chunk owners
Date: Fri, 17 Jan 2025 16:11:41 +0000
Message-ID: <7a5e4bbbcbc39a2db44bb89274304155eae76178.1737129699.git.asml.silence@gmail.com>

Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
which serves as a useful abstraction to share data and provide a context.
However, it's too devmem specific, and we want to reuse it for other
memory providers, and for that we need to decouple net_iov from devmem.
Make net_iov point to a new base structure called net_iov_area, which
dmabuf_genpool_chunk_owner extends.

Reviewed-by: Mina Almasry
Acked-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 include/net/netmem.h | 21 ++++++++++++++++++++-
 net/core/devmem.c | 25 +++++++++++++------------
 net/core/devmem.h | 25 +++++++++----------------
 3 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 1b58faa4f20f..c61d5b21e7b4 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -24,11 +24,20 @@ struct net_iov {
         unsigned long __unused_padding;
         unsigned long pp_magic;
         struct page_pool *pp;
-        struct dmabuf_genpool_chunk_owner *owner;
+        struct net_iov_area *owner;
         unsigned long dma_addr;
         atomic_long_t pp_ref_count;
 };

+struct net_iov_area {
+        /* Array of net_iovs for this area. */
+        struct net_iov *niovs;
+        size_t num_niovs;
+
+        /* Offset into the dma-buf where this chunk starts. */
+        unsigned long base_virtual;
+};
+
 /* These fields in struct page are used by the page_pool and net stack:
  *
  *        struct {
@@ -54,6 +63,16 @@ NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
 NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
 #undef NET_IOV_ASSERT_OFFSET

+static inline struct net_iov_area *net_iov_owner(const struct net_iov *niov)
+{
+        return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+        return niov - net_iov_owner(niov)->niovs;
+}
+
 /* netmem */

 /**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index acd3e390a3da..3d91fba2bd26 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -33,14 +33,15 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 {
         struct dmabuf_genpool_chunk_owner *owner = chunk->owner;

-        kvfree(owner->niovs);
+        kvfree(owner->area.niovs);
         kfree(owner);
 }

 static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
 {
-        struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+        struct dmabuf_genpool_chunk_owner *owner;

+        owner = net_devmem_iov_to_chunk_owner(niov);
         return owner->base_dma_addr +
                ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
 }
@@ -83,7 +84,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
         offset = dma_addr - owner->base_dma_addr;
         index = offset / PAGE_SIZE;
-        niov = &owner->niovs[index];
+        niov = &owner->area.niovs[index];

         niov->pp_magic = 0;
         niov->pp = NULL;
@@ -261,9 +262,9 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
                         goto err_free_chunks;
                 }

-                owner->base_virtual = virtual;
+                owner->area.base_virtual = virtual;
                 owner->base_dma_addr = dma_addr;
-                owner->num_niovs = len / PAGE_SIZE;
+                owner->area.num_niovs = len / PAGE_SIZE;
                 owner->binding = binding;

                 err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
@@ -275,17 +276,17 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
                         goto err_free_chunks;
                 }

-                owner->niovs = kvmalloc_array(owner->num_niovs,
-                                              sizeof(*owner->niovs),
-                                              GFP_KERNEL);
-                if (!owner->niovs) {
+                owner->area.niovs = kvmalloc_array(owner->area.num_niovs,
+                                                   sizeof(*owner->area.niovs),
+                                                   GFP_KERNEL);
+                if (!owner->area.niovs) {
                         err = -ENOMEM;
                         goto err_free_chunks;
                 }

-                for (i = 0; i < owner->num_niovs; i++) {
-                        niov = &owner->niovs[i];
-                        niov->owner = owner;
+                for (i = 0; i < owner->area.num_niovs; i++) {
+                        niov = &owner->area.niovs[i];
+                        niov->owner = &owner->area;
                         page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
                                                       net_devmem_get_dma_addr(niov));
                 }
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 99782ddeca40..a2b9913e9a17 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -10,6 +10,8 @@
 #ifndef _NET_DEVMEM_H
 #define _NET_DEVMEM_H

+#include
+
 struct netlink_ext_ack;

 struct net_devmem_dmabuf_binding {
@@ -51,17 +53,11 @@ struct net_devmem_dmabuf_binding {
  * allocations from this chunk.
  */
 struct dmabuf_genpool_chunk_owner {
-        /* Offset into the dma-buf where this chunk starts. */
-        unsigned long base_virtual;
+        struct net_iov_area area;
+        struct net_devmem_dmabuf_binding *binding;

         /* dma_addr of the start of the chunk. */
         dma_addr_t base_dma_addr;
-
-        /* Array of net_iovs for this chunk. */
-        struct net_iov *niovs;
-        size_t num_niovs;
-
-        struct net_devmem_dmabuf_binding *binding;
 };

 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
@@ -75,20 +71,17 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 void dev_dmabuf_uninstall(struct net_device *dev);

 static inline struct dmabuf_genpool_chunk_owner *
-net_iov_owner(const struct net_iov *niov)
+net_devmem_iov_to_chunk_owner(const struct net_iov *niov)
 {
-        return niov->owner;
-}
+        struct net_iov_area *owner = net_iov_owner(niov);

-static inline unsigned int net_iov_idx(const struct net_iov *niov)
-{
-        return niov - net_iov_owner(niov)->niovs;
+        return container_of(owner, struct dmabuf_genpool_chunk_owner, area);
 }

 static inline struct net_devmem_dmabuf_binding *
 net_devmem_iov_binding(const struct net_iov *niov)
 {
-        return net_iov_owner(niov)->binding;
+        return net_devmem_iov_to_chunk_owner(niov)->binding;
 }

 static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
@@ -98,7 +91,7 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)

 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
-        struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+        struct net_iov_area *owner = net_iov_owner(niov);

         return owner->base_virtual +
                ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
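The patch above is the usual "embed a base struct and container_of() back to
the outer type" pattern: generic code sees only net_iov_area, while devmem-only
code recovers its dmabuf_genpool_chunk_owner. A self-contained user-space C
sketch of the same pattern, with invented names rather than the kernel
structures:

#include <stdio.h>
#include <stddef.h>

/* Generic area description that any provider can embed. */
struct iov_area {
        int num_items;
};

/* Provider-specific owner extends the generic area. */
struct dmabuf_owner {
        struct iov_area area;
        unsigned long base_dma_addr;
};

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

/* Generic code only needs the embedded base... */
static int area_items(const struct iov_area *area)
{
        return area->num_items;
}

/* ...while owner-specific code gets back to the outer struct. */
static unsigned long owner_dma_addr(struct iov_area *area)
{
        struct dmabuf_owner *owner =
                container_of(area, struct dmabuf_owner, area);

        return owner->base_dma_addr;
}

int main(void)
{
        struct dmabuf_owner owner = {
                .area = { .num_items = 4 },
                .base_dma_addr = 0x1000,
        };

        printf("%d items at %#lx\n",
               area_items(&owner.area), owner_dma_addr(&owner.area));
        return 0;
}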

From patchwork Fri Jan 17 16:11:42 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943536
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Subject: [PATCH net-next v12 04/10] net: page_pool: create hooks for custom memory providers
Date: Fri, 17 Jan 2025 16:11:42 +0000

A spin off from the original page pool memory providers patch by Jakub,
which allows extending page pools with custom allocators. One of such
providers is devmem TCP, and the other is io_uring zerocopy added in
following patches.
Link: https://lore.kernel.org/netdev/20230707183935.997267-7-kuba@kernel.org/
Co-developed-by: Jakub Kicinski # initial mp proposal
Signed-off-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 include/net/page_pool/memory_provider.h | 15 +++++++++++++++
 include/net/page_pool/types.h | 4 ++++
 net/core/devmem.c | 15 ++++++++++++++-
 net/core/page_pool.c | 23 +++++++++++++--------
 4 files changed, 48 insertions(+), 9 deletions(-)
 create mode 100644 include/net/page_pool/memory_provider.h

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
new file mode 100644
index 000000000000..e49d0a52629d
--- /dev/null
+++ b/include/net/page_pool/memory_provider.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
+#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
+
+#include
+#include
+
+struct memory_provider_ops {
+        netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+        bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
+        int (*init)(struct page_pool *pool);
+        void (*destroy)(struct page_pool *pool);
+};
+
+#endif
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index ed4cd114180a..88f65c3e2ad9 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -152,8 +152,11 @@ struct page_pool_stats {
  */
 #define PAGE_POOL_FRAG_GROUP_ALIGN (4 * sizeof(long))

+struct memory_provider_ops;
+
 struct pp_memory_provider_params {
         void *mp_priv;
+        const struct memory_provider_ops *mp_ops;
 };

 struct page_pool {
@@ -216,6 +219,7 @@ struct page_pool {
         struct ptr_ring ring;

         void *mp_priv;
+        const struct memory_provider_ops *mp_ops;

 #ifdef CONFIG_PAGE_POOL_STATS
         /* recycle stats are per-cpu to avoid locking */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 3d91fba2bd26..1a88ab6faf06 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "devmem.h"
@@ -27,6 +28,8 @@
 /* Protected by rtnl_lock() */
 static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);

+static const struct memory_provider_ops dmabuf_devmem_ops;
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
                                                struct gen_pool_chunk *chunk,
                                                void *not_used)
@@ -118,6 +121,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
                 WARN_ON(rxq->mp_params.mp_priv != binding);

                 rxq->mp_params.mp_priv = NULL;
+                rxq->mp_params.mp_ops = NULL;

                 rxq_idx = get_netdev_rx_queue_index(rxq);

@@ -153,7 +157,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
         }

         rxq = __netif_get_rx_queue(dev, rxq_idx);
-        if (rxq->mp_params.mp_priv) {
+        if (rxq->mp_params.mp_ops) {
                 NL_SET_ERR_MSG(extack, "designated queue already memory provider bound");
                 return -EEXIST;
         }
@@ -171,6 +175,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
                 return err;

         rxq->mp_params.mp_priv = binding;
+        rxq->mp_params.mp_ops = &dmabuf_devmem_ops;

         err = netdev_rx_queue_restart(dev, rxq_idx);
         if (err)
@@ -180,6 +185,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,

 err_xa_erase:
         rxq->mp_params.mp_priv = NULL;
+        rxq->mp_params.mp_ops = NULL;
         xa_erase(&binding->bound_rxqs, xa_idx);

         return err;
@@ -399,3 +405,10 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
         /* We don't want the page pool put_page()ing our net_iovs. */
         return false;
 }
+
+static const struct memory_provider_ops dmabuf_devmem_ops = {
+        .init = mp_dmabuf_devmem_init,
+        .destroy = mp_dmabuf_devmem_destroy,
+        .alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
+        .release_netmem = mp_dmabuf_devmem_release_page,
+};
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a3de752c5178..199564b03533 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include

@@ -285,13 +286,19 @@ static int page_pool_init(struct page_pool *pool,
                 rxq = __netif_get_rx_queue(pool->slow.netdev,
                                            pool->slow.queue_idx);
                 pool->mp_priv = rxq->mp_params.mp_priv;
+                pool->mp_ops = rxq->mp_params.mp_ops;
         }

-        if (pool->mp_priv) {
+        if (pool->mp_ops) {
                 if (!pool->dma_map || !pool->dma_sync)
                         return -EOPNOTSUPP;

-                err = mp_dmabuf_devmem_init(pool);
+                if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
+                        err = -EFAULT;
+                        goto free_ptr_ring;
+                }
+
+                err = pool->mp_ops->init(pool);
                 if (err) {
                         pr_warn("%s() mem-provider init failed %d\n", __func__,
                                 err);
@@ -588,8 +595,8 @@ netmem_ref page_pool_alloc_netmems(struct page_pool *pool, gfp_t gfp)
                 return netmem;

         /* Slow-path: cache empty, do real allocation */
-        if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-                netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp);
+        if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+                netmem = pool->mp_ops->alloc_netmems(pool, gfp);
         else
                 netmem = __page_pool_alloc_pages_slow(pool, gfp);
         return netmem;
@@ -680,8 +687,8 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
         bool put;

         put = true;
-        if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-                put = mp_dmabuf_devmem_release_page(pool, netmem);
+        if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+                put = pool->mp_ops->release_netmem(pool, netmem);
         else
                 __page_pool_release_page_dma(pool, netmem);

@@ -1049,8 +1056,8 @@ static void __page_pool_destroy(struct page_pool *pool)
         page_pool_unlist(pool);
         page_pool_uninit(pool);

-        if (pool->mp_priv) {
-                mp_dmabuf_devmem_destroy(pool);
+        if (pool->mp_ops) {
+                pool->mp_ops->destroy(pool);
                 static_branch_dec(&page_pool_mem_providers);
         }
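With the hook table above, the page pool core never calls a provider by name;
it only dispatches through the ops pointer it copied from the rx queue. A
minimal user-space C sketch of the same ops-table pattern, with invented names
and plain malloc/free standing in for a real provider:

#include <stdio.h>
#include <stdlib.h>

/* Hook table: the pool core never references a provider directly. */
struct provider_ops {
        void *(*alloc)(void *priv, size_t len);
        void (*release)(void *priv, void *buf);
};

struct pool {
        const struct provider_ops *ops;
        void *priv;
};

/* One possible provider: plain heap allocations. */
static void *heap_alloc(void *priv, size_t len) { (void)priv; return malloc(len); }
static void heap_release(void *priv, void *buf) { (void)priv; free(buf); }

static const struct provider_ops heap_ops = {
        .alloc = heap_alloc,
        .release = heap_release,
};

int main(void)
{
        struct pool p = { .ops = &heap_ops, .priv = NULL };
        /* Core code checks for an installed provider, then dispatches. */
        void *buf = p.ops ? p.ops->alloc(p.priv, 64) : NULL;

        printf("got buffer: %p\n", buf);
        if (buf)
                p.ops->release(p.priv, buf);
        return 0;
}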

From patchwork Fri Jan 17 16:11:43 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943537
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Subject: [PATCH net-next v12 05/10] netdev: add io_uring memory provider info
Date: Fri, 17 Jan 2025 16:11:43 +0000
Message-ID: <45e621e569a0ad2c172a19d0bbe5977ff6ec42f3.1737129699.git.asml.silence@gmail.com>

From: David Wei

Add a nested attribute for io_uring memory provider info. For now it is
empty and its presence indicates that a particular page pool or queue has
an io_uring memory provider attached.

$ ./cli.py --spec netlink/specs/netdev.yaml --dump page-pool-get
[{'id': 80, 'ifindex': 2, 'inflight': 64, 'inflight-mem': 262144, 'napi-id': 525},
 {'id': 79, 'ifindex': 2, 'inflight': 320, 'inflight-mem': 1310720,
  'io_uring': {}, 'napi-id': 525},
...

$ ./cli.py --spec netlink/specs/netdev.yaml --dump queue-get
[{'id': 0, 'ifindex': 1, 'type': 'rx'},
 {'id': 0, 'ifindex': 1, 'type': 'tx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 514, 'type': 'rx'},
...
 {'id': 12, 'ifindex': 2, 'io_uring': {}, 'napi-id': 525, 'type': 'rx'},
...

Reviewed-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 Documentation/netlink/specs/netdev.yaml | 15 +++++++++++++++
 include/uapi/linux/netdev.h | 8 ++++++++
 tools/include/uapi/linux/netdev.h | 8 ++++++++
 3 files changed, 31 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index cbb544bd6c84..288923e965ae 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -114,6 +114,9 @@ attribute-sets:
         doc: Bitmask of enabled AF_XDP features.
         type: u64
         enum: xsk-flags
+  -
+    name: io-uring-provider-info
+    attributes: []
   -
     name: page-pool
     attributes:
@@ -171,6 +174,11 @@ attribute-sets:
         name: dmabuf
         doc: ID of the dmabuf this page-pool is attached to.
         type: u32
+      -
+        name: io-uring
+        doc: io-uring memory provider information.
+        type: nest
+        nested-attributes: io-uring-provider-info
   -
     name: page-pool-info
     subset-of: page-pool
@@ -296,6 +304,11 @@ attribute-sets:
         name: dmabuf
         doc: ID of the dmabuf attached to this queue, if any.
         type: u32
+      -
+        name: io-uring
+        doc: io_uring memory provider information.
+        type: nest
+        nested-attributes: io-uring-provider-info
   -
     name: qstats
@@ -572,6 +585,7 @@ operations:
             - inflight-mem
             - detach-time
             - dmabuf
+            - io-uring
       dump:
         reply: *pp-reply
         config-cond: page-pool
@@ -637,6 +651,7 @@ operations:
             - napi-id
             - ifindex
             - dmabuf
+            - io-uring
       dump:
         request:
           attributes:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e4be227d3ad6..684090732068 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -86,6 +86,12 @@ enum {
         NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
 };

+enum {
+
+        __NETDEV_A_IO_URING_PROVIDER_INFO_MAX,
+        NETDEV_A_IO_URING_PROVIDER_INFO_MAX = (__NETDEV_A_IO_URING_PROVIDER_INFO_MAX - 1)
+};
+
 enum {
         NETDEV_A_PAGE_POOL_ID = 1,
         NETDEV_A_PAGE_POOL_IFINDEX,
@@ -94,6 +100,7 @@ enum {
         NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
         NETDEV_A_PAGE_POOL_DETACH_TIME,
         NETDEV_A_PAGE_POOL_DMABUF,
+        NETDEV_A_PAGE_POOL_IO_URING,

         __NETDEV_A_PAGE_POOL_MAX,
         NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -136,6 +143,7 @@ enum {
         NETDEV_A_QUEUE_TYPE,
         NETDEV_A_QUEUE_NAPI_ID,
         NETDEV_A_QUEUE_DMABUF,
+        NETDEV_A_QUEUE_IO_URING,

         __NETDEV_A_QUEUE_MAX,
         NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e4be227d3ad6..684090732068 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -86,6 +86,12 @@ enum {
         NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
 };

+enum {
+
+        __NETDEV_A_IO_URING_PROVIDER_INFO_MAX,
+        NETDEV_A_IO_URING_PROVIDER_INFO_MAX = (__NETDEV_A_IO_URING_PROVIDER_INFO_MAX - 1)
+};
+
 enum {
         NETDEV_A_PAGE_POOL_ID = 1,
         NETDEV_A_PAGE_POOL_IFINDEX,
@@ -94,6 +100,7 @@ enum {
         NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
         NETDEV_A_PAGE_POOL_DETACH_TIME,
         NETDEV_A_PAGE_POOL_DMABUF,
+        NETDEV_A_PAGE_POOL_IO_URING,

         __NETDEV_A_PAGE_POOL_MAX,
         NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -136,6 +143,7 @@ enum {
         NETDEV_A_QUEUE_TYPE,
         NETDEV_A_QUEUE_NAPI_ID,
         NETDEV_A_QUEUE_DMABUF,
+        NETDEV_A_QUEUE_IO_URING,

         __NETDEV_A_QUEUE_MAX,
         NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)

From patchwork Fri Jan 17 16:11:44 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943539
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Subject: [PATCH net-next v12 06/10] net: page_pool: add callback for mp info printing
Date: Fri, 17 Jan 2025 16:11:44 +0000

Add a mandatory callback that prints information about the memory provider
to netlink.

Reviewed-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 include/net/page_pool/memory_provider.h | 5 +++++
 net/core/devmem.c | 10 ++++++++++
 net/core/netdev-genl.c | 11 ++++++-----
 net/core/page_pool_user.c | 5 ++---
 4 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index e49d0a52629d..6d10a0959d00 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -5,11 +5,16 @@
 #include
 #include

+struct netdev_rx_queue;
+struct sk_buff;
+
 struct memory_provider_ops {
         netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
         bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
         int (*init)(struct page_pool *pool);
         void (*destroy)(struct page_pool *pool);
+        int (*nl_fill)(void *mp_priv, struct sk_buff *rsp,
+                       struct netdev_rx_queue *rxq);
 };

 #endif
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 1a88ab6faf06..b33b978fa28f 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -406,9 +406,19 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
         return false;
 }

+static int mp_dmabuf_devmem_nl_fill(void *mp_priv, struct sk_buff *rsp,
+                                    struct netdev_rx_queue *rxq)
+{
+        const struct net_devmem_dmabuf_binding *binding = mp_priv;
+        int type = rxq ? NETDEV_A_QUEUE_DMABUF : NETDEV_A_PAGE_POOL_DMABUF;
+
+        return nla_put_u32(rsp, type, binding->id);
+}
+
 static const struct memory_provider_ops dmabuf_devmem_ops = {
         .init = mp_dmabuf_devmem_init,
         .destroy = mp_dmabuf_devmem_destroy,
         .alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
         .release_netmem = mp_dmabuf_devmem_release_page,
+        .nl_fill = mp_dmabuf_devmem_nl_fill,
 };
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 715f85c6b62e..5b459b4fef46 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include

 #include "dev.h"
 #include "devmem.h"
@@ -368,7 +369,7 @@ static int
 netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
                          u32 q_idx, u32 q_type, const struct genl_info *info)
 {
-        struct net_devmem_dmabuf_binding *binding;
+        struct pp_memory_provider_params *params;
         struct netdev_rx_queue *rxq;
         struct netdev_queue *txq;
         void *hdr;
@@ -385,15 +386,15 @@ netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
         switch (q_type) {
         case NETDEV_QUEUE_TYPE_RX:
                 rxq = __netif_get_rx_queue(netdev, q_idx);
+
                 if (rxq->napi && nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID,
                                              rxq->napi->napi_id))
                         goto nla_put_failure;

-                binding = rxq->mp_params.mp_priv;
-                if (binding &&
-                    nla_put_u32(rsp, NETDEV_A_QUEUE_DMABUF, binding->id))
+                params = &rxq->mp_params;
+                if (params->mp_ops &&
+                    params->mp_ops->nl_fill(params->mp_priv, rsp, rxq))
                         goto nla_put_failure;
-
                 break;
         case NETDEV_QUEUE_TYPE_TX:
                 txq = netdev_get_tx_queue(netdev, q_idx);
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 8d31c71bea1a..bd017537fa80 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -7,9 +7,9 @@
 #include
 #include
 #include
+#include
 #include

-#include "devmem.h"
 #include "page_pool_priv.h"
 #include "netdev-genl-gen.h"

@@ -214,7 +214,6 @@ static int
 page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
                   const struct genl_info *info)
 {
-        struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
         size_t inflight, refsz;
         void *hdr;

@@ -244,7 +243,7 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
                           pool->user.detach_time))
                 goto err_cancel;

-        if (binding && nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
+        if (pool->mp_ops && pool->mp_ops->nl_fill(pool->mp_priv, rsp, NULL))
                 goto err_cancel;

         genlmsg_end(rsp, hdr);
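With nl_fill, each provider renders its own netlink attribute, so the generic
dump code needs no provider-specific types at all. A small user-space C sketch
of the same self-describing-callback idea, with invented names and a plain
string buffer standing in for a netlink message:

#include <stdio.h>
#include <stddef.h>

/* Each provider describes itself; generic dump code just calls the hook. */
struct provider_ops {
        int (*describe)(void *priv, char *buf, size_t len);
};

struct dmabuf_binding {
        unsigned int id;
};

static int dmabuf_describe(void *priv, char *buf, size_t len)
{
        struct dmabuf_binding *binding = priv;

        return snprintf(buf, len, "dmabuf id=%u", binding->id);
}

static const struct provider_ops dmabuf_ops = { .describe = dmabuf_describe };

int main(void)
{
        struct dmabuf_binding b = { .id = 42 };
        char line[64];

        /* The caller does not know or care which provider is installed. */
        if (dmabuf_ops.describe(&b, line, sizeof(line)) > 0)
                puts(line);
        return 0;
}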

From patchwork Fri Jan 17 16:11:45 2025
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 13943538
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: asml.silence@gmail.com, Jens Axboe, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela, David Wei
Miller" , Eric Dumazet , Jesper Dangaard Brouer , David Ahern , Mina Almasry , Stanislav Fomichev , Joe Damato , Pedro Tammela , David Wei Subject: [PATCH net-next v12 07/10] net: page_pool: add a mp hook to unregister_netdevice* Date: Fri, 17 Jan 2025 16:11:45 +0000 Message-ID: X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Devmem TCP needs a hook in unregister_netdevice_many_notify() to maintain the set of queues it is bound to, i.e. ->bound_rxqs. Instead of having devmem stick directly out of the generic path, add an mp callback for it. Reviewed-by: Jakub Kicinski Reviewed-by: Mina Almasry Signed-off-by: Pavel Begunkov --- include/net/page_pool/memory_provider.h | 1 + net/core/dev.c | 16 ++++++++++- net/core/devmem.c | 36 +++++++++++-------------- net/core/devmem.h | 5 ---- 4 files changed, 32 insertions(+), 26 deletions(-) diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h index 6d10a0959d00..36469a7e649f 100644 --- a/include/net/page_pool/memory_provider.h +++ b/include/net/page_pool/memory_provider.h @@ -15,6 +15,7 @@ struct memory_provider_ops { void (*destroy)(struct page_pool *pool); int (*nl_fill)(void *mp_priv, struct sk_buff *rsp, struct netdev_rx_queue *rxq); + void (*uninstall)(void *mp_priv, struct netdev_rx_queue *rxq); }; #endif diff --git a/net/core/dev.c b/net/core/dev.c index fe5f5855593d..e5a4ba3fc24f 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -158,6 +158,7 @@ #include #include #include +#include #include #include @@ -11721,6 +11722,19 @@ void unregister_netdevice_queue(struct net_device *dev, struct list_head *head) } EXPORT_SYMBOL(unregister_netdevice_queue); +static void dev_memory_provider_uninstall(struct net_device *dev) +{ + unsigned int i; + + for (i = 0; i < dev->real_num_rx_queues; i++) { + struct netdev_rx_queue *rxq = &dev->_rx[i]; + struct pp_memory_provider_params *p = &rxq->mp_params; + + if (p->mp_ops && p->mp_ops->uninstall) + p->mp_ops->uninstall(rxq->mp_params.mp_priv, rxq); + } +} + void unregister_netdevice_many_notify(struct list_head *head, u32 portid, const struct nlmsghdr *nlh) { @@ -11777,7 +11791,7 @@ void unregister_netdevice_many_notify(struct list_head *head, dev_tcx_uninstall(dev); dev_xdp_uninstall(dev); bpf_dev_bound_netdev_unregister(dev); - dev_dmabuf_uninstall(dev); + dev_memory_provider_uninstall(dev); netdev_offload_xstats_disable_all(dev); diff --git a/net/core/devmem.c b/net/core/devmem.c index b33b978fa28f..ebb77d2f30f4 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -320,26 +320,6 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, return ERR_PTR(err); } -void dev_dmabuf_uninstall(struct net_device *dev) -{ - struct net_devmem_dmabuf_binding *binding; - struct netdev_rx_queue *rxq; - unsigned long xa_idx; - unsigned int i; - - for (i = 0; i < dev->real_num_rx_queues; i++) { - binding = dev->_rx[i].mp_params.mp_priv; - if (!binding) - continue; - - xa_for_each(&binding->bound_rxqs, xa_idx, rxq) - if (rxq == &dev->_rx[i]) { - xa_erase(&binding->bound_rxqs, xa_idx); - break; - } - } -} - /*** "Dmabuf devmem memory provider" ***/ int mp_dmabuf_devmem_init(struct page_pool *pool) @@ -415,10 +395,26 @@ static int mp_dmabuf_devmem_nl_fill(void *mp_priv, struct sk_buff *rsp, return nla_put_u32(rsp, type, binding->id); } +static void mp_dmabuf_devmem_uninstall(void *mp_priv, + struct netdev_rx_queue
*rxq) +{ + struct net_devmem_dmabuf_binding *binding = mp_priv; + struct netdev_rx_queue *bound_rxq; + unsigned long xa_idx; + + xa_for_each(&binding->bound_rxqs, xa_idx, bound_rxq) { + if (bound_rxq == rxq) { + xa_erase(&binding->bound_rxqs, xa_idx); + break; + } + } +} + static const struct memory_provider_ops dmabuf_devmem_ops = { .init = mp_dmabuf_devmem_init, .destroy = mp_dmabuf_devmem_destroy, .alloc_netmems = mp_dmabuf_devmem_alloc_netmems, .release_netmem = mp_dmabuf_devmem_release_page, .nl_fill = mp_dmabuf_devmem_nl_fill, + .uninstall = mp_dmabuf_devmem_uninstall, }; diff --git a/net/core/devmem.h b/net/core/devmem.h index a2b9913e9a17..8e999fe2ae67 100644 --- a/net/core/devmem.h +++ b/net/core/devmem.h @@ -68,7 +68,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding); int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, struct net_devmem_dmabuf_binding *binding, struct netlink_ext_ack *extack); -void dev_dmabuf_uninstall(struct net_device *dev); static inline struct dmabuf_genpool_chunk_owner * net_devmem_iov_to_chunk_owner(const struct net_iov *niov) @@ -145,10 +144,6 @@ net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, return -EOPNOTSUPP; } -static inline void dev_dmabuf_uninstall(struct net_device *dev) -{ -} - static inline struct net_iov * net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding) { From patchwork Fri Jan 17 16:11:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavel Begunkov X-Patchwork-Id: 13943540 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-ed1-f41.google.com (mail-ed1-f41.google.com [209.85.208.41]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B9F819F419; Fri, 17 Jan 2025 16:11:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.41 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737130289; cv=none; b=YuMGvU/350i9fM8V0OCC/8YGH915Mn55aInybZ5TcUjvd9nysgw4NDYkfIa0R0677Gp4BCG33Tu+1A3nXnt43XNn6x+ICvG0UO0nw+OC/+swiWv/+2CWyeCS5sXPeROShzAQYr1KoZ9fkUN6UWonKxvYNjrMGhC/Enr3aNoFcug= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737130289; c=relaxed/simple; bh=ly53jWrQZYf+pNZW151TipXpqUbbhUTcHCDgsjtMd+o=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=OESrSKnuaTK47pLrq3mszhfo9gopM0Fyq226M34UaT5QpikNtrZUIPVEz0LlG47Us6uHZeFu7lx0VWGXLMFD+AeqtOzrxrwkYKCIh0jE1bZYABR9iXZHFYhlJaAmbuDJ83ciLf3Lonn8DPt7GsvoIwpZdqdbEml5fuZpnETYXZ8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=DRRUKEY6; arc=none smtp.client-ip=209.85.208.41 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="DRRUKEY6" Received: by mail-ed1-f41.google.com with SMTP id 4fb4d7f45d1cf-5d3e6f6cf69so3644206a12.1; Fri, 17 Jan 2025 08:11:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1737130285; x=1737735085; darn=vger.kernel.org; 
From: Pavel Begunkov To: io-uring@vger.kernel.org, netdev@vger.kernel.org Cc: asml.silence@gmail.com, Jens Axboe , Jakub Kicinski , Paolo Abeni , "David S . Miller" , Eric Dumazet , Jesper Dangaard Brouer , David Ahern , Mina Almasry , Stanislav Fomichev , Joe Damato , Pedro Tammela , David Wei Subject: [PATCH net-next v12 08/10] net: prepare for non devmem TCP memory providers Date: Fri, 17 Jan 2025 16:11:46 +0000 Message-ID: <97d1697b6c6535b24f2184a9662574811b36731f.1737129699.git.asml.silence@gmail.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org There is a good bunch of places in generic paths assuming that the only page pool memory provider is devmem TCP. As we want to reuse the net_iov and provider infrastructure, we need to patch it up and explicitly check the provider type when we branch into devmem TCP code.
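To illustrate the check this patch is about, here is a minimal sketch of a generic consumer gating devmem-only handling on the provider type; the surrounding function and its name are hypothetical, and only net_is_devmem_iov() (added below) plus the existing skb_frag_net_iov() helper are assumed:

/* Hypothetical consumer of net_iov-backed frags; the point is the
 * net_is_devmem_iov() gate, not the surrounding code.
 */
static int handle_niov_frag(const skb_frag_t *frag)
{
	struct net_iov *niov = skb_frag_net_iov(frag);

	/* With more than one memory provider possible, devmem-specific
	 * handling must check the provider type instead of assuming it.
	 */
	if (!net_is_devmem_iov(niov))
		return -ENODEV;

	/* ... devmem TCP specific handling of the frag ... */
	return 0;
}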
Reviewed-by: Mina Almasry Reviewed-by: Jakub Kicinski Signed-off-by: Pavel Begunkov --- net/core/devmem.c | 5 +++++ net/core/devmem.h | 7 +++++++ net/ipv4/tcp.c | 5 +++++ 3 files changed, 17 insertions(+) diff --git a/net/core/devmem.c b/net/core/devmem.c index ebb77d2f30f4..2f46f271b80e 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -30,6 +30,11 @@ static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1); static const struct memory_provider_ops dmabuf_devmem_ops; +bool net_is_devmem_iov(struct net_iov *niov) +{ + return niov->pp->mp_ops == &dmabuf_devmem_ops; +} + static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool, struct gen_pool_chunk *chunk, void *not_used) diff --git a/net/core/devmem.h b/net/core/devmem.h index 8e999fe2ae67..7fc158d52729 100644 --- a/net/core/devmem.h +++ b/net/core/devmem.h @@ -115,6 +115,8 @@ struct net_iov * net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding); void net_devmem_free_dmabuf(struct net_iov *ppiov); +bool net_is_devmem_iov(struct net_iov *niov); + #else struct net_devmem_dmabuf_binding; @@ -163,6 +165,11 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov) { return 0; } + +static inline bool net_is_devmem_iov(struct net_iov *niov) +{ + return false; +} #endif #endif /* _NET_DEVMEM_H */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index b872de9a8271..7f43d31c9400 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2476,6 +2476,11 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb, } niov = skb_frag_net_iov(frag); + if (!net_is_devmem_iov(niov)) { + err = -ENODEV; + goto out; + } + end = start + skb_frag_size(frag); copy = end - offset; From patchwork Fri Jan 17 16:11:47 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavel Begunkov X-Patchwork-Id: 13943541 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-ej1-f42.google.com (mail-ej1-f42.google.com [209.85.218.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E8CA19F436; Fri, 17 Jan 2025 16:11:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.42 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737130290; cv=none; b=a727DQBIWoM0VxOKE9dGQ2tSYVJVWZFAwG7vBpkaLCmw7SvtpVFEXEOiK5zsxKXvbuKxzjyAK4azY0EA8+KRgdkzwGB3Hvs+hBptP+fMlgF/TN0aEhvL3UU+5fEXhokTw+JbVELIbnZG2R7GzflOSIVlCjAIwIgsYdg/GNiKvpo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737130290; c=relaxed/simple; bh=6uXEcPOcqpwPfawayspj5a99uUO8shn7CTEhuQt498k=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=IwxEwCt0xt1mZ1s+ap2Eu5kXPVKJyhfCO76EOGapyi+CQfrQkZk/aN69rsKOmqkkOZQyZwTJGEtJi21/3snaxBXzfRVPaFFkFFfzqChfnzh7pbGXH8i9WGYsumJuHtp9ImBc7kUuuUCIe80UqZXqHad64fkE4ywapDR1SfMqa0g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=Kv3IhusW; arc=none smtp.client-ip=209.85.218.42 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com 
From: Pavel Begunkov To: io-uring@vger.kernel.org, netdev@vger.kernel.org Cc: asml.silence@gmail.com, Jens Axboe , Jakub Kicinski , Paolo Abeni , "David S . Miller" , Eric Dumazet , Jesper Dangaard Brouer , David Ahern , Mina Almasry , Stanislav Fomichev , Joe Damato , Pedro Tammela , David Wei Subject: [PATCH net-next v12 09/10] net: page_pool: add memory provider helpers Date: Fri, 17 Jan 2025 16:11:47 +0000 Message-ID: <3899b385936a398bf562e184db3e296566f7fcf8.1737129699.git.asml.silence@gmail.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Add helpers for memory providers to interact with page pools. net_mp_niov_{set,clear}_page_pool() serve to [dis]associate a net_iov with a page pool.
If used, the memory provider is responsible for matching "set" calls with "clear" once a net_iov is no longer going to be used by a page pool, when changing page pools, etc. Acked-by: Jakub Kicinski Signed-off-by: Pavel Begunkov --- include/net/page_pool/memory_provider.h | 19 +++++++++++++++++ net/core/page_pool.c | 28 +++++++++++++++++++++++++ 2 files changed, 47 insertions(+) diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h index 36469a7e649f..4f0ffb8f6a0a 100644 --- a/include/net/page_pool/memory_provider.h +++ b/include/net/page_pool/memory_provider.h @@ -18,4 +18,23 @@ struct memory_provider_ops { void (*uninstall)(void *mp_priv, struct netdev_rx_queue *rxq); }; +bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr); +void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov); +void net_mp_niov_clear_page_pool(struct net_iov *niov); + +/** + * net_mp_netmem_place_in_cache() - give a netmem to a page pool + * @pool: the page pool to place the netmem into + * @netmem: netmem to give + * + * Push an accounted netmem into the page pool's allocation cache. The caller + * must ensure that there is space in the cache. It should only be called off + * the mp_ops->alloc_netmems() path. + */ +static inline void net_mp_netmem_place_in_cache(struct page_pool *pool, + netmem_ref netmem) +{ + pool->alloc.cache[pool->alloc.count++] = netmem; +} + #endif diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 199564b03533..c003b9263bd3 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -1196,3 +1196,31 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid) } } EXPORT_SYMBOL(page_pool_update_nid); + +bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr) +{ + return page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov), addr); +} + +/* Associate a niov with a page pool. Should follow with a matching + * net_mp_niov_clear_page_pool() + */ +void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov) +{ + netmem_ref netmem = net_iov_to_netmem(niov); + + page_pool_set_pp_info(pool, netmem); + + pool->pages_state_hold_cnt++; + trace_page_pool_state_hold(pool, netmem, pool->pages_state_hold_cnt); +} + +/* Disassociate a niov from a page pool. Should only be used in the + * ->release_netmem() path.
+ */ +void net_mp_niov_clear_page_pool(struct net_iov *niov) +{ + netmem_ref netmem = net_iov_to_netmem(niov); + + page_pool_clear_pp_info(netmem); +} From patchwork Fri Jan 17 16:11:48 2025 X-Patchwork-Submitter: Pavel Begunkov X-Patchwork-Id: 13943542 X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov To: io-uring@vger.kernel.org, netdev@vger.kernel.org Cc: asml.silence@gmail.com, Jens Axboe , Jakub Kicinski , Paolo Abeni , "David S . Miller" , Eric Dumazet , Jesper Dangaard Brouer , David Ahern , Mina Almasry , Stanislav Fomichev , Joe Damato , Pedro Tammela , David Wei Subject: [PATCH net-next v12 10/10] net: add helpers for setting a memory provider on an rx queue Date: Fri, 17 Jan 2025 16:11:48 +0000 Message-ID: <0ed20ccc0c99e8b729cfbde6fb3fc571fcf38294.1737129699.git.asml.silence@gmail.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: David Wei Add helpers that properly prep or remove a memory provider for an rx queue and then restart the queue.
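Before the diff, a minimal usage sketch: a provider binds itself to a queue by filling struct pp_memory_provider_params and calling the new helpers. The my_mp_ops and my_state names are hypothetical; only net_mp_open_rxq(), net_mp_close_rxq() and the params struct come from this series.

/* Hypothetical provider attach/detach built on the helpers added below. */
static int my_provider_bind(struct net_device *dev, unsigned int rxq_idx,
			    void *my_state)
{
	struct pp_memory_provider_params p = {
		.mp_ops  = &my_mp_ops,	/* hypothetical memory_provider_ops */
		.mp_priv = my_state,
	};

	/* Takes rtnl_lock, installs the provider on the queue and restarts
	 * it; fails with -EEXIST if another provider is already set.
	 */
	return net_mp_open_rxq(dev, rxq_idx, &p);
}

static void my_provider_unbind(struct net_device *dev, unsigned int rxq_idx,
			       void *my_state)
{
	struct pp_memory_provider_params old = {
		.mp_ops  = &my_mp_ops,
		.mp_priv = my_state,
	};

	/* Verifies the queue is still bound to this provider, clears
	 * mp_params and restarts the queue without it.
	 */
	net_mp_close_rxq(dev, rxq_idx, &old);
}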
Reviewed-by: Jakub Kicinski Signed-off-by: Pavel Begunkov Signed-off-by: David Wei --- include/net/page_pool/memory_provider.h | 5 ++ net/core/netdev_rx_queue.c | 62 +++++++++++++++++++++++++ 2 files changed, 67 insertions(+) diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h index 4f0ffb8f6a0a..b3e665897767 100644 --- a/include/net/page_pool/memory_provider.h +++ b/include/net/page_pool/memory_provider.h @@ -22,6 +22,11 @@ bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr); void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov); void net_mp_niov_clear_page_pool(struct net_iov *niov); +int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx, + struct pp_memory_provider_params *p); +void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx, + struct pp_memory_provider_params *old_p); + /** * net_mp_netmem_place_in_cache() - give a netmem to a page pool * @pool: the page pool to place the netmem into diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c index db82786fa0c4..a13deedf6fc1 100644 --- a/net/core/netdev_rx_queue.c +++ b/net/core/netdev_rx_queue.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "page_pool_priv.h" @@ -80,3 +81,64 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx) return err; } EXPORT_SYMBOL_NS_GPL(netdev_rx_queue_restart, "NETDEV_INTERNAL"); + +static int __net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx, + struct pp_memory_provider_params *p) +{ + struct netdev_rx_queue *rxq; + int ret; + + if (ifq_idx >= dev->real_num_rx_queues) + return -EINVAL; + ifq_idx = array_index_nospec(ifq_idx, dev->real_num_rx_queues); + + rxq = __netif_get_rx_queue(dev, ifq_idx); + if (rxq->mp_params.mp_ops) + return -EEXIST; + + rxq->mp_params = *p; + ret = netdev_rx_queue_restart(dev, ifq_idx); + if (ret) { + rxq->mp_params.mp_ops = NULL; + rxq->mp_params.mp_priv = NULL; + } + return ret; +} + +int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx, + struct pp_memory_provider_params *p) +{ + int ret; + + rtnl_lock(); + ret = __net_mp_open_rxq(dev, ifq_idx, p); + rtnl_unlock(); + return ret; +} + +static void __net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx, + struct pp_memory_provider_params *old_p) +{ + struct netdev_rx_queue *rxq; + + if (WARN_ON_ONCE(ifq_idx >= dev->real_num_rx_queues)) + return; + + rxq = __netif_get_rx_queue(dev, ifq_idx); + + if (WARN_ON_ONCE(rxq->mp_params.mp_ops != old_p->mp_ops || + rxq->mp_params.mp_priv != old_p->mp_priv)) + return; + + rxq->mp_params.mp_ops = NULL; + rxq->mp_params.mp_priv = NULL; + WARN_ON(netdev_rx_queue_restart(dev, ifq_idx)); +} + +void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx, + struct pp_memory_provider_params *old_p) +{ + rtnl_lock(); + __net_mp_close_rxq(dev, ifq_idx, old_p); + rtnl_unlock(); +}
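Taken together, the series leaves a non-devmem provider implementing something like the hypothetical skeleton below. All my_* names are invented for illustration; the nl_fill/uninstall signatures are the ones added in this series, and the alloc_netmems/release_netmem signatures are assumed to match the ones the devmem provider already uses.

/* Hypothetical skeleton of a custom memory provider wired to the new hooks. */
static int my_mp_init(struct page_pool *pool)
{
	/* Validate pool parameters and take a reference on pool->mp_priv. */
	return 0;
}

static void my_mp_destroy(struct page_pool *pool)
{
	/* Drop the reference taken in ->init(). */
}

static netmem_ref my_mp_alloc_netmems(struct page_pool *pool, gfp_t gfp)
{
	/* Pick a free net_iov, then net_mp_niov_set_page_pool(pool, niov)
	 * and net_mp_netmem_place_in_cache(pool, net_iov_to_netmem(niov));
	 * returning 0 here means allocation failure.
	 */
	return 0;
}

static bool my_mp_release_netmem(struct page_pool *pool, netmem_ref netmem)
{
	/* net_mp_niov_clear_page_pool(netmem_to_net_iov(netmem)) and give
	 * the buffer back to the provider's own pool.
	 */
	return false;
}

static int my_mp_nl_fill(void *mp_priv, struct sk_buff *rsp,
			 struct netdev_rx_queue *rxq)
{
	/* Report provider specific netlink attributes; rxq is NULL when
	 * called for a page pool rather than for a queue.
	 */
	return 0;
}

static void my_mp_uninstall(void *mp_priv, struct netdev_rx_queue *rxq)
{
	/* Called via dev_memory_provider_uninstall() on netdev unregister;
	 * drop any per-queue state kept for rxq.
	 */
}

static const struct memory_provider_ops my_mp_ops = {
	.init		= my_mp_init,
	.destroy	= my_mp_destroy,
	.alloc_netmems	= my_mp_alloc_netmems,
	.release_netmem	= my_mp_release_netmem,
	.nl_fill	= my_mp_nl_fill,
	.uninstall	= my_mp_uninstall,
};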