From patchwork Tue Feb 4 21:56:12 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959974
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 01/10] net: page_pool: don't cast mp param to devmem
Date: Tue, 4 Feb 2025 13:56:12 -0800
Message-ID: <20250204215622.695511-2-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

From: Pavel Begunkov

page_pool_check_memory_provider() is a generic path and shouldn't assume
anything about the actual type of the memory provider argument. It's fine
while devmem is the only provider, but cast away the devmem specific
binding types to avoid confusion.
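The pattern the patch enforces can be illustrated outside the kernel. In this hypothetical sketch (all names here — `rx_queue_params`, `demo_binding`, `queue_has_state` — are illustrative, not the kernel's), generic code keeps provider state as an opaque `void *` and may compare it, but only the provider that installed it casts it back to a concrete type:

```c
/* Hypothetical sketch, not kernel code: generic infrastructure treats
 * provider-private state as an opaque void *; only the owning provider
 * ever casts it back to its concrete type. */
#include <assert.h>
#include <stddef.h>

/* Generic side: knows nothing about concrete provider types. */
struct rx_queue_params {
	void *mp_priv;		/* opaque provider state */
};

/* One provider's private type (illustrative stand-in for a binding). */
struct demo_binding {
	unsigned int id;
};

/* Generic check: may compare the opaque pointer, but must not
 * dereference it as any particular type -- the point of the patch. */
static int queue_has_state(const struct rx_queue_params *p, const void *state)
{
	const void *priv = p->mp_priv;	/* stays a void * here */

	return priv == state;
}

/* Provider side: the only place a cast back is legitimate. */
static unsigned int demo_binding_id(const struct rx_queue_params *p)
{
	const struct demo_binding *b = p->mp_priv;

	return b->id;
}
```

This mirrors the one-line change below: the generic checker only compares `mp_priv` against pool pointers, so it has no business declaring it as a devmem binding type.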
Reviewed-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/page_pool_user.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 6677e0c2e256..d5e214c30c31 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -356,7 +356,7 @@ void page_pool_unlist(struct page_pool *pool)
 int page_pool_check_memory_provider(struct net_device *dev,
 				    struct netdev_rx_queue *rxq)
 {
-	struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
+	void *binding = rxq->mp_params.mp_priv;
 	struct page_pool *pool;
 	struct hlist_node *n;

From patchwork Tue Feb 4 21:56:13 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959975
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 02/10] net: prefix devmem specific helpers
Date: Tue, 4 Feb 2025 13:56:13 -0800
Message-ID: <20250204215622.695511-3-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

From: Pavel Begunkov

Add prefixes to all helpers that are specific to devmem TCP, i.e.
net_iov_binding[_id].
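The layering the rename establishes — generic `net_iov_*` accessors underneath, subsystem-prefixed helpers on top — can be sketched in a standalone form. All types and names in this block (`demo_iov`, `demo_devmem_iov_binding`, etc.) are illustrative stand-ins, not the kernel structures:

```c
/* Hypothetical sketch of the naming split: generic accessors stay
 * unprefixed, devmem-only helpers carry a subsystem prefix and are
 * built on top of the generic ones. */
#include <assert.h>

struct demo_binding { unsigned int id; };
struct demo_owner   { struct demo_binding *binding; };
struct demo_iov     { struct demo_owner *owner; };

/* Generic accessor: meaningful for any memory provider. */
static inline struct demo_owner *demo_iov_owner(const struct demo_iov *iov)
{
	return iov->owner;
}

/* Devmem-specific helpers: prefixed, layered on the generic accessor,
 * mirroring net_iov_binding -> net_devmem_iov_binding in the patch. */
static inline struct demo_binding *
demo_devmem_iov_binding(const struct demo_iov *iov)
{
	return demo_iov_owner(iov)->binding;
}

static inline unsigned int
demo_devmem_iov_binding_id(const struct demo_iov *iov)
{
	return demo_devmem_iov_binding(iov)->id;
}
```

The prefix makes it obvious at every call site which helpers are safe for a generic net_iov and which assume a devmem binding behind it.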
Reviewed-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/devmem.c |  2 +-
 net/core/devmem.h | 14 +++++++-------
 net/ipv4/tcp.c    |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 3bba3f018df0..66cd1ab9224f 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -94,7 +94,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 
 void net_devmem_free_dmabuf(struct net_iov *niov)
 {
-	struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+	struct net_devmem_dmabuf_binding *binding = net_devmem_iov_binding(niov);
 	unsigned long dma_addr = net_devmem_get_dma_addr(niov);
 
 	if (WARN_ON(!gen_pool_has_addr(binding->chunk_pool, dma_addr,
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..99782ddeca40 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -86,11 +86,16 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
 }
 
 static inline struct net_devmem_dmabuf_binding *
-net_iov_binding(const struct net_iov *niov)
+net_devmem_iov_binding(const struct net_iov *niov)
 {
 	return net_iov_owner(niov)->binding;
 }
 
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
+{
+	return net_devmem_iov_binding(niov)->id;
+}
+
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
 	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
@@ -99,11 +104,6 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
-{
-	return net_iov_owner(niov)->binding->id;
-}
-
 static inline void
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
@@ -171,7 +171,7 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	return 0;
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..b872de9a8271 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2494,7 +2494,7 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 
 			/* Will perform the exchange later */
 			dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx];
-			dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov);
+			dmabuf_cmsg.dmabuf_id = net_devmem_iov_binding_id(niov);
 
 			offset += copy;
 			remaining_len -= copy;

From patchwork Tue Feb 4 21:56:14 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959976
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 03/10] net: generalise net_iov chunk owners
Date: Tue, 4 Feb 2025 13:56:14 -0800
Message-ID: <20250204215622.695511-4-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

From: Pavel Begunkov

Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
which serves as a useful abstraction to share data and provide a
context.
However, it's too devmem specific, and we want to reuse it for other
memory providers, and for that we need to decouple net_iov from devmem.
Make net_iov point to a new base structure called net_iov_area, which
dmabuf_genpool_chunk_owner extends.

Reviewed-by: Mina Almasry
Acked-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/netmem.h | 21 ++++++++++++++++++++-
 net/core/devmem.c    | 25 +++++++++++++------------
 net/core/devmem.h    | 25 +++++++++----------------
 3 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 1b58faa4f20f..c61d5b21e7b4 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -24,11 +24,20 @@ struct net_iov {
 	unsigned long __unused_padding;
 	unsigned long pp_magic;
 	struct page_pool *pp;
-	struct dmabuf_genpool_chunk_owner *owner;
+	struct net_iov_area *owner;
 	unsigned long dma_addr;
 	atomic_long_t pp_ref_count;
 };
 
+struct net_iov_area {
+	/* Array of net_iovs for this area. */
+	struct net_iov *niovs;
+	size_t num_niovs;
+
+	/* Offset into the dma-buf where this chunk starts. */
+	unsigned long base_virtual;
+};
+
 /* These fields in struct page are used by the page_pool and net stack:
  *
  *	struct {
@@ -54,6 +63,16 @@ NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
 NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
 #undef NET_IOV_ASSERT_OFFSET
 
+static inline struct net_iov_area *net_iov_owner(const struct net_iov *niov)
+{
+	return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+	return niov - net_iov_owner(niov)->niovs;
+}
+
 /* netmem */
 
 /**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 66cd1ab9224f..fb0dddcb4e60 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -33,14 +33,15 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 {
 	struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
 
-	kvfree(owner->niovs);
+	kvfree(owner->area.niovs);
 	kfree(owner);
 }
 
 static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct dmabuf_genpool_chunk_owner *owner;
 
+	owner = net_devmem_iov_to_chunk_owner(niov);
 	return owner->base_dma_addr +
 	       ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
 }
@@ -83,7 +84,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	offset = dma_addr - owner->base_dma_addr;
 	index = offset / PAGE_SIZE;
-	niov = &owner->niovs[index];
+	niov = &owner->area.niovs[index];
 
 	niov->pp_magic = 0;
 	niov->pp = NULL;
@@ -261,9 +262,9 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->base_virtual = virtual;
+		owner->area.base_virtual = virtual;
 		owner->base_dma_addr = dma_addr;
-		owner->num_niovs = len / PAGE_SIZE;
+		owner->area.num_niovs = len / PAGE_SIZE;
 		owner->binding = binding;
 
 		err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
@@ -275,17 +276,17 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->niovs = kvmalloc_array(owner->num_niovs,
-					      sizeof(*owner->niovs),
-					      GFP_KERNEL);
-		if (!owner->niovs) {
+		owner->area.niovs = kvmalloc_array(owner->area.num_niovs,
+						   sizeof(*owner->area.niovs),
+						   GFP_KERNEL);
+		if (!owner->area.niovs) {
 			err = -ENOMEM;
 			goto err_free_chunks;
 		}
 
-		for (i = 0; i < owner->num_niovs; i++) {
-			niov = &owner->niovs[i];
-			niov->owner = owner;
+		for (i = 0; i < owner->area.num_niovs; i++) {
+			niov = &owner->area.niovs[i];
+			niov->owner = &owner->area;
 			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
 						      net_devmem_get_dma_addr(niov));
 		}
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 99782ddeca40..a2b9913e9a17 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -10,6 +10,8 @@
 #ifndef _NET_DEVMEM_H
 #define _NET_DEVMEM_H
 
+#include
+
 struct netlink_ext_ack;
 
 struct net_devmem_dmabuf_binding {
@@ -51,17 +53,11 @@ struct net_devmem_dmabuf_binding {
  * allocations from this chunk.
  */
 struct dmabuf_genpool_chunk_owner {
-	/* Offset into the dma-buf where this chunk starts. */
-	unsigned long base_virtual;
+	struct net_iov_area area;
+	struct net_devmem_dmabuf_binding *binding;
 
 	/* dma_addr of the start of the chunk. */
 	dma_addr_t base_dma_addr;
-
-	/* Array of net_iovs for this chunk. */
-	struct net_iov *niovs;
-	size_t num_niovs;
-
-	struct net_devmem_dmabuf_binding *binding;
 };
 
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
@@ -75,20 +71,17 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 void dev_dmabuf_uninstall(struct net_device *dev);
 
 static inline struct dmabuf_genpool_chunk_owner *
-net_iov_owner(const struct net_iov *niov)
+net_devmem_iov_to_chunk_owner(const struct net_iov *niov)
 {
-	return niov->owner;
-}
+	struct net_iov_area *owner = net_iov_owner(niov);
 
-static inline unsigned int net_iov_idx(const struct net_iov *niov)
-{
-	return niov - net_iov_owner(niov)->niovs;
+	return container_of(owner, struct dmabuf_genpool_chunk_owner, area);
 }
 
 static inline struct net_devmem_dmabuf_binding *
 net_devmem_iov_binding(const struct net_iov *niov)
 {
-	return net_iov_owner(niov)->binding;
+	return net_devmem_iov_to_chunk_owner(niov)->binding;
 }
 
 static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
@@ -98,7 +91,7 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct net_iov_area *owner = net_iov_owner(niov);
 
 	return owner->base_virtual +
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);

From patchwork Tue Feb 4 21:56:15 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959977
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 04/10] net: page_pool: create hooks for custom memory providers
Date: Tue, 4 Feb 2025 13:56:15 -0800
Message-ID: <20250204215622.695511-5-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

From: Pavel Begunkov

A spin-off from the original page pool memory providers patch by Jakub,
which allows extending page pools with custom allocators. One of such
providers is devmem TCP, and the other is io_uring zerocopy added in
following patches.

Link: https://lore.kernel.org/netdev/20230707183935.997267-7-kuba@kernel.org/
Co-developed-by: Jakub Kicinski # initial mp proposal
Signed-off-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h | 15 +++++++++++++++
 include/net/page_pool/types.h           |  4 ++++
 net/core/devmem.c                       | 15 ++++++++++++++-
 net/core/page_pool.c                    | 23 +++++++++++++--------
 4 files changed, 48 insertions(+), 9 deletions(-)
 create mode 100644 include/net/page_pool/memory_provider.h

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
new file mode 100644
index 000000000000..e49d0a52629d
--- /dev/null
+++ b/include/net/page_pool/memory_provider.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
+#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
+
+#include
+#include
+
+struct memory_provider_ops {
+	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
+	int (*init)(struct page_pool *pool);
+	void (*destroy)(struct page_pool *pool);
+};
+ +#endif diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 7f405672b089..36eb57d73abc 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -152,8 +152,11 @@ struct page_pool_stats { */ #define PAGE_POOL_FRAG_GROUP_ALIGN (4 * sizeof(long)) +struct memory_provider_ops; + struct pp_memory_provider_params { void *mp_priv; + const struct memory_provider_ops *mp_ops; }; struct page_pool { @@ -216,6 +219,7 @@ struct page_pool { struct ptr_ring ring; void *mp_priv; + const struct memory_provider_ops *mp_ops; #ifdef CONFIG_PAGE_POOL_STATS /* recycle stats are per-cpu to avoid locking */ diff --git a/net/core/devmem.c b/net/core/devmem.c index fb0dddcb4e60..c81625ca57c6 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include "devmem.h" @@ -27,6 +28,8 @@ /* Protected by rtnl_lock() */ static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1); +static const struct memory_provider_ops dmabuf_devmem_ops; + static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool, struct gen_pool_chunk *chunk, void *not_used) @@ -118,6 +121,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding) WARN_ON(rxq->mp_params.mp_priv != binding); rxq->mp_params.mp_priv = NULL; + rxq->mp_params.mp_ops = NULL; rxq_idx = get_netdev_rx_queue_index(rxq); @@ -153,7 +157,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, } rxq = __netif_get_rx_queue(dev, rxq_idx); - if (rxq->mp_params.mp_priv) { + if (rxq->mp_params.mp_ops) { NL_SET_ERR_MSG(extack, "designated queue already memory provider bound"); return -EEXIST; } @@ -171,6 +175,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, return err; rxq->mp_params.mp_priv = binding; + rxq->mp_params.mp_ops = &dmabuf_devmem_ops; err = netdev_rx_queue_restart(dev, rxq_idx); if (err) @@ -180,6 +185,7 @@ int 
net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 
 err_xa_erase:
 	rxq->mp_params.mp_priv = NULL;
+	rxq->mp_params.mp_ops = NULL;
 	xa_erase(&binding->bound_rxqs, xa_idx);
 
 	return err;
@@ -399,3 +405,10 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	/* We don't want the page pool put_page()ing our net_iovs. */
 	return false;
 }
+
+static const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init = mp_dmabuf_devmem_init,
+	.destroy = mp_dmabuf_devmem_destroy,
+	.alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
+	.release_netmem = mp_dmabuf_devmem_release_page,
+};
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f5e908c9e7ad..d632cf2c91c3 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include <net/page_pool/memory_provider.h>
 #include
 #include
 
@@ -285,13 +286,19 @@ static int page_pool_init(struct page_pool *pool,
 		rxq = __netif_get_rx_queue(pool->slow.netdev,
 					   pool->slow.queue_idx);
 		pool->mp_priv = rxq->mp_params.mp_priv;
+		pool->mp_ops = rxq->mp_params.mp_ops;
 	}
 
-	if (pool->mp_priv) {
+	if (pool->mp_ops) {
 		if (!pool->dma_map || !pool->dma_sync)
 			return -EOPNOTSUPP;
 
-		err = mp_dmabuf_devmem_init(pool);
+		if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
+			err = -EFAULT;
+			goto free_ptr_ring;
+		}
+
+		err = pool->mp_ops->init(pool);
 		if (err) {
 			pr_warn("%s() mem-provider init failed %d\n", __func__,
 				err);
@@ -587,8 +594,8 @@ netmem_ref page_pool_alloc_netmems(struct page_pool *pool, gfp_t gfp)
 		return netmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		netmem = pool->mp_ops->alloc_netmems(pool, gfp);
 	else
 		netmem = __page_pool_alloc_pages_slow(pool, gfp);
 	return netmem;
@@ -679,8 +686,8 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 	bool put;
 
 	put = true;
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		put = mp_dmabuf_devmem_release_page(pool, netmem);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		put = pool->mp_ops->release_netmem(pool, netmem);
 	else
 		__page_pool_release_page_dma(pool, netmem);
 
@@ -1048,8 +1055,8 @@ static void __page_pool_destroy(struct page_pool *pool)
 	page_pool_unlist(pool);
 	page_pool_uninit(pool);
 
-	if (pool->mp_priv) {
-		mp_dmabuf_devmem_destroy(pool);
+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
 		static_branch_dec(&page_pool_mem_providers);
 	}
 

From patchwork Tue Feb 4 21:56:16 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959978
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 05/10] netdev: add io_uring memory provider info
Date: Tue, 4 Feb 2025 13:56:16 -0800
Message-ID: <20250204215622.695511-6-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

Add a nested attribute for io_uring memory provider info. For now it is
empty, and its presence indicates that a particular page pool or queue
has an io_uring memory provider attached.
$ ./cli.py --spec netlink/specs/netdev.yaml --dump page-pool-get
[{'id': 80, 'ifindex': 2, 'inflight': 64, 'inflight-mem': 262144,
  'napi-id': 525},
 {'id': 79, 'ifindex': 2, 'inflight': 320, 'inflight-mem': 1310720,
  'io_uring': {}, 'napi-id': 525},
...

$ ./cli.py --spec netlink/specs/netdev.yaml --dump queue-get
[{'id': 0, 'ifindex': 1, 'type': 'rx'},
 {'id': 0, 'ifindex': 1, 'type': 'tx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 514, 'type': 'rx'},
...
 {'id': 12, 'ifindex': 2, 'io_uring': {}, 'napi-id': 525, 'type': 'rx'},
...

Reviewed-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 Documentation/netlink/specs/netdev.yaml | 15 +++++++++++++++
 include/uapi/linux/netdev.h             |  7 +++++++
 tools/include/uapi/linux/netdev.h       |  7 +++++++
 3 files changed, 29 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index cbb544bd6c84..288923e965ae 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -114,6 +114,9 @@ attribute-sets:
         doc: Bitmask of enabled AF_XDP features.
         type: u64
         enum: xsk-flags
+  -
+    name: io-uring-provider-info
+    attributes: []
   -
     name: page-pool
     attributes:
@@ -171,6 +174,11 @@ attribute-sets:
         name: dmabuf
         doc: ID of the dmabuf this page-pool is attached to.
         type: u32
+      -
+        name: io-uring
+        doc: io-uring memory provider information.
+        type: nest
+        nested-attributes: io-uring-provider-info
   -
     name: page-pool-info
     subset-of: page-pool
@@ -296,6 +304,11 @@ attribute-sets:
         name: dmabuf
         doc: ID of the dmabuf attached to this queue, if any.
         type: u32
+      -
+        name: io-uring
+        doc: io_uring memory provider information.
+        type: nest
+        nested-attributes: io-uring-provider-info
   -
     name: qstats
@@ -572,6 +585,7 @@ operations:
             - inflight-mem
             - detach-time
             - dmabuf
+            - io-uring
       dump:
         reply: *pp-reply
       config-cond: page-pool
@@ -637,6 +651,7 @@ operations:
             - napi-id
             - ifindex
             - dmabuf
+            - io-uring
       dump:
         request:
           attributes:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e4be227d3ad6..6c6ee183802d 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -86,6 +86,11 @@ enum {
 	NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
 };
 
+enum {
+	__NETDEV_A_IO_URING_PROVIDER_INFO_MAX,
+	NETDEV_A_IO_URING_PROVIDER_INFO_MAX = (__NETDEV_A_IO_URING_PROVIDER_INFO_MAX - 1)
+};
+
 enum {
 	NETDEV_A_PAGE_POOL_ID = 1,
 	NETDEV_A_PAGE_POOL_IFINDEX,
@@ -94,6 +99,7 @@ enum {
 	NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
 	NETDEV_A_PAGE_POOL_DETACH_TIME,
 	NETDEV_A_PAGE_POOL_DMABUF,
+	NETDEV_A_PAGE_POOL_IO_URING,
 
 	__NETDEV_A_PAGE_POOL_MAX,
 	NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -136,6 +142,7 @@ enum {
 	NETDEV_A_QUEUE_TYPE,
 	NETDEV_A_QUEUE_NAPI_ID,
 	NETDEV_A_QUEUE_DMABUF,
+	NETDEV_A_QUEUE_IO_URING,
 
 	__NETDEV_A_QUEUE_MAX,
 	NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e4be227d3ad6..6c6ee183802d 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -86,6 +86,11 @@ enum {
 	NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
 };
 
+enum {
+	__NETDEV_A_IO_URING_PROVIDER_INFO_MAX,
+	NETDEV_A_IO_URING_PROVIDER_INFO_MAX = (__NETDEV_A_IO_URING_PROVIDER_INFO_MAX - 1)
+};
+
 enum {
 	NETDEV_A_PAGE_POOL_ID = 1,
 	NETDEV_A_PAGE_POOL_IFINDEX,
@@ -94,6 +99,7 @@ enum {
 	NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
 	NETDEV_A_PAGE_POOL_DETACH_TIME,
 	NETDEV_A_PAGE_POOL_DMABUF,
+	NETDEV_A_PAGE_POOL_IO_URING,
 
 	__NETDEV_A_PAGE_POOL_MAX,
 	NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -136,6 +142,7 @@ enum {
 	NETDEV_A_QUEUE_TYPE,
 	NETDEV_A_QUEUE_NAPI_ID,
 	NETDEV_A_QUEUE_DMABUF,
+	NETDEV_A_QUEUE_IO_URING,
 
 	__NETDEV_A_QUEUE_MAX,
 	NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)

From patchwork Tue Feb 4 21:56:17 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959979
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 06/10] net: page_pool: add callback for mp info printing
Date: Tue, 4 Feb 2025 13:56:17 -0800
Message-ID: <20250204215622.695511-7-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

From: Pavel Begunkov

Add a mandatory callback that prints information about the memory
provider to netlink.
Reviewed-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h |  5 +++++
 net/core/devmem.c                       | 10 ++++++++++
 net/core/netdev-genl.c                  | 11 ++++++-----
 net/core/page_pool_user.c               |  5 ++---
 4 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index e49d0a52629d..6d10a0959d00 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -5,11 +5,16 @@
 #include
 #include
 
+struct netdev_rx_queue;
+struct sk_buff;
+
 struct memory_provider_ops {
 	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
 	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
 	int (*init)(struct page_pool *pool);
 	void (*destroy)(struct page_pool *pool);
+	int (*nl_fill)(void *mp_priv, struct sk_buff *rsp,
+		       struct netdev_rx_queue *rxq);
 };
 
 #endif
diff --git a/net/core/devmem.c b/net/core/devmem.c
index c81625ca57c6..63b790326c7d 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -406,9 +406,19 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	return false;
 }
 
+static int mp_dmabuf_devmem_nl_fill(void *mp_priv, struct sk_buff *rsp,
+				    struct netdev_rx_queue *rxq)
+{
+	const struct net_devmem_dmabuf_binding *binding = mp_priv;
+	int type = rxq ?
NETDEV_A_QUEUE_DMABUF : NETDEV_A_PAGE_POOL_DMABUF;
+
+	return nla_put_u32(rsp, type, binding->id);
+}
+
 static const struct memory_provider_ops dmabuf_devmem_ops = {
 	.init = mp_dmabuf_devmem_init,
 	.destroy = mp_dmabuf_devmem_destroy,
 	.alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
 	.release_netmem = mp_dmabuf_devmem_release_page,
+	.nl_fill = mp_dmabuf_devmem_nl_fill,
 };
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 715f85c6b62e..5b459b4fef46 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <net/page_pool/memory_provider.h>
 
 #include "dev.h"
 #include "devmem.h"
@@ -368,7 +369,7 @@ static int
 netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
			 u32 q_idx, u32 q_type, const struct genl_info *info)
 {
-	struct net_devmem_dmabuf_binding *binding;
+	struct pp_memory_provider_params *params;
 	struct netdev_rx_queue *rxq;
 	struct netdev_queue *txq;
 	void *hdr;
@@ -385,15 +386,15 @@ netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
 	switch (q_type) {
 	case NETDEV_QUEUE_TYPE_RX:
 		rxq = __netif_get_rx_queue(netdev, q_idx);
+
 		if (rxq->napi && nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID,
					     rxq->napi->napi_id))
 			goto nla_put_failure;
 
-		binding = rxq->mp_params.mp_priv;
-		if (binding &&
-		    nla_put_u32(rsp, NETDEV_A_QUEUE_DMABUF, binding->id))
+		params = &rxq->mp_params;
+		if (params->mp_ops &&
+		    params->mp_ops->nl_fill(params->mp_priv, rsp, rxq))
 			goto nla_put_failure;
-
 		break;
 	case NETDEV_QUEUE_TYPE_TX:
 		txq = netdev_get_tx_queue(netdev, q_idx);
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index d5e214c30c31..9d8a3d8597fa 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -8,9 +8,9 @@
 #include
 #include
 #include
+#include <net/page_pool/memory_provider.h>
 #include
 
-#include "devmem.h"
 #include "page_pool_priv.h"
 #include "netdev-genl-gen.h"
 
@@ -216,7 +216,6 @@ static int
 page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
		  const struct genl_info *info)
 {
-	struct
net_devmem_dmabuf_binding *binding = pool->mp_priv;
 	size_t inflight, refsz;
 	unsigned int napi_id;
 	void *hdr;
@@ -249,7 +248,7 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
			   pool->user.detach_time))
 		goto err_cancel;
 
-	if (binding && nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
+	if (pool->mp_ops && pool->mp_ops->nl_fill(pool->mp_priv, rsp, NULL))
 		goto err_cancel;
 
 	genlmsg_end(rsp, hdr);

From patchwork Tue Feb 4 21:56:18 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959980
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 07/10] net: page_pool: add a mp hook to unregister_netdevice*
Date: Tue, 4 Feb 2025 13:56:18 -0800
Message-ID: <20250204215622.695511-8-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>

From: Pavel Begunkov

Devmem TCP needs a hook in unregister_netdevice_many_notify() to
maintain the set tracking the queues it is bound to, i.e. ->bound_rxqs.
Instead of having devmem-specific code stick directly out of the generic
path, add a memory provider callback.
Reviewed-by: Jakub Kicinski
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h |  1 +
 net/core/dev.c                          | 16 ++++++++++-
 net/core/devmem.c                       | 36 +++++++++++--------------
 net/core/devmem.h                       |  5 ----
 4 files changed, 32 insertions(+), 26 deletions(-)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 6d10a0959d00..36469a7e649f 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -15,6 +15,7 @@ struct memory_provider_ops {
 	void (*destroy)(struct page_pool *pool);
 	int (*nl_fill)(void *mp_priv, struct sk_buff *rsp,
		       struct netdev_rx_queue *rxq);
+	void (*uninstall)(void *mp_priv, struct netdev_rx_queue *rxq);
 };
 
 #endif
diff --git a/net/core/dev.c b/net/core/dev.c
index c0021cbd28fc..384b9109932a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -159,6 +159,7 @@
 #include
 #include
 #include
+#include <net/page_pool/memory_provider.h>
 #include
 #include
 
@@ -11724,6 +11725,19 @@ void unregister_netdevice_queue(struct net_device *dev, struct list_head *head)
 }
 EXPORT_SYMBOL(unregister_netdevice_queue);
 
+static void dev_memory_provider_uninstall(struct net_device *dev)
+{
+	unsigned int i;
+
+	for (i = 0; i < dev->real_num_rx_queues; i++) {
+		struct netdev_rx_queue *rxq = &dev->_rx[i];
+		struct pp_memory_provider_params *p = &rxq->mp_params;
+
+		if (p->mp_ops && p->mp_ops->uninstall)
+			p->mp_ops->uninstall(rxq->mp_params.mp_priv, rxq);
+	}
+}
+
 void unregister_netdevice_many_notify(struct list_head *head,
				      u32 portid, const struct nlmsghdr *nlh)
 {
@@ -11778,7 +11792,7 @@ void unregister_netdevice_many_notify(struct list_head *head,
 		dev_tcx_uninstall(dev);
 		dev_xdp_uninstall(dev);
 		bpf_dev_bound_netdev_unregister(dev);
-		dev_dmabuf_uninstall(dev);
+		dev_memory_provider_uninstall(dev);
 		netdev_offload_xstats_disable_all(dev);
 
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 63b790326c7d..cbac6419fcc4 100644
--- a/net/core/devmem.c
+++
b/net/core/devmem.c
@@ -320,26 +320,6 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	return ERR_PTR(err);
 }
 
-void dev_dmabuf_uninstall(struct net_device *dev)
-{
-	struct net_devmem_dmabuf_binding *binding;
-	struct netdev_rx_queue *rxq;
-	unsigned long xa_idx;
-	unsigned int i;
-
-	for (i = 0; i < dev->real_num_rx_queues; i++) {
-		binding = dev->_rx[i].mp_params.mp_priv;
-		if (!binding)
-			continue;
-
-		xa_for_each(&binding->bound_rxqs, xa_idx, rxq)
-			if (rxq == &dev->_rx[i]) {
-				xa_erase(&binding->bound_rxqs, xa_idx);
-				break;
-			}
-	}
-}
-
 /*** "Dmabuf devmem memory provider" ***/
 
 int mp_dmabuf_devmem_init(struct page_pool *pool)
@@ -415,10 +395,26 @@ static int mp_dmabuf_devmem_nl_fill(void *mp_priv, struct sk_buff *rsp,
 	return nla_put_u32(rsp, type, binding->id);
 }
 
+static void mp_dmabuf_devmem_uninstall(void *mp_priv,
+				       struct netdev_rx_queue *rxq)
+{
+	struct net_devmem_dmabuf_binding *binding = mp_priv;
+	struct netdev_rx_queue *bound_rxq;
+	unsigned long xa_idx;
+
+	xa_for_each(&binding->bound_rxqs, xa_idx, bound_rxq) {
+		if (bound_rxq == rxq) {
+			xa_erase(&binding->bound_rxqs, xa_idx);
+			break;
+		}
+	}
+}
+
 static const struct memory_provider_ops dmabuf_devmem_ops = {
 	.init = mp_dmabuf_devmem_init,
 	.destroy = mp_dmabuf_devmem_destroy,
 	.alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
 	.release_netmem = mp_dmabuf_devmem_release_page,
 	.nl_fill = mp_dmabuf_devmem_nl_fill,
+	.uninstall = mp_dmabuf_devmem_uninstall,
 };
diff --git a/net/core/devmem.h b/net/core/devmem.h
index a2b9913e9a17..8e999fe2ae67 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -68,7 +68,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
 int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
				    struct net_devmem_dmabuf_binding *binding,
				    struct netlink_ext_ack *extack);
-void dev_dmabuf_uninstall(struct net_device *dev);
 
 static inline struct dmabuf_genpool_chunk_owner *
 net_devmem_iov_to_chunk_owner(const struct
net_iov *niov)
@@ -145,10 +144,6 @@ net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 	return -EOPNOTSUPP;
 }
 
-static inline void dev_dmabuf_uninstall(struct net_device *dev)
-{
-}
-
 static inline struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 {

From patchwork Tue Feb 4 21:56:19 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959981
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 08/10] net: prepare for non devmem TCP memory providers
Date: Tue, 4 Feb 2025 13:56:19 -0800
Message-ID: <20250204215622.695511-9-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>
X-Patchwork-Delegate: kuba@kernel.org

From: Pavel Begunkov

There are a number of places in generic paths that assume the only page
pool memory provider is devmem TCP. As we want to reuse the net_iov and
provider infrastructure, we need to patch those places up and explicitly
check the provider type when branching into devmem TCP code.
Reviewed-by: Mina Almasry
Reviewed-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/devmem.c | 5 +++++
 net/core/devmem.h | 7 +++++++
 net/ipv4/tcp.c    | 5 +++++
 3 files changed, 17 insertions(+)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index cbac6419fcc4..7c6e0b5b6acb 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -30,6 +30,11 @@ static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
 static const struct memory_provider_ops dmabuf_devmem_ops;
 
+bool net_is_devmem_iov(struct net_iov *niov)
+{
+	return niov->pp->mp_ops == &dmabuf_devmem_ops;
+}
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
					       struct gen_pool_chunk *chunk,
					       void *not_used)
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 8e999fe2ae67..7fc158d52729 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -115,6 +115,8 @@ struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
 
+bool net_is_devmem_iov(struct net_iov *niov);
+
 #else
 struct net_devmem_dmabuf_binding;
 
@@ -163,6 +165,11 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
+
+static inline bool net_is_devmem_iov(struct net_iov *niov)
+{
+	return false;
+}
 #endif
 
 #endif /* _NET_DEVMEM_H */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index b872de9a8271..7f43d31c9400 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2476,6 +2476,11 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 			}
 
 			niov = skb_frag_net_iov(frag);
+			if (!net_is_devmem_iov(niov)) {
+				err = -ENODEV;
+				goto out;
+			}
+
 			end = start + skb_frag_size(frag);
 			copy = end - offset;

From patchwork Tue Feb 4 21:56:20 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959983
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 09/10] net: page_pool: add memory provider helpers
Date: Tue, 4 Feb 2025 13:56:20 -0800
Message-ID: <20250204215622.695511-10-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>
X-Patchwork-Delegate: kuba@kernel.org

From: Pavel Begunkov

Add helpers for memory providers to interact with page pools.
net_mp_niov_{set,clear}_page_pool() serve to [dis]associate a net_iov
with a page pool. If used, the memory provider is responsible for
matching each "set" call with a "clear" once a net_iov is no longer
going to be used by a page pool, when changing page pools, etc.
Acked-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h | 19 +++++++++++++++++
 net/core/page_pool.c                    | 28 +++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 36469a7e649f..4f0ffb8f6a0a 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -18,4 +18,23 @@ struct memory_provider_ops {
 	void (*uninstall)(void *mp_priv, struct netdev_rx_queue *rxq);
 };
 
+bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr);
+void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov);
+void net_mp_niov_clear_page_pool(struct net_iov *niov);
+
+/**
+ * net_mp_netmem_place_in_cache() - give a netmem to a page pool
+ * @pool:      the page pool to place the netmem into
+ * @netmem:    netmem to give
+ *
+ * Push an accounted netmem into the page pool's allocation cache. The caller
+ * must ensure that there is space in the cache. It should only be called off
+ * the mp_ops->alloc_netmems() path.
+ */
+static inline void net_mp_netmem_place_in_cache(struct page_pool *pool,
+						netmem_ref netmem)
+{
+	pool->alloc.cache[pool->alloc.count++] = netmem;
+}
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index d632cf2c91c3..686bd4a117d9 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1197,3 +1197,31 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr)
+{
+	return page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov), addr);
+}
+
+/* Associate a niov with a page pool. Should follow with a matching
+ * net_mp_niov_clear_page_pool()
+ */
+void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov)
+{
+	netmem_ref netmem = net_iov_to_netmem(niov);
+
+	page_pool_set_pp_info(pool, netmem);
+
+	pool->pages_state_hold_cnt++;
+	trace_page_pool_state_hold(pool, netmem, pool->pages_state_hold_cnt);
+}
+
+/* Disassociate a niov from a page pool. Should only be used in the
+ * ->release_netmem() path.
+ */
+void net_mp_niov_clear_page_pool(struct net_iov *niov)
+{
+	netmem_ref netmem = net_iov_to_netmem(niov);
+
+	page_pool_clear_pp_info(netmem);
+}

From patchwork Tue Feb 4 21:56:21 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13959982
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller",
 Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
 Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 10/10] net: add helpers for setting a memory provider on an rx queue
Date: Tue, 4 Feb 2025 13:56:21 -0800
Message-ID: <20250204215622.695511-11-dw@davidwei.uk>
In-Reply-To: <20250204215622.695511-1-dw@davidwei.uk>
References: <20250204215622.695511-1-dw@davidwei.uk>
X-Patchwork-Delegate: kuba@kernel.org

Add helpers that properly install or remove a memory provider on an rx
queue and then restart the queue.
Reviewed-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h |  5 ++
 net/core/netdev_rx_queue.c              | 69 +++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 4f0ffb8f6a0a..b3e665897767 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -22,6 +22,11 @@ bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr);
 void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov);
 void net_mp_niov_clear_page_pool(struct net_iov *niov);
 
+int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx,
+		    struct pp_memory_provider_params *p);
+void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
+		      struct pp_memory_provider_params *old_p);
+
 /**
  * net_mp_netmem_place_in_cache() - give a netmem to a page pool
  * @pool: the page pool to place the netmem into
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index db82786fa0c4..db46880f37cc 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 
 #include "page_pool_priv.h"
 
@@ -80,3 +81,71 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
 	return err;
 }
 EXPORT_SYMBOL_NS_GPL(netdev_rx_queue_restart, "NETDEV_INTERNAL");
+
+static int __net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx,
+			     struct pp_memory_provider_params *p)
+{
+	struct netdev_rx_queue *rxq;
+	int ret;
+
+	if (ifq_idx >= dev->real_num_rx_queues)
+		return -EINVAL;
+	ifq_idx = array_index_nospec(ifq_idx, dev->real_num_rx_queues);
+
+	rxq = __netif_get_rx_queue(dev, ifq_idx);
+	if (rxq->mp_params.mp_ops)
+		return -EEXIST;
+
+	rxq->mp_params = *p;
+	ret = netdev_rx_queue_restart(dev, ifq_idx);
+	if (ret) {
+		rxq->mp_params.mp_ops = NULL;
+		rxq->mp_params.mp_priv = NULL;
+	}
+	return ret;
+}
+
+int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx,
+		    struct pp_memory_provider_params *p)
+{
+	int ret;
+
+	rtnl_lock();
+	ret = __net_mp_open_rxq(dev, ifq_idx, p);
+	rtnl_unlock();
+	return ret;
+}
+
+static void __net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
+			       struct pp_memory_provider_params *old_p)
+{
+	struct netdev_rx_queue *rxq;
+
+	if (WARN_ON_ONCE(ifq_idx >= dev->real_num_rx_queues))
+		return;
+
+	rxq = __netif_get_rx_queue(dev, ifq_idx);
+
+	/* Callers holding a netdev ref may get here after we already
+	 * went thru shutdown via dev_memory_provider_uninstall().
+	 */
+	if (dev->reg_state > NETREG_REGISTERED &&
+	    !rxq->mp_params.mp_ops)
+		return;
+
+	if (WARN_ON_ONCE(rxq->mp_params.mp_ops != old_p->mp_ops ||
+			 rxq->mp_params.mp_priv != old_p->mp_priv))
+		return;
+
+	rxq->mp_params.mp_ops = NULL;
+	rxq->mp_params.mp_priv = NULL;
+	WARN_ON(netdev_rx_queue_restart(dev, ifq_idx));
+}
+
+void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
+		      struct pp_memory_provider_params *old_p)
+{
+	rtnl_lock();
+	__net_mp_close_rxq(dev, ifq_idx, old_p);
+	rtnl_unlock();
+}