From patchwork Sun Aug 25 04:14:59 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776583
Date: Sun, 25 Aug 2024 04:14:59 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
References: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-2-almasrymina@google.com>
Subject: [PATCH net-next v22 01/13] netdev: add netdev_rx_queue_restart()
From: Mina Almasry <almasrymina@google.com>

Add netdev_rx_queue_restart(), which resets an rx queue using the queue
API recently merged[1].

The queue API was merged to enable the core net stack to reset
individual rx queues to actuate changes in the rx queue's configuration.
In later patches in this series, we will use netdev_rx_queue_restart()
to reset rx queues after binding or unbinding dmabuf configuration,
which will cause reallocation of the page_pool to repopulate its memory
using the new configuration.

[1] https://lore.kernel.org/netdev/20240430231420.699177-1-shailend@google.com/T/

Signed-off-by: David Wei
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov
Reviewed-by: Jakub Kicinski

---

v18:
- Add more color to commit message (Xuan Zhuo).

v17:
- Use ASSERT_RTNL() (Jakub).

v13:
- Add reviewed-by from Pavel (thanks!)
- Fixed comment (Pavel)

v11:
- Fix not checking dev->queue_mgmt_ops (Pavel).
- Fix ndo_queue_mem_free call that passed the wrong pointer (David).

v9: https://lore.kernel.org/all/20240502045410.3524155-4-dw@davidwei.uk/
(submitted by David).
- fixed SPDX license identifier (Simon).
- Rebased on top of merged queue API definition, and changed
  implementation to match that.
- Replace rtnl_lock() with rtnl_is_locked() to make it usable from my
  netlink code where rtnl is already locked.
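For context, netdev_rx_queue_restart() below drives the per-queue callbacks a
driver registers via the queue API referenced in [1]. A minimal sketch of the
driver-side wiring follows, assuming the netdev_queue_mgmt_ops layout from
that merged API; the my_* type and callbacks are hypothetical placeholders,
not part of this series:

```c
#include <linux/netdevice.h>
#include <net/netdev_queues.h>

/* Hypothetical per-rx-queue state; the core allocates ndo_queue_mem_size
 * bytes of scratch for it around each restart.
 */
struct my_rxq_mem {
	void *rings;		/* descriptor rings, page_pool, ... */
};

static int my_queue_mem_alloc(struct net_device *dev, void *per_queue_mem,
			      int idx)
{
	/* allocate fresh rings/page_pool for queue idx into per_queue_mem */
	return 0;
}

static void my_queue_mem_free(struct net_device *dev, void *per_queue_mem)
{
	/* undo my_queue_mem_alloc() */
}

static int my_queue_stop(struct net_device *dev, void *per_queue_mem, int idx)
{
	/* quiesce queue idx, saving its live state into per_queue_mem */
	return 0;
}

static int my_queue_start(struct net_device *dev, void *per_queue_mem, int idx)
{
	/* install per_queue_mem as the live state of queue idx and run it */
	return 0;
}

/* Assigned to dev->queue_mgmt_ops at probe time. */
static const struct netdev_queue_mgmt_ops my_queue_mgmt_ops = {
	.ndo_queue_mem_size	= sizeof(struct my_rxq_mem),
	.ndo_queue_mem_alloc	= my_queue_mem_alloc,
	.ndo_queue_mem_free	= my_queue_mem_free,
	.ndo_queue_stop		= my_queue_stop,
	.ndo_queue_start	= my_queue_start,
};
```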
---
 include/net/netdev_rx_queue.h |  3 ++
 net/core/Makefile             |  1 +
 net/core/netdev_rx_queue.c    | 74 +++++++++++++++++++++++++++++++++++
 3 files changed, 78 insertions(+)
 create mode 100644 net/core/netdev_rx_queue.c

diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index aa1716fb0e53..e78ca52d67fb 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -54,4 +54,7 @@ get_netdev_rx_queue_index(struct netdev_rx_queue *queue)
 	return index;
 }
 #endif
+
+int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq);
+
 #endif
diff --git a/net/core/Makefile b/net/core/Makefile
index 62be9aef2528..f82232b358a2 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_NETDEV_ADDR_LIST_TEST) += dev_addr_lists_test.o
 obj-y += net-sysfs.o
 obj-y += hotdata.o
+obj-y += netdev_rx_queue.o
 obj-$(CONFIG_PAGE_POOL) += page_pool.o page_pool_user.o
 obj-$(CONFIG_PROC_FS) += net-procfs.o
 obj-$(CONFIG_NET_PKTGEN) += pktgen.o
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
new file mode 100644
index 000000000000..da11720a5983
--- /dev/null
+++ b/net/core/netdev_rx_queue.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <linux/netdevice.h>
+#include <net/netdev_queues.h>
+#include <net/netdev_rx_queue.h>
+
+int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
+{
+	void *new_mem, *old_mem;
+	int err;
+
+	if (!dev->queue_mgmt_ops || !dev->queue_mgmt_ops->ndo_queue_stop ||
+	    !dev->queue_mgmt_ops->ndo_queue_mem_free ||
+	    !dev->queue_mgmt_ops->ndo_queue_mem_alloc ||
+	    !dev->queue_mgmt_ops->ndo_queue_start)
+		return -EOPNOTSUPP;
+
+	ASSERT_RTNL();
+
+	new_mem = kvzalloc(dev->queue_mgmt_ops->ndo_queue_mem_size, GFP_KERNEL);
+	if (!new_mem)
+		return -ENOMEM;
+
+	old_mem = kvzalloc(dev->queue_mgmt_ops->ndo_queue_mem_size, GFP_KERNEL);
+	if (!old_mem) {
+		err = -ENOMEM;
+		goto err_free_new_mem;
+	}
+
+	err = dev->queue_mgmt_ops->ndo_queue_mem_alloc(dev, new_mem, rxq_idx);
+	if (err)
+		goto err_free_old_mem;
+
+	err = dev->queue_mgmt_ops->ndo_queue_stop(dev, old_mem, rxq_idx);
+	if (err)
+		goto err_free_new_queue_mem;
+
+	err = dev->queue_mgmt_ops->ndo_queue_start(dev, new_mem, rxq_idx);
+	if (err)
+		goto err_start_queue;
+
+	dev->queue_mgmt_ops->ndo_queue_mem_free(dev, old_mem);
+
+	kvfree(old_mem);
+	kvfree(new_mem);
+
+	return 0;
+
+err_start_queue:
+	/* Restarting the queue with old_mem should be successful as we haven't
+	 * changed any of the queue configuration, and there is not much we can
+	 * do to recover from a failure here.
+	 *
+	 * WARN if we fail to recover the old rx queue, and at least free
+	 * old_mem so we don't also leak that.
+	 */
+	if (dev->queue_mgmt_ops->ndo_queue_start(dev, old_mem, rxq_idx)) {
+		WARN(1,
+		     "Failed to restart old queue in error path. RX queue %d may be unhealthy.",
+		     rxq_idx);
+		dev->queue_mgmt_ops->ndo_queue_mem_free(dev, old_mem);
+	}
+
+err_free_new_queue_mem:
+	dev->queue_mgmt_ops->ndo_queue_mem_free(dev, new_mem);
+
+err_free_old_mem:
+	kvfree(old_mem);
+
+err_free_new_mem:
+	kvfree(new_mem);
+
+	return err;
+}
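As a usage illustration: callers are expected to hold RTNL, hence the
ASSERT_RTNL() above. A minimal, hypothetical caller (the function name and
queue index are invented for the example; error handling is abbreviated):

```c
/* Sketch: restart rx queue 0 of a device that implements queue_mgmt_ops. */
static int example_reset_rxq0(struct net_device *dev)
{
	int err;

	rtnl_lock();
	err = netdev_rx_queue_restart(dev, 0);
	rtnl_unlock();

	return err;	/* -EOPNOTSUPP if the driver lacks queue_mgmt_ops */
}
```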
From patchwork Sun Aug 25 04:15:00 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776584
Date: Sun, 25 Aug 2024 04:15:00 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-3-almasrymina@google.com>
Subject: [PATCH net-next v22 02/13] net: netdev netlink api to bind dma-buf
 to a net device
From: Mina Almasry <almasrymina@google.com>

The API takes the dma-buf fd as input and binds it to the netdevice. The
user can specify the rx queues to bind the dma-buf to.

Suggested-by: Stanislav Fomichev
Signed-off-by: Mina Almasry
Reviewed-by: Donald Hunter
Reviewed-by: Jakub Kicinski

---

v16:
- Use subset-of: queue queue-id instead of queue-dmabuf (Jakub).
- Rename attribute 'bind-dmabuf' to more generic 'dmabuf' (Jakub).
- Use 'dmabuf' everywhere instead of mix of 'dma-buf' and 'dmabuf'
  (Donald).
- Remove repetitive 'dmabuf' naming that appeared in some places (Jakub).
- Reordered where the new operations went so I don't break the enum UAPI
  (Jakub).

v7:
- Use flags: [ admin-perm ] instead of a CAP_NET_ADMIN check.

Changes in v1:
- Add rx-queue-type to distinguish rx from tx (Jakub)
- Return dma-buf ID from netlink API (David, Stan)

Changes in RFC-v3:
- Support binding multiple rx-queues
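To make the message layout concrete, here is a hedged userspace sketch that
builds a NETDEV_CMD_BIND_RX request with libmnl, using the attributes this
patch adds below. Genetlink family-id resolution (the nlctrl GETFAMILY
handshake) and reply parsing are omitted; family_id, ifindex, dmabuf_fd and
rxq are assumed inputs. This is an illustration, not the selftest code from
the series:

```c
#include <stdint.h>
#include <time.h>
#include <libmnl/libmnl.h>
#include <linux/genetlink.h>
#include <linux/netdev.h>

/* Fill buf (>= MNL_SOCKET_BUFFER_SIZE) with a bind-rx request that binds
 * one rx queue of ifindex to the dmabuf behind dmabuf_fd.
 */
static struct nlmsghdr *put_bind_rx(char *buf, uint16_t family_id,
				    uint32_t ifindex, uint32_t dmabuf_fd,
				    uint32_t rxq)
{
	struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf);
	struct genlmsghdr *genl;
	struct nlattr *nest;

	nlh->nlmsg_type = family_id;	/* resolved "netdev" genl family */
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	nlh->nlmsg_seq = time(NULL);

	genl = mnl_nlmsg_put_extra_header(nlh, sizeof(*genl));
	genl->cmd = NETDEV_CMD_BIND_RX;
	genl->version = 1;

	mnl_attr_put_u32(nlh, NETDEV_A_DMABUF_IFINDEX, ifindex);
	mnl_attr_put_u32(nlh, NETDEV_A_DMABUF_FD, dmabuf_fd);

	/* one nested queue-id attr per rx queue to bind (multi-attr) */
	nest = mnl_attr_nest_start(nlh, NETDEV_A_DMABUF_QUEUES);
	mnl_attr_put_u32(nlh, NETDEV_A_QUEUE_ID, rxq);
	mnl_attr_put_u32(nlh, NETDEV_A_QUEUE_TYPE, NETDEV_QUEUE_TYPE_RX);
	mnl_attr_nest_end(nlh, nest);

	return nlh;
}
```

On success the kernel replies with a NETDEV_A_DMABUF_ID attribute carrying
the binding id; closing the netlink socket later drops the binding.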
---
 Documentation/netlink/specs/netdev.yaml | 47 +++++++++++++++++++++++++
 include/uapi/linux/netdev.h             | 11 ++++++
 net/core/netdev-genl-gen.c              | 19 ++++++++++
 net/core/netdev-genl-gen.h              |  2 ++
 net/core/netdev-genl.c                  |  6 ++++
 tools/include/uapi/linux/netdev.h       | 11 ++++++
 6 files changed, 96 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 959755be4d7f..4930e8142aa6 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -457,6 +457,39 @@ attribute-sets:
           Number of times driver re-started accepting send
           requests to this queue from the stack.
         type: uint
+  -
+    name: queue-id
+    subset-of: queue
+    attributes:
+      -
+        name: id
+      -
+        name: type
+  -
+    name: dmabuf
+    attributes:
+      -
+        name: ifindex
+        doc: netdev ifindex to bind the dmabuf to.
+        type: u32
+        checks:
+          min: 1
+      -
+        name: queues
+        doc: receive queues to bind the dmabuf to.
+        type: nest
+        nested-attributes: queue-id
+        multi-attr: true
+      -
+        name: fd
+        doc: dmabuf file descriptor to bind.
+        type: u32
+      -
+        name: id
+        doc: id of the dmabuf binding
+        type: u32
+        checks:
+          min: 1
 
 operations:
   list:
@@ -619,6 +652,20 @@ operations:
             - rx-bytes
             - tx-packets
             - tx-bytes
+    -
+      name: bind-rx
+      doc: Bind dmabuf to netdev
+      attribute-set: dmabuf
+      flags: [ admin-perm ]
+      do:
+        request:
+          attributes:
+            - ifindex
+            - fd
+            - queues
+        reply:
+          attributes:
+            - id
 
 mcast-groups:
   list:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 43742ac5b00d..91bf3ecc5f1d 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -173,6 +173,16 @@ enum {
 	NETDEV_A_QSTATS_MAX = (__NETDEV_A_QSTATS_MAX - 1)
 };
 
+enum {
+	NETDEV_A_DMABUF_IFINDEX = 1,
+	NETDEV_A_DMABUF_QUEUES,
+	NETDEV_A_DMABUF_FD,
+	NETDEV_A_DMABUF_ID,
+
+	__NETDEV_A_DMABUF_MAX,
+	NETDEV_A_DMABUF_MAX = (__NETDEV_A_DMABUF_MAX - 1)
+};
+
 enum {
 	NETDEV_CMD_DEV_GET = 1,
 	NETDEV_CMD_DEV_ADD_NTF,
@@ -186,6 +196,7 @@ enum {
 	NETDEV_CMD_QUEUE_GET,
 	NETDEV_CMD_NAPI_GET,
 	NETDEV_CMD_QSTATS_GET,
+	NETDEV_CMD_BIND_RX,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 8350a0afa9ec..6b7fe6035067 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -27,6 +27,11 @@ const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1] = {
 	[NETDEV_A_PAGE_POOL_IFINDEX] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_page_pool_ifindex_range),
 };
 
+const struct nla_policy netdev_queue_id_nl_policy[NETDEV_A_QUEUE_TYPE + 1] = {
+	[NETDEV_A_QUEUE_ID] = { .type = NLA_U32, },
+	[NETDEV_A_QUEUE_TYPE] = NLA_POLICY_MAX(NLA_U32, 1),
+};
+
 /* NETDEV_CMD_DEV_GET - do */
 static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1] = {
 	[NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
@@ -74,6 +79,13 @@ static const struct nla_policy netdev_qstats_get_nl_policy[NETDEV_A_QSTATS_SCOPE + 1] = {
 	[NETDEV_A_QSTATS_SCOPE] = NLA_POLICY_MASK(NLA_UINT, 0x1),
 };
 
+/* NETDEV_CMD_BIND_RX - do */
+static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_DMABUF_FD + 1] = {
+	[NETDEV_A_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+	[NETDEV_A_DMABUF_FD] = { .type = NLA_U32, },
+	[NETDEV_A_DMABUF_QUEUES] = NLA_POLICY_NESTED(netdev_queue_id_nl_policy),
+};
+
 /* Ops table for netdev */
 static const struct genl_split_ops netdev_nl_ops[] = {
 	{
@@ -151,6 +163,13 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.maxattr	= NETDEV_A_QSTATS_SCOPE,
 		.flags		= GENL_CMD_CAP_DUMP,
 	},
+	{
+		.cmd		= NETDEV_CMD_BIND_RX,
+		.doit		= netdev_nl_bind_rx_doit,
+		.policy		= netdev_bind_rx_nl_policy,
+		.maxattr	= NETDEV_A_DMABUF_FD,
+		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+	},
 };
 
 static const struct genl_multicast_group netdev_nl_mcgrps[] = {
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index 4db40fd5b4a9..67c34005750c 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -13,6 +13,7 @@
 
 /* Common nested types */
 extern const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1];
+extern const struct nla_policy netdev_queue_id_nl_policy[NETDEV_A_QUEUE_TYPE + 1];
 
 int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
@@ -30,6 +31,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
 int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 				struct netlink_callback *cb);
+int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info);
 
 enum {
 	NETDEV_NLGRP_MGMT,
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 05f9515d2c05..2d726e65211d 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -721,6 +721,12 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 	return err;
 }
 
+/* Stub */
+int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
+{
+	return 0;
+}
+
 static int netdev_genl_netdevice_event(struct notifier_block *nb,
 				       unsigned long event, void *ptr)
 {
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 43742ac5b00d..91bf3ecc5f1d 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -173,6 +173,16 @@ enum {
 	NETDEV_A_QSTATS_MAX = (__NETDEV_A_QSTATS_MAX - 1)
 };
 
+enum {
+	NETDEV_A_DMABUF_IFINDEX = 1,
+	NETDEV_A_DMABUF_QUEUES,
+	NETDEV_A_DMABUF_FD,
+	NETDEV_A_DMABUF_ID,
+
+	__NETDEV_A_DMABUF_MAX,
+	NETDEV_A_DMABUF_MAX = (__NETDEV_A_DMABUF_MAX - 1)
+};
+
 enum {
 	NETDEV_CMD_DEV_GET = 1,
 	NETDEV_CMD_DEV_ADD_NTF,
@@ -186,6 +196,7 @@ enum {
 	NETDEV_CMD_QUEUE_GET,
 	NETDEV_CMD_NAPI_GET,
 	NETDEV_CMD_QSTATS_GET,
+	NETDEV_CMD_BIND_RX,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
From patchwork Sun Aug 25 04:15:01 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776586
Date: Sun, 25 Aug 2024 04:15:01 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-4-almasrymina@google.com>
Subject: [PATCH net-next v22 03/13] netdev: support binding dma-buf to
 netdevice
From: Mina Almasry <almasrymina@google.com>
Add a netdev_dmabuf_binding struct which represents the
dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
rx queues on the netdevice. On the binding, the dma_buf_attach &
dma_buf_map_attachment will occur. The entries in the sg_table from
mapping will be inserted into a genpool to make it ready for allocation.

The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
holds the dma-buf offset of the base of the chunk and the dma_addr of
the chunk. Both are needed to use allocations that come from this chunk.

We create a new type that represents an allocation from the genpool:
net_iov. We set up the net_iov allocation size in the genpool to
PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by
the page pool and given to the drivers.

The user can unbind the dmabuf from the netdevice by closing the netlink
socket that established the binding. We do this so that the binding is
automatically unbound even if the userspace process crashes.

The binding and unbinding leave an indicator in struct netdev_rx_queue
that the given queue is bound, and the binding is actuated by resetting
the rx queue using the queue API.

The netdev_dmabuf_binding struct is refcounted, and releases its
resources only when all the refs are released.

Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov # excluding netlink
Acked-by: Daniel Vetter

---

v22:
- Disable binding xdp to mp bound netdevs, and propagating xdp
  configuration to mp bound netdevs
- Add extack error messages.
- Prevent binding dmabuf to a queue with xsk_buff_pool (Jakub)
- Prevent xp_assign_dev to a device using memory provider (Jakub)

v21:
- Move definition of net_devmem_dmabuf_bindings to inside the #ifdef to
  prevent unused static variable warning.

v20:
- rename dev_get_max_mp_channel to dev_get_min_mp_channel_count (Jakub)
- Removed unnecessary includes from dev.c
- Fixed bug with return value of dev_get_min_mp_channel_count getting
  implicitly cast to unsigned int (Jakub)
- Combine netlink attr checks into one statement with || (Jakub)
- Removed unnecessary include from ethtool/common.c

v19:
- Prevent deactivating queues bound to mp (Jakub).
- disable attaching xdp to memory provider netdev (Jakub).
- Address various nits from Jakub.
- In the netlink API, check for presence of queue_id, queue_type and
  that the queue_type is RX (Jakub).

v17:
- Add missing kfree(owner) (Jakub)
- Fix issue found by Taehee, where we may access an rxq that has already
  been freed if the driver has been unloaded in the meantime (thanks!)

v16:
- Fix rtnl_lock() not being acquired on unbind path (Reported by
  Taehee).
- Use unlocked versions of dma_buf_[un]map_attachment (Reported by
  Taehee).
- Use real_num_rx_queues instead of num_rx_queues (Taehee).
- bound_rxq_list -> bound_rxqs (Jakub).
- Removed READ_ONCE/WRITE_ONCE in favor of rtnl_lock() sync. (Jakub).
- Use ERR_CAST instead of out param (Jakub).
- Add NULL Check for kzalloc_node() call (Paolo).
- Move genl_sk_priv_get, genlmsg_new, genlmsg_input outside of the lock
  acquisition (Jakub).
- Add netif_device_present() check (Jakub).
- Use nla_for_each_attr_type (Jakub).

v13:
- Fixed a couple of places that still listed DMA_BIDIRECTIONAL (Pavel).
- Added reviewed-by from Pavel.

v11:
- Fix build error with CONFIG_DMA_SHARED_BUFFER &&
  !CONFIG_GENERIC_ALLOCATOR
- Rebased on top of no memory provider ops.

v10:
- Moved net_iov_dma_addr() to devmem.h and made it devmem specific
  helper (David).

v9: https://lore.kernel.org/all/20240403002053.2376017-5-almasrymina@google.com/
- Removed net_devmem_restart_rx_queues and put it in its own patch
  (David).

v8:
- move dmabuf_devmem_ops usage to later patch to avoid patch-by-patch
  build error.

v7:
- Use IS_ERR() instead of IS_ERR_OR_NULL() for the dma_buf_get() return
  value.
- Changes netdev_* naming in devmem.c to net_devmem_* (Yunsheng).
- DMA_BIDIRECTIONAL -> DMA_FROM_DEVICE (Yunsheng).
- Added a comment around recovering of the old rx queue in
  net_devmem_restart_rx_queue(), and added freeing of old_mem if the
  restart of the old queue fails. (Yunsheng).
- Use kernel-family sock-priv (Jakub).
- Put pp_memory_provider_params in netdev_rx_queue instead of the
  dma-buf specific binding (Pavel & David).
- Move queue management ops to queue_mgmt_ops instead of netdev_ops
  (Jakub).
- Remove excess whitespaces (Jakub).
- Use genlmsg_iput (Jakub).

v6:
- Validate rx queue index
- Refactor new functions into devmem.c (Pavel)

v5:
- Renamed page_pool_iov to net_iov, and moved that support to devmem.h
  or netmem.h.

v1:
- Introduce devmem.h instead of bloating netdevice.h (Jakub)
- ENOTSUPP -> EOPNOTSUPP (checkpatch.pl I think)
- Remove unneeded rcu protection for binding->list (rtnl protected)
- Removed extraneous err_binding_put: label.
- Removed dma_addr += len (Paolo).
- Don't override err on netdev_bind_dmabuf_to_queue failure.
- Rename devmem -> dmabuf (David).
- Add id to dmabuf binding (David/Stan).
- Fix missing xa_destroy bound_rq_list.
- Use queue api to reset bound RX queues (Jakub).
- Update netlink API for rx-queue type (tx/rx) (Jakub).

RFC v3:
- Support multi rx-queue binding
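To make the genpool/chunk-owner layout described in the commit message
concrete, here is a small worked example. The field names come from the
dmabuf_genpool_chunk_owner struct added in the diff below; the sizes and
addresses are invented for illustration:

```c
/* Worked example: a dmabuf whose mapped sg_table has two DMA entries,
 * 2 MiB at dma_addr 0x80000000 and 1 MiB at 0xa0000000 (invented values),
 * with PAGE_SIZE = 4 KiB.
 *
 * Chunk owner 0: base_virtual = 0       base_dma_addr = 0x80000000
 *                num_niovs = 2 MiB / 4 KiB = 512 net_iovs
 * Chunk owner 1: base_virtual = 2 MiB   base_dma_addr = 0xa0000000
 *                num_niovs = 1 MiB / 4 KiB = 256 net_iovs
 *
 * net_iov k of an owner covers the DMA range
 *   [base_dma_addr + k * PAGE_SIZE, base_dma_addr + (k + 1) * PAGE_SIZE)
 * and sits at dmabuf offset base_virtual + k * PAGE_SIZE, which is why
 * the owner keeps both base addresses.
 */
```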
---
 Documentation/netlink/specs/netdev.yaml |   4 +
 include/linux/netdevice.h               |   2 +
 include/net/devmem.h                    | 120 +++++++++++
 include/net/netdev_rx_queue.h           |   2 +
 include/net/netmem.h                    |  10 +
 include/net/page_pool/types.h           |   6 +
 net/core/Makefile                       |   2 +-
 net/core/dev.c                          |  21 ++
 net/core/devmem.c                       | 270 ++++++++++++++++++++++++
 net/core/netdev-genl-gen.c              |   4 +
 net/core/netdev-genl-gen.h              |   4 +
 net/core/netdev-genl.c                  | 106 +++++++++-
 net/ethtool/common.c                    |   8 +
 net/xdp/xsk_buff_pool.c                 |   5 +
 14 files changed, 561 insertions(+), 3 deletions(-)
 create mode 100644 include/net/devmem.h
 create mode 100644 net/core/devmem.c

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 4930e8142aa6..0c747530c275 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -667,6 +667,10 @@ operations:
           attributes:
             - id
 
+kernel-family:
+  headers: [ "linux/list.h"]
+  sock-priv: struct list_head
+
 mcast-groups:
   list:
     -
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 7dfa836d9dbf..752a85412560 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3928,6 +3928,8 @@ u8 dev_xdp_prog_count(struct net_device *dev);
 int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf);
 u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode);
 
+u32 dev_get_min_mp_channel_count(const struct net_device *dev);
+
 int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
 int dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
 int dev_forward_skb_nomtu(struct net_device *dev, struct sk_buff *skb);
diff --git a/include/net/devmem.h b/include/net/devmem.h
new file mode 100644
index 000000000000..02925d897aa6
--- /dev/null
+++ b/include/net/devmem.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Device memory TCP support
+ *
+ * Authors:	Mina Almasry
+ *		Willem de Bruijn
+ *		Kaiyuan Zhang
+ *
+ */
+#ifndef _NET_DEVMEM_H
+#define _NET_DEVMEM_H
+
+struct net_devmem_dmabuf_binding {
+	struct dma_buf *dmabuf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct net_device *dev;
+	struct gen_pool *chunk_pool;
+
+	/* The user holds a ref (via the netlink API) for as long as they want
+	 * the binding to remain alive. Each page pool using this binding holds
+	 * a ref to keep the binding alive. Each allocated net_iov holds a
+	 * ref.
+	 *
+	 * The binding undoes itself and unmaps the underlying dmabuf once all
+	 * those refs are dropped and the binding is no longer desired or in
+	 * use.
+	 */
+	refcount_t ref;
+
+	/* The list of bindings currently active. Used for netlink to notify us
+	 * of the user dropping the bind.
+	 */
+	struct list_head list;
+
+	/* rxq's this binding is active on. */
+	struct xarray bound_rxqs;
+
+	/* ID of this binding. Globally unique to all bindings currently
+	 * active.
+	 */
+	u32 id;
+};
+
+/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
+ * entry from the dmabuf is inserted into the genpool as a chunk, and needs
+ * this owner struct to keep track of some metadata necessary to create
+ * allocations from this chunk.
+ */
+struct dmabuf_genpool_chunk_owner {
+	/* Offset into the dma-buf where this chunk starts. */
+	unsigned long base_virtual;
+
+	/* dma_addr of the start of the chunk. */
+	dma_addr_t base_dma_addr;
+
+	/* Array of net_iovs for this chunk. */
+	struct net_iov *niovs;
+	size_t num_niovs;
+
+	struct net_devmem_dmabuf_binding *binding;
+};
+
+struct netlink_ext_ack;
+
+#if defined(CONFIG_DMA_SHARED_BUFFER) && defined(CONFIG_GENERIC_ALLOCATOR)
+void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
+struct net_devmem_dmabuf_binding *
+net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+		       struct netlink_ext_ack *extack);
+void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
+int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+				    struct net_devmem_dmabuf_binding *binding);
+void dev_dmabuf_uninstall(struct net_device *dev);
+#else
+static inline void
+__net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
+{
+}
+
+static inline struct net_devmem_dmabuf_binding *
+net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+		       struct netlink_ext_ack *extack)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void
+net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+}
+
+static inline int
+net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+				struct net_devmem_dmabuf_binding *binding)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void dev_dmabuf_uninstall(struct net_device *dev)
+{
+}
+#endif
+
+static inline void
+net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
+{
+	refcount_inc(&binding->ref);
+}
+
+static inline void
+net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
+{
+	if (!refcount_dec_and_test(&binding->ref))
+		return;
+
+	__net_devmem_dmabuf_binding_free(binding);
+}
+
+#endif /* _NET_DEVMEM_H */
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index e78ca52d67fb..ac34f5fb4f71 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -6,6 +6,7 @@
 #include <linux/netdevice.h>
 #include <linux/sysfs.h>
 #include <net/xdp.h>
+#include <net/page_pool/types.h>
 
 /* This structure contains an instance of an RX queue. */
 struct netdev_rx_queue {
@@ -25,6 +26,7 @@ struct netdev_rx_queue {
 	 * Readers and writers must hold RTNL
 	 */
 	struct napi_struct		*napi;
+	struct pp_memory_provider_params mp_params;
 } ____cacheline_aligned_in_smp;
 
 /*
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 46cc9b89ac79..41e96c2f94b5 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -8,6 +8,16 @@
 #ifndef _NET_NETMEM_H
 #define _NET_NETMEM_H
 
+#include <net/devmem.h>
+
+/* net_iov */
+
+struct net_iov {
+	struct dmabuf_genpool_chunk_owner *owner;
+};
+
+/* netmem */
+
 /**
  * typedef netmem_ref - a nonexistent type marking a reference to generic
  * network memory.
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 50569fed7868..4afd6dd56351 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -139,6 +139,10 @@ struct page_pool_stats {
  */
 #define PAGE_POOL_FRAG_GROUP_ALIGN	(4 * sizeof(long))
 
+struct pp_memory_provider_params {
+	void *mp_priv;
+};
+
 struct page_pool {
 	struct page_pool_params_fast p;
 
@@ -197,6 +201,8 @@ struct page_pool {
 	 */
 	struct ptr_ring ring;
 
+	void *mp_priv;
+
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
 	struct page_pool_recycle_stats __percpu *recycle_stats;
diff --git a/net/core/Makefile b/net/core/Makefile
index f82232b358a2..6b43611fb4a4 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -13,7 +13,7 @@ obj-y += dev.o dev_addr_lists.o dst.o netevent.o \
 	 neighbour.o rtnetlink.o utils.o link_watch.o filter.o \
 	 sock_diag.o dev_ioctl.o tso.o sock_reuseport.o \
 	 fib_notifier.o xdp.o flow_offload.o gro.o \
-	 netdev-genl.o netdev-genl-gen.o gso.o
+	 netdev-genl.o netdev-genl-gen.o gso.o devmem.o
 
 obj-$(CONFIG_NETDEV_ADDR_LIST_TEST) += dev_addr_lists_test.o
diff --git a/net/core/dev.c b/net/core/dev.c
index 271d6aa0cfb9..2cab5f969778 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -9375,6 +9375,9 @@ int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf)
 	if (!dev->netdev_ops->ndo_bpf)
 		return -EOPNOTSUPP;
 
+	if (dev_get_min_mp_channel_count(dev))
+		return -EBUSY;
+
 	return dev->netdev_ops->ndo_bpf(dev, bpf);
 }
 EXPORT_SYMBOL_GPL(dev_xdp_propagate);
@@ -9407,6 +9410,9 @@ static int dev_xdp_install(struct net_device *dev, enum bpf_xdp_mode mode,
 	struct netdev_bpf xdp;
 	int err;
 
+	if (dev_get_min_mp_channel_count(dev))
+		return -EBUSY;
+
 	memset(&xdp, 0, sizeof(xdp));
 	xdp.command = mode == XDP_MODE_HW ? XDP_SETUP_PROG_HW : XDP_SETUP_PROG;
 	xdp.extack = extack;
@@ -9831,6 +9837,20 @@ int dev_change_xdp_fd(struct net_device *dev, struct netlink_ext_ack *extack,
 	return err;
 }
 
+u32 dev_get_min_mp_channel_count(const struct net_device *dev)
+{
+	u32 i, max = 0;
+
+	ASSERT_RTNL();
+
+	for (i = 0; i < dev->real_num_rx_queues; i++)
+		if (dev->_rx[i].mp_params.mp_priv)
+			/* The channel count is the idx plus 1. */
+			max = i + 1;
+
+	return max;
+}
+
 /**
  * dev_index_reserve() - allocate an ifindex in a namespace
  * @net: the applicable net namespace
@@ -11367,6 +11387,7 @@ void unregister_netdevice_many_notify(struct list_head *head,
 		dev_tcx_uninstall(dev);
 		dev_xdp_uninstall(dev);
 		bpf_dev_bound_netdev_unregister(dev);
+		dev_dmabuf_uninstall(dev);
 
 		netdev_offload_xstats_disable_all(dev);
diff --git a/net/core/devmem.c b/net/core/devmem.c
new file mode 100644
index 000000000000..b2b271a921c1
--- /dev/null
+++ b/net/core/devmem.c
@@ -0,0 +1,270 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ *	Devmem TCP
+ *
+ *	Authors:	Mina Almasry
+ *			Willem de Bruijn
+ *			Kaiyuan Zhang
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/genalloc.h>
+#include <linux/mm.h>
+#include <linux/netdevice.h>
+#include <linux/types.h>
+#include <net/devmem.h>
+#include <net/netdev_queues.h>
+#include <net/netdev_rx_queue.h>
+#include <net/page_pool/helpers.h>
+#include <trace/events/page_pool.h>
+
+/* Device memory support */
+
+#if defined(CONFIG_DMA_SHARED_BUFFER) && defined(CONFIG_GENERIC_ALLOCATOR)
+/* Protected by rtnl_lock() */
+static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
+
+static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
+					       struct gen_pool_chunk *chunk,
+					       void *not_used)
+{
+	struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
+
+	kvfree(owner->niovs);
+	kfree(owner);
+}
+
+void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
+{
+	size_t size, avail;
+
+	gen_pool_for_each_chunk(binding->chunk_pool,
+				net_devmem_dmabuf_free_chunk_owner, NULL);
+
+	size = gen_pool_size(binding->chunk_pool);
+	avail = gen_pool_avail(binding->chunk_pool);
+
+	if (!WARN(size != avail, "can't destroy genpool. size=%zu, avail=%zu",
+		  size, avail))
+		gen_pool_destroy(binding->chunk_pool);
+
+	dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt,
+					  DMA_FROM_DEVICE);
+	dma_buf_detach(binding->dmabuf, binding->attachment);
+	dma_buf_put(binding->dmabuf);
+	xa_destroy(&binding->bound_rxqs);
+	kfree(binding);
+}
+
+void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+	struct netdev_rx_queue *rxq;
+	unsigned long xa_idx;
+	unsigned int rxq_idx;
+
+	if (binding->list.next)
+		list_del(&binding->list);
+
+	xa_for_each(&binding->bound_rxqs, xa_idx, rxq) {
+		if (rxq->mp_params.mp_priv == binding) {
+			rxq->mp_params.mp_priv = NULL;
+
+			rxq_idx = get_netdev_rx_queue_index(rxq);
+
+			WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
+		}
+	}
+
+	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+
+	net_devmem_dmabuf_binding_put(binding);
+}
+
+int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+				    struct net_devmem_dmabuf_binding *binding)
+{
+	struct netdev_rx_queue *rxq;
+	u32 xa_idx;
+	int err;
+
+	if (rxq_idx >= dev->real_num_rx_queues)
+		return -ERANGE;
+
+	rxq = __netif_get_rx_queue(dev, rxq_idx);
+	if (rxq->mp_params.mp_priv)
+		return -EEXIST;
+
+#ifdef CONFIG_XDP_SOCKETS
+	if (rxq->pool)
+		return -EEXIST;
+#endif
+
+	if (dev_xdp_prog_count(dev))
+		return -EEXIST;
+
+	err = xa_alloc(&binding->bound_rxqs, &xa_idx, rxq, xa_limit_32b,
+		       GFP_KERNEL);
+	if (err)
+		return err;
+
+	rxq->mp_params.mp_priv = binding;
+
+	err = netdev_rx_queue_restart(dev, rxq_idx);
+	if (err)
+		goto err_xa_erase;
+
+	return 0;
+
+err_xa_erase:
+	rxq->mp_params.mp_priv = NULL;
+	xa_erase(&binding->bound_rxqs, xa_idx);
+
+	return err;
+}
+
+struct net_devmem_dmabuf_binding *
+net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+		       struct netlink_ext_ack *extack)
+{
+	struct net_devmem_dmabuf_binding *binding;
+	static u32 id_alloc_next;
+	struct scatterlist *sg;
+	struct dma_buf *dmabuf;
+	unsigned int sg_idx, i;
+	unsigned long virtual;
+	int err;
+
+	dmabuf = dma_buf_get(dmabuf_fd);
+	if (IS_ERR(dmabuf))
+		return ERR_CAST(dmabuf);
+
+	binding = kzalloc_node(sizeof(*binding), GFP_KERNEL,
+			       dev_to_node(&dev->dev));
+	if (!binding) {
+		err = -ENOMEM;
+		goto err_put_dmabuf;
+	}
+
+	binding->dev = dev;
+
+	err = xa_alloc_cyclic(&net_devmem_dmabuf_bindings, &binding->id,
+			      binding, xa_limit_32b, &id_alloc_next,
+			      GFP_KERNEL);
+	if (err < 0)
+		goto err_free_binding;
+
+	xa_init_flags(&binding->bound_rxqs, XA_FLAGS_ALLOC);
+
+	refcount_set(&binding->ref, 1);
+
+	binding->dmabuf = dmabuf;
+
+	binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
+	if (IS_ERR(binding->attachment)) {
+		err = PTR_ERR(binding->attachment);
+		NL_SET_ERR_MSG(extack, "Failed to bind dmabuf to device");
+		goto err_free_id;
+	}
+
+	binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
+						       DMA_FROM_DEVICE);
+	if (IS_ERR(binding->sgt)) {
+		err = PTR_ERR(binding->sgt);
+		NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
+		goto err_detach;
+	}
+
+	/* For simplicity we expect to make PAGE_SIZE allocations, but the
+	 * binding can be much more flexible than that. We may be able to
+	 * allocate MTU sized chunks here. Leave that for future work...
+	 */
+	binding->chunk_pool =
+		gen_pool_create(PAGE_SHIFT, dev_to_node(&dev->dev));
+	if (!binding->chunk_pool) {
+		err = -ENOMEM;
+		goto err_unmap;
+	}
+
+	virtual = 0;
+	for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
+		dma_addr_t dma_addr = sg_dma_address(sg);
+		struct dmabuf_genpool_chunk_owner *owner;
+		size_t len = sg_dma_len(sg);
+		struct net_iov *niov;
+
+		owner = kzalloc_node(sizeof(*owner), GFP_KERNEL,
+				     dev_to_node(&dev->dev));
+		if (!owner) {
+			err = -ENOMEM;
+			goto err_free_chunks;
+		}
+
+		owner->base_virtual = virtual;
+		owner->base_dma_addr = dma_addr;
+		owner->num_niovs = len / PAGE_SIZE;
+		owner->binding = binding;
+
+		err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
+					 dma_addr, len, dev_to_node(&dev->dev),
+					 owner);
+		if (err) {
+			kfree(owner);
+			err = -EINVAL;
+			goto err_free_chunks;
+		}
+
+		owner->niovs = kvmalloc_array(owner->num_niovs,
+					      sizeof(*owner->niovs),
+					      GFP_KERNEL);
+		if (!owner->niovs) {
+			err = -ENOMEM;
+			goto err_free_chunks;
+		}
+
+		for (i = 0; i < owner->num_niovs; i++) {
+			niov = &owner->niovs[i];
+			niov->owner = owner;
+		}
+
+		virtual += len;
+	}
+
+	return binding;
+
+err_free_chunks:
+	gen_pool_for_each_chunk(binding->chunk_pool,
+				net_devmem_dmabuf_free_chunk_owner, NULL);
+	gen_pool_destroy(binding->chunk_pool);
+err_unmap:
+	dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt,
+					  DMA_FROM_DEVICE);
+err_detach:
+	dma_buf_detach(dmabuf, binding->attachment);
+err_free_id:
+	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+err_free_binding:
+	kfree(binding);
+err_put_dmabuf:
+	dma_buf_put(dmabuf);
+	return ERR_PTR(err);
+}
+
+void dev_dmabuf_uninstall(struct net_device *dev)
+{
+	struct net_devmem_dmabuf_binding *binding;
+	struct netdev_rx_queue *rxq;
+	unsigned long xa_idx;
+	unsigned int i;
+
+	for (i = 0; i < dev->real_num_rx_queues; i++) {
+		binding = dev->_rx[i].mp_params.mp_priv;
+		if (!binding)
+			continue;
+
+		xa_for_each(&binding->bound_rxqs, xa_idx, rxq)
+			if (rxq == &dev->_rx[i])
+				xa_erase(&binding->bound_rxqs, xa_idx);
+	}
+}
+#endif
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 6b7fe6035067..b28424ae06d5 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -9,6 +9,7 @@
 #include "netdev-genl-gen.h"
 
 #include <uapi/linux/netdev.h>
+#include <linux/list.h>
 
 /* Integer value ranges */
 static const struct netlink_range_validation netdev_a_page_pool_id_range = {
@@ -187,4 +188,7 @@ struct genl_family netdev_nl_family __ro_after_init = {
 	.n_split_ops	= ARRAY_SIZE(netdev_nl_ops),
 	.mcgrps		= netdev_nl_mcgrps,
 	.n_mcgrps	= ARRAY_SIZE(netdev_nl_mcgrps),
+	.sock_priv_size		= sizeof(struct list_head),
+	.sock_priv_init		= (void *)netdev_nl_sock_priv_init,
+	.sock_priv_destroy	= (void *)netdev_nl_sock_priv_destroy,
 };
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index 67c34005750c..8cda334fd042 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -10,6 +10,7 @@
 
 #include <net/netlink.h>
 #include <net/genetlink.h>
+#include <linux/list.h>
 
 /* Common nested types */
 extern const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1];
@@ -40,4 +41,7 @@ enum {
 
 extern struct genl_family netdev_nl_family;
 
+void netdev_nl_sock_priv_init(struct list_head *priv);
+void netdev_nl_sock_priv_destroy(struct list_head *priv);
+
 #endif /* _LINUX_NETDEV_GEN_H */
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 2d726e65211d..269faa37f84e 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -10,6 +10,7 @@
 #include <net/xdp_sock.h>
 #include <net/netdev_rx_queue.h>
 #include <net/busy_poll.h>
+#include <net/devmem.h>
 
 #include "netdev-genl-gen.h"
 #include "dev.h"
@@ -721,10 +722,111 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 	return err;
 }
 
-/* Stub */
 int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return 0;
+	struct nlattr *tb[ARRAY_SIZE(netdev_queue_id_nl_policy)];
+	struct net_devmem_dmabuf_binding *binding;
+	struct list_head *sock_binding_list;
+	u32 ifindex, dmabuf_fd, rxq_idx;
+	struct net_device *netdev;
+	struct sk_buff *rsp;
+	struct nlattr *attr;
+	int rem, err = 0;
+	void *hdr;
+
+	if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
+	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_DMABUF_FD) ||
+	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_DMABUF_QUEUES))
+		return -EINVAL;
+
+	ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
+	dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_DMABUF_FD]);
+
+	sock_binding_list = genl_sk_priv_get(&netdev_nl_family,
+					     NETLINK_CB(skb).sk);
+	if (IS_ERR(sock_binding_list))
+		return PTR_ERR(sock_binding_list);
+
+	rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!rsp)
+		return -ENOMEM;
+
+	hdr = genlmsg_iput(rsp, info);
+	if (!hdr) {
+		err = -EMSGSIZE;
+		goto err_genlmsg_free;
+	}
+
+	rtnl_lock();
+
+	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+	if (!netdev || !netif_device_present(netdev)) {
+		err = -ENODEV;
+		goto err_unlock;
+	}
+
+	binding = net_devmem_bind_dmabuf(netdev, dmabuf_fd, info->extack);
+	if (IS_ERR(binding)) {
+		err = PTR_ERR(binding);
+		goto err_unlock;
+	}
+
+	nla_for_each_attr_type(attr, NETDEV_A_DMABUF_QUEUES,
+			       genlmsg_data(info->genlhdr),
+			       genlmsg_len(info->genlhdr), rem) {
+		err = nla_parse_nested(
+			tb, ARRAY_SIZE(netdev_queue_id_nl_policy) - 1, attr,
+			netdev_queue_id_nl_policy, info->extack);
+		if (err < 0)
+			goto err_unbind;
+
+		if (NL_REQ_ATTR_CHECK(info->extack, attr, tb, NETDEV_A_QUEUE_ID) ||
+		    NL_REQ_ATTR_CHECK(info->extack, attr, tb, NETDEV_A_QUEUE_TYPE) ||
+		    nla_get_u32(tb[NETDEV_A_QUEUE_TYPE]) != NETDEV_QUEUE_TYPE_RX) {
+			err = -EINVAL;
+			goto err_unlock;
+		}
+
+		rxq_idx = nla_get_u32(tb[NETDEV_A_QUEUE_ID]);
+
+		err = net_devmem_bind_dmabuf_to_queue(netdev, rxq_idx, binding);
+		if (err)
+			goto err_unbind;
+	}
+
+	list_add(&binding->list, sock_binding_list);
+
+	rtnl_unlock();
+
+	nla_put_u32(rsp, NETDEV_A_DMABUF_ID, binding->id);
+	genlmsg_end(rsp, hdr);
+
+	return genlmsg_reply(rsp, info);
+
+err_unbind:
+	net_devmem_unbind_dmabuf(binding);
+err_unlock:
+	rtnl_unlock();
+err_genlmsg_free:
+	nlmsg_free(rsp);
+	return err;
+}
+
+void netdev_nl_sock_priv_init(struct list_head *priv)
+{
+	INIT_LIST_HEAD(priv);
+}
+
+void netdev_nl_sock_priv_destroy(struct list_head *priv)
+{
+	struct net_devmem_dmabuf_binding *binding;
+	struct net_devmem_dmabuf_binding *temp;
+
+	list_for_each_entry_safe(binding, temp, priv, list) {
+		rtnl_lock();
+		net_devmem_unbind_dmabuf(binding);
+		rtnl_unlock();
+	}
 }
 
 static int netdev_genl_netdevice_event(struct notifier_block *nb,
diff --git a/net/ethtool/common.c b/net/ethtool/common.c
index 7257ae272296..496a5f040142 100644
--- a/net/ethtool/common.c
+++ b/net/ethtool/common.c
@@ -657,6 +657,7 @@ int ethtool_check_max_channel(struct net_device *dev,
 {
 	u64 max_rxnfc_in_use;
 	u32 max_rxfh_in_use;
+	int max_mp_in_use;
 
 	/* ensure the new Rx count fits within the configured Rx flow
 	 * indirection table/rxnfc settings
@@ -675,6 +676,13 @@ int ethtool_check_max_channel(struct net_device *dev,
 		return -EINVAL;
 	}
 
+	max_mp_in_use = dev_get_min_mp_channel_count(dev);
+	if (channels.combined_count + channels.rx_count <= max_mp_in_use) {
+		if (info)
+			GENL_SET_ERR_MSG_FMT(info, "requested channel counts are too low for existing memory provider setting (%d)", max_mp_in_use);
+		return -EINVAL;
+	}
+
 	return 0;
 }
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index c0e0204b9630..6b2756f95629 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -211,6 +211,11 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
 		goto err_unreg_pool;
 	}
 
+	if (dev_get_min_mp_channel_count(netdev)) {
+		err = -EBUSY;
+		goto err_unreg_pool;
+	}
+
 	bpf.command = XDP_SETUP_XSK_POOL;
 	bpf.xsk.pool = pool;
 	bpf.xsk.queue_id = queue_id;
From patchwork Sun Aug 25 04:15:02 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776585
Date: Sun, 25 Aug 2024 04:15:02 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-5-almasrymina@google.com>
Subject: [PATCH net-next v22 04/13] netdev: netdevice devmem allocator
From: Mina Almasry <almasrymina@google.com>
Implement netdev devmem allocator.

The allocator takes a given struct netdev_dmabuf_binding as input and allocates net_iov from that binding. The allocation simply delegates to the binding's genpool for the allocation logic and wraps the returned memory region in a net_iov struct.

Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov
Reviewed-by: Jakub Kicinski

---

v20:
- Removed dma_addr field in dmabuf_genpool_chunk_owner not used in this patch (moved to later patch where it's used).

v19:
- Don't reset dma_addr on allocation/free (Jakub)

v17:
- Don't acquire a binding ref for every allocation (Jakub).

v11:
- Fix extraneous inline directive (Paolo)

v8:
- Rename netdev_dmabuf_binding -> net_devmem_dmabuf_binding to avoid patch-by-patch build error.
- Move niov->pp_magic/pp/pp_ref_counter usage to later patch to avoid patch-by-patch build error.

v7:
- netdev_ -> net_devmem_* naming (Yunsheng).

v6:
- Add comment on net_iov_dma_addr to explain why we don't use niov->dma_addr (Pavel)
- Refactor new functions into net/core/devmem.c (Pavel)

v1:
- Rename devmem -> dmabuf (David).
--- include/net/devmem.h | 13 +++++++++++++ include/net/netmem.h | 17 +++++++++++++++++ net/core/devmem.c | 38 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 68 insertions(+) diff --git a/include/net/devmem.h b/include/net/devmem.h index 02925d897aa6..7b856a3540a9 100644 --- a/include/net/devmem.h +++ b/include/net/devmem.h @@ -72,7 +72,20 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding); int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, struct net_devmem_dmabuf_binding *binding); void dev_dmabuf_uninstall(struct net_device *dev); +struct net_iov * +net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding); +void net_devmem_free_dmabuf(struct net_iov *ppiov); #else +static inline struct net_iov * +net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding) +{ + return NULL; +} + +static inline void net_devmem_free_dmabuf(struct net_iov *ppiov) +{ +} + static inline void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding) { diff --git a/include/net/netmem.h b/include/net/netmem.h index 41e96c2f94b5..0fbc0999091a 100644 --- a/include/net/netmem.h +++ b/include/net/netmem.h @@ -16,6 +16,23 @@ struct net_iov { struct dmabuf_genpool_chunk_owner *owner; }; +static inline struct dmabuf_genpool_chunk_owner * +net_iov_owner(const struct net_iov *niov) +{ + return niov->owner; +} + +static inline unsigned int net_iov_idx(const struct net_iov *niov) +{ + return niov - net_iov_owner(niov)->niovs; +} + +static inline struct net_devmem_dmabuf_binding * +net_iov_binding(const struct net_iov *niov) +{ + return net_iov_owner(niov)->binding; +} + /* netmem */ /** diff --git a/net/core/devmem.c b/net/core/devmem.c index b2b271a921c1..d636a8b34215 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -35,6 +35,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool, kfree(owner); } +static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov) +{ + struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov); + + return owner->base_dma_addr + + ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT); +} + void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding) { size_t size, avail; @@ -57,6 +65,36 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding) kfree(binding); } +struct net_iov * +net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding) +{ + struct dmabuf_genpool_chunk_owner *owner; + unsigned long dma_addr; + struct net_iov *niov; + ssize_t offset; + ssize_t index; + + dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE, + (void **)&owner); + if (!dma_addr) + return NULL; + + offset = dma_addr - owner->base_dma_addr; + index = offset / PAGE_SIZE; + niov = &owner->niovs[index]; + + return niov; +} + +void net_devmem_free_dmabuf(struct net_iov *niov) +{ + struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov); + unsigned long dma_addr = net_devmem_get_dma_addr(niov); + + if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE)) + gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE); +} + void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding) { struct netdev_rx_queue *rxq;
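The allocator above and net_devmem_get_dma_addr() are inverses: gen_pool_alloc_owner() returns a dma_addr inside one chunk, the page-sized offset from that chunk's base_dma_addr selects the net_iov, and the free path rebuilds the dma_addr from the net_iov index. A standalone sketch of that round trip (plain C with shortened, hypothetical struct names, not the kernel types; 4K pages assumed):

	#include <assert.h>
	#include <stdint.h>

	#define PAGE_SHIFT 12			/* assumed 4K pages */
	#define PAGE_SIZE  (1UL << PAGE_SHIFT)

	struct chunk_owner {	/* stands in for dmabuf_genpool_chunk_owner */
		uint64_t base_dma_addr;
		int num_niovs;
	};

	/* dma_addr -> net_iov index, as in net_devmem_alloc_dmabuf() */
	static int niov_index(const struct chunk_owner *o, uint64_t dma_addr)
	{
		return (int)((dma_addr - o->base_dma_addr) / PAGE_SIZE);
	}

	/* net_iov index -> dma_addr, as in net_devmem_get_dma_addr() */
	static uint64_t niov_dma_addr(const struct chunk_owner *o, int idx)
	{
		return o->base_dma_addr + ((uint64_t)idx << PAGE_SHIFT);
	}

	int main(void)
	{
		struct chunk_owner o = { .base_dma_addr = 0x10000000,
					 .num_niovs = 64 };

		/* The two mappings invert each other for every niov. */
		for (int i = 0; i < o.num_niovs; i++)
			assert(niov_index(&o, niov_dma_addr(&o, i)) == i);
		return 0;
	}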
From patchwork Sun Aug 25 04:15:03 2024
Date: Sun, 25 Aug 2024 04:15:03 +0000
Message-ID: <20240825041511.324452-6-almasrymina@google.com>
Subject: [PATCH net-next v22 05/13] page_pool: devmem support
From: Mina Almasry

Convert netmem to be a union of struct page and struct net_iov. Overload the LSB of the netmem_ref to indicate that it's a net_iov, otherwise it's a page.

Currently these entries in struct page are rented by the page_pool and used exclusively by the net stack:

struct { unsigned long pp_magic; struct page_pool *pp; unsigned long _pp_mapping_pad; unsigned long dma_addr; atomic_long_t pp_ref_count; };

Mirror these (and only these) entries into struct net_iov and implement netmem helpers that can access these common fields regardless of whether the underlying type is page or net_iov.

Implement checks for net_iov in netmem helpers which delegate to mm APIs, to ensure net_iov are never passed to the mm stack.

Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov
Acked-by: Jakub Kicinski

---

v19:
- Move page_pool_set_dma_addr(_netmem) to page_pool_priv.h
- Don't reset niov dma_addr on allocation/free. Instead, it's set once when the binding happens and it never changes (Jakub)

v17:
- Rename netmem_to_pfn to netmem_pfn_trace (Jakub)
- Move some low level netmem helpers to netmem_priv.h (Jakub).

v13:
- Move NET_IOV dependent changes to this patch.
- Fixed comment (Pavel)
- Applied Reviewed-by from Pavel.

v9: https://lore.kernel.org/netdev/20240403002053.2376017-8-almasrymina@google.com/
- Remove CONFIG checks in netmem_is_net_iov() (Pavel/David/Jens)

v7:
- Remove static_branch_unlikely from netmem_to_net_iov().
We're getting better results from the fast path in bench_page_pool_simple tests without the static_branch_unlikely, and the addition of static_branch_unlikely doesn't improve performance of devmem TCP. Additionally only check netmem_to_net_iov() if CONFIG_DMA_SHARED_BUFFER is enabled, otherwise dmabuf net_iovs cannot exist anyway.

net-next base: 8 cycle fast path.
with static_branch_unlikely: 10 cycle fast path.
without static_branch_unlikely: 9 cycle fast path.
CONFIG_DMA_SHARED_BUFFER disabled: 8 cycle fast path as baseline.

Performance of devmem TCP is at 95% line rate regardless of static_branch_unlikely or not.

v6:
- Rebased on top of the merged netmem_ref type.
- Rebased on top of the merged skb_pp_frag_ref() changes.

v5:
- Use netmem instead of page* with LSB set.
- Use pp_ref_count for refcounting net_iov.
- Removed many of the custom checks for netmem.

v1:
- Disable fragmentation support for iov properly.
- fix napi_pp_put_page() path (Yunsheng).
- Use pp_frag_count for devmem refcounting.

Cc: linux-mm@kvack.org
Cc: Matthew Wilcox

--- include/net/netmem.h | 129 +++++++++++++++++++++++++++++-- include/net/page_pool/helpers.h | 39 ++-------- include/trace/events/page_pool.h | 12 +-- net/core/devmem.c | 8 ++ net/core/netmem_priv.h | 31 ++++++++ net/core/page_pool.c | 25 +++--- net/core/page_pool_priv.h | 26 +++++++ net/core/skbuff.c | 23 +++--- 8 files changed, 224 insertions(+), 69 deletions(-) create mode 100644 net/core/netmem_priv.h diff --git a/include/net/netmem.h b/include/net/netmem.h index 0fbc0999091a..284f84a312c2 100644 --- a/include/net/netmem.h +++ b/include/net/netmem.h @@ -9,13 +9,51 @@ #define _NET_NETMEM_H #include +#include /* net_iov */ +DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers); + +/* We overload the LSB of the struct page pointer to indicate that it's + * a page or net_iov. + */ +#define NET_IOV 0x01UL + struct net_iov { + unsigned long __unused_padding; + unsigned long pp_magic; + struct page_pool *pp; struct dmabuf_genpool_chunk_owner *owner; + unsigned long dma_addr; + atomic_long_t pp_ref_count; }; +/* These fields in struct page are used by the page_pool and net stack: + * + * struct { + * unsigned long pp_magic; + * struct page_pool *pp; + * unsigned long _pp_mapping_pad; + * unsigned long dma_addr; + * atomic_long_t pp_ref_count; + * }; + * + * We mirror the page_pool fields here so the page_pool can access these fields + * without worrying whether the underlying fields belong to a page or net_iov. + * + * The non-net stack fields of struct page are private to the mm stack and must + * never be mirrored to net_iov. + */ +#define NET_IOV_ASSERT_OFFSET(pg, iov) \ + static_assert(offsetof(struct page, pg) == \ + offsetof(struct net_iov, iov)) +NET_IOV_ASSERT_OFFSET(pp_magic, pp_magic); +NET_IOV_ASSERT_OFFSET(pp, pp); +NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr); +NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count); +#undef NET_IOV_ASSERT_OFFSET + static inline struct dmabuf_genpool_chunk_owner * net_iov_owner(const struct net_iov *niov) { @@ -46,20 +84,37 @@ net_iov_binding(const struct net_iov *niov) */ typedef unsigned long __bitwise netmem_ref; +static inline bool netmem_is_net_iov(const netmem_ref netmem) +{ + return (__force unsigned long)netmem & NET_IOV; +} + /* This conversion fails (returns NULL) if the netmem_ref is not struct page * backed. - * - * Currently struct page is the only possible netmem, and this helper never - * fails.
*/ static inline struct page *netmem_to_page(netmem_ref netmem) { + if (WARN_ON_ONCE(netmem_is_net_iov(netmem))) + return NULL; + return (__force struct page *)netmem; } -/* Converting from page to netmem is always safe, because a page can always be - * a netmem. - */ +static inline struct net_iov *netmem_to_net_iov(netmem_ref netmem) +{ + if (netmem_is_net_iov(netmem)) + return (struct net_iov *)((__force unsigned long)netmem & + ~NET_IOV); + + DEBUG_NET_WARN_ON_ONCE(true); + return NULL; +} + +static inline netmem_ref net_iov_to_netmem(struct net_iov *niov) +{ + return (__force netmem_ref)((unsigned long)niov | NET_IOV); +} + static inline netmem_ref page_to_netmem(struct page *page) { return (__force netmem_ref)page; @@ -67,17 +122,77 @@ static inline netmem_ref page_to_netmem(struct page *page) static inline int netmem_ref_count(netmem_ref netmem) { + /* The non-pp refcount of net_iov is always 1. On net_iov, we only + * support pp refcounting which uses the pp_ref_count field. + */ + if (netmem_is_net_iov(netmem)) + return 1; + return page_ref_count(netmem_to_page(netmem)); } -static inline unsigned long netmem_to_pfn(netmem_ref netmem) +static inline unsigned long netmem_pfn_trace(netmem_ref netmem) { + if (netmem_is_net_iov(netmem)) + return 0; + return page_to_pfn(netmem_to_page(netmem)); } +static inline struct net_iov *__netmem_clear_lsb(netmem_ref netmem) +{ + return (struct net_iov *)((__force unsigned long)netmem & ~NET_IOV); +} + +static inline struct page_pool *netmem_get_pp(netmem_ref netmem) +{ + return __netmem_clear_lsb(netmem)->pp; +} + +static inline atomic_long_t *netmem_get_pp_ref_count_ref(netmem_ref netmem) +{ + return &__netmem_clear_lsb(netmem)->pp_ref_count; +} + +static inline bool netmem_is_pref_nid(netmem_ref netmem, int pref_nid) +{ + /* Assume net_iov are on the preferred node without actually + * checking... + * + * This check is only used to check for recycling memory in the page + * pool's fast paths. Currently the only implementation of net_iov + * is dmabuf device memory. It's a deliberate decision by the user to + * bind a certain dmabuf to a certain netdev, and the netdev rx queue + * would not be able to reallocate memory from another dmabuf that + * exists on the preferred node, so, this check doesn't make much sense + * in this case. Assume all net_iovs can be recycled for now. 
+ */ + if (netmem_is_net_iov(netmem)) + return true; + + return page_to_nid(netmem_to_page(netmem)) == pref_nid; +} + static inline netmem_ref netmem_compound_head(netmem_ref netmem) { + /* niov are never compounded */ + if (netmem_is_net_iov(netmem)) + return netmem; + return page_to_netmem(compound_head(netmem_to_page(netmem))); } +static inline void *netmem_address(netmem_ref netmem) +{ + if (netmem_is_net_iov(netmem)) + return NULL; + + return page_address(netmem_to_page(netmem)); +} + +static inline unsigned long netmem_get_dma_addr(netmem_ref netmem) +{ + return __netmem_clear_lsb(netmem)->dma_addr; +} + #endif /* _NET_NETMEM_H */ diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 2b43a893c619..793e6fd78bc5 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -216,7 +216,7 @@ page_pool_get_dma_dir(const struct page_pool *pool) static inline void page_pool_fragment_netmem(netmem_ref netmem, long nr) { - atomic_long_set(&netmem_to_page(netmem)->pp_ref_count, nr); + atomic_long_set(netmem_get_pp_ref_count_ref(netmem), nr); } /** @@ -244,7 +244,7 @@ static inline void page_pool_fragment_page(struct page *page, long nr) static inline long page_pool_unref_netmem(netmem_ref netmem, long nr) { - struct page *page = netmem_to_page(netmem); + atomic_long_t *pp_ref_count = netmem_get_pp_ref_count_ref(netmem); long ret; /* If nr == pp_ref_count then we have cleared all remaining @@ -261,19 +261,19 @@ static inline long page_pool_unref_netmem(netmem_ref netmem, long nr) * initially, and only overwrite it when the page is partitioned into * more than one piece. */ - if (atomic_long_read(&page->pp_ref_count) == nr) { + if (atomic_long_read(pp_ref_count) == nr) { /* As we have ensured nr is always one for constant case using * the BUILD_BUG_ON(), only need to handle the non-constant case * here for pp_ref_count draining, which is a rare case. */ BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1); if (!__builtin_constant_p(nr)) - atomic_long_set(&page->pp_ref_count, 1); + atomic_long_set(pp_ref_count, 1); return 0; } - ret = atomic_long_sub_return(nr, &page->pp_ref_count); + ret = atomic_long_sub_return(nr, pp_ref_count); WARN_ON(ret < 0); /* We are the last user here too, reset pp_ref_count back to 1 to @@ -282,7 +282,7 @@ static inline long page_pool_unref_netmem(netmem_ref netmem, long nr) * page_pool_unref_page() currently. */ if (unlikely(!ret)) - atomic_long_set(&page->pp_ref_count, 1); + atomic_long_set(pp_ref_count, 1); return ret; } @@ -401,9 +401,7 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va, static inline dma_addr_t page_pool_get_dma_addr_netmem(netmem_ref netmem) { - struct page *page = netmem_to_page(netmem); - - dma_addr_t ret = page->dma_addr; + dma_addr_t ret = netmem_get_dma_addr(netmem); if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) ret <<= PAGE_SHIFT; @@ -423,24 +421,6 @@ static inline dma_addr_t page_pool_get_dma_addr(const struct page *page) return page_pool_get_dma_addr_netmem(page_to_netmem((struct page *)page)); } -static inline bool page_pool_set_dma_addr_netmem(netmem_ref netmem, - dma_addr_t addr) -{ - struct page *page = netmem_to_page(netmem); - - if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { - page->dma_addr = addr >> PAGE_SHIFT; - - /* We assume page alignment to shave off bottom bits, - * if this "compression" doesn't work we need to drop. 
- */ - return addr != (dma_addr_t)page->dma_addr << PAGE_SHIFT; - } - - page->dma_addr = addr; - return false; -} - /** * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW * @pool: &page_pool the @page belongs to @@ -463,11 +443,6 @@ static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool, page_pool_get_dma_dir(pool)); } -static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) -{ - return page_pool_set_dma_addr_netmem(page_to_netmem(page), addr); -} - static inline bool page_pool_put(struct page_pool *pool) { return refcount_dec_and_test(&pool->user_cnt); diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h index 543e54e432a1..31825ed30032 100644 --- a/include/trace/events/page_pool.h +++ b/include/trace/events/page_pool.h @@ -57,12 +57,12 @@ TRACE_EVENT(page_pool_state_release, __entry->pool = pool; __entry->netmem = (__force unsigned long)netmem; __entry->release = release; - __entry->pfn = netmem_to_pfn(netmem); + __entry->pfn = netmem_pfn_trace(netmem); ), - TP_printk("page_pool=%p netmem=%p pfn=0x%lx release=%u", + TP_printk("page_pool=%p netmem=%p is_net_iov=%lu pfn=0x%lx release=%u", __entry->pool, (void *)__entry->netmem, - __entry->pfn, __entry->release) + __entry->netmem & NET_IOV, __entry->pfn, __entry->release) ); TRACE_EVENT(page_pool_state_hold, @@ -83,12 +83,12 @@ TRACE_EVENT(page_pool_state_hold, __entry->pool = pool; __entry->netmem = (__force unsigned long)netmem; __entry->hold = hold; - __entry->pfn = netmem_to_pfn(netmem); + __entry->pfn = netmem_pfn_trace(netmem); ), - TP_printk("page_pool=%p netmem=%p pfn=0x%lx hold=%u", + TP_printk("page_pool=%p netmem=%p is_net_iov=%lu, pfn=0x%lx hold=%u", __entry->pool, (void *)__entry->netmem, - __entry->pfn, __entry->hold) + __entry->netmem & NET_IOV, __entry->pfn, __entry->hold) ); TRACE_EVENT(page_pool_update_nid, diff --git a/net/core/devmem.c b/net/core/devmem.c index d636a8b34215..8cc727bbd137 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -19,6 +19,8 @@ #include #include +#include "page_pool_priv.h" + /* Device memory support */ #if defined(CONFIG_DMA_SHARED_BUFFER) && defined(CONFIG_GENERIC_ALLOCATOR) @@ -83,6 +85,10 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding) index = offset / PAGE_SIZE; niov = &owner->niovs[index]; + niov->pp_magic = 0; + niov->pp = NULL; + atomic_long_set(&niov->pp_ref_count, 0); + return niov; } @@ -263,6 +269,8 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, for (i = 0; i < owner->num_niovs; i++) { niov = &owner->niovs[i]; niov->owner = owner; + page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov), + net_devmem_get_dma_addr(niov)); } virtual += len; diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h new file mode 100644 index 000000000000..7eadb8393e00 --- /dev/null +++ b/net/core/netmem_priv.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __NETMEM_PRIV_H +#define __NETMEM_PRIV_H + +static inline unsigned long netmem_get_pp_magic(netmem_ref netmem) +{ + return __netmem_clear_lsb(netmem)->pp_magic; +} + +static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic) +{ + __netmem_clear_lsb(netmem)->pp_magic |= pp_magic; +} + +static inline void netmem_clear_pp_magic(netmem_ref netmem) +{ + __netmem_clear_lsb(netmem)->pp_magic = 0; +} + +static inline void netmem_set_pp(netmem_ref netmem, struct page_pool *pool) +{ + __netmem_clear_lsb(netmem)->pp = pool; +} + +static inline void 
netmem_set_dma_addr(netmem_ref netmem, + unsigned long dma_addr) +{ + __netmem_clear_lsb(netmem)->dma_addr = dma_addr; +} +#endif diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 2abe6e919224..13277f05aebd 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -25,6 +25,9 @@ #include #include "page_pool_priv.h" +#include "netmem_priv.h" + +DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers); #define DEFER_TIME (msecs_to_jiffies(1000)) #define DEFER_WARN_INTERVAL (60 * HZ) @@ -358,7 +361,7 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool) if (unlikely(!netmem)) break; - if (likely(page_to_nid(netmem_to_page(netmem)) == pref_nid)) { + if (likely(netmem_is_pref_nid(netmem, pref_nid))) { pool->alloc.cache[pool->alloc.count++] = netmem; } else { /* NUMA mismatch; @@ -454,10 +457,8 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem) static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem) { - struct page *page = netmem_to_page(netmem); - - page->pp = pool; - page->pp_magic |= PP_SIGNATURE; + netmem_set_pp(netmem, pool); + netmem_or_pp_magic(netmem, PP_SIGNATURE); /* Ensuring all pages have been split into one fragment initially: * page_pool_set_pp_info() is only called once for every page when it @@ -472,10 +473,8 @@ static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem) static void page_pool_clear_pp_info(netmem_ref netmem) { - struct page *page = netmem_to_page(netmem); - - page->pp_magic = 0; - page->pp = NULL; + netmem_clear_pp_magic(netmem); + netmem_set_pp(netmem, NULL); } static struct page *__page_pool_alloc_page_order(struct page_pool *pool, @@ -692,8 +691,9 @@ static bool page_pool_recycle_in_cache(netmem_ref netmem, static bool __page_pool_page_can_be_recycled(netmem_ref netmem) { - return page_ref_count(netmem_to_page(netmem)) == 1 && - !page_is_pfmemalloc(netmem_to_page(netmem)); + return netmem_is_net_iov(netmem) || + (page_ref_count(netmem_to_page(netmem)) == 1 && + !page_is_pfmemalloc(netmem_to_page(netmem))); } /* If the page refcnt == 1, this will try to recycle the page. @@ -728,6 +728,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem, /* Page found as candidate for recycling */ return netmem; } + /* Fallback/non-XDP mode: API user have elevated refcnt. 
* * Many drivers split up the page into fragments, and some @@ -949,7 +950,7 @@ static void page_pool_empty_ring(struct page_pool *pool) /* Empty recycle ring */ while ((netmem = (__force netmem_ref)ptr_ring_consume_bh(&pool->ring))) { /* Verify the refcnt invariant of cached pages */ - if (!(page_ref_count(netmem_to_page(netmem)) == 1)) + if (!(netmem_ref_count(netmem) == 1)) pr_crit("%s() page_pool refcnt %d violation\n", __func__, netmem_ref_count(netmem)); diff --git a/net/core/page_pool_priv.h b/net/core/page_pool_priv.h index 90665d40f1eb..2142caeddb7c 100644 --- a/net/core/page_pool_priv.h +++ b/net/core/page_pool_priv.h @@ -3,10 +3,36 @@ #ifndef __PAGE_POOL_PRIV_H #define __PAGE_POOL_PRIV_H +#include + +#include "netmem_priv.h" + s32 page_pool_inflight(const struct page_pool *pool, bool strict); int page_pool_list(struct page_pool *pool); void page_pool_detached(struct page_pool *pool); void page_pool_unlist(struct page_pool *pool); +static inline bool page_pool_set_dma_addr_netmem(netmem_ref netmem, + dma_addr_t addr) +{ + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { + netmem_set_dma_addr(netmem, addr >> PAGE_SHIFT); + + /* We assume page alignment to shave off bottom bits, + * if this "compression" doesn't work we need to drop. + */ + return addr != (dma_addr_t)netmem_get_dma_addr(netmem) + << PAGE_SHIFT; + } + + netmem_set_dma_addr(netmem, addr); + return false; +} + +static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) +{ + return page_pool_set_dma_addr_netmem(page_to_netmem(page), addr); +} + #endif diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 1748673e1fe0..e418da9e2379 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -89,6 +89,7 @@ #include "dev.h" #include "sock_destructor.h" +#include "netmem_priv.h" #ifdef CONFIG_SKB_EXTENSIONS static struct kmem_cache *skbuff_ext_cache __ro_after_init; @@ -920,9 +921,9 @@ static void skb_clone_fraglist(struct sk_buff *skb) skb_get(list); } -static bool is_pp_page(struct page *page) +static bool is_pp_netmem(netmem_ref netmem) { - return (page->pp_magic & ~0x3UL) == PP_SIGNATURE; + return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE; } int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb, @@ -1020,9 +1021,7 @@ EXPORT_SYMBOL(skb_cow_data_for_xdp); #if IS_ENABLED(CONFIG_PAGE_POOL) bool napi_pp_put_page(netmem_ref netmem) { - struct page *page = netmem_to_page(netmem); - - page = compound_head(page); + netmem = netmem_compound_head(netmem); /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation * in order to preserve any existing bits, such as bit 0 for the @@ -1031,10 +1030,10 @@ bool napi_pp_put_page(netmem_ref netmem) * and page_is_pfmemalloc() is checked in __page_pool_put_page() * to avoid recycling the pfmemalloc page. 
*/ - if (unlikely(!is_pp_page(page))) + if (unlikely(!is_pp_netmem(netmem))) return false; - page_pool_put_full_netmem(page->pp, page_to_netmem(page), false); + page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false); return true; } @@ -1061,7 +1060,7 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data) static int skb_pp_frag_ref(struct sk_buff *skb) { struct skb_shared_info *shinfo; - struct page *head_page; + netmem_ref head_netmem; int i; if (!skb->pp_recycle) @@ -1070,11 +1069,11 @@ static int skb_pp_frag_ref(struct sk_buff *skb) shinfo = skb_shinfo(skb); for (i = 0; i < shinfo->nr_frags; i++) { - head_page = compound_head(skb_frag_page(&shinfo->frags[i])); - if (likely(is_pp_page(head_page))) - page_pool_ref_page(head_page); + head_netmem = netmem_compound_head(shinfo->frags[i].netmem); + if (likely(is_pp_netmem(head_netmem))) + page_pool_ref_netmem(head_netmem); else - page_ref_inc(head_page); + page_ref_inc(netmem_to_page(head_netmem)); } return 0; }
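The mechanism this whole patch leans on: struct page and struct net_iov are both word-aligned, so bit 0 of a pointer to either is free to carry the type; the kernel additionally pads struct net_iov so the mirrored pp fields sit at identical offsets (the NET_IOV_ASSERT_OFFSET static_asserts above), letting __netmem_clear_lsb() reach pp, pp_magic, dma_addr and pp_ref_count without branching on the type. A standalone model of just the netmem_ref encoding (plain C, not the kernel headers; the structs are deliberately simplified):

	#include <assert.h>
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>

	#define NET_IOV 0x01UL

	typedef uintptr_t netmem_ref;	/* models the opaque netmem_ref */

	struct page    { unsigned long pp_magic; /* ... */ };
	struct net_iov { unsigned long pp_magic; /* ... */ };

	static bool netmem_is_net_iov(netmem_ref nm)
	{
		return nm & NET_IOV;
	}

	/* A page pointer is word-aligned, so its LSB is already 0. */
	static netmem_ref page_to_netmem(struct page *p)
	{
		return (netmem_ref)p;
	}

	/* Tag bit 0 to mark the reference as a net_iov. */
	static netmem_ref net_iov_to_netmem(struct net_iov *iov)
	{
		return (netmem_ref)iov | NET_IOV;
	}

	/* Clearing the tag recovers the original pointer. */
	static struct net_iov *netmem_to_net_iov(netmem_ref nm)
	{
		return netmem_is_net_iov(nm) ?
		       (struct net_iov *)(nm & ~NET_IOV) : NULL;
	}

	int main(void)
	{
		static struct page pg;
		static struct net_iov iov;

		assert(!netmem_is_net_iov(page_to_netmem(&pg)));
		assert(netmem_to_net_iov(net_iov_to_netmem(&iov)) == &iov);
		return 0;
	}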
From patchwork Sun Aug 25 04:15:04 2024
Date: Sun, 25 Aug 2024 04:15:04 +0000
Message-ID: <20240825041511.324452-7-almasrymina@google.com>
Subject: [PATCH net-next v22 06/13] memory-provider: dmabuf devmem memory provider
From: Mina Almasry

Implement a memory provider that allocates dmabuf devmem in the form of net_iov. The provider receives a reference to the struct netdev_dmabuf_binding via the pool->mp_priv pointer. The driver needs to set this pointer for the provider in the net_iov.
The provider obtains a reference on the netdev_dmabuf_binding which guarantees the binding and the underlying mapping remain alive until the provider is destroyed.

Usage of PP_FLAG_DMA_MAP is required for this memory provider so that the page_pool can provide the driver with the dma-addrs of the devmem. Support for PP_FLAG_DMA_SYNC_DEV is omitted for simplicity, as is support for p.order != 0.

Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov

---

v21:
- Provide empty definitions of functions moved to page_pool_priv.h, so that the build still succeeds when CONFIG_PAGE_POOL is not set.

v20:
- Moved queue pp_params field from fast path entries to slow path entries.
- Moved page_pool_check_memory_provider() call to inside netdev_rx_queue_restart (Pavel).
- Removed binding arg to page_pool_check_memory_provider() (Pavel).
- Removed unnecessary includes from page_pool.c
- Removed EXPORT_SYMBOL(page_pool_mem_providers) (Jakub)
- Check pool->slow.queue instead of walking binding xarray (Pavel & Jakub).

v19:
- Add PP_FLAG_ALLOW_UNREADABLE_NETMEM flag. It serves 2 purposes, (a) it guards drivers that don't support unreadable netmem (net_iov backed) from accidentally getting exposed to it, and (b) drivers that wish to create header pools can unset it for that pool to force readable netmem.
- Add page_pool_check_memory_provider, which verifies that the driver has created a page_pool with the expected configuration. This is used to report to the user if the mp configuration succeeded, and also verify that the driver is doing the right thing.
- Don't reset niov->dma_addr on allocation/free.

v17:
- Use ASSERT_RTNL (Jakub)

v16:
- Add DEBUG_NET_WARN_ON_ONCE(!rtnl_is_locked()), to catch cases if page_pool_init without rtnl_locking when the queue is provided. In this case, the queue configuration may be changed while we're initing the page_pool, which could be a race.

v13:
- Return on warning (Pavel).
- Fixed pool->recycle_stats not being freed on error (Pavel).
- Applied reviewed-by from Pavel.

v11:
- Rebase to not use the ops. (Christoph)

v8:
- Use skb_frag_size instead of frag->bv_len to fix patch-by-patch build error

v6:
- refactor new memory provider functions into net/core/devmem.c (Pavel)

v2:
- Disable devmem for p.order != 0

v1:
- static_branch check in page_is_page_pool_iov() (Willem & Paolo).
- PP_DEVMEM -> PP_IOV (David).
- Require PP_FLAG_DMA_MAP (Jakub).

build fixes

--- include/net/mp_dmabuf_devmem.h | 44 +++++++++++++++ include/net/page_pool/types.h | 16 +++++- net/core/devmem.c | 66 ++++++++++++++++++++++ net/core/netdev_rx_queue.c | 7 +++ net/core/page_pool.c | 100 ++++++++++++++++++++++++--------- net/core/page_pool_priv.h | 20 +++++++ net/core/page_pool_user.c | 25 +++++++++ 7 files changed, 250 insertions(+), 28 deletions(-) create mode 100644 include/net/mp_dmabuf_devmem.h diff --git a/include/net/mp_dmabuf_devmem.h b/include/net/mp_dmabuf_devmem.h new file mode 100644 index 000000000000..300a2356eed0 --- /dev/null +++ b/include/net/mp_dmabuf_devmem.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Dmabuf device memory provider.
+ * + * Authors: Mina Almasry + * + */ +#ifndef _NET_MP_DMABUF_DEVMEM_H +#define _NET_MP_DMABUF_DEVMEM_H + +#include + +#if defined(CONFIG_DMA_SHARED_BUFFER) && defined(CONFIG_GENERIC_ALLOCATOR) +int mp_dmabuf_devmem_init(struct page_pool *pool); + +netmem_ref mp_dmabuf_devmem_alloc_netmems(struct page_pool *pool, gfp_t gfp); + +void mp_dmabuf_devmem_destroy(struct page_pool *pool); + +bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem); +#else +static inline int mp_dmabuf_devmem_init(struct page_pool *pool) +{ + return -EOPNOTSUPP; +} + +static inline netmem_ref mp_dmabuf_devmem_alloc_netmems(struct page_pool *pool, + gfp_t gfp) +{ + return 0; +} + +static inline void mp_dmabuf_devmem_destroy(struct page_pool *pool) +{ +} + +static inline bool mp_dmabuf_devmem_release_page(struct page_pool *pool, + netmem_ref netmem) +{ + return false; +} +#endif + +#endif /* _NET_MP_DMABUF_DEVMEM_H */ diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 4afd6dd56351..1b4698710f25 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -20,8 +20,17 @@ * device driver responsibility */ #define PP_FLAG_SYSTEM_POOL BIT(2) /* Global system page_pool */ +#define PP_FLAG_ALLOW_UNREADABLE_NETMEM BIT(3) /* Allow unreadable (net_iov + * backed) netmem in this + * page_pool. Drivers setting + * this must be able to support + * unreadable netmem, where + * netmem_address() would return + * NULL. This flag should not be + * set for header page_pools. + */ #define PP_FLAG_ALL (PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | \ - PP_FLAG_SYSTEM_POOL) + PP_FLAG_SYSTEM_POOL | PP_FLAG_ALLOW_UNREADABLE_NETMEM) /* * Fast allocation side cache array/stack @@ -57,7 +66,9 @@ struct pp_alloc_cache { * @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV * @slow: params with slowpath access only (initialization and Netlink) * @netdev: netdev this pool will serve (leave as NULL if none or multiple) - * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV, PP_FLAG_SYSTEM_POOL + * @queue: struct netdev_rx_queue this page_pool is being created for. + * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV, PP_FLAG_SYSTEM_POOL, + * PP_FLAG_ALLOW_UNREADABLE_NETMEM. 
*/ struct page_pool_params { struct_group_tagged(page_pool_params_fast, fast, @@ -72,6 +83,7 @@ struct page_pool_params { ); struct_group_tagged(page_pool_params_slow, slow, struct net_device *netdev; + struct netdev_rx_queue *queue; unsigned int flags; /* private: used by test code only */ void (*init_callback)(netmem_ref netmem, void *arg); diff --git a/net/core/devmem.c b/net/core/devmem.c index 8cc727bbd137..fb088ebb945a 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include "page_pool_priv.h" @@ -313,4 +314,69 @@ void dev_dmabuf_uninstall(struct net_device *dev) xa_erase(&binding->bound_rxqs, xa_idx); } } + +/*** "Dmabuf devmem memory provider" ***/ + +int mp_dmabuf_devmem_init(struct page_pool *pool) +{ + struct net_devmem_dmabuf_binding *binding = pool->mp_priv; + + if (!binding) + return -EINVAL; + + if (!pool->dma_map) + return -EOPNOTSUPP; + + if (pool->dma_sync) + return -EOPNOTSUPP; + + if (pool->p.order != 0) + return -E2BIG; + + net_devmem_dmabuf_binding_get(binding); + return 0; +} + +netmem_ref mp_dmabuf_devmem_alloc_netmems(struct page_pool *pool, gfp_t gfp) +{ + struct net_devmem_dmabuf_binding *binding = pool->mp_priv; + netmem_ref netmem; + struct net_iov *niov; + + niov = net_devmem_alloc_dmabuf(binding); + if (!niov) + return 0; + + netmem = net_iov_to_netmem(niov); + + page_pool_set_pp_info(pool, netmem); + + pool->pages_state_hold_cnt++; + trace_page_pool_state_hold(pool, netmem, pool->pages_state_hold_cnt); + return netmem; +} + +void mp_dmabuf_devmem_destroy(struct page_pool *pool) +{ + struct net_devmem_dmabuf_binding *binding = pool->mp_priv; + + net_devmem_dmabuf_binding_put(binding); +} + +bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem) +{ + if (WARN_ON_ONCE(!netmem_is_net_iov(netmem))) + return false; + + if (WARN_ON_ONCE(atomic_long_read(netmem_get_pp_ref_count_ref(netmem)) != + 1)) + return false; + + page_pool_clear_pp_info(netmem); + + net_devmem_free_dmabuf(netmem_to_net_iov(netmem)); + + /* We don't want the page pool put_page()ing our net_iovs. 
*/ + return false; +} #endif diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c index da11720a5983..e217a5838c87 100644 --- a/net/core/netdev_rx_queue.c +++ b/net/core/netdev_rx_queue.c @@ -4,8 +4,11 @@ #include #include +#include "page_pool_priv.h" + int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx) { + struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx); void *new_mem, *old_mem; int err; @@ -31,6 +34,10 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx) if (err) goto err_free_old_mem; + err = page_pool_check_memory_provider(dev, rxq); + if (err) + goto err_free_new_queue_mem; + err = dev->queue_mgmt_ops->ndo_queue_stop(dev, old_mem, rxq_idx); if (err) goto err_free_new_queue_mem; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 13277f05aebd..267c06dadf4a 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -13,6 +13,8 @@ #include #include +#include +#include #include #include @@ -190,6 +192,7 @@ static int page_pool_init(struct page_pool *pool, int cpuid) { unsigned int ring_qsize = 1024; /* Default */ + int err; page_pool_struct_check(); @@ -271,7 +274,36 @@ static int page_pool_init(struct page_pool *pool, if (pool->dma_map) get_device(pool->p.dev); + if (pool->slow.queue && + pool->slow.flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) { + /* We rely on rtnl_lock()ing to make sure netdev_rx_queue + * configuration doesn't change while we're initializing the + * page_pool. + */ + ASSERT_RTNL(); + pool->mp_priv = pool->slow.queue->mp_params.mp_priv; + } + + if (pool->mp_priv) { + err = mp_dmabuf_devmem_init(pool); + if (err) { + pr_warn("%s() mem-provider init failed %d\n", __func__, + err); + goto free_ptr_ring; + } + + static_branch_inc(&page_pool_mem_providers); + } + return 0; + +free_ptr_ring: + ptr_ring_cleanup(&pool->ring, NULL); +#ifdef CONFIG_PAGE_POOL_STATS + if (!pool->system) + free_percpu(pool->recycle_stats); +#endif + return err; } static void page_pool_uninit(struct page_pool *pool) @@ -455,28 +487,6 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem) return false; } -static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem) -{ - netmem_set_pp(netmem, pool); - netmem_or_pp_magic(netmem, PP_SIGNATURE); - - /* Ensuring all pages have been split into one fragment initially: - * page_pool_set_pp_info() is only called once for every page when it - * is allocated from the page allocator and page_pool_fragment_page() - * is dirtying the same cache line as the page->pp_magic above, so - * the overhead is negligible. 
- */ - page_pool_fragment_netmem(netmem, 1); - if (pool->has_init_callback) - pool->slow.init_callback(netmem, pool->slow.init_arg); -} - -static void page_pool_clear_pp_info(netmem_ref netmem) -{ - netmem_clear_pp_magic(netmem); - netmem_set_pp(netmem, NULL); -} - static struct page *__page_pool_alloc_page_order(struct page_pool *pool, gfp_t gfp) { @@ -572,7 +582,10 @@ netmem_ref page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp) return netmem; /* Slow-path: cache empty, do real allocation */ - netmem = __page_pool_alloc_pages_slow(pool, gfp); + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv) + netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp); + else + netmem = __page_pool_alloc_pages_slow(pool, gfp); return netmem; } EXPORT_SYMBOL(page_pool_alloc_netmem); @@ -608,6 +621,28 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict) return inflight; } +void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem) +{ + netmem_set_pp(netmem, pool); + netmem_or_pp_magic(netmem, PP_SIGNATURE); + + /* Ensuring all pages have been split into one fragment initially: + * page_pool_set_pp_info() is only called once for every page when it + * is allocated from the page allocator and page_pool_fragment_page() + * is dirtying the same cache line as the page->pp_magic above, so + * the overhead is negligible. + */ + page_pool_fragment_netmem(netmem, 1); + if (pool->has_init_callback) + pool->slow.init_callback(netmem, pool->slow.init_arg); +} + +void page_pool_clear_pp_info(netmem_ref netmem) +{ + netmem_clear_pp_magic(netmem); + netmem_set_pp(netmem, NULL); +} + static __always_inline void __page_pool_release_page_dma(struct page_pool *pool, netmem_ref netmem) { @@ -636,8 +671,13 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool, void page_pool_return_page(struct page_pool *pool, netmem_ref netmem) { int count; + bool put; - __page_pool_release_page_dma(pool, netmem); + put = true; + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv) + put = mp_dmabuf_devmem_release_page(pool, netmem); + else + __page_pool_release_page_dma(pool, netmem); /* This may be the last page returned, releasing the pool, so * it is not safe to reference pool afterwards. @@ -645,8 +685,10 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem) count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt); trace_page_pool_state_release(pool, netmem, count); - page_pool_clear_pp_info(netmem); - put_page(netmem_to_page(netmem)); + if (put) { + page_pool_clear_pp_info(netmem); + put_page(netmem_to_page(netmem)); + } /* An optimization would be to call __free_pages(page, pool->p.order) * knowing page is not part of page-cache (thus avoiding a * __page_cache_release() call). 
@@ -965,6 +1007,12 @@ static void __page_pool_destroy(struct page_pool *pool) page_pool_unlist(pool); page_pool_uninit(pool); + + if (pool->mp_priv) { + mp_dmabuf_devmem_destroy(pool); + static_branch_dec(&page_pool_mem_providers); + } + kfree(pool); } diff --git a/net/core/page_pool_priv.h b/net/core/page_pool_priv.h index 2142caeddb7c..f90171dc477c 100644 --- a/net/core/page_pool_priv.h +++ b/net/core/page_pool_priv.h @@ -35,4 +35,24 @@ static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) return page_pool_set_dma_addr_netmem(page_to_netmem(page), addr); } +#if defined(CONFIG_PAGE_POOL) +void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem); +void page_pool_clear_pp_info(netmem_ref netmem); +int page_pool_check_memory_provider(struct net_device *dev, + struct netdev_rx_queue *rxq); +#else +static inline void page_pool_set_pp_info(struct page_pool *pool, + netmem_ref netmem) +{ +} +static inline void page_pool_clear_pp_info(netmem_ref netmem) +{ +} +static inline int page_pool_check_memory_provider(struct net_device *dev, + struct netdev_rx_queue *rxq) +{ + return 0; +} +#endif + #endif diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c index 3a3277ba167b..9b69066cc07e 100644 --- a/net/core/page_pool_user.c +++ b/net/core/page_pool_user.c @@ -7,6 +7,7 @@ #include #include #include +#include #include "page_pool_priv.h" #include "netdev-genl-gen.h" @@ -344,6 +345,30 @@ void page_pool_unlist(struct page_pool *pool) mutex_unlock(&page_pools_lock); } +int page_pool_check_memory_provider(struct net_device *dev, + struct netdev_rx_queue *rxq) +{ + struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv; + struct page_pool *pool; + struct hlist_node *n; + + if (!binding) + return 0; + + mutex_lock(&page_pools_lock); + hlist_for_each_entry_safe(pool, n, &dev->page_pools, user.list) { + if (pool->mp_priv != binding) + continue; + + if (pool->slow.queue == rxq) { + mutex_unlock(&page_pools_lock); + return 0; + } + } + mutex_unlock(&page_pools_lock); + return -ENODATA; +} + static void page_pool_unreg_netdev_wipe(struct net_device *netdev) { struct page_pool *pool;
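Taken together, the provider hooks form a simple contract around the pool lifetime: mp_dmabuf_devmem_init() takes one binding reference that mp_dmabuf_devmem_destroy() drops, and every mp_dmabuf_devmem_alloc_netmems() must be balanced by a release before the pool is torn down. A toy model of that pairing (plain C; names and structs are hypothetical simplifications, not kernel API):

	#include <assert.h>

	struct binding { int refs; };	/* toy net_devmem_dmabuf_binding */
	struct pool    { struct binding *b; int held; };

	static int mp_init(struct pool *p, struct binding *b)
	{
		if (!b)
			return -1;	/* mirrors -EINVAL in mp_dmabuf_devmem_init() */
		p->b = b;
		b->refs++;		/* net_devmem_dmabuf_binding_get() */
		return 0;
	}

	static void mp_alloc(struct pool *p)
	{
		p->held++;		/* pages_state_hold_cnt++ in alloc_netmems */
	}

	static void mp_release(struct pool *p)
	{
		p->held--;		/* release_page returns the net_iov */
	}

	static void mp_destroy(struct pool *p)
	{
		assert(p->held == 0);	/* all netmem must be returned first */
		p->b->refs--;		/* net_devmem_dmabuf_binding_put() */
	}

	int main(void)
	{
		struct binding b = { .refs = 1 };  /* ref held by the netlink bind */
		struct pool p = { 0 };

		assert(mp_init(&p, &b) == 0);
		mp_alloc(&p);
		mp_release(&p);
		mp_destroy(&p);
		assert(b.refs == 1);	/* the binding outlives the pool */
		return 0;
	}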
From patchwork Sun Aug 25 04:15:05 2024
Date: Sun, 25 Aug 2024 04:15:05 +0000
Message-ID: <20240825041511.324452-8-almasrymina@google.com>
Subject: [PATCH net-next v22 07/13] net: support non paged skb frags
From: Mina Almasry
Kicinski , "David S. Miller" , Eric Dumazet , Paolo Abeni , Jonathan Corbet , Richard Henderson , Ivan Kokshaysky , Matt Turner , Thomas Bogendoerfer , "James E.J. Bottomley" , Helge Deller , Andreas Larsson , Jesper Dangaard Brouer , Ilias Apalodimas , Steven Rostedt , Masami Hiramatsu , Mathieu Desnoyers , Arnd Bergmann , Steffen Klassert , Herbert Xu , David Ahern , Willem de Bruijn , " =?utf-8?b?QmrDtnJu?= =?utf-8?b?IFTDtnBlbA==?= " , Magnus Karlsson , Maciej Fijalkowski , Jonathan Lemon , Shuah Khan , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Pavel Begunkov , David Wei , Jason Gunthorpe , Yunsheng Lin , Shailend Chand , Harshitha Ramamurthy , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , Bagas Sanjaya , Christoph Hellwig , Nikolay Aleksandrov , Taehee Yoo Make skb_frag_page() fail in the case where the frag is not backed by a page, and fix its relevant callers to handle this case. Signed-off-by: Mina Almasry Reviewed-by: Eric Dumazet --- v10: - Fixed newly generated kdoc warnings found by patchwork. While we're at it, fix the Return section of the functions I touched. v6: - Rebased on top of the merged netmem changes. Changes in v1: - Fix illegal_highdma() (Yunsheng). - Rework napi_pp_put_page() slightly to reduce code churn (Willem). --- include/linux/skbuff.h | 42 +++++++++++++++++++++++++++++++++++++- include/linux/skbuff_ref.h | 9 ++++---- net/core/dev.c | 3 ++- net/core/gro.c | 3 ++- net/core/skbuff.c | 11 ++++++++++ net/ipv4/esp4.c | 3 ++- net/ipv4/tcp.c | 3 +++ net/ipv6/esp6.c | 3 ++- 8 files changed, 67 insertions(+), 10 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index cf8f6ce06742..dbadf2dd6b35 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3523,21 +3523,58 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto, fragto->offset = fragfrom->offset; } +/* Return: true if the skb_frag contains a net_iov. */ +static inline bool skb_frag_is_net_iov(const skb_frag_t *frag) +{ + return netmem_is_net_iov(frag->netmem); +} + +/** + * skb_frag_net_iov - retrieve the net_iov referred to by fragment + * @frag: the fragment + * + * Return: the &struct net_iov associated with @frag. Returns NULL if this + * frag has no associated net_iov. + */ +static inline struct net_iov *skb_frag_net_iov(const skb_frag_t *frag) +{ + if (!skb_frag_is_net_iov(frag)) + return NULL; + + return netmem_to_net_iov(frag->netmem); +} + /** * skb_frag_page - retrieve the page referred to by a paged fragment * @frag: the paged fragment * - * Returns the &struct page associated with @frag. + * Return: the &struct page associated with @frag. Returns NULL if this frag + * has no associated page. */ static inline struct page *skb_frag_page(const skb_frag_t *frag) { + if (skb_frag_is_net_iov(frag)) + return NULL; + return netmem_to_page(frag->netmem); } +/** + * skb_frag_netmem - retrieve the netmem referred to by a fragment + * @frag: the fragment + * + * Return: the &netmem_ref associated with @frag. 
+ */ +static inline netmem_ref skb_frag_netmem(const skb_frag_t *frag) +{ + return frag->netmem; +} + int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb, unsigned int headroom); int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb, struct bpf_prog *prog); + /** * skb_frag_address - gets the address of the data contained in a paged fragment * @frag: the paged fragment buffer @@ -3547,6 +3584,9 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb, */ static inline void *skb_frag_address(const skb_frag_t *frag) { + if (!skb_frag_page(frag)) + return NULL; + return page_address(skb_frag_page(frag)) + skb_frag_off(frag); } diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h index 16c241a23472..0f3c58007488 100644 --- a/include/linux/skbuff_ref.h +++ b/include/linux/skbuff_ref.h @@ -34,14 +34,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f) bool napi_pp_put_page(netmem_ref netmem); -static inline void -skb_page_unref(struct page *page, bool recycle) +static inline void skb_page_unref(netmem_ref netmem, bool recycle) { #ifdef CONFIG_PAGE_POOL - if (recycle && napi_pp_put_page(page_to_netmem(page))) + if (recycle && napi_pp_put_page(netmem)) return; #endif - put_page(page); + put_page(netmem_to_page(netmem)); } /** @@ -54,7 +53,7 @@ skb_page_unref(struct page *page, bool recycle) */ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle) { - skb_page_unref(skb_frag_page(frag), recycle); + skb_page_unref(skb_frag_netmem(frag), recycle); } /** diff --git a/net/core/dev.c b/net/core/dev.c index 2cab5f969778..8be922165737 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3432,8 +3432,9 @@ static int illegal_highdma(struct net_device *dev, struct sk_buff *skb) if (!(dev->features & NETIF_F_HIGHDMA)) { for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + struct page *page = skb_frag_page(frag); - if (PageHighMem(skb_frag_page(frag))) + if (page && PageHighMem(page)) return 1; } } diff --git a/net/core/gro.c b/net/core/gro.c index b3b43de1a650..26f09c3e830b 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -408,7 +408,8 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff) pinfo = skb_shinfo(skb); frag0 = &pinfo->frags[0]; - if (pinfo->nr_frags && !PageHighMem(skb_frag_page(frag0)) && + if (pinfo->nr_frags && skb_frag_page(frag0) && + !PageHighMem(skb_frag_page(frag0)) && (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) { NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0); NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int, diff --git a/net/core/skbuff.c b/net/core/skbuff.c index e418da9e2379..d3bcb37c9e7a 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1371,6 +1371,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) struct page *p; u8 *vaddr; + if (skb_frag_is_net_iov(frag)) { + printk("%sskb frag %d: not readable\n", level, i); + len -= skb_frag_size(frag); + if (!len) + break; + continue; + } + skb_frag_foreach_page(frag, skb_frag_off(frag), skb_frag_size(frag), p, p_off, p_len, copied) { @@ -3163,6 +3171,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg]; + if (WARN_ON_ONCE(!skb_frag_page(f))) + return false; + if (__splice_segment(skb_frag_page(f), skb_frag_off(f), skb_frag_size(f), offset, len, spd, false, sk, pipe)) diff --git a/net/ipv4/esp4.c 
b/net/ipv4/esp4.c
index 47378ca41904..f3281312eb5e 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -115,7 +115,8 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
 	 */
 	if (req->src != req->dst)
 		for (sg = sg_next(req->src); sg; sg = sg_next(sg))
-			skb_page_unref(sg_page(sg), skb->pp_recycle);
+			skb_page_unref(page_to_netmem(sg_page(sg)),
+				       skb->pp_recycle);
 }

 #ifdef CONFIG_INET_ESPINTCP

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 8514257f4ecd..20a1b0333017 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2177,6 +2177,9 @@ static int tcp_zerocopy_receive(struct sock *sk,
 			break;
 		}
 		page = skb_frag_page(frags);
+		if (WARN_ON_ONCE(!page))
+			break;
+
 		prefetchw(page);
 		pages[pages_to_map++] = page;
 		length += PAGE_SIZE;

diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 3920e8aa1031..b2400c226a32 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -132,7 +132,8 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
 	 */
 	if (req->src != req->dst)
 		for (sg = sg_next(req->src); sg; sg = sg_next(sg))
-			skb_page_unref(sg_page(sg), skb->pp_recycle);
+			skb_page_unref(page_to_netmem(sg_page(sg)),
+				       skb->pp_recycle);
 }

 #ifdef CONFIG_INET6_ESPINTCP

From patchwork Sun Aug 25 04:15:06 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776590
Date: Sun, 25 Aug 2024 04:15:06 +0000
Message-ID: <20240825041511.324452-9-almasrymina@google.com>
Subject: [PATCH net-next v22 08/13] net: add support for skbs with unreadable frags
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org

For device memory TCP, we expect the skb headers to be available in
host memory for access, and we expect the skb frags to be in device
memory and inaccessible to the host. We expect there to be no mixing
and matching of device memory frags (inaccessible) with host memory
frags (accessible) in the same skb.

Add an skb->unreadable flag which indicates whether the frags in this
skb are device memory frags or not. __skb_fill_netmem_desc() now checks
frags added to skbs for net_iov, and marks the skb as skb->unreadable
accordingly.

Add checks through the network stack to avoid accessing the frags of
devmem skbs and avoid coalescing devmem skbs with non-devmem skbs.
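To make the pattern concrete, here is a minimal sketch (illustrative
only; the helper below is hypothetical and not part of this patch) of
the caller-side discipline patches 07 and 08 establish: treat
skb_frag_page() as nullable, and gate any payload access on
skb_frags_readable():

	/* Hypothetical helper, for illustration only: walks frags while
	 * honoring the unreadable-frag rules added by this series.
	 */
	static int example_walk_frags(const struct sk_buff *skb, size_t *bytes)
	{
		int i;

		/* devmem skb: frag payload is not host-accessible at all. */
		if (!skb_frags_readable(skb))
			return -EFAULT;

		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
			const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
			struct page *page = skb_frag_page(frag);

			/* skb_frag_page() may now return NULL for net_iov
			 * frags; the defensive check mirrors the one added
			 * to __skb_splice_bits() in the previous patch.
			 */
			if (WARN_ON_ONCE(!page))
				return -EFAULT;

			*bytes += skb_frag_size(frag);
		}

		return 0;
	}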
Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Eric Dumazet

---

v16:
- Fix unreadable handling in skb_split_no_header() (Eric).

v11:
- Drop excessive checks for frag 0 pull (Paolo).

v9: https://lore.kernel.org/netdev/20240403002053.2376017-11-almasrymina@google.com/
- Change skb->readable to skb->unreadable (Pavel/David).

  skb->readable was very complicated, because by default skbs are
  readable so the flag needed to be set to true in all code paths where
  new skbs were created or cloned. Forgetting to set skb->readable=true
  in some paths caused crashes. Flip it to skb->unreadable so that the
  default 0 value works well, and we only need to set it to true when
  we add unreadable frags.

v6:
- skb->dmabuf -> skb->readable (Pavel). Pavel's original suggestion was
  to remove the skb->dmabuf flag entirely, but when I looked into it
  closely, I found the issue that if we remove the flag we have to
  dereference the shinfo(skb) pointer to obtain the first frag, which
  can cause a performance regression if it dirties the cache line when
  the shinfo(skb) was not really needed. Instead, I converted the
  skb->dmabuf flag into a generic skb->readable flag which can be
  re-used by io_uring.

Changes in v1:
- Rename devmem -> dmabuf (David).
- Flip skb_frags_not_readable (Jakub).

---
 include/linux/skbuff.h     | 19 +++++++++++++++++--
 include/net/tcp.h          |  5 +++--
 net/core/datagram.c        |  6 ++++++
 net/core/skbuff.c          | 43 ++++++++++++++++++++++++++++++++++++++++--
 net/ipv4/tcp.c             |  3 +++
 net/ipv4/tcp_input.c       | 13 ++++++++++---
 net/ipv4/tcp_output.c      |  5 ++++-
 net/packet/af_packet.c     |  4 ++--
 8 files changed, 86 insertions(+), 12 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbadf2dd6b35..d02a88bad953 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -827,6 +827,8 @@ enum skb_tstamp_type {
 *	@csum_level: indicates the number of consecutive checksums found in
 *		the packet minus one that have been verified as
 *		CHECKSUM_UNNECESSARY (max 3)
+ *	@unreadable: indicates that at least 1 of the fragments in this skb is
+ *		unreadable.
* @dst_pending_confirm: need to confirm neighbour * @decrypted: Decrypted SKB * @slow_gro: state present at GRO time, slower prepare step required @@ -1008,7 +1010,7 @@ struct sk_buff { #if IS_ENABLED(CONFIG_IP_SCTP) __u8 csum_not_inet:1; #endif - + __u8 unreadable:1; #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) __u16 tc_index; /* traffic control index */ #endif @@ -1823,6 +1825,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) __skb_zcopy_downgrade_managed(skb); } +/* Return true if frags in this skb are readable by the host. */ +static inline bool skb_frags_readable(const struct sk_buff *skb) +{ + return !skb->unreadable; +} + static inline void skb_mark_not_on_list(struct sk_buff *skb) { skb->next = NULL; @@ -2539,10 +2547,17 @@ static inline void skb_len_add(struct sk_buff *skb, int delta) static inline void __skb_fill_netmem_desc(struct sk_buff *skb, int i, netmem_ref netmem, int off, int size) { - struct page *page = netmem_to_page(netmem); + struct page *page; __skb_fill_netmem_desc_noacc(skb_shinfo(skb), i, netmem, off, size); + if (netmem_is_net_iov(netmem)) { + skb->unreadable = true; + return; + } + + page = netmem_to_page(netmem); + /* Propagate page pfmemalloc to the skb if we can. The problem is * that not all callers have unique ownership of the page but rely * on page_is_pfmemalloc doing the right thing(tm). diff --git a/include/net/tcp.h b/include/net/tcp.h index 2aac11e7e1cc..e8f6e602c2ad 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1060,7 +1060,7 @@ static inline int tcp_skb_mss(const struct sk_buff *skb) static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb) { - return likely(!TCP_SKB_CB(skb)->eor); + return likely(!TCP_SKB_CB(skb)->eor && skb_frags_readable(skb)); } static inline bool tcp_skb_can_collapse(const struct sk_buff *to, @@ -1069,7 +1069,8 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to, /* skb_cmp_decrypted() not needed, use tcp_write_collapse_fence() */ return likely(tcp_skb_can_collapse_to(to) && mptcp_skb_can_collapse(to, from) && - skb_pure_zcopy_same(to, from)); + skb_pure_zcopy_same(to, from) && + skb_frags_readable(to) == skb_frags_readable(from)); } static inline bool tcp_skb_can_collapse_rx(const struct sk_buff *to, diff --git a/net/core/datagram.c b/net/core/datagram.c index a40f733b37d7..f0693707aece 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -407,6 +407,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; } + if (!skb_frags_readable(skb)) + goto short_copy; + /* Copy paged appendix. Hmm... why does this look so complicated? 
*/ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -623,6 +626,9 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb, { int frag = skb_shinfo(skb)->nr_frags; + if (!skb_frags_readable(skb)) + return -EFAULT; + while (length && iov_iter_count(from)) { struct page *head, *last_head = NULL; struct page *pages[MAX_SKB_FRAGS]; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index d3bcb37c9e7a..65880122c73d 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1972,6 +1972,9 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) if (skb_shared(skb) || skb_unclone(skb, gfp_mask)) return -EINVAL; + if (!skb_frags_readable(skb)) + return -EFAULT; + if (!num_frags) goto release; @@ -2145,6 +2148,9 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask) unsigned int size; int headerlen; + if (!skb_frags_readable(skb)) + return NULL; + if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)) return NULL; @@ -2483,6 +2489,9 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb, struct sk_buff *n; int oldheadroom; + if (!skb_frags_readable(skb)) + return NULL; + if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)) return NULL; @@ -2827,6 +2836,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta) */ int i, k, eat = (skb->tail + delta) - skb->end; + if (!skb_frags_readable(skb)) + return NULL; + if (eat > 0 || skb_cloned(skb)) { if (pskb_expand_head(skb, 0, eat > 0 ? eat + 128 : 0, GFP_ATOMIC)) @@ -2980,6 +2992,9 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len) to += copy; } + if (!skb_frags_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *f = &skb_shinfo(skb)->frags[i]; @@ -3168,6 +3183,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, /* * then map the fragments */ + if (!skb_frags_readable(skb)) + return false; + for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg]; @@ -3391,6 +3409,9 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len) from += copy; } + if (!skb_frags_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; int end; @@ -3470,6 +3491,9 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len, pos = copy; } + if (!skb_frags_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; @@ -3570,6 +3594,9 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset, pos = copy; } + if (!skb_frags_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -4061,6 +4088,7 @@ static inline void skb_split_inside_header(struct sk_buff *skb, skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i]; skb_shinfo(skb1)->nr_frags = skb_shinfo(skb)->nr_frags; + skb1->unreadable = skb->unreadable; skb_shinfo(skb)->nr_frags = 0; skb1->data_len = skb->data_len; skb1->len += skb1->data_len; @@ -4108,6 +4136,8 @@ static inline void skb_split_no_header(struct sk_buff *skb, pos += size; } skb_shinfo(skb1)->nr_frags = k; + + skb1->unreadable = skb->unreadable; } /** @@ -4345,6 +4375,9 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data, return block_limit - abs_offset; } + if (!skb_frags_readable(st->cur_skb)) + return 0; + if (st->frag_idx == 0 && !st->frag_data) st->stepped_offset += skb_headlen(st->cur_skb); @@ -5957,7 
+5990,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from, if (to->pp_recycle != from->pp_recycle) return false; - if (len <= skb_tailroom(to)) { + if (skb_frags_readable(from) != skb_frags_readable(to)) + return false; + + if (len <= skb_tailroom(to) && skb_frags_readable(from)) { if (len) BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len)); *delta_truesize = 0; @@ -6134,6 +6170,9 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len) if (!pskb_may_pull(skb, write_len)) return -ENOMEM; + if (!skb_frags_readable(skb)) + return -EFAULT; + if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) return 0; @@ -6813,7 +6852,7 @@ void skb_condense(struct sk_buff *skb) { if (skb->data_len) { if (skb->data_len > skb->end - skb->tail || - skb_cloned(skb)) + skb_cloned(skb) || !skb_frags_readable(skb)) return; /* Nice, we can free page frag(s) right now */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 20a1b0333017..30e0aa38ba9b 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2160,6 +2160,9 @@ static int tcp_zerocopy_receive(struct sock *sk, skb = tcp_recv_skb(sk, seq, &offset); } + if (!skb_frags_readable(skb)) + break; + if (TCP_SKB_CB(skb)->has_rxtstamp) { tcp_update_recv_tstamps(skb, tss); zc->msg_flags |= TCP_CMSG_TS; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index e37488d3453f..9f314dfa1490 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5391,6 +5391,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, for (end_of_skbs = true; skb != NULL && skb != tail; skb = n) { n = tcp_skb_next(skb, list); + if (!skb_frags_readable(skb)) + goto skip_this; + /* No new bits? It is possible on ofo queue. */ if (!before(start, TCP_SKB_CB(skb)->end_seq)) { skb = tcp_collapse_one(sk, skb, list, root); @@ -5411,17 +5414,20 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, break; } - if (n && n != tail && tcp_skb_can_collapse_rx(skb, n) && + if (n && n != tail && skb_frags_readable(n) && + tcp_skb_can_collapse_rx(skb, n) && TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) { end_of_skbs = false; break; } +skip_this: /* Decided to skip this, advance start seq. 
*/ start = TCP_SKB_CB(skb)->end_seq; } if (end_of_skbs || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + !skb_frags_readable(skb)) return; __skb_queue_head_init(&tmp); @@ -5463,7 +5469,8 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, if (!skb || skb == tail || !tcp_skb_can_collapse_rx(nskb, skb) || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + !skb_frags_readable(skb)) goto end; } } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index cdd0def14427..4fd746bd4d54 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2344,7 +2344,8 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len) if (unlikely(TCP_SKB_CB(skb)->eor) || tcp_has_tx_tstamp(skb) || - !skb_pure_zcopy_same(skb, next)) + !skb_pure_zcopy_same(skb, next) || + skb_frags_readable(skb) != skb_frags_readable(next)) return false; len -= skb->len; @@ -3264,6 +3265,8 @@ static bool tcp_can_collapse(const struct sock *sk, const struct sk_buff *skb) return false; if (skb_cloned(skb)) return false; + if (!skb_frags_readable(skb)) + return false; /* Some heuristics for collapsing over SACK'd could be invented */ if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) return false; diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index 4a364cdd445e..a705ec214254 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2216,7 +2216,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev, } } - snaplen = skb->len; + snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb); res = run_filter(skb, sk, snaplen); if (!res) @@ -2336,7 +2336,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, } } - snaplen = skb->len; + snaplen = skb_frags_readable(skb) ? 
skb->len : skb_headlen(skb);

 	res = run_filter(skb, sk, snaplen);
 	if (!res)

From patchwork Sun Aug 25 04:15:07 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776592
Date: Sun, 25 Aug 2024 04:15:07 +0000
Message-ID: <20240825041511.324452-10-almasrymina@google.com>
Subject: [PATCH net-next v22 09/13] tcp: RX path for devmem TCP
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org

In tcp_recvmsg_locked(), detect if the skb being received by the user
is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM
flag - pass it to tcp_recvmsg_dmabuf() for custom handling.

tcp_recvmsg_dmabuf() copies any data in the skb header to the linear
buffer, and returns a cmsg to the user indicating the number of bytes
returned in the linear buffer.

tcp_recvmsg_dmabuf() then loops over the inaccessible devmem skb frags,
and returns to the user a dmabuf_cmsg indicating the location of the
data in the dmabuf device memory. dmabuf_cmsg contains this
information:

1. the offset into the dmabuf where the payload starts. 'frag_offset'.
2. the size of the frag. 'frag_size'.
3. an opaque token 'frag_token' to return to the kernel when the buffer
   is to be released.

The pages awaiting freeing are stored in the newly added
sk->sk_user_frags, and each page passed to userspace is get_page()'d.
This reference is dropped once the userspace indicates that it is done
reading this page. All pages are released when the socket is destroyed.
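As a sketch of the userspace side described above (assuming a socket
already bound to a dmabuf-backed RX queue; the dmabuf setup, error
handling, and the token-release step are elided - the release interface
comes in the next patch):

	/* Illustrative only: one recvmsg() pass over a devmem TCP socket.
	 * MSG_SOCK_DEVMEM, SCM_DEVMEM_LINEAR / SCM_DEVMEM_DMABUF, and
	 * struct dmabuf_cmsg are the uapi added by this patch; everything
	 * else is assumed scaffolding.
	 */
	#include <stdio.h>
	#include <sys/socket.h>
	#include <linux/uio.h>		/* struct dmabuf_cmsg */

	static void recv_devmem_once(int fd, char *linear, size_t linear_len)
	{
		char control[4096];
		struct iovec iov = { .iov_base = linear, .iov_len = linear_len };
		struct msghdr msg = {
			.msg_iov = &iov,
			.msg_iovlen = 1,
			.msg_control = control,
			.msg_controllen = sizeof(control),
		};
		struct cmsghdr *cm;

		if (recvmsg(fd, &msg, MSG_SOCK_DEVMEM) < 0)
			return;

		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
			struct dmabuf_cmsg *dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);

			if (cm->cmsg_level != SOL_SOCKET)
				continue;

			if (cm->cmsg_type == SCM_DEVMEM_LINEAR) {
				/* dc->frag_size bytes were copied into 'linear'. */
				printf("linear: %u bytes\n", dc->frag_size);
			} else if (cm->cmsg_type == SCM_DEVMEM_DMABUF) {
				/* Payload stayed in the dmabuf: read it there,
				 * then return dc->frag_token to the kernel.
				 */
				printf("dmabuf %u: %u bytes at offset %llu, token %u\n",
				       dc->dmabuf_id, dc->frag_size,
				       (unsigned long long)dc->frag_offset,
				       dc->frag_token);
			}
		}
	}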
Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov
Reviewed-by: Eric Dumazet

---

v20:
- Change `offset = 0` to `offset = offset - start` to resolve issue
  reported by Taehee.

v16:
- Fix number assignment (Arnd).

v13:
- Refactored user frags cleanup into a common function to avoid
  __maybe_unused (Pavel).
- Change to offset = 0 for some improved clarity.

v11:
- Refactor to common function to remove conditional lock sparse
  warning (Paolo).

v7:
- Updated the SO_DEVMEM_* uapi to use the next available entries (Arnd).
- Updated dmabuf_cmsg struct to be __u64 padded (Arnd).
- Squashed fix from Eric to initialize sk_user_frags for passive
  sockets (Eric).

v6:
- skb->dmabuf -> skb->readable (Pavel).
- Fixed asm definitions of SO_DEVMEM_LINEAR/SO_DEVMEM_DMABUF not found
  on some archs.
- Squashed in locking optimizations from edumazet@google.com. With this
  change we lock the xarray once per tcp_recvmsg_dmabuf() rather than
  once per frag in xa_alloc().

Changes in v1:
- Added dmabuf_id to dmabuf_cmsg (David/Stan).
- Devmem -> dmabuf (David).
- Change tcp_recvmsg_dmabuf() check to skb->dmabuf (Paolo).
- Use __skb_frag_ref() & napi_pp_put_page() for refcounting (Yunsheng).

RFC v3:
- Fixed issue with put_cmsg() failing silently.

---
 arch/alpha/include/uapi/asm/socket.h  |   5 +
 arch/mips/include/uapi/asm/socket.h   |   5 +
 arch/parisc/include/uapi/asm/socket.h |   5 +
 arch/sparc/include/uapi/asm/socket.h  |   5 +
 include/linux/socket.h                |   1 +
 include/net/netmem.h                  |  13 ++
 include/net/sock.h                    |   2 +
 include/uapi/asm-generic/socket.h     |   5 +
 include/uapi/linux/uio.h              |  13 ++
 net/ipv4/tcp.c                        | 255 +++++++++++++++++++++++++-
 net/ipv4/tcp_ipv4.c                   |  16 ++
 net/ipv4/tcp_minisocks.c              |   2 +
 12 files changed, 322 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/uapi/asm/socket.h b/arch/alpha/include/uapi/asm/socket.h
index e94f621903fe..ef4656a41058 100644
--- a/arch/alpha/include/uapi/asm/socket.h
+++ b/arch/alpha/include/uapi/asm/socket.h
@@ -140,6 +140,11 @@
 #define SO_PASSPIDFD		76
 #define SO_PEERPIDFD		77

+#define SO_DEVMEM_LINEAR	78
+#define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF	79
+#define SCM_DEVMEM_DMABUF	SO_DEVMEM_DMABUF
+
 #if !defined(__KERNEL__)

 #if __BITS_PER_LONG == 64

diff --git a/arch/mips/include/uapi/asm/socket.h b/arch/mips/include/uapi/asm/socket.h
index 60ebaed28a4c..414807d55e33 100644
--- a/arch/mips/include/uapi/asm/socket.h
+++ b/arch/mips/include/uapi/asm/socket.h
@@ -151,6 +151,11 @@
 #define SO_PASSPIDFD		76
 #define SO_PEERPIDFD		77

+#define SO_DEVMEM_LINEAR	78
+#define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF	79
+#define SCM_DEVMEM_DMABUF	SO_DEVMEM_DMABUF
+
 #if !defined(__KERNEL__)

 #if __BITS_PER_LONG == 64

diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h
index be264c2b1a11..2b817efd4544 100644
--- a/arch/parisc/include/uapi/asm/socket.h
+++ b/arch/parisc/include/uapi/asm/socket.h
@@ -132,6 +132,11 @@
 #define SO_PASSPIDFD		0x404A
 #define SO_PEERPIDFD		0x404B

+#define SO_DEVMEM_LINEAR	78
+#define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF	79
+#define SCM_DEVMEM_DMABUF	SO_DEVMEM_DMABUF
+
 #if !defined(__KERNEL__)

 #if __BITS_PER_LONG == 64

diff --git a/arch/sparc/include/uapi/asm/socket.h b/arch/sparc/include/uapi/asm/socket.h
index 682da3714686..00248fc68977 100644
--- a/arch/sparc/include/uapi/asm/socket.h
+++ b/arch/sparc/include/uapi/asm/socket.h
@@ -133,6 +133,11 @@
 #define SO_PASSPIDFD		0x0055
 #define SO_PEERPIDFD		0x0056

+#define
SO_DEVMEM_LINEAR 0x0057 +#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR +#define SO_DEVMEM_DMABUF 0x0058 +#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF + #if !defined(__KERNEL__) diff --git a/include/linux/socket.h b/include/linux/socket.h index df9cdb8bbfb8..d18cc47e89bd 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -327,6 +327,7 @@ struct ucred { * plain text and require encryption */ +#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */ #define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */ #define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */ #define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */ diff --git a/include/net/netmem.h b/include/net/netmem.h index 284f84a312c2..84043fbdd797 100644 --- a/include/net/netmem.h +++ b/include/net/netmem.h @@ -65,6 +65,19 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov) return niov - net_iov_owner(niov)->niovs; } +static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov) +{ + struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov); + + return owner->base_virtual + + ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT); +} + +static inline u32 net_iov_binding_id(const struct net_iov *niov) +{ + return net_iov_owner(niov)->binding->id; +} + static inline struct net_devmem_dmabuf_binding * net_iov_binding(const struct net_iov *niov) { diff --git a/include/net/sock.h b/include/net/sock.h index cce23ac4d514..f8ec869be238 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -337,6 +337,7 @@ struct sk_filter; * @sk_txtime_report_errors: set report errors mode for SO_TXTIME * @sk_txtime_unused: unused txtime flags * @ns_tracker: tracker for netns reference + * @sk_user_frags: xarray of pages the user is holding a reference on. */ struct sock { /* @@ -542,6 +543,7 @@ struct sock { #endif struct rcu_head sk_rcu; netns_tracker ns_tracker; + struct xarray sk_user_frags; }; struct sock_bh_locked { diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h index 8ce8a39a1e5f..e993edc9c0ee 100644 --- a/include/uapi/asm-generic/socket.h +++ b/include/uapi/asm-generic/socket.h @@ -135,6 +135,11 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77 +#define SO_DEVMEM_LINEAR 78 +#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR +#define SO_DEVMEM_DMABUF 79 +#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF + #if !defined(__KERNEL__) #if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__)) diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h index 059b1a9147f4..3a22ddae376a 100644 --- a/include/uapi/linux/uio.h +++ b/include/uapi/linux/uio.h @@ -20,6 +20,19 @@ struct iovec __kernel_size_t iov_len; /* Must be size_t (1003.1g) */ }; +struct dmabuf_cmsg { + __u64 frag_offset; /* offset into the dmabuf where the frag starts. + */ + __u32 frag_size; /* size of the frag. */ + __u32 frag_token; /* token representing this frag for + * DEVMEM_DONTNEED. + */ + __u32 dmabuf_id; /* dmabuf id this frag belongs to. */ + __u32 flags; /* Currently unused. Reserved for future + * uses. 
+ */ +}; + /* * UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1) */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 30e0aa38ba9b..984e28c5d096 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -471,6 +471,7 @@ void tcp_init_sock(struct sock *sk) set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); sk_sockets_allocated_inc(sk); + xa_init_flags(&sk->sk_user_frags, XA_FLAGS_ALLOC1); } EXPORT_SYMBOL(tcp_init_sock); @@ -2323,6 +2324,220 @@ static int tcp_inq_hint(struct sock *sk) return inq; } +/* batch __xa_alloc() calls and reduce xa_lock()/xa_unlock() overhead. */ +struct tcp_xa_pool { + u8 max; /* max <= MAX_SKB_FRAGS */ + u8 idx; /* idx <= max */ + __u32 tokens[MAX_SKB_FRAGS]; + netmem_ref netmems[MAX_SKB_FRAGS]; +}; + +static void tcp_xa_pool_commit_locked(struct sock *sk, struct tcp_xa_pool *p) +{ + int i; + + /* Commit part that has been copied to user space. */ + for (i = 0; i < p->idx; i++) + __xa_cmpxchg(&sk->sk_user_frags, p->tokens[i], XA_ZERO_ENTRY, + (__force void *)p->netmems[i], GFP_KERNEL); + /* Rollback what has been pre-allocated and is no longer needed. */ + for (; i < p->max; i++) + __xa_erase(&sk->sk_user_frags, p->tokens[i]); + + p->max = 0; + p->idx = 0; +} + +static void tcp_xa_pool_commit(struct sock *sk, struct tcp_xa_pool *p) +{ + if (!p->max) + return; + + xa_lock_bh(&sk->sk_user_frags); + + tcp_xa_pool_commit_locked(sk, p); + + xa_unlock_bh(&sk->sk_user_frags); +} + +static int tcp_xa_pool_refill(struct sock *sk, struct tcp_xa_pool *p, + unsigned int max_frags) +{ + int err, k; + + if (p->idx < p->max) + return 0; + + xa_lock_bh(&sk->sk_user_frags); + + tcp_xa_pool_commit_locked(sk, p); + + for (k = 0; k < max_frags; k++) { + err = __xa_alloc(&sk->sk_user_frags, &p->tokens[k], + XA_ZERO_ENTRY, xa_limit_31b, GFP_KERNEL); + if (err) + break; + } + + xa_unlock_bh(&sk->sk_user_frags); + + p->max = k; + p->idx = 0; + return k ? 0 : err; +} + +/* On error, returns the -errno. On success, returns number of bytes sent to the + * user. May not consume all of @remaining_len. + */ +static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb, + unsigned int offset, struct msghdr *msg, + int remaining_len) +{ + struct dmabuf_cmsg dmabuf_cmsg = { 0 }; + struct tcp_xa_pool tcp_xa_pool; + unsigned int start; + int i, copy, n; + int sent = 0; + int err = 0; + + tcp_xa_pool.max = 0; + tcp_xa_pool.idx = 0; + do { + start = skb_headlen(skb); + + if (skb_frags_readable(skb)) { + err = -ENODEV; + goto out; + } + + /* Copy header. */ + copy = start - offset; + if (copy > 0) { + copy = min(copy, remaining_len); + + n = copy_to_iter(skb->data + offset, copy, + &msg->msg_iter); + if (n != copy) { + err = -EFAULT; + goto out; + } + + offset += copy; + remaining_len -= copy; + + /* First a dmabuf_cmsg for # bytes copied to user + * buffer. + */ + memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg)); + dmabuf_cmsg.frag_size = copy; + err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR, + sizeof(dmabuf_cmsg), &dmabuf_cmsg); + if (err || msg->msg_flags & MSG_CTRUNC) { + msg->msg_flags &= ~MSG_CTRUNC; + if (!err) + err = -ETOOSMALL; + goto out; + } + + sent += copy; + + if (remaining_len == 0) + goto out; + } + + /* after that, send information of dmabuf pages through a + * sequence of cmsg + */ + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + struct net_iov *niov; + u64 frag_offset; + int end; + + /* !skb_frags_readable() should indicate that ALL the + * frags in this skb are dmabuf net_iovs. 
We're checking + * for that flag above, but also check individual frags + * here. If the tcp stack is not setting + * skb_frags_readable() correctly, we still don't want + * to crash here. + */ + if (!skb_frag_net_iov(frag)) { + net_err_ratelimited("Found non-dmabuf skb with net_iov"); + err = -ENODEV; + goto out; + } + + niov = skb_frag_net_iov(frag); + end = start + skb_frag_size(frag); + copy = end - offset; + + if (copy > 0) { + copy = min(copy, remaining_len); + + frag_offset = net_iov_virtual_addr(niov) + + skb_frag_off(frag) + offset - + start; + dmabuf_cmsg.frag_offset = frag_offset; + dmabuf_cmsg.frag_size = copy; + err = tcp_xa_pool_refill(sk, &tcp_xa_pool, + skb_shinfo(skb)->nr_frags - i); + if (err) + goto out; + + /* Will perform the exchange later */ + dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx]; + dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov); + + offset += copy; + remaining_len -= copy; + + err = put_cmsg(msg, SOL_SOCKET, + SO_DEVMEM_DMABUF, + sizeof(dmabuf_cmsg), + &dmabuf_cmsg); + if (err || msg->msg_flags & MSG_CTRUNC) { + msg->msg_flags &= ~MSG_CTRUNC; + if (!err) + err = -ETOOSMALL; + goto out; + } + + atomic_long_inc(&niov->pp_ref_count); + tcp_xa_pool.netmems[tcp_xa_pool.idx++] = skb_frag_netmem(frag); + + sent += copy; + + if (remaining_len == 0) + goto out; + } + start = end; + } + + tcp_xa_pool_commit(sk, &tcp_xa_pool); + if (!remaining_len) + goto out; + + /* if remaining_len is not satisfied yet, we need to go to the + * next frag in the frag_list to satisfy remaining_len. + */ + skb = skb_shinfo(skb)->frag_list ?: skb->next; + + offset = offset - start; + } while (skb); + + if (remaining_len) { + err = -EFAULT; + goto out; + } + +out: + tcp_xa_pool_commit(sk, &tcp_xa_pool); + if (!sent) + sent = err; + + return sent; +} + /* * This routine copies from a sock struct into the user buffer. * @@ -2336,6 +2551,7 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len, int *cmsg_flags) { struct tcp_sock *tp = tcp_sk(sk); + int last_copied_dmabuf = -1; /* uninitialized */ int copied = 0; u32 peek_seq; u32 *seq; @@ -2515,15 +2731,44 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len, } if (!(flags & MSG_TRUNC)) { - err = skb_copy_datagram_msg(skb, offset, msg, used); - if (err) { - /* Exception. Bailout! */ - if (!copied) - copied = -EFAULT; + if (last_copied_dmabuf != -1 && + last_copied_dmabuf != !skb_frags_readable(skb)) break; + + if (skb_frags_readable(skb)) { + err = skb_copy_datagram_msg(skb, offset, msg, + used); + if (err) { + /* Exception. Bailout! */ + if (!copied) + copied = -EFAULT; + break; + } + } else { + if (!(flags & MSG_SOCK_DEVMEM)) { + /* dmabuf skbs can only be received + * with the MSG_SOCK_DEVMEM flag. 
+				 */
+				if (!copied)
+					copied = -EFAULT;
+
+				break;
+			}
+
+			err = tcp_recvmsg_dmabuf(sk, skb, offset, msg,
+						 used);
+			if (err <= 0) {
+				if (!copied)
+					copied = -EFAULT;
+
+				break;
+			}
+			used = err;
+			}
 		}

+		last_copied_dmabuf = !skb_frags_readable(skb);
+
 		WRITE_ONCE(*seq, *seq + used);
 		copied += used;
 		len -= used;

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index a4e510846905..a76e56887d97 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -79,6 +79,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -2509,10 +2510,25 @@ static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
 }
 #endif

+static void tcp_release_user_frags(struct sock *sk)
+{
+#ifdef CONFIG_PAGE_POOL
+	unsigned long index;
+	void *netmem;
+
+	xa_for_each(&sk->sk_user_frags, index, netmem)
+		WARN_ON_ONCE(!napi_pp_put_page((__force netmem_ref)netmem));
+#endif
+}
+
 void tcp_v4_destroy_sock(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);

+	tcp_release_user_frags(sk);
+
+	xa_destroy(&sk->sk_user_frags);
+
 	trace_tcp_destroy_sock(sk);

 	tcp_clear_xmit_timers(sk);

diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index a19a9dbd3409..9ab87a41255d 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -625,6 +625,8 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,

 	__TCP_INC_STATS(sock_net(sk), TCP_MIB_PASSIVEOPENS);

+	xa_init_flags(&newsk->sk_user_frags, XA_FLAGS_ALLOC1);
+
 	return newsk;
 }
 EXPORT_SYMBOL(tcp_create_openreq_child);

From patchwork Sun Aug 25 04:15:08 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776591
Date: Sun, 25 Aug 2024 04:15:08 +0000
Message-ID: <20240825041511.324452-11-almasrymina@google.com>
Subject: [PATCH net-next v22 10/13] net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org

Add an interface for the user to notify the kernel that it is done
reading the devmem dmabuf frags returned as cmsg. The kernel will drop
the reference on the frags to make them available for reuse.
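A hedged sketch of the corresponding userspace call (the token value
comes from a dmabuf_cmsg::frag_token field returned by the previous
patch; releasing one token at a time is a simplification, not something
this patch requires):

	/* Illustrative only: return a frag token so the dmabuf pages can
	 * be reused. struct dmabuf_token and SO_DEVMEM_DONTNEED are added
	 * by this patch; per the kernel side below, setsockopt() returns
	 * the number of frags actually freed, and at most
	 * MAX_DONTNEED_TOKENS (128) tokens fit in one call.
	 */
	#include <sys/socket.h>
	#include <linux/uio.h>		/* struct dmabuf_token */

	static int release_one_frag(int fd, __u32 token)
	{
		struct dmabuf_token tok = {
			.token_start = token,	/* first token in the range */
			.token_count = 1,	/* release just this one frag */
		};

		return setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
				  &tok, sizeof(tok));
	}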
Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov
Reviewed-by: Eric Dumazet

---

v16:
- Use sk_is_tcp().
- Fix unnamed 128 DONTNEED limit (David).
- Fix kernel allocating for 128 tokens even if the user didn't ask for
  that much (Eric).
- Fix number assignment (Arnd).

v10:
- Fix leak of tokens (Nikolay).

v7:
- Updated SO_DEVMEM_* uapi to use the next available entry (Arnd).

v6:
- Squash in locking optimizations from edumazet@google.com. With his
  changes we lock the xarray once per sock_devmem_dontneed operation
  rather than once per frag.

Changes in v1:
- devmemtoken -> dmabuf_token (David).
- Use napi_pp_put_page() for refcounting (Yunsheng).
- Fix build error with missing socket options on other asms.

---
 arch/alpha/include/uapi/asm/socket.h  |  1 +
 arch/mips/include/uapi/asm/socket.h   |  1 +
 arch/parisc/include/uapi/asm/socket.h |  1 +
 arch/sparc/include/uapi/asm/socket.h  |  1 +
 include/uapi/asm-generic/socket.h     |  1 +
 include/uapi/linux/uio.h              |  4 ++
 net/core/sock.c                       | 68 +++++++++++++++++++++++++++
 7 files changed, 77 insertions(+)

diff --git a/arch/alpha/include/uapi/asm/socket.h b/arch/alpha/include/uapi/asm/socket.h
index ef4656a41058..251b73c5481e 100644
--- a/arch/alpha/include/uapi/asm/socket.h
+++ b/arch/alpha/include/uapi/asm/socket.h
@@ -144,6 +144,7 @@
 #define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
 #define SO_DEVMEM_DMABUF	79
 #define SCM_DEVMEM_DMABUF	SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED	80

 #if !defined(__KERNEL__)

diff --git a/arch/mips/include/uapi/asm/socket.h b/arch/mips/include/uapi/asm/socket.h
index 414807d55e33..8ab7582291ab 100644
--- a/arch/mips/include/uapi/asm/socket.h
+++ b/arch/mips/include/uapi/asm/socket.h
@@ -155,6 +155,7 @@
 #define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
 #define SO_DEVMEM_DMABUF	79
 #define SCM_DEVMEM_DMABUF	SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED	80

 #if !defined(__KERNEL__)

diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h
index 2b817efd4544..38fc0b188e08 100644
--- a/arch/parisc/include/uapi/asm/socket.h
+++ b/arch/parisc/include/uapi/asm/socket.h
@@ -136,6 +136,7 @@
 #define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
 #define SO_DEVMEM_DMABUF	79
 #define SCM_DEVMEM_DMABUF	SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED	80

 #if !defined(__KERNEL__)

diff --git a/arch/sparc/include/uapi/asm/socket.h b/arch/sparc/include/uapi/asm/socket.h
index 00248fc68977..57084ed2f3c4 100644
--- a/arch/sparc/include/uapi/asm/socket.h
+++ b/arch/sparc/include/uapi/asm/socket.h
@@ -137,6 +137,7 @@
 #define SCM_DEVMEM_LINEAR	SO_DEVMEM_LINEAR
 #define SO_DEVMEM_DMABUF	0x0058
 #define SCM_DEVMEM_DMABUF       SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED      0x0059
 
 #if !defined(__KERNEL__)

diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
index e993edc9c0ee..3b4e3e815602 100644
--- a/include/uapi/asm-generic/socket.h
+++ b/include/uapi/asm-generic/socket.h
@@ -139,6 +139,7 @@
 #define SCM_DEVMEM_LINEAR       SO_DEVMEM_LINEAR
 #define SO_DEVMEM_DMABUF        79
 #define SCM_DEVMEM_DMABUF       SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED      80
 
 #if !defined(__KERNEL__)

diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index 3a22ddae376a..d17f8fcd93ec 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -33,6 +33,10 @@ struct dmabuf_cmsg {
          */
 };
 
+struct dmabuf_token {
+        __u32 token_start;
+        __u32 token_count;
+};
 /*
  * UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
  */

diff --git a/net/core/sock.c b/net/core/sock.c
index 9abc4fe25953..df2a2dd6409d 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -124,6 +124,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1049,6 +1050,69 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
         return 0;
 }
 
+#ifdef CONFIG_PAGE_POOL
+
+/* This is the number of tokens that the user can SO_DEVMEM_DONTNEED in
+ * 1 syscall. The limit exists to limit the amount of memory the kernel
+ * allocates to copy these tokens.
+ */
+#define MAX_DONTNEED_TOKENS 128
+
+static noinline_for_stack int
+sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen)
+{
+        unsigned int num_tokens, i, j, k, netmem_num = 0;
+        struct dmabuf_token *tokens;
+        netmem_ref netmems[16];
+        int ret = 0;
+
+        if (!sk_is_tcp(sk))
+                return -EBADF;
+
+        if (optlen % sizeof(struct dmabuf_token) ||
+            optlen > sizeof(*tokens) * MAX_DONTNEED_TOKENS)
+                return -EINVAL;
+
+        num_tokens = optlen / sizeof(struct dmabuf_token);
+        tokens = kvmalloc_array(num_tokens, sizeof(*tokens), GFP_KERNEL);
+        if (!tokens)
+                return -ENOMEM;
+
+        if (copy_from_sockptr(tokens, optval, optlen)) {
+                kvfree(tokens);
+                return -EFAULT;
+        }
+
+        xa_lock_bh(&sk->sk_user_frags);
+        for (i = 0; i < num_tokens; i++) {
+                for (j = 0; j < tokens[i].token_count; j++) {
+                        netmem_ref netmem = (__force netmem_ref)__xa_erase(
+                                &sk->sk_user_frags, tokens[i].token_start + j);
+
+                        if (netmem &&
+                            !WARN_ON_ONCE(!netmem_is_net_iov(netmem))) {
+                                netmems[netmem_num++] = netmem;
+                                if (netmem_num == ARRAY_SIZE(netmems)) {
+                                        xa_unlock_bh(&sk->sk_user_frags);
+                                        for (k = 0; k < netmem_num; k++)
+                                                WARN_ON_ONCE(!napi_pp_put_page(netmems[k]));
+                                        netmem_num = 0;
+                                        xa_lock_bh(&sk->sk_user_frags);
+                                }
+                                ret++;
+                        }
+                }
+        }
+
+        xa_unlock_bh(&sk->sk_user_frags);
+        for (k = 0; k < netmem_num; k++)
+                WARN_ON_ONCE(!napi_pp_put_page(netmems[k]));
+
+        kvfree(tokens);
+        return ret;
+}
+#endif
+
 void sockopt_lock_sock(struct sock *sk)
 {
         /* When current->bpf_ctx is set, the setsockopt is called from
@@ -1211,6 +1275,10 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
                         ret = -EOPNOTSUPP;
                 return ret;
         }
+#ifdef CONFIG_PAGE_POOL
+        case SO_DEVMEM_DONTNEED:
+                return sock_devmem_dontneed(sk, optval, optlen);
+#endif
         }
 
         sockopt_lock_sock(sk);
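As a userspace usage sketch (not part of the patch): given uapi headers updated
with the definitions above, frags can be returned in batches rather than one
token per call, subject to the MAX_DONTNEED_TOKENS cap. Per the selftest later
in this series, a successful setsockopt() here returns the number of frags the
kernel actually freed:

#include <sys/socket.h>
#include <linux/uio.h>          /* struct dmabuf_token, from this patch */

#define MAX_DONTNEED_TOKENS 128 /* mirrors the kernel-side cap above */

/* Return a batch of frag tokens with a single setsockopt() call.
 * Returns the number of frags the kernel freed, or -1 on error.
 */
static int flush_tokens(int fd, struct dmabuf_token *tokens, unsigned int n)
{
        if (!n || n > MAX_DONTNEED_TOKENS)
                return -1;      /* the kernel rejects oversized batches with EINVAL */

        return setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
                          tokens, n * sizeof(*tokens));
}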
From patchwork Sun Aug 25 04:15:09 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776594
Date: Sun, 25 Aug 2024 04:15:09 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
References: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-12-almasrymina@google.com>
Subject: [PATCH net-next v22 11/13] net: add devmem TCP documentation
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org

Add documentation outlining the usage and details of devmem TCP.

Signed-off-by: Mina Almasry
Reviewed-by: Bagas Sanjaya
Reviewed-by: Donald Hunter

---

v16:
- Add documentation on unbinding the NIC from dmabuf (Donald).
- Add note that any dmabuf should work (Donald).

v9: https://lore.kernel.org/netdev/20240403002053.2376017-14-almasrymina@google.com/
- Bagas doc suggestions.

v8:
- Applied docs suggestions (Randy). Thanks!

v7:
- Applied docs suggestions (Jakub).

v2:
- Missing spdx (simon)
- add to index.rst (simon)

---
 Documentation/networking/devmem.rst | 269 ++++++++++++++++++++++++++++
 Documentation/networking/index.rst  |   1 +
 2 files changed, 270 insertions(+)
 create mode 100644 Documentation/networking/devmem.rst

diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
new file mode 100644
index 000000000000..417fc977844e
--- /dev/null
+++ b/Documentation/networking/devmem.rst
@@ -0,0 +1,269 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Device Memory TCP
+=================
+
+
+Intro
+=====
+
+Device memory TCP (devmem TCP) enables receiving data directly into device
+memory (dmabuf). The feature is currently implemented for TCP sockets.
+
+
+Opportunity
+-----------
+
+A large number of data transfers have device memory as the source and/or
+destination. Accelerators drastically increased the prevalence of such
+transfers. Some examples include:
+
+- Distributed training, where ML accelerators, such as GPUs on different
+  hosts, exchange data.
+
+- Distributed raw block storage applications transfer large amounts of data
+  to and from remote SSDs. Much of this data does not require host processing.
+
+Typically the Device-to-Device data transfers in the network are implemented
+as the following low-level operations: Device-to-Host copy, Host-to-Host
+network transfer, and Host-to-Device copy.
+
+The flow involving host copies is suboptimal, especially for bulk data
+transfers, and can put significant strain on system resources such as host
+memory bandwidth and PCIe bandwidth.
+
+Devmem TCP optimizes this use case by implementing socket APIs that enable
+the user to receive incoming network packets directly into device memory.
+
+Packet payloads go directly from the NIC to device memory.
+
+Packet headers go to host memory and are processed by the TCP/IP stack
+normally. The NIC must support header split to achieve this.
+
+Advantages:
+
+- Alleviate host memory bandwidth pressure, compared to existing
+  network-transfer + device-copy semantics.
+
+- Alleviate PCIe bandwidth pressure, by limiting data transfer to the lowest
+  level of the PCIe tree, compared to the traditional path which sends data
+  through the root complex.
+
+
+More Info
+---------
+
+  slides, video
+    https://netdevconf.org/0x17/sessions/talk/device-memory-tcp.html
+
+  patchset
+    [RFC PATCH v6 00/12] Device Memory TCP
+    https://lore.kernel.org/netdev/20240305020153.2787423-1-almasrymina@google.com/
+
+
+Interface
+=========
+
+
+Example
+-------
+
+tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting
+up the RX path of this API.
+
+
+NIC Setup
+---------
+
+Header split, flow steering, & RSS are required features for devmem TCP.
+
+Header split is used to split incoming packets into a header buffer in host
+memory, and a payload buffer in device memory.
+
+Flow steering & RSS are used to ensure that only flows targeting devmem land
+on an RX queue bound to devmem.
+
+Enable header split & flow steering::
+
+	# enable header split
+	ethtool -G eth1 tcp-data-split on
+
+	# enable flow steering
+	ethtool -K eth1 ntuple on
+
+Configure RSS to steer all traffic away from the target RX queue (queue 15 in
+this example)::
+
+	ethtool --set-rxfh-indir eth1 equal 15
+
+
+The user must bind a dmabuf to any number of RX queues on a given NIC using
+the netlink API::
+
+	/* Bind dmabuf to NIC RX queue 15 */
+	struct netdev_queue_id *queues;
+	queues = malloc(sizeof(*queues) * 1);
+
+	queues[0]._present.type = 1;
+	queues[0]._present.id = 1;
+	queues[0].type = NETDEV_QUEUE_TYPE_RX;
+	queues[0].id = 15;
+
+	*ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+
+	req = netdev_bind_rx_req_alloc();
+	netdev_bind_rx_req_set_ifindex(req, 1 /* ifindex */);
+	netdev_bind_rx_req_set_fd(req, dmabuf_fd);
+	__netdev_bind_rx_req_set_queues(req, queues, n_queue_index);
+
+	rsp = netdev_bind_rx(*ys, req);
+
+	dmabuf_id = rsp->id;
+
+
+The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
+that has been bound.
+
+The user can unbind the dmabuf from the netdevice by closing the netlink
+socket that established the binding. We do this so that the binding is
+automatically unbound even if the userspace process crashes.
+
+Note that any reasonably well-behaved dmabuf from any exporter should work
+with devmem TCP, even if the dmabuf is not actually backed by devmem. An
+example of this is udmabuf, which wraps user memory (non-devmem) in a dmabuf.
+
+
+Socket Setup
+------------
+
+The socket must be flow steered to the dmabuf bound RX queue::
+
+	ethtool -N eth1 flow-type tcp4 ... queue 15
+
+
+Receiving data
+--------------
+
+The user application must signal to the kernel that it is capable of receiving
+devmem data by passing the MSG_SOCK_DEVMEM flag to recvmsg::
+
+	ret = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);
+
+Applications that do not specify the MSG_SOCK_DEVMEM flag will receive an
+EFAULT on devmem data.
+
+Devmem data is received directly into the dmabuf bound to the NIC in 'NIC
+Setup', and the kernel signals such to the user via the SCM_DEVMEM_* cmsgs::
+
+	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+		if (cm->cmsg_level != SOL_SOCKET ||
+		    (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+		     cm->cmsg_type != SCM_DEVMEM_LINEAR))
+			continue;
+
+		dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);
+
+		if (cm->cmsg_type == SCM_DEVMEM_DMABUF) {
+			/* Frag landed in dmabuf.
+			 *
+			 * dmabuf_cmsg->dmabuf_id is the dmabuf the
+			 * frag landed on.
+			 *
+			 * dmabuf_cmsg->frag_offset is the offset into
+			 * the dmabuf where the frag starts.
+			 *
+			 * dmabuf_cmsg->frag_size is the size of the
+			 * frag.
+			 *
+			 * dmabuf_cmsg->frag_token is a token used to
+			 * refer to this frag for later freeing.
+			 */
+
+			struct dmabuf_token token;
+			token.token_start = dmabuf_cmsg->frag_token;
+			token.token_count = 1;
+			continue;
+		}
+
+		if (cm->cmsg_type == SCM_DEVMEM_LINEAR)
+			/* Frag landed in linear buffer.
+			 *
+			 * dmabuf_cmsg->frag_size is the size of the
+			 * frag.
+			 */
+			continue;
+	}
+
+Applications may receive 2 cmsgs:
+
+- SCM_DEVMEM_DMABUF: this indicates the fragment landed in the dmabuf
+  indicated by dmabuf_id.
+
+- SCM_DEVMEM_LINEAR: this indicates the fragment landed in the linear buffer.
+  This typically happens when the NIC is unable to split the packet at the
+  header boundary, such that part (or all) of the payload landed in host
+  memory.
+
+Applications may receive no SCM_DEVMEM_* cmsgs. That indicates non-devmem,
+regular TCP data that landed on an RX queue not bound to a dmabuf.
+
+
+Freeing frags
+-------------
+
+Frags received via SCM_DEVMEM_DMABUF are pinned by the kernel while the user
+processes the frag. The user must return the frag to the kernel via
+SO_DEVMEM_DONTNEED::
+
+	ret = setsockopt(client_fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, &token,
+			 sizeof(token));
+
+The user must ensure the tokens are returned to the kernel in a timely manner.
+Failure to do so will exhaust the limited dmabuf that is bound to the RX queue
+and will lead to packet drops.
+
+
+Implementation & Caveats
+========================
+
+Unreadable skbs
+---------------
+
+Devmem payloads are inaccessible to the kernel processing the packets. This
+results in a few quirks for payloads of devmem skbs:
+
+- Loopback is not functional. Loopback relies on copying the payload, which
+  is not possible with devmem skbs.
+
+- Software checksum calculation fails.
+
+- tcpdump and bpf can't access devmem packet payloads.
+
+
+Testing
+=======
+
+More realistic example code can be found in the kernel source under
+tools/testing/selftests/net/ncdevmem.c
+
+ncdevmem is a devmem TCP netcat. It works very similarly to netcat, but
+receives data directly into a udmabuf.
+
+To run ncdevmem, you need to run it on a server on the machine under test,
+and you need to run netcat on a peer to provide the TX data.
+
+ncdevmem has a validation mode as well that expects a repeating pattern of
+incoming data and validates it as such.
+For example, you can launch
+ncdevmem on the server by::
+
+	ncdevmem -s <server IP> -c <client IP> -f eth1 -l -p 5201 -v 7
+
+On client side, use regular netcat to send TX data to ncdevmem process
+on the server::
+
+	yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
+		tr \\n \\0 | head -c 5G | nc <server IP> 5201 -p 5201

diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst
index c71b87346178..08f437c326ab 100644
--- a/Documentation/networking/index.rst
+++ b/Documentation/networking/index.rst
@@ -49,6 +49,7 @@ Contents:
    cdc_mbim
    dccp
    dctcp
+   devmem
    dns_resolver
    driver
    eql
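Pulling the pieces of the documentation above together, a minimal devmem RX
loop might look like the sketch below. This is an editor's illustration, not
part of the patch: it assumes `fd` is an accepted TCP socket whose flow has
been steered to a dmabuf-bound queue, and that the uapi headers export the
structures and constants introduced earlier in this series:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/uio.h>          /* struct dmabuf_cmsg, struct dmabuf_token */

#ifndef MSG_SOCK_DEVMEM
#define MSG_SOCK_DEVMEM 0x2000000
#endif

/* Receive one batch of devmem frags and immediately return their tokens. */
static ssize_t recv_devmem_once(int fd, char *iobuf, size_t iobuf_len)
{
        char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 64];
        struct iovec iov = { .iov_base = iobuf, .iov_len = iobuf_len };
        struct msghdr msg = { 0 };
        struct cmsghdr *cm;
        ssize_t ret;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        ret = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);
        if (ret <= 0)
                return ret;

        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                struct dmabuf_cmsg *dc;
                struct dmabuf_token tok;

                if (cm->cmsg_level != SOL_SOCKET ||
                    cm->cmsg_type != SCM_DEVMEM_DMABUF)
                        continue;

                dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);
                /* ... consume dc->frag_offset / dc->frag_size here ... */

                tok.token_start = dc->frag_token;
                tok.token_count = 1;
                /* Return the frag to the kernel as soon as we are done,
                 * so the dmabuf pages can be reused.
                 */
                setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, &tok,
                           sizeof(tok));
        }

        return ret;
}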
From patchwork Sun Aug 25 04:15:10 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776595
Date: Sun, 25 Aug 2024 04:15:10 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
References: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-13-almasrymina@google.com>
Subject: [PATCH net-next v22 12/13] selftests: add ncdevmem, netcat for devmem TCP
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org

ncdevmem is a devmem TCP netcat. It works similarly to netcat, but it sends
and receives data using the devmem TCP APIs. It uses udmabuf as the dmabuf
provider. It is compatible with a regular netcat running on a peer, or a
ncdevmem running on a peer.

In addition to normal netcat support, ncdevmem has a validation mode, where
it sends a specific pattern and validates this pattern on the receiver side
to ensure data integrity.

Suggested-by: Stanislav Fomichev
Signed-off-by: Mina Almasry

---

v22:
- Add run_command helper. It reduces boilerplate and prints the commands it
  is running.
v20:
- Remove unnecessary sleep(1)
- Add test to ensure dmabuf binding fails if header split is disabled.

v19:
- Check return code of ethtool commands.
- Add test for deactivating mp bound rx queues
- Add test for attempting to bind with missing netlink attributes.

v16:
- Remove outdated -n option (Taehee).
- Use 'ifname' instead of accidentally hardcoded 'eth1'. (Taehee)
- Remove dead code 'iterations' (Taehee).
- Use if_nametoindex() instead of passing device index (Taehee).

v15:
- Fix linking against libynl. (Jakub)

v9: https://lore.kernel.org/netdev/20240403002053.2376017-15-almasrymina@google.com/
- Remove unused nic_pci_addr entry (Cong).

v6:
- Updated to bind 8 queues.
- Added RSS configuration.
- Added some more tests for the netlink API.

Changes in v1:
- Many more general cleanups (Willem).
- Removed driver reset (Jakub).
- Removed hardcoded if index (Paolo).

RFC v2:
- General cleanups (Willem).

---
 tools/testing/selftests/net/.gitignore |   1 +
 tools/testing/selftests/net/Makefile   |   9 +
 tools/testing/selftests/net/ncdevmem.c | 570 +++++++++++++++++++++++++
 3 files changed, 580 insertions(+)
 create mode 100644 tools/testing/selftests/net/ncdevmem.c

diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore
index 666ab7d9390b..fe770903118c 100644
--- a/tools/testing/selftests/net/.gitignore
+++ b/tools/testing/selftests/net/.gitignore
@@ -17,6 +17,7 @@ ipv6_flowlabel
 ipv6_flowlabel_mgr
 log.txt
 msg_zerocopy
+ncdevmem
 nettest
 psock_fanout
 psock_snd

diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
index 8eaffd7a641c..e4708975ef42 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -95,6 +95,11 @@ TEST_PROGS += fq_band_pktlimit.sh
 TEST_PROGS += vlan_hw_filter.sh
 TEST_PROGS += bpf_offload.py
 
+# YNL files, must be before "include ..lib.mk"
+EXTRA_CLEAN += $(OUTPUT)/libynl.a
+YNL_GEN_FILES := ncdevmem
+TEST_GEN_FILES += $(YNL_GEN_FILES)
+
 TEST_FILES := settings
 TEST_FILES += in_netns.sh lib.sh net_helper.sh setup_loopback.sh setup_veth.sh
 
@@ -104,6 +109,10 @@ TEST_INCLUDES := forwarding/lib.sh
 
 include ../lib.mk
 
+# YNL build
+YNL_GENS := netdev
+include ynl.mk
+
 $(OUTPUT)/epoll_busy_poll: LDLIBS += -lcap
 $(OUTPUT)/reuseport_bpf_numa: LDLIBS += -lnuma
 $(OUTPUT)/tcp_mmap: LDLIBS += -lpthread -lcrypto

diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c
new file mode 100644
index 000000000000..64d6805381c5
--- /dev/null
+++ b/tools/testing/selftests/net/ncdevmem.c
@@ -0,0 +1,570 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#define __EXPORTED_HEADERS__
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#define __iovec_defined
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "netdev-user.h"
+#include
+
+#define PAGE_SHIFT 12
+#define TEST_PREFIX "ncdevmem"
+#define NUM_PAGES 16000
+
+#ifndef MSG_SOCK_DEVMEM
+#define MSG_SOCK_DEVMEM 0x2000000
+#endif
+
+/*
+ * tcpdevmem netcat. Works similarly to netcat but does device memory TCP
+ * instead of regular TCP. Uses udmabuf to mock a dmabuf provider.
+ *
+ * Usage:
+ *
+ *	On server:
+ *	ncdevmem -s <server IP> -c <client IP> -f eth1 -l -p 5201 -v 7
+ *
+ *	On client:
+ *	yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
+ *		tr \\n \\0 | \
+ *		head -c 5G | \
+ *		nc <server IP> 5201 -p 5201
+ *
+ * Note this is compatible with regular netcat. i.e. the sender or receiver
+ * can be replaced with regular netcat to test the RX or TX path in isolation.
+ */
+
+static char *server_ip = "192.168.1.4";
+static char *client_ip = "192.168.1.2";
+static char *port = "5201";
+static size_t do_validation;
+static int start_queue = 8;
+static int num_queues = 8;
+static char *ifname = "eth1";
+static unsigned int ifindex;
+static unsigned int dmabuf_id;
+
+void print_bytes(void *ptr, size_t size)
+{
+	unsigned char *p = ptr;
+	int i;
+
+	for (i = 0; i < size; i++)
+		printf("%02hhX ", p[i]);
+	printf("\n");
+}
+
+void print_nonzero_bytes(void *ptr, size_t size)
+{
+	unsigned char *p = ptr;
+	unsigned int i;
+
+	for (i = 0; i < size; i++)
+		putchar(p[i]);
+	printf("\n");
+}
+
+void validate_buffer(void *line, size_t size)
+{
+	static unsigned char seed = 1;
+	unsigned char *ptr = line;
+	int errors = 0;
+	size_t i;
+
+	for (i = 0; i < size; i++) {
+		if (ptr[i] != seed) {
+			fprintf(stderr,
+				"Failed validation: expected=%u, actual=%u, index=%lu\n",
+				seed, ptr[i], i);
+			errors++;
+			if (errors > 20)
+				error(1, 0, "validation failed.");
+		}
+		seed++;
+		if (seed == do_validation)
+			seed = 0;
+	}
+
+	fprintf(stdout, "Validated buffer\n");
+}
+
+#define run_command(cmd, ...)                                           \
+	({                                                              \
+		char command[256];                                      \
+		memset(command, 0, sizeof(command));                    \
+		snprintf(command, sizeof(command), cmd, ##__VA_ARGS__); \
+		printf("Running: %s\n", command);                       \
+		system(command);                                        \
+	})
+
+static int reset_flow_steering(void)
+{
+	int ret = 0;
+
+	ret = run_command("sudo ethtool -K %s ntuple off", ifname);
+	if (ret)
+		return ret;
+
+	return run_command("sudo ethtool -K %s ntuple on", ifname);
+}
+
+static int configure_headersplit(bool on)
+{
+	return run_command("sudo ethtool -G %s tcp-data-split %s", ifname,
+			   on ? "on" : "off");
+}
+
+static int configure_rss(void)
+{
+	return run_command("sudo ethtool -X %s equal %d", ifname, start_queue);
+}
+
+static int configure_channels(unsigned int rx, unsigned int tx)
+{
+	return run_command("sudo ethtool -L %s rx %u tx %u", ifname, rx, tx);
+}
+
+static int configure_flow_steering(void)
+{
+	return run_command("sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d",
+			   ifname, client_ip, server_ip, port, port, start_queue);
+}
+
+static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
+			 struct netdev_queue_id *queues,
+			 unsigned int n_queue_index, struct ynl_sock **ys)
+{
+	struct netdev_bind_rx_req *req = NULL;
+	struct netdev_bind_rx_rsp *rsp = NULL;
+	struct ynl_error yerr;
+
+	*ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+	if (!*ys) {
+		fprintf(stderr, "YNL: %s\n", yerr.msg);
+		return -1;
+	}
+
+	req = netdev_bind_rx_req_alloc();
+	netdev_bind_rx_req_set_ifindex(req, ifindex);
+	netdev_bind_rx_req_set_fd(req, dmabuf_fd);
+	__netdev_bind_rx_req_set_queues(req, queues, n_queue_index);
+
+	rsp = netdev_bind_rx(*ys, req);
+	if (!rsp) {
+		perror("netdev_bind_rx");
+		goto err_close;
+	}
+
+	if (!rsp->_present.id) {
+		perror("id not present");
+		goto err_close;
+	}
+
+	printf("got dmabuf id=%d\n", rsp->id);
+	dmabuf_id = rsp->id;
+
+	netdev_bind_rx_req_free(req);
+	netdev_bind_rx_rsp_free(rsp);
+
+	return 0;
+
+err_close:
+	fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg);
+	netdev_bind_rx_req_free(req);
+	ynl_sock_destroy(*ys);
+	return -1;
+}
+
+static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size)
+{
+	struct udmabuf_create create;
+	int ret;
+
+	*devfd = open("/dev/udmabuf", O_RDWR);
+	if (*devfd < 0) {
+		error(70, 0,
+		      "%s: [skip,no-udmabuf: Unable to access DMA buffer device file]\n",
+		      TEST_PREFIX);
+	}
+
+	*memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
+	if (*memfd < 0)
+		error(70, 0, "%s: [skip,no-memfd]\n", TEST_PREFIX);
+
+	/* Required for udmabuf */
+	ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK);
+	if (ret < 0)
+		error(73, 0, "%s: [skip,fcntl-add-seals]\n", TEST_PREFIX);
+
+	ret = ftruncate(*memfd, dmabuf_size);
+	if (ret == -1)
+		error(74, 0, "%s: [FAIL,memfd-truncate]\n", TEST_PREFIX);
+
+	memset(&create, 0, sizeof(create));
+
+	create.memfd = *memfd;
+	create.offset = 0;
+	create.size = dmabuf_size;
+	*buf = ioctl(*devfd, UDMABUF_CREATE, &create);
+	if (*buf < 0)
+		error(75, 0, "%s: [FAIL, create udmabuf]\n", TEST_PREFIX);
+}
+
+int do_server(void)
+{
+	char ctrl_data[sizeof(int) * 20000];
+	struct netdev_queue_id *queues;
+	size_t non_page_aligned_frags = 0;
+	struct sockaddr_in client_addr;
+	struct sockaddr_in server_sin;
+	size_t page_aligned_frags = 0;
+	int devfd, memfd, buf, ret;
+	size_t total_received = 0;
+	socklen_t client_addr_len;
+	bool is_devmem = false;
+	char *buf_mem = NULL;
+	struct ynl_sock *ys;
+	size_t dmabuf_size;
+	char iobuf[819200];
+	char buffer[256];
+	int socket_fd;
+	int client_fd;
+	size_t i = 0;
+	int opt = 1;
+
+	dmabuf_size = getpagesize() * NUM_PAGES;
+
+	create_udmabuf(&devfd, &memfd, &buf, dmabuf_size);
+
+	if (reset_flow_steering())
+		error(1, 0, "Failed to reset flow steering\n");
+
+	/* Configure RSS to divert all traffic from our devmem queues */
+	if (configure_rss())
+		error(1, 0, "Failed to configure rss\n");
+
+	/* Flow steer our devmem flows to start_queue */
+	if (configure_flow_steering())
+		error(1, 0, "Failed to configure flow steering\n");
+
+	sleep(1);
+
+	queues = malloc(sizeof(*queues) * num_queues);
+
+	for (i = 0; i < num_queues; i++) {
+		queues[i]._present.type = 1;
+		queues[i]._present.id = 1;
+		queues[i].type = NETDEV_QUEUE_TYPE_RX;
+		queues[i].id = start_queue + i;
+	}
+
+	if (bind_rx_queue(ifindex, buf, queues, num_queues, &ys))
+		error(1, 0, "Failed to bind\n");
+
+	buf_mem = mmap(NULL, dmabuf_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+		       buf, 0);
+	if (buf_mem == MAP_FAILED)
+		error(1, 0, "mmap()");
+
+	server_sin.sin_family = AF_INET;
+	server_sin.sin_port = htons(atoi(port));
+
+	ret = inet_pton(server_sin.sin_family, server_ip, &server_sin.sin_addr);
+	if (ret != 1)
+		error(79, 0, "%s: [FAIL, parse server IP]\n", TEST_PREFIX);
+
+	socket_fd = socket(server_sin.sin_family, SOCK_STREAM, 0);
+	if (socket_fd < 0)
+		error(errno, errno, "%s: [FAIL, create socket]\n", TEST_PREFIX);
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &opt,
+			 sizeof(opt));
+	if (ret)
+		error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX);
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &opt,
+			 sizeof(opt));
+	if (ret)
+		error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX);
+
+	printf("binding to address %s:%d\n", server_ip,
+	       ntohs(server_sin.sin_port));
+
+	ret = bind(socket_fd, &server_sin, sizeof(server_sin));
+	if (ret)
+		error(errno, errno, "%s: [FAIL, bind]\n", TEST_PREFIX);
+
+	ret = listen(socket_fd, 1);
+	if (ret)
+		error(errno, errno, "%s: [FAIL, listen]\n", TEST_PREFIX);
+
+	client_addr_len = sizeof(client_addr);
+
+	inet_ntop(server_sin.sin_family, &server_sin.sin_addr, buffer,
+		  sizeof(buffer));
+	printf("Waiting for connection on %s:%d\n", buffer,
+	       ntohs(server_sin.sin_port));
+	client_fd = accept(socket_fd, &client_addr, &client_addr_len);
+
+	inet_ntop(client_addr.sin_family, &client_addr.sin_addr, buffer,
+		  sizeof(buffer));
+	printf("Got connection from %s:%d\n", buffer,
+	       ntohs(client_addr.sin_port));
+
+	while (1) {
+		struct iovec iov = { .iov_base = iobuf,
+				     .iov_len = sizeof(iobuf) };
+		struct dmabuf_cmsg *dmabuf_cmsg = NULL;
+		struct dma_buf_sync sync = { 0 };
+		struct cmsghdr *cm = NULL;
+		struct msghdr msg = { 0 };
+		struct dmabuf_token token;
+		ssize_t ret;
+
+		is_devmem = false;
+		printf("\n\n");
+
+		msg.msg_iov = &iov;
+		msg.msg_iovlen = 1;
+		msg.msg_control = ctrl_data;
+		msg.msg_controllen = sizeof(ctrl_data);
+		ret = recvmsg(client_fd, &msg, MSG_SOCK_DEVMEM);
+		printf("recvmsg ret=%ld\n", ret);
+		if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
+			continue;
+		if (ret < 0) {
+			perror("recvmsg");
+			continue;
+		}
+		if (ret == 0) {
+			printf("client exited\n");
+			goto cleanup;
+		}
+
+		i++;
+		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+			if (cm->cmsg_level != SOL_SOCKET ||
+			    (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+			     cm->cmsg_type != SCM_DEVMEM_LINEAR)) {
+				fprintf(stdout, "skipping non-devmem cmsg\n");
+				continue;
+			}
+
+			dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);
+			is_devmem = true;
+
+			if (cm->cmsg_type == SCM_DEVMEM_LINEAR) {
+				/* TODO: process data copied from skb's linear
+				 * buffer.
+				 */
+				fprintf(stdout,
+					"SCM_DEVMEM_LINEAR. dmabuf_cmsg->frag_size=%u\n",
+					dmabuf_cmsg->frag_size);
+
+				continue;
+			}
+
+			token.token_start = dmabuf_cmsg->frag_token;
+			token.token_count = 1;
+
+			total_received += dmabuf_cmsg->frag_size;
+			printf("received frag_page=%llu, in_page_offset=%llu, frag_offset=%llu, frag_size=%u, token=%u, total_received=%lu, dmabuf_id=%u\n",
+			       dmabuf_cmsg->frag_offset >> PAGE_SHIFT,
+			       dmabuf_cmsg->frag_offset % getpagesize(),
+			       dmabuf_cmsg->frag_offset, dmabuf_cmsg->frag_size,
+			       dmabuf_cmsg->frag_token, total_received,
+			       dmabuf_cmsg->dmabuf_id);
+
+			if (dmabuf_cmsg->dmabuf_id != dmabuf_id)
+				error(1, 0,
+				      "received on wrong dmabuf_id: flow steering error\n");
+
+			if (dmabuf_cmsg->frag_size % getpagesize())
+				non_page_aligned_frags++;
+			else
+				page_aligned_frags++;
+
+			sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_START;
+			ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync);
+
+			if (do_validation)
+				validate_buffer(
+					((unsigned char *)buf_mem) +
+						dmabuf_cmsg->frag_offset,
+					dmabuf_cmsg->frag_size);
+			else
+				print_nonzero_bytes(
+					((unsigned char *)buf_mem) +
+						dmabuf_cmsg->frag_offset,
+					dmabuf_cmsg->frag_size);
+
+			sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_END;
+			ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync);
+
+			ret = setsockopt(client_fd, SOL_SOCKET,
+					 SO_DEVMEM_DONTNEED, &token,
+					 sizeof(token));
+			if (ret != 1)
+				error(1, 0,
+				      "SO_DEVMEM_DONTNEED not enough tokens");
+		}
+		if (!is_devmem)
+			error(1, 0, "flow steering error\n");
+
+		printf("total_received=%lu\n", total_received);
+	}
+
+	fprintf(stdout, "%s: ok\n", TEST_PREFIX);
+
+	fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n",
+		page_aligned_frags, non_page_aligned_frags);
+
+cleanup:
+
+	munmap(buf_mem, dmabuf_size);
+	close(client_fd);
+	close(socket_fd);
+	close(buf);
+	close(memfd);
+	close(devfd);
+	ynl_sock_destroy(ys);
+
+	return 0;
+}
+
+void run_devmem_tests(void)
+{
+	struct netdev_queue_id *queues;
+	int devfd, memfd, buf;
+	struct ynl_sock *ys;
+	size_t dmabuf_size;
+	size_t i = 0;
+
+	dmabuf_size = getpagesize() * NUM_PAGES;
+
+	create_udmabuf(&devfd, &memfd, &buf, dmabuf_size);
+
+	/* Configure RSS to divert all traffic from our devmem queues */
+	if (configure_rss())
+		error(1, 0, "rss error\n");
+
+	queues = calloc(num_queues, sizeof(*queues));
+
+	if (configure_headersplit(1))
+		error(1, 0, "Failed to configure header split\n");
+
+	if (!bind_rx_queue(ifindex, buf, queues, num_queues, &ys))
+		error(1, 0, "Binding empty queues array should have failed\n");
+
+	for (i = 0; i < num_queues; i++) {
+		queues[i]._present.type = 1;
+		queues[i]._present.id = 1;
+		queues[i].type = NETDEV_QUEUE_TYPE_RX;
+		queues[i].id = start_queue + i;
+	}
+
+	if (configure_headersplit(0))
+		error(1, 0, "Failed to configure header split\n");
+
+	if (!bind_rx_queue(ifindex, buf, queues, num_queues, &ys))
+		error(1, 0, "Configure dmabuf with header split off should have failed\n");
+
+	if (configure_headersplit(1))
+		error(1, 0, "Failed to configure header split\n");
+
+	for (i = 0; i < num_queues; i++) {
+		queues[i]._present.type = 1;
+		queues[i]._present.id = 1;
+		queues[i].type = NETDEV_QUEUE_TYPE_RX;
+		queues[i].id = start_queue + i;
+	}
+
+	if (bind_rx_queue(ifindex, buf, queues, num_queues, &ys))
+		error(1, 0, "Failed to bind\n");
+
+	/* Deactivating a bound queue should not be legal */
+	if (!configure_channels(num_queues, num_queues - 1))
+		error(1, 0, "Deactivating a bound queue should be illegal.\n");
+
+	/* Closing the netlink socket does an implicit unbind */
+	ynl_sock_destroy(ys);
+}
+
+int main(int argc, char *argv[])
+{
+	int is_server = 0, opt;
+
+	while ((opt = getopt(argc, argv, "ls:c:p:v:q:t:f:")) != -1) {
+		switch (opt) {
+		case 'l':
+			is_server = 1;
+			break;
+		case 's':
+			server_ip = optarg;
+			break;
+		case 'c':
+			client_ip = optarg;
+			break;
+		case 'p':
+			port = optarg;
+			break;
+		case 'v':
+			do_validation = atoll(optarg);
+			break;
+		case 'q':
+			num_queues = atoi(optarg);
+			break;
+		case 't':
+			start_queue = atoi(optarg);
+			break;
+		case 'f':
+			ifname = optarg;
+			break;
+		case '?':
+			printf("unknown option: %c\n", optopt);
+			break;
+		}
+	}
+
+	ifindex = if_nametoindex(ifname);
+
+	for (; optind < argc; optind++)
+		printf("extra arguments: %s\n", argv[optind]);
+
+	run_devmem_tests();
+
+	if (is_server)
+		return do_server();
+
+	return 0;
+}
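The nc pipeline shown in the documentation and in ncdevmem's usage comment
emits the repeating 7-byte sequence 01 02 03 04 05 06 00 (six pattern bytes
plus the newline that tr rewrites to \0), which is what validate_buffer()
expects with -v 7: the seed starts at 1, increments per byte, and wraps to 0
when it reaches do_validation. Where netcat is unavailable, a small sender
along these lines produces the same stream. This is an editor's sketch, not
part of the selftest; `fd` is assumed to be a TCP socket connected to the
ncdevmem server:

#include <string.h>
#include <unistd.h>

/* Stream the 7-byte pattern 01..06,00 that `ncdevmem -v 7` validates. */
static int send_pattern(int fd, size_t total)
{
        static const unsigned char period[7] = { 1, 2, 3, 4, 5, 6, 0 };
        unsigned char chunk[sizeof(period) * 512];
        size_t i, sent = 0;

        /* Fill a send buffer with whole periods; chunk[k] == period[k % 7]. */
        for (i = 0; i < sizeof(chunk); i++)
                chunk[i] = period[i % sizeof(period)];

        while (sent < total) {
                size_t pos = sent % sizeof(period);
                size_t want = sizeof(chunk) - pos;
                ssize_t n;

                if (want > total - sent)
                        want = total - sent;

                /* Resume mid-period if the previous write ended there, so
                 * the receiver's running seed stays in step across writes.
                 */
                n = write(fd, chunk + pos, want);
                if (n < 0)
                        return -1;
                sent += (size_t)n;
        }
        return 0;
}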
From patchwork Sun Aug 25 04:15:11 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13776593
Date: Sun, 25 Aug 2024 04:15:11 +0000
In-Reply-To: <20240825041511.324452-1-almasrymina@google.com>
References: <20240825041511.324452-1-almasrymina@google.com>
Message-ID: <20240825041511.324452-14-almasrymina@google.com>
Subject: [PATCH net-next v22 13/13] netdev: add dmabuf introspection
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org
Kicinski , "David S. Miller" , Eric Dumazet , Paolo Abeni , Jonathan Corbet , Richard Henderson , Ivan Kokshaysky , Matt Turner , Thomas Bogendoerfer , "James E.J. Bottomley" , Helge Deller , Andreas Larsson , Jesper Dangaard Brouer , Ilias Apalodimas , Steven Rostedt , Masami Hiramatsu , Mathieu Desnoyers , Arnd Bergmann , Steffen Klassert , Herbert Xu , David Ahern , Willem de Bruijn , " =?utf-8?b?QmrDtnJu?= =?utf-8?b?IFTDtnBlbA==?= " , Magnus Karlsson , Maciej Fijalkowski , Jonathan Lemon , Shuah Khan , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Pavel Begunkov , David Wei , Jason Gunthorpe , Yunsheng Lin , Shailend Chand , Harshitha Ramamurthy , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , Bagas Sanjaya , Christoph Hellwig , Nikolay Aleksandrov , Taehee Yoo Add dmabuf information to page_pool stats: $ ./cli.py --spec ../netlink/specs/netdev.yaml --dump page-pool-get ... {'dmabuf': 10, 'id': 456, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 455, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 454, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 453, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 452, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 451, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 450, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, {'dmabuf': 10, 'id': 449, 'ifindex': 3, 'inflight': 1023, 'inflight-mem': 4190208}, And queue stats: $ ./cli.py --spec ../netlink/specs/netdev.yaml --dump queue-get ... {'dmabuf': 10, 'id': 8, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 9, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 10, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 11, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 12, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 13, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 14, 'ifindex': 3, 'type': 'rx'}, {'dmabuf': 10, 'id': 15, 'ifindex': 3, 'type': 'rx'}, Suggested-by: Jakub Kicinski Signed-off-by: Mina Almasry Reviewed-by: Jakub Kicinski --- Documentation/netlink/specs/netdev.yaml | 10 ++++++++++ include/uapi/linux/netdev.h | 2 ++ net/core/netdev-genl.c | 10 ++++++++++ net/core/page_pool_user.c | 4 ++++ tools/include/uapi/linux/netdev.h | 2 ++ 5 files changed, 28 insertions(+) diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml index 0c747530c275..08412c279297 100644 --- a/Documentation/netlink/specs/netdev.yaml +++ b/Documentation/netlink/specs/netdev.yaml @@ -167,6 +167,10 @@ attribute-sets: "re-attached", they are just waiting to disappear. Attribute is absent if Page Pool has not been detached, and can still be used to allocate new memory. + - + name: dmabuf + doc: ID of the dmabuf this page-pool is attached to. + type: u32 - name: page-pool-info subset-of: page-pool @@ -268,6 +272,10 @@ attribute-sets: name: napi-id doc: ID of the NAPI instance which services this queue. type: u32 + - + name: dmabuf + doc: ID of the dmabuf attached to this queue, if any. 
+        type: u32
   -
     name: qstats
@@ -543,6 +551,7 @@ operations:
             - inflight
             - inflight-mem
             - detach-time
+            - dmabuf
       dump:
         reply: *pp-reply
       config-cond: page-pool
@@ -607,6 +616,7 @@ operations:
             - type
             - napi-id
             - ifindex
+            - dmabuf
       dump:
         request:
           attributes:

diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 91bf3ecc5f1d..7c308f04e7a0 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -93,6 +93,7 @@ enum {
	NETDEV_A_PAGE_POOL_INFLIGHT,
	NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
	NETDEV_A_PAGE_POOL_DETACH_TIME,
+	NETDEV_A_PAGE_POOL_DMABUF,
 
	__NETDEV_A_PAGE_POOL_MAX,
	NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -131,6 +132,7 @@ enum {
	NETDEV_A_QUEUE_IFINDEX,
	NETDEV_A_QUEUE_TYPE,
	NETDEV_A_QUEUE_NAPI_ID,
+	NETDEV_A_QUEUE_DMABUF,
 
	__NETDEV_A_QUEUE_MAX,
	NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 269faa37f84e..8c14676cf4ca 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -293,6 +293,7 @@ static int
 netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
			 u32 q_idx, u32 q_type, const struct genl_info *info)
 {
+	struct net_devmem_dmabuf_binding *binding;
	struct netdev_rx_queue *rxq;
	struct netdev_queue *txq;
	void *hdr;
@@ -312,6 +313,15 @@ netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
		if (rxq->napi && nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID,
					     rxq->napi->napi_id))
			goto nla_put_failure;
+
+		binding = (struct net_devmem_dmabuf_binding *)
+			rxq->mp_params.mp_priv;
+		if (binding) {
+			if (nla_put_u32(rsp, NETDEV_A_QUEUE_DMABUF,
+					binding->id))
+				goto nla_put_failure;
+		}
+
		break;
	case NETDEV_QUEUE_TYPE_TX:
		txq = netdev_get_tx_queue(netdev, q_idx);

diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 9b69066cc07e..7995c1e3477d 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -213,6 +213,7 @@ static int
 page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
		  const struct genl_info *info)
 {
+	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
	size_t inflight, refsz;
	void *hdr;
@@ -242,6 +243,9 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
			  pool->user.detach_time))
		goto err_cancel;
 
+	if (binding && nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
+		goto err_cancel;
+
	genlmsg_end(rsp, hdr);
 
	return 0;

diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 91bf3ecc5f1d..7c308f04e7a0 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -93,6 +93,7 @@ enum {
	NETDEV_A_PAGE_POOL_INFLIGHT,
	NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
	NETDEV_A_PAGE_POOL_DETACH_TIME,
+	NETDEV_A_PAGE_POOL_DMABUF,
 
	__NETDEV_A_PAGE_POOL_MAX,
	NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -131,6 +132,7 @@ enum {
	NETDEV_A_QUEUE_IFINDEX,
	NETDEV_A_QUEUE_TYPE,
	NETDEV_A_QUEUE_NAPI_ID,
+	NETDEV_A_QUEUE_DMABUF,
 
	__NETDEV_A_QUEUE_MAX,
	NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
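To exercise the new attribute from C rather than cli.py, a YNL userspace
sketch might look like the following. This is an editor's illustration, not
part of the patch: it assumes the generated netdev-user.h helpers follow the
usual YNL naming for the queue-get op (netdev_queue_get_req_alloc(),
_set_ifindex()/_set_type()/_set_id(), and a dmabuf field in the response);
check the generated headers before relying on the exact names:

#include <stdio.h>
#include "netdev-user.h"   /* generated by ynl-gen from netdev.yaml */

/* Print the dmabuf id bound to one RX queue, if any. */
static int dump_queue_dmabuf(unsigned int ifindex, unsigned int queue_id)
{
        struct netdev_queue_get_req *req;
        struct netdev_queue_get_rsp *rsp;
        struct ynl_error yerr;
        struct ynl_sock *ys;

        ys = ynl_sock_create(&ynl_netdev_family, &yerr);
        if (!ys) {
                fprintf(stderr, "YNL: %s\n", yerr.msg);
                return -1;
        }

        req = netdev_queue_get_req_alloc();
        netdev_queue_get_req_set_ifindex(req, ifindex);
        netdev_queue_get_req_set_type(req, NETDEV_QUEUE_TYPE_RX);
        netdev_queue_get_req_set_id(req, queue_id);

        rsp = netdev_queue_get(ys, req);
        netdev_queue_get_req_free(req);
        if (!rsp) {
                fprintf(stderr, "YNL: %s\n", ys->err.msg);
                ynl_sock_destroy(ys);
                return -1;
        }

        if (rsp->_present.dmabuf)
                printf("queue %u: dmabuf id %u\n", queue_id, rsp->dmabuf);
        else
                printf("queue %u: no dmabuf bound\n", queue_id);

        netdev_queue_get_rsp_free(rsp);
        ynl_sock_destroy(ys);
        return 0;
}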