From patchwork Mon Feb 3 22:39:11 2025
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13958365
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 3 Feb 2025 22:39:11 +0000
Message-ID: <20250203223916.1064540-2-almasrymina@google.com>
In-Reply-To: <20250203223916.1064540-1-almasrymina@google.com>
Subject: [PATCH net-next v3 1/6] net: add devmem TCP TX documentation
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, virtualization@lists.linux.dev, linux-kselftest@vger.kernel.org

Add documentation outlining the usage and details of the devmem TCP TX
API.

Signed-off-by: Mina Almasry

---

v2:
- Update documentation now that iov_base is the dmabuf offset (Stan)

---
 Documentation/networking/devmem.rst | 144 +++++++++++++++++++++++++++-
 1 file changed, 140 insertions(+), 4 deletions(-)

diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
index d95363645331..8166fe09da13 100644
--- a/Documentation/networking/devmem.rst
+++ b/Documentation/networking/devmem.rst
@@ -62,15 +62,15 @@ More Info
 
     https://lore.kernel.org/netdev/20240831004313.3713467-1-almasrymina@google.com/
 
 
-Interface
-=========
+RX Interface
+============
 
 
 Example
 -------
 
-tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
-the RX path of this API.
+./tools/testing/selftests/drivers/net/hw/ncdevmem:do_server shows an example of
+setting up the RX path of this API.
 
 
 NIC Setup
 ---------
@@ -235,6 +235,142 @@ can be less than the tokens provided by the user in case of:
 (a) an internal kernel leak bug.
 (b) the user passed more than 1024 frags.
 
+TX Interface
+============
+
+
+Example
+-------
+
+./tools/testing/selftests/drivers/net/hw/ncdevmem:do_client shows an example of
+setting up the TX path of this API.
+
+
+NIC Setup
+---------
+
+The user must bind a TX dmabuf to a given NIC using the netlink API::
+
+    struct netdev_bind_tx_req *req = NULL;
+    struct netdev_bind_tx_rsp *rsp = NULL;
+    struct ynl_error yerr;
+
+    *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+
+    req = netdev_bind_tx_req_alloc();
+    netdev_bind_tx_req_set_ifindex(req, ifindex);
+    netdev_bind_tx_req_set_fd(req, dmabuf_fd);
+
+    rsp = netdev_bind_tx(*ys, req);
+
+    tx_dmabuf_id = rsp->id;
+
+The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
+that has been bound.
+
+The user can unbind the dmabuf from the netdevice by closing the netlink socket
+that established the binding. We do this so that the binding is automatically
+unbound even if the userspace process crashes.
+
+Note that any reasonably well-behaved dmabuf from any exporter should work with
+devmem TCP, even if the dmabuf is not actually backed by devmem. An example of
+this is udmabuf, which wraps user memory (non-devmem) in a dmabuf.
+
+
+Socket Setup
+------------
+
+The user application must use the MSG_ZEROCOPY flag when sending devmem TCP.
+Devmem cannot be copied by the kernel, so the semantics of devmem TX are
+similar to the semantics of MSG_ZEROCOPY::
+
+    ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
+
+
+Sending data
+------------
+
+Devmem data is sent using the SCM_DEVMEM_DMABUF cmsg.
+
+The user should create a msghdr where:
+
+- iov_base is set to the offset into the dmabuf to start sending from, and
+- iov_len is set to the number of bytes to be sent from the dmabuf.
+
+The user passes the dmabuf id to send from via dmabuf_tx_cmsg.dmabuf_id.
+
+The example below sends 1024 bytes from offset 100 into the dmabuf, and 2048
+bytes from offset 2000 into the dmabuf. The dmabuf to send from is
+tx_dmabuf_id::
+
+    char ctrl_data[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))];
+    struct dmabuf_tx_cmsg ddmabuf;
+    struct msghdr msg = {};
+    struct cmsghdr *cmsg;
+    struct iovec iov[2];
+
+    iov[0].iov_base = (void *)100;
+    iov[0].iov_len = 1024;
+    iov[1].iov_base = (void *)2000;
+    iov[1].iov_len = 2048;
+
+    msg.msg_iov = iov;
+    msg.msg_iovlen = 2;
+
+    msg.msg_control = ctrl_data;
+    msg.msg_controllen = sizeof(ctrl_data);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(struct dmabuf_tx_cmsg));
+
+    ddmabuf.dmabuf_id = tx_dmabuf_id;
+
+    *((struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg)) = ddmabuf;
+
+    sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
+
+
+Reusing TX dmabufs
+------------------
+
+Similar to MSG_ZEROCOPY with regular memory, the user should not modify the
+contents of the dmabuf while a send operation is in progress. This is because
+the kernel does not keep a copy of the dmabuf contents. Instead, the kernel
+will pin and send data from the buffer available to the userspace.
+
+Just as in MSG_ZEROCOPY, the kernel notifies the userspace of send completions
+using MSG_ERRQUEUE::
+
+    int64_t tstop = gettimeofday_ms() + waittime_ms;
+    char control[CMSG_SPACE(100)] = {};
+    struct sock_extended_err *serr;
+    struct msghdr msg = {};
+    struct cmsghdr *cm;
+    int retries = 10;
+    __u32 hi, lo;
+
+    msg.msg_control = control;
+    msg.msg_controllen = sizeof(control);
+
+    while (gettimeofday_ms() < tstop) {
+        if (!do_poll(fd))
+            continue;
+
+        ret = recvmsg(fd, &msg, MSG_ERRQUEUE);
+
+        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+            serr = (void *)CMSG_DATA(cm);
+
+            hi = serr->ee_data;
+            lo = serr->ee_info;
+
+            fprintf(stdout, "tx complete [%d,%d]\n", lo, hi);
+        }
+    }
+
+After the associated sendmsg has been completed, the dmabuf can be reused by
+the userspace.
+
+
 Implementation & Caveats
 ========================
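Pulling the bind, send, and completion fragments of the new documentation
together, the complete userspace TX sequence looks roughly like the sketch
below. This is illustrative only: it assumes the YNL-generated netdev user
library and the declarations from the snippets above, and it elides all
error handling and the MSG_ERRQUEUE drain shown earlier.

	/* Sketch: one devmem TCP send, end to end. */
	static int devmem_tx_once(struct ynl_sock **ys, int socket_fd,
				  unsigned int ifindex, int dmabuf_fd)
	{
		char ctrl_data[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))];
		struct dmabuf_tx_cmsg ddmabuf;
		struct msghdr msg = {};
		struct cmsghdr *cmsg;
		struct iovec iov;
		__u32 tx_dmabuf_id;
		int opt = 1;
		struct netdev_bind_tx_req *req = netdev_bind_tx_req_alloc();

		/* 1. Bind the dmabuf for TX; the reply carries the dmabuf id. */
		netdev_bind_tx_req_set_ifindex(req, ifindex);
		netdev_bind_tx_req_set_fd(req, dmabuf_fd);
		tx_dmabuf_id = netdev_bind_tx(*ys, req)->id;

		/* 2. Devmem TX piggybacks on the MSG_ZEROCOPY machinery. */
		setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));

		/* 3. iov_base is a dmabuf offset, not a pointer: bytes [0, 4096). */
		iov.iov_base = (void *)0;
		iov.iov_len = 4096;
		msg.msg_iov = &iov;
		msg.msg_iovlen = 1;

		msg.msg_control = ctrl_data;
		msg.msg_controllen = sizeof(ctrl_data);
		cmsg = CMSG_FIRSTHDR(&msg);
		cmsg->cmsg_level = SOL_SOCKET;
		cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
		cmsg->cmsg_len = CMSG_LEN(sizeof(struct dmabuf_tx_cmsg));
		ddmabuf.dmabuf_id = tx_dmabuf_id;
		*((struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg)) = ddmabuf;

		/* 4. After this, wait for the MSG_ERRQUEUE completion before
		 * reusing the dmabuf contents.
		 */
		return sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
	}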
From patchwork Mon Feb 3 22:39:12 2025
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13958366
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 3 Feb 2025 22:39:12 +0000
Message-ID: <20250203223916.1064540-3-almasrymina@google.com>
In-Reply-To: <20250203223916.1064540-1-almasrymina@google.com>
Subject: [PATCH net-next v3 2/6] selftests: ncdevmem: Implement devmem TCP TX
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, virtualization@lists.linux.dev, linux-kselftest@vger.kernel.org

Add support for devmem TX in ncdevmem.

This is a combination of the ncdevmem from the devmem TCP series RFCv1,
which included the TX path, and work by Stan to include the netlink API,
refactored on top of his generic memory_provider support.

Signed-off-by: Mina Almasry
Signed-off-by: Stanislav Fomichev

---

v3:
- Update ncdevmem docs to run validation with RX-only and RX-with-TX.
- Fix build warnings (Stan).
- Make the validation expect new lines in the pattern so we can have the
  TX path behave like netcat (Stan).
- Change ret to errno in error() calls (Stan).
- Handle the case where client_ip is not provided (Stan).
- Don't assume mid is <= 2000 (Stan).

v2:
- Make errors a static variable so that we catch instances where there
  are fewer than 20 errors across different buffers.
- Fix the issue where the seed is reset to 0 instead of its starting
  value of 1.
- Use 1000ULL instead of 1000 to guard against overflow (Willem).
- Do not set POLLERR (Willem).
- Update the test to use the new interface where iov_base is the
  dmabuf_offset.
- Update the test to send 2 iovs instead of 1, so we get some test
  coverage over sending multiple iovs at once.
- Print the ifindex the test is using; useful for debugging issues
  where the test may fail because the ifindex of the socket is
  different from the dmabuf binding.
---
 .../selftests/drivers/net/hw/ncdevmem.c       | 300 +++++++++++++++++-
 1 file changed, 289 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/hw/ncdevmem.c b/tools/testing/selftests/drivers/net/hw/ncdevmem.c
index 19a6969643f4..a5ac78ed007e 100644
--- a/tools/testing/selftests/drivers/net/hw/ncdevmem.c
+++ b/tools/testing/selftests/drivers/net/hw/ncdevmem.c
@@ -9,22 +9,31 @@
  *     ncdevmem -s <server IP> [-c <client IP>] -f eth1 -l -p 5201
  *
  * On client:
- *     echo -n "hello\nworld" | nc -s <server IP> 5201 -p 5201
+ *     echo -n "hello\nworld" | \
+ *             ncdevmem -s <server IP> [-c <client IP>] -p 5201 -f eth1
  *
- * Test data validation:
+ * Note this is compatible with regular netcat. i.e. the sender or receiver can
+ * be replaced with regular netcat to test the RX or TX path in isolation.
+ *
+ * Test data validation (devmem TCP on RX only):
  *
  * On server:
  *     ncdevmem -s <server IP> [-c <client IP>] -f eth1 -l -p 5201 -v 7
  *
  * On client:
  *     yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
- *             tr \\n \\0 | \
- *             head -c 5G | \
+ *             head -c 1G | \
  *             nc <server IP> 5201 -p 5201
  *
+ * Test data validation (devmem TCP on RX and TX, validation happens on RX):
  *
- * Note this is compatible with regular netcat. i.e. the sender or receiver can
- * be replaced with regular netcat to test the RX or TX path in isolation.
+ * On server:
+ *     ncdevmem -s <server IP> [-c <client IP>] -l -p 5201 -v 8 -f eth1
+ *
+ * On client:
+ *     yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06\\x07) | \
+ *             head -c 1M | \
+ *             ncdevmem -s <server IP> [-c <client IP>] -p 5201 -f eth1
  */
 #define _GNU_SOURCE
 #define __EXPORTED_HEADERS__
 
@@ -40,15 +49,18 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -80,6 +92,8 @@ static int num_queues = -1;
 static char *ifname;
 static unsigned int ifindex;
 static unsigned int dmabuf_id;
+static uint32_t tx_dmabuf_id;
+static int waittime_ms = 500;
 
 struct memory_buffer {
 	int fd;
@@ -93,6 +107,8 @@ struct memory_provider {
 	struct memory_buffer *(*alloc)(size_t size);
 	void (*free)(struct memory_buffer *ctx);
+	void (*memcpy_to_device)(struct memory_buffer *dst, size_t off,
+				 void *src, int n);
 	void (*memcpy_from_device)(void *dst, struct memory_buffer *src,
 				   size_t off, int n);
 };
@@ -153,6 +169,20 @@ static void udmabuf_free(struct memory_buffer *ctx)
 	free(ctx);
 }
 
+static void udmabuf_memcpy_to_device(struct memory_buffer *dst, size_t off,
+				     void *src, int n)
+{
+	struct dma_buf_sync sync = {};
+
+	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
+	ioctl(dst->fd, DMA_BUF_IOCTL_SYNC, &sync);
+
+	memcpy(dst->buf_mem + off, src, n);
+
+	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
+	ioctl(dst->fd, DMA_BUF_IOCTL_SYNC, &sync);
+}
+
 static void udmabuf_memcpy_from_device(void *dst, struct memory_buffer *src,
 				       size_t off, int n)
 {
@@ -170,6 +200,7 @@ static void udmabuf_memcpy_from_device(void *dst, struct memory_buffer *src,
 static struct memory_provider udmabuf_memory_provider = {
 	.alloc = udmabuf_alloc,
 	.free = udmabuf_free,
+	.memcpy_to_device = udmabuf_memcpy_to_device,
 	.memcpy_from_device = udmabuf_memcpy_from_device,
 };
 
@@ -188,14 +219,16 @@ void validate_buffer(void *line, size_t size)
 {
 	static unsigned char seed = 1;
 	unsigned char *ptr = line;
-	int errors = 0;
+	unsigned char expected;
+	static int errors;
 	size_t i;
 
 	for (i = 0; i < size; i++) {
-		if (ptr[i] != seed) {
+		expected = seed ? seed : '\n';
+		if (ptr[i] != expected) {
 			fprintf(stderr,
 				"Failed validation: expected=%u, actual=%u, index=%lu\n",
-				seed, ptr[i], i);
+				expected, ptr[i], i);
 			errors++;
 			if (errors > 20)
 				error(1, 0, "validation failed.");
@@ -394,6 +427,49 @@ static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
 	return -1;
 }
 
+static int bind_tx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
+			 struct ynl_sock **ys)
+{
+	struct netdev_bind_tx_req *req = NULL;
+	struct netdev_bind_tx_rsp *rsp = NULL;
+	struct ynl_error yerr;
+
+	*ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+	if (!*ys) {
+		fprintf(stderr, "YNL: %s\n", yerr.msg);
+		return -1;
+	}
+
+	req = netdev_bind_tx_req_alloc();
+	netdev_bind_tx_req_set_ifindex(req, ifindex);
+	netdev_bind_tx_req_set_fd(req, dmabuf_fd);
+
+	rsp = netdev_bind_tx(*ys, req);
+	if (!rsp) {
+		perror("netdev_bind_tx");
+		goto err_close;
+	}
+
+	if (!rsp->_present.id) {
+		perror("id not present");
+		goto err_close;
+	}
+
+	fprintf(stderr, "got tx dmabuf id=%d\n", rsp->id);
+	tx_dmabuf_id = rsp->id;
+
+	netdev_bind_tx_req_free(req);
+	netdev_bind_tx_rsp_free(rsp);
+
+	return 0;
+
+err_close:
+	fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg);
+	netdev_bind_tx_req_free(req);
+	ynl_sock_destroy(*ys);
+	return -1;
+}
+
 static void enable_reuseaddr(int fd)
 {
 	int opt = 1;
@@ -432,7 +508,7 @@ static int parse_address(const char *str, int port, struct sockaddr_in6 *sin6)
 	return 0;
 }
 
-int do_server(struct memory_buffer *mem)
+static int do_server(struct memory_buffer *mem)
 {
 	char ctrl_data[sizeof(int) * 20000];
 	struct netdev_queue_id *queues;
@@ -686,6 +762,206 @@ void run_devmem_tests(void)
 	provider->free(mem);
 }
 
+static uint64_t gettimeofday_ms(void)
+{
+	struct timeval tv;
+
+	gettimeofday(&tv, NULL);
+	return (tv.tv_sec * 1000ULL) + (tv.tv_usec / 1000ULL);
+}
+
+static int do_poll(int fd)
+{
+	struct pollfd pfd;
+	int ret;
+
+	pfd.revents = 0;
+	pfd.fd = fd;
+
+	ret = poll(&pfd, 1, waittime_ms);
+	if (ret == -1)
+		error(1, errno, "poll");
+
+	return ret && (pfd.revents & POLLERR);
+}
+
+static void wait_compl(int fd)
+{
+	int64_t tstop = gettimeofday_ms() + waittime_ms;
+	char control[CMSG_SPACE(100)] = {};
+	struct sock_extended_err *serr;
+	struct msghdr msg = {};
+	struct cmsghdr *cm;
+	__u32 hi, lo;
+	int ret;
+
+	msg.msg_control = control;
+	msg.msg_controllen = sizeof(control);
+
+	while (gettimeofday_ms() < tstop) {
+		if (!do_poll(fd))
+			continue;
+
+		ret = recvmsg(fd, &msg, MSG_ERRQUEUE);
+		if (ret < 0) {
+			if (errno == EAGAIN)
+				continue;
+			error(1, errno, "recvmsg(MSG_ERRQUEUE)");
+			return;
+		}
+		if (msg.msg_flags & MSG_CTRUNC)
+			error(1, 0, "MSG_CTRUNC\n");
+
+		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+			if (cm->cmsg_level != SOL_IP &&
+			    cm->cmsg_level != SOL_IPV6)
+				continue;
+			if (cm->cmsg_level == SOL_IP &&
+			    cm->cmsg_type != IP_RECVERR)
+				continue;
+			if (cm->cmsg_level == SOL_IPV6 &&
+			    cm->cmsg_type != IPV6_RECVERR)
+				continue;
+
+			serr = (void *)CMSG_DATA(cm);
+			if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY)
+				error(1, 0, "wrong origin %u", serr->ee_origin);
+			if (serr->ee_errno != 0)
+				error(1, 0, "wrong errno %d", serr->ee_errno);
+
+			hi = serr->ee_data;
+			lo = serr->ee_info;
+
+			fprintf(stderr, "tx complete [%d,%d]\n", lo, hi);
+			return;
+		}
+	}
+
+	error(1, 0, "did not receive tx completion");
+}
+
+static int do_client(struct memory_buffer *mem)
+{
+	char ctrl_data[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))];
+	struct sockaddr_in6 server_sin;
+	struct sockaddr_in6 client_sin;
+	struct dmabuf_tx_cmsg ddmabuf;
+	struct ynl_sock *ys = NULL;
+	struct msghdr msg = {};
+	ssize_t line_size = 0;
+	struct cmsghdr *cmsg;
+	struct iovec iov[2];
+	char *line = NULL;
+	unsigned long mid;
+	size_t len = 0;
+	int socket_fd;
+	int ret;
+	int opt = 1;
+
+	ret = parse_address(server_ip, atoi(port), &server_sin);
+	if (ret < 0)
+		error(1, 0, "parse server address");
+
+	socket_fd = socket(AF_INET6, SOCK_STREAM, 0);
+	if (socket_fd < 0)
+		error(1, socket_fd, "create socket");
+
+	enable_reuseaddr(socket_fd);
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_BINDTODEVICE, ifname,
+			 strlen(ifname) + 1);
+	if (ret)
+		error(1, errno, "bindtodevice");
+
+	if (bind_tx_queue(ifindex, mem->fd, &ys))
+		error(1, 0, "Failed to bind\n");
+
+	if (client_ip) {
+		ret = parse_address(client_ip, atoi(port), &client_sin);
+		if (ret < 0)
+			error(1, 0, "parse client address");
+
+		ret = bind(socket_fd, &client_sin, sizeof(client_sin));
+		if (ret)
+			error(1, errno, "bind");
+	}
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
+	if (ret)
+		error(1, errno, "set sock opt");
+
+	fprintf(stderr, "Connect to %s %d (via %s)\n", server_ip,
+		ntohs(server_sin.sin6_port), ifname);
+
+	ret = connect(socket_fd, &server_sin, sizeof(server_sin));
+	if (ret)
+		error(1, errno, "connect");
+
+	while (1) {
+		free(line);
+		line = NULL;
+		line_size = getline(&line, &len, stdin);
+
+		if (line_size < 0)
+			break;
+
+		mid = (line_size / 2) + 1;
+
+		iov[0].iov_base = (void *)1;
+		iov[0].iov_len = mid;
+		iov[1].iov_base = (void *)(mid + 2);
+		iov[1].iov_len = line_size - mid;
+
+		provider->memcpy_to_device(mem, (size_t)iov[0].iov_base, line,
+					   iov[0].iov_len);
+		provider->memcpy_to_device(mem, (size_t)iov[1].iov_base,
+					   line + iov[0].iov_len,
+					   iov[1].iov_len);
+
+		fprintf(stderr,
+			"read line_size=%ld iov[0].iov_base=%lu, iov[0].iov_len=%lu, iov[1].iov_base=%lu, iov[1].iov_len=%lu\n",
+			line_size, (unsigned long)iov[0].iov_base,
+			iov[0].iov_len, (unsigned long)iov[1].iov_base,
+			iov[1].iov_len);
+
+		msg.msg_iov = iov;
+		msg.msg_iovlen = 2;
+
+		msg.msg_control = ctrl_data;
+		msg.msg_controllen = sizeof(ctrl_data);
+
+		cmsg = CMSG_FIRSTHDR(&msg);
+		cmsg->cmsg_level = SOL_SOCKET;
+		cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
+		cmsg->cmsg_len = CMSG_LEN(sizeof(struct dmabuf_tx_cmsg));
+
+		ddmabuf.dmabuf_id = tx_dmabuf_id;
+
+		*((struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg)) = ddmabuf;
+
+		ret = sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
+		if (ret < 0)
+			error(1, errno, "Failed sendmsg");
+
+		fprintf(stderr, "sendmsg_ret=%d\n", ret);
+
+		if (ret != line_size)
+			error(1, errno, "Did not send all bytes");
+
+		wait_compl(socket_fd);
+	}
+
+	fprintf(stderr, "%s: tx ok\n", TEST_PREFIX);
+
+	free(line);
+	close(socket_fd);
+
+	if (ys)
+		ynl_sock_destroy(ys);
+
+	return 0;
+}
+
 int main(int argc, char *argv[])
 {
 	struct memory_buffer *mem;
@@ -729,6 +1005,8 @@ int main(int argc, char *argv[])
 
 	ifindex = if_nametoindex(ifname);
 
+	fprintf(stderr, "using ifindex=%u\n", ifindex);
+
 	if (!server_ip && !client_ip) {
 		if (start_queue < 0 && num_queues < 0) {
 			num_queues = rxq_num(ifindex);
@@ -779,7 +1057,7 @@ int main(int argc, char *argv[])
 		error(1, 0, "Missing -p argument\n");
 
 	mem = provider->alloc(getpagesize() * NUM_PAGES);
-	ret = is_server ? do_server(mem) : 1;
+	ret = is_server ? do_server(mem) : do_client(mem);
 	provider->free(mem);
 
 	return ret;
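The selftest's memory provider is udmabuf; its allocation path sits outside
this excerpt, but for context, creating a udmabuf generally follows the
pattern below. This is a hedged sketch of the standard /dev/udmabuf flow,
not the selftest's exact udmabuf_alloc():

	/* Sketch: wrap anonymous user memory in a dmabuf via udmabuf.
	 * Requires _GNU_SOURCE; size must be a multiple of the page size.
	 * Error handling elided for brevity.
	 */
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/memfd.h>
	#include <linux/udmabuf.h>

	static int create_udmabuf(size_t size, int *out_memfd)
	{
		struct udmabuf_create create = {};
		int devfd, memfd, dmabuf_fd;

		devfd = open("/dev/udmabuf", O_RDWR);
		memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
		ftruncate(memfd, size);
		/* udmabuf requires the backing memfd to be sealed against shrinking */
		fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

		create.memfd = memfd;
		create.offset = 0;
		create.size = size;
		dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

		*out_memfd = memfd;
		return dmabuf_fd;
	}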
From patchwork Mon Feb 3 22:39:13 2025
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13958367
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 3 Feb 2025 22:39:13 +0000
Message-ID: <20250203223916.1064540-4-almasrymina@google.com>
In-Reply-To: <20250203223916.1064540-1-almasrymina@google.com>
Subject: [PATCH net-next v3 3/6] net: add get_netmem/put_netmem support
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, virtualization@lists.linux.dev, linux-kselftest@vger.kernel.org

Currently net_iovs support only pp ref counts, and do not support a page
ref equivalent.

This is fine for the RX path as net_iovs are used exclusively with the pp
and only pp refcounting is needed there. The TX path however does not use
pp ref counts, thus, support for get_page/put_page equivalent is needed
for netmem.

Support get_netmem/put_netmem. Check the type of the netmem before
passing it to page or net_iov specific code to obtain a page ref
equivalent.

For dmabuf net_iovs, we obtain a ref on the underlying binding. This
ensures the entire binding doesn't disappear until all the net_iovs have
been put_netmem'ed. We do not need to track the refcount of individual
dmabuf net_iovs as we don't allocate/free them from a pool similar to
what the buddy allocator does for pages.

This code is written to be extensible by other net_iov implementers.
get_netmem/put_netmem will check the type of the netmem and route it to
the correct helper:

pages -> [get|put]_page()
dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
new net_iovs -> new helpers

Signed-off-by: Mina Almasry

---

v2:
- Add comment on top of refcount_t ref explaining the usage in the TX
  path.
- Fix missing definition of net_devmem_dmabuf_binding_put in this patch.
---
 include/linux/skbuff_ref.h |  4 ++--
 include/net/netmem.h       |  3 +++
 net/core/devmem.c          | 10 ++++++++++
 net/core/devmem.h          | 20 ++++++++++++++++++++
 net/core/skbuff.c          | 30 ++++++++++++++++++++++++++++++
 5 files changed, 65 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h
index 0f3c58007488..9e49372ef1a0 100644
--- a/include/linux/skbuff_ref.h
+++ b/include/linux/skbuff_ref.h
@@ -17,7 +17,7 @@
  */
 static inline void __skb_frag_ref(skb_frag_t *frag)
 {
-	get_page(skb_frag_page(frag));
+	get_netmem(skb_frag_netmem(frag));
 }
 
 /**
@@ -40,7 +40,7 @@ static inline void skb_page_unref(netmem_ref netmem, bool recycle)
 	if (recycle && napi_pp_put_page(netmem))
 		return;
 #endif
-	put_page(netmem_to_page(netmem));
+	put_netmem(netmem);
 }
 
 /**
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 1b58faa4f20f..d30f31878a09 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -245,4 +245,7 @@ static inline unsigned long netmem_get_dma_addr(netmem_ref netmem)
 	return __netmem_clear_lsb(netmem)->dma_addr;
 }
 
+void get_netmem(netmem_ref netmem);
+void put_netmem(netmem_ref netmem);
+
 #endif /* _NET_NETMEM_H */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 3bba3f018df0..20985a570662 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -333,6 +333,16 @@ void dev_dmabuf_uninstall(struct net_device *dev)
 	}
 }
 
+void net_devmem_get_net_iov(struct net_iov *niov)
+{
+	net_devmem_dmabuf_binding_get(niov->owner->binding);
+}
+
+void net_devmem_put_net_iov(struct net_iov *niov)
+{
+	net_devmem_dmabuf_binding_put(niov->owner->binding);
+}
+
 /*** "Dmabuf devmem memory provider" ***/
 
 int mp_dmabuf_devmem_init(struct page_pool *pool)
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..8b51caff5a0e 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -27,6 +27,10 @@ struct net_devmem_dmabuf_binding {
 	 * The binding undos itself and unmaps the underlying dmabuf once all
 	 * those refs are dropped and the binding is no longer desired or in
 	 * use.
+	 *
+	 * net_devmem_get_net_iov() on dmabuf net_iovs will increment this
+	 * reference, making sure that the binding remains alive until all
+	 * the net_iovs are no longer used.
	 */
 	refcount_t ref;
 
@@ -119,6 +123,9 @@ net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
 	__net_devmem_dmabuf_binding_free(binding);
 }
 
+void net_devmem_get_net_iov(struct net_iov *niov);
+void net_devmem_put_net_iov(struct net_iov *niov);
+
 struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
@@ -126,6 +133,19 @@ void net_devmem_free_dmabuf(struct net_iov *ppiov);
 #else
 struct net_devmem_dmabuf_binding;
 
+static inline void
+net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
+{
+}
+
+static inline void net_devmem_get_net_iov(struct net_iov *niov)
+{
+}
+
+static inline void net_devmem_put_net_iov(struct net_iov *niov)
+{
+}
+
 static inline void
 __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a441613a1e6c..815245d5c36b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -88,6 +88,7 @@
 #include
 
 #include "dev.h"
+#include "devmem.h"
 #include "netmem_priv.h"
 #include "sock_destructor.h"
 
@@ -7290,3 +7291,32 @@ bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
 	return false;
 }
 EXPORT_SYMBOL(csum_and_copy_from_iter_full);
+
+void get_netmem(netmem_ref netmem)
+{
+	if (netmem_is_net_iov(netmem)) {
+		/* Assume any net_iov is devmem and route it to
+		 * net_devmem_get_net_iov. As new net_iov types are added they
+		 * need to be checked here.
+		 */
+		net_devmem_get_net_iov(netmem_to_net_iov(netmem));
+		return;
+	}
+	get_page(netmem_to_page(netmem));
+}
+EXPORT_SYMBOL(get_netmem);
+
+void put_netmem(netmem_ref netmem)
+{
+	if (netmem_is_net_iov(netmem)) {
+		/* Assume any net_iov is devmem and route it to
+		 * net_devmem_put_net_iov. As new net_iov types are added they
+		 * need to be checked here.
+		 */
+		net_devmem_put_net_iov(netmem_to_net_iov(netmem));
+		return;
+	}
+
+	put_page(netmem_to_page(netmem));
+}
+EXPORT_SYMBOL(put_netmem);
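The intended usage is a strict get/put pairing around any reference taken
outside the page_pool, regardless of what backs the frag. A minimal sketch
with a hypothetical caller (not code from this series):

	/* Sketch: hold a page-ref equivalent on an skb frag across some
	 * work, whether the frag is backed by a page or a dmabuf net_iov.
	 */
	static void frag_hold_example(skb_frag_t *frag)
	{
		netmem_ref netmem = skb_frag_netmem(frag);

		get_netmem(netmem);	/* page: get_page(); net_iov: binding ref */
		/* ... the frag's backing memory is safe to reference here ... */
		put_netmem(netmem);	/* page: put_page(); net_iov: binding put */
	}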
From patchwork Mon Feb 3 22:39:14 2025
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13958368
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 3 Feb 2025 22:39:14 +0000
Message-ID: <20250203223916.1064540-5-almasrymina@google.com>
In-Reply-To: <20250203223916.1064540-1-almasrymina@google.com>
Subject: [PATCH net-next v3 4/6] net: devmem: TCP tx netlink api
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, virtualization@lists.linux.dev, linux-kselftest@vger.kernel.org

From: Stanislav Fomichev

Add bind-tx netlink call to attach dmabuf for TX; queue is not required,
only ifindex and dmabuf fd for attachment.

Signed-off-by: Stanislav Fomichev
Signed-off-by: Mina Almasry

---

v3:
- Fix ynl-regen.sh error (Simon).
---
 Documentation/netlink/specs/netdev.yaml | 12 ++++++++++++
 include/uapi/linux/netdev.h             |  1 +
 net/core/netdev-genl-gen.c              | 13 +++++++++++++
 net/core/netdev-genl-gen.h              |  1 +
 net/core/netdev-genl.c                  |  6 ++++++
 tools/include/uapi/linux/netdev.h       |  1 +
 6 files changed, 34 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index cbb544bd6c84..93f4333e7bc6 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -711,6 +711,18 @@ operations:
             - defer-hard-irqs
             - gro-flush-timeout
             - irq-suspend-timeout
+    -
+      name: bind-tx
+      doc: Bind dmabuf to netdev for TX
+      attribute-set: dmabuf
+      do:
+        request:
+          attributes:
+            - ifindex
+            - fd
+        reply:
+          attributes:
+            - id
 
 kernel-family:
   headers: [ "linux/list.h"]
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e4be227d3ad6..04364ef5edbe 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -203,6 +203,7 @@ enum {
 	NETDEV_CMD_QSTATS_GET,
 	NETDEV_CMD_BIND_RX,
 	NETDEV_CMD_NAPI_SET,
+	NETDEV_CMD_BIND_TX,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 996ac6a449eb..f27608d6301c 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -99,6 +99,12 @@ static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_IRQ_SUSPE
 	[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT] = { .type = NLA_UINT, },
 };
 
+/* NETDEV_CMD_BIND_TX - do */
+static const struct nla_policy netdev_bind_tx_nl_policy[NETDEV_A_DMABUF_FD + 1] = {
+	[NETDEV_A_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+	[NETDEV_A_DMABUF_FD] = { .type = NLA_U32, },
+};
+
 /* Ops table for netdev */
 static const struct genl_split_ops netdev_nl_ops[] = {
 	{
@@ -190,6 +196,13 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.maxattr	= NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
+	{
+		.cmd		= NETDEV_CMD_BIND_TX,
+		.doit		= netdev_nl_bind_tx_doit,
+		.policy		= netdev_bind_tx_nl_policy,
+		.maxattr	= NETDEV_A_DMABUF_FD,
+		.flags		= GENL_CMD_CAP_DO,
+	},
 };
 
 static const struct genl_multicast_group netdev_nl_mcgrps[] = {
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index e09dd7539ff2..c1fed66e92b9 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -34,6 +34,7 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 				struct netlink_callback *cb);
 int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_napi_set_doit(struct sk_buff *skb, struct genl_info *info);
+int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info);
 
 enum {
 	NETDEV_NLGRP_MGMT,
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 715f85c6b62e..0e41699df419 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -911,6 +911,12 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 	return err;
 }
 
+/* stub */
+int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info)
+{
+	return 0;
+}
+
 void netdev_nl_sock_priv_init(struct list_head *priv)
 {
 	INIT_LIST_HEAD(priv);
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e4be227d3ad6..04364ef5edbe 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -203,6 +203,7 @@ enum {
 	NETDEV_CMD_QSTATS_GET,
 	NETDEV_CMD_BIND_RX,
 	NETDEV_CMD_NAPI_SET,
+	NETDEV_CMD_BIND_TX,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
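With the spec change above, the new op should also be reachable from the
generic YNL CLI. An illustrative invocation is sketched below; the ifindex
and fd values are placeholders, and until the next patch the kernel-side
doit handler is only a stub:

	$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
		--do bind-tx --json '{"ifindex": 2, "fd": 10}'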
From patchwork Mon Feb 3 22:39:15 2025
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13958369
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 3 Feb 2025 22:39:15 +0000
Message-ID: <20250203223916.1064540-6-almasrymina@google.com>
In-Reply-To: <20250203223916.1064540-1-almasrymina@google.com>
Subject: [PATCH net-next v3 5/6] net: devmem: Implement TX path
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, virtualization@lists.linux.dev, linux-kselftest@vger.kernel.org

Augment dmabuf binding to be able to handle TX. In addition to all the RX
binding work, we also create the tx_vec needed for the TX path.

Provide an API for sendmsg to be able to send dmabufs bound to this
device:

- Provide a new dmabuf_tx_cmsg which includes the dmabuf to send from.
- MSG_ZEROCOPY with SCM_DEVMEM_DMABUF cmsg indicates send from dma-buf.

Devmem is uncopyable, so piggyback off the existing MSG_ZEROCOPY
implementation, while disabling instances where MSG_ZEROCOPY falls back
to copying.

We additionally pipe the binding down to the new
zerocopy_fill_skb_from_devmem, which fills a TX skb with net_iov netmems
instead of the traditional page netmems.

We also special case skb_frag_dma_map to return the dma-address of these
dmabuf net_iovs instead of attempting to map pages.

Based on work by Stanislav Fomichev. A lot of the meat of the
implementation came from devmem TCP RFC v1[1], which included the TX
path, but Stan did all the rebasing on top of netmem/net_iov.

Cc: Stanislav Fomichev
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry

---

v3:
- Use kvmalloc_array instead of kcalloc (Stan).
- Fix unreachable code warning (Simon).

v2:
- Remove dmabuf_offset from the dmabuf cmsg.
- Update zerocopy_fill_skb_from_devmem to interpret the
  iov_base/iter_iov_addr as the offset into the dmabuf to send from
  (Stan).
- Remove the confusing binding->tx_iter, which is not needed if we
  interpret the iov_base/iter_iov_addr as the offset into the dmabuf
  (Stan).
- Remove check for binding->sgt and binding->sgt->nents in dmabuf
  binding.
- Simplify the calculation of binding->tx_vec.
- Check in net_devmem_get_binding that the binding we're returning has
  ifindex matching the sending socket (Willem).
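One point from the v2 notes above deserves emphasis before the diff: with
this interface, the iovecs passed to sendmsg() carry dmabuf offsets rather
than process virtual addresses. A small illustrative fragment (offsets and
lengths hypothetical):

	/* Send dmabuf bytes [4096, 6144) followed by bytes [0, 512):
	 * iov_base holds an offset into the bound dmabuf, not a pointer,
	 * so the iovecs address device memory the process never mapped.
	 */
	struct iovec iov[2] = {
		{ .iov_base = (void *)4096, .iov_len = 2048 },
		{ .iov_base = (void *)0,    .iov_len = 512  },
	};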
---
 include/linux/skbuff.h                  | 15 +++-
 include/net/sock.h                      |  1 +
 include/uapi/linux/uio.h                |  6 +-
 net/core/datagram.c                     | 41 ++++++++++-
 net/core/devmem.c                       | 97 +++++++++++++++++++++++--
 net/core/devmem.h                       | 42 ++++++++++-
 net/core/netdev-genl.c                  | 64 +++++++++++++++-
 net/core/skbuff.c                       |  6 +-
 net/core/sock.c                         |  8 ++
 net/ipv4/tcp.c                          | 36 ++++++---
 net/vmw_vsock/virtio_transport_common.c |  3 +-
 11 files changed, 285 insertions(+), 34 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index bb2b751d274a..3ff8f568c382 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1711,9 +1711,12 @@ struct ubuf_info *msg_zerocopy_realloc(struct sock *sk, size_t size,
 
 void msg_zerocopy_put_abort(struct ubuf_info *uarg, bool have_uref);
 
+struct net_devmem_dmabuf_binding;
+
 int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 			    struct sk_buff *skb, struct iov_iter *from,
-			    size_t length);
+			    size_t length,
+			    struct net_devmem_dmabuf_binding *binding);
 
 int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 				struct iov_iter *from, size_t length);
@@ -1721,12 +1724,14 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 static inline int skb_zerocopy_iter_dgram(struct sk_buff *skb,
 					  struct msghdr *msg, int len)
 {
-	return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len);
+	return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len,
+				       NULL);
 }
 
 int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 			     struct msghdr *msg, int len,
-			     struct ubuf_info *uarg);
+			     struct ubuf_info *uarg,
+			     struct net_devmem_dmabuf_binding *binding);
 
 /* Internal */
 #define skb_shinfo(SKB)	((struct skb_shared_info *)(skb_end_pointer(SKB)))
@@ -3697,6 +3702,10 @@ static inline dma_addr_t __skb_frag_dma_map(struct device *dev,
 					    size_t offset, size_t size,
 					    enum dma_data_direction dir)
 {
+	if (skb_frag_is_net_iov(frag)) {
+		return netmem_to_net_iov(frag->netmem)->dma_addr + offset +
+		       frag->offset;
+	}
 	return dma_map_page(dev, skb_frag_page(frag),
 			    skb_frag_off(frag) + offset, size, dir);
 }
diff --git a/include/net/sock.h b/include/net/sock.h
index 8036b3b79cd8..09eb918525b6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1822,6 +1822,7 @@ struct sockcm_cookie {
 	u32 tsflags;
 	u32 ts_opt_id;
 	u32 priority;
+	u32 dmabuf_id;
 };
 
 static inline void sockcm_init(struct sockcm_cookie *sockc,
diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index 649739e0c404..866bd5dfe39f 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -38,10 +38,14 @@ struct dmabuf_token {
 	__u32 token_count;
 };
 
+struct dmabuf_tx_cmsg {
+	__u32 dmabuf_id;
+};
+
 /*
  *	UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
  */
- 
+
 #define UIO_FASTIOV	8
 #define UIO_MAXIOV	1024
 
diff --git a/net/core/datagram.c b/net/core/datagram.c
index f0693707aece..c989606ff58d 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -63,6 +63,8 @@
 #include
 #include
 
+#include "devmem.h"
+
 /*
  *	Is a socket 'connection oriented' ?
  */
@@ -692,9 +694,42 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 	return 0;
 }
 
+static int
+zerocopy_fill_skb_from_devmem(struct sk_buff *skb, struct iov_iter *from,
+			      int length,
+			      struct net_devmem_dmabuf_binding *binding)
+{
+	int i = skb_shinfo(skb)->nr_frags;
+	size_t virt_addr, size, off;
+	struct net_iov *niov;
+
+	while (length && iov_iter_count(from)) {
+		if (i == MAX_SKB_FRAGS)
+			return -EMSGSIZE;
+
+		virt_addr = (size_t)iter_iov_addr(from);
+		niov = net_devmem_get_niov_at(binding, virt_addr, &off, &size);
+		if (!niov)
+			return -EFAULT;
+
+		size = min_t(size_t, size, length);
+		size = min_t(size_t, size, iter_iov_len(from));
+
+		get_netmem(net_iov_to_netmem(niov));
+		skb_add_rx_frag_netmem(skb, i, net_iov_to_netmem(niov), off,
+				       size, PAGE_SIZE);
+		iov_iter_advance(from, size);
+		length -= size;
+		i++;
+	}
+
+	return 0;
+}
+
 int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 			    struct sk_buff *skb, struct iov_iter *from,
-			    size_t length)
+			    size_t length,
+			    struct net_devmem_dmabuf_binding *binding)
 {
 	unsigned long orig_size = skb->truesize;
 	unsigned long truesize;
@@ -702,6 +737,8 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 
 	if (msg && msg->msg_ubuf && msg->sg_from_iter)
 		ret = msg->sg_from_iter(skb, from, length);
+	else if (unlikely(binding))
+		ret = zerocopy_fill_skb_from_devmem(skb, from, length, binding);
 	else
 		ret = zerocopy_fill_skb_from_iter(skb, from, length);
 
@@ -735,7 +772,7 @@ int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
 	if (skb_copy_datagram_from_iter(skb, 0, from, copy))
 		return -EFAULT;
 
-	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U);
+	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U, NULL);
 }
 EXPORT_SYMBOL(zerocopy_sg_from_iter);
 
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 20985a570662..5de887545f5e 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "devmem.h"
@@ -64,8 +65,10 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 	dma_buf_detach(binding->dmabuf, binding->attachment);
 	dma_buf_put(binding->dmabuf);
 	xa_destroy(&binding->bound_rxqs);
+	kvfree(binding->tx_vec);
 	kfree(binding);
 }
+EXPORT_SYMBOL(__net_devmem_dmabuf_binding_free);
 
 struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
@@ -110,6 +113,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	unsigned long xa_idx;
 	unsigned int rxq_idx;
 
+	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+
+	/* Ensure no tx net_devmem_lookup_dmabuf() are in flight after the
+	 * erase.
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 20985a570662..5de887545f5e 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "devmem.h"
@@ -64,8 +65,10 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 	dma_buf_detach(binding->dmabuf, binding->attachment);
 	dma_buf_put(binding->dmabuf);
 	xa_destroy(&binding->bound_rxqs);
+	kvfree(binding->tx_vec);
 	kfree(binding);
 }
+EXPORT_SYMBOL(__net_devmem_dmabuf_binding_free);
 
 struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
@@ -110,6 +113,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	unsigned long xa_idx;
 	unsigned int rxq_idx;
 
+	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+
+	/* Ensure no TX-side net_devmem_lookup_dmabuf() calls are in flight
+	 * after the erase.
+	 */
+	synchronize_net();
+
 	if (binding->list.next)
 		list_del(&binding->list);
@@ -123,8 +133,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 		WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
 	}
 
-	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
-
 	net_devmem_dmabuf_binding_put(binding);
 }
@@ -185,8 +193,9 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 }
 
 struct net_devmem_dmabuf_binding *
-net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
-		       struct netlink_ext_ack *extack)
+net_devmem_bind_dmabuf(struct net_device *dev,
+		       enum dma_data_direction direction,
+		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack)
 {
 	struct net_devmem_dmabuf_binding *binding;
 	static u32 id_alloc_next;
@@ -229,7 +238,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	}
 
 	binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
-						       DMA_FROM_DEVICE);
+						       direction);
 	if (IS_ERR(binding->sgt)) {
 		err = PTR_ERR(binding->sgt);
 		NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
@@ -240,13 +249,23 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	 * binding can be much more flexible than that. We may be able to
 	 * allocate MTU sized chunks here. Leave that for future work...
 	 */
-	binding->chunk_pool =
-		gen_pool_create(PAGE_SHIFT, dev_to_node(&dev->dev));
+	binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
+					      dev_to_node(&dev->dev));
 	if (!binding->chunk_pool) {
 		err = -ENOMEM;
 		goto err_unmap;
 	}
 
+	if (direction == DMA_TO_DEVICE) {
+		binding->tx_vec = kvmalloc_array(dmabuf->size / PAGE_SIZE,
+						 sizeof(struct net_iov *),
+						 GFP_KERNEL);
+		if (!binding->tx_vec) {
+			err = -ENOMEM;
+			goto err_free_chunks;
+		}
+	}
+
 	virtual = 0;
 	for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
 		dma_addr_t dma_addr = sg_dma_address(sg);
@@ -288,6 +307,8 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 		niov->owner = owner;
 		page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
 					      net_devmem_get_dma_addr(niov));
+		if (direction == DMA_TO_DEVICE)
+			binding->tx_vec[owner->base_virtual / PAGE_SIZE + i] = niov;
 	}
 
 	virtual += len;
@@ -313,6 +334,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	return ERR_PTR(err);
 }
 
+struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
+{
+	struct net_devmem_dmabuf_binding *binding;
+
+	rcu_read_lock();
+	binding = xa_load(&net_devmem_dmabuf_bindings, id);
+	if (binding) {
+		if (!net_devmem_dmabuf_binding_get(binding))
+			binding = NULL;
+	}
+	rcu_read_unlock();
+
+	return binding;
+}
+
 void dev_dmabuf_uninstall(struct net_device *dev)
 {
 	struct net_devmem_dmabuf_binding *binding;
@@ -343,6 +379,53 @@ void net_devmem_put_net_iov(struct net_iov *niov)
 	net_devmem_dmabuf_binding_put(niov->owner->binding);
 }
 
+struct net_devmem_dmabuf_binding *net_devmem_get_binding(struct sock *sk,
+							 unsigned int dmabuf_id)
+{
+	struct net_devmem_dmabuf_binding *binding;
+	struct dst_entry *dst = __sk_dst_get(sk);
+	int err = 0;
+
+	binding = net_devmem_lookup_dmabuf(dmabuf_id);
+	if (!binding || !binding->tx_vec) {
+		err = -EINVAL;
+		goto out_err;
+	}
+
+	/* The dma-addrs in this binding are only reachable by the
+	 * corresponding net_device.
+	 */
+	if (!dst || !dst->dev || dst->dev->ifindex != binding->dev->ifindex) {
+		err = -ENODEV;
+		goto out_err;
+	}
+
+	return binding;
+
+out_err:
+	if (binding)
+		net_devmem_dmabuf_binding_put(binding);
+
+	return ERR_PTR(err);
+}
+
+struct net_iov *
+net_devmem_get_niov_at(struct net_devmem_dmabuf_binding *binding,
+		       size_t virt_addr, size_t *off, size_t *size)
+{
+	size_t idx;
+
+	if (virt_addr >= binding->dmabuf->size)
+		return NULL;
+
+	idx = virt_addr / PAGE_SIZE;
+
+	*off = virt_addr % PAGE_SIZE;
+	*size = PAGE_SIZE - *off;
+
+	return binding->tx_vec[idx];
+}
+
 /*** "Dmabuf devmem memory provider" ***/
 
 int mp_dmabuf_devmem_init(struct page_pool *pool)
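[Editor's illustration.] To make the tx_vec lookup concrete, here is the arithmetic net_devmem_get_niov_at() performs, as a standalone worked example assuming 4 KiB pages (the offset value is illustrative):

    #include <stdio.h>

    int main(void)
    {
            size_t virt_addr = 0x11234;     /* iov_base: byte offset into the dmabuf */
            size_t page_size = 4096;        /* PAGE_SIZE assumed to be 4 KiB */

            size_t idx = virt_addr / page_size;     /* 17 -> binding->tx_vec[17] */
            size_t off = virt_addr % page_size;     /* 0x234 into that net_iov */
            size_t size = page_size - off;          /* 0xdcc usable bytes left */

            printf("idx=%zu off=%#zx size=%#zx\n", idx, off, size);
            return 0;
    }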
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 8b51caff5a0e..874e891e70e0 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -46,6 +46,12 @@ struct net_devmem_dmabuf_binding {
 	 * active.
 	 */
 	u32 id;
+
+	/* Array of net_iov pointers for this binding, sorted by virtual
+	 * address. This array makes it convenient to map virtual addresses
+	 * to net_iovs in the TX path.
+	 */
+	struct net_iov **tx_vec;
 };
 
 #if defined(CONFIG_NET_DEVMEM)
@@ -70,12 +76,15 @@ struct dmabuf_genpool_chunk_owner {
 
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
 struct net_devmem_dmabuf_binding *
-net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
-		       struct netlink_ext_ack *extack);
+net_devmem_bind_dmabuf(struct net_device *dev,
+		       enum dma_data_direction direction,
+		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack);
+struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id);
 void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
 int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 				    struct net_devmem_dmabuf_binding *binding,
 				    struct netlink_ext_ack *extack);
+void net_devmem_bind_tx_release(struct sock *sk);
 void dev_dmabuf_uninstall(struct net_device *dev);
 
 static inline struct dmabuf_genpool_chunk_owner *
@@ -108,10 +117,10 @@ static inline u32 net_iov_binding_id(const struct net_iov *niov)
 	return net_iov_owner(niov)->binding->id;
 }
 
-static inline void
+static inline bool
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
-	refcount_inc(&binding->ref);
+	return refcount_inc_not_zero(&binding->ref);
 }
 
 static inline void
@@ -130,6 +139,12 @@ struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
 
+struct net_devmem_dmabuf_binding *
+net_devmem_get_binding(struct sock *sk, unsigned int dmabuf_id);
+struct net_iov *
+net_devmem_get_niov_at(struct net_devmem_dmabuf_binding *binding, size_t addr,
+		       size_t *off, size_t *size);
+
 #else
 struct net_devmem_dmabuf_binding;
 
@@ -153,11 +168,17 @@ __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 static inline struct net_devmem_dmabuf_binding *
-net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+net_devmem_bind_dmabuf(struct net_device *dev,
+		       enum dma_data_direction direction,
+		       unsigned int dmabuf_fd,
 		       struct netlink_ext_ack *extack)
 {
 	return ERR_PTR(-EOPNOTSUPP);
 }
 
+static inline struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
+{
+	return NULL;
+}
+
 static inline void
 net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 {
@@ -195,6 +216,19 @@ static inline u32 net_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
+
+static inline struct net_devmem_dmabuf_binding *
+net_devmem_get_binding(struct sock *sk, unsigned int dmabuf_id)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline struct net_iov *
+net_devmem_get_niov_at(struct net_devmem_dmabuf_binding *binding, size_t addr,
+		       size_t *off, size_t *size)
+{
+	return NULL;
+}
 #endif
 
 #endif /* _NET_DEVMEM_H */
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 0e41699df419..3ecb3a6d3913 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -854,7 +854,8 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 		goto err_unlock;
 	}
 
-	binding = net_devmem_bind_dmabuf(netdev, dmabuf_fd, info->extack);
+	binding = net_devmem_bind_dmabuf(netdev, DMA_FROM_DEVICE, dmabuf_fd,
+					 info->extack);
 	if (IS_ERR(binding)) {
 		err = PTR_ERR(binding);
 		goto err_unlock;
@@ -911,10 +912,67 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 	return err;
 }
 
-/* stub */
 int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return 0;
+	struct net_devmem_dmabuf_binding *binding;
+	struct list_head *sock_binding_list;
+	struct net_device *netdev;
+	u32 ifindex, dmabuf_fd;
+	struct sk_buff *rsp;
+	int err = 0;
+	void *hdr;
+
+	if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
+	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_DMABUF_FD))
+		return -EINVAL;
+
+	ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
+	dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_DMABUF_FD]);
+
+	sock_binding_list = genl_sk_priv_get(&netdev_nl_family,
+					     NETLINK_CB(skb).sk);
+	if (IS_ERR(sock_binding_list))
+		return PTR_ERR(sock_binding_list);
+
+	rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!rsp)
+		return -ENOMEM;
+
+	hdr = genlmsg_iput(rsp, info);
+	if (!hdr) {
+		err = -EMSGSIZE;
+		goto err_genlmsg_free;
+	}
+
+	rtnl_lock();
+
+	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+	if (!netdev || !netif_device_present(netdev)) {
+		err = -ENODEV;
+		goto err_unlock;
+	}
+
+	binding = net_devmem_bind_dmabuf(netdev, DMA_TO_DEVICE, dmabuf_fd,
+					 info->extack);
+	if (IS_ERR(binding)) {
+		err = PTR_ERR(binding);
+		goto err_unlock;
+	}
+
+	list_add(&binding->list, sock_binding_list);
+
+	nla_put_u32(rsp, NETDEV_A_DMABUF_ID, binding->id);
+	genlmsg_end(rsp, hdr);
+
+	rtnl_unlock();
+
+	return genlmsg_reply(rsp, info);
+
+err_unlock:
+	rtnl_unlock();
+err_genlmsg_free:
+	nlmsg_free(rsp);
+	return err;
 }
 
 void netdev_nl_sock_priv_init(struct list_head *priv)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 815245d5c36b..6289ffcbb20b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1882,7 +1882,8 @@ EXPORT_SYMBOL_GPL(msg_zerocopy_ubuf_ops);
 
 int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 			     struct msghdr *msg, int len,
-			     struct ubuf_info *uarg)
+			     struct ubuf_info *uarg,
+			     struct net_devmem_dmabuf_binding *binding)
 {
 	int err, orig_len = skb->len;
 
@@ -1901,7 +1902,8 @@ int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 		return -EEXIST;
 	}
 
-	err = __zerocopy_sg_from_iter(msg, sk, skb, &msg->msg_iter, len);
+	err = __zerocopy_sg_from_iter(msg, sk, skb, &msg->msg_iter, len,
+				      binding);
 	if (err == -EFAULT || (err == -EMSGSIZE && skb->len == orig_len)) {
 		struct sock *save_sk = skb->sk;
 
diff --git a/net/core/sock.c b/net/core/sock.c
index eae2ae70a2e0..353669f124ab 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2911,6 +2911,7 @@ EXPORT_SYMBOL(sock_alloc_send_pskb);
 int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
 		     struct sockcm_cookie *sockc)
 {
+	struct dmabuf_tx_cmsg dmabuf_tx;
 	u32 tsflags;
 
 	BUILD_BUG_ON(SOF_TIMESTAMPING_LAST == (1 << 31));
@@ -2964,6 +2965,13 @@ int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
 		if (!sk_set_prio_allowed(sk, *(u32 *)CMSG_DATA(cmsg)))
 			return -EPERM;
 		sockc->priority = *(u32 *)CMSG_DATA(cmsg);
+		break;
+	case SCM_DEVMEM_DMABUF:
+		if (cmsg->cmsg_len != CMSG_LEN(sizeof(struct dmabuf_tx_cmsg)))
+			return -EINVAL;
+		dmabuf_tx = *(struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg);
+		sockc->dmabuf_id = dmabuf_tx.dmabuf_id;
+		break;
 	default:
 		return -EINVAL;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..44198ae7e44c 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1051,6 +1051,7 @@ int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *copied,
 
 int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 {
+	struct net_devmem_dmabuf_binding *binding = NULL;
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct ubuf_info *uarg = NULL;
 	struct sk_buff *skb;
@@ -1063,6 +1064,15 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 
 	flags = msg->msg_flags;
 
+	sockcm_init(&sockc, sk);
+	if (msg->msg_controllen) {
+		err = sock_cmsg_send(sk, msg, &sockc);
+		if (unlikely(err)) {
+			err = -EINVAL;
+			goto out_err;
+		}
+	}
+
 	if ((flags & MSG_ZEROCOPY) && size) {
 		if (msg->msg_ubuf) {
 			uarg = msg->msg_ubuf;
@@ -1080,6 +1090,15 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			else
 				uarg_to_msgzc(uarg)->zerocopy = 0;
 		}
+
+		if (sockc.dmabuf_id != 0) {
+			binding = net_devmem_get_binding(sk, sockc.dmabuf_id);
+			if (IS_ERR(binding)) {
+				err = PTR_ERR(binding);
+				binding = NULL;
+				goto out_err;
+			}
+		}
 	} else if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES) && size) {
 		if (sk->sk_route_caps & NETIF_F_SG)
 			zc = MSG_SPLICE_PAGES;
@@ -1123,15 +1142,6 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		/* 'common' sending to sendq */
 	}
 
-	sockcm_init(&sockc, sk);
-	if (msg->msg_controllen) {
-		err = sock_cmsg_send(sk, msg, &sockc);
-		if (unlikely(err)) {
-			err = -EINVAL;
-			goto out_err;
-		}
-	}
-
 	/* This should be in poll */
 	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
 
@@ -1248,7 +1258,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			goto wait_for_space;
 		}
 
-		err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg);
+		err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg,
+					       binding);
 		if (err == -EMSGSIZE || err == -EEXIST) {
 			tcp_mark_push(tp, skb);
 			goto new_segment;
@@ -1329,6 +1340,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 	/* msg->msg_ubuf is pinned by the caller so we don't take extra refs */
 	if (uarg && !msg->msg_ubuf)
 		net_zcopy_put(uarg);
+	if (binding)
+		net_devmem_dmabuf_binding_put(binding);
 	return copied + copied_syn;
 
 do_error:
@@ -1346,6 +1359,9 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		sk->sk_write_space(sk);
 		tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
 	}
+	if (binding)
+		net_devmem_dmabuf_binding_put(binding);
+
 	return err;
 }
 EXPORT_SYMBOL_GPL(tcp_sendmsg_locked);
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 7f7de6d88096..f6d4bb798517 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -107,8 +107,7 @@ static int virtio_transport_fill_skb(struct sk_buff *skb,
 {
 	if (zcopy)
 		return __zerocopy_sg_from_iter(info->msg, NULL, skb,
-					       &info->msg->msg_iter,
-					       len);
+					       &info->msg->msg_iter, len, NULL);
 
 	return memcpy_from_msg(skb_put(skb, len), info->msg, len);
 }
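[Editor's illustration.] Because tcp_sendmsg_locked() reuses the MSG_ZEROCOPY uarg machinery for devmem sends, TX completions should be consumed the way ordinary MSG_ZEROCOPY notifications are: from the socket error queue, after which the corresponding dmabuf region may be reused. A hedged sketch of that loop (the ncdevmem selftest referenced in the documentation patch is the authoritative example):

    #include <errno.h>
    #include <sys/socket.h>
    #include <linux/errqueue.h>

    /* Poll the error queue for a zerocopy completion. ee_info/ee_data carry
     * the inclusive range of completed zerocopy sends.
     */
    static int wait_zerocopy_complete(int fd)
    {
            char control[128];
            struct msghdr msg = {
                    .msg_control = control,
                    .msg_controllen = sizeof(control),
            };
            struct sock_extended_err *serr;
            struct cmsghdr *cm;

            if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
                    return -errno;

            for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                    serr = (struct sock_extended_err *)CMSG_DATA(cm);
                    if (serr->ee_errno == 0 &&
                        serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY)
                            return 0;   /* sends [ee_info, ee_data] done */
            }

            return -EIO;
    }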
From patchwork Mon Feb 3 22:39:16 2025
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13958370
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 3 Feb 2025 22:39:16 +0000
In-Reply-To: <20250203223916.1064540-1-almasrymina@google.com>
References: <20250203223916.1064540-1-almasrymina@google.com>
Message-ID: <20250203223916.1064540-7-almasrymina@google.com>
Subject: [PATCH net-next v3 6/6] net: devmem: make dmabuf unbinding scheduled work
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, kvm@vger.kernel.org,
 virtualization@lists.linux.dev, linux-kselftest@vger.kernel.org

The TX path may release the dmabuf in a context where we cannot wait.
This happens when the user unbinds a TX dmabuf while there are still
references to its netmems in the TX path. In that case, the netmems will
be put_netmem'd from a context where we can't unmap the dmabuf,
resulting in a BUG like the one seen by Stan:

[    1.548495] BUG: sleeping function called from invalid context at drivers/dma-buf/dma-buf.c:1255
[    1.548741] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 149, name: ncdevmem
[    1.548926] preempt_count: 201, expected: 0
[    1.549026] RCU nest depth: 0, expected: 0
[    1.549197]
[    1.549237] =============================
[    1.549331] [ BUG: Invalid wait context ]
[    1.549425] 6.13.0-rc3-00770-gbc9ef9606dc9-dirty #15 Tainted: G        W
[    1.549609] -----------------------------
[    1.549704] ncdevmem/149 is trying to lock:
[    1.549801] ffff8880066701c0 (reservation_ww_class_mutex){+.+.}-{4:4}, at: dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.550051] other info that might help us debug this:
[    1.550167] context-{5:5}
[    1.550229] 3 locks held by ncdevmem/149:
[    1.550322]  #0: ffff888005730208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x40/0xf0
[    1.550530]  #1: ffff88800b148f98 (sk_lock-AF_INET6){+.+.}-{0:0}, at: tcp_close+0x19/0x80
[    1.550731]  #2: ffff88800b148f18 (slock-AF_INET6){+.-.}-{3:3}, at: __tcp_close+0x185/0x4b0
[    1.550921] stack backtrace:
[    1.550990] CPU: 0 UID: 0 PID: 149 Comm: ncdevmem Tainted: G        W          6.13.0-rc3-00770-gbc9ef9606dc9-dirty #15
[    1.551233] Tainted: [W]=WARN
[    1.551304] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
[    1.551518] Call Trace:
[    1.551584]  <TASK>
[    1.551636]  dump_stack_lvl+0x86/0xc0
[    1.551723]  __lock_acquire+0xb0f/0xc30
[    1.551814]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.551941]  lock_acquire+0xf1/0x2a0
[    1.552026]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552152]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552281]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552408]  __ww_mutex_lock+0x121/0x1060
[    1.552503]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552648]  ww_mutex_lock+0x3d/0xa0
[    1.552733]  dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552857]  __net_devmem_dmabuf_binding_free+0x56/0xb0
[    1.552979]  skb_release_data+0x120/0x1f0
[    1.553074]  __kfree_skb+0x29/0xa0
[    1.553156]  tcp_write_queue_purge+0x41/0x310
[    1.553259]  tcp_v4_destroy_sock+0x127/0x320
[    1.553363]  ? __tcp_close+0x169/0x4b0
[    1.553452]  inet_csk_destroy_sock+0x53/0x130
[    1.553560]  __tcp_close+0x421/0x4b0
[    1.553646]  tcp_close+0x24/0x80
[    1.553724]  inet_release+0x5d/0x90
[    1.553806]  sock_close+0x4a/0xf0
[    1.553886]  __fput+0x9c/0x2b0
[    1.553960]  task_work_run+0x89/0xc0
[    1.554046]  do_exit+0x27f/0x980
[    1.554125]  do_group_exit+0xa4/0xb0
[    1.554211]  __x64_sys_exit_group+0x17/0x20
[    1.554309]  x64_sys_call+0x21a0/0x21a0
[    1.554400]  do_syscall_64+0xec/0x1d0
[    1.554487]  ? exc_page_fault+0x8a/0xf0
[    1.554585]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[    1.554703] RIP: 0033:0x7f2f8a27abcd

Resolve this by deferring __net_devmem_dmabuf_binding_free() to a
schedule_work()'d work item.

Suggested-by: Stanislav Fomichev
Signed-off-by: Mina Almasry

---
 net/core/devmem.c |  4 +++-
 net/core/devmem.h | 10 ++++++----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 5de887545f5e..23463de19f50 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -46,8 +46,10 @@ static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
 	       ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
 }
 
-void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
+void __net_devmem_dmabuf_binding_free(struct work_struct *wq)
 {
+	struct net_devmem_dmabuf_binding *binding = container_of(wq, typeof(*binding), unbind_w);
+
 	size_t size, avail;
 
 	gen_pool_for_each_chunk(binding->chunk_pool,
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 874e891e70e0..63d16dbaca2d 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -52,6 +52,8 @@ struct net_devmem_dmabuf_binding {
 	 * net_iovs in the TX path.
 	 */
 	struct net_iov **tx_vec;
+
+	struct work_struct unbind_w;
 };
 
 #if defined(CONFIG_NET_DEVMEM)
@@ -74,7 +76,7 @@ struct dmabuf_genpool_chunk_owner {
 	struct net_devmem_dmabuf_binding *binding;
 };
 
-void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
+void __net_devmem_dmabuf_binding_free(struct work_struct *wq);
 struct net_devmem_dmabuf_binding *
 net_devmem_bind_dmabuf(struct net_device *dev,
 		       enum dma_data_direction direction,
@@ -129,7 +131,8 @@ net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
 	if (!refcount_dec_and_test(&binding->ref))
 		return;
 
-	__net_devmem_dmabuf_binding_free(binding);
+	INIT_WORK(&binding->unbind_w, __net_devmem_dmabuf_binding_free);
+	schedule_work(&binding->unbind_w);
 }
 
 void net_devmem_get_net_iov(struct net_iov *niov);
@@ -161,8 +164,7 @@ static inline void net_devmem_put_net_iov(struct net_iov *niov)
 {
 }
 
-static inline void
-__net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
+static inline void __net_devmem_dmabuf_binding_free(struct work_struct *wq)
 {
 }
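[Editor's illustration.] Distilled, the fix is the standard pattern for objects whose final put may happen in atomic context but whose teardown must sleep (here, unmapping the dmabuf): hand the free off to a workqueue. A minimal standalone rendering with hypothetical names, not the patch itself:

    #include <linux/refcount.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct obj {
            refcount_t ref;
            struct work_struct free_w;
    };

    static void obj_free_work(struct work_struct *work)
    {
            struct obj *o = container_of(work, struct obj, free_w);

            /* Process context: sleeping teardown (e.g. dma-buf unmap) is safe */
            kfree(o);
    }

    static void obj_put(struct obj *o)
    {
            if (!refcount_dec_and_test(&o->ref))
                    return;

            /* The last put may run under a spinlock; defer the sleeping free.
             * INIT_WORK here is safe because we held the final reference.
             */
            INIT_WORK(&o->free_w, obj_free_work);
            schedule_work(&o->free_w);
    }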