From patchwork Fri Mar 21 18:48:17 2025
X-Patchwork-Submitter: Caleb Sander Mateos
X-Patchwork-Id: 14025905
From: Caleb Sander Mateos
To: Jens Axboe, Pavel Begunkov, Ming Lei, Keith Busch, Christoph Hellwig, Sagi Grimberg
Cc: Xinyu Zhang, io-uring@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org, Caleb Sander Mateos
Subject: [PATCH 1/3] io_uring/net: only import send_zc buffer once
Date: Fri, 21 Mar 2025 12:48:17 -0600
Message-ID: <20250321184819.3847386-2-csander@purestorage.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250321184819.3847386-1-csander@purestorage.com>
References: <20250321184819.3847386-1-csander@purestorage.com>
X-Mailing-List: io-uring@vger.kernel.org

io_send_zc() guards its call to io_send_zc_import() with if (!done_io) in an
attempt to avoid calling it redundantly on the same req. However, if the
initial non-blocking issue returns -EAGAIN, done_io will stay 0. This causes
the subsequent issue to unnecessarily re-import the buffer.

Add an explicit flag "imported" to io_sr_msg to track if its buffer has
already been imported. Clear the flag in io_send_zc_prep(). Call
io_send_zc_import() and set the flag in io_send_zc() if it is unset.
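
For background (illustrative only, not part of this patch), a minimal
userspace sketch of the zero-copy send flow this path serves, assuming
liburing 2.3+, a connected socket sockfd, and an application buffer buf.
A single IORING_OP_SEND_ZC request posts a completion CQE and, if
IORING_CQE_F_MORE was set on it, a later IORING_CQE_F_NOTIF CQE once the
kernel no longer references the pages. Any -EAGAIN/poll-driven re-issue of
the request happens inside the kernel and is invisible here, which is why
the redundant re-import addressed by this patch is purely wasted work:

#include <liburing.h>
#include <errno.h>

/* Submit one zero-copy send and reap its completion and notification CQEs. */
static int send_zc_once(struct io_uring *ring, int sockfd,
			const void *buf, size_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret, sent, more;

	if (!sqe)
		return -EBUSY;		/* SQ ring full */
	io_uring_prep_send_zc(sqe, sockfd, buf, len, 0, 0);
	io_uring_submit(ring);

	/* First CQE: number of bytes sent, or a negative errno. */
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret)
		return ret;
	sent = cqe->res;
	more = cqe->flags & IORING_CQE_F_MORE;
	io_uring_cqe_seen(ring, cqe);

	/* Second CQE (IORING_CQE_F_NOTIF): buf may be reused after this. */
	if (more && io_uring_wait_cqe(ring, &cqe) == 0)
		io_uring_cqe_seen(ring, cqe);
	return sent;
}
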
Signed-off-by: Caleb Sander Mateos
Fixes: 54cdcca05abd ("io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr")
---
 io_uring/net.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 6d13d378358b..a29893d567b8 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -74,10 +74,11 @@ struct io_sr_msg {
 	unsigned			nr_multishot_loops;
 	u16				flags;
 	/* initialised and used only by !msg send variants */
 	u16				buf_group;
 	bool				retry;
+	bool				imported; /* only for io_send_zc */
 	void __user			*msg_control;
 	/* used only for send zerocopy */
 	struct io_kiocb			*notif;
 };
@@ -1222,10 +1223,11 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_kiocb *notif;
 
 	zc->done_io = 0;
 	zc->retry = false;
+	zc->imported = false;
 	req->flags |= REQ_F_POLL_NO_LAZY;
 
 	if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3)))
 		return -EINVAL;
 	/* we don't support IOSQE_CQE_SKIP_SUCCESS just yet */
@@ -1369,11 +1371,12 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
 	if (!(req->flags & REQ_F_POLLED) &&
 	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
 		return -EAGAIN;
 
-	if (!zc->done_io) {
+	if (!zc->imported) {
+		zc->imported = true;
 		ret = io_send_zc_import(req, issue_flags);
 		if (unlikely(ret))
 			return ret;
 	}

From patchwork Fri Mar 21 18:48:18 2025
X-Patchwork-Submitter: Caleb Sander Mateos
X-Patchwork-Id: 14025906
From: Caleb Sander Mateos
To: Jens Axboe, Pavel Begunkov, Ming Lei, Keith Busch, Christoph Hellwig, Sagi Grimberg
Cc: Xinyu Zhang, io-uring@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org, Caleb Sander Mateos
Subject: [PATCH 2/3] io_uring/net: import send_zc fixed buffer before going async
Date: Fri, 21 Mar 2025 12:48:18 -0600
Message-ID: <20250321184819.3847386-3-csander@purestorage.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250321184819.3847386-1-csander@purestorage.com>
References: <20250321184819.3847386-1-csander@purestorage.com>
X-Mailing-List: io-uring@vger.kernel.org

When IORING_OP_SEND_ZC is used with the IORING_RECVSEND_POLL_FIRST flag, the
initial issue will return -EAGAIN to force arming the poll handler. If the
operation is also using fixed buffers, the fixed buffer lookup does not
happen until the subsequent issue.

This ordering difference is observable when using
UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the fixed buffer table. If the
IORING_OP_SEND_ZC operation is followed immediately by a
UBLK_U_IO_UNREGISTER_IO_BUF that unregisters the fixed buffer,
IORING_RECVSEND_POLL_FIRST will cause the fixed buffer lookup to fail
because it happens after the buffer is unregistered.

Swap the order of the buffer import and IORING_RECVSEND_POLL_FIRST check to
ensure the fixed buffer lookup happens on the initial issue even if the
operation goes async.
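
As an illustration only (not part of this patch), a rough liburing sketch of
the submission pattern in question, with the ublk-specific
UBLK_U_IO_{,UN}REGISTER_IO_BUF commands replaced by an ordinary buffer
registered at index 0 and sockfd assumed to be a connected socket. Because
IORING_RECVSEND_POLL_FIRST parks the request in the poll handler before any
data is sent, whether the fixed buffer is resolved on the first or on a
later issue becomes visible to userspace if the buffer table changes in
between:

#include <liburing.h>
#include <errno.h>
#include <sys/uio.h>

/*
 * Queue a zero-copy send from registered buffer index 0, asking io_uring
 * to wait for the socket to become writable before attempting the send.
 */
static int queue_send_zc_fixed(struct io_uring *ring, int sockfd,
			       const struct iovec *reg_buf)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;		/* SQ ring full */
	io_uring_prep_send_zc_fixed(sqe, sockfd, reg_buf->iov_base,
				    reg_buf->iov_len, 0,
				    IORING_RECVSEND_POLL_FIRST,
				    /* buf_index */ 0);
	/*
	 * If buffer 0 is unregistered after this submission but before the
	 * socket becomes writable, the send should still succeed: with this
	 * patch the buffer node is looked up (and referenced) on the
	 * initial issue, before the request is parked in the poll handler.
	 */
	return io_uring_submit(ring);
}
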
Signed-off-by: Caleb Sander Mateos
Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
---
 io_uring/net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index a29893d567b8..5adc7b80138e 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1367,21 +1367,21 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely(!sock))
 		return -ENOTSOCK;
 	if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags))
 		return -EOPNOTSUPP;
 
-	if (!(req->flags & REQ_F_POLLED) &&
-	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
-		return -EAGAIN;
-
 	if (!zc->imported) {
 		zc->imported = true;
 		ret = io_send_zc_import(req, issue_flags);
 		if (unlikely(ret))
 			return ret;
 	}
 
+	if (!(req->flags & REQ_F_POLLED) &&
+	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
+		return -EAGAIN;
+
 	msg_flags = zc->msg_flags;
 	if (issue_flags & IO_URING_F_NONBLOCK)
 		msg_flags |= MSG_DONTWAIT;
 	if (msg_flags & MSG_WAITALL)
 		min_ret = iov_iter_count(&kmsg->msg.msg_iter);

From patchwork Fri Mar 21 18:48:19 2025
X-Patchwork-Submitter: Caleb Sander Mateos
X-Patchwork-Id: 14025904
From: Caleb Sander Mateos
To: Jens Axboe, Pavel Begunkov, Ming Lei, Keith Busch, Christoph Hellwig, Sagi Grimberg
Cc: Xinyu Zhang, io-uring@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org, Caleb Sander Mateos
Subject: [PATCH 3/3] io_uring/uring_cmd: import fixed buffer before going async
Date: Fri, 21 Mar 2025 12:48:19 -0600
Message-ID: <20250321184819.3847386-4-csander@purestorage.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250321184819.3847386-1-csander@purestorage.com>
References: <20250321184819.3847386-1-csander@purestorage.com>
X-Mailing-List: io-uring@vger.kernel.org

For uring_cmd operations with fixed buffers, the fixed buffer lookup happens
in io_uring_cmd_import_fixed(), called from the ->uring_cmd() implementation.
A ->uring_cmd() implementation could return -EAGAIN on the initial issue for
any reason before io_uring_cmd_import_fixed(). For example,
nvme_uring_cmd_io() calls nvme_alloc_user_request() first, which can return
-EAGAIN if all tags in the tag set are in use.

This ordering difference is observable when using
UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the fixed buffer table. If the
uring_cmd is followed by a UBLK_U_IO_UNREGISTER_IO_BUF operation that
unregisters the fixed buffer, the uring_cmd going async will cause the fixed
buffer lookup to fail because it happens after the unregister.

Move the fixed buffer lookup out of io_uring_cmd_import_fixed() and instead
perform it in io_uring_cmd() before calling ->uring_cmd().
io_uring_cmd_import_fixed() now only initializes an iov_iter from the
existing fixed buffer node. This division of responsibilities makes sense as
the fixed buffer lookup is an io_uring implementation detail and independent
of the ->uring_cmd() implementation. It also cuts down on the need to pass
around the io_uring issue_flags.
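
For reference (illustrative only, not part of this patch), a rough userspace
sketch of the kind of request affected: an NVMe passthrough uring_cmd that
uses a registered buffer. It assumes liburing, a ring created with
IORING_SETUP_SQE128 | IORING_SETUP_CQE32, an NVMe char-device fd ns_fd, a
buffer registered at index 0, and NVMe fields filled in only far enough to
show the wiring (not a complete read command). The fixed-buffer lookup for
such an SQE is what now happens in io_uring_cmd() before ->uring_cmd() is
invoked:

#include <liburing.h>
#include <linux/nvme_ioctl.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Queue an NVMe passthrough command that uses registered buffer 0. */
static int queue_nvme_cmd_fixed(struct io_uring *ring, int ns_fd,
				void *buf, __u32 data_len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct nvme_uring_cmd *cmd;

	if (!sqe)
		return -EBUSY;			/* SQ ring full */
	memset(sqe, 0, 2 * sizeof(*sqe));	/* SQE128: each slot is 128 bytes */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = ns_fd;
	sqe->cmd_op = NVME_URING_CMD_IO;
	sqe->uring_cmd_flags = IORING_URING_CMD_FIXED;
	sqe->buf_index = 0;			/* registered buffer to import */

	cmd = (struct nvme_uring_cmd *)sqe->cmd;
	cmd->opcode = 0x02;			/* nvme_cmd_read (illustrative) */
	cmd->nsid = 1;
	cmd->addr = (__u64)(uintptr_t)buf;	/* must lie within buffer 0 */
	cmd->data_len = data_len;
	/* cdw10..cdw12 (LBA, transfer length) omitted for brevity */

	return io_uring_submit(ring);
}
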
Signed-off-by: Caleb Sander Mateos
Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
---
 drivers/nvme/host/ioctl.c    | 10 ++++------
 include/linux/io_uring/cmd.h |  6 ++----
 io_uring/rsrc.c              |  6 ++++++
 io_uring/rsrc.h              |  2 ++
 io_uring/uring_cmd.c         | 10 +++++++---
 5 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index fe9fb80c6a14..3fad74563b9e 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -112,12 +112,11 @@ static struct request *nvme_alloc_user_request(struct request_queue *q,
 	return req;
 }
 
 static int nvme_map_user_request(struct request *req, u64 ubuffer,
 		unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
-		struct io_uring_cmd *ioucmd, unsigned int flags,
-		unsigned int iou_issue_flags)
+		struct io_uring_cmd *ioucmd, unsigned int flags)
 {
 	struct request_queue *q = req->q;
 	struct nvme_ns *ns = q->queuedata;
 	struct block_device *bdev = ns ? ns->disk->part0 : NULL;
 	bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk);
@@ -141,12 +140,11 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 		/* fixedbufs is only for non-vectored io */
 		if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC))
 			return -EINVAL;
 		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
-				rq_data_dir(req), &iter, ioucmd,
-				iou_issue_flags);
+				rq_data_dir(req), &iter, ioucmd);
 		if (ret < 0)
 			goto out;
 		ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
 	} else {
 		ret = blk_rq_map_user_io(req, NULL, nvme_to_user_ptr(ubuffer),
@@ -194,11 +192,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 		return PTR_ERR(req);
 
 	req->timeout = timeout;
 	if (ubuffer && bufflen) {
 		ret = nvme_map_user_request(req, ubuffer, bufflen, meta_buffer,
-				meta_len, NULL, flags, 0);
+				meta_len, NULL, flags);
 		if (ret)
 			return ret;
 	}
 
 	bio = req->bio;
@@ -514,11 +512,11 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	req->timeout = d.timeout_ms ? msecs_to_jiffies(d.timeout_ms) : 0;
 
 	if (d.data_len) {
 		ret = nvme_map_user_request(req, d.addr,
 			d.data_len, nvme_to_user_ptr(d.metadata),
-			d.metadata_len, ioucmd, vec, issue_flags);
+			d.metadata_len, ioucmd, vec);
 		if (ret)
 			return ret;
 	}
 
 	/* to free bio on completion, as req->bio will be null at that time */
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 598cacda4aa3..ea243bfab2a8 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -39,12 +39,11 @@ static inline void io_uring_cmd_private_sz_check(size_t cmd_sz)
 )
 
 #if defined(CONFIG_IO_URING)
 int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
 			      struct iov_iter *iter,
-			      struct io_uring_cmd *ioucmd,
-			      unsigned int issue_flags);
+			      struct io_uring_cmd *ioucmd);
 
 /*
  * Completes the request, i.e. posts an io_uring CQE and deallocates @ioucmd
  * and the corresponding io_uring request.
  *
@@ -69,12 +68,11 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
 
 #else
 static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
-			  struct iov_iter *iter, struct io_uring_cmd *ioucmd,
-			  unsigned int issue_flags)
+			  struct iov_iter *iter, struct io_uring_cmd *ioucmd)
 {
 	return -EOPNOTSUPP;
 }
 static inline void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret,
 		u64 ret2, unsigned issue_flags)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 5fff6ba2b7c0..ad0dfe51acb1 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1099,10 +1099,16 @@ int io_import_reg_buf(struct io_kiocb *req, struct iov_iter *iter,
 	if (!node)
 		return -EFAULT;
 	return io_import_fixed(ddir, iter, node->buf, buf_addr, len);
 }
 
+int io_import_buf_node(struct io_kiocb *req, struct iov_iter *iter,
+			u64 buf_addr, size_t len, int ddir)
+{
+	return io_import_fixed(ddir, iter, req->buf_node->buf, buf_addr, len);
+}
+
 /* Lock two rings at once. The rings must be different! */
 static void lock_two_rings(struct io_ring_ctx *ctx1, struct io_ring_ctx *ctx2)
 {
 	if (ctx1 > ctx2)
 		swap(ctx1, ctx2);
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index f10a1252b3e9..bc0f8f0a2054 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -59,10 +59,12 @@ int io_rsrc_data_alloc(struct io_rsrc_data *data, unsigned nr);
 struct io_rsrc_node *io_find_buf_node(struct io_kiocb *req,
 				      unsigned issue_flags);
 int io_import_reg_buf(struct io_kiocb *req, struct iov_iter *iter,
 			u64 buf_addr, size_t len, int ddir,
 			unsigned issue_flags);
+int io_import_buf_node(struct io_kiocb *req, struct iov_iter *iter,
+			u64 buf_addr, size_t len, int ddir);
 
 int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg);
 int io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
 int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
 			    unsigned int nr_args, u64 __user *tags);
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index de39b602aa82..15a76fe48fe5 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -232,10 +232,15 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 			return -EOPNOTSUPP;
 		issue_flags |= IO_URING_F_IOPOLL;
 		req->iopoll_completed = 0;
 	}
 
+	if (ioucmd->flags & IORING_URING_CMD_FIXED) {
+		if (!io_find_buf_node(req, issue_flags))
+			return -EFAULT;
+	}
+
 	ret = file->f_op->uring_cmd(ioucmd, issue_flags);
 	if (ret == -EAGAIN || ret == -EIOCBQUEUED)
 		return ret;
 	if (ret < 0)
 		req_set_fail(req);
@@ -244,16 +249,15 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 	return IOU_OK;
 }
 
 int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
 			      struct iov_iter *iter,
-			      struct io_uring_cmd *ioucmd,
-			      unsigned int issue_flags)
+			      struct io_uring_cmd *ioucmd)
 {
 	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
 
-	return io_import_reg_buf(req, iter, ubuf, len, rw, issue_flags);
+	return io_import_buf_node(req, iter, ubuf, len, rw);
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
 
 void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
 {