From patchwork Thu May  4 12:18:54 2023
X-Patchwork-Submitter: Breno Leitao
X-Patchwork-Id: 13231141
From: Breno Leitao
To: io-uring@vger.kernel.org, linux-nvme@lists.infradead.org, asml.silence@gmail.com, hch@lst.de, axboe@kernel.dk, ming.lei@redhat.com
Cc: leit@fb.com, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, sagi@grimberg.me, joshi.k@samsung.com, kbusch@kernel.org
Subject: [PATCH v4 1/3] io_uring: Create a helper to return the SQE size
Date: Thu, 4 May 2023 05:18:54 -0700
Message-Id: <20230504121856.904491-2-leitao@debian.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504121856.904491-1-leitao@debian.org>
References: <20230504121856.904491-1-leitao@debian.org>
X-Mailing-List: io-uring@vger.kernel.org

Create a simple helper that returns the size of the SQE. The SQE can have
two sizes, depending on the flags: if the IORING_SETUP_SQE128 flag is set,
return double the SQE size; otherwise, return the size of struct
io_uring_sqe (64 bytes).
Signed-off-by: Breno Leitao
Reviewed-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 io_uring/io_uring.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 25515d69d205..259bf798a390 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -394,4 +394,14 @@ static inline void io_req_queue_tw_complete(struct io_kiocb *req, s32 res)
 	io_req_task_work_add(req);
 }
 
+/*
+ * IORING_SETUP_SQE128 contexts allocate twice the normal SQE size for each
+ * slot.
+ */
+static inline size_t uring_sqe_size(struct io_ring_ctx *ctx)
+{
+	if (ctx->flags & IORING_SETUP_SQE128)
+		return 2 * sizeof(struct io_uring_sqe);
+	return sizeof(struct io_uring_sqe);
+}
 #endif
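For reference, the userspace side of the two layouts uring_sqe_size()
distinguishes: a ring created with IORING_SETUP_SQE128 hands out 128-byte
SQE slots instead of the default 64-byte ones. A minimal sketch using
liburing (the queue depth is an arbitrary placeholder):

	#include <liburing.h>

	/* Create a ring whose SQE slots are 2 * sizeof(struct io_uring_sqe). */
	int setup_sqe128_ring(struct io_uring *ring)
	{
		return io_uring_queue_init(8, ring, IORING_SETUP_SQE128);
	}
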
From patchwork Thu May  4 12:18:55 2023
X-Patchwork-Submitter: Breno Leitao
X-Patchwork-Id: 13231142
From: Breno Leitao
To: io-uring@vger.kernel.org, linux-nvme@lists.infradead.org, asml.silence@gmail.com, hch@lst.de, axboe@kernel.dk, ming.lei@redhat.com
Cc: leit@fb.com, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, sagi@grimberg.me, joshi.k@samsung.com, kbusch@kernel.org
Subject: [PATCH v4 2/3] io_uring: Pass whole sqe to commands
Date: Thu, 4 May 2023 05:18:55 -0700
Message-Id: <20230504121856.904491-3-leitao@debian.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504121856.904491-1-leitao@debian.org>
References: <20230504121856.904491-1-leitao@debian.org>
X-Mailing-List: io-uring@vger.kernel.org

Currently, the uring CMD operation relies on having large SQEs, but future
operations might want to use normal-sized SQEs. io_uring_cmd currently only
saves the payload (cmd) part of the SQE, but, for commands that use a
normal SQE size, it might be necessary to access the initial SQE fields
outside of the payload/cmd block. So, save the whole SQE rather than just
the pdu.

This slightly changes how io_uring_cmd works, since the cmd structures and
callbacks are no longer opaque to io_uring: the callbacks can look at the
SQE fields, not only at the cmd structure. The main advantage is that we
don't need to create custom structures for simple commands.

Create io_uring_sqe_cmd(), which returns the cmd private data as a void
pointer and avoids casting on the callee side.

Also, make most of ublk_drv's sqe->cmd private structures const, and use
io_uring_sqe_cmd() to get the private structure, removing the unwanted
casts.
(There is one case where the cast is still needed, since header->{len,addr}
is updated in the private structure.)

Suggested-by: Pavel Begunkov
Signed-off-by: Breno Leitao
Reviewed-by: Keith Busch
Reviewed-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 drivers/block/ublk_drv.c  | 26 +++++++++++++-------------
 drivers/nvme/host/ioctl.c |  2 +-
 include/linux/io_uring.h  |  7 ++++++-
 io_uring/opdef.c          |  2 +-
 io_uring/uring_cmd.c      |  9 +++------
 5 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index c73cc57ec547..42f4d7ca962e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1019,7 +1019,7 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
 }
 
 static void ublk_commit_completion(struct ublk_device *ub,
-		struct ublksrv_io_cmd *ub_cmd)
+		const struct ublksrv_io_cmd *ub_cmd)
 {
 	u32 qid = ub_cmd->q_id, tag = ub_cmd->tag;
 	struct ublk_queue *ubq = ublk_get_queue(ub, qid);
@@ -1263,7 +1263,7 @@ static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
 
 static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
 {
-	struct ublksrv_io_cmd *ub_cmd = (struct ublksrv_io_cmd *)cmd->cmd;
+	const struct ublksrv_io_cmd *ub_cmd = io_uring_sqe_cmd(cmd->sqe);
 	struct ublk_device *ub = cmd->file->private_data;
 	struct ublk_queue *ubq;
 	struct ublk_io *io;
@@ -1567,7 +1567,7 @@ static struct ublk_device *ublk_get_device_from_id(int idx)
 
 static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	int ublksrv_pid = (int)header->data[0];
 	struct gendisk *disk;
 	int ret = -EINVAL;
@@ -1630,7 +1630,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 static int ublk_ctrl_get_queue_affinity(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 	cpumask_var_t cpumask;
 	unsigned long queue;
@@ -1681,7 +1681,7 @@ static inline void ublk_dump_dev_info(struct ublksrv_ctrl_dev_info *info)
 
 static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 	struct ublksrv_ctrl_dev_info info;
 	struct ublk_device *ub;
@@ -1844,7 +1844,7 @@ static int ublk_ctrl_del_dev(struct ublk_device **p_ub)
 
 static inline void ublk_ctrl_cmd_dump(struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 
 	pr_devel("%s: cmd_op %x, dev id %d qid %d data %llx buf %llx len %u\n",
 			__func__, cmd->cmd_op, header->dev_id, header->queue_id,
@@ -1863,7 +1863,7 @@ static int ublk_ctrl_stop_dev(struct ublk_device *ub)
 static int ublk_ctrl_get_dev_info(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 
 	if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr)
@@ -1894,7 +1894,7 @@ static void ublk_ctrl_fill_params_devt(struct ublk_device *ub)
 static int ublk_ctrl_get_params(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 	struct ublk_params_header ph;
 	int ret;
@@ -1925,7 +1925,7 @@ static int ublk_ctrl_get_params(struct ublk_device *ub,
 static int ublk_ctrl_set_params(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 	struct ublk_params_header ph;
 	int ret = -EFAULT;
@@ -1983,7 +1983,7 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
 static int ublk_ctrl_start_recovery(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	int ret = -EINVAL;
 	int i;
@@ -2025,7 +2025,7 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
 static int ublk_ctrl_end_recovery(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	int ublksrv_pid = (int)header->data[0];
 	int ret = -EINVAL;
@@ -2092,7 +2092,7 @@ static int ublk_char_dev_permission(struct ublk_device *ub,
 static int ublk_ctrl_uring_cmd_permission(struct ublk_device *ub,
 		struct io_uring_cmd *cmd)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)io_uring_sqe_cmd(cmd->sqe);
 	bool unprivileged = ub->dev_info.flags & UBLK_F_UNPRIVILEGED_DEV;
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 	char *dev_path = NULL;
@@ -2171,7 +2171,7 @@ static int ublk_ctrl_uring_cmd_permission(struct ublk_device *ub,
 static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
 		unsigned int issue_flags)
 {
-	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
 	struct ublk_device *ub = NULL;
 	int ret = -EINVAL;
 
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index d24ea2e05156..81c5c9e38477 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -552,7 +552,7 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 		struct io_uring_cmd *ioucmd, unsigned int issue_flags, bool vec)
 {
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
-	const struct nvme_uring_cmd *cmd = ioucmd->cmd;
+	const struct nvme_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);
 	struct request_queue *q = ns ? ns->queue : ctrl->admin_q;
 	struct nvme_uring_data d;
 	struct nvme_command c;
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 35b9328ca335..3399d979ee1c 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -24,7 +24,7 @@ enum io_uring_cmd_flags {
 
 struct io_uring_cmd {
 	struct file	*file;
-	const void	*cmd;
+	const struct io_uring_sqe *sqe;
 	union {
 		/* callback to defer completions to task context */
 		void (*task_work_cb)(struct io_uring_cmd *cmd, unsigned);
@@ -66,6 +66,11 @@ static inline void io_uring_free(struct task_struct *tsk)
 	if (tsk->io_uring)
 		__io_uring_free(tsk);
 }
+
+static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
+{
+	return sqe->cmd;
+}
 #else
 static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
 			struct iov_iter *iter, void *ioucmd)
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index cca7c5b55208..3b9c6489b8b6 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -627,7 +627,7 @@ const struct io_cold_def io_cold_defs[] = {
 	},
 	[IORING_OP_URING_CMD] = {
 		.name			= "URING_CMD",
-		.async_size		= uring_cmd_pdu_size(1),
+		.async_size		= 2 * sizeof(struct io_uring_sqe),
 		.prep_async		= io_uring_cmd_prep_async,
 	},
 	[IORING_OP_SEND_ZC] = {
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 5113c9a48583..ed536d7499db 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -69,15 +69,12 @@ EXPORT_SYMBOL_GPL(io_uring_cmd_done);
 int io_uring_cmd_prep_async(struct io_kiocb *req)
 {
 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
-	size_t cmd_size;
 
 	BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
 	BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
 
-	cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
-
-	memcpy(req->async_data, ioucmd->cmd, cmd_size);
-	ioucmd->cmd = req->async_data;
+	memcpy(req->async_data, ioucmd->sqe, uring_sqe_size(req->ctx));
+	ioucmd->sqe = req->async_data;
 	return 0;
 }
 
@@ -103,7 +100,7 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		req->imu = ctx->user_bufs[index];
 		io_req_set_rsrc_node(req, ctx, 0);
 	}
-	ioucmd->cmd = sqe->cmd;
+	ioucmd->sqe = sqe;
 	ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
 	return 0;
 }
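After this change, a driver's ->uring_cmd() handler sees the whole SQE and
fetches its private payload with io_uring_sqe_cmd(). Below is a minimal
sketch of such a handler; struct foo_cmd, FOO_CMD_RUN and foo_handle() are
made-up names for illustration only, while the io_uring_sqe_cmd() usage
mirrors the ublk and nvme hunks above:

	#include <linux/io_uring.h>

	/* Illustration only: hypothetical driver-private command and opcode. */
	#define FOO_CMD_RUN	0x01

	struct foo_cmd {
		__u64	addr;
		__u32	len;
		__u32	flags;
	};

	static int foo_handle(u64 addr, u32 len, unsigned int issue_flags);

	static int foo_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
	{
		/* cmd payload starts at sqe->cmd; no cast needed on the callee side */
		const struct foo_cmd *fc = io_uring_sqe_cmd(ioucmd->sqe);

		/* the surrounding SQE fields are visible too, not only the payload */
		if (READ_ONCE(ioucmd->sqe->cmd_op) != FOO_CMD_RUN)
			return -EINVAL;

		return foo_handle(fc->addr, fc->len, issue_flags);
	}
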
From patchwork Thu May  4 12:18:56 2023
X-Patchwork-Submitter: Breno Leitao
X-Patchwork-Id: 13231143
From: Breno Leitao
To: io-uring@vger.kernel.org, linux-nvme@lists.infradead.org, asml.silence@gmail.com, hch@lst.de, axboe@kernel.dk, ming.lei@redhat.com
Cc: leit@fb.com, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, sagi@grimberg.me, joshi.k@samsung.com, kbusch@kernel.org
Subject: [PATCH v4 3/3] io_uring: Remove unnecessary BUILD_BUG_ON
Date: Thu, 4 May 2023 05:18:56 -0700
Message-Id: <20230504121856.904491-4-leitao@debian.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504121856.904491-1-leitao@debian.org>
References: <20230504121856.904491-1-leitao@debian.org>
X-Mailing-List: io-uring@vger.kernel.org

In io_uring_cmd_prep_async() there is an unnecessary compile-time check
that verifies that cmd is correctly placed at byte offset 48 of the SQE.
This is unnecessary, since the check is already in place in
io_uring_init():

	BUILD_BUG_SQE_ELEM(48, __u64, addr3);

Remove it, together with the uring_cmd_pdu_size() macro, which is no
longer used.

Keith started a discussion about this topic in the following thread:
Link: https://lore.kernel.org/lkml/ZDBmQOhbyU0iLhMw@kbusch-mbp.dhcp.thefacebook.com/

Signed-off-by: Breno Leitao
Reviewed-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 io_uring/uring_cmd.c | 3 ---
 io_uring/uring_cmd.h | 8 --------
 2 files changed, 11 deletions(-)

diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index ed536d7499db..5e32db48696d 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -70,9 +70,6 @@ int io_uring_cmd_prep_async(struct io_kiocb *req)
 {
 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
 
-	BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
-	BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
-
 	memcpy(req->async_data, ioucmd->sqe, uring_sqe_size(req->ctx));
 	ioucmd->sqe = req->async_data;
 	return 0;
diff --git a/io_uring/uring_cmd.h b/io_uring/uring_cmd.h
index 7c6697d13cb2..8117684ec3ca 100644
--- a/io_uring/uring_cmd.h
+++ b/io_uring/uring_cmd.h
@@ -3,11 +3,3 @@
 int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags);
 int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
 int io_uring_cmd_prep_async(struct io_kiocb *req);
-
-/*
- * The URING_CMD payload starts at 'cmd' in the first sqe, and continues into
- * the following sqe if SQE128 is used.
- */
-#define uring_cmd_pdu_size(is_sqe128)				\
-	((1 + !!(is_sqe128)) * sizeof(struct io_uring_sqe) -	\
-	 offsetof(struct io_uring_sqe, cmd))
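
For the record, the two constants asserted by the removed BUILD_BUG_ON()s
follow directly from the uapi SQE layout: a single SQE is 64 bytes and the
cmd payload starts at byte 48 (the offset pinned by
BUILD_BUG_SQE_ELEM(48, __u64, addr3)), so uring_cmd_pdu_size(0) evaluated
to 64 - 48 = 16 and uring_cmd_pdu_size(1) to 128 - 48 = 80. A small
userspace sketch of the same checks, assuming the uapi header is installed:

	#include <stddef.h>
	#include <linux/io_uring.h>

	_Static_assert(sizeof(struct io_uring_sqe) == 64,
		       "single SQE is 64 bytes");
	_Static_assert(offsetof(struct io_uring_sqe, cmd) == 48,
		       "cmd payload starts at byte offset 48");

	int main(void) { return 0; }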