From patchwork Wed Apr 19 16:25:47 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13217130
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: luhongfei@vivo.com, Jens Axboe
Subject: [PATCH 1/6] io_uring: grow struct io_kiocb 'flags' to a 64-bit value
Date: Wed, 19 Apr 2023 10:25:47 -0600
Message-Id: <20230419162552.576489-2-axboe@kernel.dk>
In-Reply-To: <20230419162552.576489-1-axboe@kernel.dk>
References: <20230419162552.576489-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

We've run out of flags at this point, and none of the flags are easily
removable. Bump the flags to a 64-bit value to add room for more.
Signed-off-by: Jens Axboe
---
 include/linux/io_uring_types.h | 64 +++++++++++++++++-----------------
 io_uring/filetable.h           |  2 +-
 io_uring/io_uring.c            |  6 ++--
 3 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 1b2a20a42413..84f436cc6509 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -413,68 +413,68 @@ enum {
 
 enum {
 	/* ctx owns file */
-	REQ_F_FIXED_FILE	= BIT(REQ_F_FIXED_FILE_BIT),
+	REQ_F_FIXED_FILE	= BIT_ULL(REQ_F_FIXED_FILE_BIT),
 	/* drain existing IO first */
-	REQ_F_IO_DRAIN		= BIT(REQ_F_IO_DRAIN_BIT),
+	REQ_F_IO_DRAIN		= BIT_ULL(REQ_F_IO_DRAIN_BIT),
 	/* linked sqes */
-	REQ_F_LINK		= BIT(REQ_F_LINK_BIT),
+	REQ_F_LINK		= BIT_ULL(REQ_F_LINK_BIT),
 	/* doesn't sever on completion < 0 */
-	REQ_F_HARDLINK		= BIT(REQ_F_HARDLINK_BIT),
+	REQ_F_HARDLINK		= BIT_ULL(REQ_F_HARDLINK_BIT),
 	/* IOSQE_ASYNC */
-	REQ_F_FORCE_ASYNC	= BIT(REQ_F_FORCE_ASYNC_BIT),
+	REQ_F_FORCE_ASYNC	= BIT_ULL(REQ_F_FORCE_ASYNC_BIT),
 	/* IOSQE_BUFFER_SELECT */
-	REQ_F_BUFFER_SELECT	= BIT(REQ_F_BUFFER_SELECT_BIT),
+	REQ_F_BUFFER_SELECT	= BIT_ULL(REQ_F_BUFFER_SELECT_BIT),
 	/* IOSQE_CQE_SKIP_SUCCESS */
-	REQ_F_CQE_SKIP		= BIT(REQ_F_CQE_SKIP_BIT),
+	REQ_F_CQE_SKIP		= BIT_ULL(REQ_F_CQE_SKIP_BIT),
 	/* fail rest of links */
-	REQ_F_FAIL		= BIT(REQ_F_FAIL_BIT),
+	REQ_F_FAIL		= BIT_ULL(REQ_F_FAIL_BIT),
 	/* on inflight list, should be cancelled and waited on exit reliably */
-	REQ_F_INFLIGHT		= BIT(REQ_F_INFLIGHT_BIT),
+	REQ_F_INFLIGHT		= BIT_ULL(REQ_F_INFLIGHT_BIT),
 	/* read/write uses file position */
-	REQ_F_CUR_POS		= BIT(REQ_F_CUR_POS_BIT),
+	REQ_F_CUR_POS		= BIT_ULL(REQ_F_CUR_POS_BIT),
 	/* must not punt to workers */
-	REQ_F_NOWAIT		= BIT(REQ_F_NOWAIT_BIT),
+	REQ_F_NOWAIT		= BIT_ULL(REQ_F_NOWAIT_BIT),
 	/* has or had linked timeout */
-	REQ_F_LINK_TIMEOUT	= BIT(REQ_F_LINK_TIMEOUT_BIT),
+	REQ_F_LINK_TIMEOUT	= BIT_ULL(REQ_F_LINK_TIMEOUT_BIT),
 	/* needs cleanup */
-	REQ_F_NEED_CLEANUP	= BIT(REQ_F_NEED_CLEANUP_BIT),
+	REQ_F_NEED_CLEANUP	= BIT_ULL(REQ_F_NEED_CLEANUP_BIT),
 	/* already went through poll handler */
-	REQ_F_POLLED		= BIT(REQ_F_POLLED_BIT),
+	REQ_F_POLLED		= BIT_ULL(REQ_F_POLLED_BIT),
 	/* buffer already selected */
-	REQ_F_BUFFER_SELECTED	= BIT(REQ_F_BUFFER_SELECTED_BIT),
+	REQ_F_BUFFER_SELECTED	= BIT_ULL(REQ_F_BUFFER_SELECTED_BIT),
 	/* buffer selected from ring, needs commit */
-	REQ_F_BUFFER_RING	= BIT(REQ_F_BUFFER_RING_BIT),
+	REQ_F_BUFFER_RING	= BIT_ULL(REQ_F_BUFFER_RING_BIT),
 	/* caller should reissue async */
-	REQ_F_REISSUE		= BIT(REQ_F_REISSUE_BIT),
+	REQ_F_REISSUE		= BIT_ULL(REQ_F_REISSUE_BIT),
 	/* supports async reads/writes */
-	REQ_F_SUPPORT_NOWAIT	= BIT(REQ_F_SUPPORT_NOWAIT_BIT),
+	REQ_F_SUPPORT_NOWAIT	= BIT_ULL(REQ_F_SUPPORT_NOWAIT_BIT),
 	/* regular file */
-	REQ_F_ISREG		= BIT(REQ_F_ISREG_BIT),
+	REQ_F_ISREG		= BIT_ULL(REQ_F_ISREG_BIT),
 	/* has creds assigned */
-	REQ_F_CREDS		= BIT(REQ_F_CREDS_BIT),
+	REQ_F_CREDS		= BIT_ULL(REQ_F_CREDS_BIT),
 	/* skip refcounting if not set */
-	REQ_F_REFCOUNT		= BIT(REQ_F_REFCOUNT_BIT),
+	REQ_F_REFCOUNT		= BIT_ULL(REQ_F_REFCOUNT_BIT),
 	/* there is a linked timeout that has to be armed */
-	REQ_F_ARM_LTIMEOUT	= BIT(REQ_F_ARM_LTIMEOUT_BIT),
+	REQ_F_ARM_LTIMEOUT	= BIT_ULL(REQ_F_ARM_LTIMEOUT_BIT),
 	/* ->async_data allocated */
-	REQ_F_ASYNC_DATA	= BIT(REQ_F_ASYNC_DATA_BIT),
+	REQ_F_ASYNC_DATA	= BIT_ULL(REQ_F_ASYNC_DATA_BIT),
 	/* don't post CQEs while failing linked requests */
-	REQ_F_SKIP_LINK_CQES	= BIT(REQ_F_SKIP_LINK_CQES_BIT),
+	REQ_F_SKIP_LINK_CQES	= BIT_ULL(REQ_F_SKIP_LINK_CQES_BIT),
 	/* single poll may be active */
-	REQ_F_SINGLE_POLL	= BIT(REQ_F_SINGLE_POLL_BIT),
+	REQ_F_SINGLE_POLL	= BIT_ULL(REQ_F_SINGLE_POLL_BIT),
 	/* double poll may active */
-	REQ_F_DOUBLE_POLL	= BIT(REQ_F_DOUBLE_POLL_BIT),
+	REQ_F_DOUBLE_POLL	= BIT_ULL(REQ_F_DOUBLE_POLL_BIT),
 	/* request has already done partial IO */
-	REQ_F_PARTIAL_IO	= BIT(REQ_F_PARTIAL_IO_BIT),
+	REQ_F_PARTIAL_IO	= BIT_ULL(REQ_F_PARTIAL_IO_BIT),
 	/* fast poll multishot mode */
-	REQ_F_APOLL_MULTISHOT	= BIT(REQ_F_APOLL_MULTISHOT_BIT),
+	REQ_F_APOLL_MULTISHOT	= BIT_ULL(REQ_F_APOLL_MULTISHOT_BIT),
 	/* ->extra1 and ->extra2 are initialised */
-	REQ_F_CQE32_INIT	= BIT(REQ_F_CQE32_INIT_BIT),
+	REQ_F_CQE32_INIT	= BIT_ULL(REQ_F_CQE32_INIT_BIT),
 	/* recvmsg special flag, clear EPOLLIN */
-	REQ_F_CLEAR_POLLIN	= BIT(REQ_F_CLEAR_POLLIN_BIT),
+	REQ_F_CLEAR_POLLIN	= BIT_ULL(REQ_F_CLEAR_POLLIN_BIT),
 	/* hashed into ->cancel_hash_locked, protected by ->uring_lock */
-	REQ_F_HASH_LOCKED	= BIT(REQ_F_HASH_LOCKED_BIT),
+	REQ_F_HASH_LOCKED	= BIT_ULL(REQ_F_HASH_LOCKED_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -535,7 +535,7 @@ struct io_kiocb {
 	 * and after selection it points to the buffer ID itself.
 	 */
 	u16				buf_index;
-	unsigned int			flags;
+	u64				flags;
 
 	struct io_cqe			cqe;

diff --git a/io_uring/filetable.h b/io_uring/filetable.h
index 351111ff8882..cfa32dcd77a1 100644
--- a/io_uring/filetable.h
+++ b/io_uring/filetable.h
@@ -21,7 +21,7 @@ int io_fixed_fd_remove(struct io_ring_ctx *ctx, unsigned int offset);
 int io_register_file_alloc_range(struct io_ring_ctx *ctx,
 				 struct io_uring_file_index_range __user *arg);
 
-unsigned int io_file_get_flags(struct file *file);
+u64 io_file_get_flags(struct file *file);
 
 static inline void io_file_bitmap_clear(struct io_file_table *table, int bit)
 {

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3bca7a79efda..9568b5e4cf87 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1113,7 +1113,7 @@ __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
 
 static inline void io_dismantle_req(struct io_kiocb *req)
 {
-	unsigned int flags = req->flags;
+	u64 flags = req->flags;
 
 	if (unlikely(flags & IO_REQ_CLEAN_FLAGS))
 		io_clean_op(req);
@@ -1797,7 +1797,7 @@ static bool __io_file_supports_nowait(struct file *file, umode_t mode)
 * any file. For now, just ensure that anything potentially problematic is done
 * inline.
 */
-unsigned int io_file_get_flags(struct file *file)
+u64 io_file_get_flags(struct file *file)
 {
 	umode_t mode = file_inode(file)->i_mode;
 	unsigned int res = 0;
@@ -4544,7 +4544,7 @@ static int __init io_uring_init(void)
 	BUILD_BUG_ON(SQE_COMMON_FLAGS >= (1 << 8));
 	BUILD_BUG_ON((SQE_VALID_FLAGS | SQE_COMMON_FLAGS) != SQE_VALID_FLAGS);
 
-	BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));
+	BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(u64));
 
 	BUILD_BUG_ON(sizeof(atomic_t) != sizeof(u32));

From patchwork Wed Apr 19 16:25:48 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13217131
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: luhongfei@vivo.com, Jens Axboe
Subject: [PATCH 2/6] io_uring: move poll_refs up a cacheline to fill a hole
Date: Wed, 19 Apr 2023 10:25:48 -0600
Message-Id: <20230419162552.576489-3-axboe@kernel.dk>
In-Reply-To: <20230419162552.576489-1-axboe@kernel.dk>
References: <20230419162552.576489-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

After bumping the flags to 64-bits, we now have two holes in io_kiocb.
The best candidate for moving is poll_refs, so as not to split the
task_work related items.

Signed-off-by: Jens Axboe
---
 include/linux/io_uring_types.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 84f436cc6509..4dd54d2173e1 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -535,6 +535,9 @@ struct io_kiocb {
 	 * and after selection it points to the buffer ID itself.
 	 */
 	u16				buf_index;
+
+	atomic_t			poll_refs;
+
 	u64				flags;
 
 	struct io_cqe			cqe;
@@ -565,9 +568,8 @@ struct io_kiocb {
 		__poll_t apoll_events;
 	};
 	atomic_t			refs;
-	atomic_t			poll_refs;
-	struct io_task_work		io_task_work;
 	unsigned			nr_tw;
+	struct io_task_work		io_task_work;
 	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
 	union {
 		struct hlist_node	hash_node;

From patchwork Wed Apr 19 16:25:49 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13217133
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: luhongfei@vivo.com, Jens Axboe
Subject: [PATCH 3/6] io_uring: add support for NO_OFFLOAD
Date: Wed, 19 Apr 2023 10:25:49 -0600
Message-Id: <20230419162552.576489-4-axboe@kernel.dk>
In-Reply-To: <20230419162552.576489-1-axboe@kernel.dk>
References: <20230419162552.576489-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

Some applications don't necessarily care about io_uring not blocking for
request issue; they simply want to use io_uring for batched submission of
IO.
However, io_uring will always attempt non-blocking issue, and for some
request types there's simply no support for non-blocking issue, hence
those requests get punted to io-wq unconditionally. If the application
doesn't care about issue potentially blocking, this causes a performance
slowdown, as thread offload is not nearly as efficient as inline issue.

Add support for configuring the ring with IORING_SETUP_NO_OFFLOAD, and
add an IORING_ENTER_NO_OFFLOAD flag to io_uring_enter(2). If either one
of these is set, then io_uring will skip the non-blocking issue attempt
for any file which we cannot poll for readiness.

The simplified io_uring issue model looks as follows:

1) Non-blocking issue is attempted for IO. If successful, we're done for
   now.

2) Case 1 failed. Now we have two options:
   a) We can poll the file. We arm poll, and we're done for now until
      that triggers.
   b) File cannot be polled, we punt to io-wq which then does a blocking
      attempt.

If either of the NO_OFFLOAD flags is set, we should never hit case 2b.
Instead, case 1 would issue the IO without the non-blocking flag being
set and perform an inline completion.
Signed-off-by: Jens Axboe
---
 include/linux/io_uring_types.h |  3 +++
 include/uapi/linux/io_uring.h  |  7 +++++++
 io_uring/io_uring.c            | 26 ++++++++++++++++++++------
 io_uring/io_uring.h            |  2 +-
 io_uring/sqpoll.c              |  3 ++-
 5 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 4dd54d2173e1..c54f3fb7ab1a 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -403,6 +403,7 @@ enum {
 	REQ_F_APOLL_MULTISHOT_BIT,
 	REQ_F_CLEAR_POLLIN_BIT,
 	REQ_F_HASH_LOCKED_BIT,
+	REQ_F_NO_OFFLOAD_BIT,
 	/* keep async read/write and isreg together and in order */
 	REQ_F_SUPPORT_NOWAIT_BIT,
 	REQ_F_ISREG_BIT,
@@ -475,6 +476,8 @@ enum {
 	REQ_F_CLEAR_POLLIN	= BIT_ULL(REQ_F_CLEAR_POLLIN_BIT),
 	/* hashed into ->cancel_hash_locked, protected by ->uring_lock */
 	REQ_F_HASH_LOCKED	= BIT_ULL(REQ_F_HASH_LOCKED_BIT),
+	/* don't offload to io-wq, issue blocking if needed */
+	REQ_F_NO_OFFLOAD	= BIT_ULL(REQ_F_NO_OFFLOAD_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 0716cb17e436..ea903a677ce9 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -173,6 +173,12 @@ enum {
  */
 #define IORING_SETUP_DEFER_TASKRUN	(1U << 13)
 
+/*
+ * Don't attempt non-blocking issue on file types that would otherwise
+ * punt to io-wq if they cannot be completed non-blocking.
+ */
+#define IORING_SETUP_NO_OFFLOAD	(1U << 14)
+
 enum io_uring_op {
 	IORING_OP_NOP,
 	IORING_OP_READV,
@@ -443,6 +449,7 @@ struct io_cqring_offsets {
 #define IORING_ENTER_SQ_WAIT		(1U << 2)
 #define IORING_ENTER_EXT_ARG		(1U << 3)
 #define IORING_ENTER_REGISTERED_RING	(1U << 4)
+#define IORING_ENTER_NO_OFFLOAD	(1U << 5)
 
 /*
  * Passed in for io_uring_setup(2). Copied back with updated info on success

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9568b5e4cf87..04770b06de16 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1947,6 +1947,10 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely(!io_assign_file(req, def, issue_flags)))
 		return -EBADF;
 
+	if (req->flags & REQ_F_NO_OFFLOAD &&
+	    (!req->file || !file_can_poll(req->file)))
+		issue_flags &= ~IO_URING_F_NONBLOCK;
+
 	if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
 		creds = override_creds(req->creds);
 
@@ -2337,7 +2341,7 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 }
 
 static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			 const struct io_uring_sqe *sqe)
+			 const struct io_uring_sqe *sqe, bool no_offload)
 	__must_hold(&ctx->uring_lock)
 {
 	struct io_submit_link *link = &ctx->submit_state.link;
@@ -2385,6 +2389,9 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		return 0;
 	}
 
+	if (no_offload)
+		req->flags |= REQ_F_NO_OFFLOAD;
+
 	io_queue_sqe(req);
 	return 0;
 }
@@ -2466,7 +2473,7 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
 	return false;
 }
 
-int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr, bool no_offload)
 	__must_hold(&ctx->uring_lock)
 {
 	unsigned int entries = io_sqring_entries(ctx);
@@ -2495,7 +2502,7 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 		 * Continue submitting even for sqe failure if the
 		 * ring was setup with IORING_SETUP_SUBMIT_ALL
 		 */
-		if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
+		if (unlikely(io_submit_sqe(ctx, req, sqe, no_offload)) &&
 		    !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
 			left--;
 			break;
@@ -3524,7 +3531,8 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
 			       IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
-			       IORING_ENTER_REGISTERED_RING)))
+			       IORING_ENTER_REGISTERED_RING |
+			       IORING_ENTER_NO_OFFLOAD)))
 		return -EINVAL;
 
 	/*
@@ -3575,12 +3583,17 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 		ret = to_submit;
 	} else if (to_submit) {
+		bool no_offload;
+
 		ret = io_uring_add_tctx_node(ctx);
 		if (unlikely(ret))
 			goto out;
 
+		no_offload = flags & IORING_ENTER_NO_OFFLOAD ||
+			     ctx->flags & IORING_SETUP_NO_OFFLOAD;
+
 		mutex_lock(&ctx->uring_lock);
-		ret = io_submit_sqes(ctx, to_submit);
+		ret = io_submit_sqes(ctx, to_submit, no_offload);
 		if (ret != to_submit) {
 			mutex_unlock(&ctx->uring_lock);
 			goto out;
@@ -3969,7 +3982,8 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 			IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
 			IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
 			IORING_SETUP_SQE128 | IORING_SETUP_CQE32 |
-			IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN))
+			IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN |
+			IORING_SETUP_NO_OFFLOAD))
 		return -EINVAL;
 
 	return io_uring_create(entries, &p, params);

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 25515d69d205..c5c0db7232c0 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -76,7 +76,7 @@ int io_uring_alloc_task_context(struct task_struct *task,
 				struct io_ring_ctx *ctx);
 
 int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts);
-int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr);
+int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr, bool no_offload);
 int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin);
 void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node);
 int io_req_prep_async(struct io_kiocb *req);

diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 9db4bc1f521a..9a9417bf9e3f 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -166,6 +166,7 @@ static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
 
 static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
 {
+	bool no_offload = ctx->flags & IORING_SETUP_NO_OFFLOAD;
 	unsigned int to_submit;
 	int ret = 0;
 
@@ -190,7 +191,7 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
 	 */
 	if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) &&
 	    !(ctx->flags & IORING_SETUP_R_DISABLED))
-		ret = io_submit_sqes(ctx, to_submit);
+		ret = io_submit_sqes(ctx, to_submit, no_offload);
 	mutex_unlock(&ctx->uring_lock);
 
 	if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))

From patchwork Wed Apr 19 16:25:50 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13217132
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: luhongfei@vivo.com, Jens Axboe
Subject: [PATCH 4/6] Revert "io_uring: always go async for unsupported fadvise flags"
Date: Wed, 19 Apr 2023 10:25:50 -0600
Message-Id: <20230419162552.576489-5-axboe@kernel.dk>
In-Reply-To: <20230419162552.576489-1-axboe@kernel.dk>
References: <20230419162552.576489-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

This reverts commit c31cc60fddd11134031e7f9eb76812353cfaac84.

In preparation for handling this a bit differently, revert this cleanup.

Signed-off-by: Jens Axboe
---
 io_uring/advise.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/io_uring/advise.c b/io_uring/advise.c
index 7085804c513c..cf600579bffe 100644
--- a/io_uring/advise.c
+++ b/io_uring/advise.c
@@ -62,18 +62,6 @@ int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
 #endif
 }
 
-static bool io_fadvise_force_async(struct io_fadvise *fa)
-{
-	switch (fa->advice) {
-	case POSIX_FADV_NORMAL:
-	case POSIX_FADV_RANDOM:
-	case POSIX_FADV_SEQUENTIAL:
-		return false;
-	default:
-		return true;
-	}
-}
-
 int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_fadvise *fa = io_kiocb_to_cmd(req, struct io_fadvise);
@@ -84,8 +72,6 @@ int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	fa->offset = READ_ONCE(sqe->off);
 	fa->len = READ_ONCE(sqe->len);
 	fa->advice = READ_ONCE(sqe->fadvise_advice);
-	if (io_fadvise_force_async(fa))
-		req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -94,7 +80,16 @@ int io_fadvise(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_fadvise *fa = io_kiocb_to_cmd(req, struct io_fadvise);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK && io_fadvise_force_async(fa));
+	if (issue_flags & IO_URING_F_NONBLOCK) {
+		switch (fa->advice) {
+		case POSIX_FADV_NORMAL:
+		case POSIX_FADV_RANDOM:
+		case POSIX_FADV_SEQUENTIAL:
+			break;
+		default:
+			return -EAGAIN;
+		}
+	}
 
 	ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
 	if (ret < 0)

From patchwork Wed Apr 19 16:25:51 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13217134
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: luhongfei@vivo.com, Jens Axboe
Subject: [PATCH 5/6] Revert "io_uring: for requests that require async, force it"
Date: Wed, 19 Apr 2023 10:25:51 -0600
Message-Id: <20230419162552.576489-6-axboe@kernel.dk>
In-Reply-To: <20230419162552.576489-1-axboe@kernel.dk>
References: <20230419162552.576489-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

This reverts commit aebb224fd4fc7352cd839ad90414c548387142fd.

In preparation for handling this a bit differently, revert this cleanup.
Signed-off-by: Jens Axboe
---
 io_uring/advise.c |  4 ++--
 io_uring/fs.c     | 20 ++++++++++----------
 io_uring/net.c    |  4 ++--
 io_uring/splice.c |  7 ++++---
 io_uring/statx.c  |  4 ++--
 io_uring/sync.c   | 14 ++++++--------
 io_uring/xattr.c  | 14 ++++++++------
 7 files changed, 34 insertions(+), 33 deletions(-)

diff --git a/io_uring/advise.c b/io_uring/advise.c
index cf600579bffe..449c6f14649f 100644
--- a/io_uring/advise.c
+++ b/io_uring/advise.c
@@ -39,7 +39,6 @@ int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	ma->addr = READ_ONCE(sqe->addr);
 	ma->len = READ_ONCE(sqe->len);
 	ma->advice = READ_ONCE(sqe->fadvise_advice);
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 #else
 	return -EOPNOTSUPP;
@@ -52,7 +51,8 @@ int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_madvise *ma = io_kiocb_to_cmd(req, struct io_madvise);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
 	io_req_set_res(req, ret, 0);
diff --git a/io_uring/fs.c b/io_uring/fs.c
index f6a69a549fd4..7100c293c13a 100644
--- a/io_uring/fs.c
+++ b/io_uring/fs.c
@@ -74,7 +74,6 @@ int io_renameat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -83,7 +82,8 @@ int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_rename *ren = io_kiocb_to_cmd(req, struct io_rename);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
 				ren->newpath, ren->flags);
@@ -123,7 +123,6 @@ int io_unlinkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return PTR_ERR(un->filename);
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -132,7 +131,8 @@ int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_unlink *un = io_kiocb_to_cmd(req, struct io_unlink);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	if (un->flags & AT_REMOVEDIR)
 		ret = do_rmdir(un->dfd, un->filename);
@@ -170,7 +170,6 @@ int io_mkdirat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return PTR_ERR(mkd->filename);
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -179,7 +178,8 @@ int io_mkdirat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_mkdir *mkd = io_kiocb_to_cmd(req, struct io_mkdir);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_mkdirat(mkd->dfd, mkd->filename, mkd->mode);
 
@@ -220,7 +220,6 @@ int io_symlinkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -229,7 +228,8 @@ int io_symlinkat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_link *sl = io_kiocb_to_cmd(req, struct io_link);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_symlinkat(sl->oldpath, sl->new_dfd, sl->newpath);
 
@@ -265,7 +265,6 @@ int io_linkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -274,7 +273,8 @@ int io_linkat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_link *lnk = io_kiocb_to_cmd(req, struct io_link);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_linkat(lnk->old_dfd, lnk->oldpath, lnk->new_dfd,
 				lnk->newpath, lnk->flags);
diff --git a/io_uring/net.c b/io_uring/net.c
index 89e839013837..e85a868290ec 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -91,7 +91,6 @@ int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return -EINVAL;
 
 	shutdown->how = READ_ONCE(sqe->len);
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -101,7 +100,8 @@ int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
 	struct socket *sock;
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	sock = sock_from_file(req->file);
 	if (unlikely(!sock))
diff --git a/io_uring/splice.c b/io_uring/splice.c
index 2a4bbb719531..53e4232d0866 100644
--- a/io_uring/splice.c
+++ b/io_uring/splice.c
@@ -34,7 +34,6 @@ static int __io_splice_prep(struct io_kiocb *req,
 	if (unlikely(sp->flags & ~valid_flags))
 		return -EINVAL;
 	sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -53,7 +52,8 @@ int io_tee(struct io_kiocb *req, unsigned int issue_flags)
 	struct file *in;
 	long ret = 0;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	if (sp->flags & SPLICE_F_FD_IN_FIXED)
 		in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
@@ -94,7 +94,8 @@ int io_splice(struct io_kiocb *req, unsigned int issue_flags)
 	struct file *in;
 	long ret = 0;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	if (sp->flags & SPLICE_F_FD_IN_FIXED)
 		in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
diff --git a/io_uring/statx.c b/io_uring/statx.c
index abb874209caa..d8fc933d3f59 100644
--- a/io_uring/statx.c
+++ b/io_uring/statx.c
@@ -48,7 +48,6 @@ int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -57,7 +56,8 @@ int io_statx(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_statx *sx = io_kiocb_to_cmd(req, struct io_statx);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_statx(sx->dfd, sx->filename, sx->flags, sx->mask, sx->buffer);
 	io_req_set_res(req, ret, 0);
diff --git a/io_uring/sync.c b/io_uring/sync.c
index 255f68c37e55..64e87ea2b8fb 100644
--- a/io_uring/sync.c
+++ b/io_uring/sync.c
@@ -32,8 +32,6 @@ int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	sync->off = READ_ONCE(sqe->off);
 	sync->len = READ_ONCE(sqe->len);
 	sync->flags = READ_ONCE(sqe->sync_range_flags);
-	req->flags |= REQ_F_FORCE_ASYNC;
-
 	return 0;
 }
 
@@ -43,7 +41,8 @@ int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
 	int ret;
 
 	/* sync_file_range always requires a blocking context */
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = sync_file_range(req->file, sync->off, sync->len, sync->flags);
 	io_req_set_res(req, ret, 0);
@@ -63,7 +62,6 @@ int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	sync->off = READ_ONCE(sqe->off);
 	sync->len = READ_ONCE(sqe->len);
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -74,7 +72,8 @@ int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
 	int ret;
 
 	/* fsync always requires a blocking context */
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = vfs_fsync_range(req->file, sync->off,
 				end > 0 ? end : LLONG_MAX,
				sync->flags & IORING_FSYNC_DATASYNC);
@@ -92,7 +91,6 @@ int io_fallocate_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	sync->off = READ_ONCE(sqe->off);
 	sync->len = READ_ONCE(sqe->addr);
 	sync->mode = READ_ONCE(sqe->len);
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -102,8 +100,8 @@ int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
 	int ret;
 
 	/* fallocate always requiring blocking context */
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
-
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 	ret = vfs_fallocate(req->file, sync->mode, sync->off, sync->len);
 	if (ret >= 0)
 		fsnotify_modify(req->file);
diff --git a/io_uring/xattr.c b/io_uring/xattr.c
index e1c810e0b85a..6201a9f442c6 100644
--- a/io_uring/xattr.c
+++ b/io_uring/xattr.c
@@ -75,7 +75,6 @@ static int __io_getxattr_prep(struct io_kiocb *req,
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -110,7 +109,8 @@ int io_fgetxattr(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_xattr *ix = io_kiocb_to_cmd(req, struct io_xattr);
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = do_getxattr(mnt_idmap(req->file->f_path.mnt),
 			req->file->f_path.dentry,
@@ -127,7 +127,8 @@ int io_getxattr(struct io_kiocb *req, unsigned int issue_flags)
 	struct path path;
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 retry:
 	ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
@@ -173,7 +174,6 @@ static int __io_setxattr_prep(struct io_kiocb *req,
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
-	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -222,7 +222,8 @@ int io_fsetxattr(struct io_kiocb *req, unsigned int issue_flags)
 {
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 	ret = __io_setxattr(req, issue_flags,
				&req->file->f_path);
 
 	io_xattr_finish(req, ret);
@@ -236,7 +237,8 @@ int io_setxattr(struct io_kiocb *req, unsigned int issue_flags)
 	struct path path;
 	int ret;
 
-	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
 
 retry:
 	ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);

From patchwork Wed Apr 19 16:25:52 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13217135
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: luhongfei@vivo.com, Jens Axboe
Subject: [PATCH 6/6] io_uring: mark opcodes that always need io-wq punt
Date: Wed, 19 Apr 2023 10:25:52 -0600
Message-Id: <20230419162552.576489-7-axboe@kernel.dk>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230419162552.576489-1-axboe@kernel.dk>
References: <20230419162552.576489-1-axboe@kernel.dk>
Add an opdef bit for them, and set it for the opcodes where we always
need io-wq punt. With that done, exclude them from the file_can_poll()
check in terms of whether or not we need to punt them if any of the
NO_OFFLOAD flags are set.

Signed-off-by: Jens Axboe
---
 io_uring/io_uring.c |  2 +-
 io_uring/opdef.c    | 22 ++++++++++++++++++++--
 io_uring/opdef.h    |  2 ++
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 04770b06de16..91045270b665 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1948,7 +1948,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 		return -EBADF;
 
 	if (req->flags & REQ_F_NO_OFFLOAD &&
-	    (!req->file || !file_can_poll(req->file)))
+	    (!req->file || !file_can_poll(req->file) || def->always_iowq))
 		issue_flags &= ~IO_URING_F_NONBLOCK;
 
 	if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index cca7c5b55208..686d46001622 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -82,6 +82,7 @@ const struct io_issue_def io_issue_defs[] = {
 	[IORING_OP_FSYNC] = {
 		.needs_file		= 1,
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_fsync_prep,
 		.issue			= io_fsync,
 	},
@@ -125,6 +126,7 @@ const struct io_issue_def io_issue_defs[] = {
 	[IORING_OP_SYNC_FILE_RANGE] = {
 		.needs_file		= 1,
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_sfr_prep,
 		.issue			= io_sync_file_range,
 	},
@@ -202,6 +204,7 @@ const struct io_issue_def io_issue_defs[] = {
 	},
 	[IORING_OP_FALLOCATE] = {
 		.needs_file		= 1,
+		.always_iowq		= 1,
 		.prep			= io_fallocate_prep,
 		.issue			= io_fallocate,
 	},
@@ -221,6 +224,7 @@ const struct io_issue_def io_issue_defs[] = {
 	},
 	[IORING_OP_STATX] = {
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_statx_prep,
 		.issue			= io_statx,
 	},
@@ -253,11 +257,13 @@ const struct io_issue_def io_issue_defs[] = {
 	[IORING_OP_FADVISE] = {
 		.needs_file		= 1,
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_fadvise_prep,
 		.issue			= io_fadvise,
 	},
 	[IORING_OP_MADVISE] = {
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_madvise_prep,
 		.issue			= io_madvise,
 	},
@@ -308,6 +314,7 @@ const struct io_issue_def io_issue_defs[] = {
 		.hash_reg_file		= 1,
 		.unbound_nonreg_file	= 1,
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_splice_prep,
 		.issue			= io_splice,
 	},
@@ -328,11 +335,13 @@ const struct io_issue_def io_issue_defs[] = {
 		.hash_reg_file		= 1,
 		.unbound_nonreg_file	= 1,
 		.audit_skip		= 1,
+		.always_iowq		= 1,
 		.prep			= io_tee_prep,
 		.issue			= io_tee,
 	},
 	[IORING_OP_SHUTDOWN] = {
 		.needs_file		= 1,
+		.always_iowq		= 1,
 #if defined(CONFIG_NET)
 		.prep			= io_shutdown_prep,
 		.issue			= io_shutdown,
@@ -343,22 +352,27 @@ const struct io_issue_def io_issue_defs[] = {
 	[IORING_OP_RENAMEAT] = {
 		.prep			= io_renameat_prep,
 		.issue			= io_renameat,
+		.always_iowq		= 1,
 	},
 	[IORING_OP_UNLINKAT] = {
 		.prep			= io_unlinkat_prep,
 		.issue			= io_unlinkat,
+		.always_iowq		= 1,
 	},
 	[IORING_OP_MKDIRAT] = {
 		.prep			= io_mkdirat_prep,
 		.issue			= io_mkdirat,
+		.always_iowq		= 1,
 	},
 	[IORING_OP_SYMLINKAT] = {
 		.prep			= io_symlinkat_prep,
 		.issue			= io_symlinkat,
+		.always_iowq		= 1,
 	},
 	[IORING_OP_LINKAT] = {
 		.prep			= io_linkat_prep,
 		.issue			= io_linkat,
+		.always_iowq		= 1,
 	},
 	[IORING_OP_MSG_RING] = {
 		.needs_file		= 1,
@@ -367,20 +381,24 @@ const struct io_issue_def io_issue_defs[] = {
 		.issue			= io_msg_ring,
 	},
 	[IORING_OP_FSETXATTR] = {
-		.needs_file = 1,
+		.needs_file		= 1,
+		.always_iowq		= 1,
 		.prep			= io_fsetxattr_prep,
 		.issue			= io_fsetxattr,
 	},
 	[IORING_OP_SETXATTR] = {
+		.always_iowq		= 1,
 		.prep			= io_setxattr_prep,
 		.issue			= io_setxattr,
 	},
 	[IORING_OP_FGETXATTR] = {
-		.needs_file = 1,
+		.needs_file		= 1,
+		.always_iowq		= 1,
 		.prep			= io_fgetxattr_prep,
 		.issue			= io_fgetxattr,
 	},
 	[IORING_OP_GETXATTR] = {
+		.always_iowq		= 1,
 		.prep			= io_getxattr_prep,
 		.issue			= io_getxattr,
 	},
diff --git a/io_uring/opdef.h b/io_uring/opdef.h
index c22c8696e749..657a831249ff 100644
--- a/io_uring/opdef.h
+++ b/io_uring/opdef.h
@@ -29,6 +29,8 @@ struct io_issue_def {
 	unsigned		iopoll_queue : 1;
 	/* opcode specific path will handle ->async_data allocation if needed */
 	unsigned		manual_alloc : 1;
+	/* op always needs io-wq offload */
+	unsigned		always_iowq : 1;
 
 	int (*issue)(struct io_kiocb *, unsigned int);
 	int (*prep)(struct io_kiocb *, const struct io_uring_sqe *);