From patchwork Thu Apr 20 18:31:32 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13219069
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 1/4] io_uring: add support for NO_OFFLOAD
Date: Thu, 20 Apr 2023 12:31:32 -0600
Message-Id: <20230420183135.119618-2-axboe@kernel.dk>
In-Reply-To: <20230420183135.119618-1-axboe@kernel.dk>
References: <20230420183135.119618-1-axboe@kernel.dk>

Some applications don't necessarily care about io_uring not blocking for
request issue; they simply want to use io_uring for batched submission of IO.
However, io_uring always attempts non-blocking issue first, and for some
request types there is simply no support for non-blocking issue, so those
requests get punted to io-wq unconditionally. If the application doesn't care
about issue potentially blocking, this causes a performance slowdown, as
thread offload is not nearly as efficient as inline issue.

Add support for configuring the ring with IORING_SETUP_NO_OFFLOAD, and add an
IORING_ENTER_NO_OFFLOAD flag to io_uring_enter(2). If either one of these is
set, then io_uring will not insist on non-blocking issue for any file which
we cannot poll for readiness.

The simplified io_uring issue model looks as follows:

1) Non-blocking issue is attempted for IO. If successful, we're done for now.

2) Case 1 failed. Now we have two options:
   a) We can poll the file. We arm poll, and we're done for now until that
      triggers.
   b) File cannot be polled, we punt to io-wq, which then does a blocking
      attempt.

If either of the NO_OFFLOAD flags is set, we should never hit case 2b.
Instead, case 1 issues the IO without the non-blocking flag set and performs
an inline completion.

Signed-off-by: Jens Axboe
---
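For illustration, a minimal userspace sketch of opting in to this behavior.
It assumes the flag values proposed in this series (IORING_SETUP_NO_OFFLOAD
= 1U << 14, IORING_ENTER_NO_OFFLOAD = 1U << 5) and uses the raw syscalls,
since liburing has no helpers for these flags yet; error handling is omitted.

  /*
   * Sketch only: flag values taken from this series, not from a released
   * uapi header.
   */
  #include <linux/io_uring.h>
  #include <sys/syscall.h>
  #include <string.h>
  #include <unistd.h>

  #ifndef IORING_SETUP_NO_OFFLOAD
  #define IORING_SETUP_NO_OFFLOAD (1U << 14)   /* proposed in patch 1/4 */
  #endif
  #ifndef IORING_ENTER_NO_OFFLOAD
  #define IORING_ENTER_NO_OFFLOAD (1U << 5)    /* proposed in patch 1/4 */
  #endif

  /* Ring-wide opt-in: issue inline (possibly blocking) instead of punting
   * unpollable files to io-wq. Returns the ring fd or -1 on error. */
  static int setup_no_offload_ring(unsigned entries, struct io_uring_params *p)
  {
          memset(p, 0, sizeof(*p));
          p->flags = IORING_SETUP_NO_OFFLOAD;
          return (int) syscall(__NR_io_uring_setup, entries, p);
  }

  /* Per-submission opt-in via io_uring_enter(2), same effect as the setup
   * flag but only for this call. */
  static int submit_no_offload(int ring_fd, unsigned to_submit)
  {
          return (int) syscall(__NR_io_uring_enter, ring_fd, to_submit,
                               0, IORING_ENTER_NO_OFFLOAD, NULL, 0);
  }

The setup flag applies to every submission on the ring; the enter flag lets
an application choose per io_uring_enter(2) call, e.g. only for batches where
submission latency does not matter.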
 include/linux/io_uring.h      |  1 +
 include/uapi/linux/io_uring.h |  7 +++++
 io_uring/io_uring.c           | 52 +++++++++++++++++++++++++----------
 io_uring/io_uring.h           |  2 +-
 io_uring/sqpoll.c             |  6 ++--
 5 files changed, 50 insertions(+), 18 deletions(-)

diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 35b9328ca335..386d6b722481 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -13,6 +13,7 @@ enum io_uring_cmd_flags {
         IO_URING_F_MULTISHOT = 4,
         /* executed by io-wq */
         IO_URING_F_IOWQ = 8,
+        IO_URING_F_NO_OFFLOAD = 16,
         /* int's last bit, sign checks are usually faster than a bit test */
         IO_URING_F_NONBLOCK = INT_MIN,
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 0716cb17e436..ea903a677ce9 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -173,6 +173,12 @@ enum {
  */
 #define IORING_SETUP_DEFER_TASKRUN (1U << 13)
 
+/*
+ * Don't attempt non-blocking issue on file types that would otherwise
+ * punt to io-wq if they cannot be completed non-blocking.
+ */
+#define IORING_SETUP_NO_OFFLOAD (1U << 14)
+
 enum io_uring_op {
         IORING_OP_NOP,
         IORING_OP_READV,
@@ -443,6 +449,7 @@ struct io_cqring_offsets {
 #define IORING_ENTER_SQ_WAIT (1U << 2)
 #define IORING_ENTER_EXT_ARG (1U << 3)
 #define IORING_ENTER_REGISTERED_RING (1U << 4)
+#define IORING_ENTER_NO_OFFLOAD (1U << 5)
 
 /*
  * Passed in for io_uring_setup(2). Copied back with updated info on success
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3d43df8f1e4e..fee3e461e149 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -147,7 +147,7 @@ static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 static void io_dismantle_req(struct io_kiocb *req);
 static void io_clean_op(struct io_kiocb *req);
-static void io_queue_sqe(struct io_kiocb *req);
+static void io_queue_sqe(struct io_kiocb *req, unsigned int issue_flags);
 static void io_move_task_work_from_local(struct io_ring_ctx *ctx);
 static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
 static __cold void io_fallback_tw(struct io_uring_task *tctx);
@@ -1471,7 +1471,7 @@ void io_req_task_submit(struct io_kiocb *req, struct io_tw_state *ts)
         else if (req->flags & REQ_F_FORCE_ASYNC)
                 io_queue_iowq(req, ts);
         else
-                io_queue_sqe(req);
+                io_queue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
 }
 
 void io_req_task_queue_fail(struct io_kiocb *req, int ret)
@@ -1947,6 +1947,10 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
         if (unlikely(!io_assign_file(req, def, issue_flags)))
                 return -EBADF;
 
+        if (issue_flags & IO_URING_F_NO_OFFLOAD &&
+            (!req->file || !file_can_poll(req->file)))
+                issue_flags &= ~IO_URING_F_NONBLOCK;
+
         if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
                 creds = override_creds(req->creds);
@@ -2120,12 +2124,12 @@ static void io_queue_async(struct io_kiocb *req, int ret)
                 io_queue_linked_timeout(linked_timeout);
 }
 
-static inline void io_queue_sqe(struct io_kiocb *req)
+static inline void io_queue_sqe(struct io_kiocb *req, unsigned int issue_flags)
         __must_hold(&req->ctx->uring_lock)
 {
         int ret;
 
-        ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
+        ret = io_issue_sqe(req, issue_flags);
 
         /*
          * We async punt it if the file wasn't marked NOWAIT, or if the file
@@ -2337,7 +2341,8 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 }
 
 static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-                        const struct io_uring_sqe *sqe)
+                        const struct io_uring_sqe *sqe,
+                        unsigned int aux_issue_flags)
         __must_hold(&ctx->uring_lock)
 {
         struct io_submit_link *link = &ctx->submit_state.link;
@@ -2385,7 +2390,8 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
                 return 0;
         }
 
-        io_queue_sqe(req);
+        io_queue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER|
+                          aux_issue_flags);
         return 0;
 }
@@ -2466,7 +2472,8 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
         return false;
 }
 
-int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
+                   unsigned int aux_issue_flags)
         __must_hold(&ctx->uring_lock)
 {
         unsigned int entries = io_sqring_entries(ctx);
@@ -2495,7 +2502,7 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
                  * Continue submitting even for sqe failure if the
                  * ring was setup with IORING_SETUP_SUBMIT_ALL
                  */
-                if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
+                if (unlikely(io_submit_sqe(ctx, req, sqe, aux_issue_flags)) &&
                     !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
                         left--;
                         break;
@@ -3524,7 +3531,8 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
         if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
                                IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
-                               IORING_ENTER_REGISTERED_RING)))
+                               IORING_ENTER_REGISTERED_RING |
+                               IORING_ENTER_NO_OFFLOAD)))
                 return -EINVAL;
 
         /*
@@ -3575,12 +3583,18 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
                 ret = to_submit;
         } else if (to_submit) {
+                unsigned int aux_issue_flags = 0;
+
                 ret = io_uring_add_tctx_node(ctx);
                 if (unlikely(ret))
                         goto out;
 
+                if (flags & IORING_ENTER_NO_OFFLOAD ||
+                    ctx->flags & IORING_SETUP_NO_OFFLOAD)
+                        aux_issue_flags = IO_URING_F_NO_OFFLOAD;
+
                 mutex_lock(&ctx->uring_lock);
-                ret = io_submit_sqes(ctx, to_submit);
+                ret = io_submit_sqes(ctx, to_submit, aux_issue_flags);
                 if (ret != to_submit) {
                         mutex_unlock(&ctx->uring_lock);
                         goto out;
                 }
@@ -3827,9 +3841,17 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
          * polling again, they can rely on io_sq_thread to do polling
          * work, which can reduce cpu usage and uring_lock contention.
          */
-        if (ctx->flags & IORING_SETUP_IOPOLL &&
-            !(ctx->flags & IORING_SETUP_SQPOLL))
-                ctx->syscall_iopoll = 1;
+        ret = -EINVAL;
+        if (ctx->flags & IORING_SETUP_IOPOLL) {
+                /*
+                 * Can't sanely block for issue for IOPOLL, nor does this
+                 * combination make any sense. Disallow it.
+                 */
+                if (ctx->flags & IORING_SETUP_NO_OFFLOAD)
+                        goto err;
+                if (!(ctx->flags & IORING_SETUP_SQPOLL))
+                        ctx->syscall_iopoll = 1;
+        }
 
         ctx->compat = in_compat_syscall();
         if (!capable(CAP_IPC_LOCK))
@@ -3839,7 +3861,6 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
          * For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
          * COOP_TASKRUN is set, then IPIs are never needed by the app.
          */
-        ret = -EINVAL;
         if (ctx->flags & IORING_SETUP_SQPOLL) {
                 /* IPI related flags don't make sense with SQPOLL */
                 if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
@@ -3969,7 +3990,8 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
                         IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
                         IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
                         IORING_SETUP_SQE128 | IORING_SETUP_CQE32 |
-                        IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN))
+                        IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN |
+                        IORING_SETUP_NO_OFFLOAD))
                 return -EINVAL;
 
         return io_uring_create(entries, &p, params);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 25515d69d205..fb3619ae0fd3 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -76,7 +76,7 @@ int io_uring_alloc_task_context(struct task_struct *task,
                                 struct io_ring_ctx *ctx);
 
 int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts);
-int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr);
+int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr, unsigned int aux_issue_flags);
 int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin);
 void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node);
 int io_req_prep_async(struct io_kiocb *req);
diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 9db4bc1f521a..9f2968a441ce 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -166,7 +166,7 @@ static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
 
 static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
 {
-        unsigned int to_submit;
+        unsigned int to_submit, aux_issue_flags = 0;
         int ret = 0;
 
         to_submit = io_sqring_entries(ctx);
@@ -179,6 +179,8 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
                 if (ctx->sq_creds != current_cred())
                         creds = override_creds(ctx->sq_creds);
 
+                if (ctx->flags & IORING_SETUP_NO_OFFLOAD)
+                        aux_issue_flags = IO_URING_F_NO_OFFLOAD;
                 mutex_lock(&ctx->uring_lock);
                 if (!wq_list_empty(&ctx->iopoll_list))
@@ -190,7 +192,7 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
                  */
                 if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) &&
                     !(ctx->flags & IORING_SETUP_R_DISABLED))
-                        ret = io_submit_sqes(ctx, to_submit);
+                        ret = io_submit_sqes(ctx, to_submit, aux_issue_flags);
                 mutex_unlock(&ctx->uring_lock);
 
                 if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))

From patchwork Thu Apr 20 18:31:33 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13219070
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/4] Revert "io_uring: always go async for unsupported fadvise flags"
"io_uring: always go async for unsupported fadvise flags" Date: Thu, 20 Apr 2023 12:31:33 -0600 Message-Id: <20230420183135.119618-3-axboe@kernel.dk> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230420183135.119618-1-axboe@kernel.dk> References: <20230420183135.119618-1-axboe@kernel.dk> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This reverts commit c31cc60fddd11134031e7f9eb76812353cfaac84. In preparation for handling this a bit differently, revert this cleanup. Signed-off-by: Jens Axboe --- io_uring/advise.c | 25 ++++++++++--------------- 1 file changed, 10 insertions(+), 15 deletions(-) diff --git a/io_uring/advise.c b/io_uring/advise.c index 7085804c513c..cf600579bffe 100644 --- a/io_uring/advise.c +++ b/io_uring/advise.c @@ -62,18 +62,6 @@ int io_madvise(struct io_kiocb *req, unsigned int issue_flags) #endif } -static bool io_fadvise_force_async(struct io_fadvise *fa) -{ - switch (fa->advice) { - case POSIX_FADV_NORMAL: - case POSIX_FADV_RANDOM: - case POSIX_FADV_SEQUENTIAL: - return false; - default: - return true; - } -} - int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { struct io_fadvise *fa = io_kiocb_to_cmd(req, struct io_fadvise); @@ -84,8 +72,6 @@ int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) fa->offset = READ_ONCE(sqe->off); fa->len = READ_ONCE(sqe->len); fa->advice = READ_ONCE(sqe->fadvise_advice); - if (io_fadvise_force_async(fa)) - req->flags |= REQ_F_FORCE_ASYNC; return 0; } @@ -94,7 +80,16 @@ int io_fadvise(struct io_kiocb *req, unsigned int issue_flags) struct io_fadvise *fa = io_kiocb_to_cmd(req, struct io_fadvise); int ret; - WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK && io_fadvise_force_async(fa)); + if (issue_flags & IO_URING_F_NONBLOCK) { + switch (fa->advice) { + case POSIX_FADV_NORMAL: + case POSIX_FADV_RANDOM: + case POSIX_FADV_SEQUENTIAL: + break; + default: + return -EAGAIN; + } + } ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice); if (ret < 0) From patchwork Thu Apr 20 18:31:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Axboe X-Patchwork-Id: 13219071 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65C8DC77B72 for ; Thu, 20 Apr 2023 18:32:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231977AbjDTScg (ORCPT ); Thu, 20 Apr 2023 14:32:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231893AbjDTScZ (ORCPT ); Thu, 20 Apr 2023 14:32:25 -0400 Received: from mail-il1-x12b.google.com (mail-il1-x12b.google.com [IPv6:2607:f8b0:4864:20::12b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5C12261B4 for ; Thu, 20 Apr 2023 11:31:42 -0700 (PDT) Received: by mail-il1-x12b.google.com with SMTP id e9e14a558f8ab-32ad7e5627bso501155ab.1 for ; Thu, 20 Apr 2023 11:31:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=kernel-dk.20221208.gappssmtp.com; s=20221208; t=1682015501; x=1684607501; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=kjyeiwI3ZJ4+QDyhsVN2X72OtyXD1sdnDa91+MMvwVU=; 
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/4] Revert "io_uring: for requests that require async, force it"
Date: Thu, 20 Apr 2023 12:31:34 -0600
Message-Id: <20230420183135.119618-4-axboe@kernel.dk>
In-Reply-To: <20230420183135.119618-1-axboe@kernel.dk>
References: <20230420183135.119618-1-axboe@kernel.dk>

This reverts commit aebb224fd4fc7352cd839ad90414c548387142fd.

In preparation for handling this a bit differently, revert this cleanup.

Signed-off-by: Jens Axboe
---
 io_uring/advise.c |  4 ++--
 io_uring/fs.c     | 20 ++++++++++----------
 io_uring/net.c    |  4 ++--
 io_uring/splice.c |  7 ++++---
 io_uring/statx.c  |  4 ++--
 io_uring/sync.c   | 14 ++++++--------
 io_uring/xattr.c  | 14 ++++++++------
 7 files changed, 34 insertions(+), 33 deletions(-)

diff --git a/io_uring/advise.c b/io_uring/advise.c
index cf600579bffe..449c6f14649f 100644
--- a/io_uring/advise.c
+++ b/io_uring/advise.c
@@ -39,7 +39,6 @@ int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         ma->addr = READ_ONCE(sqe->addr);
         ma->len = READ_ONCE(sqe->len);
         ma->advice = READ_ONCE(sqe->fadvise_advice);
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 #else
         return -EOPNOTSUPP;
@@ -52,7 +51,8 @@ int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
         struct io_madvise *ma = io_kiocb_to_cmd(req, struct io_madvise);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
         io_req_set_res(req, ret, 0);
diff --git a/io_uring/fs.c b/io_uring/fs.c
index f6a69a549fd4..7100c293c13a 100644
--- a/io_uring/fs.c
+++ b/io_uring/fs.c
@@ -74,7 +74,6 @@ int io_renameat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         }
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -83,7 +82,8 @@ int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
         struct io_rename *ren = io_kiocb_to_cmd(req, struct io_rename);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
                                 ren->newpath, ren->flags);
@@ -123,7 +123,6 @@ int io_unlinkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
                 return PTR_ERR(un->filename);
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -132,7 +131,8 @@ int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
         struct io_unlink *un = io_kiocb_to_cmd(req, struct io_unlink);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         if (un->flags & AT_REMOVEDIR)
                 ret = do_rmdir(un->dfd, un->filename);
@@ -170,7 +170,6 @@ int io_mkdirat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
                 return PTR_ERR(mkd->filename);
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -179,7 +178,8 @@ int io_mkdirat(struct io_kiocb *req, unsigned int issue_flags)
         struct io_mkdir *mkd = io_kiocb_to_cmd(req, struct io_mkdir);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_mkdirat(mkd->dfd, mkd->filename, mkd->mode);
 
@@ -220,7 +220,6 @@ int io_symlinkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         }
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -229,7 +228,8 @@ int io_symlinkat(struct io_kiocb *req, unsigned int issue_flags)
         struct io_link *sl = io_kiocb_to_cmd(req, struct io_link);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_symlinkat(sl->oldpath, sl->new_dfd, sl->newpath);
 
@@ -265,7 +265,6 @@ int io_linkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         }
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -274,7 +273,8 @@ int io_linkat(struct io_kiocb *req, unsigned int issue_flags)
         struct io_link *lnk = io_kiocb_to_cmd(req, struct io_link);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_linkat(lnk->old_dfd, lnk->oldpath, lnk->new_dfd,
                                 lnk->newpath, lnk->flags);
diff --git a/io_uring/net.c b/io_uring/net.c
index 4040cf093318..baa30f578dcd 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -91,7 +91,6 @@ int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
                 return -EINVAL;
 
         shutdown->how = READ_ONCE(sqe->len);
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -101,7 +100,8 @@ int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
         struct socket *sock;
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         sock = sock_from_file(req->file);
         if (unlikely(!sock))
diff --git a/io_uring/splice.c b/io_uring/splice.c
index 2a4bbb719531..53e4232d0866 100644
--- a/io_uring/splice.c
+++ b/io_uring/splice.c
@@ -34,7 +34,6 @@ static int __io_splice_prep(struct io_kiocb *req,
         if (unlikely(sp->flags & ~valid_flags))
                 return -EINVAL;
         sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -53,7 +52,8 @@ int io_tee(struct io_kiocb *req, unsigned int issue_flags)
         struct file *in;
         long ret = 0;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         if (sp->flags & SPLICE_F_FD_IN_FIXED)
                 in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
@@ -94,7 +94,8 @@ int io_splice(struct io_kiocb *req, unsigned int issue_flags)
         struct file *in;
         long ret = 0;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         if (sp->flags & SPLICE_F_FD_IN_FIXED)
                 in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
diff --git a/io_uring/statx.c b/io_uring/statx.c
index abb874209caa..d8fc933d3f59 100644
--- a/io_uring/statx.c
+++ b/io_uring/statx.c
@@ -48,7 +48,6 @@ int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         }
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -57,7 +56,8 @@ int io_statx(struct io_kiocb *req, unsigned int issue_flags)
         struct io_statx *sx = io_kiocb_to_cmd(req, struct io_statx);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_statx(sx->dfd, sx->filename, sx->flags, sx->mask, sx->buffer);
         io_req_set_res(req, ret, 0);
diff --git a/io_uring/sync.c b/io_uring/sync.c
index 255f68c37e55..64e87ea2b8fb 100644
--- a/io_uring/sync.c
+++ b/io_uring/sync.c
@@ -32,8 +32,6 @@ int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         sync->off = READ_ONCE(sqe->off);
         sync->len = READ_ONCE(sqe->len);
         sync->flags = READ_ONCE(sqe->sync_range_flags);
-        req->flags |= REQ_F_FORCE_ASYNC;
-
         return 0;
 }
 
@@ -43,7 +41,8 @@ int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
         int ret;
 
         /* sync_file_range always requires a blocking context */
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = sync_file_range(req->file, sync->off, sync->len, sync->flags);
         io_req_set_res(req, ret, 0);
@@ -63,7 +62,6 @@ int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
         sync->off = READ_ONCE(sqe->off);
         sync->len = READ_ONCE(sqe->len);
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -74,7 +72,8 @@ int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
         int ret;
         /* fsync always requires a blocking context */
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = vfs_fsync_range(req->file, sync->off, end > 0 ? end : LLONG_MAX,
                                 sync->flags & IORING_FSYNC_DATASYNC);
@@ -92,7 +91,6 @@ int io_fallocate_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
         sync->off = READ_ONCE(sqe->off);
         sync->len = READ_ONCE(sqe->addr);
         sync->mode = READ_ONCE(sqe->len);
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -102,8 +100,8 @@ int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
         int ret;
 
         /* fallocate always requiring blocking context */
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
-
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
         ret = vfs_fallocate(req->file, sync->mode, sync->off, sync->len);
         if (ret >= 0)
                 fsnotify_modify(req->file);
diff --git a/io_uring/xattr.c b/io_uring/xattr.c
index e1c810e0b85a..6201a9f442c6 100644
--- a/io_uring/xattr.c
+++ b/io_uring/xattr.c
@@ -75,7 +75,6 @@ static int __io_getxattr_prep(struct io_kiocb *req,
         }
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -110,7 +109,8 @@ int io_fgetxattr(struct io_kiocb *req, unsigned int issue_flags)
         struct io_xattr *ix = io_kiocb_to_cmd(req, struct io_xattr);
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = do_getxattr(mnt_idmap(req->file->f_path.mnt),
                         req->file->f_path.dentry,
@@ -127,7 +127,8 @@ int io_getxattr(struct io_kiocb *req, unsigned int issue_flags)
         struct path path;
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
 retry:
         ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
@@ -173,7 +174,6 @@ static int __io_setxattr_prep(struct io_kiocb *req,
         }
 
         req->flags |= REQ_F_NEED_CLEANUP;
-        req->flags |= REQ_F_FORCE_ASYNC;
         return 0;
 }
 
@@ -222,7 +222,8 @@ int io_fsetxattr(struct io_kiocb *req, unsigned int issue_flags)
 {
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
         ret = __io_setxattr(req, issue_flags, &req->file->f_path);
         io_xattr_finish(req, ret);
@@ -236,7 +237,8 @@ int io_setxattr(struct io_kiocb *req, unsigned int issue_flags)
         struct path path;
         int ret;
 
-        WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+        if (issue_flags & IO_URING_F_NONBLOCK)
+                return -EAGAIN;
 
 retry:
         ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);

From patchwork Thu Apr 20 18:31:35 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13219072
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 4/4] io_uring: mark opcodes that always need io-wq punt
Date: Thu, 20 Apr 2023 12:31:35 -0600
Message-Id: <20230420183135.119618-5-axboe@kernel.dk>
In-Reply-To: <20230420183135.119618-1-axboe@kernel.dk>
References: <20230420183135.119618-1-axboe@kernel.dk>

Add an opdef bit for opcodes that always need io-wq punt, and set it for the
opcodes where that is the case. With that done, exclude those opcodes from
the file_can_poll() check that decides whether they still need to be punted
when either of the NO_OFFLOAD flags is set.

Signed-off-by: Jens Axboe
---
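As an aside, a rough sketch of the resulting issue-time decision once this
patch is combined with patch 1. This is illustrative pseudo-kernel code, not
the kernel implementation; the real change is the one-line addition to
io_issue_sqe() in the diff below.

  /*
   * Illustrative sketch only, not the actual kernel code: with patches 1
   * and 4 applied, io_issue_sqe() effectively decides as follows.
   */
  static bool no_offload_issue_inline(struct io_kiocb *req,
                                      const struct io_issue_def *def,
                                      unsigned int issue_flags)
  {
          /* Only applies if the ring or this io_uring_enter() opted in */
          if (!(issue_flags & IO_URING_F_NO_OFFLOAD))
                  return false;
          /* No file, or a file we cannot poll for readiness: arming poll
           * cannot help, so allow a blocking inline issue (patch 1). */
          if (!req->file || !file_can_poll(req->file))
                  return true;
          /* Opcode never supports non-blocking issue (this patch): polling
           * would be pointless as well, so also issue it inline. */
          return def->always_iowq;
  }

  /* When this returns true, io_issue_sqe() clears IO_URING_F_NONBLOCK so
   * the handler runs the operation inline instead of punting it to io-wq. */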
 io_uring/io_uring.c |  2 +-
 io_uring/opdef.c    | 22 ++++++++++++++++++++--
 io_uring/opdef.h    |  2 ++
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index fee3e461e149..420cfd35ebc6 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1948,7 +1948,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
                 return -EBADF;
 
         if (issue_flags & IO_URING_F_NO_OFFLOAD &&
-            (!req->file || !file_can_poll(req->file)))
+            (!req->file || !file_can_poll(req->file) || def->always_iowq))
                 issue_flags &= ~IO_URING_F_NONBLOCK;
 
         if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index cca7c5b55208..686d46001622 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -82,6 +82,7 @@ const struct io_issue_def io_issue_defs[] = {
         [IORING_OP_FSYNC] = {
                 .needs_file = 1,
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_fsync_prep,
                 .issue = io_fsync,
         },
@@ -125,6 +126,7 @@ const struct io_issue_def io_issue_defs[] = {
         [IORING_OP_SYNC_FILE_RANGE] = {
                 .needs_file = 1,
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_sfr_prep,
                 .issue = io_sync_file_range,
         },
@@ -202,6 +204,7 @@ const struct io_issue_def io_issue_defs[] = {
         },
         [IORING_OP_FALLOCATE] = {
                 .needs_file = 1,
+                .always_iowq = 1,
                 .prep = io_fallocate_prep,
                 .issue = io_fallocate,
         },
@@ -221,6 +224,7 @@ const struct io_issue_def io_issue_defs[] = {
         },
         [IORING_OP_STATX] = {
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_statx_prep,
                 .issue = io_statx,
         },
@@ -253,11 +257,13 @@ const struct io_issue_def io_issue_defs[] = {
         [IORING_OP_FADVISE] = {
                 .needs_file = 1,
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_fadvise_prep,
                 .issue = io_fadvise,
         },
         [IORING_OP_MADVISE] = {
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_madvise_prep,
                 .issue = io_madvise,
         },
@@ -308,6 +314,7 @@ const struct io_issue_def io_issue_defs[] = {
                 .hash_reg_file = 1,
                 .unbound_nonreg_file = 1,
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_splice_prep,
                 .issue = io_splice,
         },
@@ -328,11 +335,13 @@ const struct io_issue_def io_issue_defs[] = {
                 .hash_reg_file = 1,
                 .unbound_nonreg_file = 1,
                 .audit_skip = 1,
+                .always_iowq = 1,
                 .prep = io_tee_prep,
                 .issue = io_tee,
         },
         [IORING_OP_SHUTDOWN] = {
                 .needs_file = 1,
+                .always_iowq = 1,
 #if defined(CONFIG_NET)
                 .prep = io_shutdown_prep,
                 .issue = io_shutdown,
@@ -343,22 +352,27 @@ const struct io_issue_def io_issue_defs[] = {
         [IORING_OP_RENAMEAT] = {
                 .prep = io_renameat_prep,
                 .issue = io_renameat,
+                .always_iowq = 1,
         },
         [IORING_OP_UNLINKAT] = {
                 .prep = io_unlinkat_prep,
                 .issue = io_unlinkat,
+                .always_iowq = 1,
         },
         [IORING_OP_MKDIRAT] = {
                 .prep = io_mkdirat_prep,
                 .issue = io_mkdirat,
+                .always_iowq = 1,
         },
         [IORING_OP_SYMLINKAT] = {
                 .prep = io_symlinkat_prep,
                 .issue = io_symlinkat,
+                .always_iowq = 1,
         },
         [IORING_OP_LINKAT] = {
                 .prep = io_linkat_prep,
                 .issue = io_linkat,
+                .always_iowq = 1,
         },
         [IORING_OP_MSG_RING] = {
                 .needs_file = 1,
@@ -367,20 +381,24 @@ const struct io_issue_def io_issue_defs[] = {
                 .issue = io_msg_ring,
         },
         [IORING_OP_FSETXATTR] = {
-                .needs_file = 1,
+                .needs_file = 1,
+                .always_iowq = 1,
                 .prep = io_fsetxattr_prep,
                 .issue = io_fsetxattr,
         },
         [IORING_OP_SETXATTR] = {
+                .always_iowq = 1,
                 .prep = io_setxattr_prep,
                 .issue = io_setxattr,
         },
         [IORING_OP_FGETXATTR] = {
-                .needs_file = 1,
+                .needs_file = 1,
+                .always_iowq = 1,
                 .prep = io_fgetxattr_prep,
                 .issue = io_fgetxattr,
         },
         [IORING_OP_GETXATTR] = {
+                .always_iowq = 1,
                 .prep = io_getxattr_prep,
                 .issue = io_getxattr,
         },
diff --git a/io_uring/opdef.h b/io_uring/opdef.h
index c22c8696e749..657a831249ff 100644
--- a/io_uring/opdef.h
+++ b/io_uring/opdef.h
@@ -29,6 +29,8 @@ struct io_issue_def {
         unsigned iopoll_queue : 1;
         /* opcode specific path will handle ->async_data allocation if needed */
         unsigned manual_alloc : 1;
+        /* op always needs io-wq offload */
+        unsigned always_iowq : 1;
 
         int (*issue)(struct io_kiocb *, unsigned int);
         int (*prep)(struct io_kiocb *, const struct io_uring_sqe *);