From patchwork Fri Jun 17 08:48:00 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12885339
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 1/6] io_uring: don't expose io_fill_cqe_aux()
Date: Fri, 17 Jun 2022 09:48:00 +0100
List-ID: io-uring@vger.kernel.org

Deduplicate some code and add a helper for filling an aux CQE, taking
care of locking and notification.
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 18 ++++++++++++++++--
 io_uring/io_uring.h |  3 +--
 io_uring/msg_ring.c | 11 +----------
 io_uring/net.c      | 20 +++++---------------
 io_uring/poll.c     | 24 ++++++++----------------
 io_uring/rsrc.c     | 14 +++++---------
 6 files changed, 36 insertions(+), 54 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 80c433995347..7ffb8422e7d0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -673,8 +673,8 @@ bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 	return true;
 }
 
-bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
-		     u32 cflags)
+static bool io_fill_cqe_aux(struct io_ring_ctx *ctx,
+			    u64 user_data, s32 res, u32 cflags)
 {
 	struct io_uring_cqe *cqe;
 
@@ -701,6 +701,20 @@ bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 	return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
 }
 
+bool io_post_aux_cqe(struct io_ring_ctx *ctx,
+		     u64 user_data, s32 res, u32 cflags)
+{
+	bool filled;
+
+	spin_lock(&ctx->completion_lock);
+	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
+	io_commit_cqring(ctx);
+	spin_unlock(&ctx->completion_lock);
+	if (filled)
+		io_cqring_ev_posted(ctx);
+	return filled;
+}
+
 static void __io_req_complete_put(struct io_kiocb *req)
 {
 	/*
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 16e46b09253a..ce6538c9aed3 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -241,8 +241,7 @@ void io_req_complete_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
 void __io_req_complete_post(struct io_kiocb *req);
-bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
-		     u32 cflags);
+bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 void io_cqring_ev_posted(struct io_ring_ctx *ctx);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 1f2de3534932..b02be2349652 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -33,7 +33,6 @@ int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_msg *msg = io_kiocb_to_cmd(req);
 	struct io_ring_ctx *target_ctx;
-	bool filled;
 	int ret;
 
 	ret = -EBADFD;
@@ -42,16 +41,8 @@ int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
 	ret = -EOVERFLOW;
 	target_ctx = req->file->private_data;
-
-	spin_lock(&target_ctx->completion_lock);
-	filled = io_fill_cqe_aux(target_ctx, msg->user_data, msg->len, 0);
-	io_commit_cqring(target_ctx);
-	spin_unlock(&target_ctx->completion_lock);
-
-	if (filled) {
-		io_cqring_ev_posted(target_ctx);
+	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
 		ret = 0;
-	}
 
 done:
 	if (ret < 0)
diff --git a/io_uring/net.c b/io_uring/net.c
index cd931dae1313..4481deda8607 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -647,22 +647,12 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
 		io_req_set_res(req, ret, 0);
 		return IOU_OK;
 	}
-	if (ret >= 0) {
-		bool filled;
-
-		spin_lock(&ctx->completion_lock);
-		filled = io_fill_cqe_aux(ctx, req->cqe.user_data, ret,
-					 IORING_CQE_F_MORE);
-		io_commit_cqring(ctx);
-		spin_unlock(&ctx->completion_lock);
-		if (filled) {
-			io_cqring_ev_posted(ctx);
-			goto retry;
-		}
-		ret = -ECANCELED;
-	}
-	return ret;
+	if (ret < 0)
+		return ret;
+	if (io_post_aux_cqe(ctx, req->cqe.user_data, ret, IORING_CQE_F_MORE))
+		goto retry;
+	return -ECANCELED;
 }
 
 int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 7f245f5617f6..d4bfc6d945cf 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -213,23 +213,15 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
 		if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
 			__poll_t mask = mangle_poll(req->cqe.res & req->apoll_events);
-			bool filled;
-
-			spin_lock(&ctx->completion_lock);
-			filled = io_fill_cqe_aux(ctx, req->cqe.user_data,
-						 mask, IORING_CQE_F_MORE);
-			io_commit_cqring(ctx);
-			spin_unlock(&ctx->completion_lock);
-			if (filled) {
-				io_cqring_ev_posted(ctx);
-				continue;
-			}
-			return -ECANCELED;
-		}
-		ret = io_poll_issue(req, locked);
-		if (ret)
-			return ret;
+			if (!io_post_aux_cqe(ctx, req->cqe.user_data,
+					     mask, IORING_CQE_F_MORE))
+				return -ECANCELED;
+		} else {
+			ret = io_poll_issue(req, locked);
+			if (ret)
+				return ret;
+		}
 
 		/*
 		 * Release all references, retry if someone tried to restart
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 2f893e3f5c15..c10c512aa71b 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -173,17 +173,13 @@ static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
 		list_del(&prsrc->list);
 		if (prsrc->tag) {
-			if (ctx->flags & IORING_SETUP_IOPOLL)
+			if (ctx->flags & IORING_SETUP_IOPOLL) {
 				mutex_lock(&ctx->uring_lock);
-
-			spin_lock(&ctx->completion_lock);
-			io_fill_cqe_aux(ctx, prsrc->tag, 0, 0);
-			io_commit_cqring(ctx);
-			spin_unlock(&ctx->completion_lock);
-			io_cqring_ev_posted(ctx);
-
-			if (ctx->flags & IORING_SETUP_IOPOLL)
+				io_post_aux_cqe(ctx, prsrc->tag, 0, 0);
 				mutex_unlock(&ctx->uring_lock);
+			} else {
+				io_post_aux_cqe(ctx, prsrc->tag, 0, 0);
+			}
 		}
 
 		rsrc_data->do_put(ctx, prsrc);

From patchwork Fri Jun 17 08:48:01 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12885340
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 2/6] io_uring: don't inline __io_get_cqe()
Date: Fri, 17 Jun 2022 09:48:01 +0100

__io_get_cqe() is not as hot as io_get_cqe(), so there is no need to
inline it; un-inlining it sheds ~500B from the binary.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 35 +++++++++++++++++++++++++++++++++++
 io_uring/io_uring.h | 36 +-----------------------------------
 2 files changed, 36 insertions(+), 35 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7ffb8422e7d0..a3b1339335c5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -165,6 +165,11 @@ static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
 		__io_submit_flush_completions(ctx);
 }
 
+static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
+{
+	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
+}
+
 static bool io_match_linked(struct io_kiocb *head)
 {
 	struct io_kiocb *req;
@@ -673,6 +678,36 @@ bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 	return true;
 }
 
+/*
+ * writes to the cq entry need to come after reading head; the
+ * control dependency is enough as we're using WRITE_ONCE to
+ * fill the cq entry
+ */
+struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
+{
+	struct io_rings *rings = ctx->rings;
+	unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
+	unsigned int shift = 0;
+	unsigned int free, queued, len;
+
+	if (ctx->flags & IORING_SETUP_CQE32)
+		shift = 1;
+
+	/* userspace may cheat modifying the tail, be safe and do min */
+	queued = min(__io_cqring_events(ctx), ctx->cq_entries);
+	free = ctx->cq_entries - queued;
+	/* we need a contiguous range, limit based on the current array offset */
+	len = min(free, ctx->cq_entries - off);
+	if (!len)
+		return NULL;
+
+	ctx->cached_cq_tail++;
+	ctx->cqe_cached = &rings->cqes[off];
+	ctx->cqe_sentinel = ctx->cqe_cached + len;
+	ctx->cqe_cached++;
+	return &rings->cqes[off << shift];
+}
+
 static bool io_fill_cqe_aux(struct io_ring_ctx *ctx,
 			    u64 user_data, s32 res, u32 cflags)
 {
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index ce6538c9aed3..51032a494aec 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -16,44 +16,10 @@ enum {
 	IOU_ISSUE_SKIP_COMPLETE = -EIOCBQUEUED,
 };
 
+struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx);
 bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 			      u32 cflags, u64 extra1, u64 extra2);
 
-static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
-{
-	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
-}
-
-/*
- * writes to the cq entry need to come after reading head; the
- * control dependency is enough as we're using WRITE_ONCE to
- * fill the cq entry
- */
-static inline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
-{
-	struct io_rings *rings = ctx->rings;
-	unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
-	unsigned int shift = 0;
-	unsigned int free, queued, len;
-
-	if (ctx->flags & IORING_SETUP_CQE32)
-		shift = 1;
-
-	/* userspace may cheat modifying the tail, be safe and do min */
-	queued = min(__io_cqring_events(ctx), ctx->cq_entries);
-	free = ctx->cq_entries - queued;
-	/* we need a contiguous range, limit based on the current array offset */
-	len = min(free, ctx->cq_entries - off);
-	if (!len)
-		return NULL;
-
-	ctx->cached_cq_tail++;
-	ctx->cqe_cached = &rings->cqes[off];
-	ctx->cqe_sentinel = ctx->cqe_cached + len;
-	ctx->cqe_cached++;
-	return &rings->cqes[off << shift];
-}
-
 static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 {
 	if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {

From patchwork Fri Jun 17 08:48:02 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12885341
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 3/6] io_uring: introduce io_req_cqe_overflow()
Date: Fri, 17 Jun 2022 09:48:02 +0100
Message-Id: <048b9fbcce56814d77a1a540409c98c3d383edcb.1655455613.git.asml.silence@gmail.com>

__io_fill_cqe_req() is hot and inlined, so we want it to be as small as
possible. Add io_req_cqe_overflow(), which accepts only a request and
does all the overflow accounting, and use it to replace two calls to
the 6-argument io_cqring_event_overflow().
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 15 +++++++++++++--
 io_uring/io_uring.h | 12 ++----------
 2 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index a3b1339335c5..263d7e4f1b41 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -640,8 +640,8 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
 	}
 }
 
-bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data, s32 res,
-			      u32 cflags, u64 extra1, u64 extra2)
+static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
+				     s32 res, u32 cflags, u64 extra1, u64 extra2)
 {
 	struct io_overflow_cqe *ocqe;
 	size_t ocq_size = sizeof(struct io_overflow_cqe);
@@ -678,6 +678,17 @@ bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 	return true;
 }
 
+bool io_req_cqe_overflow(struct io_kiocb *req)
+{
+	if (!(req->flags & REQ_F_CQE32_INIT)) {
+		req->extra1 = 0;
+		req->extra2 = 0;
+	}
+	return io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+					req->cqe.res, req->cqe.flags,
+					req->extra1, req->extra2);
+}
+
 /*
  * writes to the cq entry need to come after reading head; the
  * control dependency is enough as we're using WRITE_ONCE to
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 51032a494aec..668fff18d3cc 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -17,8 +17,7 @@ enum {
 };
 
 struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx);
-bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data, s32 res,
-			      u32 cflags, u64 extra1, u64 extra2);
+bool io_req_cqe_overflow(struct io_kiocb *req);
 
 static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 {
@@ -58,10 +57,6 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 			memcpy(cqe, &req->cqe, sizeof(*cqe));
 			return true;
 		}
-
-		return io_cqring_event_overflow(ctx, req->cqe.user_data,
-						req->cqe.res, req->cqe.flags,
-						0, 0);
 	} else {
 		u64 extra1 = 0, extra2 = 0;
 
@@ -85,11 +80,8 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 			WRITE_ONCE(cqe->big_cqe[1], extra2);
 			return true;
 		}
-
-		return io_cqring_event_overflow(ctx, req->cqe.user_data,
-						req->cqe.res, req->cqe.flags,
-						extra1, extra2);
 	}
+	return io_req_cqe_overflow(req);
 }
 
 static inline void req_set_fail(struct io_kiocb *req)

From patchwork Fri Jun 17 08:48:03 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12885342
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 4/6] io_uring: deduplicate __io_fill_cqe_req tracing
Date: Fri, 17 Jun 2022 09:48:03 +0100
Message-Id: <277ed85dba5189ab7d932164b314013a0f0b0fdc.1655455613.git.asml.silence@gmail.com>

Deduplicate two trace_io_uring_complete() calls in __io_fill_cqe_req().
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.h | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 668fff18d3cc..4134b206c33c 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -43,10 +43,12 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 {
 	struct io_uring_cqe *cqe;
 
-	if (!(ctx->flags & IORING_SETUP_CQE32)) {
-		trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
-					req->cqe.res, req->cqe.flags, 0, 0);
+	trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+				req->cqe.res, req->cqe.flags,
+				(req->flags & REQ_F_CQE32_INIT) ? req->extra1 : 0,
+				(req->flags & REQ_F_CQE32_INIT) ? req->extra2 : 0);
 
+	if (!(ctx->flags & IORING_SETUP_CQE32)) {
 		/*
 		 * If we can't get a cq entry, userspace overflowed the
 		 * submission (by quite a lot). Increment the overflow count in
@@ -65,9 +67,6 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 			extra2 = req->extra2;
 		}
 
-		trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
-					req->cqe.res, req->cqe.flags, extra1, extra2);
-
 		/*
 		 * If we can't get a cq entry, userspace overflowed the
 		 * submission (by quite a lot). Increment the overflow count in

From patchwork Fri Jun 17 08:48:04 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12885343
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 5/6] io_uring: deduplicate io_get_cqe() calls
Date: Fri, 17 Jun 2022 09:48:04 +0100
Message-Id: <4fa077986cc3abab7c59ff4e7c390c783885465f.1655455613.git.asml.silence@gmail.com>

Deduplicate calls to io_get_cqe() from __io_fill_cqe_req().

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.h | 38 +++++++++++++-------------------------
 1 file changed, 13 insertions(+), 25 deletions(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 4134b206c33c..cd29d91c2175 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -47,19 +47,17 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 				req->cqe.res, req->cqe.flags,
 				(req->flags & REQ_F_CQE32_INIT) ? req->extra1 : 0,
 				(req->flags & REQ_F_CQE32_INIT) ? req->extra2 : 0);
+	/*
+	 * If we can't get a cq entry, userspace overflowed the
+	 * submission (by quite a lot). Increment the overflow count in
+	 * the ring.
+	 */
+	cqe = io_get_cqe(ctx);
+	if (unlikely(!cqe))
+		return io_req_cqe_overflow(req);
+	memcpy(cqe, &req->cqe, sizeof(*cqe));
 
-	if (!(ctx->flags & IORING_SETUP_CQE32)) {
-		/*
-		 * If we can't get a cq entry, userspace overflowed the
-		 * submission (by quite a lot). Increment the overflow count in
-		 * the ring.
-		 */
-		cqe = io_get_cqe(ctx);
-		if (likely(cqe)) {
-			memcpy(cqe, &req->cqe, sizeof(*cqe));
-			return true;
-		}
-	} else {
+	if (ctx->flags & IORING_SETUP_CQE32) {
 		u64 extra1 = 0, extra2 = 0;
 
 		if (req->flags & REQ_F_CQE32_INIT) {
@@ -67,20 +65,10 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 			extra2 = req->extra2;
 		}
 
-		/*
-		 * If we can't get a cq entry, userspace overflowed the
-		 * submission (by quite a lot). Increment the overflow count in
-		 * the ring.
-		 */
-		cqe = io_get_cqe(ctx);
-		if (likely(cqe)) {
-			memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
-			WRITE_ONCE(cqe->big_cqe[0], extra1);
-			WRITE_ONCE(cqe->big_cqe[1], extra2);
-			return true;
-		}
+		WRITE_ONCE(cqe->big_cqe[0], extra1);
+		WRITE_ONCE(cqe->big_cqe[1], extra2);
 	}
-	return io_req_cqe_overflow(req);
+	return true;
 }
 
 static inline void req_set_fail(struct io_kiocb *req)

From patchwork Fri Jun 17 08:48:05 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12885344
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 6/6] io_uring: change ->cqe_cached invariant for CQE32
Date: Fri, 17 Jun 2022 09:48:05 +0100
Message-Id: <1ee1838cba16bed96381a006950b36ba640d998c.1655455613.git.asml.silence@gmail.com>
X-Mailing-List: io-uring@vger.kernel.org

With IORING_SETUP_CQE32, ->cqe_cached doesn't store a real address but
rather an implicit offset into cqes. Store the real cqe pointer instead
and increment it accordingly when CQE32 is enabled.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 15 ++++++++++-----
 io_uring/io_uring.h |  8 ++------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 263d7e4f1b41..11b4b5040020 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -698,11 +698,8 @@ struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
 {
 	struct io_rings *rings = ctx->rings;
 	unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
-	unsigned int shift = 0;
 	unsigned int free, queued, len;
 
-	if (ctx->flags & IORING_SETUP_CQE32)
-		shift = 1;
-
 	/* userspace may cheat modifying the tail, be safe and do min */
 	queued = min(__io_cqring_events(ctx), ctx->cq_entries);
@@ -712,11 +709,19 @@ struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
 	if (!len)
 		return NULL;
 
-	ctx->cached_cq_tail++;
+	if (ctx->flags & IORING_SETUP_CQE32) {
+		off <<= 1;
+		len <<= 1;
+	}
+
 	ctx->cqe_cached = &rings->cqes[off];
 	ctx->cqe_sentinel = ctx->cqe_cached + len;
+
+	ctx->cached_cq_tail++;
 	ctx->cqe_cached++;
-	return &rings->cqes[off << shift];
+	if (ctx->flags & IORING_SETUP_CQE32)
+		ctx->cqe_cached++;
+	return &rings->cqes[off];
 }
 
 static bool io_fill_cqe_aux(struct io_ring_ctx *ctx,
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index cd29d91c2175..f1b3e765495b 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -24,14 +24,10 @@ static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 	if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
 		struct io_uring_cqe *cqe = ctx->cqe_cached;
 
-		if (ctx->flags & IORING_SETUP_CQE32) {
-			unsigned int off = ctx->cqe_cached - ctx->rings->cqes;
-
-			cqe += off;
-		}
-
 		ctx->cached_cq_tail++;
 		ctx->cqe_cached++;
+		if (ctx->flags & IORING_SETUP_CQE32)
+			ctx->cqe_cached++;
 		return cqe;
 	}
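To illustrate the invariant this patch establishes: under IORING_SETUP_CQE32 each completion occupies two consecutive io_uring_cqe slots, so both the slot offset and the cached/sentinel pointers must advance by two. Below is a hedged user-space sketch of that indexing, not kernel code; `ring_model`, `get_cqe` and `SETUP_CQE32` are hypothetical stand-ins for the kernel's structures, and `len` (the number of free logical entries) is passed in rather than computed from the ring head/tail as the kernel does.

```c
#include <assert.h>

#define SETUP_CQE32 (1u << 0)	/* stand-in for IORING_SETUP_CQE32 */

/* One 16-byte CQE slot; a CQE32 completion spans two of these. */
struct cqe { unsigned long long user_data; int res; unsigned flags; };

struct ring_model {
	struct cqe cqes[16];	/* 16 plain entries, or 8 big ones */
	unsigned cq_entries;	/* logical CQE count, power of two */
	unsigned cached_cq_tail;
	unsigned flags;
	struct cqe *cqe_cached;
	struct cqe *cqe_sentinel;
};

/*
 * Mirrors the slow path after the patch: cqe_cached holds a real
 * pointer, and is bumped by two slots per completion under CQE32.
 */
static struct cqe *get_cqe(struct ring_model *ctx, unsigned len)
{
	unsigned off = ctx->cached_cq_tail & (ctx->cq_entries - 1);

	if (ctx->flags & SETUP_CQE32) {
		off <<= 1;	/* two slots per logical entry */
		len <<= 1;
	}
	ctx->cqe_cached = &ctx->cqes[off];
	ctx->cqe_sentinel = ctx->cqe_cached + len;

	ctx->cached_cq_tail++;
	ctx->cqe_cached++;
	if (ctx->flags & SETUP_CQE32)
		ctx->cqe_cached++;
	return &ctx->cqes[off];
}
```

Two consecutive allocations on a CQE32 ring land two slots apart while the logical tail advances by one each time, which is exactly why the old code could no longer treat `cqe_cached` as both a pointer and an offset.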