From patchwork Fri Jul 28 20:14:48 2023
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 13332641
From: Keith Busch
Subject: [PATCH 2/3] io_uring: split req prep and submit loops
Date: Fri, 28 Jul 2023 13:14:48 -0700
Message-ID: <20230728201449.3350962-2-kbusch@meta.com>
In-Reply-To: <20230728201449.3350962-1-kbusch@meta.com>
References: <20230728201449.3350962-1-kbusch@meta.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Keith Busch

Do all the prep work up front, then dispatch all synchronous requests
at once. This will make it easier to count batches for plugging.

Signed-off-by: Keith Busch
---
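For intuition, a rough standalone sketch of the two-pass shape this gives
io_submit_sqes() (plain userspace C, not kernel code; the hand-rolled list
and the offsetof() arithmetic stand in for the io_wq_work_list helpers and
container_of(), and all names here are made up for illustration):

	/* Standalone illustration only: prep-then-submit split. */
	#include <stddef.h>
	#include <stdio.h>

	struct node {
		struct node *next;
	};

	struct req {
		int id;
		int fallback;	/* stands in for REQ_F_FORCE_ASYNC | REQ_F_FAIL */
		struct node comp_list;
	};

	int main(void)
	{
		struct req reqs[4] = {
			{ .id = 0 }, { .id = 1, .fallback = 1 },
			{ .id = 2 }, { .id = 3 },
		};
		struct node head = { .next = NULL }, *tail = &head;
		struct node *pos, *n;
		int i;

		/* pass 1: prep every request; fallback ones dispatch now */
		for (i = 0; i < 4; i++) {
			if (reqs[i].fallback) {
				printf("fallback req %d\n", reqs[i].id);
				continue;
			}
			/* like wq_list_add_tail(&req->comp_list, &req_list) */
			tail->next = &reqs[i].comp_list;
			tail = &reqs[i].comp_list;
		}

		/* pass 2: submit the synchronous batch in one go */
		for (pos = head.next, n = pos ? pos->next : NULL;
		     pos; pos = n, n = pos ? pos->next : NULL) {
			struct req *r = (struct req *)((char *)pos -
					offsetof(struct req, comp_list));
			pos->next = NULL;
			printf("submit req %d\n", r->id);
		}
		return 0;
	}

The point of the second pass is that every synchronous request is now
visible as one contiguous batch, which is what the series needs in order
to count batches for plugging.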
 io_uring/io_uring.c | 26 ++++++++++++++++++--------
 io_uring/slist.h    |  4 ++++
 2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 818b2d1661c5e..5434aef0a8ef7 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1082,6 +1082,7 @@ static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
 	req->ctx = ctx;
 	req->link = NULL;
 	req->async_data = NULL;
+	req->comp_list.next = NULL;
 	/* not necessary, but safer to zero */
 	req->cqe.res = 0;
 }
@@ -2282,11 +2283,7 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 static inline void io_submit_sqe(struct io_kiocb *req)
 {
 	trace_io_uring_submit_req(req);
-
-	if (unlikely(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL)))
-		io_queue_sqe_fallback(req);
-	else
-		io_queue_sqe(req);
+	io_queue_sqe(req);
 }
 
 static int io_setup_link(struct io_submit_link *link, struct io_kiocb **orig)
@@ -2409,6 +2406,9 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 {
 	struct io_submit_link *link = &ctx->submit_state.link;
 	unsigned int entries = io_sqring_entries(ctx);
+	struct io_wq_work_node *pos, *next;
+	struct io_wq_work_list req_list;
+	struct io_kiocb *req;
 	unsigned int left;
 	int ret, err;
@@ -2419,6 +2419,7 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 	io_get_task_refs(left);
 	io_submit_state_start(&ctx->submit_state, left);
+	INIT_WQ_LIST(&req_list);
 
 	do {
 		const struct io_uring_sqe *sqe;
 		struct io_kiocb *req;
@@ -2437,9 +2438,12 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 		err = io_setup_link(link, &req);
 		if (unlikely(err))
 			goto error;
-
-		if (likely(req))
-			io_submit_sqe(req);
+		else if (unlikely(!req))
+			continue;
+		else if (unlikely(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL)))
+			io_queue_sqe_fallback(req);
+		else
+			wq_list_add_tail(&req->comp_list, &req_list);
 		continue;
error:
 		/*
@@ -2453,6 +2457,12 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 		}
 	} while (--left);
 
+	wq_list_for_each_safe(pos, next, &req_list) {
+		req = container_of(pos, struct io_kiocb, comp_list);
+		req->comp_list.next = NULL;
+		io_submit_sqe(req);
+	}
+
 	if (unlikely(left)) {
 		ret -= left;
 		/* try again if it submitted nothing and can't allocate a req */
diff --git a/io_uring/slist.h b/io_uring/slist.h
index 0eb194817242e..93fbb715111ca 100644
--- a/io_uring/slist.h
+++ b/io_uring/slist.h
@@ -12,6 +12,10 @@
 #define wq_list_for_each_resume(pos, prv)			\
 	for (; pos; prv = pos, pos = (pos)->next)
 
+#define wq_list_for_each_safe(pos, n, head)			\
+	for (pos = (head)->first, n = pos ? pos->next : NULL;	\
+	     pos; pos = n, n = pos ? pos->next : NULL)
+
 #define wq_list_empty(list)	(READ_ONCE((list)->first) == NULL)
 
 #define INIT_WQ_LIST(list)	do {				\
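The new wq_list_for_each_safe samples the successor into n before the loop
body runs, so the body is free to clear or reuse pos->next, which is exactly
what the submit loop above does with req->comp_list.next. A minimal
userspace check of that property (the two struct definitions are local
stand-ins for the kernel's struct io_wq_work_node and struct
io_wq_work_list):

	#include <assert.h>
	#include <stddef.h>

	struct io_wq_work_node {
		struct io_wq_work_node *next;
	};

	struct io_wq_work_list {
		struct io_wq_work_node *first;
	};

	#define wq_list_for_each_safe(pos, n, head)			\
		for (pos = (head)->first, n = pos ? pos->next : NULL;	\
		     pos; pos = n, n = pos ? pos->next : NULL)

	int main(void)
	{
		struct io_wq_work_node a, b, c;
		struct io_wq_work_list list = { .first = &a };
		struct io_wq_work_node *pos, *n;
		int count = 0;

		a.next = &b;
		b.next = &c;
		c.next = NULL;

		wq_list_for_each_safe(pos, n, &list) {
			/* destructive body, as in io_submit_sqes() */
			pos->next = NULL;
			count++;
		}
		assert(count == 3);
		return 0;
	}

A non-safe iterator in the style of wq_list_for_each_resume above reads
pos->next only after the body has run, so clearing it there would end the
walk after the first node.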