From patchwork Mon Aug 15 13:09:38 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 12943566
From: Dylan Yudaken <dylany@fb.com>
To: Jens Axboe, Pavel Begunkov, io-uring@vger.kernel.org
Cc: Dylan Yudaken <dylany@fb.com>
Subject: [PATCH liburing 02/11] add io_uring_submit_and_get_events and io_uring_get_events
Date: Mon, 15 Aug 2022 06:09:38 -0700
Message-ID: <20220815130947.1002152-3-dylany@fb.com>
In-Reply-To: <20220815130947.1002152-1-dylany@fb.com>
References: <20220815130947.1002152-1-dylany@fb.com>
X-Mailing-List: io-uring@vger.kernel.org

With deferred task running, we would like to be able to combine submitting
with getting events (regardless of whether CQEs are available), or, if there
is nothing to submit, simply do an enter with IORING_ENTER_GETEVENTS set in
order to process any available work.
Expose these APIs.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
---
 src/include/liburing.h |  2 ++
 src/queue.c            | 28 +++++++++++++++++++---------
 2 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/src/include/liburing.h b/src/include/liburing.h
index 06f4a50bacb1..6b25c358c63f 100644
--- a/src/include/liburing.h
+++ b/src/include/liburing.h
@@ -192,6 +192,8 @@ int io_uring_register_file_alloc_range(struct io_uring *ring,
 int io_uring_register_notifications(struct io_uring *ring, unsigned nr,
				     struct io_uring_notification_slot *slots);
 int io_uring_unregister_notifications(struct io_uring *ring);
+int io_uring_get_events(struct io_uring *ring);
+int io_uring_submit_and_get_events(struct io_uring *ring);
 
 /*
  * Helper for the peek/wait single cqe functions. Exported because of that,
diff --git a/src/queue.c b/src/queue.c
index 72cc77b9f0d0..216f29a8afef 100644
--- a/src/queue.c
+++ b/src/queue.c
@@ -124,6 +124,15 @@ int __io_uring_get_cqe(struct io_uring *ring, struct io_uring_cqe **cqe_ptr,
 	return _io_uring_get_cqe(ring, cqe_ptr, &data);
 }
 
+int io_uring_get_events(struct io_uring *ring)
+{
+	int flags = IORING_ENTER_GETEVENTS;
+
+	if (ring->int_flags & INT_FLAG_REG_RING)
+		flags |= IORING_ENTER_REGISTERED_RING;
+	return __sys_io_uring_enter(ring->enter_ring_fd, 0, 0, flags, NULL);
+}
+
 /*
  * Fill in an array of IO completions up to count, if any are available.
  * Returns the amount of IO completions filled.
@@ -158,11 +167,7 @@ again:
 		goto done;
 
 	if (cq_ring_needs_flush(ring)) {
-		int flags = IORING_ENTER_GETEVENTS;
-
-		if (ring->int_flags & INT_FLAG_REG_RING)
-			flags |= IORING_ENTER_REGISTERED_RING;
-		__sys_io_uring_enter(ring->enter_ring_fd, 0, 0, flags, NULL);
+		io_uring_get_events(ring);
 		overflow_checked = true;
 		goto again;
 	}
@@ -348,14 +353,14 @@ int io_uring_wait_cqe_timeout(struct io_uring *ring,
  * Returns number of sqes submitted
  */
 static int __io_uring_submit(struct io_uring *ring, unsigned submitted,
-			     unsigned wait_nr)
+			     unsigned wait_nr, bool getevents)
 {
 	unsigned flags;
 	int ret;
 
 	flags = 0;
-	if (sq_ring_needs_enter(ring, &flags) || wait_nr) {
-		if (wait_nr || (ring->flags & IORING_SETUP_IOPOLL))
+	if (getevents || sq_ring_needs_enter(ring, &flags) || wait_nr) {
+		if (getevents || wait_nr || (ring->flags & IORING_SETUP_IOPOLL))
 			flags |= IORING_ENTER_GETEVENTS;
 		if (ring->int_flags & INT_FLAG_REG_RING)
 			flags |= IORING_ENTER_REGISTERED_RING;
@@ -370,7 +375,7 @@ static int __io_uring_submit(struct io_uring *ring, unsigned submitted,
 
 static int __io_uring_submit_and_wait(struct io_uring *ring, unsigned wait_nr)
 {
-	return __io_uring_submit(ring, __io_uring_flush_sq(ring), wait_nr);
+	return __io_uring_submit(ring, __io_uring_flush_sq(ring), wait_nr, false);
 }
 
 /*
@@ -393,6 +398,11 @@ int io_uring_submit_and_wait(struct io_uring *ring, unsigned wait_nr)
 	return __io_uring_submit_and_wait(ring, wait_nr);
 }
 
+int io_uring_submit_and_get_events(struct io_uring *ring)
+{
+	return __io_uring_submit(ring, __io_uring_flush_sq(ring), 0, true);
+}
+
 #ifdef LIBURING_INTERNAL
 struct io_uring_sqe *io_uring_get_sqe(struct io_uring *ring)
 {
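
For reference, a minimal usage sketch of how an application might drive the
two new helpers from an event loop. The queue depth, the zero-flag ring
setup, and the no-op request below are illustrative assumptions only and are
not taken from this patch:

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	/* Illustrative setup; real callers pick their own depth and flags. */
	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/* Queue a no-op request, then submit it and process any available
	 * work in a single enter with IORING_ENTER_GETEVENTS set. */
	sqe = io_uring_get_sqe(&ring);
	if (sqe) {
		io_uring_prep_nop(sqe);
		ret = io_uring_submit_and_get_events(&ring);
		if (ret < 0)
			fprintf(stderr, "submit_and_get_events: %d\n", ret);
	}

	/* Later, with nothing to submit, process available work without
	 * touching the SQ ring. */
	ret = io_uring_get_events(&ring);
	if (ret < 0)
		fprintf(stderr, "get_events: %d\n", ret);

	/* Reap whatever completed. */
	while (io_uring_peek_cqe(&ring, &cqe) == 0)
		io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	return 0;
}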