From patchwork Sun Jun 19 11:26:04 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 1/7] io_uring: remove extra io_commit_cqring()
Date: Sun, 19 Jun 2022 12:26:04 +0100

We don't post events in __io_commit_cqring_flush() anymore but send all
requests to task work instead, so there is no need to do
io_commit_cqring() there.
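For context: io_commit_cqring() only publishes the cached CQ tail to
userspace, so it is needed only where CQEs have actually been written.
A minimal sketch of what the helper boils down to, assuming the
definition of that period (simplified, not the verbatim kernel source):

	static inline void io_commit_cqring(struct io_ring_ctx *ctx)
	{
		/* order CQE stores before making the new tail visible */
		smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
	}

Since __io_commit_cqring_flush() no longer writes CQEs itself,
republishing an unchanged tail would be a no-op anyway.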
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 37084fe3cc07..9e02c4a950ef 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -480,7 +480,6 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 			io_flush_timeouts(ctx);
 		if (ctx->drain_active)
 			io_queue_deferred(ctx);
-		io_commit_cqring(ctx);
 		spin_unlock(&ctx->completion_lock);
 	}
 	if (ctx->has_evfd)

From patchwork Sun Jun 19 11:26:05 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 2/7] io_uring: reshuffle io_uring/io_uring.h
Date: Sun, 19 Jun 2022 12:26:05 +0100
Message-Id: <1d7fa6672ed43f20ccc0c54ae201369ebc3ebfab.1655637157.git.asml.silence@gmail.com>

It's a good idea to first do forward declarations and only then inline
helpers, otherwise we'll keep stumbling on dependencies between them.
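The failure mode being avoided is the usual C one: an inline helper
whose body precedes the prototype of a function it calls won't compile
cleanly. A contrived illustration (queue_req is a hypothetical name,
not from the tree):

	struct io_kiocb;

	/* broken: the helper body precedes the declaration it needs */
	static inline void queue_req(struct io_kiocb *req)
	{
		io_req_task_work_add(req);  /* error: implicit declaration */
	}

	void io_req_task_work_add(struct io_kiocb *req);  /* too late */

Declaring everything first, as this patch does, makes the relative
order of the inline helpers below irrelevant.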
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.h | 95 ++++++++++++++++++++++-----------------------
 1 file changed, 47 insertions(+), 48 deletions(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 388391516a62..906749fa3415 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -18,6 +18,53 @@ enum {
 struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx);
 bool io_req_cqe_overflow(struct io_kiocb *req);
+int io_run_task_work_sig(void);
+void io_req_complete_failed(struct io_kiocb *req, s32 res);
+void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
+void io_req_complete_post(struct io_kiocb *req);
+void __io_req_complete_post(struct io_kiocb *req);
+bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
+void io_cqring_ev_posted(struct io_ring_ctx *ctx);
+void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
+
+struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
+
+struct file *io_file_get_normal(struct io_kiocb *req, int fd);
+struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
+			       unsigned issue_flags);
+
+bool io_is_uring_fops(struct file *file);
+bool io_alloc_async_data(struct io_kiocb *req);
+void io_req_task_work_add(struct io_kiocb *req);
+void io_req_task_prio_work_add(struct io_kiocb *req);
+void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags);
+void io_req_task_queue(struct io_kiocb *req);
+void io_queue_iowq(struct io_kiocb *req, bool *dont_use);
+void io_req_task_complete(struct io_kiocb *req, bool *locked);
+void io_req_task_queue_fail(struct io_kiocb *req, int ret);
+void io_req_task_submit(struct io_kiocb *req, bool *locked);
+void tctx_task_work(struct callback_head *cb);
+__cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
+int io_uring_alloc_task_context(struct task_struct *task,
+				struct io_ring_ctx *ctx);
+
+int io_poll_issue(struct io_kiocb *req, bool *locked);
+int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr);
+int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin);
+void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node);
+int io_req_prep_async(struct io_kiocb *req);
+
+struct io_wq_work *io_wq_free_work(struct io_wq_work *work);
+void io_wq_submit_work(struct io_wq_work *work);
+
+void io_free_req(struct io_kiocb *req);
+void io_queue_next(struct io_kiocb *req);
+
+bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
+			bool cancel_all);
+
+#define io_for_each_link(pos, head) \
+	for (pos = (head); pos; pos = pos->link)
 
 static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 {
@@ -190,52 +237,4 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
 	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
 }
 
-int io_run_task_work_sig(void);
-void io_req_complete_failed(struct io_kiocb *req, s32 res);
-void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
-void io_req_complete_post(struct io_kiocb *req);
-void __io_req_complete_post(struct io_kiocb *req);
-bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
-void io_cqring_ev_posted(struct io_ring_ctx *ctx);
-void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
-
-struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
-
-struct file *io_file_get_normal(struct io_kiocb *req, int fd);
-struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
-			       unsigned issue_flags);
-
-bool io_is_uring_fops(struct file *file);
-bool io_alloc_async_data(struct io_kiocb *req);
-void io_req_task_work_add(struct io_kiocb *req);
-void io_req_task_prio_work_add(struct io_kiocb *req);
-void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags);
-void io_req_task_queue(struct io_kiocb *req);
-void io_queue_iowq(struct io_kiocb *req, bool *dont_use);
-void io_req_task_complete(struct io_kiocb *req, bool *locked);
-void io_req_task_queue_fail(struct io_kiocb *req, int ret);
-void io_req_task_submit(struct io_kiocb *req, bool *locked);
-void tctx_task_work(struct callback_head *cb);
-__cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
-int io_uring_alloc_task_context(struct task_struct *task,
-				struct io_ring_ctx *ctx);
-
-int io_poll_issue(struct io_kiocb *req, bool *locked);
-int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr);
-int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin);
-void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node);
-int io_req_prep_async(struct io_kiocb *req);
-
-struct io_wq_work *io_wq_free_work(struct io_wq_work *work);
-void io_wq_submit_work(struct io_wq_work *work);
-
-void io_free_req(struct io_kiocb *req);
-void io_queue_next(struct io_kiocb *req);
-
-bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
-			bool cancel_all);
-
-#define io_for_each_link(pos, head) \
-	for (pos = (head); pos; pos = pos->link)
-
 #endif
From patchwork Sun Jun 19 11:26:06 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 3/7] io_uring: move io_eventfd_signal()
Date: Sun, 19 Jun 2022 12:26:06 +0100
Message-Id: <9ebebb3f6f56f5a5448a621e0b6a537720c43334.1655637157.git.asml.silence@gmail.com>

Move io_eventfd_signal() up in the sources without any changes and kill
its forward declaration.
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9e02c4a950ef..31beb9ccbf12 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -142,8 +142,6 @@ static void io_queue_sqe(struct io_kiocb *req);
 
 static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
 
-static void io_eventfd_signal(struct io_ring_ctx *ctx);
-
 static struct kmem_cache *req_cachep;
 
 struct sock *io_uring_get_socket(struct file *file)
@@ -472,20 +470,6 @@ static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
 	}
 }
 
-void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
-{
-	if (ctx->off_timeout_used || ctx->drain_active) {
-		spin_lock(&ctx->completion_lock);
-		if (ctx->off_timeout_used)
-			io_flush_timeouts(ctx);
-		if (ctx->drain_active)
-			io_queue_deferred(ctx);
-		spin_unlock(&ctx->completion_lock);
-	}
-	if (ctx->has_evfd)
-		io_eventfd_signal(ctx);
-}
-
 static void io_eventfd_signal(struct io_ring_ctx *ctx)
 {
 	struct io_ev_fd *ev_fd;
@@ -513,6 +497,20 @@ static void io_eventfd_signal(struct io_ring_ctx *ctx)
 	rcu_read_unlock();
 }
 
+void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
+{
+	if (ctx->off_timeout_used || ctx->drain_active) {
+		spin_lock(&ctx->completion_lock);
+		if (ctx->off_timeout_used)
+			io_flush_timeouts(ctx);
+		if (ctx->drain_active)
+			io_queue_deferred(ctx);
+		spin_unlock(&ctx->completion_lock);
+	}
+	if (ctx->has_evfd)
+		io_eventfd_signal(ctx);
+}
+
 /*
  * This should only get called when at least one event has been posted.
  * Some applications rely on the eventfd notification count only changing

From patchwork Sun Jun 19 11:26:07 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 4/7] io_uring: hide eventfd assumptions in eventfd paths
Date: Sun, 19 Jun 2022 12:26:07 +0100
Message-Id: <8ac7fbe5ad880990ef498fa09f8de70390836f97.1655637157.git.asml.silence@gmail.com>

Some io_uring-eventfd users assume that there won't be spurious
wakeups. That assumption has to be honoured by all io_cqring_ev_posted()
callers, which is inconvenient and from time to time leads to problems,
but it should be maintained so as not to break userspace.

Instead of making the callers track whether a CQE was posted or not,
hide it inside io_eventfd_signal(). It saves the ->cached_cq_tail value
it saw last time and triggers the eventfd only when ->cached_cq_tail
has changed since then.
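The technique in miniature, as a stand-alone userspace analogue
(assumed, simplified sketch; a pthread mutex stands in for the kernel's
->completion_lock):

	#include <pthread.h>
	#include <stdbool.h>

	struct evfd_gate {
		pthread_mutex_t lock;
		unsigned last_tail;	/* tail value when we last signalled */
	};

	/* true only when the published tail moved since the last call */
	static bool should_signal(struct evfd_gate *g, unsigned cached_tail)
	{
		bool skip;

		pthread_mutex_lock(&g->lock);
		skip = (cached_tail == g->last_tail);
		g->last_tail = cached_tail;
		pthread_mutex_unlock(&g->lock);
		return !skip;
	}

Callers may invoke it any number of times; the eventfd-style side
effect fires at most once per tail advance, which is exactly the
deduplication the patch moves into io_eventfd_signal().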
Signed-off-by: Pavel Begunkov
---
 include/linux/io_uring_types.h |  2 ++
 io_uring/io_uring.c            | 44 ++++++++++++++++++++--------------
 io_uring/timeout.c             |  3 +--
 3 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 779c72da5b8f..327bc7f0808d 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -315,6 +315,8 @@ struct io_ring_ctx {
 	struct list_head	defer_list;
 	unsigned		sq_thread_idle;
+	/* protected by ->completion_lock */
+	unsigned		evfd_last_cq_tail;
 };
 
 enum {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 31beb9ccbf12..0875cc649e23 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -473,6 +473,22 @@ static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
 static void io_eventfd_signal(struct io_ring_ctx *ctx)
 {
 	struct io_ev_fd *ev_fd;
+	bool skip;
+
+	spin_lock(&ctx->completion_lock);
+	/*
+	 * Eventfd should only get triggered when at least one event has been
+	 * posted. Some applications rely on the eventfd notification count
+	 * only changing IFF a new CQE has been added to the CQ ring. There's
+	 * no dependency on 1:1 relationship between how many times this
+	 * function is called (and hence the eventfd count) and number of
+	 * CQEs posted to the CQ ring.
+	 */
+	skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
+	ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+	spin_unlock(&ctx->completion_lock);
+	if (skip)
+		return;
 
 	rcu_read_lock();
 	/*
@@ -511,13 +527,6 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 	io_eventfd_signal(ctx);
 }
 
-/*
- * This should only get called when at least one event has been posted.
- * Some applications rely on the eventfd notification count only changing
- * IFF a new CQE has been added to the CQ ring. There's no depedency on
- * 1:1 relationship between how many times this function is called (and
- * hence the eventfd count) and number of CQEs posted to the CQ ring.
- */
 void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
 	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
@@ -530,7 +539,7 @@ void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 /* Returns true if there are no backlogged entries after the flush */
 static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 {
-	bool all_flushed, posted;
+	bool all_flushed;
 	size_t cqe_size = sizeof(struct io_uring_cqe);
 
 	if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
@@ -539,7 +548,6 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 	if (ctx->flags & IORING_SETUP_CQE32)
 		cqe_size <<= 1;
 
-	posted = false;
 	spin_lock(&ctx->completion_lock);
 	while (!list_empty(&ctx->cq_overflow_list)) {
 		struct io_uring_cqe *cqe = io_get_cqe(ctx);
@@ -554,7 +562,6 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 		else
 			io_account_cq_overflow(ctx);
 
-		posted = true;
 		list_del(&ocqe->list);
 		kfree(ocqe);
 	}
@@ -567,8 +574,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 
 	io_commit_cqring(ctx);
 	spin_unlock(&ctx->completion_lock);
-	if (posted)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
 	return all_flushed;
 }
 
@@ -758,8 +764,7 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx,
 	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
 	io_commit_cqring(ctx);
 	spin_unlock(&ctx->completion_lock);
-	if (filled)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
 	return filled;
 }
 
@@ -940,14 +945,12 @@ __cold void io_free_req(struct io_kiocb *req)
 static void __io_req_find_next_prep(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	bool posted;
 
 	spin_lock(&ctx->completion_lock);
-	posted = io_disarm_next(req);
+	io_disarm_next(req);
 	io_commit_cqring(ctx);
 	spin_unlock(&ctx->completion_lock);
-	if (posted)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
 }
 
 static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
@@ -2431,6 +2434,11 @@ static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
 		kfree(ev_fd);
 		return ret;
 	}
+
+	spin_lock(&ctx->completion_lock);
+	ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+	spin_unlock(&ctx->completion_lock);
+
 	ev_fd->eventfd_async = eventfd_async;
 	ctx->has_evfd = true;
 	rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 557c637af158..4938c1cdcbcd 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -628,7 +628,6 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
 	spin_unlock_irq(&ctx->timeout_lock);
 	io_commit_cqring(ctx);
 	spin_unlock(&ctx->completion_lock);
-	if (canceled != 0)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
 	return canceled != 0;
 }

From patchwork Sun Jun 19 11:26:08 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 5/7] io_uring: remove ->flush_cqes optimisation
Date: Sun, 19 Jun 2022 12:26:08 +0100
Message-Id: <692e81eeddccc096f449a7960365fa7b4a18f8e6.1655637157.git.asml.silence@gmail.com>

It's not clear how widely used IOSQE_CQE_SKIP_SUCCESS is, and how often
the ->flush_cqes flag actually prevents a completion flush. Sometimes
it's a high level of concurrency that enables it at least for one CQE,
but sometimes it doesn't save much because nobody is waiting on the CQ.

Remove the ->flush_cqes flag and the optimisation; it should benefit
the normal use case. Note that there is no spurious-eventfd problem
with this, as the checks for spuriousness were incorporated into
io_eventfd_signal().
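For reference, IOSQE_CQE_SKIP_SUCCESS is set per SQE from userspace; a
minimal liburing-style sketch of the feature being discussed
(hypothetical snippet, error handling omitted):

	#include <poll.h>
	#include <liburing.h>

	void queue_poll_then_read(struct io_uring *ring, int fd,
				  void *buf, unsigned len)
	{
		struct io_uring_sqe *sqe;

		/* no CQE for the poll unless it fails */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_poll_add(sqe, fd, POLLIN);
		sqe->flags |= IOSQE_IO_LINK | IOSQE_CQE_SKIP_SUCCESS;

		/* only the read's completion is reaped */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_read(sqe, fd, buf, len, 0);

		io_uring_submit(ring);
	}

With ->flush_cqes gone, a batch made up entirely of such skipped
completions simply takes the now-unconditional commit-and-wake path.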
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 23 ++++++++++-------------
 io_uring/io_uring.h |  2 --
 2 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0875cc649e23..57aef092ef38 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1253,22 +1253,19 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 	struct io_wq_work_node *node, *prev;
 	struct io_submit_state *state = &ctx->submit_state;
 
-	if (state->flush_cqes) {
-		spin_lock(&ctx->completion_lock);
-		wq_list_for_each(node, prev, &state->compl_reqs) {
-			struct io_kiocb *req = container_of(node, struct io_kiocb,
-							    comp_list);
-
-			if (!(req->flags & REQ_F_CQE_SKIP))
-				__io_fill_cqe_req(ctx, req);
-		}
+	spin_lock(&ctx->completion_lock);
+	wq_list_for_each(node, prev, &state->compl_reqs) {
+		struct io_kiocb *req = container_of(node, struct io_kiocb,
+						    comp_list);
 
-		io_commit_cqring(ctx);
-		spin_unlock(&ctx->completion_lock);
-		io_cqring_ev_posted(ctx);
-		state->flush_cqes = false;
+		if (!(req->flags & REQ_F_CQE_SKIP))
+			__io_fill_cqe_req(ctx, req);
 	}
 
+	io_commit_cqring(ctx);
+	spin_unlock(&ctx->completion_lock);
+	io_cqring_ev_posted(ctx);
+
 	io_free_batch_list(ctx, state->compl_reqs.first);
 	INIT_WQ_LIST(&state->compl_reqs);
 }
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 906749fa3415..7feef8c36db7 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -232,8 +232,6 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
 
 	lockdep_assert_held(&req->ctx->uring_lock);
 
-	if (!(req->flags & REQ_F_CQE_SKIP))
-		state->flush_cqes = true;
 	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
 }
From patchwork Sun Jun 19 11:26:09 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 6/7] io_uring: introduce locking helpers for CQE posting
Date: Sun, 19 Jun 2022 12:26:09 +0100
Message-Id: <693e461561af1ce9ccacfee9c28ff0c54e31e84f.1655637157.git.asml.silence@gmail.com>

	spin_lock(&ctx->completion_lock);
	/* post CQEs */
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);

We have many places repeating this sequence, and the three-function
unlock section is not ideal from the maintenance perspective and also
makes it harder to add new locking/sync tricks.

Introduce two helpers: io_cq_lock(), which is simple and only grabs
->completion_lock, and io_cq_unlock_post(), which encapsulates the
three-call section.
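With the helpers in place, a typical posting site collapses to the
shape below (a sketch only; post_one_cqe is a hypothetical wrapper
mirroring io_post_aux_cqe() from this series, not code from the tree):

	static bool post_one_cqe(struct io_ring_ctx *ctx, u64 user_data,
				 s32 res)
	{
		bool filled;

		io_cq_lock(ctx);
		filled = io_fill_cqe_aux(ctx, user_data, res, 0);
		io_cq_unlock_post(ctx);	/* commit + unlock + wake/eventfd */
		return filled;
	}

The __acquires/__releases annotations on the helpers also let sparse
keep checking lock balance across the refactored call sites.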
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 57 +++++++++++++++++++++------------------------
 io_uring/io_uring.h |  9 ++++++-
 io_uring/timeout.c  |  6 ++---
 3 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 57aef092ef38..cff046b0734b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -527,7 +527,7 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 		io_eventfd_signal(ctx);
 }
 
-void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
 	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
@@ -536,6 +536,19 @@ void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 	io_cqring_wake(ctx);
 }
 
+static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
+	__releases(ctx->completion_lock)
+{
+	io_commit_cqring(ctx);
+	spin_unlock(&ctx->completion_lock);
+	io_cqring_ev_posted(ctx);
+}
+
+void io_cq_unlock_post(struct io_ring_ctx *ctx)
+{
+	__io_cq_unlock_post(ctx);
+}
+
 /* Returns true if there are no backlogged entries after the flush */
 static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 {
@@ -548,7 +561,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 	if (ctx->flags & IORING_SETUP_CQE32)
 		cqe_size <<= 1;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
 	while (!list_empty(&ctx->cq_overflow_list)) {
 		struct io_uring_cqe *cqe = io_get_cqe(ctx);
 		struct io_overflow_cqe *ocqe;
@@ -572,9 +585,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 		atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
 	}
 
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 	return all_flushed;
 }
 
@@ -760,11 +771,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx,
 {
 	bool filled;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
 	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 	return filled;
 }
 
@@ -810,11 +819,9 @@ void io_req_complete_post(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
 	__io_req_complete_post(req);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 }
 
 inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
@@ -946,11 +953,9 @@ static void __io_req_find_next_prep(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
 	io_disarm_next(req);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 }
 
 static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
@@ -984,13 +989,6 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
 	percpu_ref_put(&ctx->refs);
 }
 
-static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
-{
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
-}
-
 static void handle_prev_tw_list(struct io_wq_work_node *node,
 				struct io_ring_ctx **ctx, bool *uring_locked)
 {
@@ -1006,7 +1004,7 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
 
 		if (req->ctx != *ctx) {
 			if (unlikely(!*uring_locked && *ctx))
-				ctx_commit_and_unlock(*ctx);
+				io_cq_unlock_post(*ctx);
 
 			ctx_flush_and_put(*ctx, uring_locked);
 			*ctx = req->ctx;
@@ -1014,7 +1012,7 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
 			*uring_locked = mutex_trylock(&(*ctx)->uring_lock);
 			percpu_ref_get(&(*ctx)->refs);
 			if (unlikely(!*uring_locked))
-				spin_lock(&(*ctx)->completion_lock);
+				io_cq_lock(*ctx);
 		}
 		if (likely(*uring_locked)) {
 			req->io_task_work.func(req, uring_locked);
@@ -1026,7 +1024,7 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
 	} while (node);
 
 	if (unlikely(!*uring_locked))
-		ctx_commit_and_unlock(*ctx);
+		io_cq_unlock_post(*ctx);
 }
 
 static void handle_tw_list(struct io_wq_work_node *node,
@@ -1261,10 +1259,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		if (!(req->flags & REQ_F_CQE_SKIP))
 			__io_fill_cqe_req(ctx, req);
 	}
-
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	__io_cq_unlock_post(ctx);
 
 	io_free_batch_list(ctx, state->compl_reqs.first);
 	INIT_WQ_LIST(&state->compl_reqs);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 7feef8c36db7..bb8367908472 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -24,7 +24,6 @@ void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
 void __io_req_complete_post(struct io_kiocb *req);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
-void io_cqring_ev_posted(struct io_ring_ctx *ctx);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
 
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
@@ -66,6 +65,14 @@ bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
 #define io_for_each_link(pos, head) \
 	for (pos = (head); pos; pos = pos->link)
 
+static inline void io_cq_lock(struct io_ring_ctx *ctx)
+	__acquires(ctx->completion_lock)
+{
+	spin_lock(&ctx->completion_lock);
+}
+
+void io_cq_unlock_post(struct io_ring_ctx *ctx);
+
 static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 {
 	if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 4938c1cdcbcd..3c331b723332 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -615,7 +615,7 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
 	struct io_timeout *timeout, *tmp;
 	int canceled = 0;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
 	spin_lock_irq(&ctx->timeout_lock);
 	list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
 		struct io_kiocb *req = cmd_to_io_kiocb(timeout);
@@ -626,8 +626,6 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
 		}
 	}
 	spin_unlock_irq(&ctx->timeout_lock);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 	return canceled != 0;
 }
From patchwork Sun Jun 19 11:26:10 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 7/7] io_uring: add io_commit_cqring_flush()
Date: Sun, 19 Jun 2022 12:26:10 +0100
Message-Id: <69cec30d9e4bfcf1a0c63e61ed6323cadc53d516.1655637157.git.asml.silence@gmail.com>

Since the users of __io_commit_cqring_flush() have moved to different
files, introduce an io_commit_cqring_flush() helper and encapsulate all
the flag-testing details inside it.
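This follows the common inline-fast-path pattern: the cheap,
usually-false flag test gets inlined at every call site while the
rarely taken slow path stays out of line. A generic, self-contained
illustration (hypothetical names, not the kernel code):

	#define unlikely(x)	__builtin_expect(!!(x), 0)

	struct ctx { int needs_flush; };

	void __slow_flush(struct ctx *c);	/* out of line, rare */

	static inline void maybe_flush(struct ctx *c)
	{
		if (unlikely(c->needs_flush))	/* inlined cheap test */
			__slow_flush(c);
	}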
Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 5 +----
 io_uring/io_uring.h | 6 ++++++
 io_uring/rw.c       | 5 +----
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index cff046b0734b..c24c285dfac9 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -529,10 +529,7 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 
 static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
-	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
-		     ctx->has_evfd))
-		__io_commit_cqring_flush(ctx);
-
+	io_commit_cqring_flush(ctx);
 	io_cqring_wake(ctx);
 }
 
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index bb8367908472..76cfb88af812 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -242,4 +242,10 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
 	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
 }
 
+static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
+{
+	if (unlikely(ctx->off_timeout_used || ctx->drain_active || ctx->has_evfd))
+		__io_commit_cqring_flush(ctx);
+}
+
 #endif
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 5f24db65a81d..17707e78ab01 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -948,10 +948,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
 
 static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
 {
-	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
-		     ctx->has_evfd))
-		__io_commit_cqring_flush(ctx);
-
+	io_commit_cqring_flush(ctx);
 	if (ctx->flags & IORING_SETUP_SQPOLL)
 		io_cqring_wake(ctx);
 }