From patchwork Mon Jun 20 00:25:52 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886880
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 01/10] io_uring: fix multi ctx cancellation
Date: Mon, 20 Jun 2022 01:25:52 +0100
Message-Id: <8d491fe02d8ac4c77ff38061cf86b9a827e8845c.1655684496.git.asml.silence@gmail.com>

io_uring_try_cancel_requests() loops until there is nothing left to do
with the ring; however, there might be several rings, and they might
have dependencies between them, e.g. via poll requests. Instead of
cancelling rings one by one, try to cancel them all and only then loop
over again if we still potentially have some work to do.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 87 ++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 41 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0f18a86f3f8c..2d1d4752b955 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -132,7 +132,7 @@ struct io_defer_entry {
 #define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
 #define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
 
-static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
					 struct task_struct *task,
					 bool cancel_all);
 
@@ -2648,7 +2648,9 @@ static __cold void io_ring_exit_work(struct work_struct *work)
	 * as nobody else will be looking for them.
	 */
	do {
-		io_uring_try_cancel_requests(ctx, NULL, true);
+		while (io_uring_try_cancel_requests(ctx, NULL, true))
+			cond_resched();
+
		if (ctx->sq_data) {
			struct io_sq_data *sqd = ctx->sq_data;
			struct task_struct *tsk;
@@ -2806,53 +2808,48 @@ static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
	return ret;
 }
 
-static __cold void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
						struct task_struct *task,
						bool cancel_all)
 {
	struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
	struct io_uring_task *tctx = task ? task->io_uring : NULL;
+	enum io_wq_cancel cret;
+	bool ret = false;
 
	/* failed during ring init, it couldn't have issued any requests */
	if (!ctx->rings)
-		return;
-
-	while (1) {
-		enum io_wq_cancel cret;
-		bool ret = false;
+		return false;
 
-		if (!task) {
-			ret |= io_uring_try_cancel_iowq(ctx);
-		} else if (tctx && tctx->io_wq) {
-			/*
-			 * Cancels requests of all rings, not only @ctx, but
-			 * it's fine as the task is in exit/exec.
-			 */
-			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
-					       &cancel, true);
-			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
-		}
+	if (!task) {
+		ret |= io_uring_try_cancel_iowq(ctx);
+	} else if (tctx && tctx->io_wq) {
+		/*
+		 * Cancels requests of all rings, not only @ctx, but
+		 * it's fine as the task is in exit/exec.
+		 */
+		cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
+				       &cancel, true);
+		ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
+	}
 
-		/* SQPOLL thread does its own polling */
-		if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
-		    (ctx->sq_data && ctx->sq_data->thread == current)) {
-			while (!wq_list_empty(&ctx->iopoll_list)) {
-				io_iopoll_try_reap_events(ctx);
-				ret = true;
-			}
+	/* SQPOLL thread does its own polling */
+	if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
+	    (ctx->sq_data && ctx->sq_data->thread == current)) {
+		while (!wq_list_empty(&ctx->iopoll_list)) {
+			io_iopoll_try_reap_events(ctx);
+			ret = true;
		}
-
-		ret |= io_cancel_defer_files(ctx, task, cancel_all);
-		mutex_lock(&ctx->uring_lock);
-		ret |= io_poll_remove_all(ctx, task, cancel_all);
-		mutex_unlock(&ctx->uring_lock);
-		ret |= io_kill_timeouts(ctx, task, cancel_all);
-		if (task)
-			ret |= io_run_task_work();
-		if (!ret)
-			break;
-		cond_resched();
	}
+
+	ret |= io_cancel_defer_files(ctx, task, cancel_all);
+	mutex_lock(&ctx->uring_lock);
+	ret |= io_poll_remove_all(ctx, task, cancel_all);
+	mutex_unlock(&ctx->uring_lock);
+	ret |= io_kill_timeouts(ctx, task, cancel_all);
+	if (task)
+		ret |= io_run_task_work();
+	return ret;
 }
 
 static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
@@ -2882,6 +2879,8 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
	atomic_inc(&tctx->in_idle);
	do {
+		bool loop = false;
+
		io_uring_drop_tctx_refs(current);
		/* read completions before cancelations */
		inflight = tctx_inflight(tctx, !cancel_all);
@@ -2896,13 +2895,19 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
				/* sqpoll task will cancel all its requests */
				if (node->ctx->sq_data)
					continue;
-				io_uring_try_cancel_requests(node->ctx, current,
-							     cancel_all);
+				loop |= io_uring_try_cancel_requests(node->ctx,
+							     current, cancel_all);
			}
		} else {
			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
-				io_uring_try_cancel_requests(ctx, current,
-							     cancel_all);
+				loop |= io_uring_try_cancel_requests(ctx,
+								     current,
+								     cancel_all);
+		}
+
+		if (loop) {
+			cond_resched();
+			continue;
		}
 
		prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
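The shape of this change is easy to see outside the kernel. Below is a
minimal, self-contained userspace sketch (not kernel code; every name in
it is illustrative) of the same control flow: one cancellation pass per
ring per iteration, looping again while any ring reports progress.

/* Userspace sketch of the control-flow change: each ring gets one
 * cancellation pass per iteration, and the caller loops while any
 * ring still made progress. All names here are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct ring { int pending; };

/* one pass; reports progress (kernel: io_uring_try_cancel_requests) */
static bool try_cancel_once(struct ring *r)
{
	if (!r->pending)
		return false;
	r->pending--;	/* e.g. a request released by another ring's pass */
	return true;
}

int main(void)
{
	struct ring rings[2] = { { .pending = 2 }, { .pending = 3 } };
	bool loop;

	do {	/* one pass over *all* rings, then re-check */
		loop = false;
		for (int i = 0; i < 2; i++)
			loop |= try_cancel_once(&rings[i]);
	} while (loop);

	printf("all cancelled\n");
	return 0;
}

The point of the restructure is that a pass over one ring can unblock
work on another (e.g. a poll request waiting on the other ring), so
progress is only re-evaluated after every ring has had a pass.
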
From patchwork Mon Jun 20 00:25:53 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886881
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 02/10] io_uring: improve task exit timeout cancellations
Date: Mon, 20 Jun 2022 01:25:53 +0100

Don't spin trying to cancel timeouts that are reachable but not
cancellable, e.g. already executing.

Signed-off-by: Pavel Begunkov
---
 io_uring/timeout.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 557c637af158..a79a7d6ef1b3 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -49,7 +49,7 @@ static inline void io_put_req(struct io_kiocb *req)
	}
 }
 
-static void io_kill_timeout(struct io_kiocb *req, int status)
+static bool io_kill_timeout(struct io_kiocb *req, int status)
	__must_hold(&req->ctx->completion_lock)
	__must_hold(&req->ctx->timeout_lock)
 {
@@ -64,7 +64,9 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
			   atomic_read(&req->ctx->cq_timeouts) + 1);
		list_del_init(&timeout->list);
		io_req_tw_post_queue(req, status, 0);
+		return true;
	}
+	return false;
 }
 
 __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
@@ -620,10 +622,9 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
	list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
		struct io_kiocb *req = cmd_to_io_kiocb(timeout);
 
-		if (io_match_task(req, tsk, cancel_all)) {
-			io_kill_timeout(req, -ECANCELED);
+		if (io_match_task(req, tsk, cancel_all) &&
+		    io_kill_timeout(req, -ECANCELED))
			canceled++;
-		}
	}
	spin_unlock_irq(&ctx->timeout_lock);
	io_commit_cqring(ctx);
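For context, the behavioural rule the patch adds can be sketched in a
few lines of plain C (illustrative, not kernel code): only a timeout
that was actually dequeued counts as cancelled, so a timeout whose
handler is already running no longer inflates the caller's progress
count and makes the cancellation loop from patch 01 spin.

#include <stdbool.h>
#include <stdio.h>

enum state { QUEUED, RUNNING };

struct timeout { enum state state; };

/* analogue of io_kill_timeout() after this patch: report success honestly */
static bool kill_timeout(struct timeout *t)
{
	if (t->state != QUEUED)
		return false;	/* reachable but already firing: don't count it */
	/* the real code dequeues it and posts -ECANCELED here */
	return true;
}

int main(void)
{
	struct timeout ts[2] = { { QUEUED }, { RUNNING } };
	int canceled = 0;

	for (int i = 0; i < 2; i++)
		if (kill_timeout(&ts[i]))
			canceled++;

	printf("canceled %d of 2\n", canceled);	/* prints 1, not 2 */
	return 0;
}
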
From patchwork Mon Jun 20 00:25:54 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886882
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com, Nathan Chancellor
Subject: [PATCH for-next 03/10] io_uring: fix io_poll_remove_all clang warnings
Date: Mon, 20 Jun 2022 01:25:54 +0100

clang complains about bitwise operations with bools; add a bit more
verbosity to better show that we want to call io_poll_remove_all_table()
twice, but with different arguments.

Reported-by: Nathan Chancellor
Signed-off-by: Pavel Begunkov
Reviewed-by: Nathan Chancellor
---
 io_uring/poll.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index d4bfc6d945cf..9af6a34222a9 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -589,8 +589,11 @@ __cold bool io_poll_remove_all(struct io_ring_ctx *ctx,
			       struct task_struct *tsk, bool cancel_all)
	__must_hold(&ctx->uring_lock)
 {
-	return io_poll_remove_all_table(tsk, &ctx->cancel_table, cancel_all) |
-	       io_poll_remove_all_table(tsk, &ctx->cancel_table_locked, cancel_all);
+	bool ret;
+
+	ret = io_poll_remove_all_table(tsk, &ctx->cancel_table, cancel_all);
+	ret |= io_poll_remove_all_table(tsk, &ctx->cancel_table_locked, cancel_all);
+	return ret;
 }
 
 static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, bool poll_only,
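The warning in question is clang's -Wbitwise-instead-of-logical. The
standalone demo below (illustrative, not the kernel code) shows why a
plain logical || would be wrong here and why the statement split keeps
the semantics: both scans must always run, which short-circuiting would
break.

#include <stdbool.h>
#include <stdio.h>

static bool scan(const char *name)
{
	printf("scanned %s\n", name);	/* side effect that must not be skipped */
	return true;
}

int main(void)
{
	bool ret;

	ret = scan("cancel_table");		/* always runs */
	ret |= scan("cancel_table_locked");	/* also always runs */

	/* with: ret = scan("a") || scan("b"); the second scan would be
	 * skipped once the first returns true */
	return ret ? 0 : 1;
}
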
From patchwork Mon Jun 20 00:25:55 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886883
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 04/10] io_uring: hide eventfd assumptions in eventfd paths
Date: Mon, 20 Jun 2022 01:25:55 +0100
Message-Id: <0ffc66bae37a2513080b601e4370e147faaa72c5.1655684496.git.asml.silence@gmail.com>

Some io_uring-eventfd users assume that there won't be spurious
wakeups. That assumption has to be honoured by all io_cqring_ev_posted()
callers, which is inconvenient and from time to time leads to problems,
but it has to be maintained so as not to break userspace.

Instead of making the callers track whether a CQE was posted, hide it
inside io_eventfd_signal(): it saves the ->cached_cq_tail value it saw
last time and triggers the eventfd only when ->cached_cq_tail has
changed since then.

Signed-off-by: Pavel Begunkov
---
 include/linux/io_uring_types.h |  2 ++
 io_uring/io_uring.c            | 44 ++++++++++++++++++++--------------
 io_uring/timeout.c             |  3 +--
 3 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 6bcd7bff6479..5987f8acca38 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -314,6 +314,8 @@ struct io_ring_ctx {
	struct list_head	defer_list;
	unsigned		sq_thread_idle;
+	/* protected by ->completion_lock */
+	unsigned		evfd_last_cq_tail;
 };
 
 enum {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2d1d4752b955..ded42d884c49 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -473,6 +473,22 @@ static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
 static void io_eventfd_signal(struct io_ring_ctx *ctx)
 {
	struct io_ev_fd *ev_fd;
+	bool skip;
+
+	spin_lock(&ctx->completion_lock);
+	/*
+	 * Eventfd should only get triggered when at least one event has been
+	 * posted. Some applications rely on the eventfd notification count only
+	 * changing IFF a new CQE has been added to the CQ ring. There's no
+	 * depedency on 1:1 relationship between how many times this function is
+	 * called (and hence the eventfd count) and number of CQEs posted to the
+	 * CQ ring.
+	 */
+	skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
+	ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+	spin_unlock(&ctx->completion_lock);
+	if (skip)
+		return;
 
	rcu_read_lock();
	/*
@@ -511,13 +527,6 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
		io_eventfd_signal(ctx);
 }
 
-/*
- * This should only get called when at least one event has been posted.
- * Some applications rely on the eventfd notification count only changing
- * IFF a new CQE has been added to the CQ ring. There's no depedency on
- * 1:1 relationship between how many times this function is called (and
- * hence the eventfd count) and number of CQEs posted to the CQ ring.
- */
 void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
@@ -530,7 +539,7 @@ void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 /* Returns true if there are no backlogged entries after the flush */
 static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 {
-	bool all_flushed, posted;
+	bool all_flushed;
	size_t cqe_size = sizeof(struct io_uring_cqe);
 
	if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
@@ -539,7 +548,6 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
	if (ctx->flags & IORING_SETUP_CQE32)
		cqe_size <<= 1;
 
-	posted = false;
	spin_lock(&ctx->completion_lock);
	while (!list_empty(&ctx->cq_overflow_list)) {
		struct io_uring_cqe *cqe = io_get_cqe(ctx);
@@ -554,7 +562,6 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
		else
			io_account_cq_overflow(ctx);
 
-		posted = true;
		list_del(&ocqe->list);
		kfree(ocqe);
	}
@@ -567,8 +574,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
 
-	if (posted)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
	return all_flushed;
 }
 
@@ -758,8 +764,7 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx,
	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
-	if (filled)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
	return filled;
 }
 
@@ -940,14 +945,12 @@ __cold void io_free_req(struct io_kiocb *req)
 static void __io_req_find_next_prep(struct io_kiocb *req)
 {
	struct io_ring_ctx *ctx = req->ctx;
-	bool posted;
 
	spin_lock(&ctx->completion_lock);
-	posted = io_disarm_next(req);
+	io_disarm_next(req);
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
-	if (posted)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
 }
 
 static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
@@ -2428,6 +2431,11 @@ static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
		kfree(ev_fd);
		return ret;
	}
+
+	spin_lock(&ctx->completion_lock);
+	ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+	spin_unlock(&ctx->completion_lock);
+
	ev_fd->eventfd_async = eventfd_async;
	ctx->has_evfd = true;
	rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index a79a7d6ef1b3..424b2fc858b8 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -629,7 +629,6 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
	spin_unlock_irq(&ctx->timeout_lock);
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
-	if (canceled != 0)
-		io_cqring_ev_posted(ctx);
+	io_cqring_ev_posted(ctx);
	return canceled != 0;
 }
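The userspace contract being preserved is easiest to see with liburing.
A rough sketch follows (assumes liburing is installed; error handling
omitted; the nop request is just a stand-in for real work): a blocking
read() of the registered eventfd is only a valid "CQEs are ready"
signal if the counter moves IFF new CQEs were posted, which is exactly
what the ->evfd_last_cq_tail check enforces kernel-side.

#include <liburing.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	int efd = eventfd(0, 0);
	uint64_t n;

	io_uring_queue_init(8, &ring, 0);
	io_uring_register_eventfd(&ring, efd);	/* counter tracks CQE posting */

	io_uring_prep_nop(io_uring_get_sqe(&ring));
	io_uring_submit(&ring);

	read(efd, &n, sizeof(n));	/* wakes only once a CQE was posted */
	printf("eventfd fired, reaping CQEs\n");

	io_uring_unregister_eventfd(&ring);
	io_uring_queue_exit(&ring);
	close(efd);
	return 0;
}
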
From patchwork Mon Jun 20 00:25:56 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886885
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 05/10] io_uring: introduce locking helpers for CQE posting
Date: Mon, 20 Jun 2022 01:25:56 +0100

	spin_lock(&ctx->completion_lock);
	/* post CQEs */
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);

We have many places repeating this sequence, and the three-function
unlock section is not ideal from the maintenance perspective and also
makes it harder to add new locking/sync tricks.

Introduce two helpers: io_cq_lock(), which is simple and only grabs
->completion_lock, and io_cq_unlock_post(), which encapsulates the
three-call section.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 57 +++++++++++++++++++++------------------------
 io_uring/io_uring.h |  9 ++++++-
 io_uring/timeout.c  |  6 ++---
 3 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ded42d884c49..82a9e4e2a3e2 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -527,7 +527,7 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
		io_eventfd_signal(ctx);
 }
 
-void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
@@ -536,6 +536,19 @@ void io_cqring_ev_posted(struct io_ring_ctx *ctx)
	io_cqring_wake(ctx);
 }
 
+static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
+	__releases(ctx->completion_lock)
+{
+	io_commit_cqring(ctx);
+	spin_unlock(&ctx->completion_lock);
+	io_cqring_ev_posted(ctx);
+}
+
+void io_cq_unlock_post(struct io_ring_ctx *ctx)
+{
+	__io_cq_unlock_post(ctx);
+}
+
 /* Returns true if there are no backlogged entries after the flush */
 static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 {
@@ -548,7 +561,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
	if (ctx->flags & IORING_SETUP_CQE32)
		cqe_size <<= 1;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
	while (!list_empty(&ctx->cq_overflow_list)) {
		struct io_uring_cqe *cqe = io_get_cqe(ctx);
		struct io_overflow_cqe *ocqe;
@@ -572,9 +585,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
		atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
	}
 
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
	return all_flushed;
 }
 
@@ -760,11 +771,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx,
 {
	bool filled;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
	return filled;
 }
 
@@ -810,11 +819,9 @@ void io_req_complete_post(struct io_kiocb *req)
 {
	struct io_ring_ctx *ctx = req->ctx;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
	__io_req_complete_post(req);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 }
 
 inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
@@ -946,11 +953,9 @@ static void __io_req_find_next_prep(struct io_kiocb *req)
 {
	struct io_ring_ctx *ctx = req->ctx;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
	io_disarm_next(req);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
 }
 
 static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
@@ -984,13 +989,6 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
	percpu_ref_put(&ctx->refs);
 }
 
-static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
-{
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
-}
-
 static void handle_prev_tw_list(struct io_wq_work_node *node,
				struct io_ring_ctx **ctx, bool *uring_locked)
 {
@@ -1006,7 +1004,7 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
 
		if (req->ctx != *ctx) {
			if (unlikely(!*uring_locked && *ctx))
-				ctx_commit_and_unlock(*ctx);
+				io_cq_unlock_post(*ctx);
 
			ctx_flush_and_put(*ctx, uring_locked);
			*ctx = req->ctx;
@@ -1014,7 +1012,7 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
			*uring_locked = mutex_trylock(&(*ctx)->uring_lock);
			percpu_ref_get(&(*ctx)->refs);
			if (unlikely(!*uring_locked))
-				spin_lock(&(*ctx)->completion_lock);
+				io_cq_lock(*ctx);
		}
		if (likely(*uring_locked)) {
			req->io_task_work.func(req, uring_locked);
@@ -1026,7 +1024,7 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
	} while (node);
 
	if (unlikely(!*uring_locked))
-		ctx_commit_and_unlock(*ctx);
+		io_cq_unlock_post(*ctx);
 }
 
 static void handle_tw_list(struct io_wq_work_node *node,
@@ -1261,10 +1259,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
			if (!(req->flags & REQ_F_CQE_SKIP))
				__io_fill_cqe_req(ctx, req);
		}
-
-		io_commit_cqring(ctx);
-		spin_unlock(&ctx->completion_lock);
-		io_cqring_ev_posted(ctx);
+		__io_cq_unlock_post(ctx);
 
		io_free_batch_list(ctx, state->compl_reqs.first);
		INIT_WQ_LIST(&state->compl_reqs);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index bdc62727638b..738fb96575ab 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -24,7 +24,6 @@ void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
 void __io_req_complete_post(struct io_kiocb *req);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
-void io_cqring_ev_posted(struct io_ring_ctx *ctx);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
 
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
@@ -66,6 +65,14 @@ bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
 #define io_for_each_link(pos, head) \
	for (pos = (head); pos; pos = pos->link)
 
+static inline void io_cq_lock(struct io_ring_ctx *ctx)
+	__acquires(ctx->completion_lock)
+{
+	spin_lock(&ctx->completion_lock);
+}
+
+void io_cq_unlock_post(struct io_ring_ctx *ctx);
+
 static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 {
	if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 424b2fc858b8..7e2c341f9762 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -617,7 +617,7 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
	struct io_timeout *timeout, *tmp;
	int canceled = 0;
 
-	spin_lock(&ctx->completion_lock);
+	io_cq_lock(ctx);
	spin_lock_irq(&ctx->timeout_lock);
	list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
		struct io_kiocb *req = cmd_to_io_kiocb(timeout);
@@ -627,8 +627,6 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
		canceled++;
	}
	spin_unlock_irq(&ctx->timeout_lock);
-	io_commit_cqring(ctx);
-	spin_unlock(&ctx->completion_lock);
-	io_cqring_ev_posted(ctx);
+	io_cq_unlock_post(ctx);
	return canceled != 0;
 }
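The refactoring pattern itself, reduced to a toy (pthread-based and
purely illustrative, not kernel code): once the commit/unlock/notify
tail lives in one helper, a future change to the synchronisation scheme
touches a single place instead of every call site.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void cq_lock(void)		/* like io_cq_lock(): trivial today, */
{					/* but a single point to extend later */
	pthread_mutex_lock(&lock);
}

static void cq_unlock_post(void)	/* like io_cq_unlock_post() */
{
	/* io_commit_cqring() equivalent would go here */
	pthread_mutex_unlock(&lock);
	printf("wake waiters\n");	/* io_cqring_ev_posted() equivalent */
}

int main(void)
{
	cq_lock();
	/* post CQEs */
	cq_unlock_post();
	return 0;
}
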
From patchwork Mon Jun 20 00:25:57 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886884
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 06/10] io_uring: add io_commit_cqring_flush()
Date: Mon, 20 Jun 2022 01:25:57 +0100
Message-Id: <0da03887435dd9869ffe46dcd3962bf104afcca3.1655684496.git.asml.silence@gmail.com>

Since the users of __io_commit_cqring_flush() moved into different
files, introduce an io_commit_cqring_flush() helper and encapsulate
all the flag-testing details inside.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 5 +----
 io_uring/io_uring.h | 6 ++++++
 io_uring/rw.c       | 5 +----
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 82a9e4e2a3e2..0be942ca91c4 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -529,10 +529,7 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 
 static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
-	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
-		     ctx->has_evfd))
-		__io_commit_cqring_flush(ctx);
-
+	io_commit_cqring_flush(ctx);
	io_cqring_wake(ctx);
 }
 
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 738fb96575ab..afca7ff8956c 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -229,4 +229,10 @@ static inline void io_req_add_compl_list(struct io_kiocb *req)
	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
 }
 
+static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
+{
+	if (unlikely(ctx->off_timeout_used || ctx->drain_active || ctx->has_evfd))
+		__io_commit_cqring_flush(ctx);
+}
+
 #endif
diff --git a/io_uring/rw.c b/io_uring/rw.c
index f5567d52d2af..5660b1c95641 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1015,10 +1015,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
 
 static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
 {
-	if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
-		     ctx->has_evfd))
-		__io_commit_cqring_flush(ctx);
-
+	io_commit_cqring_flush(ctx);
	if (ctx->flags & IORING_SETUP_SQPOLL)
		io_cqring_wake(ctx);
 }
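A standalone sketch of the helper's shape (types and names are
illustrative, not the kernel's): the inline wrapper keeps the common
no-flags case down to one predicted-false test and hides which
conditions force the out-of-line slow flush.

#include <stdbool.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

struct ring_ctx { bool off_timeout_used, drain_active, has_evfd; };

static void __commit_cqring_flush(struct ring_ctx *ctx)
{
	(void)ctx;	/* slow path: flush timeouts, drain, signal eventfd */
}

static inline void commit_cqring_flush(struct ring_ctx *ctx)
{
	if (unlikely(ctx->off_timeout_used || ctx->drain_active || ctx->has_evfd))
		__commit_cqring_flush(ctx);
}

int main(void)
{
	struct ring_ctx ctx = { 0 };

	commit_cqring_flush(&ctx);	/* fast path: single test, no call */
	return 0;
}
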
From patchwork Mon Jun 20 00:25:58 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886886
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 07/10] io_uring: opcode independent fixed buf import
Date: Mon, 20 Jun 2022 01:25:58 +0100

Fixed buffers are generic infrastructure; make io_import_fixed()
opcode agnostic.

Signed-off-by: Pavel Begunkov
---
 io_uring/rw.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/io_uring/rw.c b/io_uring/rw.c
index 5660b1c95641..4e5d96040cdc 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -273,14 +273,15 @@ static int kiocb_done(struct io_kiocb *req, ssize_t ret,
	return IOU_ISSUE_SKIP_COMPLETE;
 }
 
-static int __io_import_fixed(struct io_kiocb *req, int ddir,
-			     struct iov_iter *iter, struct io_mapped_ubuf *imu)
+static int io_import_fixed(int ddir, struct iov_iter *iter,
+			   struct io_mapped_ubuf *imu,
+			   u64 buf_addr, size_t len)
 {
-	struct io_rw *rw = io_kiocb_to_cmd(req);
-	size_t len = rw->len;
-	u64 buf_end, buf_addr = rw->addr;
+	u64 buf_end;
	size_t offset;
 
+	if (WARN_ON_ONCE(!imu))
+		return -EFAULT;
	if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
		return -EFAULT;
	/* not inside the mapped region */
@@ -332,14 +333,6 @@ static int __io_import_fixed(struct io_kiocb *req, int ddir,
	return 0;
 }
 
-static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
-			   unsigned int issue_flags)
-{
-	if (WARN_ON_ONCE(!req->imu))
-		return -EFAULT;
-	return __io_import_fixed(req, rw, iter, req->imu);
-}
-
 #ifdef CONFIG_COMPAT
 static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
				unsigned int issue_flags)
@@ -426,7 +419,7 @@ static struct iovec *__io_import_iovec(int ddir, struct io_kiocb *req,
	ssize_t ret;
 
	if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
-		ret = io_import_fixed(req, ddir, iter, issue_flags);
+		ret = io_import_fixed(ddir, iter, req->imu, rw->addr, rw->len);
		if (ret)
			return ERR_PTR(ret);
		return NULL;
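For reference, this is the userspace half of fixed buffers that the
helper serves. A rough liburing sketch (assumes liburing is installed;
fd 0, sizes, and offsets are placeholders; error handling omitted):
buffers are registered once and later referenced by index, and
io_import_fixed() is what resolves that index back into mapped pages at
issue time.

#include <liburing.h>
#include <stdlib.h>
#include <sys/uio.h>

int main(void)
{
	struct io_uring ring;
	char *buf = malloc(4096);
	struct iovec iov = { .iov_base = buf, .iov_len = 4096 };
	struct io_uring_sqe *sqe;

	io_uring_queue_init(8, &ring, 0);
	io_uring_register_buffers(&ring, &iov, 1);	/* pin once, up front */

	sqe = io_uring_get_sqe(&ring);
	/* fd 0 and offset 0 are placeholders; last arg is the buffer index */
	io_uring_prep_read_fixed(sqe, 0, buf, 4096, 0, 0);
	io_uring_submit(&ring);

	io_uring_queue_exit(&ring);
	free(buf);
	return 0;
}
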
From patchwork Mon Jun 20 00:25:59 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886888
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 08/10] io_uring: move io_import_fixed()
Date: Mon, 20 Jun 2022 01:25:59 +0100
Message-Id: <4d5becb21f332b4fef6a7cedd6a50e65e2371630.1655684496.git.asml.silence@gmail.com>

Move io_import_fixed() into rsrc.c where it belongs.

Signed-off-by: Pavel Begunkov
---
 io_uring/rsrc.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
 io_uring/rsrc.h |  3 +++
 io_uring/rw.c   | 60 -------------------------------------------------
 3 files changed, 63 insertions(+), 60 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index c10c512aa71b..3a2a5ef263f0 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1307,3 +1307,63 @@ int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
		io_rsrc_node_switch(ctx, NULL);
	return ret;
 }
+
+int io_import_fixed(int ddir, struct iov_iter *iter,
+		    struct io_mapped_ubuf *imu,
+		    u64 buf_addr, size_t len)
+{
+	u64 buf_end;
+	size_t offset;
+
+	if (WARN_ON_ONCE(!imu))
+		return -EFAULT;
+	if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
+		return -EFAULT;
+	/* not inside the mapped region */
+	if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
+		return -EFAULT;
+
+	/*
+	 * May not be a start of buffer, set size appropriately
+	 * and advance us to the beginning.
+	 */
+	offset = buf_addr - imu->ubuf;
+	iov_iter_bvec(iter, ddir, imu->bvec, imu->nr_bvecs, offset + len);
+
+	if (offset) {
+		/*
+		 * Don't use iov_iter_advance() here, as it's really slow for
+		 * using the latter parts of a big fixed buffer - it iterates
+		 * over each segment manually. We can cheat a bit here, because
+		 * we know that:
+		 *
+		 * 1) it's a BVEC iter, we set it up
+		 * 2) all bvecs are PAGE_SIZE in size, except potentially the
+		 *    first and last bvec
+		 *
+		 * So just find our index, and adjust the iterator afterwards.
+		 * If the offset is within the first bvec (or the whole first
+		 * bvec, just use iov_iter_advance(). This makes it easier
+		 * since we can just skip the first segment, which may not
+		 * be PAGE_SIZE aligned.
+		 */
+		const struct bio_vec *bvec = imu->bvec;
+
+		if (offset <= bvec->bv_len) {
+			iov_iter_advance(iter, offset);
+		} else {
+			unsigned long seg_skip;
+
+			/* skip first vec */
+			offset -= bvec->bv_len;
+			seg_skip = 1 + (offset >> PAGE_SHIFT);
+
+			iter->bvec = bvec + seg_skip;
+			iter->nr_segs -= seg_skip;
+			iter->count -= bvec->bv_len + offset;
+			iter->iov_offset = offset & ~PAGE_MASK;
+		}
+	}
+
+	return 0;
+}
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 03f26516e994..87f58315b247 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -64,6 +64,9 @@ int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
 void io_rsrc_node_switch(struct io_ring_ctx *ctx,
			 struct io_rsrc_data *data_to_kill);
 
+int io_import_fixed(int ddir, struct iov_iter *iter,
+		    struct io_mapped_ubuf *imu,
+		    u64 buf_addr, size_t len);
 void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
 int io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 4e5d96040cdc..9166d8166b82 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -273,66 +273,6 @@ static int kiocb_done(struct io_kiocb *req, ssize_t ret,
	return IOU_ISSUE_SKIP_COMPLETE;
 }
 
-static int io_import_fixed(int ddir, struct iov_iter *iter,
-			   struct io_mapped_ubuf *imu,
-			   u64 buf_addr, size_t len)
-{
-	u64 buf_end;
-	size_t offset;
-
-	if (WARN_ON_ONCE(!imu))
-		return -EFAULT;
-	if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
-		return -EFAULT;
-	/* not inside the mapped region */
-	if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
-		return -EFAULT;
-
-	/*
-	 * May not be a start of buffer, set size appropriately
-	 * and advance us to the beginning.
-	 */
-	offset = buf_addr - imu->ubuf;
-	iov_iter_bvec(iter, ddir, imu->bvec, imu->nr_bvecs, offset + len);
-
-	if (offset) {
-		/*
-		 * Don't use iov_iter_advance() here, as it's really slow for
-		 * using the latter parts of a big fixed buffer - it iterates
-		 * over each segment manually. We can cheat a bit here, because
-		 * we know that:
-		 *
-		 * 1) it's a BVEC iter, we set it up
-		 * 2) all bvecs are PAGE_SIZE in size, except potentially the
-		 *    first and last bvec
-		 *
-		 * So just find our index, and adjust the iterator afterwards.
-		 * If the offset is within the first bvec (or the whole first
-		 * bvec, just use iov_iter_advance(). This makes it easier
-		 * since we can just skip the first segment, which may not
-		 * be PAGE_SIZE aligned.
-		 */
-		const struct bio_vec *bvec = imu->bvec;
-
-		if (offset <= bvec->bv_len) {
-			iov_iter_advance(iter, offset);
-		} else {
-			unsigned long seg_skip;
-
-			/* skip first vec */
-			offset -= bvec->bv_len;
-			seg_skip = 1 + (offset >> PAGE_SHIFT);
-
-			iter->bvec = bvec + seg_skip;
-			iter->nr_segs -= seg_skip;
-			iter->count -= bvec->bv_len + offset;
-			iter->iov_offset = offset & ~PAGE_MASK;
-		}
-	}
-
-	return 0;
-}
-
 #ifdef CONFIG_COMPAT
 static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
				unsigned int issue_flags)
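The interesting part of the moved function is the iterator
fast-forward. The self-contained demo below reproduces just that
arithmetic (PAGE_SHIFT value assumed; purely illustrative): with all
middle bvecs page-sized, the starting segment and intra-page offset
fall out of two integer operations instead of a per-segment walk.

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumption: 4K pages, as on most configs */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long first_len = 2048;	/* first bvec may be short */
	unsigned long offset = first_len + 3 * PAGE_SIZE + 100;

	offset -= first_len;			/* skip first vec */
	unsigned long seg_skip = 1 + (offset >> PAGE_SHIFT);
	unsigned long iov_off  = offset & ~PAGE_MASK;

	/* prints: start at segment 4, offset 100 */
	printf("start at segment %lu, offset %lu\n", seg_skip, iov_off);
	return 0;
}
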
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 09/10] io_uring: consistent naming for inline completion
Date: Mon, 20 Jun 2022 01:26:00 +0100
Message-Id: <797c619943dac06529e9d3fcb16e4c3cde6ad1a3.1655684496.git.asml.silence@gmail.com>
X-Mailer: git-send-email 2.36.1

Improve the naming of the inline/deferred completion helper so it's consistent with its *_post counterpart. Add some comments and extra lockdep assertions to ensure the locking is done right.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c |  4 ++--
 io_uring/io_uring.h | 10 +++++++++-
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0be942ca91c4..afda42246d12 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1380,7 +1380,7 @@ void io_req_task_complete(struct io_kiocb *req, bool *locked)
 	}
 
 	if (*locked)
-		io_req_add_compl_list(req);
+		io_req_complete_defer(req);
 	else
 		io_req_complete_post(req);
 }
@@ -1648,7 +1648,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret == IOU_OK) {
 		if (issue_flags & IO_URING_F_COMPLETE_DEFER)
-			io_req_add_compl_list(req);
+			io_req_complete_defer(req);
 		else
 			io_req_complete_post(req);
 	} else if (ret != IOU_ISSUE_SKIP_COMPLETE)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index afca7ff8956c..7a00bbe85d35 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -222,10 +222,18 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
 	}
 }
 
-static inline void io_req_add_compl_list(struct io_kiocb *req)
+/*
+ * Don't complete immediately but use deferred completion infrastructure.
+ * Protected by ->uring_lock and can only be used either with
+ * IO_URING_F_COMPLETE_DEFER or inside a tw handler holding the mutex.
+ */
+static inline void io_req_complete_defer(struct io_kiocb *req)
+	__must_hold(&req->ctx->uring_lock)
 {
 	struct io_submit_state *state = &req->ctx->submit_state;
 
+	lockdep_assert_held(&req->ctx->uring_lock);
+
 	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
 }
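A brief illustration of the deferred-completion pattern that io_req_complete_defer() implements: requests are queued onto a list while the lock is held and completed later in one batch. This is a minimal userspace sketch only (a pthread mutex and a plain flag stand in for ->uring_lock and lockdep_assert_held(); all names are hypothetical):

    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>

    struct req {
            int id;
            struct req *next;
    };

    static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct req *compl_reqs;     /* deferred completion list */
    static int uring_lock_held;        /* stand-in for lockdep state */

    /* defer: just queue the request; the caller must hold the lock */
    static void req_complete_defer(struct req *r)
    {
            assert(uring_lock_held);   /* lockdep_assert_held() stand-in */
            r->next = compl_reqs;
            compl_reqs = r;
    }

    /* flush the whole batch at once, amortising the locking cost */
    static void flush_completions(void)
    {
            assert(uring_lock_held);
            while (compl_reqs) {
                    struct req *r = compl_reqs;
                    compl_reqs = r->next;
                    printf("completed req %d\n", r->id);
            }
    }

    int main(void)
    {
            struct req a = { .id = 1 }, b = { .id = 2 };

            pthread_mutex_lock(&uring_lock);
            uring_lock_held = 1;
            req_complete_defer(&a);
            req_complete_defer(&b);
            flush_completions();
            uring_lock_held = 0;
            pthread_mutex_unlock(&uring_lock);
            return 0;
    }

The lockdep assertion added by the patch plays the role of the assert() here: deferring is only safe while the submitter holds the lock, because the completion list itself is not otherwise synchronised.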
From patchwork Mon Jun 20 00:26:01 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12886889
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 10/10] io_uring: add a warn_once for poll_find
Date: Mon, 20 Jun 2022 01:26:01 +0100
X-Mailer: git-send-email 2.36.1

io_poll_remove() expects poll_find() to search only for poll requests and passes a flag for that. Be a little extra cautious given the many recent poll/cancellation changes, and add a WARN_ON_ONCE check that we don't get an apoll'ed request.

Signed-off-by: Pavel Begunkov
---
 io_uring/poll.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index 9af6a34222a9..8f4fff76d3b4 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -827,6 +827,11 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 	}
 
 found:
+	if (WARN_ON_ONCE(preq->opcode != IORING_OP_POLL_ADD)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
 	if (poll_update->update_events || poll_update->update_user_data) {
 		/* only mask one event flags, keep behavior flags */
 		if (poll_update->update_events) {