From patchwork Mon Dec 5 02:44:26 2022
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 2/7] io_uring: don't check overflow flush failures
Date: Mon, 5 Dec 2022 02:44:26 +0000
X-Mailer: git-send-email 2.38.1

The only way a flush of overflowed CQEs can fail is if the CQ is fully
packed.
There is one place checking for flush failures, i.e. io_cqring_wait(),
but we limit the number of events to be waited for by the CQ size, so
getting a flush failure automatically means that we're done waiting.
Don't check for failures; although rare, they could spuriously fail CQ
waiting with -EBUSY.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.c | 24 ++++++------------------
 1 file changed, 6 insertions(+), 18 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4721ff6cafaa..7239776a9d4b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -629,13 +629,12 @@ static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
 }
 
 /* Returns true if there are no backlogged entries after the flush */
-static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
+static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 {
-	bool all_flushed;
 	size_t cqe_size = sizeof(struct io_uring_cqe);
 
 	if (__io_cqring_events(ctx) == ctx->cq_entries)
-		return false;
+		return;
 
 	if (ctx->flags & IORING_SETUP_CQE32)
 		cqe_size <<= 1;
@@ -654,30 +653,23 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 		kfree(ocqe);
 	}
 
-	all_flushed = list_empty(&ctx->cq_overflow_list);
-	if (all_flushed) {
+	if (list_empty(&ctx->cq_overflow_list)) {
 		clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
 		atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
 	}
-
 	io_cq_unlock_post(ctx);
-	return all_flushed;
 }
 
-static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx)
+static void io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 {
-	bool ret = true;
-
 	if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
 		/* iopoll syncs against uring_lock, not completion_lock */
 		if (ctx->flags & IORING_SETUP_IOPOLL)
 			mutex_lock(&ctx->uring_lock);
-		ret = __io_cqring_overflow_flush(ctx);
+		__io_cqring_overflow_flush(ctx);
 		if (ctx->flags & IORING_SETUP_IOPOLL)
 			mutex_unlock(&ctx->uring_lock);
 	}
-
-	return ret;
 }
 
 void __io_put_task(struct task_struct *task, int nr)
@@ -2505,11 +2497,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 
 	trace_io_uring_cqring_wait(ctx, min_events);
 	do {
-		/* if we can't even flush overflow, don't wait for more */
-		if (!io_cqring_overflow_flush(ctx)) {
-			ret = -EBUSY;
-			break;
-		}
+		io_cqring_overflow_flush(ctx);
 		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
					  TASK_INTERRUPTIBLE);
 		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
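
For illustration only, not part of the patch: a minimal user-space
sketch of the situation the message above describes, written against
liburing. The ring sizes and NOP count are arbitrary example values,
and the sketch assumes a kernel with the overflow backlog behaviour
(IORING_FEAT_NODROP).

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring_params p = { .flags = IORING_SETUP_CQSIZE };
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	int i, ret;

	/* Tiny CQ: 4 entries (must be >= the SQ size we ask for). */
	p.cq_entries = 4;
	ret = io_uring_queue_init_params(4, &ring, &p);
	if (ret) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/*
	 * NOPs complete inline at submit time. After the first 4
	 * completions the CQ is fully packed and the rest land on the
	 * kernel's overflow backlog, which cannot be flushed until
	 * user space consumes CQEs.
	 */
	for (i = 0; i < 16; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		if (!sqe)
			break;
		io_uring_prep_nop(sqe);
		io_uring_submit(&ring);
	}

	/*
	 * Any wait is bounded by the CQ size (io_uring_enter() clamps
	 * min_complete to cq_entries), so a fully packed CQ already
	 * satisfies it. Before this patch, a wait that hit an
	 * unflushable backlog could spuriously return -EBUSY; with it,
	 * the wait simply proceeds.
	 */
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret)
		io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	return ret ? 1 : 0;
}

Since the wait is always bounded by the CQ size, the -EBUSY path
removed by the patch could only fire when the wait condition was
already met, which is the rationale for dropping it.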