From patchwork Fri Jan 27 10:59:11 2023
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13118366
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Cc: Dylan Yudaken
Subject: [PATCH] io_uring: always prep_async for drain requests
Date: Fri, 27 Jan 2023 02:59:11 -0800
Message-ID: <20230127105911.2420061-1-dylany@meta.com>
X-Mailer: git-send-email 2.30.2
X-Mailing-List: io-uring@vger.kernel.org

Drain requests all go through io_drain_req, which has a quick exit in case
there is nothing pending (i.e. the drain is not useful). In that case it can
issue the request immediately.

However, for safety it queues it through task work. The problem is that in
this case the request is run asynchronously, but the async work has not been
prepared through io_req_prep_async.
This has not been a problem up to now, as the task work would always run
before returning to userspace, and so the user would not have a chance to
race with it.

However - with IORING_SETUP_DEFER_TASKRUN - this is no longer the case and
the work might be deferred, giving userspace a chance to change data being
referred to in the request.

Instead _always_ prep_async for drain requests, which is simpler anyway and
removes this issue.

Cc: stable@vger.kernel.org
Fixes: c0e0d6ba25f1 ("io_uring: add IORING_SETUP_DEFER_TASKRUN")
Signed-off-by: Dylan Yudaken
---

Hi,

Worth pointing out that in discussion with Pavel we considered just removing
the fast path in io_drain_req. That felt like a more invasive change, as it
can add an extra kmalloc + io_prep_async_link(), but perhaps you feel that is
a better approach.

I'll send out a liburing test shortly.

Dylan

 io_uring/io_uring.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)


base-commit: b00c51ef8f72ced0965d021a291b98ff822c5337

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0a4efada9b3c..db623b3185c8 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1765,17 +1765,12 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	}
 	spin_unlock(&ctx->completion_lock);
 
-	ret = io_req_prep_async(req);
-	if (ret) {
-fail:
-		io_req_defer_failed(req, ret);
-		return;
-	}
 	io_prep_async_link(req);
 	de = kmalloc(sizeof(*de), GFP_KERNEL);
 	if (!de) {
 		ret = -ENOMEM;
-		goto fail;
+		io_req_defer_failed(req, ret);
+		return;
 	}
 
 	spin_lock(&ctx->completion_lock);
@@ -2048,13 +2043,16 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
 		io_req_defer_failed(req, req->cqe.res);
-	} else if (unlikely(req->ctx->drain_active)) {
-		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);
 
-		if (unlikely(ret))
+		if (unlikely(ret)) {
 			io_req_defer_failed(req, ret);
+			return;
+		}
+
+		if (unlikely(req->ctx->drain_active))
+			io_drain_req(req);
 		else
 			io_queue_iowq(req, NULL);
 	}
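
To make the race window concrete, below is a minimal sketch of the kind of
userspace sequence the commit message describes, written against liburing.
It is only an illustration, not the liburing test mentioned above; using a
drained sendmsg as the request whose prep would normally copy per-request
data is an assumption made for the sketch, and error handling is trimmed.

/*
 * Sketch: submit a drained request whose prep copies user data, mutate
 * that data before task work has run, then wait for the CQE. With
 * IORING_SETUP_DEFER_TASKRUN the deferred task work only runs when we
 * wait, which is what opens the window described above.
 */
#include <liburing.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_params p = { 0 };
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int fds[2];
	char buf[] = "original payload";
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

	/* DEFER_TASKRUN requires SINGLE_ISSUER. */
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
	if (io_uring_queue_init_params(8, &ring, &p) < 0)
		return 1;
	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
		return 1;

	/*
	 * A drained sendmsg: without the fix, the no-op drain fast path can
	 * queue it for async execution without io_req_prep_async(), so the
	 * kernel still refers to msg/iov in our memory.
	 */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_sendmsg(sqe, fds[0], &msg, 0);
	sqe->flags |= IOSQE_IO_DRAIN;
	io_uring_submit(&ring);

	/* Task work has not run yet; mutate the data the request refers to. */
	memset(buf, 'X', sizeof(buf) - 1);

	/*
	 * Waiting runs the deferred task work; pre-fix, the send may now pick
	 * up the mutated buffer instead of the contents at submission time.
	 */
	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}

Something like "gcc -o drain_race drain_race.c -luring" should build it,
on a kernel new enough to support IORING_SETUP_DEFER_TASKRUN (6.1+).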