From patchwork Tue Jun 21 09:08:59 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12888914
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 1/4] io_uring: fix poll_add error handling
Date: Tue, 21 Jun 2022 10:08:59 +0100

We should first look at the return value of __io_arm_poll_handler() and only
check ipt.error when it is zero, not the other way around.
Currently we may queue a task_work item for such a request and then also
release it inline, causing a use-after-free.

Fixes: 9c1d09f56425e ("io_uring: handle completions in the core")
Signed-off-by: Pavel Begunkov
---
 io_uring/poll.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index 8f4fff76d3b4..528418aaf3f6 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -782,16 +782,11 @@ int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
 		req->flags &= ~REQ_F_HASH_LOCKED;
 
 	ret = __io_arm_poll_handler(req, poll, &ipt, poll->events);
-	if (ipt.error) {
-		return ipt.error;
-	} else if (ret > 0) {
+	if (ret) {
 		io_req_set_res(req, ret, 0);
 		return IOU_OK;
-	} else if (!ret) {
-		return IOU_ISSUE_SKIP_COMPLETE;
 	}
-
-	return ret;
+	return ipt.error ?: IOU_ISSUE_SKIP_COMPLETE;
 }
 
 int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
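
For readers following along, the patched tail of io_poll_add() reads as the
annotated sketch below. This is a simplified reading of the diff above, not
verbatim kernel source; the comments are editorial:

	ret = __io_arm_poll_handler(req, poll, &ipt, poll->events);
	if (ret) {
		/* the poll completed inline, post the result right away */
		io_req_set_res(req, ret, 0);
		return IOU_OK;
	}
	/*
	 * ret == 0: only now is ipt.error meaningful. A non-zero ipt.error is
	 * handed back to the caller; otherwise the request stays armed and
	 * will be completed later from the wakeup/task_work path, so the
	 * issue path must skip posting a completion here.
	 */
	return ipt.error ?: IOU_ISSUE_SKIP_COMPLETE;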

From patchwork Tue Jun 21 09:09:00 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12888915
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 2/4] io_uring: improve io_run_task_work()
Date: Tue, 21 Jun 2022 10:09:00 +0100
Message-Id: <75d4f34b0c671075892821a409e28da6cb1d64fe.1655802465.git.asml.silence@gmail.com>

Since SQPOLL now uses TWA_SIGNAL_NO_IPI, there won't be task work items
queued without TIF_NOTIFY_SIGNAL being set. Simplify io_run_task_work() by
removing the task->task_works check. Even though it doesn't look like it
causes extra cache bouncing, it's still nice not to touch the field an
extra time when it might not be cached.

Signed-off-by: Pavel Begunkov
---
 io_uring/io_uring.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 7a00bbe85d35..4c4d38ffc5ec 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -203,7 +203,7 @@ static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
 
 static inline bool io_run_task_work(void)
 {
-	if (test_thread_flag(TIF_NOTIFY_SIGNAL) || task_work_pending(current)) {
+	if (test_thread_flag(TIF_NOTIFY_SIGNAL)) {
 		__set_current_state(TASK_RUNNING);
 		clear_notify_signal();
 		if (task_work_pending(current))
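
For reference, the full helper after this patch looks as follows. The body is
pieced together from the hunk context above and the identical io_flush_signals()
body that patch 4/4 removes; the comment is editorial, not part of the kernel
source:

	static inline bool io_run_task_work(void)
	{
		/*
		 * io_uring queues task_work with TWA_SIGNAL or, for SQPOLL,
		 * TWA_SIGNAL_NO_IPI, both of which set TIF_NOTIFY_SIGNAL, so
		 * the separate task_work_pending() test in this condition was
		 * redundant and has been dropped.
		 */
		if (test_thread_flag(TIF_NOTIFY_SIGNAL)) {
			__set_current_state(TASK_RUNNING);
			clear_notify_signal();
			if (task_work_pending(current))
				task_work_run();
			return true;
		}
		return false;
	}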

From patchwork Tue Jun 21 09:09:01 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12888916
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 3/4] io_uring: move list helpers to a separate file
Date: Tue, 21 Jun 2022 10:09:01 +0100

It's annoying to have io-wq.h as a dependency every time we want some of the
struct io_wq_work_list helpers, so move them into a separate file.

Signed-off-by: Pavel Begunkov
---
 io_uring/io-wq.c    |   1 +
 io_uring/io-wq.h    | 131 -----------------------------------------
 io_uring/io_uring.h |   1 +
 io_uring/slist.h    | 138 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 140 insertions(+), 131 deletions(-)
 create mode 100644 io_uring/slist.h

diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 824623bcf1a5..3e34dfbdf946 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -18,6 +18,7 @@
 #include
 
 #include "io-wq.h"
+#include "slist.h"
 
 #define WORKER_IDLE_TIMEOUT	(5 * HZ)

diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
index 10b80ef78bb8..31228426d192 100644
--- a/io_uring/io-wq.h
+++ b/io_uring/io-wq.h
@@ -21,137 +21,6 @@ enum io_wq_cancel {
 	IO_WQ_CANCEL_NOTFOUND,	/* work not found */
 };
 
-#define wq_list_for_each(pos, prv, head) \
-	for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
-
-#define wq_list_for_each_resume(pos, prv) \
-	for (; pos; prv = pos, pos = (pos)->next)
-
-#define wq_list_empty(list)	(READ_ONCE((list)->first) == NULL)
-#define INIT_WQ_LIST(list)	do {				\
-	(list)->first = NULL;					\
-} while (0)
-
-static inline void wq_list_add_after(struct io_wq_work_node *node,
-				     struct io_wq_work_node *pos,
-				     struct io_wq_work_list *list)
-{
-	struct io_wq_work_node *next = pos->next;
-
-	pos->next = node;
-	node->next = next;
-	if (!next)
-		list->last = node;
-}
-
-/**
- * wq_list_merge - merge the second list to the first one.
- * @list0: the first list
- * @list1: the second list
- * Return the first node after mergence.
- */
-static inline struct io_wq_work_node *wq_list_merge(struct io_wq_work_list *list0,
-						     struct io_wq_work_list *list1)
-{
-	struct io_wq_work_node *ret;
-
-	if (!list0->first) {
-		ret = list1->first;
-	} else {
-		ret = list0->first;
-		list0->last->next = list1->first;
-	}
-	INIT_WQ_LIST(list0);
-	INIT_WQ_LIST(list1);
-	return ret;
-}
-
-static inline void wq_list_add_tail(struct io_wq_work_node *node,
-				    struct io_wq_work_list *list)
-{
-	node->next = NULL;
-	if (!list->first) {
-		list->last = node;
-		WRITE_ONCE(list->first, node);
-	} else {
-		list->last->next = node;
-		list->last = node;
-	}
-}
-
-static inline void wq_list_add_head(struct io_wq_work_node *node,
-				    struct io_wq_work_list *list)
-{
-	node->next = list->first;
-	if (!node->next)
-		list->last = node;
-	WRITE_ONCE(list->first, node);
-}
-
-static inline void wq_list_cut(struct io_wq_work_list *list,
-			       struct io_wq_work_node *last,
-			       struct io_wq_work_node *prev)
-{
-	/* first in the list, if prev==NULL */
-	if (!prev)
-		WRITE_ONCE(list->first, last->next);
-	else
-		prev->next = last->next;
-
-	if (last == list->last)
-		list->last = prev;
-	last->next = NULL;
-}
-
-static inline void __wq_list_splice(struct io_wq_work_list *list,
-				    struct io_wq_work_node *to)
-{
-	list->last->next = to->next;
-	to->next = list->first;
-	INIT_WQ_LIST(list);
-}
-
-static inline bool wq_list_splice(struct io_wq_work_list *list,
-				  struct io_wq_work_node *to)
-{
-	if (!wq_list_empty(list)) {
-		__wq_list_splice(list, to);
-		return true;
-	}
-	return false;
-}
-
-static inline void wq_stack_add_head(struct io_wq_work_node *node,
-				     struct io_wq_work_node *stack)
-{
-	node->next = stack->next;
-	stack->next = node;
-}
-
-static inline void wq_list_del(struct io_wq_work_list *list,
-			       struct io_wq_work_node *node,
-			       struct io_wq_work_node *prev)
-{
-	wq_list_cut(list, node, prev);
-}
-
-static inline
-struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
-{
-	struct io_wq_work_node *node = stack->next;
-
-	stack->next = node->next;
-	return node;
-}
-
-static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
-{
-	if (!work->list.next)
-		return NULL;
-
-	return container_of(work->list.next, struct io_wq_work, list);
-}
-
 typedef struct io_wq_work *(free_work_fn)(struct io_wq_work *);
 typedef void (io_wq_work_fn)(struct io_wq_work *);

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 4c4d38ffc5ec..f026d2670959 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include "io-wq.h"
+#include "slist.h"
 #include "filetable.h"
 
 #ifndef CREATE_TRACE_POINTS

diff --git a/io_uring/slist.h b/io_uring/slist.h
new file mode 100644
index 000000000000..f27601fa4660
--- /dev/null
+++ b/io_uring/slist.h
@@ -0,0 +1,138 @@
+#ifndef INTERNAL_IO_SLIST_H
+#define INTERNAL_IO_SLIST_H
+
+#include
+
+#define wq_list_for_each(pos, prv, head) \
+	for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
+
+#define wq_list_for_each_resume(pos, prv) \
+	for (; pos; prv = pos, pos = (pos)->next)
+
+#define wq_list_empty(list)	(READ_ONCE((list)->first) == NULL)
+
+#define INIT_WQ_LIST(list)	do {				\
+	(list)->first = NULL;					\
+} while (0)
+
+static inline void wq_list_add_after(struct io_wq_work_node *node,
+				     struct io_wq_work_node *pos,
+				     struct io_wq_work_list *list)
+{
+	struct io_wq_work_node *next = pos->next;
+
+	pos->next = node;
+	node->next = next;
+	if (!next)
+		list->last = node;
+}
+
+/**
+ * wq_list_merge - merge the second list to the first one.
+ * @list0: the first list
+ * @list1: the second list
+ * Return the first node after mergence.
+ */
+static inline struct io_wq_work_node *wq_list_merge(struct io_wq_work_list *list0,
+						     struct io_wq_work_list *list1)
+{
+	struct io_wq_work_node *ret;
+
+	if (!list0->first) {
+		ret = list1->first;
+	} else {
+		ret = list0->first;
+		list0->last->next = list1->first;
+	}
+	INIT_WQ_LIST(list0);
+	INIT_WQ_LIST(list1);
+	return ret;
+}
+
+static inline void wq_list_add_tail(struct io_wq_work_node *node,
+				    struct io_wq_work_list *list)
+{
+	node->next = NULL;
+	if (!list->first) {
+		list->last = node;
+		WRITE_ONCE(list->first, node);
+	} else {
+		list->last->next = node;
+		list->last = node;
+	}
+}
+
+static inline void wq_list_add_head(struct io_wq_work_node *node,
+				    struct io_wq_work_list *list)
+{
+	node->next = list->first;
+	if (!node->next)
+		list->last = node;
+	WRITE_ONCE(list->first, node);
+}
+
+static inline void wq_list_cut(struct io_wq_work_list *list,
+			       struct io_wq_work_node *last,
+			       struct io_wq_work_node *prev)
+{
+	/* first in the list, if prev==NULL */
+	if (!prev)
+		WRITE_ONCE(list->first, last->next);
+	else
+		prev->next = last->next;
+
+	if (last == list->last)
+		list->last = prev;
+	last->next = NULL;
+}
+
+static inline void __wq_list_splice(struct io_wq_work_list *list,
+				    struct io_wq_work_node *to)
+{
+	list->last->next = to->next;
+	to->next = list->first;
+	INIT_WQ_LIST(list);
+}
+
+static inline bool wq_list_splice(struct io_wq_work_list *list,
+				  struct io_wq_work_node *to)
+{
+	if (!wq_list_empty(list)) {
+		__wq_list_splice(list, to);
+		return true;
+	}
+	return false;
+}
+
+static inline void wq_stack_add_head(struct io_wq_work_node *node,
+				     struct io_wq_work_node *stack)
+{
+	node->next = stack->next;
+	stack->next = node;
+}
+
+static inline void wq_list_del(struct io_wq_work_list *list,
+			       struct io_wq_work_node *node,
+			       struct io_wq_work_node *prev)
+{
+	wq_list_cut(list, node, prev);
+}
+
+static inline
+struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
+{
+	struct io_wq_work_node *node = stack->next;
+
+	stack->next = node->next;
+	return node;
+}
+
+static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
+{
+	if (!work->list.next)
+		return NULL;
+
+	return container_of(work->list.next, struct io_wq_work, list);
+}
+
+#endif // INTERNAL_IO_SLIST_H
\ No newline at end of file
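
As a quick orientation for the moved helpers, here is a hypothetical consumer
that now only needs "slist.h" instead of all of io-wq.h. The drain() function
and its processing step are invented for illustration; only the wq_list_*
helpers, INIT_WQ_LIST and the io_wq_work/io_wq_work_node types come from the
patch above:

	#include "slist.h"

	static void drain(struct io_wq_work_list *list)
	{
		struct io_wq_work_node *pos, *prv;

		/* walk the singly linked list front to back */
		wq_list_for_each(pos, prv, list) {
			struct io_wq_work *work =
				container_of(pos, struct io_wq_work, list);

			(void)work;	/* placeholder for real processing */
		}
		/* drop all entries; the list head is reusable afterwards */
		INIT_WQ_LIST(list);
	}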

From patchwork Tue Jun 21 09:09:02 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12888917
From: Pavel Begunkov
To: io-uring@vger.kernel.org
Cc: Jens Axboe, asml.silence@gmail.com
Subject: [PATCH for-next 4/4] io_uring: dedup io_run_task_work
Date: Tue, 21 Jun 2022 10:09:02 +0100

We have an identical copy of io_run_task_work() for io-wq called
io_flush_signals(); deduplicate them.

Signed-off-by: Pavel Begunkov
---
 io_uring/filetable.h |  2 ++
 io_uring/io-wq.c     | 17 +++--------------
 2 files changed, 5 insertions(+), 14 deletions(-)

diff --git a/io_uring/filetable.h b/io_uring/filetable.h
index 6b58aa48bc45..fb5a274c08ff 100644
--- a/io_uring/filetable.h
+++ b/io_uring/filetable.h
@@ -2,6 +2,8 @@
 #ifndef IOU_FILE_TABLE_H
 #define IOU_FILE_TABLE_H
 
+#include
+
 struct io_ring_ctx;
 struct io_kiocb;

diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 3e34dfbdf946..77df5b43bf52 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -19,6 +19,7 @@
 
 #include "io-wq.h"
 #include "slist.h"
+#include "io_uring.h"
 
 #define WORKER_IDLE_TIMEOUT	(5 * HZ)
 
@@ -519,23 +520,11 @@ static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
 	return NULL;
 }
 
-static bool io_flush_signals(void)
-{
-	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
-		__set_current_state(TASK_RUNNING);
-		clear_notify_signal();
-		if (task_work_pending(current))
-			task_work_run();
-		return true;
-	}
-	return false;
-}
-
 static void io_assign_current_work(struct io_worker *worker,
 				   struct io_wq_work *work)
 {
 	if (work) {
-		io_flush_signals();
+		io_run_task_work();
 		cond_resched();
 	}
 
@@ -655,7 +644,7 @@ static int io_wqe_worker(void *data)
 			last_timeout = false;
 			__io_worker_idle(wqe, worker);
 			raw_spin_unlock(&wqe->lock);
-			if (io_flush_signals())
+			if (io_run_task_work())
 				continue;
 			ret = schedule_timeout(WORKER_IDLE_TIMEOUT);
 			if (signal_pending(current)) {
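
As a closing illustration, the io-wq worker's idle path after this patch,
condensed from the final hunk above (the enclosing loop, locking context and
error handling are elided; the comments are editorial):

	/* inside io_wqe_worker()'s main loop, simplified */
	__io_worker_idle(wqe, worker);
	raw_spin_unlock(&wqe->lock);
	if (io_run_task_work())		/* was io_flush_signals() */
		continue;		/* ran deferred work, re-check for new work */
	ret = schedule_timeout(WORKER_IDLE_TIMEOUT);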