From patchwork Sun Dec 22 14:46:54 2019
X-Patchwork-Submitter: Hillf Danton
X-Patchwork-Id: 11307507
From: Hillf Danton <hdanton@sina.com>
To: io-uring@vger.kernel.org
Cc: axboe@kernel.dk, viro@zeniv.linux.org.uk, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Hillf Danton <hdanton@sina.com>
Subject: [RFC PATCH] io-wq: cut busy list off io_wqe
Date: Sun, 22 Dec 2019 22:46:54 +0800
Message-Id: <20191222144654.5060-1-hdanton@sina.com>

Commit e61df66c69b1 ("io-wq: ensure free/busy list browsing see all
items") added an all-worker list for io-wq workers in addition to the
free and busy lists. That not only made walking the workers cleaner,
it also left the busy list with no remaining purpose: at best a nice
vase. Time to remove it now.

Signed-off-by: Hillf Danton <hdanton@sina.com>
---
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -92,7 +92,6 @@ struct io_wqe {
 	struct io_wqe_acct acct[2];

 	struct hlist_nulls_head free_list;
-	struct hlist_nulls_head busy_list;
 	struct list_head all_list;

 	struct io_wq *wq;
@@ -327,7 +326,6 @@ static void __io_worker_busy(struct io_w
 	if (worker->flags & IO_WORKER_F_FREE) {
 		worker->flags &= ~IO_WORKER_F_FREE;
 		hlist_nulls_del_init_rcu(&worker->nulls_node);
-		hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->busy_list);
 	}

 	/*
@@ -365,7 +363,6 @@ static bool __io_worker_idle(struct io_w
 {
 	if (!(worker->flags & IO_WORKER_F_FREE)) {
 		worker->flags |= IO_WORKER_F_FREE;
-		hlist_nulls_del_init_rcu(&worker->nulls_node);
 		hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
 	}

@@ -805,10 +802,6 @@ void io_wq_cancel_all(struct io_wq *wq)

 	set_bit(IO_WQ_BIT_CANCEL, &wq->state);

-	/*
-	 * Browse both lists, as there's a gap between handing work off
-	 * to a worker and the worker putting itself on the busy_list
-	 */
 	rcu_read_lock();
 	for_each_node(node) {
 		struct io_wqe *wqe = wq->wqes[node];
@@ -1058,7 +1051,6 @@ struct io_wq *io_wq_create(unsigned boun
 		spin_lock_init(&wqe->lock);
 		INIT_WQ_LIST(&wqe->work_list);
 		INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0);
-		INIT_HLIST_NULLS_HEAD(&wqe->busy_list, 1);
 		INIT_LIST_HEAD(&wqe->all_list);
 	}