From patchwork Sat Jan 4 10:20:56 2020
From: Hillf Danton <hdanton@sina.com>
To: linux-kernel <linux-kernel@vger.kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>, linux-mm <linux-mm@kvack.org>, Hillf Danton <hdanton@sina.com>
Subject: [RFC PATCH] blk-mq: cut fair share of tag depth
Date: Sat, 4 Jan 2020 18:20:56 +0800
Message-Id: <20200104102056.1632-1-hdanton@sina.com>
List-ID: <linux-mm.kvack.org>

Currently, active tag allocators are tracked in an attempt to give each
of them a fair share of the tag depth, but the tracked number of
allocators is unreliable because it is maintained with test_bit() and
test_and_set_bit(). Even if that number were accurate, the result of
hctx_may_queue() would still be incorrect, because the number of tags
already allocated is what gets compared against the expected fair share.
Worse, the tags consumed by a new allocator are not a strong enough
argument to overturn the depth already granted to previous allocators,
since it is hard to tell how many tags each allocator actually needs.
In other words, the tag depth cannot be shared fairly without picking
innocent victims in a dark box. This patch therefore stops trying to
provide a fair share of the tag depth.

Signed-off-by: Hillf Danton <hdanton@sina.com>
---
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -56,35 +56,15 @@ void __blk_mq_tag_idle(struct blk_mq_hw_
 	blk_mq_tag_wakeup_all(tags, false);
 }
 
-/*
- * For shared tag users, we track the number of currently active users
- * and attempt to provide a fair share of the tag depth for each of them.
- */
 static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
 				  struct sbitmap_queue *bt)
 {
-	unsigned int depth, users;
-
 	if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED))
 		return true;
 	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
 		return true;
 
-	/*
-	 * Don't try dividing an ant
-	 */
-	if (bt->sb.depth == 1)
-		return true;
-
-	users = atomic_read(&hctx->tags->active_queues);
-	if (!users)
-		return true;
-
-	/*
-	 * Allow at least some tags
-	 */
-	depth = max((bt->sb.depth + users - 1) / users, 4U);
-	return atomic_read(&hctx->nr_active) < depth;
+	return atomic_read(&hctx->nr_active) < bt->sb.depth;
 }
 
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,