From patchwork Fri Apr 15 10:10:46 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 1/8] sbitmap: record the number of waiters for each waitqueue
Date: Fri, 15 Apr 2022 18:10:46 +0800
Message-ID: <20220415101053.554495-2-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Add a counter in struct sbq_wait_state to record how many threads are
waiting on the waitqueue. This will be used in later patches to make
sure the 8 waitqueues are balanced. The counter is also shown in
debugfs so that users can check whether the waitqueues are balanced.
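For reference, with this series applied the per-hctx 'tags' file in blk-mq
debugfs would report the new counter next to wait_cnt. The path and the
numbers below are illustrative only and depend on the device:

  # cat /sys/kernel/debug/block/sda/hctx0/tags
  ...
  ws_active=4
  ws={
        {.wait_cnt=8, .waiters_cnt=1},
        {.wait_cnt=8, .waiters_cnt=0},
        ...
  }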
Signed-off-by: Yu Kuai
---
 include/linux/sbitmap.h | 5 +++++
 lib/sbitmap.c           | 7 +++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 8f5a86e210b9..8a64271d0696 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -91,6 +91,11 @@ struct sbq_wait_state {
 	 */
 	atomic_t wait_cnt;
 
+	/**
+	 * @waiters_cnt: Number of active waiters
+	 */
+	atomic_t waiters_cnt;
+
 	/**
 	 * @wait: Wait queue.
 	 */
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index ae4fd4de9ebe..a5105ce6d424 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -444,6 +444,7 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
 		init_waitqueue_head(&sbq->ws[i].wait);
 		atomic_set(&sbq->ws[i].wait_cnt, sbq->wake_batch);
+		atomic_set(&sbq->ws[i].waiters_cnt, 0);
 	}
 
 	return 0;
@@ -759,9 +760,9 @@ void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m)
 	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
 		struct sbq_wait_state *ws = &sbq->ws[i];
 
-		seq_printf(m, "\t{.wait_cnt=%d, .wait=%s},\n",
+		seq_printf(m, "\t{.wait_cnt=%d, .waiters_cnt=%d},\n",
 			   atomic_read(&ws->wait_cnt),
-			   waitqueue_active(&ws->wait) ? "active" : "inactive");
+			   atomic_read(&ws->waiters_cnt));
 	}
 
 	seq_puts(m, "}\n");
@@ -797,6 +798,7 @@ void sbitmap_prepare_to_wait(struct sbitmap_queue *sbq,
 			 struct sbq_wait *sbq_wait, int state)
 {
 	if (!sbq_wait->sbq) {
+		atomic_inc(&ws->waiters_cnt);
 		atomic_inc(&sbq->ws_active);
 		sbq_wait->sbq = sbq;
 	}
@@ -810,6 +812,7 @@ void sbitmap_finish_wait(struct sbitmap_queue *sbq, struct sbq_wait_state *ws,
 	finish_wait(&ws->wait, &sbq_wait->wait);
 	if (sbq_wait->sbq) {
 		atomic_dec(&sbq->ws_active);
+		atomic_dec(&ws->waiters_cnt);
 		sbq_wait->sbq = NULL;
 	}
 }

From patchwork Fri Apr 15 10:10:47 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 2/8] blk-mq: call 'bt_wait_ptr()' later in blk_mq_get_tag()
Date: Fri, 15 Apr 2022 18:10:47 +0800
Message-ID: <20220415101053.554495-3-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

bt_wait_ptr() increases 'wait_index'. However, if blk_mq_get_tag() gets
a tag successfully after bt_wait_ptr() is called and before
sbitmap_prepare_to_wait() is called, then that 'ws' is skipped. This
behavior might cause the 8 waitqueues to be unbalanced. Moving
bt_wait_ptr() later should reduce the problem when the disk is under
high io pressure. In the meantime, instead of calling bt_wait_ptr() in
every loop iteration, call it only if the destination hw queue has
changed, which should reduce the unfairness further.
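For context, the round-robin selection behind bt_wait_ptr() boils down to the
following user-space sketch (a paraphrase for illustration, not the kernel
implementation; the helper name is made up):

#define SBQ_WAIT_QUEUES 8

/* hand out the current index, then advance it for the next caller */
static int next_wait_index(unsigned int *wait_index)
{
	unsigned int old = *wait_index;

	*wait_index = (old + 1) % SBQ_WAIT_QUEUES;
	return old;
}

Every call consumes one step of the rotation, so calling it before knowing
whether the waitqueue will actually be used skews the distribution this
series tries to keep balanced.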
Signed-off-by: Yu Kuai
---
 block/blk-mq-tag.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 68ac23d0b640..5ad85063e91e 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -131,7 +131,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 {
 	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
 	struct sbitmap_queue *bt;
-	struct sbq_wait_state *ws;
+	struct sbq_wait_state *ws = NULL;
 	DEFINE_SBQ_WAIT(wait);
 	unsigned int tag_offset;
 	int tag;
@@ -155,7 +155,6 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 	if (data->flags & BLK_MQ_REQ_NOWAIT)
 		return BLK_MQ_NO_TAG;
 
-	ws = bt_wait_ptr(bt, data->hctx);
 	do {
 		struct sbitmap_queue *bt_prev;
 
@@ -174,6 +173,8 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		if (tag != BLK_MQ_NO_TAG)
 			break;
 
+		if (!ws)
+			ws = bt_wait_ptr(bt, data->hctx);
 		sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
 
 		tag = __blk_mq_get_tag(data, bt);
@@ -199,10 +200,10 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		 * previous queue for compensating the wake up miss, so
 		 * other allocations on previous queue won't be starved.
 		 */
-		if (bt != bt_prev)
+		if (bt != bt_prev) {
 			sbitmap_queue_wake_up(bt_prev);
-
-		ws = bt_wait_ptr(bt, data->hctx);
+			ws = bt_wait_ptr(bt, data->hctx);
+		}
 	} while (1);
 
 	sbitmap_finish_wait(bt, ws, &wait);

From patchwork Fri Apr 15 10:10:48 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 3/8] sbitmap: make sure waitqueues are balanced
Date: Fri, 15 Apr 2022 18:10:48 +0800
Message-ID: <20220415101053.554495-4-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently, the same waitqueue might be woken up continuously:

__sbq_wake_up                          __sbq_wake_up
 sbq_wake_ptr -> assume 0               sbq_wake_ptr -> 0
 atomic_dec_return                      atomic_dec_return
 atomic_cmpxchg -> succeed              atomic_cmpxchg -> failed
                                        return true
                                       __sbq_wake_up
                                        sbq_wake_ptr
                                         atomic_read(&sbq->wake_index) -> still 0
 sbq_index_atomic_inc -> inc to 1
                                         if (waitqueue_active(&ws->wait))
                                          if (wake_index != atomic_read(&sbq->wake_index))
                                           atomic_set -> reset from 1 to 0
 wake_up_nr -> wake up first waitqueue
                                         // continue to wake up in first waitqueue

What's worse, an io hang is theoretically possible because wakeups might
be missed. For example, 2 * wake_batch tags are put while only
wake_batch threads are woken:

__sbq_wake_up
 atomic_cmpxchg -> reset wait_cnt
                                       __sbq_wake_up -> decrease wait_cnt
                                       ...
                                       __sbq_wake_up -> wait_cnt is decreased to 0 again
                                        atomic_cmpxchg
                                        sbq_index_atomic_inc -> increase wake_index
                                        wake_up_nr -> wake up and waitqueue might be empty
 sbq_index_atomic_inc -> increase again, one waitqueue is skipped
 wake_up_nr -> invalid wake up because old waitqueue might be empty

To fix the problem, refactor the code so that waitqueues are woken up
one by one, and choose the next waitqueue by the number of threads that
are waiting, to keep the waitqueues balanced.
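The selection policy can be summarized by this stand-alone sketch
(illustrative only; the real code in the diff below operates on the atomics
inside struct sbitmap_queue):

#define SBQ_WAIT_QUEUES 8

/* pick the waitqueue with the most waiters, skipping the one just served */
static int pick_wake_index(const int waiters_cnt[SBQ_WAIT_QUEUES], int old)
{
	int index = old, max_waiters = 0;

	for (int i = 0; i < SBQ_WAIT_QUEUES; i++) {
		if (i == old)
			continue;
		if (waiters_cnt[i] > max_waiters) {
			max_waiters = waiters_cnt[i];
			index = i;
		}
	}
	return max_waiters ? index : old;	/* keep 'old' if everything is idle */
}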
cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 15 Apr 2022 17:56:25 +0800 From: Yu Kuai To: , , , , , CC: , , , Subject: [PATCH -next RFC v3 3/8] sbitmap: make sure waitqueues are balanced Date: Fri, 15 Apr 2022 18:10:48 +0800 Message-ID: <20220415101053.554495-4-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com> References: <20220415101053.554495-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Currently, same waitqueue might be woken up continuously: __sbq_wake_up __sbq_wake_up sbq_wake_ptr -> assume 0 sbq_wake_ptr -> 0 atomic_dec_return atomic_dec_return atomic_cmpxchg -> succeed atomic_cmpxchg -> failed return true __sbq_wake_up sbq_wake_ptr atomic_read(&sbq->wake_index) -> still 0 sbq_index_atomic_inc -> inc to 1 if (waitqueue_active(&ws->wait)) if (wake_index != atomic_read(&sbq->wake_index)) atomic_set -> reset from 1 to 0 wake_up_nr -> wake up first waitqueue // continue to wake up in first waitqueue What's worse, io hung is possible in theory because wake up might be missed. For example, 2 * wake_batch tags are put, while only wake_batch threads are worken: __sbq_wake_up atomic_cmpxchg -> reset wait_cnt __sbq_wake_up -> decrease wait_cnt ... __sbq_wake_up -> wait_cnt is decreased to 0 again atomic_cmpxchg sbq_index_atomic_inc -> increase wake_index wake_up_nr -> wake up and waitqueue might be empty sbq_index_atomic_inc -> increase again, one waitqueue is skipped wake_up_nr -> invalid wake up because old wakequeue might be empty To fix the problem, refactor to make sure waitqueues will be woken up one by one, and also choose the next waitqueue by the number of threads that are waiting to keep waitqueues balanced. 
Signed-off-by: Yu Kuai
---
 lib/sbitmap.c | 88 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 48 insertions(+), 40 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index a5105ce6d424..7527527bbc86 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -575,66 +575,74 @@ void sbitmap_queue_min_shallow_depth(struct sbitmap_queue *sbq,
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_min_shallow_depth);
 
-static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
+/* always choose the 'ws' with the max waiters */
+static void sbq_update_wake_index(struct sbitmap_queue *sbq,
+				  int old_wake_index)
 {
-	int i, wake_index;
+	int index, wake_index;
+	int max_waiters = 0;
 
-	if (!atomic_read(&sbq->ws_active))
-		return NULL;
+	if (old_wake_index != atomic_read(&sbq->wake_index))
+		return;
 
-	wake_index = atomic_read(&sbq->wake_index);
-	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
-		struct sbq_wait_state *ws = &sbq->ws[wake_index];
+	for (wake_index = 0; wake_index < SBQ_WAIT_QUEUES; wake_index++) {
+		struct sbq_wait_state *ws;
+		int waiters;
 
-		if (waitqueue_active(&ws->wait)) {
-			if (wake_index != atomic_read(&sbq->wake_index))
-				atomic_set(&sbq->wake_index, wake_index);
-			return ws;
-		}
+		if (wake_index == old_wake_index)
+			continue;
 
-		wake_index = sbq_index_inc(wake_index);
+		ws = &sbq->ws[wake_index];
+		waiters = atomic_read(&ws->waiters_cnt);
+		if (waiters > max_waiters) {
+			max_waiters = waiters;
+			index = wake_index;
+		}
 	}
 
-	return NULL;
+	if (max_waiters)
+		atomic_cmpxchg(&sbq->wake_index, old_wake_index, index);
 }
 
 static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 {
 	struct sbq_wait_state *ws;
 	unsigned int wake_batch;
-	int wait_cnt;
+	int wait_cnt, wake_index;
 
-	ws = sbq_wake_ptr(sbq);
-	if (!ws)
+	if (!atomic_read(&sbq->ws_active))
 		return false;
 
-	wait_cnt = atomic_dec_return(&ws->wait_cnt);
-	if (wait_cnt <= 0) {
-		int ret;
-
-		wake_batch = READ_ONCE(sbq->wake_batch);
+	wake_index = atomic_read(&sbq->wake_index);
+	ws = &sbq->ws[wake_index];
 
-		/*
-		 * Pairs with the memory barrier in sbitmap_queue_resize() to
-		 * ensure that we see the batch size update before the wait
-		 * count is reset.
-		 */
-		smp_mb__before_atomic();
+	/* Dismatch wake_index can only happened in the first wakeup. */
+	if (!atomic_read(&ws->waiters_cnt)) {
+		sbq_update_wake_index(sbq, wake_index);
+		return true;
+	}
 
-		/*
-		 * For concurrent callers of this, the one that failed the
-		 * atomic_cmpxhcg() race should call this function again
-		 * to wakeup a new batch on a different 'ws'.
-		 */
-		ret = atomic_cmpxchg(&ws->wait_cnt, wait_cnt, wake_batch);
-		if (ret == wait_cnt) {
-			sbq_index_atomic_inc(&sbq->wake_index);
-			wake_up_nr(&ws->wait, wake_batch);
-			return false;
-		}
+	wait_cnt = atomic_dec_return(&ws->wait_cnt);
+	if (wait_cnt > 0)
+		return false;
+
+	sbq_update_wake_index(sbq, wake_index);
+	/*
+	 * Concurrent callers should call this function again
+	 * to wakeup a new batch on a different 'ws'.
+	 */
+	if (wait_cnt < 0)
 		return true;
-	}
+
+	wake_batch = READ_ONCE(sbq->wake_batch);
+	/*
+	 * Pairs with the memory barrier in sbitmap_queue_resize() to
+	 * ensure that we see the batch size update before the wait
+	 * count is reset.
+	 */
+	smp_mb__before_atomic();
+	atomic_set(&ws->wait_cnt, wake_batch);
+	wake_up_nr(&ws->wait, wake_batch);
 
 	return false;
 }
From patchwork Fri Apr 15 10:10:49 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 4/8] blk-mq: don't preempt tag under heavy load
Date: Fri, 15 Apr 2022 18:10:49 +0800
Message-ID: <20220415101053.554495-5-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Tag preemption is the default behaviour: blk_mq_get_tag() tries to get
a tag unconditionally, which means a new io can preempt a tag even if
there are lots of ios already waiting for tags. Such behaviour doesn't
make sense when the disk is under heavy load, because it intensifies
competition without improving performance, especially for huge ios, as
split ios are unlikely to be issued continuously.

The ideal way to disable tag preemption would be to track how many tags
are available and wait directly in blk_mq_get_tag() if few tags are
free. However, that is unrealistic because it would affect the fast
path. As 'ws_active' is only updated in the slow path, this patch
disables tag preemption if 'ws_active' is greater than 8, which means
many threads are already waiting for tags.

Once tag preemption is disabled, there is a situation that can cause
performance degradation (or an io hang in extreme scenarios): the
waitqueue may not have 'wake_batch' threads, thus a wake up on this
waitqueue might decrease the concurrency of ios. The next patch fixes
this problem.

This patch also adds a detection in blk_mq_timeout_work(), just in case
an io hang is triggered because waiters can't be awakened in some
corner cases.
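Put differently, the policy added below amounts to the following check (an
illustrative restatement, not a new helper in the patch; SBQ_WAIT_QUEUES is 8):

/* keep preempting while only a few waiters are accounted in ws_active */
static bool may_preempt_tag(bool already_waited, int ws_active)
{
	return already_waited || ws_active <= 8;
}

Here 'already_waited' corresponds to data->preempt, which is set once the
allocator has slept, so a woken thread does not have to queue behind newer
ios again.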
Signed-off-by: Yu Kuai
---
 block/blk-mq-tag.c | 36 +++++++++++++++++++++++++-----------
 block/blk-mq.c     | 29 +++++++++++++++++++++++++++++
 block/blk-mq.h     |  2 ++
 3 files changed, 56 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 5ad85063e91e..a6c5ec846a5e 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -127,6 +127,13 @@ unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
 	return ret;
 }
 
+static inline bool preempt_tag(struct blk_mq_alloc_data *data,
+			       struct sbitmap_queue *bt)
+{
+	return data->preempt ||
+	       atomic_read(&bt->ws_active) <= SBQ_WAIT_QUEUES;
+}
+
 unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 {
 	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
@@ -148,12 +155,14 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		tag_offset = tags->nr_reserved_tags;
 	}
 
-	tag = __blk_mq_get_tag(data, bt);
-	if (tag != BLK_MQ_NO_TAG)
-		goto found_tag;
+	if (data->flags & BLK_MQ_REQ_NOWAIT || preempt_tag(data, bt)) {
+		tag = __blk_mq_get_tag(data, bt);
+		if (tag != BLK_MQ_NO_TAG)
+			goto found_tag;
 
-	if (data->flags & BLK_MQ_REQ_NOWAIT)
-		return BLK_MQ_NO_TAG;
+		if (data->flags & BLK_MQ_REQ_NOWAIT)
+			return BLK_MQ_NO_TAG;
+	}
 
 	do {
 		struct sbitmap_queue *bt_prev;
@@ -169,21 +178,26 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		 * Retry tag allocation after running the hardware queue,
 		 * as running the queue may also have found completions.
 		 */
-		tag = __blk_mq_get_tag(data, bt);
-		if (tag != BLK_MQ_NO_TAG)
-			break;
+		if (preempt_tag(data, bt)) {
+			tag = __blk_mq_get_tag(data, bt);
+			if (tag != BLK_MQ_NO_TAG)
+				break;
+		}
 
 		if (!ws)
 			ws = bt_wait_ptr(bt, data->hctx);
 		sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
 
-		tag = __blk_mq_get_tag(data, bt);
-		if (tag != BLK_MQ_NO_TAG)
-			break;
+		if (preempt_tag(data, bt)) {
+			tag = __blk_mq_get_tag(data, bt);
+			if (tag != BLK_MQ_NO_TAG)
+				break;
+		}
 
 		bt_prev = bt;
 		io_schedule();
 
+		data->preempt = true;
 		sbitmap_finish_wait(bt, ws, &wait);
 
 		data->ctx = blk_mq_get_ctx(data->q);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ed3ed86f7dd2..32beacbad5e2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1446,6 +1446,34 @@ static bool blk_mq_check_expired(struct request *rq, void *priv, bool reserved)
 	return true;
 }
 
+static void blk_mq_check_tag_waiters(struct blk_mq_hw_ctx *hctx)
+{
+	bool warn = false;
+	struct blk_mq_tags *tags = hctx->tags;
+
+again:
+	if (atomic_read(&tags->bitmap_tags.ws_active)) {
+		warn = true;
+		sbitmap_queue_wake_all(&tags->bitmap_tags);
+	}
+
+	if (atomic_read(&tags->breserved_tags.ws_active)) {
+		warn = true;
+		sbitmap_queue_wake_all(&tags->breserved_tags);
+	}
+
+	if (hctx->sched_tags && tags != hctx->sched_tags) {
+		tags = hctx->sched_tags;
+		goto again;
+	}
+
+	/*
+	 * This is problematic because someone is still waiting for tag while
+	 * no tag is used.
+	 */
+	WARN_ON_ONCE(warn);
+}
+
 static void blk_mq_timeout_work(struct work_struct *work)
 {
 	struct request_queue *q =
@@ -1482,6 +1510,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
 	 * each hctx as idle.
 	 */
 	queue_for_each_hw_ctx(q, hctx, i) {
+		blk_mq_check_tag_waiters(hctx);
 		/* the hctx may be unmapped, so check it here */
 		if (blk_mq_hw_queue_mapped(hctx))
 			blk_mq_tag_idle(hctx);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 2615bd58bad3..1a85bd1045d8 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -156,6 +156,8 @@ struct blk_mq_alloc_data {
 	/* allocate multiple requests/tags in one go */
 	unsigned int nr_tags;
+	/* true if blk_mq_get_tag() will try to preempt tag */
+	bool preempt;
 	struct request **cached_rq;
 
 	/* input & output parameter */
From patchwork Fri Apr 15 10:10:50 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 5/8] sbitmap: force tag preemption if free tags are sufficient
Date: Fri, 15 Apr 2022 18:10:50 +0800
Message-ID: <20220415101053.554495-6-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Now that tag preemption is disabled under heavy load, if the wakers
don't use up 'wake_batch' tags while preemption is still disabled, io
concurrency will decline. To fix the problem, add a check before waking
up, and force tag preemption if free tags are sufficient, so that the
extra tags can be used by new ios. Tag preemption will be disabled
again once the extra tags are used up.
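The hysteresis intended by the if/else in sbq_update_preemption() below can be
sketched as follows (an illustrative restatement, not part of the patch): with
wake_batch = 8, forcing turns on once more than 16 tags are free and turns off
again once 8 or fewer are free.

static bool update_force_preemption(bool force, unsigned int free_tags,
				    unsigned int wake_batch)
{
	if (force)
		return free_tags > wake_batch;	/* keep forcing until the extra tags are used up */
	return free_tags > (wake_batch << 1);	/* start forcing only with ample free tags */
}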
Signed-off-by: Yu Kuai
---
 block/blk-mq-tag.c      |  3 ++-
 include/linux/sbitmap.h |  2 ++
 lib/sbitmap.c           | 30 ++++++++++++++++++++++++++++++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index a6c5ec846a5e..d02710cf3355 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -131,7 +131,8 @@ static inline bool preempt_tag(struct blk_mq_alloc_data *data,
 			       struct sbitmap_queue *bt)
 {
 	return data->preempt ||
-	       atomic_read(&bt->ws_active) <= SBQ_WAIT_QUEUES;
+	       atomic_read(&bt->ws_active) <= SBQ_WAIT_QUEUES ||
+	       bt->force_tag_preemption;
 }
 
 unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 8a64271d0696..ca00ccb6af48 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -143,6 +143,8 @@ struct sbitmap_queue {
 	 * sbitmap_queue_get_shallow()
 	 */
 	unsigned int min_shallow_depth;
+
+	bool force_tag_preemption;
 };
 
 /**
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 7527527bbc86..315e5619b384 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -434,6 +434,7 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 	sbq->wake_batch = sbq_calc_wake_batch(sbq, depth);
 	atomic_set(&sbq->wake_index, 0);
 	atomic_set(&sbq->ws_active, 0);
+	sbq->force_tag_preemption = true;
 
 	sbq->ws = kzalloc_node(SBQ_WAIT_QUEUES * sizeof(*sbq->ws), flags, node);
 	if (!sbq->ws) {
@@ -604,6 +605,34 @@ static void sbq_update_wake_index(struct sbitmap_queue *sbq,
 		atomic_cmpxchg(&sbq->wake_index, old_wake_index, index);
 }
 
+static inline void sbq_update_preemption(struct sbitmap_queue *sbq,
+					 unsigned int wake_batch)
+{
+	unsigned int free;
+
+	if (wake_batch == 1) {
+		/*
+		 * Waiters will be woken up one by one, no risk of declining
+		 * io concurrency.
+		 */
+		sbq->force_tag_preemption = false;
+		return;
+	}
+
+	free = sbq->sb.depth - sbitmap_weight(&sbq->sb);
+	if (sbq->force_tag_preemption) {
+		if (free <= wake_batch)
+			sbq->force_tag_preemption = false;
+	} else {
+		if (free > wake_batch << 1)
+			sbq->force_tag_preemption = true;
+
+	}
+	sbq->force_tag_preemption =
+		(sbq->sb.depth - sbitmap_weight(&sbq->sb)) >= wake_batch << 1 ?
+		true : false;
+}
+
 static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 {
 	struct sbq_wait_state *ws;
@@ -642,6 +671,7 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	 */
 	smp_mb__before_atomic();
 	atomic_set(&ws->wait_cnt, wake_batch);
+	sbq_update_preemption(sbq, wake_batch);
 	wake_up_nr(&ws->wait, wake_batch);
 
 	return false;
From patchwork Fri Apr 15 10:10:51 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 6/8] blk-mq: force tag preemption for split bios
Date: Fri, 15 Apr 2022 18:10:51 +0800
Message-ID: <20220415101053.554495-7-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

For HDDs, sequential io is much faster than random io, thus it's better
to issue split ios continuously. However, this is broken when tag
preemption is disabled, because wakers can only get one tag at a time.
Thus tag preemption should be enabled for split bios, at least for
HDDs: specifically, the first bio won't preempt a tag, and the
following split bios will.
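For illustration, if __blk_queue_split() has to cut one large bio into three
pieces, the flags added below propagate like this (a paraphrased trace of the
hunk in blk-merge.c):

  1st pass: the split piece gets REQ_NOMERGE|REQ_SPLIT, the remainder is marked REQ_SPLIT
  2nd pass: the remainder already carries REQ_SPLIT, so the next split piece also gets REQ_PREEMPT
  3rd pass: nothing is left to split, and the final remainder gets REQ_PREEMPT as well

so only the very first piece allocates its tag without preemption.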
Signed-off-by: Yu Kuai
---
 block/blk-merge.c         | 8 +++++++-
 block/blk-mq.c            | 1 +
 include/linux/blk_types.h | 4 ++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7771dacc99cb..85c285023f5e 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -343,12 +343,18 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 
 	if (split) {
 		/* there isn't chance to merge the splitted bio */
-		split->bi_opf |= REQ_NOMERGE;
+		split->bi_opf |= (REQ_NOMERGE | REQ_SPLIT);
+		if ((*bio)->bi_opf & REQ_SPLIT)
+			split->bi_opf |= REQ_PREEMPT;
+		else
+			(*bio)->bi_opf |= REQ_SPLIT;
 
 		bio_chain(split, *bio);
 		trace_block_split(split, (*bio)->bi_iter.bi_sector);
 		submit_bio_noacct(*bio);
 		*bio = split;
+	} else if ((*bio)->bi_opf & REQ_SPLIT) {
+		(*bio)->bi_opf |= REQ_PREEMPT;
 	}
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 32beacbad5e2..a889f01d2cdf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2766,6 +2766,7 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
 		.q		= q,
 		.nr_tags	= 1,
 		.cmd_flags	= bio->bi_opf,
+		.preempt	= (bio->bi_opf & REQ_PREEMPT),
 	};
 	struct request *rq;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index c62274466e72..046a34c81ec4 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -418,6 +418,8 @@ enum req_flag_bits {
 	/* for driver use */
 	__REQ_DRV,
 	__REQ_SWAP,		/* swapping request. */
+	__REQ_SPLIT,		/* IO is split. */
+	__REQ_PREEMPT,		/* IO will preempt tag. */
 	__REQ_NR_BITS,		/* stops here */
 };
 
@@ -443,6 +445,8 @@ enum req_flag_bits {
 #define REQ_DRV			(1ULL << __REQ_DRV)
 #define REQ_SWAP		(1ULL << __REQ_SWAP)
+#define REQ_SPLIT		(1ULL << __REQ_SPLIT)
+#define REQ_PREEMPT		(1ULL << __REQ_PREEMPT)
 
 #define REQ_FAILFAST_MASK \
 	(REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)

From patchwork Fri Apr 15 10:10:52 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 7/8] blk-mq: record how many tags are needed for split bio
Date: Fri, 15 Apr 2022 18:10:52 +0800
Message-ID: <20220415101053.554495-8-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently, each time 8 (or wake_batch) requests are done, 8 waiters are
woken up. This is not necessary, because we only need to make sure the
wakers will use up 8 tags. For example, if we know in advance that one
thread needs 8 tags, then waking up a single thread is enough, which
also avoids unnecessary context switches. On the other hand, sequential
io is much faster than random io, thus it's better to issue split ios
continuously.

This patch provides the information of how many tags will be needed for
a huge io; it will be used in the next patch.
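As a worked example of the calculation below (the numbers are illustrative):
a 1MiB write is 2048 sectors; with queue_max_sectors() at 256 sectors
(128KiB), caculate_sectors_split() returns (2048 - 1) / 256 = 7, i.e. 7 more
tags will be needed after the current piece, so wait.nr_tags ends up as
1 + 7 = 8.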
Signed-off-by: Yu Kuai
---
 block/blk-mq-tag.c      |  1 +
 block/blk-mq.c          | 24 +++++++++++++++++++++---
 block/blk-mq.h          |  2 ++
 include/linux/sbitmap.h |  2 ++
 4 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index d02710cf3355..70ce98a5c32b 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -165,6 +165,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 			return BLK_MQ_NO_TAG;
 	}
 
+	wait.nr_tags += data->nr_split;
 	do {
 		struct sbitmap_queue *bt_prev;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a889f01d2cdf..ac614a379a6d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2760,12 +2760,14 @@ static bool blk_mq_attempt_bio_merge(struct request_queue *q,
 
 static struct request *blk_mq_get_new_requests(struct request_queue *q,
 					       struct blk_plug *plug,
 					       struct bio *bio,
-					       unsigned int nsegs)
+					       unsigned int nsegs,
+					       unsigned int nr_split)
 {
 	struct blk_mq_alloc_data data = {
 		.q		= q,
 		.nr_tags	= 1,
 		.cmd_flags	= bio->bi_opf,
+		.nr_split	= nr_split,
 		.preempt	= (bio->bi_opf & REQ_PREEMPT),
 	};
 	struct request *rq;
@@ -2824,6 +2826,19 @@ static inline struct request *blk_mq_get_cached_request(struct request_queue *q,
 	return rq;
 }
 
+static inline unsigned int caculate_sectors_split(struct bio *bio)
+{
+	switch (bio_op(bio)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
+	case REQ_OP_WRITE_ZEROES:
+		return 0;
+	default:
+		return (bio_sectors(bio) - 1) /
+			queue_max_sectors(bio->bi_bdev->bd_queue);
+	}
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2844,11 +2859,14 @@ void blk_mq_submit_bio(struct bio *bio)
 	const int is_sync = op_is_sync(bio->bi_opf);
 	struct request *rq;
 	unsigned int nr_segs = 1;
+	unsigned int nr_split = 0;
 	blk_status_t ret;
 
 	blk_queue_bounce(q, &bio);
-	if (blk_may_split(q, bio))
+	if (blk_may_split(q, bio)) {
+		nr_split = caculate_sectors_split(bio);
 		__blk_queue_split(q, &bio, &nr_segs);
+	}
 
 	if (!bio_integrity_prep(bio))
 		return;
@@ -2857,7 +2875,7 @@ void blk_mq_submit_bio(struct bio *bio)
 	if (!rq) {
 		if (!bio)
 			return;
-		rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
+		rq = blk_mq_get_new_requests(q, plug, bio, nr_segs, nr_split);
 		if (unlikely(!rq))
 			return;
 	}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 1a85bd1045d8..9bad3057c1f3 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -156,6 +156,8 @@ struct blk_mq_alloc_data {
 	/* allocate multiple requests/tags in one go */
 	unsigned int nr_tags;
+	/* number of ios left after this io is handled */
+	unsigned int nr_split;
 	/* true if blk_mq_get_tag() will try to preempt tag */
 	bool preempt;
 	struct request **cached_rq;
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index ca00ccb6af48..1abd8ed5d406 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -596,12 +596,14 @@ void sbitmap_queue_wake_up(struct sbitmap_queue *sbq);
 void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m);
 
 struct sbq_wait {
+	unsigned int nr_tags;
 	struct sbitmap_queue *sbq;	/* if set, sbq_wait is accounted */
 	struct wait_queue_entry wait;
 };
 
 #define DEFINE_SBQ_WAIT(name)						\
 	struct sbq_wait name = {					\
+		.nr_tags = 1,						\
 		.sbq = NULL,						\
 		.wait = {						\
 			.private	= current,			\

From patchwork Fri Apr 15 10:10:53 2022
From: Yu Kuai
Subject: [PATCH -next RFC v3 8/8] sbitmap: wake up the number of threads based on required tags
Date: Fri, 15 Apr 2022 18:10:53 +0800
Message-ID: <20220415101053.554495-9-yukuai3@huawei.com>
In-Reply-To: <20220415101053.554495-1-yukuai3@huawei.com>
References: <20220415101053.554495-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Now that split bios are forced to preempt tags, they are still unlikely
to get tags continuously, because 'wake_batch' threads are woken up
each time while only 'wake_batch' tags are available. Since it is known
in advance how many tags a huge io requires, it's safe to wake up based
on the required tags, because it is certain that the wakers will use up
'wake_batch' tags.
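As a worked example of get_wake_nr() below: with wake_batch = 8 and the first
waiters on the queue needing 4, 2 and 8 tags respectively, the first two
waiters leave 8 - 4 - 2 = 2 tags of the batch, and the third alone wants at
least that many, so waking 3 threads is enough to consume the whole batch
instead of waking 8.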
Signed-off-by: Yu Kuai
---
 lib/sbitmap.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 315e5619b384..5ac5ad1b4b1e 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -633,6 +633,32 @@ static inline void sbq_update_preemption(struct sbitmap_queue *sbq,
 		true : false;
 }
 
+static unsigned int get_wake_nr(struct sbq_wait_state *ws, unsigned int nr_tags)
+{
+	struct sbq_wait *wait;
+	struct wait_queue_entry *entry;
+	unsigned int nr = 1;
+
+	spin_lock_irq(&ws->wait.lock);
+	list_for_each_entry(entry, &ws->wait.head, entry) {
+		wait = container_of(entry, struct sbq_wait, wait);
+		if (nr_tags <= wait->nr_tags) {
+			nr_tags = 0;
+			break;
+		}
+
+		nr++;
+		nr_tags -= wait->nr_tags;
+	}
+	spin_unlock_irq(&ws->wait.lock);
+
+	/*
+	 * If nr_tags is not 0, additional wakeup is triggered to fix the race
+	 * that new threads are waited before wake_up_nr() is called.
+	 */
+	return nr + nr_tags;
+}
+
 static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 {
 	struct sbq_wait_state *ws;
@@ -672,7 +698,7 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	smp_mb__before_atomic();
 	atomic_set(&ws->wait_cnt, wake_batch);
 	sbq_update_preemption(sbq, wake_batch);
-	wake_up_nr(&ws->wait, wake_batch);
+	wake_up_nr(&ws->wait, get_wake_nr(ws, wake_batch));
 
 	return false;
 }