From patchwork Tue Jul 2 08:41:44 2024
X-Patchwork-Submitter: 牛志国 (Zhiguo Niu) <Zhiguo.Niu@unisoc.com>
X-Patchwork-Id: 13719109
From: 牛志国 (Zhiguo Niu) <Zhiguo.Niu@unisoc.com>
To: Bart Van Assche, Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal, 王皓 (Hao_hao Wang)
Subject: Re: [PATCH v2 2/2] block/mq-deadline: Fix the tag reservation code
Date: Tue, 2 Jul 2024 08:41:44 +0000
Message-ID: <91752dcd09fb427bb48dbec05d151c37@BJMBX02.spreadtrum.com>
References: <20240509170149.7639-1-bvanassche@acm.org> <202407020202.46222wCt089266@SHSPAM01.spreadtrum.com>
In-Reply-To: <202407020202.46222wCt089266@SHSPAM01.spreadtrum.com>

-----Original Message-----
From: Bart Van Assche
Sent: May 10, 2024 1:02
To: Jens Axboe
Cc: linux-block@vger.kernel.org; Christoph Hellwig; Bart Van Assche; Damien Le Moal; 牛志国 (Zhiguo Niu)
Subject: [PATCH v2 2/2] block/mq-deadline: Fix the tag reservation code

The current tag reservation code is based on a misunderstanding of the meaning of data->shallow_depth.
Fix the tag reservation code as follows:

* By default, do not reserve any tags for synchronous requests because for
  certain use cases reserving tags reduces performance. See also Harshit
  Mogalapalli, [bug-report] Performance regression with fio sequential-write
  on a multipath setup, 2024-03-07
  (https://lore.kernel.org/linux-block/5ce2ae5d-61e2-4ede-ad55-551112602401@oracle.com/)
* Reduce min_shallow_depth to one because min_shallow_depth must be less than
  or equal to any shallow_depth value.
* Scale dd->async_depth from the range [1, nr_requests] to
  [1, bits_per_sbitmap_word].

Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Zhiguo Niu
Fixes: 07757588e507 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 94eede4fb9eb..acdc28756d9d 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -487,6 +487,20 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	return rq;
 }
 
+/*
+ * 'depth' is a number in the range 1..INT_MAX representing a number of
+ * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since
+ * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow().
+ * Values larger than q->nr_requests have the same effect as q->nr_requests.
+ */
+static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
+{
+	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
+	const unsigned int nrr = hctx->queue->nr_requests;
+
+	return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
+}
+
 /*
  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
  * function is used by __blk_mq_get_tag().
@@ -503,7 +517,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 	 * Throttle asynchronous requests and writes such that these requests
 	 * do not block the allocation of synchronous requests.
 	 */
-	data->shallow_depth = dd->async_depth;
+	data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth);
 }
 
 /* Called by blk_mq_update_nr_requests(). */
@@ -513,9 +527,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
 
-	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
+	dd->async_depth = q->nr_requests;
 
-	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
+	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
 }
 
 /* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */

Hi Bart,
I tested the basic functionality of this patch series and the results pass,
with no warnings reported after setting the async_depth value via sysfs, so

Tested-by: Zhiguo Niu

Thanks!