From patchwork Mon Dec 9 11:55:20 2024
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 13899422
From: Yu Kuai
To: axboe@kernel.dk, akpm@linux-foundation.org, yang.yang@vivo.com,
	ming.lei@redhat.com, yukuai3@huawei.com, bvanassche@acm.org,
	osandov@fb.com, paolo.valente@linaro.org
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH RFC 1/3] block/mq-deadline: Revert "block/mq-deadline: Fix the tag reservation code"
Date: Mon, 9 Dec 2024 19:55:20 +0800
Message-Id: <20241209115522.3741093-2-yukuai1@huaweicloud.com>
In-Reply-To: <20241209115522.3741093-1-yukuai1@huaweicloud.com>
References: <20241209115522.3741093-1-yukuai1@huaweicloud.com>

From: Yu Kuai

This reverts commit 39823b47bbd40502632ffba90ebb34fff7c8b5e8.

The reverted commit does not actually fix tag reservation, and it
introduces performance problems:

1) Setting min_shallow_depth to 1 ends up setting wake_batch to 1 as
   well, and mq-deadline has no reason to do this. It causes performance
   degradation in some high-concurrency tests, for both IO bandwidth and
   CPU usage.

2) async_depth is set to nr_requests, hence shallow_depth is always
   scaled to 1 << bt->sb.shift. As a consequence, no tags can be
   reserved.

The next patch will fix tag reservation again.

Fixes: 39823b47bbd4 ("block/mq-deadline: Fix the tag reservation code")
Signed-off-by: Yu Kuai
---
 block/mq-deadline.c | 20 +++-----------------
 1 file changed, 3 insertions(+), 17 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 91b3789f710e..1f0d175a941e 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -487,20 +487,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	return rq;
 }
 
-/*
- * 'depth' is a number in the range 1..INT_MAX representing a number of
- * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since
- * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow().
- * Values larger than q->nr_requests have the same effect as q->nr_requests.
- */
-static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
-{
-	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
-	const unsigned int nrr = hctx->queue->nr_requests;
-
-	return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
-}
-
 /*
  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
  * function is used by __blk_mq_get_tag().
@@ -517,7 +503,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 	 * Throttle asynchronous requests and writes such that these requests
 	 * do not block the allocation of synchronous requests.
 	 */
-	data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth);
+	data->shallow_depth = dd->async_depth;
 }
 
 /* Called by blk_mq_update_nr_requests(). */
@@ -527,9 +513,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
 
-	dd->async_depth = q->nr_requests;
+	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
 
-	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
+	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
 
 /* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */
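
To make point 2) concrete: the reverted dd_to_word_depth() scaled a
queue-wide depth (1..nr_requests) into the per-word range expected by
sbitmap_get_shallow(), and with async_depth == nr_requests the result is
always the full word, so nothing is reserved. A minimal user-space sketch
of that arithmetic (the word size and nr_requests values below are
illustrative, not taken from the patch):

#include <stdio.h>

/*
 * Mirrors the arithmetic of the reverted dd_to_word_depth(): scale a
 * queue-wide depth into the per-word range 1..(1 << shift), rounding up.
 */
static unsigned int to_word_depth(unsigned int qdepth, unsigned int shift,
				  unsigned int nrr)
{
	return ((qdepth << shift) + nrr - 1) / nrr;
}

int main(void)
{
	unsigned int shift = 3;		/* 8 bits per word (example) */
	unsigned int nrr = 256;		/* example nr_requests */

	/* async_depth == nr_requests scales to the full word: no reservation */
	printf("%u\n", to_word_depth(256, shift, nrr));	/* 8, i.e. 1 << shift */
	/* 3/4 of the queue depth scales to 6 of the 8 bits in each word */
	printf("%u\n", to_word_depth(192, shift, nrr));	/* 6 */
	return 0;
}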

From patchwork Mon Dec 9 11:55:21 2024
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 13899421
From: Yu Kuai
To: axboe@kernel.dk, akpm@linux-foundation.org, yang.yang@vivo.com,
	ming.lei@redhat.com, yukuai3@huawei.com, bvanassche@acm.org,
	osandov@fb.com, paolo.valente@linaro.org
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH RFC 2/3] lib/sbitmap: don't export sbitmap_get_shallow()
Date: Mon, 9 Dec 2024 19:55:21 +0800
Message-Id: <20241209115522.3741093-3-yukuai1@huaweicloud.com>
In-Reply-To: <20241209115522.3741093-1-yukuai1@huaweicloud.com>
References: <20241209115522.3741093-1-yukuai1@huaweicloud.com>

From: Yu Kuai

sbitmap_get_shallow() is only used inside sbitmap.c, so make it static
and remove the declaration and export.

Signed-off-by: Yu Kuai
---
 block/kyber-iosched.c   |  2 +-
 include/linux/sbitmap.h | 17 -----------------
 lib/sbitmap.c           |  3 +--
 3 files changed, 2 insertions(+), 20 deletions(-)

diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 4155594aefc6..2cb579b288e1 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -159,7 +159,7 @@ struct kyber_queue_data {
 
 	/*
 	 * Async request percentage, converted to per-word depth for
-	 * sbitmap_get_shallow().
+	 * sbitmap_queue_get_shallow().
 	 */
 	unsigned int async_depth;
 
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 189140bf11fc..e1730f5fdf9c 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -209,23 +209,6 @@ void sbitmap_resize(struct sbitmap *sb, unsigned int depth);
  */
 int sbitmap_get(struct sbitmap *sb);
 
-/**
- * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
- * limiting the depth used from each word.
- * @sb: Bitmap to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from a single word.
- *
- * This rather specific operation allows for having multiple users with
- * different allocation limits. E.g., there can be a high-priority class that
- * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
- * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
- * class can only allocate half of the total bits in the bitmap, preventing it
- * from starving out the high-priority class.
- *
- * Return: Non-negative allocated bit number if successful, -1 otherwise.
- */
-int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth);
-
 /**
  * sbitmap_any_bit_set() - Check for a set bit in a &struct sbitmap.
  * @sb: Bitmap to check.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index d3412984170c..9d4213ce7916 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -287,7 +287,7 @@ static int __sbitmap_get_shallow(struct sbitmap *sb,
 	return sbitmap_find_bit(sb, shallow_depth, index, alloc_hint, true);
 }
 
-int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
+static int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
 {
 	int nr;
 	unsigned int hint, depth;
@@ -302,7 +302,6 @@ int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
 
 	return nr;
 }
-EXPORT_SYMBOL_GPL(sbitmap_get_shallow);
 
 bool sbitmap_any_bit_set(const struct sbitmap *sb)
 {
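
The kernel-doc removed above describes what shallow allocation is for: a
low-priority class limited to part of each word cannot starve a
high-priority class that may use whole words. A toy user-space model of
that scheme (all names and sizes below are illustrative, not the kernel
API):

#include <stdio.h>

#define WORDS		4
#define BITS_PER_WORD	8

static unsigned char map[WORDS];	/* one byte models one 8-bit word */

/* Take one free bit, using at most 'shallow' low bits of each word. */
static int get_shallow(unsigned int shallow)
{
	for (unsigned int w = 0; w < WORDS; w++)
		for (unsigned int b = 0; b < shallow; b++)
			if (!(map[w] & (1u << b))) {
				map[w] |= 1u << b;
				return (int)(w * BITS_PER_WORD + b);
			}
	return -1;
}

int main(void)
{
	int n = 0;

	/* low-priority class: half of each word, at most 16 of 32 bits */
	while (get_shallow(BITS_PER_WORD / 2) != -1)
		n++;
	printf("low-prio got %d bits\n", n);	/* 16 */

	/* high-priority class: whole words, the other 16 bits remain free */
	for (n = 0; get_shallow(BITS_PER_WORD) != -1;)
		n++;
	printf("high-prio got %d bits\n", n);	/* 16 */
	return 0;
}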

From patchwork Mon Dec 9 11:55:22 2024
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 13899420
From: Yu Kuai
To: axboe@kernel.dk, akpm@linux-foundation.org, yang.yang@vivo.com,
	ming.lei@redhat.com, yukuai3@huawei.com, bvanassche@acm.org,
	osandov@fb.com, paolo.valente@linaro.org
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH RFC 3/3] lib/sbitmap: fix shallow_depth tag allocation
Date: Mon, 9 Dec 2024 19:55:22 +0800
Message-Id: <20241209115522.3741093-4-yukuai1@huaweicloud.com>
In-Reply-To: <20241209115522.3741093-1-yukuai1@huaweicloud.com>
References: <20241209115522.3741093-1-yukuai1@huaweicloud.com>

From: Yu Kuai

Currently, shallow_depth is used by bfq, kyber and mq-deadline: they all
pass in a value for the whole sbitmap, while sbitmap treats the value as
a limit for each word. This means shallow_depth never works as expected,
and there are no functional tests covering it.

Since callers don't know which word will be used, nor how many bits are
available in the last word, fix this problem by treating shallow_depth
as a limit for the whole sbitmap in sbitmap_find_bit().

Fixes: 00e043936e9a ("blk-mq: introduce Kyber multiqueue I/O scheduler")
Fixes: a52a69ea89dc ("block, bfq: limit tags for writes and async I/O")
Fixes: 07757588e507 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Yu Kuai
---
 include/linux/sbitmap.h |  2 +-
 lib/sbitmap.c           | 31 +++++++++++++++++++++++++------
 2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index e1730f5fdf9c..ffb9907c7070 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -461,7 +461,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
  * sbitmap_queue, limiting the depth used from each word, with preemption
  * already disabled.
  * @sbq: Bitmap queue to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from a single word.
+ * @shallow_depth: The maximum number of bits to allocate from the queue.
  * See sbitmap_get_shallow().
 *
 * If you call this, make sure to call sbitmap_queue_min_shallow_depth() after
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 9d4213ce7916..13831c7536a3 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -208,8 +208,27 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
 	return nr;
 }
 
+static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
+					     int index,
+					     unsigned int shallow_depth)
+{
+	unsigned int pre_word_bits = 0;
+
+	if (shallow_depth >= sb->depth)
+		return __map_depth(sb, index);
+
+	if (index > 0)
+		pre_word_bits += (index - 1) << sb->shift;
+
+	if (shallow_depth <= pre_word_bits)
+		return 0;
+
+	return min_t(unsigned int, __map_depth(sb, index),
+		     shallow_depth - pre_word_bits);
+}
+
 static int sbitmap_find_bit(struct sbitmap *sb,
-			    unsigned int depth,
+			    unsigned int shallow_depth,
 			    unsigned int index,
 			    unsigned int alloc_hint,
 			    bool wrap)
@@ -218,12 +237,12 @@ static int sbitmap_find_bit(struct sbitmap *sb,
 	int nr = -1;
 
 	for (i = 0; i < sb->map_nr; i++) {
-		nr = sbitmap_find_bit_in_word(&sb->map[index],
-					      min_t(unsigned int,
-						    __map_depth(sb, index),
-						    depth),
-					      alloc_hint, wrap);
+		unsigned int depth = __map_depth_with_shallow(sb, index,
+							      shallow_depth);
+		if (depth)
+			nr = sbitmap_find_bit_in_word(&sb->map[index], depth,
+						      alloc_hint, wrap);
 		if (nr != -1) {
 			nr += index << sb->shift;
 			break;
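
The new __map_depth_with_shallow() turns the queue-wide shallow_depth into
a per-word budget: words that lie entirely below the limit keep their full
depth, the word straddling the limit gets the remainder, and later words
get zero, so the budgets sum to shallow_depth. A user-space model of that
accounting follows (the geometry is illustrative; note the model counts
the bits of all preceding words as index << shift, whereas the hunk above
uses (index - 1) << sb->shift, which looks like it undercounts by one word
when index > 0):

#include <stdio.h>

#define SHIFT	3			/* 8 bits per word (example) */
#define MAP_NR	4			/* four full words (example) */
#define DEPTH	(MAP_NR << SHIFT)	/* 32 bits in total */

static unsigned int map_depth_with_shallow(unsigned int index,
					   unsigned int shallow_depth)
{
	unsigned int word_depth = 1u << SHIFT;
	unsigned int pre_word_bits = index << SHIFT;	/* bits before this word */

	if (shallow_depth >= DEPTH)
		return word_depth;
	if (shallow_depth <= pre_word_bits)
		return 0;
	return shallow_depth - pre_word_bits < word_depth ?
	       shallow_depth - pre_word_bits : word_depth;
}

int main(void)
{
	unsigned int shallow_depth = 10, total = 0;

	for (unsigned int i = 0; i < MAP_NR; i++) {
		unsigned int d = map_depth_with_shallow(i, shallow_depth);

		printf("word %u: budget %u\n", i, d);	/* 8, 2, 0, 0 */
		total += d;
	}
	printf("total %u\n", total);	/* 10 == shallow_depth */
	return 0;
}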