From patchwork Thu Apr 17 01:50:31 2025
X-Patchwork-Submitter: Zizhi Wo
X-Patchwork-Id: 14054791
From: Zizhi Wo
Subject: [PATCH 1/3] blk-throttle: Fix wrong tg->[bytes/io]_disp update in __tg_update_carryover()
Date: Thu, 17 Apr 2025 09:50:31 +0800
Message-ID: <20250417015033.512940-2-wozizhi@huawei.com>
In-Reply-To: <20250417015033.512940-1-wozizhi@huawei.com>
References: <20250417015033.512940-1-wozizhi@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

In commit 6cc477c36875 ("blk-throttle: carry over directly"), the carryover
bytes/ios were carried directly into [bytes/io]_disp. However, the update
mechanism has some issues.

In __tg_update_carryover(), we calculate "bytes" and "ios" to represent the
carryover, but the computation used when updating [bytes/io]_disp is
incorrect. This patch fixes the issue.

Also, if the bps/iops limit was previously set to max and is later changed
to a smaller value, we may fail to reset tg->[bytes/io]_disp to 0 in
tg_update_carryover(). Relying solely on throtl_trim_slice() is not
sufficient, which can lead to subsequent bio dispatches not behaving as
expected. We should set tg->[bytes/io]_disp to 0 in the non-carryover case.
The same handling applies when nr_queued is 0.

Fixes: 6cc477c36875 ("blk-throttle: carry over directly")
Signed-off-by: Zizhi Wo
Reviewed-by: Yu Kuai
---
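[Illustration only, not part of the patch: a minimal user-space sketch of
the carryover arithmetic that the corrected assignment implements. The
numbers and variables below are made-up stand-ins for the throtl_grp state,
and the helper mirrors calculate_bytes_allowed() only in spirit.]

/* carryover_sketch.c - build with: cc -o carryover_sketch carryover_sketch.c */
#include <stdio.h>
#include <stdint.h>

#define HZ 1000ULL

/* simplified stand-in for calculate_bytes_allowed() */
static uint64_t bytes_allowed(uint64_t bps_limit, unsigned long jiffy_elapsed)
{
	return bps_limit * jiffy_elapsed / HZ;
}

int main(void)
{
	uint64_t bps_limit = 1024 * 1024;	/* old limit: 1 MiB/s */
	unsigned long elapsed = 2 * HZ;		/* 2 s since slice start */
	int64_t bytes_disp = 512 * 1024;	/* 512 KiB already dispatched */

	/* budget earned by waiting under the old config but not yet used */
	int64_t carryover = bytes_allowed(bps_limit, elapsed) - bytes_disp;

	/*
	 * Store the carryover as a negative dispatch count, as the fixed
	 * "tg->bytes_disp[rw] = -*bytes" does, so the next limit check
	 * credits the time already spent waiting.
	 */
	bytes_disp = -carryover;

	printf("carryover=%lld, new bytes_disp=%lld\n",
	       (long long)carryover, (long long)bytes_disp);
	return 0;
}

With a 1 MiB/s limit, 2 s elapsed and 512 KiB dispatched, the group has
earned 1.5 MiB it never used; storing bytes_disp = -1.5 MiB lets the new
configuration honour that wait instead of discarding it.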
 block/blk-throttle.c | 33 +++++++++++++++++++++++++--------
 1 file changed, 25 insertions(+), 8 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 91dab43c65ab..df9825eb83be 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -644,20 +644,39 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
 	u64 bps_limit = tg_bps_limit(tg, rw);
 	u32 iops_limit = tg_iops_limit(tg, rw);
 
+	/*
+	 * If the queue is empty, carryover handling is not needed. In such cases,
+	 * tg->[bytes/io]_disp should be reset to 0 to avoid impacting the dispatch
+	 * of subsequent bios. The same handling applies when the previous BPS/IOPS
+	 * limit was set to max.
+	 */
+	if (tg->service_queue.nr_queued[rw] == 0) {
+		tg->bytes_disp[rw] = 0;
+		tg->io_disp[rw] = 0;
+		return;
+	}
+
 	/*
 	 * If config is updated while bios are still throttled, calculate and
 	 * accumulate how many bytes/ios are waited across changes. And
 	 * carryover_bytes/ios will be used to calculate new wait time under new
 	 * configuration.
 	 */
-	if (bps_limit != U64_MAX)
+	if (bps_limit != U64_MAX) {
 		*bytes = calculate_bytes_allowed(bps_limit, jiffy_elapsed) -
 			tg->bytes_disp[rw];
-	if (iops_limit != UINT_MAX)
+		tg->bytes_disp[rw] = -*bytes;
+	} else {
+		tg->bytes_disp[rw] = 0;
+	}
+
+	if (iops_limit != UINT_MAX) {
 		*ios = calculate_io_allowed(iops_limit, jiffy_elapsed) -
 			tg->io_disp[rw];
-	tg->bytes_disp[rw] -= *bytes;
-	tg->io_disp[rw] -= *ios;
+		tg->io_disp[rw] = -*ios;
+	} else {
+		tg->io_disp[rw] = 0;
+	}
 }
 
 static void tg_update_carryover(struct throtl_grp *tg)
@@ -665,10 +684,8 @@ static void tg_update_carryover(struct throtl_grp *tg)
 	long long bytes[2] = {0};
 	int ios[2] = {0};
 
-	if (tg->service_queue.nr_queued[READ])
-		__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
-	if (tg->service_queue.nr_queued[WRITE])
-		__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
+	__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
+	__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
 
 	/* see comments in struct throtl_grp for meaning of these fields. */
 	throtl_log(&tg->service_queue, "%s: %lld %lld %d %d\n", __func__,

From patchwork Thu Apr 17 01:50:32 2025
X-Patchwork-Submitter: Zizhi Wo
X-Patchwork-Id: 14054793
From: Zizhi Wo
Subject: [PATCH 2/3] blk-throttle: Delete unnecessary carryover-related fields from throtl_grp
Date: Thu, 17 Apr 2025 09:50:32 +0800
Message-ID: <20250417015033.512940-3-wozizhi@huawei.com>
In-Reply-To: <20250417015033.512940-1-wozizhi@huawei.com>
References: <20250417015033.512940-1-wozizhi@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

We no longer need carryover_[bytes/ios] in throtl_grp, so remove them. The
related comments about carryover are merged into the comment for
[bytes/io]_disp, and other related comments are updated as well.
Signed-off-by: Zizhi Wo
Reviewed-by: Ming Lei
Reviewed-by: Yu Kuai
---
 block/blk-throttle.c |  8 ++++----
 block/blk-throttle.h | 19 ++++++++-----------
 2 files changed, 12 insertions(+), 15 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index df9825eb83be..4e4609dce85d 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -658,9 +658,9 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
 
 	/*
 	 * If config is updated while bios are still throttled, calculate and
-	 * accumulate how many bytes/ios are waited across changes. And
-	 * carryover_bytes/ios will be used to calculate new wait time under new
-	 * configuration.
+	 * accumulate how many bytes/ios are waited across changes. And use the
+	 * calculated carryover (@bytes/@ios) to update [bytes/io]_disp, which
+	 * will be used to calculate new wait time under new configuration.
 	 */
 	if (bps_limit != U64_MAX) {
 		*bytes = calculate_bytes_allowed(bps_limit, jiffy_elapsed) -
@@ -687,7 +687,7 @@ static void tg_update_carryover(struct throtl_grp *tg)
 	__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
 	__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
 
-	/* see comments in struct throtl_grp for meaning of these fields. */
+	/* see comments in struct throtl_grp for meaning of carryover. */
 	throtl_log(&tg->service_queue, "%s: %lld %lld %d %d\n", __func__,
 		   bytes[READ], bytes[WRITE], ios[READ], ios[WRITE]);
 }
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index 7964cc041e06..8bd16535302c 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -101,19 +101,16 @@ struct throtl_grp {
 	/* IOPS limits */
 	unsigned int iops[2];
 
-	/* Number of bytes dispatched in current slice */
-	int64_t bytes_disp[2];
-	/* Number of bio's dispatched in current slice */
-	int io_disp[2];
-
 	/*
-	 * The following two fields are updated when new configuration is
-	 * submitted while some bios are still throttled, they record how many
-	 * bytes/ios are waited already in previous configuration, and they will
-	 * be used to calculate wait time under new configuration.
+	 * Number of bytes/bio's dispatched in current slice.
+	 * When new configuration is submitted while some bios are still throttled,
+	 * first calculate the carryover: the amount of bytes/IOs already waited
+	 * under the previous configuration. Then, [bytes/io]_disp are represented
+	 * as the negative of the carryover, and they will be used to calculate the
+	 * wait time under the new configuration.
 	 */
-	long long carryover_bytes[2];
-	int carryover_ios[2];
+	int64_t bytes_disp[2];
+	int io_disp[2];
 
 	unsigned long last_check_time;

From patchwork Thu Apr 17 01:50:33 2025
X-Patchwork-Submitter: Zizhi Wo
X-Patchwork-Id: 14054794
From: Zizhi Wo
Subject: [PATCH 3/3] blk-throttle: Add an additional overflow check to the call calculate_bytes/io_allowed
Date: Thu, 17 Apr 2025 09:50:33 +0800
Message-ID: <20250417015033.512940-4-wozizhi@huawei.com>
In-Reply-To: <20250417015033.512940-1-wozizhi@huawei.com>
References: <20250417015033.512940-1-wozizhi@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Now the tg->[bytes/io]_disp type is signed, while the return type of
calculate_bytes/io_allowed() is unsigned. Even if the bps/iops limit is not
set to max, the return value of these functions may still exceed INT_MAX or
LLONG_MAX, which can overflow the signed variables it is assigned to. Add
additional checks for these cases.

Also, in throtl_trim_slice(), if the BPS/IOPS limit is set to max, there is
no need to call calculate_bytes/io_allowed() at all. Introduce the helper
functions throtl_trim_bps/iops to simplify the process.

For cases where the calculated trim value exceeds INT_MAX (causing an
overflow), reset tg->[bytes/io]_disp to zero and return the original
tg->[bytes/io]_disp, since that is the amount actually trimmed.

Signed-off-by: Zizhi Wo
---
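[Illustration only, not part of the patch: a minimal user-space sketch of
the overflow these checks guard against, using a hypothetical limit chosen
only so that the result exceeds LLONG_MAX. The 128-bit intermediate (a
GCC/Clang extension) mimics mul_u64_u64_div_u64(); this is not the kernel
implementation.]

/* overflow_sketch.c - build with: cc -o overflow_sketch overflow_sketch.c */
#include <stdio.h>
#include <stdint.h>

#define HZ 1000ULL

/* simplified stand-in for calculate_bytes_allowed() */
static uint64_t calc_bytes_allowed(uint64_t bps_limit, unsigned long jiffy_elapsed)
{
	return (uint64_t)((unsigned __int128)bps_limit * jiffy_elapsed / HZ);
}

int main(void)
{
	uint64_t bps_limit = 10000000000000000ULL; /* 10^16 B/s, huge but not U64_MAX */
	unsigned long elapsed = 1000 * HZ;         /* 1000 s since slice start */

	uint64_t allowed_u = calc_bytes_allowed(bps_limit, elapsed);
	/* what a signed bytes_trim/bytes_allowed variable ends up holding */
	long long allowed_s = (long long)allowed_u;

	printf("u64 result: %llu\n", (unsigned long long)allowed_u);
	printf("as signed : %lld -> %s\n", allowed_s,
	       allowed_s < 0 ? "treated as overflow: reset disp, skip trim/carryover"
			     : "normal path");
	return 0;
}

Here the allowance is 10^19 bytes: still a valid u64, but negative once it
lands in a long long, which is the case the new "<= 0" / "< 0" checks in
throtl_trim_bps/iops and __tg_update_carryover() handle.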
 block/blk-throttle.c | 94 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 73 insertions(+), 21 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 4e4609dce85d..cfb9c47d1af7 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -571,6 +571,56 @@ static u64 calculate_bytes_allowed(u64 bps_limit, unsigned long jiffy_elapsed)
 	return mul_u64_u64_div_u64(bps_limit, (u64)jiffy_elapsed, (u64)HZ);
 }
 
+static long long throtl_trim_bps(struct throtl_grp *tg, bool rw,
+				 unsigned long time_elapsed)
+{
+	u64 bps_limit = tg_bps_limit(tg, rw);
+	long long bytes_trim;
+
+	if (bps_limit == U64_MAX)
+		return 0;
+
+	/* Need to consider the case of bytes_allowed overflow. */
+	bytes_trim = calculate_bytes_allowed(bps_limit, time_elapsed);
+	if (bytes_trim <= 0) {
+		bytes_trim = tg->bytes_disp[rw];
+		tg->bytes_disp[rw] = 0;
+		goto out;
+	}
+
+	if (tg->bytes_disp[rw] >= bytes_trim)
+		tg->bytes_disp[rw] -= bytes_trim;
+	else
+		tg->bytes_disp[rw] = 0;
+out:
+	return bytes_trim;
+}
+
+static int throtl_trim_iops(struct throtl_grp *tg, bool rw,
+			    unsigned long time_elapsed)
+{
+	u32 iops_limit = tg_iops_limit(tg, rw);
+	int io_trim;
+
+	if (iops_limit == UINT_MAX)
+		return 0;
+
+	/* Need to consider the case of io_allowed overflow. */
+	io_trim = calculate_io_allowed(iops_limit, time_elapsed);
+	if (io_trim <= 0) {
+		io_trim = tg->io_disp[rw];
+		tg->io_disp[rw] = 0;
+		goto out;
+	}
+
+	if (tg->io_disp[rw] >= io_trim)
+		tg->io_disp[rw] -= io_trim;
+	else
+		tg->io_disp[rw] = 0;
+out:
+	return io_trim;
+}
+
 /* Trim the used slices and adjust slice start accordingly */
 static inline void throtl_trim_slice(struct throtl_grp *tg, bool rw)
 {
@@ -612,22 +662,11 @@ static inline void throtl_trim_slice(struct throtl_grp *tg, bool rw)
 	 * one extra slice is preserved for deviation.
 	 */
 	time_elapsed -= tg->td->throtl_slice;
-	bytes_trim = calculate_bytes_allowed(tg_bps_limit(tg, rw),
-					     time_elapsed);
-	io_trim = calculate_io_allowed(tg_iops_limit(tg, rw), time_elapsed);
-	if (bytes_trim <= 0 && io_trim <= 0)
+	bytes_trim = throtl_trim_bps(tg, rw, time_elapsed);
+	io_trim = throtl_trim_iops(tg, rw, time_elapsed);
+	if (!bytes_trim && !io_trim)
 		return;
-	if ((long long)tg->bytes_disp[rw] >= bytes_trim)
-		tg->bytes_disp[rw] -= bytes_trim;
-	else
-		tg->bytes_disp[rw] = 0;
-
-	if ((int)tg->io_disp[rw] >= io_trim)
-		tg->io_disp[rw] -= io_trim;
-	else
-		tg->io_disp[rw] = 0;
-
 	tg->slice_start[rw] += time_elapsed;
 
 	throtl_log(&tg->service_queue,
@@ -643,6 +682,8 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
 	unsigned long jiffy_elapsed = jiffies - tg->slice_start[rw];
 	u64 bps_limit = tg_bps_limit(tg, rw);
 	u32 iops_limit = tg_iops_limit(tg, rw);
+	long long bytes_allowed;
+	int io_allowed;
 
 	/*
 	 * If the queue is empty, carryover handling is not needed. In such cases,
@@ -661,19 +702,28 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
 	 * accumulate how many bytes/ios are waited across changes. And use the
 	 * calculated carryover (@bytes/@ios) to update [bytes/io]_disp, which
 	 * will be used to calculate new wait time under new configuration.
+	 * And we need to consider the case of bytes/io_allowed overflow.
 	 */
 	if (bps_limit != U64_MAX) {
-		*bytes = calculate_bytes_allowed(bps_limit, jiffy_elapsed) -
-			tg->bytes_disp[rw];
-		tg->bytes_disp[rw] = -*bytes;
+		bytes_allowed = calculate_bytes_allowed(bps_limit, jiffy_elapsed);
+		if (bytes_allowed < 0) {
+			tg->bytes_disp[rw] = 0;
+		} else {
+			*bytes = bytes_allowed - tg->bytes_disp[rw];
+			tg->bytes_disp[rw] = -*bytes;
+		}
 	} else {
 		tg->bytes_disp[rw] = 0;
 	}
 
 	if (iops_limit != UINT_MAX) {
-		*ios = calculate_io_allowed(iops_limit, jiffy_elapsed) -
-			tg->io_disp[rw];
-		tg->io_disp[rw] = -*ios;
+		io_allowed = calculate_io_allowed(iops_limit, jiffy_elapsed);
+		if (io_allowed < 0) {
+			tg->io_disp[rw] = 0;
+		} else {
+			*ios = io_allowed - tg->io_disp[rw];
+			tg->io_disp[rw] = -*ios;
+		}
 	} else {
 		tg->io_disp[rw] = 0;
 	}
@@ -741,7 +791,9 @@ static unsigned long tg_within_bps_limit(struct throtl_grp *tg, struct bio *bio,
 	jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice);
 	bytes_allowed = calculate_bytes_allowed(bps_limit, jiffy_elapsed_rnd);
-	if (bytes_allowed > 0 && tg->bytes_disp[rw] + bio_size <= bytes_allowed)
+	/* Need to consider the case of bytes_allowed overflow. */
+	if ((bytes_allowed > 0 && tg->bytes_disp[rw] + bio_size <= bytes_allowed)
+	    || bytes_allowed < 0)
 		return 0;
 
 	/* Calc approx time to dispatch */