From patchwork Thu May 19 08:58:10 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12854680
From: Yu Kuai
Subject: [PATCH -next v3 1/2] blk-throttle: fix that io throttle can only
 work for single bio
Date: Thu, 19 May 2022 16:58:10 +0800
Message-ID: <20220519085811.879097-2-yukuai3@huawei.com>
In-Reply-To: <20220519085811.879097-1-yukuai3@huawei.com>
References: <20220519085811.879097-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

commit 9f5ede3c01f9 ("block: throttle split bio in case of iops limit")
introduces a new problem. For example:

[root@localhost ~]# echo "8:0 1024" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
[root@localhost ~]# echo $$ > /sys/fs/cgroup/blkio/cgroup.procs
[root@localhost ~]# dd if=/dev/zero of=/dev/sda bs=10k count=1 oflag=direct &
[1] 620
[root@localhost ~]# dd if=/dev/zero of=/dev/sda bs=10k count=1 oflag=direct &
[2] 626
[root@localhost ~]# 1+0 records in
1+0 records out
10240 bytes (10 kB, 10 KiB) copied, 10.0038 s, 1.0 kB/s
1+0 records in
1+0 records out
10240 bytes (10 kB, 10 KiB) copied, 9.23076 s, 1.1 kB/s

-> the second bio is issued after 10s instead of 20s.

This happens because, if some bios are already queued, the current bio
is queued directly and the flag 'BIO_THROTTLED' is set. Later, when the
former bios are dispatched, this bio is dispatched without waiting at
all, since tg_with_in_bps_limit() returns 0 for flagged bios.

To fix the problem, don't skip flagged bios in tg_with_in_bps_limit();
and for the resulting problem that a split bio can be accounted twice,
compensate for the over-accounting in __blk_throtl_bio().
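
For illustration, a minimal stand-alone sketch of the bps budget
arithmetic behind the numbers above (user-space C, not kernel code;
wait_secs(), BPS_LIMIT and BIO_SIZE are hypothetical names, and the
real tg_with_in_bps_limit() additionally tracks time slices in
jiffies):

/*
 * Rough user-space model of the bps wait calculation (hypothetical
 * helper; simplified to a single running budget, no time slices).
 */
#include <stdbool.h>
#include <stdio.h>

#define BPS_LIMIT	1024ULL		/* bytes/s, from the reproducer */
#define BIO_SIZE	10240ULL	/* one 10 KiB dd write */

/* extra seconds to wait before BIO_SIZE more bytes fit the budget */
static unsigned long long wait_secs(unsigned long long bytes_disp,
				    unsigned long long elapsed_secs,
				    bool skip_flagged)
{
	unsigned long long allowed = BPS_LIMIT * elapsed_secs;

	if (skip_flagged)	/* old behaviour for BIO_THROTTLED bios */
		return 0;
	if (bytes_disp + BIO_SIZE <= allowed)
		return 0;
	/* round up to whole seconds */
	return (bytes_disp + BIO_SIZE - allowed + BPS_LIMIT - 1) / BPS_LIMIT;
}

int main(void)
{
	/* first bio charged at t=0; second bio re-evaluated at t=10s */
	printf("buggy: +%llus\n", wait_secs(BIO_SIZE, 10, true));  /* 0s  */
	printf("fixed: +%llus\n", wait_secs(BIO_SIZE, 10, false)); /* 10s */
	return 0;
}

With the flagged-bio skip in place, the second bio gets a zero wait and
is issued at ~10s; computing the wait normally yields ~10s more, i.e.
~20s total, matching the expectation in the reproducer.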
Fixes: 9f5ede3c01f9 ("block: throttle split bio in case of iops limit")
Signed-off-by: Yu Kuai
Reviewed-by: Ming Lei
---
 block/blk-throttle.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 447e1b8722f7..0c37be08ff28 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -811,7 +811,7 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
 	unsigned int bio_size = throtl_bio_data_size(bio);
 
 	/* no need to throttle if this bio's bytes have been accounted */
-	if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
+	if (bps_limit == U64_MAX) {
 		if (wait)
 			*wait = 0;
 		return true;
@@ -921,11 +921,8 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
 	unsigned int bio_size = throtl_bio_data_size(bio);
 
 	/* Charge the bio to the group */
-	if (!bio_flagged(bio, BIO_THROTTLED)) {
-		tg->bytes_disp[rw] += bio_size;
-		tg->last_bytes_disp[rw] += bio_size;
-	}
-
+	tg->bytes_disp[rw] += bio_size;
+	tg->last_bytes_disp[rw] += bio_size;
 	tg->io_disp[rw]++;
 	tg->last_io_disp[rw]++;
 
@@ -2121,6 +2118,21 @@ bool __blk_throtl_bio(struct bio *bio)
 		tg->last_low_overflow_time[rw] = jiffies;
 		throtl_downgrade_check(tg);
 		throtl_upgrade_check(tg);
+
+		/*
+		 * re-entered bio has accounted bytes already, so try to
+		 * compensate previous over-accounting. However, if new
+		 * slice is started, just forget it.
+		 */
+		if (bio_flagged(bio, BIO_THROTTLED)) {
+			unsigned int bio_size = throtl_bio_data_size(bio);
+
+			if (tg->bytes_disp[rw] >= bio_size)
+				tg->bytes_disp[rw] -= bio_size;
+			if (tg->last_bytes_disp[rw] >= bio_size)
+				tg->last_bytes_disp[rw] -= bio_size;
+		}
+
 		/* throtl is FIFO - if bios are already queued, should queue */
 		if (sq->nr_queued[rw])
 			break;
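
A note on the compensation hunk above: the split bio's bytes were
already charged the first time it passed through __blk_throtl_bio(), so
charging it again on re-entry would double-count; the fix subtracts the
size once instead. The '>=' guards implement the "if new slice is
started, just forget it" caveat from the comment: once a new slice has
reset the counters below bio_size, there is nothing left to compensate.
With this applied, the reproducer above would be expected to finish the
second dd after roughly 20s rather than 10s.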