From patchwork Tue Jul 11 13:58:15 2017
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 9834761
From: Hou Tao <houtao1@huawei.com>
To:
CC:
Subject: [PATCH] block, bfq: dispatch request to prevent queue stalling after the request completion
Date: Tue, 11 Jul 2017 21:58:15 +0800
Message-ID: <1499781495-44090-1-git-send-email-houtao1@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

There are mq devices (e.g., virtio-blk, nbd and loopback) which don't
invoke blk_mq_run_hw_queues() after the completion of a request. If bfq
is enabled on such a device, and the slice_idle attribute is set to zero
or the strict_guarantees attribute is set, it is possible that, after a
request completes, the remaining requests of a busy bfq queue will stall
in the bfq scheduler until a new request arrives.

To fix this scheduler latency problem, check whether all issued requests
have completed, and dispatch more requests to the driver if there are no
requests currently in the driver.
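A note on how the kick takes effect: bfq_schedule_dispatch() is expected
to end up calling blk_mq_run_hw_queues(), which is exactly the call the
drivers above do not make after a completion. The following is only a
simplified sketch of that path, assuming bfq_schedule_dispatch() re-runs
the hardware queues when bfq still has queued requests (the bfqd->queued
check and the async flag are assumptions here, not a quote of the
upstream code):

#include <linux/blk-mq.h>

/*
 * Simplified sketch: when requests are still queued in bfq, re-run the
 * blk-mq hardware queues so those requests get dispatched even though
 * the driver never calls blk_mq_run_hw_queues() after a completion.
 */
static void bfq_schedule_dispatch(struct bfq_data *bfqd)
{
        if (bfqd->queued != 0)
                /* run asynchronously, since we are in completion context */
                blk_mq_run_hw_queues(bfqd->queue, true);
}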
Signed-off-by: Hou Tao <houtao1@huawei.com>

The problem can be reproduced by running the following script on a
virtio-blk device with nr_hw_queues set to 1:

#!/bin/sh

dev=vdb
# mount point for dev
mp=/tmp/mnt

cd $mp
job=strict.job
cat > $job <<EOF
[global]
direct=1
bs=4k
size=256M
rw=write
ioengine=libaio
iodepth=128
runtime=5
time_based

[1]
filename=1.data

[2]
new_group
filename=2.data
EOF

echo bfq > /sys/block/$dev/queue/scheduler
echo 1 > /sys/block/$dev/queue/iosched/strict_guarantees
fio $job

Reviewed-by: Paolo Valente
---
 block/bfq-iosched.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 12bbc6b..94cd682 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4293,6 +4293,9 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
 			bfq_bfqq_expire(bfqd, bfqq, false,
 					BFQQE_NO_MORE_REQUESTS);
 	}
+
+	if (!bfqd->rq_in_driver)
+		bfq_schedule_dispatch(bfqd);
 }
 
 static void bfq_put_rq_priv_body(struct bfq_queue *bfqq)