From patchwork Tue Apr 11 13:43:08 2017
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 9675377
From: Paolo Valente <paolo.valente@linaro.org>
To: Jens Axboe, Tejun Heo
Cc: Fabio Checconi, Arianna Avanzini, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, broonie@kernel.org, Paolo Valente
Subject: [PATCH V3 09/16] block, bfq: reduce latency during request-pool saturation
Date: Tue, 11 Apr 2017 15:43:08 +0200
Message-Id: <20170411134315.44135-10-paolo.valente@linaro.org>
In-Reply-To: <20170411134315.44135-1-paolo.valente@linaro.org>
References: <20170411134315.44135-1-paolo.valente@linaro.org>
This patch introduces a heuristic that reduces latency when the
I/O-request pool is saturated. This goal is achieved by disabling
device idling, for non-weight-raised queues, when there are
weight-raised queues with pending or in-flight requests. In fact, as
explained in more detail in the comment on the function
bfq_bfqq_may_idle(), this reduces the rate at which processes
associated with non-weight-raised queues grab requests from the pool,
thereby increasing the probability that processes associated with
weight-raised queues get a request immediately (or at least soon) when
they need one. Along the same lines, if there are weight-raised
queues, then this patch halves the service rate of async (write)
requests for non-weight-raised queues.

Signed-off-by: Paolo Valente
Signed-off-by: Arianna Avanzini
---
 block/bfq-iosched.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 63 insertions(+), 3 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 574a5f6..deb1f21c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -420,6 +420,8 @@ struct bfq_data {
 	 * queue in service, even if it is idling).
 	 */
 	int busy_queues;
+	/* number of weight-raised busy @bfq_queues */
+	int wr_busy_queues;
 	/* number of queued requests */
 	int queued;
 	/* number of requests dispatched and waiting for completion */
@@ -2490,6 +2492,9 @@ static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 
 	bfqd->busy_queues--;
 
+	if (bfqq->wr_coeff > 1)
+		bfqd->wr_busy_queues--;
+
 	bfqg_stats_update_dequeue(bfqq_group(bfqq));
 
 	bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
@@ -2506,6 +2511,9 @@ static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 
 	bfq_mark_bfqq_busy(bfqq);
 	bfqd->busy_queues++;
+
+	if (bfqq->wr_coeff > 1)
+		bfqd->wr_busy_queues++;
 }
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
@@ -3779,7 +3787,16 @@ static unsigned long bfq_serv_to_charge(struct request *rq,
 	if (bfq_bfqq_sync(bfqq) || bfqq->wr_coeff > 1)
 		return blk_rq_sectors(rq);
 
-	return blk_rq_sectors(rq) * bfq_async_charge_factor;
+	/*
+	 * If there are no weight-raised queues, then amplify service
+	 * by just the async charge factor; otherwise amplify service
+	 * by twice the async charge factor, to further reduce latency
+	 * for weight-raised queues.
+	 */
+	if (bfqq->bfqd->wr_busy_queues == 0)
+		return blk_rq_sectors(rq) * bfq_async_charge_factor;
+
+	return blk_rq_sectors(rq) * 2 * bfq_async_charge_factor;
 }
 
 /**
@@ -4234,6 +4251,7 @@ static void bfq_add_request(struct request *rq)
 		bfqq->wr_coeff = bfqd->bfq_wr_coeff;
 		bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+		bfqd->wr_busy_queues++;
 
 		bfqq->entity.prio_changed = 1;
 	}
 	if (prev != bfqq->next_rq)
@@ -4474,6 +4492,8 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
 /* Must be called with bfqq != NULL */
 static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
 {
+	if (bfq_bfqq_busy(bfqq))
+		bfqq->bfqd->wr_busy_queues--;
 	bfqq->wr_coeff = 1;
 	bfqq->wr_cur_max_time = 0;
 	bfqq->last_wr_start_finish = jiffies;
@@ -5497,7 +5517,8 @@ static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
 static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
 {
 	struct bfq_data *bfqd = bfqq->bfqd;
-	bool idling_boosts_thr, asymmetric_scenario;
+	bool idling_boosts_thr, idling_boosts_thr_without_issues,
+		asymmetric_scenario;
 
 	if (bfqd->strict_guarantees)
 		return true;
@@ -5520,6 +5541,44 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
 	idling_boosts_thr = !bfqd->hw_tag || bfq_bfqq_IO_bound(bfqq);
 
 	/*
+	 * The value of the next variable,
+	 * idling_boosts_thr_without_issues, is equal to that of
+	 * idling_boosts_thr, unless a special case holds. In this
+	 * special case, described below, idling may cause problems to
+	 * weight-raised queues.
+	 *
+	 * When the request pool is saturated (e.g., in the presence
+	 * of write hogs), if the processes associated with
+	 * non-weight-raised queues ask for requests at a lower rate,
+	 * then processes associated with weight-raised queues have a
+	 * higher probability to get a request from the pool
+	 * immediately (or at least soon) when they need one. Thus
+	 * they have a higher probability to actually get a fraction
+	 * of the device throughput proportional to their high
+	 * weight. This is especially true with NCQ-capable drives,
+	 * which enqueue several requests in advance, and further
+	 * reorder internally-queued requests.
+	 *
+	 * For this reason, we force to false the value of
+	 * idling_boosts_thr_without_issues if there are weight-raised
+	 * busy queues. In this case, and if bfqq is not weight-raised,
+	 * this guarantees that the device is not idled for bfqq (if,
+	 * instead, bfqq is weight-raised, then idling will be
+	 * guaranteed by another variable, see below). Combined with
+	 * the timestamping rules of BFQ (see [1] for details), this
+	 * behavior causes bfqq, and hence any sync non-weight-raised
+	 * queue, to get a lower number of requests served, and thus
+	 * to ask for a lower number of requests from the request
+	 * pool, before the busy weight-raised queues get served
+	 * again. This often mitigates starvation problems in the
+	 * presence of heavy write workloads and NCQ, thereby
+	 * guaranteeing a higher application and system responsiveness
+	 * in these hostile scenarios.
+	 */
+	idling_boosts_thr_without_issues = idling_boosts_thr &&
+		bfqd->wr_busy_queues == 0;
+
+	/*
 	 * There is then a case where idling must be performed not for
 	 * throughput concerns, but to preserve service guarantees. To
 	 * introduce it, we can note that allowing the drive to
@@ -5593,7 +5652,7 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
 	 * is necessary to preserve service guarantees.
 	 */
 	return bfq_bfqq_sync(bfqq) &&
-		(idling_boosts_thr || asymmetric_scenario);
+		(idling_boosts_thr_without_issues || asymmetric_scenario);
 }
 
 /*
@@ -6801,6 +6860,7 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 					      * high-definition compressed
 					      * video.
 					      */
+	bfqd->wr_busy_queues = 0;
 
 	/*
 	 * Begin by assuming, optimistically, that the device is a
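
For reference, the two decisions this patch adds can be reproduced in
isolation with a small userspace sketch. This is not the kernel code:
the toy_* structs and functions are stand-ins for the bfq_data/bfq_queue
fields touched above, and TOY_ASYNC_CHARGE_FACTOR is only an assumed
placeholder for bfq_async_charge_factor.

#include <stdbool.h>
#include <stdio.h>

#define TOY_ASYNC_CHARGE_FACTOR 10	/* assumed value, illustration only */

struct toy_queue {
	bool sync;	/* sync (typically read) vs. async (write) queue */
	int wr_coeff;	/* > 1 means the queue is weight-raised */
};

struct toy_sched {
	int wr_busy_queues;	/* stand-in for bfqd->wr_busy_queues */
	bool strict_guarantees;
};

/*
 * Mirrors the new logic in bfq_serv_to_charge(): async service is
 * amplified by the charge factor, and by twice the charge factor
 * while any weight-raised queue is busy.
 */
static unsigned long toy_serv_to_charge(const struct toy_sched *sd,
					const struct toy_queue *q,
					unsigned long sectors)
{
	if (q->sync || q->wr_coeff > 1)
		return sectors;

	if (sd->wr_busy_queues == 0)
		return sectors * TOY_ASYNC_CHARGE_FACTOR;

	return sectors * 2 * TOY_ASYNC_CHARGE_FACTOR;
}

/*
 * Simplified shape of the return path of bfq_bfqq_may_idle():
 * throughput-driven idling is suppressed whenever weight-raised
 * queues are busy, so non-weight-raised queues grab fewer requests
 * from the pool.
 */
static bool toy_may_idle(const struct toy_sched *sd,
			 const struct toy_queue *q,
			 bool idling_boosts_thr, bool asymmetric_scenario)
{
	bool idling_boosts_thr_without_issues =
		idling_boosts_thr && sd->wr_busy_queues == 0;

	if (sd->strict_guarantees)
		return true;

	return q->sync &&
		(idling_boosts_thr_without_issues || asymmetric_scenario);
}

int main(void)
{
	struct toy_sched sd = { .wr_busy_queues = 1 };
	struct toy_queue writer = { .sync = false, .wr_coeff = 1 };

	/* 8 sectors are charged as 8 * 2 * 10 = 160, so the writer's
	 * budget drains twice as fast as it would with no
	 * weight-raised queue busy. */
	printf("charge: %lu sectors\n", toy_serv_to_charge(&sd, &writer, 8));

	/* No idling for the async writer while a weight-raised queue
	 * is busy (prints 0). */
	printf("may idle: %d\n", toy_may_idle(&sd, &writer, true, false));
	return 0;
}

Because BFQ expires a queue once its budget is consumed, charging async
requests twice the usual amplified service while weight-raised queues
are busy is exactly what "halves the service rate of async (write)
requests" in the commit message: each request eats twice as much budget,
so half as many are served per budget.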