From patchwork Wed May 11 00:16:40 2016
X-Patchwork-Submitter: Shaohua Li
X-Patchwork-Id: 9064201
From: Shaohua Li
Subject: [PATCH 10/10] blk-throttle: add trace log
Date: Tue, 10 May 2016 17:16:40 -0700
X-Mailer: git-send-email 2.8.0.rc2
X-Mailing-List: linux-block@vger.kernel.org

Add trace logging to the new low/high limit logic to help diagnose issues.
Signed-off-by: Shaohua Li
---
 block/blk-throttle.c | 43 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index a5f3435..d35bbf1 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2,6 +2,7 @@
  * Interface for controlling IO bandwidth on a request queue
  *
  * Copyright (C) 2010 Vivek Goyal
+ * Shaohua Li adds low/high limit
  */
 #include
@@ -92,6 +93,12 @@ enum {
 	LIMIT_CNT = 3,
 };
 
+static char *limit_name[LIMIT_CNT] = {
+	[LIMIT_LOW] = "low",
+	[LIMIT_HIGH] = "high",
+	[LIMIT_MAX] = "max",
+};
+
 struct throtl_grp {
 	/* must be the first member */
 	struct blkg_policy_data pd;
@@ -1565,6 +1572,8 @@ static ssize_t tg_set_limit(struct kernfs_open_file *of,
 	if (index == LIMIT_MAX && tg->td->limit_index == LIMIT_MAX &&
 	    tg->td->limit_valid[LIMIT_HIGH])
 		tg->td->limit_index = LIMIT_HIGH;
+	throtl_log(&tg->td->service_queue, "switch state to %s",
+		limit_name[tg->td->limit_index]);
 	tg_conf_updated(tg);
 	ret = 0;
 out_finish:
@@ -1710,6 +1719,9 @@ static void throtl_calculate_low_interval(struct throtl_data *td)
 		td->low_upgrade_interval = cg_check_time;
 		td->low_downgrade_interval = dbits / ubits * cg_check_time;
 	}
+	throtl_log(&td->service_queue,
+		"low upgrade interval=%u, downgrade interval=%u",
+		td->low_upgrade_interval, td->low_downgrade_interval);
 }
 
 static void throtl_calculate_high_interval(struct throtl_data *td)
@@ -1729,6 +1741,9 @@ static void throtl_calculate_high_interval(struct throtl_data *td)
 		td->high_upgrade_interval = cg_check_time;
 		td->high_downgrade_interval = dbits / ubits * cg_check_time;
 	}
+	throtl_log(&td->service_queue,
+		"high upgrade interval=%u, downgrade interval=%u",
+		td->high_upgrade_interval, td->high_downgrade_interval);
 }
 
 static bool throtl_upgrade_check_one(struct throtl_grp *tg, bool *idle)
@@ -1745,6 +1760,8 @@ static bool throtl_upgrade_check_one(struct throtl_grp *tg, bool *idle)
 	/* if cgroup is below low limit for a long time, consider it idle */
 	if (time_after(jiffies, tg_last_low_overflow_time(tg) +
 			tg->td->low_upgrade_interval)) {
+		throtl_log(&tg->service_queue, "idle upgrade, hit low time=%lu jiffies=%lu",
+			tg_last_low_overflow_time(tg), jiffies);
 		*idle = true;
 		return true;
 	}
@@ -1757,18 +1774,23 @@ static bool throtl_upgrade_check_one(struct throtl_grp *tg, bool *idle)
 		return false;
 	if (tg->iops[WRITE][LIMIT_LOW] != 0 && !sq->nr_queued[WRITE])
 		return false;
+	throtl_log(&tg->service_queue, "reach low limit upgrade");
 	return true;
 check_high:
 	/* if cgroup is below high limit for a long time, consider it idle */
 	if (time_after(jiffies, tg_last_high_overflow_time(tg) +
 			tg->td->high_upgrade_interval)) {
+		throtl_log(&tg->service_queue, "idle upgrade, hit high time=%lu jiffies=%lu",
+			tg_last_high_overflow_time(tg), jiffies);
 		*idle = true;
 		return true;
 	}
 
 	/* if cgroup reaches high/max limit, it's ok to next limit */
-	if (sq->nr_queued[READ] || sq->nr_queued[WRITE])
+	if (sq->nr_queued[READ] || sq->nr_queued[WRITE]) {
+		throtl_log(&tg->service_queue, "reach high/max limit upgrade");
 		return true;
+	}
 	return false;
 }
@@ -1838,6 +1860,8 @@ static void throtl_upgrade_state(struct throtl_data *td)
 	td->limit_index++;
 	while (!td->limit_valid[td->limit_index])
 		td->limit_index++;
+	throtl_log(&td->service_queue, "upgrade state to %s",
+		limit_name[td->limit_index]);
 	td->low_upgrade_time = jiffies;
 	if (td->limit_index == LIMIT_HIGH)
 		td->high_downgrade_time = jiffies;
@@ -1886,6 +1910,8 @@ static void throtl_downgrade_state(struct throtl_data *td, int new)
 	int old = td->limit_index;
 
 	td->limit_index = new;
+	throtl_log(&td->service_queue, "downgrade state to %s",
+		limit_name[td->limit_index]);
 	/* max crosses high to low */
 	if (new == LIMIT_LOW && old == LIMIT_MAX && td->limit_valid[LIMIT_HIGH]) {
 		td->low_downgrade_time = jiffies;
@@ -1926,8 +1952,17 @@ static bool throtl_downgrade_check_one(struct throtl_grp *tg, bool check_low)
 	    time_after(now, td->high_upgrade_time +
 			td->high_downgrade_interval) &&
 	    time_after(now,
-		tg_last_high_overflow_time(tg) + td->high_downgrade_interval)))
+		tg_last_high_overflow_time(tg) + td->high_downgrade_interval))) {
+		throtl_log(&tg->service_queue,
+			"%s idle downgrade, last hit limit time=%lu upgrade time=%lu jiffies=%lu",
+			check_low ? "low" : "high",
+			check_low ? tg_last_low_overflow_time(tg) :
+				tg_last_high_overflow_time(tg),
+			check_low ? td->low_upgrade_time :
+				td->high_upgrade_time,
+			jiffies);
 		return true;
+	}
 	return false;
 }
@@ -1986,6 +2021,7 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
 
 	bps = tg->last_bytes_disp[READ] * HZ;
 	do_div(bps, elapsed_time);
+	throtl_log(&tg->service_queue, "rbps=%llu", bps);
 	if (tg->bps[READ][LIMIT_LOW] != 0 &&
 	    bps >= tg->bps[READ][LIMIT_LOW])
 		tg->last_low_overflow_time[READ] = now;
@@ -1996,6 +2032,7 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
 
 	bps = tg->last_bytes_disp[WRITE] * HZ;
 	do_div(bps, elapsed_time);
+	throtl_log(&tg->service_queue, "wbps=%llu", bps);
 	if (tg->bps[WRITE][LIMIT_LOW] != 0 &&
 	    bps >= tg->bps[WRITE][LIMIT_LOW])
 		tg->last_low_overflow_time[WRITE] = now;
@@ -2005,6 +2042,7 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
 		tg->last_high_overflow_time[WRITE] = now;
 
 	iops = tg->last_io_disp[READ] * HZ / elapsed_time;
+	throtl_log(&tg->service_queue, "riops=%u", iops);
 	if (tg->iops[READ][LIMIT_LOW] != 0 &&
 	    iops >= tg->iops[READ][LIMIT_LOW])
 		tg->last_low_overflow_time[READ] = now;
@@ -2014,6 +2052,7 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
 		tg->last_high_overflow_time[READ] = now;
 
 	iops = tg->last_io_disp[WRITE] * HZ / elapsed_time;
+	throtl_log(&tg->service_queue, "wiops=%u", iops);
 	if (tg->iops[WRITE][LIMIT_LOW] != 0 &&
 	    iops >= tg->iops[WRITE][LIMIT_LOW])
 		tg->last_low_overflow_time[WRITE] = now;
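For reviewers trying the patch: throtl_log() emits its messages into the blktrace stream, so the new log lines can be watched with blktrace/blkparse. A minimal sketch (the device name /dev/sdb is an assumption; any throttled request queue works, and blktrace needs root and a mounted debugfs):

```shell
# Stream trace events from the throttled device and keep only the
# blk-throttle messages added by this series (they appear as 'm'
# message events prefixed with "throtl").
blktrace -d /dev/sdb -o - | blkparse -i - | grep throtl
```

Expect lines such as "switch state to high" or "rbps=..." interleaved with normal I/O events once the low/high limits are configured.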