From patchwork Fri Sep 28 17:45:39 2018
From: Josef Bacik <josef@toxicpanda.com>
To: axboe@kernel.dk, linux-block@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 1/5] blk-iolatency: use q->nr_requests directly
Date: Fri, 28 Sep 2018 13:45:39 -0400
Message-Id: <20180928174543.28486-2-josef@toxicpanda.com>
In-Reply-To: <20180928174543.28486-1-josef@toxicpanda.com>
We were using blk_queue_depth() assuming that it would return nr_requests,
but we hit a case in production on drives that had to have NCQ turned off
in order for them to not shit the bed, which resulted in a queue depth of 1
even though nr_requests was much larger.  iolatency really only cares about
the requests we are allowed to queue up: any IO that gets onto the request
list is going to be serviced soon-ish, so we want to be throttling before
the bio gets onto the request list.  To make iolatency work as expected,
use q->nr_requests directly instead of blk_queue_depth(), as that is what
we actually care about.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 27c14f8d2576..c2e38bc12f27 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -255,7 +255,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 				struct child_latency_info *lat_info,
 				bool up)
 {
-	unsigned long qd = blk_queue_depth(blkiolat->rqos.q);
+	unsigned long qd = blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
 	unsigned long old = atomic_read(&lat_info->scale_cookie);
 	unsigned long max_scale = qd << 1;
@@ -295,7 +295,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
-	unsigned long qd = blk_queue_depth(iolat->blkiolat->rqos.q);
+	unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
 	unsigned long old = iolat->rq_depth.max_depth;
 
@@ -857,7 +857,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
-	iolat->rq_depth.queue_depth = blk_queue_depth(blkg->q);
+	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
 	iolat->rq_depth.max_depth = UINT_MAX;
 	iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
 	iolat->blkiolat = blkiolat;
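For illustration only (not part of the patch): a small userspace sketch of
why the two values differ.  The struct, field values, and old_qd() helper
are hypothetical; old_qd() paraphrases the behaviour described above
(preferring the device/NCQ depth when one is set), and max_scale = qd << 1
mirrors scale_cookie_change() in the diff.

#include <stdio.h>

/*
 * Hypothetical model, not kernel code.  With NCQ disabled the device
 * queue depth collapses to 1, so any scaling ceiling derived from it has
 * no headroom; nr_requests is what the block layer will actually allow
 * to be queued, which is what iolatency cares about.
 */
struct fake_queue {
	unsigned int queue_depth;	/* device/NCQ depth; 1 with NCQ off */
	unsigned long nr_requests;	/* software request-list size, e.g. 128 */
};

/* Paraphrase of the old behaviour: prefer the device depth when set. */
static unsigned long old_qd(const struct fake_queue *q)
{
	return q->queue_depth ? q->queue_depth : q->nr_requests;
}

int main(void)
{
	struct fake_queue q = { .queue_depth = 1, .nr_requests = 128 };

	/* max_scale = qd << 1, as in scale_cookie_change() */
	printf("old: qd=%lu max_scale=%lu\n", old_qd(&q), old_qd(&q) << 1);
	printf("new: qd=%lu max_scale=%lu\n", q.nr_requests, q.nr_requests << 1);
	return 0;
}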
From patchwork Fri Sep 28 17:45:40 2018
From: Josef Bacik <josef@toxicpanda.com>
To: axboe@kernel.dk, linux-block@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 2/5] blk-iolatency: deal with nr_requests == 1
Date: Fri, 28 Sep 2018 13:45:40 -0400
Message-Id: <20180928174543.28486-3-josef@toxicpanda.com>
In-Reply-To: <20180928174543.28486-1-josef@toxicpanda.com>

Hitting the case where blk_queue_depth() returned 1 uncovered the fact
that iolatency doesn't actually handle this case properly: it simply
doesn't scale anybody down.  In this case we should go straight to
applying the time delay, which we weren't doing.  Since we already limit
the floor to 1 request, this if statement is not needed; removing it
allows us to set the depth to 1, which in turn lets us apply the delay
when needed.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index c2e38bc12f27..8daea7a4fe49 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -312,7 +312,7 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 			iolat->rq_depth.max_depth = old;
 			wake_up_all(&iolat->rq_wait.wait);
 		}
-	} else if (old > 1) {
+	} else {
 		old >>= 1;
 		iolat->rq_depth.max_depth = max(old, 1UL);
 	}
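For illustration only (not part of the patch): a stand-alone sketch of the
scale-down arithmetic this one-liner changes.  scale_down() is a
hypothetical helper modelling the halve-and-floor logic above.

#include <stdio.h>

/*
 * Hypothetical model of the scale-down branch in scale_change().  Halving
 * and flooring at 1 already yields a sane result for any starting depth,
 * so the "old > 1" guard was redundant; worse, with the guard in place a
 * depth of 1 was never written back, so the group never reached the state
 * where the time delay is applied.
 */
static unsigned long scale_down(unsigned long old)
{
	old >>= 1;
	return old > 1UL ? old : 1UL;
}

int main(void)
{
	unsigned long depth = 8;

	while (depth > 1) {
		printf("max_depth=%lu\n", depth);
		depth = scale_down(depth);
	}
	/* 8 -> 4 -> 2 -> 1; at 1, delay-based throttling takes over */
	printf("max_depth=%lu (floor)\n", depth);
	return 0;
}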
From patchwork Fri Sep 28 17:45:41 2018
From: Josef Bacik <josef@toxicpanda.com>
To: axboe@kernel.dk, linux-block@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 3/5] blk-iolatency: deal with small samples
Date: Fri, 28 Sep 2018 13:45:41 -0400
Message-Id: <20180928174543.28486-4-josef@toxicpanda.com>
In-Reply-To: <20180928174543.28486-1-josef@toxicpanda.com>

There is logic to keep cgroups that haven't done a lot of IO in the most
recent scale window from being punished for over-active, higher-priority
groups.  However, for things like SSDs, where the windows are pretty
short, we'll end up with small numbers of samples, and 5% of those
samples will come out to 0 if there aren't enough of them.  Make the
floor 1 sample to keep us from improperly bailing out of scaling down.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 8daea7a4fe49..e7be77b0ce8b 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -366,7 +366,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 		 * scale down event.
 		 */
 		samples_thresh = lat_info->nr_samples * 5;
-		samples_thresh = div64_u64(samples_thresh, 100);
+		samples_thresh = max(1ULL, div64_u64(samples_thresh, 100));
 		if (iolat->nr_samples <= samples_thresh)
 			return;
 	}
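For illustration only (not part of the patch): the integer arithmetic
behind the fix.  The sample count below is made up; the point is that 5%
of fewer than 20 samples truncates to 0, so flooring the threshold at 1
keeps the 5% cutoff meaningful in small windows.

#include <stdio.h>

int main(void)
{
	unsigned long long nr_samples = 12;			/* hypothetical window */
	unsigned long long thresh = nr_samples * 5 / 100;	/* truncates to 0 */
	unsigned long long fixed = thresh > 1 ? thresh : 1;	/* floored to 1 */

	printf("5%% of %llu samples: before=%llu after=%llu\n",
	       nr_samples, thresh, fixed);
	return 0;
}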
From patchwork Fri Sep 28 17:45:42 2018
From: Josef Bacik <josef@toxicpanda.com>
To: axboe@kernel.dk, linux-block@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 4/5] blk-iolatency: use a percentile approach for ssd's
Date: Fri, 28 Sep 2018 13:45:42 -0400
Message-Id: <20180928174543.28486-5-josef@toxicpanda.com>
In-Reply-To: <20180928174543.28486-1-josef@toxicpanda.com>

We use an average-latency approach to determine whether we're missing our
latency target.  This works well for rotational storage, where latencies
are generally consistent, but SSDs and other low-latency devices have
spikier behavior, which means we often won't throttle misbehaving groups
because a lot of IO completes drastically faster than our latency target.
Instead, keep track of how many IOs miss our target and how many IOs are
done in our time window.  If the p90 latency is above our target then we
know we need to throttle.  With this change in place we are seeing the
same throttling behavior with our test case on SSDs as we see with
rotational drives.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 179 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 145 insertions(+), 34 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index e7be77b0ce8b..fd246805b0be 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -115,9 +115,21 @@ struct child_latency_info {
 	atomic_t scale_cookie;
 };
 
+struct percentile_stats {
+	u64 total;
+	u64 missed;
+};
+
+struct latency_stat {
+	union {
+		struct percentile_stats ps;
+		struct blk_rq_stat rqs;
+	};
+};
+
 struct iolatency_grp {
 	struct blkg_policy_data pd;
-	struct blk_rq_stat __percpu *stats;
+	struct latency_stat __percpu *stats;
 	struct blk_iolatency *blkiolat;
 	struct rq_depth rq_depth;
 	struct rq_wait rq_wait;
@@ -132,6 +144,7 @@ struct iolatency_grp {
 	/* Our current number of IO's for the last summation. */
 	u64 nr_samples;
 
+	bool ssd;
 	struct child_latency_info child_lat;
 };
 
@@ -172,6 +185,80 @@ static inline struct blkcg_gq *lat_to_blkg(struct iolatency_grp *iolat)
 	return pd_to_blkg(&iolat->pd);
 }
 
+static inline void latency_stat_init(struct iolatency_grp *iolat,
+				     struct latency_stat *stat)
+{
+	if (iolat->ssd) {
+		stat->ps.total = 0;
+		stat->ps.missed = 0;
+	} else
+		blk_rq_stat_init(&stat->rqs);
+}
+
+static inline void latency_stat_sum(struct iolatency_grp *iolat,
+				    struct latency_stat *sum,
+				    struct latency_stat *stat)
+{
+	if (iolat->ssd) {
+		sum->ps.total += stat->ps.total;
+		sum->ps.missed += stat->ps.missed;
+	} else
+		blk_rq_stat_sum(&sum->rqs, &stat->rqs);
+}
+
+static inline void latency_stat_record_time(struct iolatency_grp *iolat,
+					    u64 req_time)
+{
+	struct latency_stat *stat = get_cpu_ptr(iolat->stats);
+	if (iolat->ssd) {
+		if (req_time >= iolat->min_lat_nsec)
+			stat->ps.missed++;
+		stat->ps.total++;
+	} else
+		blk_rq_stat_add(&stat->rqs, req_time);
+	put_cpu_ptr(stat);
+}
+
+static inline bool latency_sum_ok(struct iolatency_grp *iolat,
+				  struct latency_stat *stat)
+{
+	if (iolat->ssd) {
+		u64 thresh = div64_u64(stat->ps.total, 10);
+		thresh = max(thresh, 1ULL);
+		return stat->ps.missed < thresh;
+	}
+	return stat->rqs.mean <= iolat->min_lat_nsec;
+}
+
+static inline u64 latency_stat_samples(struct iolatency_grp *iolat,
+				       struct latency_stat *stat)
+{
+	if (iolat->ssd)
+		return stat->ps.total;
+	return stat->rqs.nr_samples;
+}
+
+static inline void iolat_update_total_lat_avg(struct iolatency_grp *iolat,
+					      struct latency_stat *stat)
+{
+	int exp_idx;
+
+	if (iolat->ssd)
+		return;
+
+	/*
+	 * CALC_LOAD takes in a number stored in fixed point representation.
+	 * Because we are using this for IO time in ns, the values stored
+	 * are significantly larger than the FIXED_1 denominator (2048).
+	 * Therefore, rounding errors in the calculation are negligible and
+	 * can be ignored.
+	 */
+	exp_idx = min_t(int, BLKIOLATENCY_NR_EXP_FACTORS - 1,
+			div64_u64(iolat->cur_win_nsec,
+				  BLKIOLATENCY_EXP_BUCKET_SIZE));
+	CALC_LOAD(iolat->lat_avg, iolatency_exp_factors[exp_idx], stat->rqs.mean);
+}
+
 static inline bool iolatency_may_queue(struct iolatency_grp *iolat,
 				       wait_queue_entry_t *wait,
 				       bool first_block)
@@ -418,7 +505,6 @@ static void iolatency_record_time(struct iolatency_grp *iolat,
 				  struct bio_issue *issue, u64 now,
 				  bool issue_as_root)
 {
-	struct blk_rq_stat *rq_stat;
 	u64 start = bio_issue_time(issue);
 	u64 req_time;
 
@@ -444,9 +530,7 @@ static void iolatency_record_time(struct iolatency_grp *iolat,
 		return;
 	}
 
-	rq_stat = get_cpu_ptr(iolat->stats);
-	blk_rq_stat_add(rq_stat, req_time);
-	put_cpu_ptr(rq_stat);
+	latency_stat_record_time(iolat, req_time);
 }
 
 #define BLKIOLATENCY_MIN_ADJUST_TIME (500 * NSEC_PER_MSEC)
@@ -457,17 +541,17 @@ static void iolatency_check_latencies(struct iolatency_grp *iolat, u64 now)
 	struct blkcg_gq *blkg = lat_to_blkg(iolat);
 	struct iolatency_grp *parent;
 	struct child_latency_info *lat_info;
-	struct blk_rq_stat stat;
+	struct latency_stat stat;
 	unsigned long flags;
-	int cpu, exp_idx;
+	int cpu;
 
-	blk_rq_stat_init(&stat);
+	latency_stat_init(iolat, &stat);
 	preempt_disable();
 	for_each_online_cpu(cpu) {
-		struct blk_rq_stat *s;
+		struct latency_stat *s;
 		s = per_cpu_ptr(iolat->stats, cpu);
-		blk_rq_stat_sum(&stat, s);
-		blk_rq_stat_init(s);
+		latency_stat_sum(iolat, &stat, s);
+		latency_stat_init(iolat, s);
 	}
 	preempt_enable();
 
@@ -477,41 +561,33 @@ static void iolatency_check_latencies(struct iolatency_grp *iolat, u64 now)
 
 	lat_info = &parent->child_lat;
 
-	/*
-	 * CALC_LOAD takes in a number stored in fixed point representation.
-	 * Because we are using this for IO time in ns, the values stored
-	 * are significantly larger than the FIXED_1 denominator (2048).
-	 * Therefore, rounding errors in the calculation are negligible and
-	 * can be ignored.
-	 */
-	exp_idx = min_t(int, BLKIOLATENCY_NR_EXP_FACTORS - 1,
-			div64_u64(iolat->cur_win_nsec,
-				  BLKIOLATENCY_EXP_BUCKET_SIZE));
-	CALC_LOAD(iolat->lat_avg, iolatency_exp_factors[exp_idx], stat.mean);
+	iolat_update_total_lat_avg(iolat, &stat);
 
 	/* Everything is ok and we don't need to adjust the scale. */
-	if (stat.mean <= iolat->min_lat_nsec &&
+	if (latency_sum_ok(iolat, &stat) &&
 	    atomic_read(&lat_info->scale_cookie) == DEFAULT_SCALE_COOKIE)
 		return;
 
 	/* Somebody beat us to the punch, just bail. */
 	spin_lock_irqsave(&lat_info->lock, flags);
 	lat_info->nr_samples -= iolat->nr_samples;
-	lat_info->nr_samples += stat.nr_samples;
-	iolat->nr_samples = stat.nr_samples;
+	lat_info->nr_samples += latency_stat_samples(iolat, &stat);
+	iolat->nr_samples = latency_stat_samples(iolat, &stat);
 
 	if ((lat_info->last_scale_event >= now ||
 	    now - lat_info->last_scale_event < BLKIOLATENCY_MIN_ADJUST_TIME) &&
 	    lat_info->scale_lat <= iolat->min_lat_nsec)
 		goto out;
 
-	if (stat.mean <= iolat->min_lat_nsec &&
-	    stat.nr_samples >= BLKIOLATENCY_MIN_GOOD_SAMPLES) {
+	if (latency_sum_ok(iolat, &stat)) {
+		if (latency_stat_samples(iolat, &stat) <
+		    BLKIOLATENCY_MIN_GOOD_SAMPLES)
+			goto out;
 		if (lat_info->scale_grp == iolat) {
 			lat_info->last_scale_event = now;
 			scale_cookie_change(iolat->blkiolat, lat_info, true);
 		}
-	} else if (stat.mean > iolat->min_lat_nsec) {
+	} else {
 		lat_info->last_scale_event = now;
 		if (!lat_info->scale_grp ||
 		    lat_info->scale_lat > iolat->min_lat_nsec) {
@@ -808,13 +884,43 @@ static int iolatency_print_limit(struct seq_file *sf, void *v)
 	return 0;
 }
 
+static size_t iolatency_ssd_stat(struct iolatency_grp *iolat, char *buf,
+				 size_t size)
+{
+	struct latency_stat stat;
+	int cpu;
+
+	latency_stat_init(iolat, &stat);
+	preempt_disable();
+	for_each_online_cpu(cpu) {
+		struct latency_stat *s;
+		s = per_cpu_ptr(iolat->stats, cpu);
+		latency_stat_sum(iolat, &stat, s);
+	}
+	preempt_enable();
+
+	if (iolat->rq_depth.max_depth == UINT_MAX)
+		return scnprintf(buf, size, " missed=%llu total=%llu depth=max",
+				 (unsigned long long)stat.ps.missed,
+				 (unsigned long long)stat.ps.total);
+	return scnprintf(buf, size, " missed=%llu total=%llu depth=%u",
+			 (unsigned long long)stat.ps.missed,
+			 (unsigned long long)stat.ps.total,
+			 iolat->rq_depth.max_depth);
+}
+
 static size_t iolatency_pd_stat(struct blkg_policy_data *pd, char *buf,
 				size_t size)
 {
 	struct iolatency_grp *iolat = pd_to_lat(pd);
-	unsigned long long avg_lat = div64_u64(iolat->lat_avg, NSEC_PER_USEC);
-	unsigned long long cur_win = div64_u64(iolat->cur_win_nsec, NSEC_PER_MSEC);
+	unsigned long long avg_lat;
+	unsigned long long cur_win;
+
+	if (iolat->ssd)
+		return iolatency_ssd_stat(iolat, buf, size);
+
+	avg_lat = div64_u64(iolat->lat_avg, NSEC_PER_USEC);
+	cur_win = div64_u64(iolat->cur_win_nsec, NSEC_PER_MSEC);
 
 	if (iolat->rq_depth.max_depth == UINT_MAX)
 		return scnprintf(buf, size, " depth=max avg_lat=%llu win=%llu",
 				 avg_lat, cur_win);
@@ -831,8 +937,8 @@ static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp, int node)
 	iolat = kzalloc_node(sizeof(*iolat), gfp, node);
 	if (!iolat)
 		return NULL;
-	iolat->stats = __alloc_percpu_gfp(sizeof(struct blk_rq_stat),
-					  __alignof__(struct blk_rq_stat), gfp);
+	iolat->stats = __alloc_percpu_gfp(sizeof(struct latency_stat),
+					  __alignof__(struct latency_stat), gfp);
 	if (!iolat->stats) {
 		kfree(iolat);
 		return NULL;
 	}
@@ -849,10 +955,15 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 	u64 now = ktime_to_ns(ktime_get());
 	int cpu;
 
+	if (blk_queue_nonrot(blkg->q))
+		iolat->ssd = true;
+	else
+		iolat->ssd = false;
+
 	for_each_possible_cpu(cpu) {
-		struct blk_rq_stat *stat;
+		struct latency_stat *stat;
 		stat = per_cpu_ptr(iolat->stats, cpu);
-		blk_rq_stat_init(stat);
+		latency_stat_init(iolat, stat);
 	}
 
 	rq_wait_init(&iolat->rq_wait);
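For illustration only (not part of the patch): a stand-alone sketch of the
percentile-style accounting added above.  The struct and helpers are
hypothetical; the 10% cutoff and the missed/total counters mirror
latency_stat_record_time() and latency_sum_ok() in the diff.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-window counters, modelled on struct percentile_stats. */
struct window_stats {
	unsigned long long total;
	unsigned long long missed;
};

static void record(struct window_stats *w, unsigned long long lat_ns,
		   unsigned long long target_ns)
{
	if (lat_ns >= target_ns)
		w->missed++;
	w->total++;
}

/* Window is OK if fewer than 10% of the IOs missed, i.e. p90 <= target. */
static bool window_ok(const struct window_stats *w)
{
	unsigned long long thresh = w->total / 10;

	if (thresh < 1)
		thresh = 1;
	return w->missed < thresh;
}

int main(void)
{
	struct window_stats w = { 0, 0 };
	/* hypothetical completion latencies (ns) against a 2ms target */
	unsigned long long lat[] = { 100000, 150000, 90000, 5000000,
				     120000, 80000, 7000000, 110000 };

	for (unsigned int i = 0; i < sizeof(lat) / sizeof(lat[0]); i++)
		record(&w, lat[i], 2000000ULL);

	/* 2 of 8 IOs missed (25%), so the window is not OK and we throttle */
	printf("total=%llu missed=%llu ok=%d\n", w.total, w.missed, window_ok(&w));
	return 0;
}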
From patchwork Fri Sep 28 17:45:43 2018
From: Josef Bacik <josef@toxicpanda.com>
To: axboe@kernel.dk, linux-block@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 5/5] blk-iolatency: keep track of previous windows stats
Date: Fri, 28 Sep 2018 13:45:43 -0400
Message-Id: <20180928174543.28486-6-josef@toxicpanda.com>
In-Reply-To: <20180928174543.28486-1-josef@toxicpanda.com>
We apply smoothing to the scale changes in order to keep sawtooth behavior
from occurring.  However, our window for checking whether we've missed our
target can sometimes be shorter than the smoothing interval (500ms),
especially on faster drives like SSDs.  To deal with this, keep a running
tally of the previous intervals that we threw away because we had already
done a scale event recently.  This is needed for the SSD case, as these
low-latency drives will have bursts of latency, and if latency happens to
be OK for the window that directly follows the opening of the scale window
we could unthrottle even though we were missing our target in previous
windows.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index fd246805b0be..35c48d7b8f78 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -130,6 +130,7 @@ struct latency_stat {
 struct iolatency_grp {
 	struct blkg_policy_data pd;
 	struct latency_stat __percpu *stats;
+	struct latency_stat cur_stat;
 	struct blk_iolatency *blkiolat;
 	struct rq_depth rq_depth;
 	struct rq_wait rq_wait;
@@ -570,24 +571,27 @@ static void iolatency_check_latencies(struct iolatency_grp *iolat, u64 now)
 
 	/* Somebody beat us to the punch, just bail. */
 	spin_lock_irqsave(&lat_info->lock, flags);
+
+	latency_stat_sum(iolat, &iolat->cur_stat, &stat);
 	lat_info->nr_samples -= iolat->nr_samples;
-	lat_info->nr_samples += latency_stat_samples(iolat, &stat);
-	iolat->nr_samples = latency_stat_samples(iolat, &stat);
+	lat_info->nr_samples += latency_stat_samples(iolat, &iolat->cur_stat);
+	iolat->nr_samples = latency_stat_samples(iolat, &iolat->cur_stat);
 
 	if ((lat_info->last_scale_event >= now ||
-	    now - lat_info->last_scale_event < BLKIOLATENCY_MIN_ADJUST_TIME) &&
-	    lat_info->scale_lat <= iolat->min_lat_nsec)
+	    now - lat_info->last_scale_event < BLKIOLATENCY_MIN_ADJUST_TIME))
 		goto out;
 
-	if (latency_sum_ok(iolat, &stat)) {
-		if (latency_stat_samples(iolat, &stat) <
+	if (latency_sum_ok(iolat, &iolat->cur_stat) &&
+	    latency_sum_ok(iolat, &stat)) {
+		if (latency_stat_samples(iolat, &iolat->cur_stat) <
 		    BLKIOLATENCY_MIN_GOOD_SAMPLES)
 			goto out;
 		if (lat_info->scale_grp == iolat) {
 			lat_info->last_scale_event = now;
 			scale_cookie_change(iolat->blkiolat, lat_info, true);
 		}
-	} else {
+	} else if (lat_info->scale_lat == 0 ||
+		   lat_info->scale_lat >= iolat->min_lat_nsec) {
 		lat_info->last_scale_event = now;
 		if (!lat_info->scale_grp ||
 		    lat_info->scale_lat > iolat->min_lat_nsec) {
@@ -596,6 +600,7 @@ static void iolatency_check_latencies(struct iolatency_grp *iolat, u64 now)
 		}
 		scale_cookie_change(iolat->blkiolat, lat_info, false);
 	}
+	latency_stat_init(iolat, &iolat->cur_stat);
 out:
 	spin_unlock_irqrestore(&lat_info->lock, flags);
 }
@@ -966,6 +971,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 		latency_stat_init(iolat, stat);
 	}
 
+	latency_stat_init(iolat, &iolat->cur_stat);
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
 	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
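For illustration only (not part of the patch): a stand-alone sketch of why
the running tally matters.  The types and values are hypothetical; the
accumulate-then-check flow mirrors how cur_stat is folded into the decision
above, and the 10% cutoff mirrors latency_sum_ok() for the SSD case.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-window counters, modelled on struct percentile_stats. */
struct win_stat {
	unsigned long long total;
	unsigned long long missed;
};

/* Window meets the target if fewer than 10% of its IOs missed. */
static bool win_ok(const struct win_stat *w)
{
	unsigned long long thresh = w->total / 10;

	if (thresh < 1)
		thresh = 1;
	return w->missed < thresh;
}

int main(void)
{
	/* hypothetical windows that all fall inside one 500ms smoothing interval */
	struct win_stat windows[] = { { 100, 40 }, { 120, 30 }, { 90, 2 } };
	unsigned int n = sizeof(windows) / sizeof(windows[0]);
	struct win_stat tally = { 0, 0 };	/* running tally, like cur_stat */

	for (unsigned int i = 0; i < n; i++) {
		tally.total += windows[i].total;
		tally.missed += windows[i].missed;
	}

	/*
	 * The newest window looks fine on its own, but the accumulated tally
	 * still misses the target, so we must not unthrottle yet.
	 */
	printf("last window ok=%d, accumulated ok=%d\n",
	       win_ok(&windows[n - 1]), win_ok(&tally));
	return 0;
}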