From patchwork Wed Apr 5 19:01:31 2017
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 9665475
From: Omar Sandoval
To: Jens Axboe, linux-block@vger.kernel.org
Cc: kernel-team@fb.com
Subject: [PATCH v3 3/8] blk-mq-sched: set up scheduler tags when bringing up new queues
Date: Wed, 5 Apr 2017 12:01:31 -0700
Message-Id: <138eeb30ef2533cefa165a4c621abf26a1989604.1491418411.git.osandov@fb.com>
X-Mailer: git-send-email 2.12.2
From: Omar Sandoval

If a new hardware queue is added at runtime, we don't allocate scheduler
tags for it, leading to a crash. This hooks up the scheduler framework to
blk_mq_{init,exit}_hctx() to make sure everything gets properly
initialized/freed.

Signed-off-by: Omar Sandoval
---
 block/blk-mq-sched.c | 22 ++++++++++++++++++++++
 block/blk-mq-sched.h |  5 +++++
 block/blk-mq.c       |  9 ++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 6bd1758ea29b..0bb13bb51daa 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -461,6 +461,28 @@ void blk_mq_sched_teardown(struct request_queue *q)
 		blk_mq_sched_free_tags(set, hctx, i);
 }
 
+int blk_mq_sched_init_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+			   unsigned int hctx_idx)
+{
+	struct elevator_queue *e = q->elevator;
+
+	if (!e)
+		return 0;
+
+	return blk_mq_sched_alloc_tags(q, hctx, hctx_idx);
+}
+
+void blk_mq_sched_exit_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+			    unsigned int hctx_idx)
+{
+	struct elevator_queue *e = q->elevator;
+
+	if (!e)
+		return;
+
+	blk_mq_sched_free_tags(q->tag_set, hctx, hctx_idx);
+}
+
 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 {
 	struct blk_mq_hw_ctx *hctx;
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 873f9af5a35b..19db25e0c95a 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -35,6 +35,11 @@ void blk_mq_sched_move_to_dispatch(struct blk_mq_hw_ctx *hctx,
 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
 void blk_mq_sched_teardown(struct request_queue *q);
 
+int blk_mq_sched_init_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+			   unsigned int hctx_idx);
+void blk_mq_sched_exit_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+			    unsigned int hctx_idx);
+
 int blk_mq_sched_init(struct request_queue *q);
 
 static inline bool
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 09cff6d1ba76..672430c8c342 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1823,6 +1823,8 @@ static void blk_mq_exit_hctx(struct request_queue *q,
 				       hctx->fq->flush_rq, hctx_idx,
 				       flush_start_tag + hctx_idx);
 
+	blk_mq_sched_exit_hctx(q, hctx, hctx_idx);
+
 	if (set->ops->exit_hctx)
 		set->ops->exit_hctx(hctx, hctx_idx);
 
@@ -1889,9 +1891,12 @@ static int blk_mq_init_hctx(struct request_queue *q,
 	    set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
 		goto free_bitmap;
 
+	if (blk_mq_sched_init_hctx(q, hctx, hctx_idx))
+		goto exit_hctx;
+
 	hctx->fq = blk_alloc_flush_queue(q, hctx->numa_node, set->cmd_size);
 	if (!hctx->fq)
-		goto exit_hctx;
+		goto sched_exit_hctx;
 
 	if (set->ops->init_request &&
 	    set->ops->init_request(set->driver_data,
@@ -1906,6 +1911,8 @@ static int blk_mq_init_hctx(struct request_queue *q,
 
  free_fq:
 	kfree(hctx->fq);
+ sched_exit_hctx:
+	blk_mq_sched_exit_hctx(q, hctx, hctx_idx);
  exit_hctx:
 	if (set->ops->exit_hctx)
 		set->ops->exit_hctx(hctx, hctx_idx);
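
[Editor's note, not part of the patch: the setup/teardown ordering the diff
establishes is easier to see outside of hunk context. The sketch below is a
simplified, illustrative outline of what blk_mq_init_hctx() does once this
patch is applied; the function name init_hctx_outline is hypothetical, error
codes are condensed, and the other steps of the real function are omitted.]

/*
 * Simplified outline (illustrative only, not the real kernel function).
 * Per-hctx scheduler state is set up right after the driver's ->init_hctx()
 * callback, so any later failure unwinds it in reverse order via the new
 * sched_exit_hctx label before ->exit_hctx() runs.
 */
static int init_hctx_outline(struct request_queue *q, struct blk_mq_tag_set *set,
			     struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
{
	if (set->ops->init_hctx &&
	    set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
		return -ENOMEM;

	/* New in this patch: allocate scheduler tags for this hctx. */
	if (blk_mq_sched_init_hctx(q, hctx, hctx_idx))
		goto exit_hctx;

	hctx->fq = blk_alloc_flush_queue(q, hctx->numa_node, set->cmd_size);
	if (!hctx->fq)
		goto sched_exit_hctx;

	return 0;

 sched_exit_hctx:
	/* Undo the scheduler setup before undoing the driver's ->init_hctx(). */
	blk_mq_sched_exit_hctx(q, hctx, hctx_idx);
 exit_hctx:
	if (set->ops->exit_hctx)
		set->ops->exit_hctx(hctx, hctx_idx);
	return -ENOMEM;
}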