From patchwork Wed Apr 5 19:01:34 2017
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 9665479
From: Omar Sandoval
To: Jens Axboe, linux-block@vger.kernel.org
Cc: kernel-team@fb.com
Subject: [PATCH v3 6/8] blk-mq-sched: provide hooks for initializing hardware queue data
Date: Wed, 5 Apr 2017 12:01:34 -0700
Message-Id: <07fff316fa01c2be4fa6c45f510f18ca1b79fa5a.1491418411.git.osandov@fb.com>
X-Mailer: git-send-email 2.12.2
X-Mailing-List: linux-block@vger.kernel.org

From: Omar Sandoval

Schedulers need to be informed when a hardware queue is added or removed
at runtime so they can allocate/free per-hardware queue data.
So, replace the blk_mq_sched_init_hctx_data() helper, which only makes
sense at init time, with .init_hctx() and .exit_hctx() hooks.

Signed-off-by: Omar Sandoval
---
 block/blk-mq-sched.c     | 81 +++++++++++++++++++++++++-----------------------
 block/blk-mq-sched.h     |  4 ---
 include/linux/elevator.h |  2 ++
 3 files changed, 45 insertions(+), 42 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index e8c2ed654ef0..9d7f6d6ca693 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -30,43 +30,6 @@ void blk_mq_sched_free_hctx_data(struct request_queue *q,
 }
 EXPORT_SYMBOL_GPL(blk_mq_sched_free_hctx_data);
 
-int blk_mq_sched_init_hctx_data(struct request_queue *q, size_t size,
-				int (*init)(struct blk_mq_hw_ctx *),
-				void (*exit)(struct blk_mq_hw_ctx *))
-{
-	struct blk_mq_hw_ctx *hctx;
-	int ret;
-	int i;
-
-	queue_for_each_hw_ctx(q, hctx, i) {
-		hctx->sched_data = kmalloc_node(size, GFP_KERNEL, hctx->numa_node);
-		if (!hctx->sched_data) {
-			ret = -ENOMEM;
-			goto error;
-		}
-
-		if (init) {
-			ret = init(hctx);
-			if (ret) {
-				/*
-				 * We don't want to give exit() a partially
-				 * initialized sched_data. init() must clean up
-				 * if it fails.
-				 */
-				kfree(hctx->sched_data);
-				hctx->sched_data = NULL;
-				goto error;
-			}
-		}
-	}
-
-	return 0;
-error:
-	blk_mq_sched_free_hctx_data(q, exit);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(blk_mq_sched_init_hctx_data);
-
 static void __blk_mq_sched_assign_ioc(struct request_queue *q,
 				      struct request *rq,
 				      struct bio *bio,
@@ -465,11 +428,24 @@ int blk_mq_sched_init_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
 			   unsigned int hctx_idx)
 {
 	struct elevator_queue *e = q->elevator;
+	int ret;
 
 	if (!e)
 		return 0;
 
-	return blk_mq_sched_alloc_tags(q, hctx, hctx_idx);
+	ret = blk_mq_sched_alloc_tags(q, hctx, hctx_idx);
+	if (ret)
+		return ret;
+
+	if (e->type->ops.mq.init_hctx) {
+		ret = e->type->ops.mq.init_hctx(hctx, hctx_idx);
+		if (ret) {
+			blk_mq_sched_free_tags(q->tag_set, hctx, hctx_idx);
+			return ret;
+		}
+	}
+
+	return 0;
 }
 
 void blk_mq_sched_exit_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
@@ -480,12 +456,18 @@ void blk_mq_sched_exit_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
 	if (!e)
 		return;
 
+	if (e->type->ops.mq.exit_hctx && hctx->sched_data) {
+		e->type->ops.mq.exit_hctx(hctx, hctx_idx);
+		hctx->sched_data = NULL;
+	}
+
 	blk_mq_sched_free_tags(q->tag_set, hctx, hctx_idx);
 }
 
 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 {
 	struct blk_mq_hw_ctx *hctx;
+	struct elevator_queue *eq;
 	unsigned int i;
 	int ret;
 
@@ -510,6 +492,18 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 	if (ret)
 		goto err;
 
+	if (e->ops.mq.init_hctx) {
+		queue_for_each_hw_ctx(q, hctx, i) {
+			ret = e->ops.mq.init_hctx(hctx, i);
+			if (ret) {
+				eq = q->elevator;
+				blk_mq_exit_sched(q, eq);
+				kobject_put(&eq->kobj);
+				return ret;
+			}
+		}
+	}
+
 	return 0;
 
 err:
@@ -520,6 +514,17 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 
 void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e)
 {
+	struct blk_mq_hw_ctx *hctx;
+	unsigned int i;
+
+	if (e->type->ops.mq.exit_hctx) {
+		queue_for_each_hw_ctx(q, hctx, i) {
+			if (hctx->sched_data) {
+				e->type->ops.mq.exit_hctx(hctx, i);
+				hctx->sched_data = NULL;
+			}
+		}
+	}
+
 	if (e->type->ops.mq.exit_sched)
 		e->type->ops.mq.exit_sched(e);
 	blk_mq_sched_tags_teardown(q);
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index e704956e0862..c6e760df0fb4 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -4,10 +4,6 @@
 #include "blk-mq.h"
 #include "blk-mq-tag.h"
 
-int blk_mq_sched_init_hctx_data(struct request_queue *q, size_t size,
-				int (*init)(struct blk_mq_hw_ctx *),
-				void (*exit)(struct blk_mq_hw_ctx *));
-
 void blk_mq_sched_free_hctx_data(struct request_queue *q,
 				 void (*exit)(struct blk_mq_hw_ctx *));
 
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 22d39e8d4de1..b7ec315ee7e7 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -93,6 +93,8 @@ struct blk_mq_hw_ctx;
 struct elevator_mq_ops {
 	int (*init_sched)(struct request_queue *, struct elevator_type *);
 	void (*exit_sched)(struct elevator_queue *);
+	int (*init_hctx)(struct blk_mq_hw_ctx *, unsigned int);
+	void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int);
 	bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
 	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *);