From patchwork Thu Sep 13 12:15:32 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10599357
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Alan Stern, Christoph Hellwig,
    Bart Van Assche, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn,
    Adrian Hunter, "James E.J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org
Subject: [PATCH V3 03/17] block: rename QUEUE_FLAG_NO_SCHED as QUEUE_FLAG_ADMIN
Date: Thu, 13 Sep 2018 20:15:32 +0800
Message-Id: <20180913121546.5710-4-ming.lei@redhat.com>
In-Reply-To: <20180913121546.5710-1-ming.lei@redhat.com>
References: <20180913121546.5710-1-ming.lei@redhat.com>
X-Mailing-List: linux-scsi@vger.kernel.org

All users of QUEUE_FLAG_NO_SCHED now set it on admin queues only, and no
driver needs this flag for an I/O queue. So rename it to QUEUE_FLAG_ADMIN,
which is more straightforward.

Cc: Alan Stern
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Adrian Hunter
Cc: "James E.J. Bottomley"
Cc: "Martin K. Petersen"
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Ming Lei
---
 block/blk-mq-debugfs.c        | 2 +-
 block/blk-mq.c                | 2 +-
 block/elevator.c              | 2 +-
 drivers/block/null_blk_main.c | 2 +-
 drivers/nvme/host/fc.c        | 2 +-
 drivers/nvme/host/pci.c       | 2 +-
 drivers/nvme/host/rdma.c      | 2 +-
 drivers/nvme/target/loop.c    | 2 +-
 include/linux/blkdev.h        | 8 ++++----
 9 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 246c9afb6f5d..8df013e9f242 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -133,7 +133,7 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SCSI_PASSTHROUGH),
 	QUEUE_FLAG_NAME(QUIESCED),
 	QUEUE_FLAG_NAME(PREEMPT_ONLY),
-	QUEUE_FLAG_NAME(NO_SCHED),
+	QUEUE_FLAG_NAME(ADMIN),
 };
 #undef QUEUE_FLAG_NAME

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5b56ed306cd9..7868daaf6de0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2633,7 +2633,7 @@ struct request_queue *__blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	blk_mq_add_queue_tag_set(set, q);
 	blk_mq_map_swqueue(q);

-	if (!blk_queue_no_sched(q)) {
+	if (!blk_queue_admin(q)) {
 		int ret;

 		ret = elevator_init_mq(q);

diff --git a/block/elevator.c b/block/elevator.c
index 8fb8754222fa..d6abba76c89e 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -1111,7 +1111,7 @@ static int __elevator_change(struct request_queue *q, const char *name)

 static inline bool elv_support_iosched(struct request_queue *q)
 {
-	if (q->mq_ops && blk_queue_no_sched(q))
+	if (q->mq_ops && blk_queue_admin(q))
 		return false;
 	return true;
 }

diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index 5d9504e65725..9fb358007e43 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1702,7 +1702,7 @@ static int null_add_dev(struct nullb_device *dev)
 	if (dev->queue_mode == NULL_Q_MQ) {
 		unsigned long q_flags = g_no_sched ?
-			QUEUE_FLAG_MQ_NO_SCHED_DEFAULT : QUEUE_FLAG_MQ_DEFAULT;
+			QUEUE_FLAG_MQ_ADMIN_DEFAULT : QUEUE_FLAG_MQ_DEFAULT;

 		if (shared_tags) {
 			nullb->tag_set = &tag_set;

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 7048e1444210..a920d13c3538 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3041,7 +3041,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	ctrl->ctrl.admin_tagset = &ctrl->admin_tag_set;

 	ctrl->ctrl.admin_q = __blk_mq_init_queue(&ctrl->admin_tag_set,
-			QUEUE_FLAG_MQ_NO_SCHED_DEFAULT);
+			QUEUE_FLAG_MQ_ADMIN_DEFAULT);
 	if (IS_ERR(ctrl->ctrl.admin_q)) {
 		ret = PTR_ERR(ctrl->ctrl.admin_q);
 		goto out_free_admin_tag_set;

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 73a3bd980fc9..10716a00a6b4 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1499,7 +1499,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 		dev->ctrl.admin_tagset = &dev->admin_tagset;

 		dev->ctrl.admin_q = __blk_mq_init_queue(&dev->admin_tagset,
-				QUEUE_FLAG_MQ_NO_SCHED_DEFAULT);
+				QUEUE_FLAG_MQ_ADMIN_DEFAULT);
 		if (IS_ERR(dev->ctrl.admin_q)) {
 			blk_mq_free_tag_set(&dev->admin_tagset);
 			return -ENOMEM;

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index d61c057c0a71..f901b3dafac5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -770,7 +770,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	}

 	ctrl->ctrl.admin_q = __blk_mq_init_queue(&ctrl->admin_tag_set,
-			QUEUE_FLAG_MQ_NO_SCHED_DEFAULT);
+			QUEUE_FLAG_MQ_ADMIN_DEFAULT);
 	if (IS_ERR(ctrl->ctrl.admin_q)) {
 		error = PTR_ERR(ctrl->ctrl.admin_q);
 		goto out_free_tagset;

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index c689621c2187..8fca59e6b3c3 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -381,7 +381,7 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
 	ctrl->ctrl.admin_tagset = &ctrl->admin_tag_set;

 	ctrl->ctrl.admin_q = __blk_mq_init_queue(&ctrl->admin_tag_set,
-			QUEUE_FLAG_MQ_NO_SCHED_DEFAULT);
+			QUEUE_FLAG_MQ_ADMIN_DEFAULT);
 	if (IS_ERR(ctrl->ctrl.admin_q)) {
 		error = PTR_ERR(ctrl->ctrl.admin_q);
 		goto out_free_tagset;

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a2b110ec422d..2dbc7524a169 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -699,7 +699,7 @@ struct request_queue {
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED	28	/* queue has been quiesced */
 #define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process REQ_PREEMPT requests */
-#define QUEUE_FLAG_NO_SCHED	30	/* no scheduler allowed */
+#define QUEUE_FLAG_ADMIN	30	/* admin queue */

 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -709,8 +709,8 @@ struct request_queue {
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
 				 (1 << QUEUE_FLAG_POLL))

-#define QUEUE_FLAG_MQ_NO_SCHED_DEFAULT	(QUEUE_FLAG_MQ_DEFAULT | \
-					 (1 << QUEUE_FLAG_NO_SCHED))
+#define QUEUE_FLAG_MQ_ADMIN_DEFAULT	(QUEUE_FLAG_MQ_DEFAULT | \
+					 (1 << QUEUE_FLAG_ADMIN))

 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
 void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
@@ -743,7 +743,7 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
 #define blk_queue_preempt_only(q)				\
 	test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
-#define blk_queue_no_sched(q)	test_bit(QUEUE_FLAG_NO_SCHED, &(q)->queue_flags)
+#define blk_queue_admin(q)	test_bit(QUEUE_FLAG_ADMIN, &(q)->queue_flags)

 extern int blk_set_preempt_only(struct request_queue *q);
 extern void blk_clear_preempt_only(struct request_queue *q);