From patchwork Tue Sep 19 03:49:20 2017
X-Patchwork-Submitter: Ming Lei <ming.lei@redhat.com>
X-Patchwork-Id: 9958127
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, linux-block@vger.kernel.org, Christoph Hellwig,
	linux-scsi@vger.kernel.org, "Martin K. Petersen", "James E. J. Bottomley"
Cc: Bart Van Assche, Oleksandr Natalenko, Johannes Thumshirn,
	Cathy Avery, Ming Lei
Subject: [PATCH V5 05/10] block: rename .mq_freeze_wq and .mq_freeze_depth
Date: Tue, 19 Sep 2017 11:49:20 +0800
Message-Id: <20170919034925.9894-6-ming.lei@redhat.com>
In-Reply-To: <20170919034925.9894-1-ming.lei@redhat.com>
References: <20170919034925.9894-1-ming.lei@redhat.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Both fields are used by the legacy and the blk-mq path, so rename them
to .freeze_wq and .freeze_depth to avoid confusing people.

No functional change.
Tested-by: Cathy Avery
Tested-by: Oleksandr Natalenko
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c       | 12 ++++++------
 block/blk-mq.c         | 12 ++++++------
 include/linux/blkdev.h |  4 ++--
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 97837a5bf838..8889920567d2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -630,7 +630,7 @@ void blk_set_queue_dying(struct request_queue *q)
 	 * We need to ensure that processes currently waiting on
 	 * the queue are notified as well.
 	 */
-	wake_up_all(&q->mq_freeze_wq);
+	wake_up_all(&q->freeze_wq);
 }
 EXPORT_SYMBOL_GPL(blk_set_queue_dying);
 
@@ -793,14 +793,14 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		/*
 		 * read pair of barrier in blk_freeze_queue_start(),
 		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth or
+		 * .q_usage_counter and reading .freeze_depth or
 		 * queue dying flag, otherwise the following wait may
 		 * never return if the two reads are reordered.
 		 */
 		smp_rmb();
 
-		ret = wait_event_interruptible(q->mq_freeze_wq,
-				!atomic_read(&q->mq_freeze_depth) ||
+		ret = wait_event_interruptible(q->freeze_wq,
+				!atomic_read(&q->freeze_depth) ||
 				blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
@@ -819,7 +819,7 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
 	struct request_queue *q =
 		container_of(ref, struct request_queue, q_usage_counter);
 
-	wake_up_all(&q->mq_freeze_wq);
+	wake_up_all(&q->freeze_wq);
 }
 
 static void blk_rq_timed_out_timer(unsigned long data)
@@ -891,7 +891,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	q->bypass_depth = 1;
 	__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
 
-	init_waitqueue_head(&q->mq_freeze_wq);
+	init_waitqueue_head(&q->freeze_wq);
 
 	/*
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1e9bf2b987a1..988da93d85a4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -122,7 +122,7 @@ void blk_freeze_queue_start(struct request_queue *q)
 {
 	int freeze_depth;
 
-	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
+	freeze_depth = atomic_inc_return(&q->freeze_depth);
 	if (freeze_depth == 1) {
 		percpu_ref_kill(&q->q_usage_counter);
 		if (q->mq_ops)
@@ -135,14 +135,14 @@ void blk_freeze_queue_wait(struct request_queue *q)
 {
 	if (!q->mq_ops)
 		blk_drain_queue(q);
-	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
+	wait_event(q->freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
 }
 EXPORT_SYMBOL_GPL(blk_freeze_queue_wait);
 
 int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
				     unsigned long timeout)
 {
-	return wait_event_timeout(q->mq_freeze_wq,
+	return wait_event_timeout(q->freeze_wq,
					percpu_ref_is_zero(&q->q_usage_counter),
					timeout);
 }
@@ -170,11 +170,11 @@ void blk_unfreeze_queue(struct request_queue *q)
 {
 	int freeze_depth;
 
-	freeze_depth = atomic_dec_return(&q->mq_freeze_depth);
+	freeze_depth = atomic_dec_return(&q->freeze_depth);
 	WARN_ON_ONCE(freeze_depth < 0);
 	if (!freeze_depth) {
 		percpu_ref_reinit(&q->q_usage_counter);
-		wake_up_all(&q->mq_freeze_wq);
+		wake_up_all(&q->freeze_wq);
 	}
 }
 EXPORT_SYMBOL_GPL(blk_unfreeze_queue);
@@ -2440,7 +2440,7 @@ void blk_mq_free_queue(struct request_queue *q)
 /* Basically redo blk_mq_init_queue with queue frozen */
 static void blk_mq_queue_reinit(struct request_queue *q)
 {
-	WARN_ON_ONCE(!atomic_read(&q->mq_freeze_depth));
+	WARN_ON_ONCE(!atomic_read(&q->freeze_depth));
 
 	blk_mq_debugfs_unregister_hctxs(q);
 	blk_mq_sysfs_unregister(q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 460294bb0fa5..b8053bcc6b5f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -564,7 +564,7 @@ struct request_queue {
 	struct mutex		sysfs_lock;
 
 	int			bypass_depth;
-	atomic_t		mq_freeze_depth;
+	atomic_t		freeze_depth;
 
 #if defined(CONFIG_BLK_DEV_BSG)
 	bsg_job_fn		*bsg_job_fn;
@@ -576,7 +576,7 @@ struct request_queue {
 	struct throtl_data *td;
 #endif
 	struct rcu_head		rcu_head;
-	wait_queue_head_t	mq_freeze_wq;
+	wait_queue_head_t	freeze_wq;
 	struct percpu_ref	q_usage_counter;
 	struct list_head	all_q_node;
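
[Editor's note, not part of the patch] For readers unfamiliar with the freeze
machinery being renamed here, below is a minimal userspace sketch of the
counter-plus-waitqueue pattern that .freeze_depth and .freeze_wq implement.
It is not kernel code: a pthread mutex/condvar stands in for the kernel wait
queue, plain ints stand in for atomic_t and the percpu_ref, and all fake_*
names are invented for illustration only. It is meant to show why a single
shared wait queue serves both submitters waiting for an unfreeze and a
freezer waiting for in-flight requests to drain, which is why the fields are
not blk-mq specific. Build with: gcc -pthread sketch.c

/*
 * Illustrative userspace sketch only -- not part of the patch and not
 * kernel code.  A pthread mutex/condvar stands in for the kernel wait
 * queue; plain ints stand in for atomic_t and the percpu_ref.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_queue {
	pthread_mutex_t lock;
	pthread_cond_t  freeze_wq;	/* plays the role of q->freeze_wq */
	int		freeze_depth;	/* plays the role of q->freeze_depth */
	int		in_flight;	/* plays the role of q_usage_counter */
};

/* Analogous to blk_freeze_queue_start(): freezes nest by bumping the depth. */
static void fake_freeze_start(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	q->freeze_depth++;
	pthread_mutex_unlock(&q->lock);
}

/* Analogous to blk_freeze_queue_wait(): block until nothing is in flight. */
static void fake_freeze_wait(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	while (q->in_flight > 0)
		pthread_cond_wait(&q->freeze_wq, &q->lock);
	pthread_mutex_unlock(&q->lock);
}

/* Analogous to blk_unfreeze_queue(): the last unfreezer wakes waiters. */
static void fake_unfreeze(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	if (--q->freeze_depth == 0)
		pthread_cond_broadcast(&q->freeze_wq);
	pthread_mutex_unlock(&q->lock);
}

/* Analogous to blk_queue_enter(): submitters wait while the queue is frozen. */
static void fake_queue_enter(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	while (q->freeze_depth > 0)
		pthread_cond_wait(&q->freeze_wq, &q->lock);
	q->in_flight++;
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct fake_queue q = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.freeze_wq = PTHREAD_COND_INITIALIZER,
	};

	fake_freeze_start(&q);
	fake_freeze_wait(&q);	/* returns at once: nothing is in flight */
	fake_unfreeze(&q);
	fake_queue_enter(&q);	/* would have blocked while still frozen */
	printf("freeze_depth=%d in_flight=%d\n", q.freeze_depth, q.in_flight);
	return 0;
}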