From patchwork Mon Mar 27 12:06:58 2017
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9646467
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, Christoph Hellwig
Cc: Bart Van Assche, Hannes Reinecke, Ming Lei, Tejun Heo
Subject: [PATCH v3 4/4] block: block new I/O just after queue is set as dying
Date: Mon, 27 Mar 2017 20:06:58 +0800
Message-Id: <20170327120658.29864-5-tom.leiming@gmail.com>
In-Reply-To: <20170327120658.29864-1-tom.leiming@gmail.com>
References: <20170327120658.29864-1-tom.leiming@gmail.com>
List-ID: linux-block@vger.kernel.org

Before commit 780db2071a ("blk-mq: decouple blk-mq freezing from
generic bypassing"), the dying flag was checked before entering the
queue. Tejun converted that check into a check on .mq_freeze_depth,
assuming the counter is increased just after the dying flag is set.
Unfortunately, blk_set_queue_dying() does not do that.

This patch calls blk_freeze_queue_start() in blk_set_queue_dying(),
so that new I/O is blocked as soon as the queue is set as dying.
Since blk_set_queue_dying() is always called on the remove path of a
block device, and the queue is cleaned up later, we need not worry
about undoing the counter.
Cc: Bart Van Assche
Cc: Tejun Heo
Reviewed-by: Hannes Reinecke
Signed-off-by: Ming Lei
Reviewed-by: Johannes Thumshirn
Reviewed-by: Bart Van Assche
---
 block/blk-core.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 60f364e1d36b..e22c4ea002ec 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,6 +500,13 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
+	/*
+	 * When queue DYING flag is set, we need to block new req
+	 * entering queue, so we call blk_freeze_queue_start() to
+	 * prevent I/O from crossing blk_queue_enter().
+	 */
+	blk_freeze_queue_start(q);
+
 	if (q->mq_ops)
 		blk_mq_wake_waiters(q);
 	else {
@@ -672,9 +679,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		/*
 		 * read pair of barrier in blk_freeze_queue_start(),
 		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth,
-		 * otherwise the following wait may never return if the
-		 * two reads are reordered.
+		 * .q_usage_counter and reading .mq_freeze_depth or
+		 * queue dying flag, otherwise the following wait may
+		 * never return if the two reads are reordered.
 		 */
 		smp_rmb();