From patchwork Fri Sep 1 18:49:58 2017
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9935107
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, Christoph Hellwig,
 Bart Van Assche, linux-scsi@vger.kernel.org, "Martin K. Petersen",
 "James E. J. Bottomley"
Cc: Oleksandr Natalenko, Johannes Thumshirn, Tejun Heo, Ming Lei
Subject: [PATCH V2 8/8] SCSI: freeze block queue when SCSI device is put into quiesce
Date: Sat, 2 Sep 2017 02:49:58 +0800
Message-Id: <20170901184958.19452-10-ming.lei@redhat.com>
In-Reply-To: <20170901184958.19452-1-ming.lei@redhat.com>
References: <20170901184958.19452-1-ming.lei@redhat.com>

Simply quiescing the SCSI device and waiting for completion of the I/O
already dispatched to the SCSI queue isn't safe: requests can still be
allocated, but none of them can be dispatched while the device is in
QUIESCE, so it is easy to use up all requests. Then no request can be
allocated for RQF_PREEMPT, and the system may hang somewhere, such as
when sending sync_cache or start_stop commands in the system suspend
path.
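
For illustration only (this sketch is not part of the patch; the wrapper
function below is hypothetical, while scsi_device_quiesce() and
scsi_execute() are existing mainline symbols), the hang can be summarized
as:

	#include <scsi/scsi_device.h>

	/* Hypothetical sketch of the problematic ordering described above. */
	static void quiesce_hang_sketch(struct scsi_device *sdev)
	{
		/*
		 * The device enters SDEV_QUIESCE: submitters can still
		 * allocate new requests from sdev->request_queue, but none
		 * of them is dispatched any more, so they pile up and
		 * eventually exhaust the queue's requests/tags.
		 */
		scsi_device_quiesce(sdev);

		/*
		 * The suspend path later calls scsi_execute() (e.g. for
		 * sync_cache or start_stop). blk_get_request() then waits
		 * for a free request, but nothing completes while the queue
		 * is quiesced, so the RQF_PREEMPT command is never allocated
		 * and suspend hangs.
		 */
	}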
Before quiescing the SCSI device, this patch freezes the block queue
first, so no new request can enter the queue any more, and all pending
requests are drained once blk_freeze_queue() returns. This patch also
uses __blk_get_request() to allocate the request with RQF_PREEMPT, so
that the allocation can succeed even though the block queue is frozen.

Signed-off-by: Ming Lei
---
 drivers/scsi/scsi_lib.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f6097b89d5d3..a59544012b68 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -243,10 +243,12 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 	struct request *req;
 	struct scsi_request *rq;
 	int ret = DRIVER_ERROR << 24;
+	unsigned flag = sdev->sdev_state == SDEV_QUIESCE ? BLK_REQ_PREEMPT : 0;
 
-	req = blk_get_request(sdev->request_queue,
+	req = __blk_get_request(sdev->request_queue,
 			data_direction == DMA_TO_DEVICE ?
-			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, __GFP_RECLAIM);
+			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, __GFP_RECLAIM,
+			flag);
 	if (IS_ERR(req))
 		return ret;
 	rq = scsi_req(req);
@@ -2890,6 +2892,19 @@ scsi_device_quiesce(struct scsi_device *sdev)
 {
 	int err;
 
+	/*
+	 * Simply quiescing the SCSI device isn't safe, it is easy
+	 * to use up requests because all these allocated requests
+	 * can't be dispatched when the device is put in QUIESCE.
+	 * Then no request can be allocated and we may hang
+	 * somewhere, such as system suspend/resume.
+	 *
+	 * So we freeze the block queue first, no new request can
+	 * enter the queue any more, and pending requests are drained
+	 * once blk_freeze_queue() returns.
+	 */
+	blk_freeze_queue_preempt(sdev->request_queue);
+
 	mutex_lock(&sdev->state_mutex);
 	err = scsi_device_set_state(sdev, SDEV_QUIESCE);
 	mutex_unlock(&sdev->state_mutex);
@@ -2926,6 +2941,8 @@ void scsi_device_resume(struct scsi_device *sdev)
 	    scsi_device_set_state(sdev, SDEV_RUNNING) == 0)
 		scsi_run_queue(sdev->request_queue);
 	mutex_unlock(&sdev->state_mutex);
+
+	blk_unfreeze_queue_preempt(sdev->request_queue);
 }
 EXPORT_SYMBOL(scsi_device_resume);
 
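
For reference, blk_freeze_queue_preempt(), blk_unfreeze_queue_preempt(),
__blk_get_request() and BLK_REQ_PREEMPT are introduced by earlier patches
in this series; the sketch below only assumes the signatures visible in
the hunks above, and shows how the freeze/allocate/unfreeze steps are
expected to pair up:

	#include <linux/blkdev.h>
	#include <linux/err.h>
	#include <scsi/scsi_device.h>

	/*
	 * Sketch only: mirrors the freeze/allocate/unfreeze ordering that this
	 * patch builds into scsi_device_quiesce()/scsi_device_resume().
	 */
	static void preempt_request_during_freeze_sketch(struct scsi_device *sdev)
	{
		struct request_queue *q = sdev->request_queue;
		struct request *req;

		/* Stop new normal requests and drain the pending ones. */
		blk_freeze_queue_preempt(q);	/* from an earlier patch in this series */

		/* Only a BLK_REQ_PREEMPT allocation may proceed while frozen. */
		req = __blk_get_request(q, REQ_OP_SCSI_IN, __GFP_RECLAIM,
					BLK_REQ_PREEMPT);
		if (!IS_ERR(req))
			blk_put_request(req);

		/* Re-open the queue for normal allocation and dispatch. */
		blk_unfreeze_queue_preempt(q);	/* from an earlier patch in this series */
	}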