From patchwork Sat Aug 11 07:12:10 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10563323
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Alan Stern, Christoph Hellwig,
    Bart Van Assche, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn,
    Adrian Hunter, "James E.J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org
Subject: [RFC PATCH V2 07/17] SCSI: prepare for introducing admin queue for legacy path
Date: Sat, 11 Aug 2018 15:12:10 +0800
Message-Id: <20180811071220.357-8-ming.lei@redhat.com>
In-Reply-To: <20180811071220.357-1-ming.lei@redhat.com>
References: <20180811071220.357-1-ming.lei@redhat.com>

Use scsi_is_admin_queue() and scsi_get_scsi_dev() to retrieve the
'scsi_device' on the legacy path. The same approach could be used on the
SCSI_MQ path too, just not very efficiently; that will be dealt with in
the patch that introduces the admin queue for SCSI_MQ. (A short
illustrative sketch of the intended lookup follows the diff below.)

Cc: Alan Stern
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Adrian Hunter
Cc: "James E.J. Bottomley"
Petersen" Cc: linux-scsi@vger.kernel.org Signed-off-by: Ming Lei --- drivers/scsi/scsi_lib.c | 37 +++++++++++++++++++++++++++++-------- 1 file changed, 29 insertions(+), 8 deletions(-) diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 62699adaef61..d0da89322425 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -46,6 +46,20 @@ static DEFINE_MUTEX(scsi_sense_cache_mutex); static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd); +/* For admin queue, its queuedata is NULL */ +static inline bool scsi_is_admin_queue(struct request_queue *q) +{ + return !q->queuedata; +} + +/* This helper can only be used in req prep stage */ +static inline struct scsi_device *scsi_get_scsi_dev(struct request *rq) +{ + if (scsi_is_admin_queue(rq->q)) + return scsi_req(rq)->sdev; + return rq->q->queuedata; +} + static inline struct kmem_cache * scsi_select_sense_cache(bool unchecked_isa_dma) { @@ -1376,10 +1390,9 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req) } static int -scsi_prep_return(struct request_queue *q, struct request *req, int ret) +scsi_prep_return(struct scsi_device *sdev, struct request_queue *q, + struct request *req, int ret) { - struct scsi_device *sdev = q->queuedata; - switch (ret) { case BLKPREP_KILL: case BLKPREP_INVALID: @@ -1411,7 +1424,7 @@ scsi_prep_return(struct request_queue *q, struct request *req, int ret) static int scsi_prep_fn(struct request_queue *q, struct request *req) { - struct scsi_device *sdev = q->queuedata; + struct scsi_device *sdev = scsi_get_scsi_dev(req); struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); int ret; @@ -1436,7 +1449,7 @@ static int scsi_prep_fn(struct request_queue *q, struct request *req) ret = scsi_setup_cmnd(sdev, req); out: - return scsi_prep_return(q, req, ret); + return scsi_prep_return(sdev, q, req, ret); } static void scsi_unprep_fn(struct request_queue *q, struct request *req) @@ -1613,6 +1626,9 @@ static int scsi_lld_busy(struct request_queue *q) if (blk_queue_dying(q)) return 0; + if (WARN_ON_ONCE(scsi_is_admin_queue(q))) + return 0; + shost = sdev->host; /* @@ -1816,7 +1832,7 @@ static void scsi_request_fn(struct request_queue *q) __releases(q->queue_lock) __acquires(q->queue_lock) { - struct scsi_device *sdev = q->queuedata; + struct scsi_device *sdev; struct Scsi_Host *shost; struct scsi_cmnd *cmd; struct request *req; @@ -1825,7 +1841,6 @@ static void scsi_request_fn(struct request_queue *q) * To start with, we keep looping until the queue is empty, or until * the host is no longer able to accept any more requests. */ - shost = sdev->host; for (;;) { int rtn; /* @@ -1837,6 +1852,10 @@ static void scsi_request_fn(struct request_queue *q) if (!req) break; + cmd = blk_mq_rq_to_pdu(req); + sdev = cmd->device; + shost = sdev->host; + if (unlikely(!scsi_device_online(sdev))) { sdev_printk(KERN_ERR, sdev, "rejecting I/O to offline device\n"); @@ -1854,7 +1873,6 @@ static void scsi_request_fn(struct request_queue *q) blk_start_request(req); spin_unlock_irq(q->queue_lock); - cmd = blk_mq_rq_to_pdu(req); if (cmd != req->special) { printk(KERN_CRIT "impossible request in %s.\n" "please mail a stack trace to " @@ -2332,6 +2350,9 @@ struct scsi_device *scsi_device_from_queue(struct request_queue *q) { struct scsi_device *sdev = NULL; + /* admin queue won't be exposed to external users */ + WARN_ON_ONCE(scsi_is_admin_queue(q)); + if (q->mq_ops) { if (q->mq_ops == &scsi_mq_ops) sdev = q->queuedata;