From patchwork Thu Oct 31 22:55:25 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11222045
From: Bart Van Assche
To: "Martin K . Petersen" , "James E . J . Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig , Bart Van Assche , Yaniv Gardi , Subhash Jadavani , Stanley Chu , Avri Altman , Tomas Winkler
Subject: [PATCH 1/4] ufs: Avoid busy-waiting by eliminating tag conflicts
Date: Thu, 31 Oct 2019 15:55:25 -0700
Message-Id: <20191031225528.233895-2-bvanassche@acm.org>
In-Reply-To: <20191031225528.233895-1-bvanassche@acm.org>
References: <20191031225528.233895-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

Instead of tracking which tags are in use in the ufs_hba.lrb_in_use bitmask, rely on the block layer tag allocation mechanism. This patch removes the following busy-waiting loop if ufshcd_issue_devman_upiu_cmd() and the block layer accidentally allocate the same tag for a SCSI request:

* ufshcd_queuecommand() returns SCSI_MLQUEUE_HOST_BUSY.
* The SCSI core requeues the SCSI command.
Cc: Yaniv Gardi Cc: Subhash Jadavani Cc: Stanley Chu Cc: Avri Altman Cc: Tomas Winkler Signed-off-by: Bart Van Assche --- drivers/scsi/ufs/ufshcd.c | 117 +++++++++++++++----------------------- drivers/scsi/ufs/ufshcd.h | 9 +-- 2 files changed, 50 insertions(+), 76 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 9fc05a535624..da9677fb2d5d 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -497,8 +497,8 @@ static void ufshcd_print_tmrs(struct ufs_hba *hba, unsigned long bitmap) static void ufshcd_print_host_state(struct ufs_hba *hba) { dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state); - dev_err(hba->dev, "lrb in use=0x%lx, outstanding reqs=0x%lx tasks=0x%lx\n", - hba->lrb_in_use, hba->outstanding_reqs, hba->outstanding_tasks); + dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n", + hba->outstanding_reqs, hba->outstanding_tasks); dev_err(hba->dev, "saved_err=0x%x, saved_uic_err=0x%x\n", hba->saved_err, hba->saved_uic_err); dev_err(hba->dev, "Device power mode=%d, UIC link state=%d\n", @@ -1596,6 +1596,24 @@ int ufshcd_hold(struct ufs_hba *hba, bool async) } EXPORT_SYMBOL_GPL(ufshcd_hold); +static bool ufshcd_is_busy(struct request *req, void *priv, bool reserved) +{ + int *busy = priv; + + (*busy)++; + return false; +} + +/* Whether or not any tag is in use by a request that is in progress. 
*/ +static bool ufshcd_any_tag_in_use(struct ufs_hba *hba) +{ + struct request_queue *q = hba->tag_alloc_queue; + int busy = 0; + + blk_mq_tagset_busy_iter(q->tag_set, ufshcd_is_busy, &busy); + return busy; +} + static void ufshcd_gate_work(struct work_struct *work) { struct ufs_hba *hba = container_of(work, struct ufs_hba, @@ -1619,7 +1637,7 @@ static void ufshcd_gate_work(struct work_struct *work) if (hba->clk_gating.active_reqs || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL - || hba->lrb_in_use || hba->outstanding_tasks + || ufshcd_any_tag_in_use(hba) || hba->outstanding_tasks || hba->active_uic_cmd || hba->uic_async_done) goto rel_lock; @@ -1673,7 +1691,7 @@ static void __ufshcd_release(struct ufs_hba *hba) if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL - || hba->lrb_in_use || hba->outstanding_tasks + || ufshcd_any_tag_in_use(hba) || hba->outstanding_tasks || hba->active_uic_cmd || hba->uic_async_done || ufshcd_eh_in_progress(hba)) return; @@ -2443,22 +2461,9 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) hba->req_abort_count = 0; - /* acquire the tag to make sure device cmds don't use it */ - if (test_and_set_bit_lock(tag, &hba->lrb_in_use)) { - /* - * Dev manage command in progress, requeue the command. - * Requeuing the command helps in cases where the request *may* - * find different tag instead of waiting for dev manage command - * completion. 
- */ - err = SCSI_MLQUEUE_HOST_BUSY; - goto out; - } - err = ufshcd_hold(hba, true); if (err) { err = SCSI_MLQUEUE_HOST_BUSY; - clear_bit_unlock(tag, &hba->lrb_in_use); goto out; } WARN_ON(hba->clk_gating.state != CLKS_ON); @@ -2479,7 +2484,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) err = ufshcd_map_sg(hba, lrbp); if (err) { lrbp->cmd = NULL; - clear_bit_unlock(tag, &hba->lrb_in_use); goto out; } /* Make sure descriptors are ready before ringing the doorbell */ @@ -2625,44 +2629,6 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba, return err; } -/** - * ufshcd_get_dev_cmd_tag - Get device management command tag - * @hba: per-adapter instance - * @tag_out: pointer to variable with available slot value - * - * Get a free slot and lock it until device management command - * completes. - * - * Returns false if free slot is unavailable for locking, else - * return true with tag value in @tag. - */ -static bool ufshcd_get_dev_cmd_tag(struct ufs_hba *hba, int *tag_out) -{ - int tag; - bool ret = false; - unsigned long tmp; - - if (!tag_out) - goto out; - - do { - tmp = ~hba->lrb_in_use; - tag = find_last_bit(&tmp, hba->nutrs); - if (tag >= hba->nutrs) - goto out; - } while (test_and_set_bit_lock(tag, &hba->lrb_in_use)); - - *tag_out = tag; - ret = true; -out: - return ret; -} - -static inline void ufshcd_put_dev_cmd_tag(struct ufs_hba *hba, int tag) -{ - clear_bit_unlock(tag, &hba->lrb_in_use); -} - /** * ufshcd_exec_dev_cmd - API for sending device management requests * @hba: UFS hba @@ -2675,6 +2641,8 @@ static inline void ufshcd_put_dev_cmd_tag(struct ufs_hba *hba, int tag) static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, enum dev_cmd_type cmd_type, int timeout) { + struct request_queue *q = hba->tag_alloc_queue; + struct request *req; struct ufshcd_lrb *lrbp; int err; int tag; @@ -2688,7 +2656,10 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, * Even though we use wait_event() which sleeps indefinitely, * the 
maximum wait time is bounded by SCSI request timeout. */ - wait_event(hba->dev_cmd.tag_wq, ufshcd_get_dev_cmd_tag(hba, &tag)); + req = blk_get_request(q, REQ_OP_DRV_IN, 0); + if (IS_ERR(req)) + return PTR_ERR(req); + tag = req->tag; init_completion(&wait); lrbp = &hba->lrb[tag]; @@ -2712,8 +2683,7 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, err ? "query_complete_err" : "query_complete"); out_put_tag: - ufshcd_put_dev_cmd_tag(hba, tag); - wake_up(&hba->dev_cmd.tag_wq); + blk_put_request(req); up_read(&hba->clk_scaling_lock); return err; } @@ -4832,7 +4802,6 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba, cmd->result = result; /* Mark completed command as NULL in LRB */ lrbp->cmd = NULL; - clear_bit_unlock(index, &hba->lrb_in_use); /* Do not touch lrbp after scsi done */ cmd->scsi_done(cmd); __ufshcd_release(hba); @@ -4854,9 +4823,6 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba, hba->outstanding_reqs ^= completed_reqs; ufshcd_clk_scaling_update_busy(hba); - - /* we might have free'd some tags above */ - wake_up(&hba->dev_cmd.tag_wq); } /** @@ -5785,6 +5751,8 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, enum dev_cmd_type cmd_type, enum query_opcode desc_op) { + struct request_queue *q = hba->tag_alloc_queue; + struct request *req; struct ufshcd_lrb *lrbp; int err = 0; int tag; @@ -5794,7 +5762,10 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, down_read(&hba->clk_scaling_lock); - wait_event(hba->dev_cmd.tag_wq, ufshcd_get_dev_cmd_tag(hba, &tag)); + req = blk_get_request(q, REQ_OP_DRV_IN, 0); + if (IS_ERR(req)) + return PTR_ERR(req); + tag = req->tag; init_completion(&wait); lrbp = &hba->lrb[tag]; @@ -5868,8 +5839,7 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, } } - ufshcd_put_dev_cmd_tag(hba, tag); - wake_up(&hba->dev_cmd.tag_wq); + blk_put_request(req); up_read(&hba->clk_scaling_lock); return err; } @@ -6164,9 +6134,6 @@ static int ufshcd_abort(struct scsi_cmnd *cmd) 
hba->lrb[tag].cmd = NULL; spin_unlock_irqrestore(host->host_lock, flags); - clear_bit_unlock(tag, &hba->lrb_in_use); - wake_up(&hba->dev_cmd.tag_wq); - out: if (!err) { err = SUCCESS; @@ -6873,6 +6840,11 @@ static int ufshcd_probe_hba(struct ufs_hba *hba) int ret; ktime_t start = ktime_get(); + ret = -ENOMEM; + hba->tag_alloc_queue = blk_mq_init_queue(&hba->host->tag_set); + if (!hba->tag_alloc_queue) + goto out; + ret = ufshcd_link_startup(hba); if (ret) goto out; @@ -7505,6 +7477,10 @@ static void ufshcd_hba_exit(struct ufs_hba *hba) ufshcd_setup_hba_vreg(hba, false); hba->is_powered = false; } + + if (hba->tag_alloc_queue) + blk_cleanup_queue(hba->tag_alloc_queue); + hba->tag_alloc_queue = NULL; } static int @@ -8346,9 +8322,6 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) init_rwsem(&hba->clk_scaling_lock); - /* Initialize device management tag acquire wait queue */ - init_waitqueue_head(&hba->dev_cmd.tag_wq); - ufshcd_init_clk_gating(hba); ufshcd_init_clk_scaling(hba); diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index e3593cce23c1..8fa33fb71237 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -212,13 +212,11 @@ struct ufs_query { * @type: device management command type - Query, NOP OUT * @lock: lock to allow one command at a time * @complete: internal commands completion - * @tag_wq: wait queue until free command slot is available */ struct ufs_dev_cmd { enum dev_cmd_type type; struct mutex lock; struct completion *complete; - wait_queue_head_t tag_wq; struct ufs_query query; }; @@ -480,7 +478,10 @@ struct ufs_stats { * @host: Scsi_Host instance of the driver * @dev: device handle * @lrb: local reference block - * @lrb_in_use: lrb in use + * @tag_alloc_queue: None of the exported block layer functions allows to + * allocate a tag directly from a tag set. 
The purpose of this request queue + is to support allocating tags from hba->host->tag_set before any LUNs have + been associated with this HBA. * @outstanding_tasks: Bits representing outstanding task requests * @outstanding_reqs: Bits representing outstanding transfer requests * @capabilities: UFS Controller Capabilities @@ -538,6 +539,7 @@ struct ufs_hba { struct Scsi_Host *host; struct device *dev; + struct request_queue *tag_alloc_queue; /* * This field is to keep a reference to "scsi_device" corresponding to * "UFS device" W-LU. */ @@ -558,7 +560,6 @@ struct ufs_hba { u32 ahit; struct ufshcd_lrb *lrb; - unsigned long lrb_in_use; unsigned long outstanding_tasks; unsigned long outstanding_reqs;

From patchwork Thu Oct 31 22:55:26 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11222047
From: Bart Van Assche
To: "Martin K . Petersen" , "James E . J . Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig , Bart Van Assche , Yaniv Gardi , Subhash Jadavani , Stanley Chu , Avri Altman , Tomas Winkler
Subject: [PATCH 2/4] ufs: Simplify the clock scaling mechanism implementation
Date: Thu, 31 Oct 2019 15:55:26 -0700
Message-Id: <20191031225528.233895-3-bvanassche@acm.org>
In-Reply-To: <20191031225528.233895-1-bvanassche@acm.org>
References: <20191031225528.233895-1-bvanassche@acm.org>

Scaling the clock is only safe while no commands are in progress. Use blk_mq_{un,}freeze_queue() to block submission of new commands and to wait for ongoing commands to complete.
Cc: Yaniv Gardi Cc: Subhash Jadavani Cc: Stanley Chu Cc: Avri Altman Cc: Tomas Winkler Signed-off-by: Bart Van Assche --- drivers/scsi/ufs/ufshcd.c | 131 ++++++++++++-------------------------- drivers/scsi/ufs/ufshcd.h | 3 - 2 files changed, 40 insertions(+), 94 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index da9677fb2d5d..b7e27d86a0ec 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -292,14 +292,46 @@ static inline void ufshcd_disable_irq(struct ufs_hba *hba) static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba) { - if (atomic_dec_and_test(&hba->scsi_block_reqs_cnt)) - scsi_unblock_requests(hba->host); + struct scsi_device *sdev; + + blk_mq_unfreeze_queue(hba->tag_alloc_queue); + shost_for_each_device(sdev, hba->host) + blk_mq_unfreeze_queue(sdev->request_queue); } -static void ufshcd_scsi_block_requests(struct ufs_hba *hba) +static int ufshcd_scsi_block_requests(struct ufs_hba *hba, + unsigned long timeout) { - if (atomic_inc_return(&hba->scsi_block_reqs_cnt) == 1) - scsi_block_requests(hba->host); + struct scsi_device *sdev; + unsigned long deadline = jiffies + timeout; + bool success = true; + + if (timeout == ULONG_MAX) { + shost_for_each_device(sdev, hba->host) + blk_mq_freeze_queue(sdev->request_queue); + blk_mq_freeze_queue(hba->tag_alloc_queue); + return 0; + } + + shost_for_each_device(sdev, hba->host) + blk_freeze_queue_start(sdev->request_queue); + blk_freeze_queue_start(hba->tag_alloc_queue); + if (blk_mq_freeze_queue_wait_timeout(hba->tag_alloc_queue, + max_t(long, 0, deadline - jiffies)) <= 0) + goto err; + shost_for_each_device(sdev, hba->host) { + if (blk_mq_freeze_queue_wait_timeout(sdev->request_queue, + max_t(long, 0, deadline - jiffies)) <= 0) { + success = false; + break; + } + } + if (!success) { +err: + ufshcd_scsi_unblock_requests(hba); + return -ETIMEDOUT; + } + return 0; } static void ufshcd_add_cmd_upiu_trace(struct ufs_hba *hba, unsigned int tag, @@ -1005,65 +1037,6 
@@ static bool ufshcd_is_devfreq_scaling_required(struct ufs_hba *hba, return false; } -static int ufshcd_wait_for_doorbell_clr(struct ufs_hba *hba, - u64 wait_timeout_us) -{ - unsigned long flags; - int ret = 0; - u32 tm_doorbell; - u32 tr_doorbell; - bool timeout = false, do_last_check = false; - ktime_t start; - - ufshcd_hold(hba, false); - spin_lock_irqsave(hba->host->host_lock, flags); - /* - * Wait for all the outstanding tasks/transfer requests. - * Verify by checking the doorbell registers are clear. - */ - start = ktime_get(); - do { - if (hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL) { - ret = -EBUSY; - goto out; - } - - tm_doorbell = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL); - tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); - if (!tm_doorbell && !tr_doorbell) { - timeout = false; - break; - } else if (do_last_check) { - break; - } - - spin_unlock_irqrestore(hba->host->host_lock, flags); - schedule(); - if (ktime_to_us(ktime_sub(ktime_get(), start)) > - wait_timeout_us) { - timeout = true; - /* - * We might have scheduled out for long time so make - * sure to check if doorbells are cleared by this time - * or not. 
- */ - do_last_check = true; - } - spin_lock_irqsave(hba->host->host_lock, flags); - } while (tm_doorbell || tr_doorbell); - - if (timeout) { - dev_err(hba->dev, - "%s: timedout waiting for doorbell to clear (tm=0x%x, tr=0x%x)\n", - __func__, tm_doorbell, tr_doorbell); - ret = -EBUSY; - } -out: - spin_unlock_irqrestore(hba->host->host_lock, flags); - ufshcd_release(hba); - return ret; -} - /** * ufshcd_scale_gear - scale up/down UFS gear * @hba: per adapter instance @@ -1113,26 +1086,15 @@ static int ufshcd_scale_gear(struct ufs_hba *hba, bool scale_up) static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba) { - #define DOORBELL_CLR_TOUT_US (1000 * 1000) /* 1 sec */ - int ret = 0; /* * make sure that there are no outstanding requests when * clock scaling is in progress */ - ufshcd_scsi_block_requests(hba); - down_write(&hba->clk_scaling_lock); - if (ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) { - ret = -EBUSY; - up_write(&hba->clk_scaling_lock); - ufshcd_scsi_unblock_requests(hba); - } - - return ret; + return ufshcd_scsi_block_requests(hba, HZ); } static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba) { - up_write(&hba->clk_scaling_lock); ufshcd_scsi_unblock_requests(hba); } @@ -1562,7 +1524,7 @@ int ufshcd_hold(struct ufs_hba *hba, bool async) */ /* fallthrough */ case CLKS_OFF: - ufshcd_scsi_block_requests(hba); + ufshcd_scsi_block_requests(hba, ULONG_MAX); hba->clk_gating.state = REQ_CLKS_ON; trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state); @@ -2428,9 +2390,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) BUG(); } - if (!down_read_trylock(&hba->clk_scaling_lock)) - return SCSI_MLQUEUE_HOST_BUSY; - spin_lock_irqsave(hba->host->host_lock, flags); switch (hba->ufshcd_state) { case UFSHCD_STATE_OPERATIONAL: @@ -2495,7 +2454,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) out_unlock: spin_unlock_irqrestore(hba->host->host_lock, flags); out: - 
up_read(&hba->clk_scaling_lock); return err; } @@ -2649,8 +2607,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, struct completion wait; unsigned long flags; - down_read(&hba->clk_scaling_lock); - /* * Get free slot, sleep if slots are unavailable. * Even though we use wait_event() which sleeps indefinitely, @@ -2684,7 +2640,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, out_put_tag: blk_put_request(req); - up_read(&hba->clk_scaling_lock); return err; } @@ -5483,7 +5438,7 @@ static void ufshcd_check_errors(struct ufs_hba *hba) /* handle fatal errors only when link is functional */ if (hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL) { /* block commands from scsi mid-layer */ - ufshcd_scsi_block_requests(hba); + ufshcd_scsi_block_requests(hba, ULONG_MAX); hba->ufshcd_state = UFSHCD_STATE_EH_SCHEDULED; @@ -5760,8 +5715,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, unsigned long flags; u32 upiu_flags; - down_read(&hba->clk_scaling_lock); - req = blk_get_request(q, REQ_OP_DRV_IN, 0); if (IS_ERR(req)) return PTR_ERR(req); @@ -5840,7 +5793,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, } blk_put_request(req); - up_read(&hba->clk_scaling_lock); return err; } @@ -8320,8 +8272,6 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) /* Initialize mutex for device management commands */ mutex_init(&hba->dev_cmd.lock); - init_rwsem(&hba->clk_scaling_lock); - ufshcd_init_clk_gating(hba); ufshcd_init_clk_scaling(hba); @@ -8387,7 +8337,6 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) /* Hold auto suspend until async scan completes */ pm_runtime_get_sync(dev); - atomic_set(&hba->scsi_block_reqs_cnt, 0); /* * We are assuming that device wasn't put in sleep/power-down * state exclusively during the boot stage before kernel. 
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 8fa33fb71237..fd88a9de3519 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -522,7 +522,6 @@ struct ufs_stats { * @urgent_bkops_lvl: keeps track of urgent bkops level for device * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for * device is known or not. - * @scsi_block_reqs_cnt: reference counting for scsi block requests */ struct ufs_hba { void __iomem *mmio_base; @@ -728,9 +727,7 @@ struct ufs_hba { enum bkops_status urgent_bkops_lvl; bool is_urgent_bkops_lvl_checked; - struct rw_semaphore clk_scaling_lock; struct ufs_desc_size desc_size; - atomic_t scsi_block_reqs_cnt; struct device bsg_dev; struct request_queue *bsg_queue;

From patchwork Thu Oct 31 22:55:27 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11222049
From: Bart Van Assche
To: "Martin K . Petersen" , "James E . J . Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig , Bart Van Assche , Gilad Broner , Yaniv Gardi , Subhash Jadavani , Stanley Chu , Avri Altman , Tomas Winkler
Subject: [PATCH 3/4] ufs: Remove the SCSI timeout handler
Date: Thu, 31 Oct 2019 15:55:27 -0700
Message-Id: <20191031225528.233895-4-bvanassche@acm.org>
In-Reply-To: <20191031225528.233895-1-bvanassche@acm.org>
References: <20191031225528.233895-1-bvanassche@acm.org>

Whether or not an UFS command gets requeued, one of the hba->lrb[].cmd pointers will point at that command. In other words, the UFS SCSI timeout handler will always return BLK_EH_DONE.
Since always returning BLK_EH_DONE has the same effect as not defining a timeout handler, remove the UFS SCSI timeout handler. See also commit f550c65b543b ("scsi: ufs: implement scsi host timeout handler"). Cc: Gilad Broner Cc: Yaniv Gardi Cc: Subhash Jadavani Cc: Stanley Chu Cc: Avri Altman Cc: Tomas Winkler Signed-off-by: Bart Van Assche --- drivers/scsi/ufs/ufshcd.c | 36 ------------------------------------ 1 file changed, 36 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index b7e27d86a0ec..8c969fab5d92 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -6936,41 +6936,6 @@ static void ufshcd_async_scan(void *data, async_cookie_t cookie) ufshcd_probe_hba(hba); } -static enum blk_eh_timer_return ufshcd_eh_timed_out(struct scsi_cmnd *scmd) -{ - unsigned long flags; - struct Scsi_Host *host; - struct ufs_hba *hba; - int index; - bool found = false; - - if (!scmd || !scmd->device || !scmd->device->host) - return BLK_EH_DONE; - - host = scmd->device->host; - hba = shost_priv(host); - if (!hba) - return BLK_EH_DONE; - - spin_lock_irqsave(host->host_lock, flags); - - for_each_set_bit(index, &hba->outstanding_reqs, hba->nutrs) { - if (hba->lrb[index].cmd == scmd) { - found = true; - break; - } - } - - spin_unlock_irqrestore(host->host_lock, flags); - - /* - * Bypass SCSI error handling and reset the block layer timer if this - * SCSI command was not actually dispatched to UFS driver, otherwise - * let SCSI layer handle the error as usual. - */ - return found ?
BLK_EH_DONE : BLK_EH_RESET_TIMER; -} - static const struct attribute_group *ufshcd_driver_groups[] = { &ufs_sysfs_unit_descriptor_group, &ufs_sysfs_lun_attributes_group, @@ -6989,7 +6954,6 @@ static struct scsi_host_template ufshcd_driver_template = { .eh_abort_handler = ufshcd_abort, .eh_device_reset_handler = ufshcd_eh_device_reset_handler, .eh_host_reset_handler = ufshcd_eh_host_reset_handler, - .eh_timed_out = ufshcd_eh_timed_out, .this_id = -1, .sg_tablesize = SG_ALL, .cmd_per_lun = UFSHCD_CMD_PER_LUN,

From patchwork Thu Oct 31 22:55:28 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11222051
From: Bart Van Assche
To: "Martin K . Petersen" , "James E . J . Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig , Bart Van Assche , Gilad Broner , Yaniv Gardi , Subhash Jadavani , Stanley Chu , Avri Altman , Tomas Winkler
Subject: [PATCH 4/4] ufs: Remove superfluous memory barriers
Date: Thu, 31 Oct 2019 15:55:28 -0700
Message-Id: <20191031225528.233895-5-bvanassche@acm.org>
In-Reply-To: <20191031225528.233895-1-bvanassche@acm.org>
References: <20191031225528.233895-1-bvanassche@acm.org>

Calling wmb() after having written to a doorbell slows down code and does not help to commit the doorbell write faster. Hence remove such wmb() calls.
Cc: Gilad Broner Cc: Yaniv Gardi Cc: Subhash Jadavani Cc: Stanley Chu Cc: Avri Altman Cc: Tomas Winkler Signed-off-by: Bart Van Assche --- drivers/scsi/ufs/ufshcd.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 8c969fab5d92..ace929df7bab 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -1864,8 +1864,6 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag) ufshcd_clk_scaling_start_busy(hba); __set_bit(task_tag, &hba->outstanding_reqs); ufshcd_writel(hba, 1 << task_tag, REG_UTP_TRANSFER_REQ_DOOR_BELL); - /* Make sure that doorbell is committed immediately */ - wmb(); ufshcd_add_command_trace(hba, task_tag, "send"); } @@ -5598,8 +5596,6 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba, wmb(); ufshcd_writel(hba, 1 << free_slot, REG_UTP_TASK_REQ_DOOR_BELL); - /* Make sure that doorbell is committed immediately */ - wmb(); spin_unlock_irqrestore(host->host_lock, flags);