From patchwork Wed Aug 23 17:56:32 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 9918137
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal,
    Bart Van Assche, Hannes Reinecke, Johannes Thumshirn
Subject:
[PATCH 4/5] skd: Avoid double completions in case of a timeout
Date: Wed, 23 Aug 2017 10:56:32 -0700
Message-Id: <20170823175633.12680-5-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.14.0
In-Reply-To: <20170823175633.12680-1-bart.vanassche@wdc.com>
References: <20170823175633.12680-1-bart.vanassche@wdc.com>
Sender: linux-block-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-block@vger.kernel.org

Prevent normal request completion and the timeout handler from running
concurrently by calling blk_mq_complete_request() instead of
blk_mq_end_request() from skd_end_request(). This also prevents the
block layer from reusing a request while the firmware is still
processing it. Convert skd_softirq_done() to blk-mq. Pass the pointer
to skd_softirq_done() to the block layer core through
blk_mq_ops.complete instead of by calling blk_queue_softirq_done().
Pass the pointer to skd_timed_out() to the block layer core through
blk_mq_ops.timeout instead of by calling blk_queue_rq_timed_out().
The timeout handler has been tested as follows:

    echo 1 > /sys/block/skd0/io-timeout-fail &&
        (cd /sys/kernel/debug/fail_io_timeout &&
         echo 100 > probability && echo N > task-filter && echo 1 > times)

Fixes: commit a74d5b76fab9 ("skd: Switch to block layer timeout mechanism")
Reported-by: Christoph Hellwig
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
---
 drivers/block/skd_main.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 0d6340884009..ff288f1a5dec 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -184,6 +184,7 @@ struct skd_request_context {
 	struct fit_comp_error_info err_info;
+	blk_status_t status;
 };
 
 struct skd_special_context {
@@ -596,19 +597,22 @@ static blk_status_t skd_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
-static enum blk_eh_timer_return skd_timed_out(struct request *req)
+static enum blk_eh_timer_return skd_timed_out(struct request *req,
+					      bool reserved)
 {
 	struct skd_device *skdev = req->q->queuedata;
 
 	dev_err(&skdev->pdev->dev, "request with tag %#x timed out\n",
 		blk_mq_unique_tag(req));
 
-	return BLK_EH_HANDLED;
+	return BLK_EH_RESET_TIMER;
 }
 
 static void skd_end_request(struct skd_device *skdev, struct request *req,
 			    blk_status_t error)
 {
+	struct skd_request_context *skreq = blk_mq_rq_to_pdu(req);
+
 	if (unlikely(error)) {
 		char *cmd = (rq_data_dir(req) == READ) ?
 			    "read" : "write";
 		u32 lba = (u32)blk_rq_pos(req);
@@ -621,19 +625,15 @@ static void skd_end_request(struct skd_device *skdev, struct request *req,
 	dev_dbg(&skdev->pdev->dev, "id=0x%x error=%d\n", req->tag, error);
 
-	blk_mq_end_request(req, error);
+	skreq->status = error;
+	blk_mq_complete_request(req);
 }
 
-/* Only called in case of a request timeout */
 static void skd_softirq_done(struct request *req)
 {
-	struct skd_device *skdev = req->q->queuedata;
 	struct skd_request_context *skreq = blk_mq_rq_to_pdu(req);
-	unsigned long flags;
 
-	spin_lock_irqsave(&skdev->lock, flags);
-	skd_end_request(skdev, blk_mq_rq_from_pdu(skreq), BLK_STS_TIMEOUT);
-	spin_unlock_irqrestore(&skdev->lock, flags);
+	blk_mq_end_request(req, skreq->status);
 }
 
 static bool skd_preop_sg_list(struct skd_device *skdev,
@@ -2821,6 +2821,8 @@ static int skd_cons_sksb(struct skd_device *skdev)
 
 static const struct blk_mq_ops skd_mq_ops = {
 	.queue_rq	= skd_mq_queue_rq,
+	.complete	= skd_softirq_done,
+	.timeout	= skd_timed_out,
 	.init_request	= skd_init_request,
 	.exit_request	= skd_exit_request,
 };
@@ -2884,8 +2886,6 @@ static int skd_cons_disk(struct skd_device *skdev)
 	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 
 	blk_queue_rq_timeout(q, 8 * HZ);
-	blk_queue_rq_timed_out(q, skd_timed_out);
-	blk_queue_softirq_done(q, skd_softirq_done);
 
 	spin_lock_irqsave(&skdev->lock, flags);
 	dev_dbg(&skdev->pdev->dev, "stopping queue\n");