From patchwork Mon Jul 31 22:14:37 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13335539
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, "Martin K. Petersen",
    Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v5 1/7] block: Introduce the flag QUEUE_FLAG_NO_ZONE_WRITE_LOCK
Date: Mon, 31 Jul 2023 15:14:37 -0700
Message-ID: <20230731221458.437440-2-bvanassche@acm.org>
In-Reply-To: <20230731221458.437440-1-bvanassche@acm.org>
References: <20230731221458.437440-1-bvanassche@acm.org>
List-ID: linux-block@vger.kernel.org

Writes in sequential write required zones must happen at the write pointer.
Even if the submitter of the write commands (e.g. a filesystem) submits
writes for sequential write required zones in order, the block layer or the
storage controller may reorder these write commands. The zone locking
mechanism in the mq-deadline I/O scheduler serializes write commands for
sequential zones. Some but not all storage controllers require this
serialization. Introduce a new request queue flag to allow block drivers to
indicate that they preserve the order of write commands and thus do not
require serialization of writes per zone.
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 include/linux/blkdev.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 2f5371b8482c..de5e05cc34fa 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -534,6 +534,11 @@ struct request_queue {
 #define QUEUE_FLAG_NONROT	6	/* non-rotational device (SSD) */
 #define QUEUE_FLAG_VIRT		QUEUE_FLAG_NONROT /* paravirt device */
 #define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
+/*
+ * Do not serialize sequential writes (REQ_OP_WRITE, REQ_OP_WRITE_ZEROES) sent
+ * to a sequential write required zone (BLK_ZONE_TYPE_SEQWRITE_REQ).
+ */
+#define QUEUE_FLAG_NO_ZONE_WRITE_LOCK 8
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
@@ -597,6 +602,11 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_skip_tagset_quiesce(q) \
 	test_bit(QUEUE_FLAG_SKIP_TAGSET_QUIESCE, &(q)->queue_flags)
 
+static inline bool blk_queue_no_zone_write_lock(struct request_queue *q)
+{
+	return test_bit(QUEUE_FLAG_NO_ZONE_WRITE_LOCK, &q->queue_flags);
+}
+
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);

From patchwork Mon Jul 31 22:14:38 2023
From: Bart Van Assche
To: Jens Axboe
Cc:
    linux-block@vger.kernel.org, Christoph Hellwig, "Martin K. Petersen",
    Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v5 2/7] block/mq-deadline: Only use zone locking if necessary
Date: Mon, 31 Jul 2023 15:14:38 -0700
Message-ID: <20230731221458.437440-3-bvanassche@acm.org>

Measurements have shown that limiting the queue depth to one per zone for
zoned writes has a significant negative performance impact on zoned UFS
devices. Hence this patch, which disables zone locking in the mq-deadline
scheduler if the storage controller preserves the command order. This patch
is based on the following assumptions:
- It happens infrequently that zoned write requests are reordered by the
  block layer.
- The I/O priority of all write requests is the same per zone.
- Either no I/O scheduler is used or an I/O scheduler is used that submits
  write requests per zone in LBA order.

Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 block/mq-deadline.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 02a916ba62ee..1f4124dd4a0b 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -338,6 +338,16 @@ static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
 	return rq;
 }
 
+/*
+ * Use zone write locking if QUEUE_FLAG_NO_ZONE_WRITE_LOCK has not been set.
+ * Not using zone write locking is only safe if the block driver preserves the
+ * request order.
+ */
+static bool dd_use_zone_write_locking(struct request_queue *q)
+{
+	return blk_queue_is_zoned(q) && !blk_queue_no_zone_write_lock(q);
+}
+
 /*
  * For the specified data direction, return the next request to
  * dispatch using arrival ordered lists.
@@ -353,7 +363,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 		return NULL;
 
 	rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
-	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
+	if (data_dir == DD_READ || !dd_use_zone_write_locking(rq->q))
 		return rq;
 
 	/*
@@ -398,7 +408,7 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 	if (!rq)
 		return NULL;
 
-	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
+	if (data_dir == DD_READ || !dd_use_zone_write_locking(rq->q))
 		return rq;
 
 	/*
@@ -526,8 +536,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	}
 
 	/*
-	 * For a zoned block device, if we only have writes queued and none of
-	 * them can be dispatched, rq will be NULL.
+	 * For a zoned block device that requires write serialization, if we
+	 * only have writes queued and none of them can be dispatched, rq will
+	 * be NULL.
 	 */
 	if (!rq)
 		return NULL;
@@ -552,7 +563,8 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	/*
 	 * If the request needs its target zone locked, do it.
 	 */
-	blk_req_zone_write_lock(rq);
+	if (dd_use_zone_write_locking(rq->q))
+		blk_req_zone_write_lock(rq);
 
 	rq->rq_flags |= RQF_STARTED;
 	return rq;
 }
@@ -933,7 +945,7 @@ static void dd_finish_request(struct request *rq)
 
 	atomic_inc(&per_prio->stats.completed);
 
-	if (blk_queue_is_zoned(q)) {
+	if (dd_use_zone_write_locking(rq->q)) {
 		unsigned long flags;
 
 		spin_lock_irqsave(&dd->zone_lock, flags);

From patchwork Mon Jul 31 22:14:39 2023
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, "Martin K. Petersen",
    Bart Van Assche, Damien Le Moal, Ming Lei, "James E.J. Bottomley"
Subject: [PATCH v5 3/7] scsi: core: Retry unaligned zoned writes
Date: Mon, 31 Jul 2023 15:14:39 -0700
Message-ID: <20230731221458.437440-4-bvanassche@acm.org>

If zoned writes (REQ_OP_WRITE) for a sequential write required zone have a
starting LBA that differs from the write pointer, e.g. because zoned writes
have been reordered, then the storage device will respond with an UNALIGNED
WRITE COMMAND error. Send commands that failed with an unaligned write
error to the SCSI error handler if zone write locking is disabled.
Let the SCSI error handler sort SCSI commands per LBA before resubmitting
these.

If zone write locking is disabled, increase the number of retries for write
commands sent to a sequential zone to the maximum number of outstanding
commands because in the worst case the number of times reordered zoned
writes have to be retried is (number of outstanding writes per sequential
zone) - 1.

Cc: Martin K. Petersen
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 drivers/scsi/scsi_error.c | 37 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/scsi_lib.c   |  1 +
 drivers/scsi/sd.c         |  3 +++
 include/scsi/scsi.h       |  1 +
 4 files changed, 42 insertions(+)

diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index c67cdcdc3ba8..24a4e49215af 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -698,6 +699,16 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd)
 		fallthrough;
 	case ILLEGAL_REQUEST:
+		/*
+		 * Unaligned write command. This indicates that zoned writes
+		 * have been received by the device in the wrong order. If zone
+		 * write locking is disabled, retry after all pending commands
+		 * have completed.
+		 */
+		if (sshdr.asc == 0x21 && sshdr.ascq == 0x04 &&
+		    blk_queue_no_zone_write_lock(scsi_cmd_to_rq(scmd)->q))
+			return NEEDS_DELAYED_RETRY;
+
 		if (sshdr.asc == 0x20 ||	/* Invalid command operation code */
 		    sshdr.asc == 0x21 ||	/* Logical block address out of range */
 		    sshdr.asc == 0x22 ||	/* Invalid function */
@@ -2186,6 +2197,25 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost,
 }
 EXPORT_SYMBOL_GPL(scsi_eh_ready_devs);
 
+/*
+ * Returns a negative value if @_a has a lower LBA than @_b, zero if
+ * both have the same LBA and a positive value otherwise.
+ */
+static int scsi_cmp_lba(void *priv, const struct list_head *_a,
+			const struct list_head *_b)
+{
+	struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry);
+	struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry);
+	const sector_t pos_a = blk_rq_pos(scsi_cmd_to_rq(a));
+	const sector_t pos_b = blk_rq_pos(scsi_cmd_to_rq(b));
+
+	if (pos_a < pos_b)
+		return -1;
+	if (pos_a > pos_b)
+		return 1;
+	return 0;
+}
+
 /**
  * scsi_eh_flush_done_q - finish processed commands or retry them.
  * @done_q: list_head of processed commands.
@@ -2194,6 +2224,13 @@ void scsi_eh_flush_done_q(struct list_head *done_q)
 {
 	struct scsi_cmnd *scmd, *next;
 
+	/*
+	 * Sort pending SCSI commands in LBA order. This is important if one of
+	 * the SCSI devices associated with @shost is a zoned block device for
+	 * which zone write locking is disabled.
+	 */
+	list_sort(NULL, done_q, scsi_cmp_lba);
+
 	list_for_each_entry_safe(scmd, next, done_q, eh_entry) {
 		list_del_init(&scmd->eh_entry);
 		if (scsi_device_online(scmd->device) &&
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 59176946ab56..69da8aee13df 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1443,6 +1443,7 @@ static void scsi_complete(struct request *rq)
 	case ADD_TO_MLQUEUE:
 		scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY);
 		break;
+	case NEEDS_DELAYED_RETRY:
 	default:
 		scsi_eh_scmd_add(cmd);
 		break;
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 68b12afa0721..27b9ebe05b90 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1235,6 +1235,9 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 	cmd->transfersize = sdp->sector_size;
 	cmd->underflow = nr_blocks << 9;
 	cmd->allowed = sdkp->max_retries;
+	if (blk_queue_no_zone_write_lock(rq->q) &&
+	    blk_rq_is_seq_zoned_write(rq))
+		cmd->allowed += rq->q->nr_requests;
 	cmd->sdb.length = nr_blocks * sdp->sector_size;
 
 	SCSI_LOG_HLQUEUE(1,
diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
index ec093594ba53..6600db046227 100644
--- a/include/scsi/scsi.h
+++ b/include/scsi/scsi.h
@@ -93,6 +93,7 @@ static inline int scsi_status_is_check_condition(int status)
  * Internal return values.
  */
 enum scsi_disposition {
+	NEEDS_DELAYED_RETRY = 0x2000,
 	NEEDS_RETRY = 0x2001,
 	SUCCESS = 0x2002,
 	FAILED = 0x2003,

From patchwork Mon Jul 31 22:14:40 2023
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, "Martin K. Petersen",
    Bart Van Assche, Douglas Gilbert, "James E.J. Bottomley"
Subject: [PATCH v5 4/7] scsi: scsi_debug: Support disabling zone write locking
Date: Mon, 31 Jul 2023 15:14:40 -0700
Message-ID: <20230731221458.437440-5-bvanassche@acm.org>

Make it easier to test handling of QUEUE_FLAG_NO_ZONE_WRITE_LOCK by adding
support for setting this flag for scsi_debug request queues.

Cc: Martin K. Petersen
Cc: Douglas Gilbert
Signed-off-by: Bart Van Assche
Acked-by: Douglas Gilbert
---
 drivers/scsi/scsi_debug.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 9c0af50501f9..57c6242bfb26 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -832,6 +832,7 @@ static int dix_reads;
 static int dif_errors;
 
 /* ZBC global data */
+static bool sdeb_no_zwrl;
 static bool sdeb_zbc_in_use;	/* true for host-aware and host-managed disks */
 static int sdeb_zbc_zone_cap_mb;
 static int sdeb_zbc_zone_size_mb;
@@ -5138,9 +5139,13 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev)
 
 static int scsi_debug_slave_alloc(struct scsi_device *sdp)
 {
+	struct request_queue *q = sdp->request_queue;
+
 	if (sdebug_verbose)
 		pr_info("slave_alloc <%u %u %u %llu>\n",
 		       sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
+	if (sdeb_no_zwrl)
+		blk_queue_flag_set(QUEUE_FLAG_NO_ZONE_WRITE_LOCK, q);
 
 	return 0;
 }
@@ -5738,6 +5743,7 @@ module_param_named(ndelay, sdebug_ndelay, int, S_IRUGO | S_IWUSR);
 module_param_named(no_lun_0, sdebug_no_lun_0, int, S_IRUGO | S_IWUSR);
 module_param_named(no_rwlock, sdebug_no_rwlock, bool, S_IRUGO | S_IWUSR);
 module_param_named(no_uld, sdebug_no_uld, int, S_IRUGO);
+module_param_named(no_zone_write_lock, sdeb_no_zwrl, bool, S_IRUGO);
 module_param_named(num_parts, sdebug_num_parts, int, S_IRUGO);
 module_param_named(num_tgts, sdebug_num_tgts, int, S_IRUGO | S_IWUSR);
 module_param_named(opt_blks, sdebug_opt_blks, int, S_IRUGO);
@@ -5812,6 +5818,8 @@ MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
 MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)");
 MODULE_PARM_DESC(no_rwlock, "don't protect user data reads+writes (def=0)");
 MODULE_PARM_DESC(no_uld, "stop ULD (e.g. sd driver) attaching (def=0))");
+MODULE_PARM_DESC(no_zone_write_lock,
+	"Disable serialization of zoned writes (def=0)");
 MODULE_PARM_DESC(num_parts, "number of partitions(def=0)");
 MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)");
 MODULE_PARM_DESC(opt_blks, "optimal transfer length in blocks (def=1024)");

From patchwork Mon Jul 31 22:14:41 2023
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, "Martin K. Petersen",
    Bart Van Assche, Douglas Gilbert, "James E.J. Bottomley"
Subject: [PATCH v5 5/7] scsi: scsi_debug: Support injecting unaligned write errors
Date: Mon, 31 Jul 2023 15:14:41 -0700
Message-ID: <20230731221458.437440-6-bvanassche@acm.org>

Allow user space software, e.g. a blktests test, to inject unaligned write
errors.

Cc: Martin K. Petersen
Cc: Douglas Gilbert
Signed-off-by: Bart Van Assche
Acked-by: Douglas Gilbert
---
 drivers/scsi/scsi_debug.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 57c6242bfb26..051b0605f11f 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -181,6 +181,7 @@ static const char *sdebug_version_date = "20210520";
 #define SDEBUG_OPT_NO_CDB_NOISE		0x4000
 #define SDEBUG_OPT_HOST_BUSY		0x8000
 #define SDEBUG_OPT_CMD_ABORT		0x10000
+#define SDEBUG_OPT_UNALIGNED_WRITE	0x20000
 #define SDEBUG_OPT_ALL_NOISE (SDEBUG_OPT_NOISE | SDEBUG_OPT_Q_NOISE | \
 			      SDEBUG_OPT_RESET_NOISE)
 #define SDEBUG_OPT_ALL_INJECTING (SDEBUG_OPT_RECOVERED_ERR | \
@@ -188,7 +189,8 @@ static const char *sdebug_version_date = "20210520";
 			SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR | \
 			SDEBUG_OPT_SHORT_TRANSFER | \
 			SDEBUG_OPT_HOST_BUSY | \
-			SDEBUG_OPT_CMD_ABORT)
+			SDEBUG_OPT_CMD_ABORT | \
+			SDEBUG_OPT_UNALIGNED_WRITE)
 #define SDEBUG_OPT_RECOV_DIF_DIX (SDEBUG_OPT_RECOVERED_ERR | \
 				  SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR)
@@ -3587,6 +3589,14 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
 
+	if (unlikely(sdebug_opts & SDEBUG_OPT_UNALIGNED_WRITE &&
+		     atomic_read(&sdeb_inject_pending))) {
+		atomic_set(&sdeb_inject_pending, 0);
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE,
+				UNALIGNED_WRITE_ASCQ);
+		return check_condition_result;
+	}
+
 	switch (cmd[0]) {
 	case WRITE_16:
 		ei_lba = 0;

From patchwork Mon Jul 31 22:14:42 2023
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jul 2023 15:15:35 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, Christoph Hellwig , "Martin K . Petersen" , Bart Van Assche , Avri Altman , Damien Le Moal , Ming Lei , "James E.J. Bottomley" , Stanley Chu , Can Guo , Asutosh Das , "Bao D. Nguyen" , Bean Huo , Arthur Simchaev Subject: [PATCH v5 6/7] scsi: ufs: Split an if-condition Date: Mon, 31 Jul 2023 15:14:42 -0700 Message-ID: <20230731221458.437440-7-bvanassche@acm.org> X-Mailer: git-send-email 2.41.0.585.gd2178a4bd4-goog In-Reply-To: <20230731221458.437440-1-bvanassche@acm.org> References: <20230731221458.437440-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Make the next patch in this series easier to read. No functionality is changed. Cc: Martin K. Petersen Cc: Avri Altman Cc: Christoph Hellwig Cc: Damien Le Moal Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 129446775796..ae7b868f9c26 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4352,8 +4352,9 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) } spin_unlock_irqrestore(hba->host->host_lock, flags); - if (update && - !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { + if (!update) + return; + if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); ufshcd_auto_hibern8_enable(hba); From patchwork Mon Jul 31 22:14:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13335545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, "Martin K. Petersen",
 Bart Van Assche, Avri Altman, Damien Le Moal, Ming Lei,
 "James E.J. Bottomley", Stanley Chu, Can Guo, Bean Huo, Asutosh Das,
 "Bao D. Nguyen", Arthur Simchaev
Subject: [PATCH v5 7/7] scsi: ufs: Disable zone write locking
Date: Mon, 31 Jul 2023 15:14:43 -0700
Message-ID: <20230731221458.437440-8-bvanassche@acm.org>
In-Reply-To: <20230731221458.437440-1-bvanassche@acm.org>
References: <20230731221458.437440-1-bvanassche@acm.org>
X-Mailing-List: linux-block@vger.kernel.org

From the UFSHCI 4.0 specification, about the legacy (single queue) mode:

"The host controller always process transfer requests in-order according
to the order submitted to the list. In case of multiple commands with
single doorbell register ringing (batch mode), The dispatch order for
these transfer requests by host controller will base on their index in
the List. A transfer request with lower index value will be executed
before a transfer request with higher index value."

From the UFSHCI 4.0 specification, about the MCQ mode:

"Command Submission
1. Host SW writes an Entry to SQ
2. Host SW updates SQ doorbell tail pointer

Command Processing
3. After fetching the Entry, Host Controller updates SQ doorbell head
   pointer
4. Host controller sends COMMAND UPIU to UFS device"

In other words, for both legacy and MCQ mode, UFS controllers are
required to forward commands to the UFS device in the order these
commands have been received from the host.

Notes:
- For legacy mode this is only correct if the host submits one command
  at a time. The UFS driver does this.
- Also in legacy mode, the command order is not preserved if
  auto-hibernation is enabled in the UFS controller. Hence, enable zone
  write locking if auto-hibernation is enabled.

This patch improves performance as follows on my test setup:
- With the mq-deadline scheduler: 2.5x more IOPS for small writes.
- When not using an I/O scheduler compared to using mq-deadline with
  zone locking: 4x more IOPS for small writes.

Cc: Martin K. Petersen
Cc: Avri Altman
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 40 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index ae7b868f9c26..3c99516c38fa 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -4337,23 +4337,53 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
 }
 EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit);
 
+static void ufshcd_update_no_zone_write_lock(struct ufs_hba *hba,
+					     bool set_no_zone_write_lock)
+{
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host)
+		blk_freeze_queue_start(sdev->request_queue);
+	shost_for_each_device(sdev, hba->host) {
+		struct request_queue *q = sdev->request_queue;
+
+		blk_mq_freeze_queue_wait(q);
+		if (set_no_zone_write_lock)
+			blk_queue_flag_set(QUEUE_FLAG_NO_ZONE_WRITE_LOCK, q);
+		else
+			blk_queue_flag_clear(QUEUE_FLAG_NO_ZONE_WRITE_LOCK, q);
+		blk_mq_unfreeze_queue(q);
+	}
+}
+
 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 {
 	unsigned long flags;
-	bool update = false;
+	bool prev_state, new_state, update = false;
 
 	if (!ufshcd_is_auto_hibern8_supported(hba))
 		return;
 
 	spin_lock_irqsave(hba->host->host_lock, flags);
+	prev_state = ufshcd_is_auto_hibern8_enabled(hba);
 	if (hba->ahit != ahit) {
 		hba->ahit = ahit;
 		update = true;
 	}
+	new_state = ufshcd_is_auto_hibern8_enabled(hba);
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
 
 	if (!update)
 		return;
+	if (!is_mcq_enabled(hba) && !prev_state && new_state) {
+		/*
+		 * Auto-hibernation will be enabled. Enable write locking for
+		 * zoned writes since auto-hibernation may cause reordering of
+		 * zoned writes when using the legacy mode of the UFS host
+		 * controller.
+		 */
+		ufshcd_update_no_zone_write_lock(hba, false);
+	}
 	if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) {
 		ufshcd_rpm_get_sync(hba);
 		ufshcd_hold(hba);
@@ -4361,6 +4391,13 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 		ufshcd_release(hba);
 		ufshcd_rpm_put_sync(hba);
 	}
+	if (!is_mcq_enabled(hba) && prev_state && !new_state) {
+		/*
+		 * Auto-hibernation has been disabled. Disable write locking
+		 * for zoned writes.
+		 */
+		ufshcd_update_no_zone_write_lock(hba, true);
+	}
 }
 EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update);
 
@@ -5140,6 +5177,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 
 	ufshcd_hpb_configure(hba, sdev);
 
+	blk_queue_flag_set(QUEUE_FLAG_NO_ZONE_WRITE_LOCK, q);
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
 		blk_queue_update_dma_alignment(q, SZ_4K - 1);