From patchwork Wed Aug 9 20:23:42 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13348456
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v7 1/7] block: Introduce the use_zone_write_lock member variable
Date: Wed, 9 Aug 2023 13:23:42 -0700
Message-ID: <20230809202355.1171455-2-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org

Writes in sequential write required zones must happen at the write
pointer. Even if the submitter of the write commands (e.g. a filesystem)
submits writes for sequential write required zones in order, the block
layer or the storage controller may reorder these write commands. The
zone locking mechanism in the mq-deadline I/O scheduler serializes write
commands for sequential zones. Some but not all storage controllers
require this serialization. Introduce a new request queue limit member
variable to allow block drivers to indicate that they preserve the order
of write commands and thus do not require serialization of writes per
zone.
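The stacking rule this patch adds can be sketched with a toy userspace model (illustrative only; the `toy_` names are not kernel APIs): unlike limits that are combined with min() or max() across the stack, the new boolean is inherited from the bottom driver alone.

```c
/* Toy model (not kernel code) of how a boolean queue limit propagates when
 * stacking block devices: the stacked (top) device simply inherits the
 * bottom driver's value, unlike limits that are combined with min()/max(). */
#include <stdbool.h>

struct toy_queue_limits {
    unsigned int max_sectors;
    bool use_zone_write_lock;
};

/* Mirrors the rule added to blk_stack_limits() by this patch. */
static void toy_stack_limits(struct toy_queue_limits *t,
                             const struct toy_queue_limits *b)
{
    /* Most limits are combined; e.g. max_sectors takes the minimum ... */
    t->max_sectors = t->max_sectors < b->max_sectors ? t->max_sectors
                                                     : b->max_sectors;
    /* ... but whether the zone write lock is needed depends on the
     * bottom driver only. */
    t->use_zone_write_lock = b->use_zone_write_lock;
}
```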
Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/blk-settings.c   | 6 ++++++
 include/linux/blkdev.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 0046b447268f..b75c97971860 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -56,6 +56,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->alignment_offset = 0;
 	lim->io_opt = 0;
 	lim->misaligned = 0;
+	lim->use_zone_write_lock = true;
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
@@ -685,6 +686,11 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 						   b->max_secure_erase_sectors);
 	t->zone_write_granularity = max(t->zone_write_granularity,
 					b->zone_write_granularity);
+	/*
+	 * Whether or not the zone write lock should be used depends on the
+	 * bottom driver only.
+	 */
+	t->use_zone_write_lock = b->use_zone_write_lock;
 	t->zoned = max(t->zoned, b->zoned);
 	return ret;
 }
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 2f5371b8482c..deffa1f13444 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -316,6 +316,7 @@ struct queue_limits {
 	unsigned char		misaligned;
 	unsigned char		discard_misaligned;
 	unsigned char		raid_partial_stripes_expensive;
+	bool			use_zone_write_lock;
 	enum blk_zoned_model	zoned;

From patchwork Wed Aug 9 20:23:43 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13348457
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v7 2/7] block/mq-deadline: Only use zone locking if necessary
Date: Wed, 9 Aug 2023 13:23:43 -0700
Message-ID: <20230809202355.1171455-3-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org

Measurements have shown that limiting the queue depth to one per zone
for zoned writes has a significant negative performance impact on zoned
UFS devices. Hence this patch, which disables zone locking by the
mq-deadline I/O scheduler if the storage controller preserves the
command order. This patch is based on the following assumptions:
- It happens infrequently that zoned write requests are reordered by the
  block layer.
- The I/O priority of all write requests is the same per zone.
- Either no I/O scheduler is used or an I/O scheduler is used that
  serializes write requests per zone.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index f958e79277b8..cd2504205ff8 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -338,6 +338,16 @@ static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
 	return rq;
 }

+/*
+ * Whether or not to use zone write locking. Not using zone write locking for
+ * sequential write required zones is only safe if the block driver preserves
+ * the request order.
+ */
+static bool dd_use_zone_write_locking(struct request_queue *q)
+{
+	return q->limits.use_zone_write_lock && blk_queue_is_zoned(q);
+}
+
 /*
  * For the specified data direction, return the next request to
  * dispatch using arrival ordered lists.
@@ -353,7 +363,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 		return NULL;

 	rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
-	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
+	if (data_dir == DD_READ || !dd_use_zone_write_locking(rq->q))
 		return rq;

 	/*
@@ -398,7 +408,7 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 	if (!rq)
 		return NULL;

-	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
+	if (data_dir == DD_READ || !dd_use_zone_write_locking(rq->q))
 		return rq;

 	/*
@@ -526,8 +536,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	}

 	/*
-	 * For a zoned block device, if we only have writes queued and none of
-	 * them can be dispatched, rq will be NULL.
+	 * For a zoned block device that requires write serialization, if we
+	 * only have writes queued and none of them can be dispatched, rq will
+	 * be NULL.
 	 */
 	if (!rq)
 		return NULL;
@@ -552,7 +563,8 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	/*
 	 * If the request needs its target zone locked, do it.
 	 */
-	blk_req_zone_write_lock(rq);
+	if (dd_use_zone_write_locking(rq->q))
+		blk_req_zone_write_lock(rq);
 	rq->rq_flags |= RQF_STARTED;
 	return rq;
 }
@@ -934,7 +946,7 @@ static void dd_finish_request(struct request *rq)

 	atomic_inc(&per_prio->stats.completed);

-	if (blk_queue_is_zoned(q)) {
+	if (dd_use_zone_write_locking(rq->q)) {
 		unsigned long flags;

 		spin_lock_irqsave(&dd->zone_lock, flags);
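The dispatch-time decision implemented by dd_use_zone_write_locking() above can be modeled in plain C (a sketch; the `toy_` names are illustrative, not kernel APIs): reads are never serialized, and writes are serialized only when the queue is zoned and the limit says the driver does not preserve write order.

```c
/* Toy model of the gating predicate the patch introduces in mq-deadline:
 * take the zone write lock only for writes to a zoned queue whose driver
 * does not preserve command order. */
#include <stdbool.h>

enum toy_dir { TOY_READ, TOY_WRITE };

/* Mirrors dd_use_zone_write_locking(): both conditions must hold. */
static bool toy_use_zone_write_locking(bool queue_is_zoned,
                                       bool use_zone_write_lock_limit)
{
    return use_zone_write_lock_limit && queue_is_zoned;
}

/* A request needs per-zone serialization only if it is a write and the
 * queue uses zone write locking. */
static bool toy_needs_zone_lock(enum toy_dir dir, bool queue_is_zoned,
                                bool use_zone_write_lock_limit)
{
    return dir == TOY_WRITE &&
           toy_use_zone_write_locking(queue_is_zoned,
                                      use_zone_write_lock_limit);
}
```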
From patchwork Wed Aug 9 20:23:44 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13348458
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Christoph Hellwig, Bart Van Assche, Damien Le Moal, Martin K. Petersen,
 Ming Lei, James E.J. Bottomley
Subject: [PATCH v7 3/7] scsi: core: Retry unaligned zoned writes
Date: Wed, 9 Aug 2023 13:23:44 -0700
Message-ID: <20230809202355.1171455-4-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org

If zoned writes (REQ_OP_WRITE) for a sequential write required zone have
a starting LBA that differs from the write pointer, e.g. because zoned
writes have been reordered, then the storage device will respond with an
UNALIGNED WRITE COMMAND error. Send commands that failed with an
unaligned write error to the SCSI error handler if zone write locking is
disabled. Let the SCSI error handler sort SCSI commands per LBA before
resubmitting these.
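The resubmission order described above can be sketched in userspace C (a toy model; the kernel patch uses list_sort() on the error handler's done queue, while qsort() is used here only for illustration): sorting failed commands by starting LBA makes retried zoned writes reach the device in write-pointer order.

```c
/* Userspace sketch of the error-handler resubmission order: failed
 * commands are sorted by starting sector so that retried zoned writes
 * are resubmitted in ascending LBA order. */
#include <stdlib.h>

struct toy_cmd {
    unsigned long long lba;    /* starting sector of the command */
};

/* Same contract as the comparator in the patch: negative if a sorts
 * before b, zero if equal, positive otherwise. */
static int toy_cmp_sector(const void *pa, const void *pb)
{
    const struct toy_cmd *a = pa;
    const struct toy_cmd *b = pb;

    if (a->lba < b->lba)
        return -1;
    if (a->lba > b->lba)
        return 1;
    return 0;
}

static void toy_sort_done_q(struct toy_cmd *cmds, size_t n)
{
    qsort(cmds, n, sizeof(*cmds), toy_cmp_sector);
}
```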
If zone write locking is disabled, increase the number of retries for
write commands sent to a sequential zone to the maximum number of
outstanding commands, because in the worst case the number of times
reordered zoned writes have to be retried is (number of outstanding
writes per sequential zone) - 1.

Cc: Damien Le Moal
Cc: Martin K. Petersen
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_error.c | 37 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/scsi_lib.c   |  1 +
 drivers/scsi/sd.c         |  3 +++
 include/scsi/scsi.h       |  1 +
 4 files changed, 42 insertions(+)

diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index c67cdcdc3ba8..c624b9c8fdab 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <linux/list_sort.h>
 #include
 #include
@@ -698,6 +699,16 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd)
 		fallthrough;
 	case ILLEGAL_REQUEST:
+		/*
+		 * Unaligned write command. This may indicate that zoned writes
+		 * have been received by the device in the wrong order. If zone
+		 * write locking is disabled, retry after all pending commands
+		 * have completed.
+		 */
+		if (sshdr.asc == 0x21 && sshdr.ascq == 0x04 &&
+		    !scsi_cmd_to_rq(scmd)->q->limits.use_zone_write_lock)
+			return NEEDS_DELAYED_RETRY;
+
 		if (sshdr.asc == 0x20 ||	/* Invalid command operation code */
 		    sshdr.asc == 0x21 ||	/* Logical block address out of range */
 		    sshdr.asc == 0x22 ||	/* Invalid function */
@@ -2186,6 +2197,25 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost,
 }
 EXPORT_SYMBOL_GPL(scsi_eh_ready_devs);

+/*
+ * Returns a negative value if @_a has a lower starting sector than @_b, zero if
+ * both have the same starting sector and a positive value otherwise.
+ */
+static int scsi_cmp_sector(void *priv, const struct list_head *_a,
+			   const struct list_head *_b)
+{
+	struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry);
+	struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry);
+	const sector_t pos_a = blk_rq_pos(scsi_cmd_to_rq(a));
+	const sector_t pos_b = blk_rq_pos(scsi_cmd_to_rq(b));
+
+	if (pos_a < pos_b)
+		return -1;
+	if (pos_a > pos_b)
+		return 1;
+	return 0;
+}
+
 /**
  * scsi_eh_flush_done_q - finish processed commands or retry them.
  * @done_q: list_head of processed commands.
@@ -2194,6 +2224,13 @@ void scsi_eh_flush_done_q(struct list_head *done_q)
 {
 	struct scsi_cmnd *scmd, *next;

+	/*
+	 * Sort pending SCSI commands in starting sector order. This is
+	 * important if one of the SCSI devices associated with @shost is a
+	 * zoned block device for which zone write locking is disabled.
+	 */
+	list_sort(NULL, done_q, scsi_cmp_sector);
+
 	list_for_each_entry_safe(scmd, next, done_q, eh_entry) {
 		list_del_init(&scmd->eh_entry);
 		if (scsi_device_online(scmd->device) &&
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 59176946ab56..69da8aee13df 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1443,6 +1443,7 @@ static void scsi_complete(struct request *rq)
 	case ADD_TO_MLQUEUE:
 		scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY);
 		break;
+	case NEEDS_DELAYED_RETRY:
 	default:
 		scsi_eh_scmd_add(cmd);
 		break;
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 3c668cfb146d..3716392daa2c 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1235,6 +1235,9 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 	cmd->transfersize = sdp->sector_size;
 	cmd->underflow = nr_blocks << 9;
 	cmd->allowed = sdkp->max_retries;
+	if (!rq->q->limits.use_zone_write_lock &&
+	    blk_rq_is_seq_zoned_write(rq))
+		cmd->allowed += rq->q->nr_requests;
 	cmd->sdb.length = nr_blocks * sdp->sector_size;

 	SCSI_LOG_HLQUEUE(1,
diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
index ec093594ba53..6600db046227 100644
--- a/include/scsi/scsi.h
+++ b/include/scsi/scsi.h
@@ -93,6 +93,7 @@ static inline int scsi_status_is_check_condition(int status)
  * Internal return values.
  */
 enum scsi_disposition {
+	NEEDS_DELAYED_RETRY	= 0x2000,
 	NEEDS_RETRY		= 0x2001,
 	SUCCESS			= 0x2002,
 	FAILED			= 0x2003,
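The retry arithmetic of this patch can be checked with a small userspace sketch (toy code; the names are illustrative, not the sd.c symbols): with zone write locking disabled, the retry budget of a write to a sequential zone grows by the queue depth, since each of the other outstanding writes to the zone may complete ahead of it.

```c
/* Worked example of the retry budget: in the worst case a reordered zoned
 * write must be retried (outstanding writes per zone) - 1 times, so the
 * driver raises the allowed retry count by the queue depth when zone
 * write locking is disabled. */
static unsigned int toy_allowed_retries(unsigned int max_retries,
                                        unsigned int nr_requests,
                                        int use_zone_write_lock,
                                        int is_seq_zoned_write)
{
    unsigned int allowed = max_retries;

    /* Mirrors the sd_setup_read_write_cmnd() change in this patch. */
    if (!use_zone_write_lock && is_seq_zoned_write)
        allowed += nr_requests;
    return allowed;
}
```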
From patchwork Wed Aug 9 20:23:45 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13348459
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Christoph Hellwig, Bart Van Assche, Martin K. Petersen, Douglas Gilbert,
 James E.J. Bottomley
Subject: [PATCH v7 4/7] scsi: scsi_debug: Support disabling zone write locking
Date: Wed, 9 Aug 2023 13:23:45 -0700
Message-ID: <20230809202355.1171455-5-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org

Make it easier to test not using zone write locking by supporting
disabling zone write locking in the scsi_debug driver.

Cc: Martin K. Petersen
Cc: Douglas Gilbert
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_debug.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 9c0af50501f9..22485726c534 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -832,6 +832,7 @@ static int dix_reads;
 static int dif_errors;

 /* ZBC global data */
+static bool sdeb_no_zwrl;
 static bool sdeb_zbc_in_use;	/* true for host-aware and host-managed disks */
 static int sdeb_zbc_zone_cap_mb;
 static int sdeb_zbc_zone_size_mb;
@@ -5138,9 +5139,13 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev)

 static int scsi_debug_slave_alloc(struct scsi_device *sdp)
 {
+	struct request_queue *q = sdp->request_queue;
+
 	if (sdebug_verbose)
 		pr_info("slave_alloc <%u %u %u %llu>\n",
 		       sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
+	if (sdeb_no_zwrl)
+		q->limits.use_zone_write_lock = false;
 	return 0;
 }

@@ -5738,6 +5743,7 @@ module_param_named(ndelay, sdebug_ndelay, int, S_IRUGO | S_IWUSR);
 module_param_named(no_lun_0, sdebug_no_lun_0, int, S_IRUGO | S_IWUSR);
 module_param_named(no_rwlock, sdebug_no_rwlock, bool, S_IRUGO | S_IWUSR);
 module_param_named(no_uld, sdebug_no_uld, int, S_IRUGO);
+module_param_named(no_zone_write_lock, sdeb_no_zwrl, bool, S_IRUGO);
 module_param_named(num_parts, sdebug_num_parts, int, S_IRUGO);
 module_param_named(num_tgts, sdebug_num_tgts, int, S_IRUGO | S_IWUSR);
 module_param_named(opt_blks, sdebug_opt_blks, int, S_IRUGO);
@@ -5812,6 +5818,8 @@ MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
 MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)");
 MODULE_PARM_DESC(no_rwlock, "don't protect user data reads+writes (def=0)");
 MODULE_PARM_DESC(no_uld, "stop ULD (e.g. sd driver) attaching (def=0))");
+MODULE_PARM_DESC(no_zone_write_lock,
+		 "Disable serialization of zoned writes (def=0)");
 MODULE_PARM_DESC(num_parts, "number of partitions(def=0)");
 MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)");
 MODULE_PARM_DESC(opt_blks, "optimal transfer length in blocks (def=1024)");
From patchwork Wed Aug 9 20:23:46 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13348460
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Christoph Hellwig, Bart Van Assche, Martin K. Petersen, Douglas Gilbert,
 James E.J. Bottomley
Subject: [PATCH v7 5/7] scsi: scsi_debug: Support injecting unaligned write errors
Date: Wed, 9 Aug 2023 13:23:46 -0700
Message-ID: <20230809202355.1171455-6-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org

Allow user space software, e.g. a blktests test, to inject unaligned
write errors.

Cc: Martin K. Petersen
Cc: Douglas Gilbert
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_debug.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 22485726c534..b9712275de9d 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -181,6 +181,7 @@ static const char *sdebug_version_date = "20210520";
 #define SDEBUG_OPT_NO_CDB_NOISE		0x4000
 #define SDEBUG_OPT_HOST_BUSY		0x8000
 #define SDEBUG_OPT_CMD_ABORT		0x10000
+#define SDEBUG_OPT_UNALIGNED_WRITE	0x20000
 #define SDEBUG_OPT_ALL_NOISE (SDEBUG_OPT_NOISE | SDEBUG_OPT_Q_NOISE | \
 			      SDEBUG_OPT_RESET_NOISE)
 #define SDEBUG_OPT_ALL_INJECTING (SDEBUG_OPT_RECOVERED_ERR | \
@@ -188,7 +189,8 @@ static const char *sdebug_version_date = "20210520";
 			      SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR | \
 			      SDEBUG_OPT_SHORT_TRANSFER | \
 			      SDEBUG_OPT_HOST_BUSY | \
-			      SDEBUG_OPT_CMD_ABORT)
+			      SDEBUG_OPT_CMD_ABORT | \
+			      SDEBUG_OPT_UNALIGNED_WRITE)
 #define SDEBUG_OPT_RECOV_DIF_DIX (SDEBUG_OPT_RECOVERED_ERR | \
 				  SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR)
@@ -3587,6 +3589,14 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;

+	if (unlikely(sdebug_opts & SDEBUG_OPT_UNALIGNED_WRITE &&
+		     atomic_read(&sdeb_inject_pending))) {
+		atomic_set(&sdeb_inject_pending, 0);
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE,
+				UNALIGNED_WRITE_ASCQ);
+		return check_condition_result;
+	}
+
 	switch (cmd[0]) {
 	case WRITE_16:
 		ei_lba = 0;
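The one-shot injection gate in resp_write_dt0() can be sketched in userspace C (toy code; `TOY_` names are illustrative): an error is reported only when the unaligned-write option is set and an injection is pending, and the pending flag is cleared so that exactly one command fails per arming.

```c
/* Sketch of the one-shot error-injection gate added to scsi_debug: the
 * caller builds the UNALIGNED WRITE sense data when this returns true. */
#include <stdbool.h>

#define TOY_OPT_UNALIGNED_WRITE 0x20000

static bool toy_should_inject(unsigned int opts, int *inject_pending)
{
    if ((opts & TOY_OPT_UNALIGNED_WRITE) && *inject_pending) {
        *inject_pending = 0;    /* one-shot: user space must re-arm it */
        return true;
    }
    return false;
}
```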
From patchwork Wed Aug 9 20:23:47 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13348461
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Christoph Hellwig, Bart Van Assche, Can Guo, Martin K. Petersen,
 Avri Altman, Damien Le Moal, Ming Lei, James E.J. Bottomley, Stanley Chu,
 Bean Huo, Asutosh Das, Bao D. Nguyen, Arthur Simchaev
Subject: [PATCH v7 6/7] scsi: ufs: Split an if-condition
Date: Wed, 9 Aug 2023 13:23:47 -0700
Message-ID: <20230809202355.1171455-7-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org

Make the next patch in this series easier to read. No functionality is
changed.

Reviewed-by: Can Guo
Cc: Martin K. Petersen
Cc: Avri Altman
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 129446775796..ae7b868f9c26 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -4352,8 +4352,9 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 	}
 	spin_unlock_irqrestore(hba->host->host_lock, flags);

-	if (update &&
-	    !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) {
+	if (!update)
+		return;
+	if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) {
 		ufshcd_rpm_get_sync(hba);
 		ufshcd_hold(hba);
 		ufshcd_auto_hibern8_enable(hba);
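The refactor in this patch replaces a compound if-condition with an early return; the two toy functions below (illustrative C, not the ufshcd symbols) show that the transformation preserves behavior for every input.

```c
/* Behavioral equivalence of the compound condition and the early-return
 * form used by this patch. */
#include <stdbool.h>

static int toy_before(bool update, bool suspended)
{
    int actions = 0;

    if (update && !suspended)
        actions = 1;    /* resume and reconfigure auto-hibernation */
    return actions;
}

static int toy_after(bool update, bool suspended)
{
    int actions = 0;

    if (!update)
        return actions; /* early return makes the next patch's extra
                           work easier to append below */
    if (!suspended)
        actions = 1;
    return actions;
}
```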
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
    Christoph Hellwig, Bart Van Assche, "Martin K. Petersen", Can Guo,
    Avri Altman, Damien Le Moal, Ming Lei, "James E.J. Bottomley",
    Stanley Chu, Bean Huo, Asutosh Das, "Bao D. Nguyen", Arthur Simchaev
Subject: [PATCH v7 7/7] scsi: ufs: Disable zone write locking
Date: Wed, 9 Aug 2023 13:23:48 -0700
Message-ID: <20230809202355.1171455-8-bvanassche@acm.org>
In-Reply-To: <20230809202355.1171455-1-bvanassche@acm.org>
References: <20230809202355.1171455-1-bvanassche@acm.org>
List-ID: linux-block@vger.kernel.org

From the UFSHCI 4.0 specification, about the legacy (single queue) mode:

"The host controller always process transfer requests in-order according
to the order submitted to the list. In case of multiple commands with
single doorbell register ringing (batch mode), The dispatch order for
these transfer requests by host controller will base on their index in
the List. A transfer request with lower index value will be executed
before a transfer request with higher index value."

From the UFSHCI 4.0 specification, about the MCQ mode:

"Command Submission
1. Host SW writes an Entry to SQ
2. Host SW updates SQ doorbell tail pointer

Command Processing
3. After fetching the Entry, Host Controller updates SQ doorbell head
   pointer
4. Host controller sends COMMAND UPIU to UFS device"

In other words, for both legacy and MCQ mode, UFS controllers are
required to forward commands to the UFS device in the order these
commands have been received from the host.

Notes:
- For legacy mode this is only correct if the host submits one command
  at a time. The UFS driver does this.
- Also in legacy mode, the command order is not preserved if
  auto-hibernation is enabled in the UFS controller. Hence, enable zone
  write locking if auto-hibernation is enabled.

This patch improves performance as follows on my test setup:
- With the mq-deadline scheduler: 2.5x more IOPS for small writes.
- When not using an I/O scheduler compared to using mq-deadline with
  zone locking: 4x more IOPS for small writes.

Cc: Martin K. Petersen
Cc: Can Guo
Cc: Avri Altman
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 37 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index ae7b868f9c26..ae6b63b02930 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -4337,23 +4337,50 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
 }
 EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit);
 
+static void ufshcd_update_zone_write_lock(struct ufs_hba *hba,
+					  bool use_zone_write_lock)
+{
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host)
+		blk_freeze_queue_start(sdev->request_queue);
+	shost_for_each_device(sdev, hba->host) {
+		struct request_queue *q = sdev->request_queue;
+
+		blk_mq_freeze_queue_wait(q);
+		q->limits.use_zone_write_lock = use_zone_write_lock;
+		blk_mq_unfreeze_queue(q);
+	}
+}
+
 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 {
 	unsigned long flags;
-	bool update = false;
+	bool prev_state, new_state, update = false;
 
 	if (!ufshcd_is_auto_hibern8_supported(hba))
 		return;
 
 	spin_lock_irqsave(hba->host->host_lock, flags);
+	prev_state = ufshcd_is_auto_hibern8_enabled(hba);
 	if (hba->ahit != ahit) {
 		hba->ahit = ahit;
 		update = true;
 	}
+	new_state = ufshcd_is_auto_hibern8_enabled(hba);
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
 
 	if (!update)
 		return;
+	if (!is_mcq_enabled(hba) && !prev_state && new_state) {
+		/*
+		 * Auto-hibernation will be enabled. Enable write locking for
+		 * zoned writes since auto-hibernation may cause reordering of
+		 * zoned writes when using the legacy mode of the UFS host
+		 * controller.
+		 */
+		ufshcd_update_zone_write_lock(hba, true);
+	}
 	if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) {
 		ufshcd_rpm_get_sync(hba);
 		ufshcd_hold(hba);
@@ -4361,6 +4388,13 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 		ufshcd_release(hba);
 		ufshcd_rpm_put_sync(hba);
 	}
+	if (!is_mcq_enabled(hba) && prev_state && !new_state) {
+		/*
+		 * Auto-hibernation has been disabled. Disable write locking
+		 * for zoned writes.
+		 */
+		ufshcd_update_zone_write_lock(hba, false);
+	}
 }
 EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update);
 
@@ -5140,6 +5174,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 
 	ufshcd_hpb_configure(hba, sdev);
 
+	q->limits.use_zone_write_lock = false;
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
 		blk_queue_update_dma_alignment(q, SZ_4K - 1);