From patchwork Fri Aug 18 19:34:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358260 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C6F3EE4997 for ; Fri, 18 Aug 2023 19:46:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344014AbjHRTpg (ORCPT ); Fri, 18 Aug 2023 15:45:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39454 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379919AbjHRTpe (ORCPT ); Fri, 18 Aug 2023 15:45:34 -0400 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3BB8E4684; Fri, 18 Aug 2023 12:45:12 -0700 (PDT) Received: by mail-pl1-f176.google.com with SMTP id d9443c01a7336-1bc3d94d40fso10692775ad.3; Fri, 18 Aug 2023 12:45:12 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387790; x=1692992590; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=A0wk1xa90VnYAyPUoUN4zedZwQDxNHF4MtXWEOIMiWM=; b=bgzdamJIIrI7Tz0mnbPud/KLhqu3iw19owxqH6wugbCKVumeZL96ZygzorwKbFZQ+V TKTxs/syf4EUSrog5lg5qJ+UAYH/1Wqm2Hqp/fN2wiiRMQMORHtKCgCC/0kyh20HkmO4 t6Oq0ELKTElWreD8bkvapE5dulal13T/TLZaQ6NqmSG955DeiFly35nVS6gq+pK02zVN wcHHGkF106siEY4VlQyhOaCxiq2bCdbF17D/FjGaJhuzw5R7e/S9VQup4wOtOWnLlL6g nKj96QOFhzm1idGvys9VMcE76rU1j0efuAaF+kbIe+qejBMGSpZvrYPzvvWEuavYnG53 w3FA== X-Gm-Message-State: AOJu0Yw98k2X9f4hLyMWXw7hhO4ZOIDExHhB/jrxXD4HNcGx7smKWAQH ei0wK69af2zEQfXtxDwGNUo= X-Google-Smtp-Source: AGHT+IGnCDxz1tQlpiy7M2N5WUTK9mPo33ZafgMMWUjR1b6pbtgoL87mgKTCtQMPozbcH92qYmIdZw== X-Received: by 2002:a17:902:d4c9:b0:1b8:a88c:4dc6 with SMTP id o9-20020a170902d4c900b001b8a88c4dc6mr194814plg.45.1692387789982; Fri, 18 Aug 2023 12:43:09 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:09 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v10 01/18] block: Introduce more member variables related to zone write locking Date: Fri, 18 Aug 2023 12:34:04 -0700 Message-ID: <20230818193546.2014874-2-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Many but not all storage controllers require serialization of zoned writes. Introduce two new request queue limit member variables related to write serialization. 'driver_preserves_write_order' allows block drivers to indicate that the order of write commands is preserved and hence that serialization of writes per zone is not required. 
'use_zone_write_lock' is set by disk_set_zoned() if and only if the block device has zones and if the block driver does not preserve the order of write requests. Reviewed-by: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/blk-settings.c | 15 +++++++++++++++ block/blk-zoned.c | 1 + include/linux/blkdev.h | 10 ++++++++++ 3 files changed, 26 insertions(+) diff --git a/block/blk-settings.c b/block/blk-settings.c index 0046b447268f..4c776c08f190 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -56,6 +56,8 @@ void blk_set_default_limits(struct queue_limits *lim) lim->alignment_offset = 0; lim->io_opt = 0; lim->misaligned = 0; + lim->driver_preserves_write_order = false; + lim->use_zone_write_lock = false; lim->zoned = BLK_ZONED_NONE; lim->zone_write_granularity = 0; lim->dma_alignment = 511; @@ -82,6 +84,8 @@ void blk_set_stacking_limits(struct queue_limits *lim) lim->max_dev_sectors = UINT_MAX; lim->max_write_zeroes_sectors = UINT_MAX; lim->max_zone_append_sectors = UINT_MAX; + /* Request-based stacking drivers do not reorder requests. */ + lim->driver_preserves_write_order = true; } EXPORT_SYMBOL(blk_set_stacking_limits); @@ -685,6 +689,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, b->max_secure_erase_sectors); t->zone_write_granularity = max(t->zone_write_granularity, b->zone_write_granularity); + t->driver_preserves_write_order = t->driver_preserves_write_order && + b->driver_preserves_write_order; + t->use_zone_write_lock = t->use_zone_write_lock || + b->use_zone_write_lock; t->zoned = max(t->zoned, b->zoned); return ret; } @@ -949,6 +957,13 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model) } q->limits.zoned = model; + /* + * Use the zone write lock only for zoned block devices and only if + * the block driver does not preserve the order of write commands. + */ + q->limits.use_zone_write_lock = model != BLK_ZONED_NONE && + !q->limits.driver_preserves_write_order; + if (model != BLK_ZONED_NONE) { /* * Set the zone write granularity to the device logical block diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 619ee41a51cc..112620985bff 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -631,6 +631,7 @@ void disk_clear_zone_settings(struct gendisk *disk) q->limits.chunk_sectors = 0; q->limits.zone_write_granularity = 0; q->limits.max_zone_append_sectors = 0; + q->limits.use_zone_write_lock = false; blk_mq_unfreeze_queue(q); } diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index c1421da8d45e..38a1a71b2bcc 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -316,6 +316,16 @@ struct queue_limits { unsigned char misaligned; unsigned char discard_misaligned; unsigned char raid_partial_stripes_expensive; + /* + * Whether or not the block driver preserves the order of write + * requests. Set by the block driver. + */ + bool driver_preserves_write_order; + /* + * Whether or not zone write locking should be used. Set by + * disk_set_zoned(). 
+ */ + bool use_zone_write_lock; enum blk_zoned_model zoned; /* From patchwork Fri Aug 18 19:34:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358261 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48D8EEE4999 for ; Fri, 18 Aug 2023 19:46:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379668AbjHRTph (ORCPT ); Fri, 18 Aug 2023 15:45:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39220 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379842AbjHRTpV (ORCPT ); Fri, 18 Aug 2023 15:45:21 -0400 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 560944239; Fri, 18 Aug 2023 12:44:44 -0700 (PDT) Received: by mail-pl1-f176.google.com with SMTP id d9443c01a7336-1bf092a16c9so10943335ad.0; Fri, 18 Aug 2023 12:44:44 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387791; x=1692992591; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FvsjixjHprGJoE6zHlkCW2HQWi6Q2BrxOyZzjpL+qb0=; b=aLr4VXwyeaAt8n+PBTH5xW4Y65weOGZ3Sh3OPa0xiE0OyKmGsfk5BGyLlWCPy0NSXQ PeAJ0MdEY7ue3JxdhLG+RhyacoJETMn60jINIgz9Gp5ANAguYFY5ePIwUgWbeSvfHqwF zWY+/CxjmxCtatJhnmmvF/FaC4LdDowcSAqNUnGM+ggx8Zp/Z5SCdiQ1+XGH/YcVtWrR tLFrtAwerWrIiQimY0Qn2Cg0RrjNj+fMpHcuSeHw8vtMCuDeRBXKmghMHFI4SZSrySqE /mKO5G3dpdG8v3/OEgBzmAsJdi2DvfZmJETKsNPeNqlwUQzjbK7QegBw0NcZ1uyHglhI mZoA== X-Gm-Message-State: AOJu0YzHkK8ejvaqxEW+/P2tRm3FoKvSIq1t9Wt5kO6WBDjp0WuY3pOF Wv86p9+QazetLP4tQYtIlQDTc8OIdlQ= X-Google-Smtp-Source: AGHT+IF4n0M75II18TdmYY9xJaoBhoB8sECgbX5jp9wZYLq3+MQPy1u2o1bjEOBxLNlzeHIoSichVg== X-Received: by 2002:a17:902:8695:b0:1b8:2c6f:3248 with SMTP id g21-20020a170902869500b001b82c6f3248mr206100plo.39.1692387791269; Fri, 18 Aug 2023 12:43:11 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:10 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v10 02/18] block: Only use write locking if necessary Date: Fri, 18 Aug 2023 12:34:05 -0700 Message-ID: <20230818193546.2014874-3-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Make blk_req_needs_zone_write_lock() return false if q->limits.use_zone_write_lock is false. 
Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/blk-zoned.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 112620985bff..d8a80cce832f 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -53,11 +53,16 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond) EXPORT_SYMBOL_GPL(blk_zone_cond_str); /* - * Return true if a request is a write requests that needs zone write locking. + * Return true if a request is a write request that needs zone write locking. */ bool blk_req_needs_zone_write_lock(struct request *rq) { - if (!rq->q->disk->seq_zones_wlock) + struct request_queue *q = rq->q; + + if (!q->limits.use_zone_write_lock) + return false; + + if (!q->disk->seq_zones_wlock) return false; return blk_rq_is_seq_zoned_write(rq); From patchwork Fri Aug 18 19:34:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358264 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FEF0EE499C for ; Fri, 18 Aug 2023 19:46:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379757AbjHRTpi (ORCPT ); Fri, 18 Aug 2023 15:45:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39290 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379851AbjHRTpX (ORCPT ); Fri, 18 Aug 2023 15:45:23 -0400 Received: from mail-pl1-f174.google.com (mail-pl1-f174.google.com [209.85.214.174]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37C3C421F; Fri, 18 Aug 2023 12:44:47 -0700 (PDT) Received: by mail-pl1-f174.google.com with SMTP id d9443c01a7336-1bbff6b2679so10022125ad.1; Fri, 18 Aug 2023 12:44:47 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387793; x=1692992593; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=GjGIaWCkHD/AcFvunDs8CWF2fnVsmWJz6tsJCh5ZAlo=; b=VkNnq8fTEM5yoJPJGi17esCP6Wx6H4e6dCp/PqyleCZQKzYOcl3myrlq/S0ZDNV4L9 FMf+/Vc6TFfM7oNRoyiYhSEPibhM4hTluyUEA5xlfYbJ6EPbpsz+qMdAWHcjI2qgoiTt L1wjFrR25zxB3JE33gyOj9LSpj6nKE8vkkSX0hoeP3+LWLW/fNPhsI2BzXqLoZCkmHs+ zNCTfOQMLz76SnVtzNSDnMyYKg+wSjcInsSOrVOmNeTk9Pf56b0YUYPhNwv8VQo9tLeL ilHClz9otA+BWz6FBrl6T9PMwJmH0PX+YJWDDnClqxCu8ObSMTxHa1Vh0q96zgjCGHS5 7A7A== X-Gm-Message-State: AOJu0Yzr9fUd5rIPmMOhyRCu7ptNqkF5VFSBdMJW1pE70B/GtPSN7L41 OMXzYLysbMX8aIDcfmOPDN8= X-Google-Smtp-Source: AGHT+IEw87tUMoU5iuJ22C/MT/AysykN2kqpsa9IYh9braGioKT5selEp4HQs8whvbLP/HhdfwSecw== X-Received: by 2002:a17:902:f54a:b0:1b8:a2af:fe23 with SMTP id h10-20020a170902f54a00b001b8a2affe23mr228076plf.2.1692387792604; Fri, 18 Aug 2023 12:43:12 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:12 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . 
Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v10 03/18] block/mq-deadline: Only use zone locking if necessary Date: Fri, 18 Aug 2023 12:34:06 -0700 Message-ID: <20230818193546.2014874-4-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Measurements have shown that limiting the queue depth to one per zone for zoned writes has a significant negative performance impact on zoned UFS devices. Hence this patch that disables zone locking by the mq-deadline scheduler if the storage controller preserves the command order. This patch is based on the following assumptions: - It happens infrequently that zoned write requests are reordered by the block layer. - The I/O priority of all write requests is the same per zone. - Either no I/O scheduler is used or an I/O scheduler is used that serializes write requests per zone. Reviewed-by: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/mq-deadline.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/block/mq-deadline.c b/block/mq-deadline.c index f958e79277b8..082ccf3186f4 100644 --- a/block/mq-deadline.c +++ b/block/mq-deadline.c @@ -353,7 +353,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio, return NULL; rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next); - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -398,7 +398,7 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio, if (!rq) return NULL; - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -526,8 +526,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd, } /* - * For a zoned block device, if we only have writes queued and none of - * them can be dispatched, rq will be NULL. + * For a zoned block device that requires write serialization, if we + * only have writes queued and none of them can be dispatched, rq will + * be NULL. 
*/ if (!rq) return NULL; @@ -934,7 +935,7 @@ static void dd_finish_request(struct request *rq) atomic_inc(&per_prio->stats.completed); - if (blk_queue_is_zoned(q)) { + if (rq->q->limits.use_zone_write_lock) { unsigned long flags; spin_lock_irqsave(&dd->zone_lock, flags); From patchwork Fri Aug 18 19:34:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358262 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C545EE498E for ; Fri, 18 Aug 2023 19:46:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379733AbjHRTpi (ORCPT ); Fri, 18 Aug 2023 15:45:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39328 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379857AbjHRTpY (ORCPT ); Fri, 18 Aug 2023 15:45:24 -0400 Received: from mail-pl1-f182.google.com (mail-pl1-f182.google.com [209.85.214.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7882421E; Fri, 18 Aug 2023 12:44:49 -0700 (PDT) Received: by mail-pl1-f182.google.com with SMTP id d9443c01a7336-1bf3a2f4528so8112215ad.2; Fri, 18 Aug 2023 12:44:49 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387794; x=1692992594; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CZv/GCTChROxyk79tHpawLiGp2hTN5F6+jFSRDwuk9c=; b=kkQrfTPGGTsruYZiNBhux9X32SF13eraCeSoefQr1h28fLsx6USFkeMy7WyqZ18MQ5 ZWrLXRt65h/VghZ7F61JWGZV6QIgz5gzgAECRYKFIGzD17uAR6/KplX182dMrjAxEw0z DToF7MsBESe3DjkTsZqXAY4KipOMoAfZtK8vsPaCA1PrGEI/b5te/3eZ5uLPl8psGxcr oOxCXkPDim4k/cpDj3vzKh0OPpA8475u5DH+f3QhuRlhECFGKtNZ8FiGMNNaVp7AC3EA AP/XXS8XE6ckGddeoByOx1B6eWYsyTAMIIE3G+ZU5OdOZrr37m3wp+ih5ParXL5OJFX+ PQ5w== X-Gm-Message-State: AOJu0YxOrqoUry2On0fVUDjq2HAPzGaOsX4lGIDtpJQehsTgyW9hAEYD 9fV2SSAxyxQW5u/43MrmvI8= X-Google-Smtp-Source: AGHT+IEAR5Sk6Zy4lpk0jvURmNoj1FCAoxycnR2AmEian94uY9DRfAQ/rq6joO08n3jH8fL2z+FfMg== X-Received: by 2002:a17:902:e78a:b0:1bb:dbec:40ba with SMTP id cp10-20020a170902e78a00b001bbdbec40bamr162933plb.16.1692387794095; Fri, 18 Aug 2023 12:43:14 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:13 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v10 04/18] scsi: core: Introduce a mechanism for reordering requests in the error handler Date: Fri, 18 Aug 2023 12:34:07 -0700 Message-ID: <20230818193546.2014874-5-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Introduce the .eh_needs_prepare_resubmit and the .eh_prepare_resubmit function pointers in struct scsi_driver. Make the error handler call .eh_prepare_resubmit() before resubmitting commands if any of the .eh_needs_prepare_resubmit() invocations return true. A later patch will use this functionality to sort SCSI commands by LBA from inside the SCSI disk driver before these are resubmitted by the error handler. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 65 ++++++++++++++++++++++++++++++++++++++ drivers/scsi/scsi_priv.h | 1 + include/scsi/scsi_driver.h | 2 ++ 3 files changed, 68 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c67cdcdc3ba8..c4d817f044a0 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -2186,6 +2187,68 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost, } EXPORT_SYMBOL_GPL(scsi_eh_ready_devs); +/* + * Returns true if .eh_prepare_resubmit should be called for the commands in + * @done_q. + */ +static bool scsi_needs_preparation(struct list_head *done_q) +{ + struct scsi_cmnd *scmd; + + list_for_each_entry(scmd, done_q, eh_entry) { + struct scsi_driver *uld = scsi_cmd_to_driver(scmd); + bool (*npr)(struct scsi_cmnd *) = uld->eh_needs_prepare_resubmit; + + if (npr && npr(scmd)) + return true; + } + + return false; +} + +/* + * Comparison function that allows to sort SCSI commands by ULD driver. + */ +static int scsi_cmp_uld(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + + /* See also the comment above the list_sort() definition. */ + return scsi_cmd_to_driver(a) > scsi_cmd_to_driver(b); +} + +void scsi_call_prepare_resubmit(struct list_head *done_q) +{ + struct scsi_cmnd *scmd, *next; + + if (!scsi_needs_preparation(done_q)) + return; + + /* Sort pending SCSI commands by ULD. */ + list_sort(NULL, done_q, scsi_cmp_uld); + + /* + * Call .eh_prepare_resubmit for each range of commands with identical + * ULD driver pointer. + */ + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { + struct scsi_driver *uld = scsi_cmd_to_driver(scmd); + struct list_head *prev, uld_cmd_list; + + while (&next->eh_entry != done_q && + scsi_cmd_to_driver(next) == uld) + next = list_next_entry(next, eh_entry); + if (!uld->eh_prepare_resubmit) + continue; + prev = scmd->eh_entry.prev; + list_cut_position(&uld_cmd_list, prev, next->eh_entry.prev); + uld->eh_prepare_resubmit(&uld_cmd_list); + list_splice(&uld_cmd_list, prev); + } +} + /** * scsi_eh_flush_done_q - finish processed commands or retry them. * @done_q: list_head of processed commands. 
@@ -2194,6 +2257,8 @@ void scsi_eh_flush_done_q(struct list_head *done_q) { struct scsi_cmnd *scmd, *next; + scsi_call_prepare_resubmit(done_q); + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { list_del_init(&scmd->eh_entry); if (scsi_device_online(scmd->device) && diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h index f42388ecb024..df4af4645430 100644 --- a/drivers/scsi/scsi_priv.h +++ b/drivers/scsi/scsi_priv.h @@ -101,6 +101,7 @@ int scsi_eh_get_sense(struct list_head *work_q, struct list_head *done_q); bool scsi_noretry_cmd(struct scsi_cmnd *scmd); void scsi_eh_done(struct scsi_cmnd *scmd); +void scsi_call_prepare_resubmit(struct list_head *done_q); /* scsi_lib.c */ extern int scsi_maybe_unblock_host(struct scsi_device *sdev); diff --git a/include/scsi/scsi_driver.h b/include/scsi/scsi_driver.h index 4ce1988b2ba0..00ffa470724a 100644 --- a/include/scsi/scsi_driver.h +++ b/include/scsi/scsi_driver.h @@ -18,6 +18,8 @@ struct scsi_driver { int (*done)(struct scsi_cmnd *); int (*eh_action)(struct scsi_cmnd *, int); void (*eh_reset)(struct scsi_cmnd *); + bool (*eh_needs_prepare_resubmit)(struct scsi_cmnd *cmd); + void (*eh_prepare_resubmit)(struct list_head *cmd_list); }; #define to_scsi_driver(drv) \ container_of((drv), struct scsi_driver, gendrv) From patchwork Fri Aug 18 19:34:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358294 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDD5CEE499C for ; Fri, 18 Aug 2023 19:47:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379824AbjHRTqm (ORCPT ); Fri, 18 Aug 2023 15:46:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379849AbjHRTqU (ORCPT ); Fri, 18 Aug 2023 15:46:20 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1EE1444B6; Fri, 18 Aug 2023 12:45:55 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id d9443c01a7336-1bb84194bf3so10003685ad.3; Fri, 18 Aug 2023 12:45:55 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387801; x=1692992601; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=adWNOW49kVXe1MZ2NF8xJiy0tj9xGw91as9Acxixn3o=; b=C6MISHqQ7mrpLibE54K4KmLQdwNZR6CIlB3G80PUaQXT9zfTWvfNQV39EFzJkzxYtW tr3rme/E5RF88n8PWLJBQzpY3UUF7Wx6+bGM5lMDThZNSWVbTxWtxc7WTRxjIXfmeIxG zq6WQVmmuAzUblm4Nym/yF7mzDmfFuVSeM1eXS0FgWmGs/mOBXV1pCpwjaTSxVN3BN47 3ZJ4lsP/v7jRN0E78l5QEezmReGL8VHn96EiuXEpA7uXI8hUOSKbDpS7Hm42abmoxQQS ZZXDKgdMGajHMBk2m+v4M/ZdyKQoQFx/Rudd9ADmGb2oEIaolGI4E7lKECHAydgh/H0l 9Z0w== X-Gm-Message-State: AOJu0YzCW9mWbgTGvLS3WX5/iQ+RUb1XcvJY+5m9o1frjz5nflx4Efr1 rBJDA/XHEf1NdUw2xv9zd1o= X-Google-Smtp-Source: AGHT+IHP/lcnbxfpAekISY88hv+xg7Sb8VLy4Ad+f3zCr1Ns5RmqxH7mgUlpwWGVWkBbXFvl3+b+GA== X-Received: by 2002:a17:902:d2c7:b0:1bb:a125:f831 with SMTP id n7-20020a170902d2c700b001bba125f831mr215203plc.58.1692387801553; Fri, 18 Aug 2023 12:43:21 -0700 (PDT) Received: from 
bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:21 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v10 05/18] scsi: core: Add unit tests for scsi_call_prepare_resubmit() Date: Fri, 18 Aug 2023 12:34:08 -0700 Message-ID: <20230818193546.2014874-6-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Triggering all code paths in scsi_call_prepare_resubmit() via manual testing is difficult. Hence add unit tests for this function. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/Kconfig | 2 + drivers/scsi/Kconfig.kunit | 4 + drivers/scsi/Makefile | 2 + drivers/scsi/Makefile.kunit | 1 + drivers/scsi/scsi_error_test.c | 207 +++++++++++++++++++++++++++++++++ 5 files changed, 216 insertions(+) create mode 100644 drivers/scsi/Kconfig.kunit create mode 100644 drivers/scsi/Makefile.kunit create mode 100644 drivers/scsi/scsi_error_test.c diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 4962ce989113..fc288f8fb800 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -232,6 +232,8 @@ config SCSI_SCAN_ASYNC Note that this setting also affects whether resuming from system suspend will be performed asynchronously. 
+source "drivers/scsi/Kconfig.kunit" + menu "SCSI Transports" depends on SCSI diff --git a/drivers/scsi/Kconfig.kunit b/drivers/scsi/Kconfig.kunit new file mode 100644 index 000000000000..90984a6ec7cc --- /dev/null +++ b/drivers/scsi/Kconfig.kunit @@ -0,0 +1,4 @@ +config SCSI_ERROR_TEST + tristate "scsi_error.c unit tests" if !KUNIT_ALL_TESTS + depends on SCSI && KUNIT + default KUNIT_ALL_TESTS diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index f055bfd54a68..1c5c3afb6c6e 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -168,6 +168,8 @@ scsi_mod-$(CONFIG_PM) += scsi_pm.o scsi_mod-$(CONFIG_SCSI_DH) += scsi_dh.o scsi_mod-$(CONFIG_BLK_DEV_BSG) += scsi_bsg.o +include $(srctree)/drivers/scsi/Makefile.kunit + hv_storvsc-y := storvsc_drv.o sd_mod-objs := sd.o diff --git a/drivers/scsi/Makefile.kunit b/drivers/scsi/Makefile.kunit new file mode 100644 index 000000000000..3e98053b2709 --- /dev/null +++ b/drivers/scsi/Makefile.kunit @@ -0,0 +1 @@ +obj-$(CONFIG_SCSI_ERROR_TEST) += scsi_error_test.o diff --git a/drivers/scsi/scsi_error_test.c b/drivers/scsi/scsi_error_test.c new file mode 100644 index 000000000000..be0d25e1fb57 --- /dev/null +++ b/drivers/scsi/scsi_error_test.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2023 Google LLC + */ +#include +#include +#include +#include "scsi_priv.h" + +static struct kunit *kunit_test; + +static bool uld_needs_prepare_resubmit(struct scsi_cmnd *cmd) +{ + struct request *rq = scsi_cmd_to_rq(cmd); + + return !rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq); +} + +static void uld_prepare_resubmit(struct list_head *cmd_list) +{ + /* This function must not be called. */ + KUNIT_EXPECT_TRUE(kunit_test, false); +} + +/* + * Verify that .eh_prepare_resubmit() is not called if use_zone_write_lock is + * true. 
+ */ +static void test_prepare_resubmit1(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = false, + .use_zone_write_lock = true, + .zoned = BLK_ZONED_HM, + } + }; + static struct scsi_driver uld = { + .eh_needs_prepare_resubmit = uld_needs_prepare_resubmit, + .eh_prepare_resubmit = uld_prepare_resubmit, + }; + static struct scsi_device dev = { + .request_queue = &q, + .sdev_gendev.driver = &uld.gendrv, + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + disk.queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 2, + }, + .cmd.device = &dev, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 1; + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); +} + +static struct scsi_driver *uld1, *uld2, *uld3; + +static void uld1_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld1); +} + +static void uld2_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld2); +} + +static void test_prepare_resubmit2(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = true, + .use_zone_write_lock = false, + .zoned = BLK_ZONED_HM, + } + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2, cmd3, cmd4, cmd5, cmd6; + static struct scsi_device dev1, dev2, dev3; + struct scsi_driver *uld; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + uld = kzalloc(3 * sizeof(*uld), GFP_KERNEL); + uld1 = &uld[0]; + uld1->eh_needs_prepare_resubmit = uld_needs_prepare_resubmit; + uld1->eh_prepare_resubmit = uld1_prepare_resubmit; + uld2 = &uld[1]; + uld2->eh_needs_prepare_resubmit = uld_needs_prepare_resubmit; + uld2->eh_prepare_resubmit = uld2_prepare_resubmit; + uld3 = &uld[2]; + disk.queue = &q; + dev1.sdev_gendev.driver = &uld1->gendrv; + dev1.request_queue = &q; + dev2.sdev_gendev.driver = &uld2->gendrv; + dev2.request_queue = &q; + dev3.sdev_gendev.driver = &uld3->gendrv; + dev3.request_queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 3, + }, + .cmd.device = &dev1, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 4; + cmd3 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 1, + }, + .cmd.device = &dev2, + }; + cmd4 = cmd3; + cmd4.rq.__sector = 2, + cmd5 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 5, + }, + .cmd.device = &dev3, + }; + cmd6 = cmd5; + cmd6.rq.__sector = 6; + list_add_tail(&cmd3.cmd.eh_entry, &cmd_list); 
+ list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd5.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd6.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd4.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + KUNIT_EXPECT_TRUE(test, uld1 < uld2); + KUNIT_EXPECT_TRUE(test, uld2 < uld3); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next, + &cmd3.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next, + &cmd4.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next, + &cmd5.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next->next, + &cmd6.cmd.eh_entry); + kfree(uld); +} + +static struct kunit_case prepare_resubmit_test_cases[] = { + KUNIT_CASE(test_prepare_resubmit1), + KUNIT_CASE(test_prepare_resubmit2), + {} +}; + +static struct kunit_suite prepare_resubmit_test_suite = { + .name = "prepare_resubmit", + .test_cases = prepare_resubmit_test_cases, +}; +kunit_test_suite(prepare_resubmit_test_suite); + +MODULE_DESCRIPTION("scsi_call_prepare_resubmit() unit tests"); +MODULE_AUTHOR("Bart Van Assche"); +MODULE_LICENSE("GPL"); From patchwork Fri Aug 18 19:34:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358292 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA785EE499A for ; Fri, 18 Aug 2023 19:47:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357534AbjHRTql (ORCPT ); Fri, 18 Aug 2023 15:46:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32950 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379851AbjHRTqU (ORCPT ); Fri, 18 Aug 2023 15:46:20 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3C7944AF; Fri, 18 Aug 2023 12:45:55 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id d9443c01a7336-1bf0b24d925so9813725ad.3; Fri, 18 Aug 2023 12:45:55 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387803; x=1692992603; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+kT0lODXnQs5o0Wfuh374uESQu7wFS2RjdWOYRnRcYk=; b=MR8hYS4xc9GrhxL1vH5j1xuejdraHWHfagU9fdzrltw0ZwJQfaeMtfjsFl3VXJ8m0g +JedLgmlpuJBcDkUG3uuKIKHevzsHnTpf9AUnU2k9Kf3H9bwdE9GEeJK1NEdq7C9XPBM xe4bl6o9o1UCwZA9zaVN4bSpr+mzLr3IbxS5wCsXjb7Rqyn/J2gWC7QAOYlH4R3eqzIB ZXQTaQMgTmPA6muzV1fJSWrkD+Ays4t7TN1LsWvxEX3kUwnK/GhYr5bHNuzJmp4Xh21f 8w/Y4CgqMz1lfJXJa6MdERAifBNPLfS4odlsqb5KD+m7CGP8d/OaPjCYij7Qs8vwr/ha k5Iw== X-Gm-Message-State: AOJu0YwjrDnpDQCBY8F3plTvZ3tIW8eC6tim0YftZPwu9myRTZWQfxkF CaNZk6kcKRVTYrsbmgV20+ihK8Mc4V4= X-Google-Smtp-Source: AGHT+IEF3BgAgcEuz3dbs6/Pn7ecjoSU8tP8vQ7YMbvZ8XkC23fbFtOk+cZ6uX/aPEZ02HofKZVZ6Q== X-Received: by 
2002:a17:902:d2c9:b0:1bc:8748:8bbf with SMTP id n9-20020a170902d2c900b001bc87488bbfmr186330plc.52.1692387802982; Fri, 18 Aug 2023 12:43:22 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:22 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v10 06/18] scsi: sd: Sort commands by LBA before resubmitting Date: Fri, 18 Aug 2023 12:34:09 -0700 Message-ID: <20230818193546.2014874-7-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Sort SCSI commands by LBA before the SCSI error handler resubmits these commands. This is necessary when resubmitting zoned writes (REQ_OP_WRITE) if multiple writes have been queued for a single zone. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd.c | 45 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 3c668cfb146d..d4feac5de17a 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -47,6 +47,7 @@ #include #include #include +#include #include #include #include @@ -117,6 +118,8 @@ static void sd_uninit_command(struct scsi_cmnd *SCpnt); static int sd_done(struct scsi_cmnd *); static void sd_eh_reset(struct scsi_cmnd *); static int sd_eh_action(struct scsi_cmnd *, int); +static bool sd_needs_prepare_resubmit(struct scsi_cmnd *cmd); +static void sd_prepare_resubmit(struct list_head *cmd_list); static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer); static void scsi_disk_release(struct device *cdev); @@ -617,6 +620,8 @@ static struct scsi_driver sd_template = { .done = sd_done, .eh_action = sd_eh_action, .eh_reset = sd_eh_reset, + .eh_needs_prepare_resubmit = sd_needs_prepare_resubmit, + .eh_prepare_resubmit = sd_prepare_resubmit, }; /* @@ -2018,6 +2023,46 @@ static int sd_eh_action(struct scsi_cmnd *scmd, int eh_disp) return eh_disp; } +static bool sd_needs_prepare_resubmit(struct scsi_cmnd *cmd) +{ + struct request *rq = scsi_cmd_to_rq(cmd); + + return !rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq); +} + +static int sd_cmp_sector(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + struct request *rq_a = scsi_cmd_to_rq(a); + struct request *rq_b = scsi_cmd_to_rq(b); + bool use_zwl_a = rq_a->q->limits.use_zone_write_lock; + bool use_zwl_b = rq_b->q->limits.use_zone_write_lock; + + /* + * Order the commands that need zone write locking after the commands + * that do not need zone write locking. Order the commands that do not + * need zone write locking by LBA. Do not reorder the commands that + * need zone write locking. See also the comment above the list_sort() + * definition. 
+ */ + if (use_zwl_a || use_zwl_b) + return use_zwl_a > use_zwl_b; + return blk_rq_pos(rq_a) > blk_rq_pos(rq_b); +} + +static void sd_prepare_resubmit(struct list_head *cmd_list) +{ + /* + * Sort pending SCSI commands in starting sector order. This is + * important if one of the SCSI devices associated with @shost is a + * zoned block device for which zone write locking is disabled. + */ + list_sort(NULL, cmd_list, sd_cmp_sector); +} + static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd) { struct request *req = scsi_cmd_to_rq(scmd); From patchwork Fri Aug 18 19:34:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3D9EEE498E for ; Fri, 18 Aug 2023 19:46:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229941AbjHRTqI (ORCPT ); Fri, 18 Aug 2023 15:46:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379728AbjHRTpi (ORCPT ); Fri, 18 Aug 2023 15:45:38 -0400 Received: from mail-pl1-f173.google.com (mail-pl1-f173.google.com [209.85.214.173]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 40FE3E4C; Fri, 18 Aug 2023 12:45:23 -0700 (PDT) Received: by mail-pl1-f173.google.com with SMTP id d9443c01a7336-1bdbf10333bso10689175ad.1; Fri, 18 Aug 2023 12:45:23 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387804; x=1692992604; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=X0xCoxo/qg7/ntVVvz5KKadmaE1D/02ErCrJvy85BN4=; b=lUYeYs8I8X6zj2czPl45N0pQVDbXtt+i/DUiaSV08x+0/deprfHrvIOkpTSAlZVnHb Uc+vKuJLONo4PL3bbO5eQdQiKF6leRH4SN7zrs9mJT96cD75RULTXrx9FARv3EqbLkCe D+p+7qMXWuqp8/K1ZEcmK/w7DxW8if0EHNrKfSU8sR2Jrjc6MCSZ7Z+zBN9ohmdZ9baV Ej6WJejlEjwpTQmpFV9xAdXWzG5eCw7VHKVm9rHaGG9/oI+pj7CgJXtyOXlYv/NPm1DB PsQw64DvXLgMjWt6HNxAgUJ64MBF8bDOLSKHYlvyg0DgSZ4maV+ZTlk4RVRLVZG1ycWP 0n6A== X-Gm-Message-State: AOJu0YzwMsU0SI6wQfVYI3QJ+6j2mO7/O38Tf497V2vEHHRaRIEzTb8R R0M2wO0W0LHEI+1QhCOGQdo= X-Google-Smtp-Source: AGHT+IFXWe77mlu7Q8bfL6K8M3GpduaICUrEk4gamEhEiTwo7H0n3/pNlWlQ4gM/vCgydxGQngznNA== X-Received: by 2002:a17:902:c404:b0:1b5:64a4:be8b with SMTP id k4-20020a170902c40400b001b564a4be8bmr237400plk.35.1692387804515; Fri, 18 Aug 2023 12:43:24 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:24 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v10 07/18] scsi: core: Retry unaligned zoned writes Date: Fri, 18 Aug 2023 12:34:10 -0700 Message-ID: <20230818193546.2014874-8-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org If zoned writes (REQ_OP_WRITE) for a sequential write required zone have a starting LBA that differs from the write pointer, e.g. because zoned writes have been reordered, then the storage device will respond with an UNALIGNED WRITE COMMAND error. Send commands that failed with an unaligned write error to the SCSI error handler if zone write locking is disabled. The SCSI error handler will sort SCSI commands per LBA before resubmitting these. If zone write locking is disabled, increase the number of retries for write commands sent to a sequential zone to the maximum number of outstanding commands because in the worst case the number of times reordered zoned writes have to be retried is (number of outstanding writes per sequential zone) - 1. Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 16 ++++++++++++++++ drivers/scsi/scsi_lib.c | 1 + drivers/scsi/sd.c | 6 ++++++ include/scsi/scsi.h | 1 + 4 files changed, 24 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c4d817f044a0..e393d78b921b 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -699,6 +699,22 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd) fallthrough; case ILLEGAL_REQUEST: + /* + * Unaligned write command. This may indicate that zoned writes + * have been received by the device in the wrong order. If zone + * write locking is disabled, retry after all pending commands + * have completed. + */ + if (sshdr.asc == 0x21 && sshdr.ascq == 0x04 && + !req->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(req)) { + SCSI_LOG_ERROR_RECOVERY(3, + sdev_printk(KERN_INFO, scmd->device, + "Retrying unaligned write at LBA %#llx.\n", + scsi_get_lba(scmd))); + return NEEDS_DELAYED_RETRY; + } + if (sshdr.asc == 0x20 || /* Invalid command operation code */ sshdr.asc == 0x21 || /* Logical block address out of range */ sshdr.asc == 0x22 || /* Invalid function */ diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 59176946ab56..69da8aee13df 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1443,6 +1443,7 @@ static void scsi_complete(struct request *rq) case ADD_TO_MLQUEUE: scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY); break; + case NEEDS_DELAYED_RETRY: default: scsi_eh_scmd_add(cmd); break; diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index d4feac5de17a..e96014caa34c 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -1240,6 +1240,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) cmd->transfersize = sdp->sector_size; cmd->underflow = nr_blocks << 9; cmd->allowed = sdkp->max_retries; + /* + * Increase the number of allowed retries for zoned writes if zone + * write locking is disabled. 
+ */ + if (!rq->q->limits.use_zone_write_lock && blk_rq_is_seq_zoned_write(rq)) + cmd->allowed += rq->q->nr_requests; cmd->sdb.length = nr_blocks * sdp->sector_size; SCSI_LOG_HLQUEUE(1, diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h index ec093594ba53..6600db046227 100644 --- a/include/scsi/scsi.h +++ b/include/scsi/scsi.h @@ -93,6 +93,7 @@ static inline int scsi_status_is_check_condition(int status) * Internal return values. */ enum scsi_disposition { + NEEDS_DELAYED_RETRY = 0x2000, NEEDS_RETRY = 0x2001, SUCCESS = 0x2002, FAILED = 0x2003, From patchwork Fri Aug 18 19:34:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358291 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 724AFEE498E for ; Fri, 18 Aug 2023 19:47:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357933AbjHRTqk (ORCPT ); Fri, 18 Aug 2023 15:46:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43592 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346250AbjHRTqI (ORCPT ); Fri, 18 Aug 2023 15:46:08 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 745934234; Fri, 18 Aug 2023 12:45:46 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id d9443c01a7336-1bf1935f6c2so9101805ad.1; Fri, 18 Aug 2023 12:45:46 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387806; x=1692992606; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KzaYxnCqQNUVR3i3khEB0a4wN2TqC5S5bjyD6Ki8qwY=; b=akqkrTSFy0N40Hp2gF+Ri7ZxvGQoq7gibzEyQ45KfN0pyKyd7k5A71afP2GBMy62fC lMPwYJg3IFkWTgWUVtXaRDqq+qFLfX2ldkKpOe/X00IgCvgAYMZ0Dr6/tfAX7o/Lakdb Me4KJvnMaFIcUUrH9UvUvh02BLXscwqQkGEv0eraiyFNA6dJIgPzEkgZ+OtpizbxVuJ2 M6YoG1pXHxMHK3bOnu7q7BRy8KfX8Bpy2QDrNPbjpXp6sDOcFp0fajNfPmtMVW4RPRA0 ZUiZf8qo6UWoHyaXlYhD3mShpUJz7KdO6mgDnnoZzvABDW+NRSL9D/x32jV7MgF9SzYG Y6Eg== X-Gm-Message-State: AOJu0Yzz4t8VwJCGJCeoKEKQOkrxlM8hFy77h81qToOg34/Y8UxM6C5k 8YS3eQkmo4H0rRrSjJwWRBs= X-Google-Smtp-Source: AGHT+IHAWW7ymWFXjV8zJ0Xl1YvuSwu8blcDtuhTdeE5FxGsNC0vGOoXyYGnyAWrfp5UY+zwgUE5SQ== X-Received: by 2002:a17:902:da87:b0:1b4:5699:aac1 with SMTP id j7-20020a170902da8700b001b45699aac1mr291880plx.12.1692387805963; Fri, 18 Aug 2023 12:43:25 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:25 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v10 08/18] scsi: sd_zbc: Only require an I/O scheduler if needed Date: Fri, 18 Aug 2023 12:34:11 -0700 Message-ID: <20230818193546.2014874-9-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org An I/O scheduler that serializes zoned writes is only needed if the SCSI LLD does not preserve the write order. Hence only set ELEVATOR_F_ZBD_SEQ_WRITE if the LLD does not preserve the write order. Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd_zbc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index a25215507668..718b31bed878 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -955,7 +955,9 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, u8 buf[SD_BUF_SIZE]) /* The drive satisfies the kernel restrictions: set it up */ blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); - blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE); + if (!q->limits.driver_preserves_write_order) + blk_queue_required_elevator_features(q, + ELEVATOR_F_ZBD_SEQ_WRITE); if (sdkp->zones_max_open == U32_MAX) disk_set_max_open_zones(disk, 0); else From patchwork Fri Aug 18 19:34:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358268 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3381CEE499A for ; Fri, 18 Aug 2023 19:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379781AbjHRTqK (ORCPT ); Fri, 18 Aug 2023 15:46:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55016 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379887AbjHRTp5 (ORCPT ); Fri, 18 Aug 2023 15:45:57 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D4864684; Fri, 18 Aug 2023 12:45:35 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id d9443c01a7336-1bdc243d62bso9937225ad.3; Fri, 18 Aug 2023 12:45:35 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387807; x=1692992607; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4ddJJSzsf27gguzmddFTPFN75vUwRRNdf41W6bkncAY=; b=TNHr+YUYvNcXNKlrFH6BwHesiw4CnmGDXzXHofcUfdqpnahczCdcmGonknirpDfBtz VLXJWzzukxoD5WH3QRJEp/hC3kudIcvbSNEB8UijC9shd16TRSVCCL+ajXNMGi6y7qYQ mKeRUgZGGsKLRO04eAUapWmC7E/rAuSgW6xueCxWawYiK2ZURccVhIiTqa8h7lLgOED0 ZXMUO71s+uc6uxOeERHvheUyxMYjUVd+d6UIgej2VaAKwNguUQtSdlf+YMEJi7L398Yq OAWe8tSvzmRMbeILTsbsdHqoLWwP18fSsnBRqpFVkaKj3JZELNFHjW3S4yYdAR2o/5AP E+Yg== X-Gm-Message-State: AOJu0Yzidq9k+AMgUjG0YdsRsfdwmP2SZxdG2AMLnhWrrq5Yy8gHmfEk QWgQ+pSKZtE5FlVowRclxr+ELB2ZJFU= X-Google-Smtp-Source: AGHT+IH68pyyaMDkE/Y/eSGyPgqQVubto+pSC5F/mLRepAXgiGK4ih9ob3bbfALKPge3XdGfcETdDA== X-Received: by 
2002:a17:902:ee89:b0:1b8:865d:6e1d with SMTP id a9-20020a170902ee8900b001b8865d6e1dmr116098pld.51.1692387807416; Fri, 18 Aug 2023 12:43:27 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:27 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v10 09/18] scsi: scsi_debug: Add the preserves_write_order module parameter Date: Fri, 18 Aug 2023 12:34:12 -0700 Message-ID: <20230818193546.2014874-10-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Zone write locking is not used for zoned devices if the block driver reports that it preserves the order of write commands. Make it easier to test not using zone write locking by adding support for setting the driver_preserves_write_order flag. Cc: Martin K. Petersen Cc: Douglas Gilbert Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Acked-by: Douglas Gilbert --- drivers/scsi/scsi_debug.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 9c0af50501f9..1ea4925d2c2f 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -832,6 +832,7 @@ static int dix_reads; static int dif_errors; /* ZBC global data */ +static bool sdeb_preserves_write_order; static bool sdeb_zbc_in_use; /* true for host-aware and host-managed disks */ static int sdeb_zbc_zone_cap_mb; static int sdeb_zbc_zone_size_mb; @@ -5138,9 +5139,13 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev) static int scsi_debug_slave_alloc(struct scsi_device *sdp) { + struct request_queue *q = sdp->request_queue; + if (sdebug_verbose) pr_info("slave_alloc <%u %u %u %llu>\n", sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); + if (sdeb_preserves_write_order) + q->limits.driver_preserves_write_order = true; return 0; } @@ -5755,6 +5760,8 @@ module_param_named(statistics, sdebug_statistics, bool, S_IRUGO | S_IWUSR); module_param_named(strict, sdebug_strict, bool, S_IRUGO | S_IWUSR); module_param_named(submit_queues, submit_queues, int, S_IRUGO); module_param_named(poll_queues, poll_queues, int, S_IRUGO); +module_param_named(preserves_write_order, sdeb_preserves_write_order, bool, + S_IRUGO); module_param_named(tur_ms_to_ready, sdeb_tur_ms_to_ready, int, S_IRUGO); module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO); module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO); @@ -5812,6 +5819,8 @@ MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)"); MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)"); MODULE_PARM_DESC(no_rwlock, "don't protect user data reads+writes (def=0)"); MODULE_PARM_DESC(no_uld, "stop ULD (e.g. 
sd driver) attaching (def=0))"); +MODULE_PARM_DESC(preserves_write_order, + "Whether or not to inform the block layer that this driver preserves the order of WRITE commands (def=0)"); MODULE_PARM_DESC(num_parts, "number of partitions(def=0)"); MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)"); MODULE_PARM_DESC(opt_blks, "optimal transfer length in blocks (def=1024)"); From patchwork Fri Aug 18 19:34:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358266 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1EA72EE4997 for ; Fri, 18 Aug 2023 19:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379728AbjHRTqJ (ORCPT ); Fri, 18 Aug 2023 15:46:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379890AbjHRTp6 (ORCPT ); Fri, 18 Aug 2023 15:45:58 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5E4F4689; Fri, 18 Aug 2023 12:45:35 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id d9443c01a7336-1bdbbede5d4so10556985ad.2; Fri, 18 Aug 2023 12:45:35 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387809; x=1692992609; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mKw3Aw2p8++zyZ3GJZp8ItEvankLwq1zpCMHoyF24eQ=; b=QriJlvok2CV4PtJkI8TZuCXbQ51IV0INk90nYcVrn9cYjiTSQ2mMFMMEO4XUE9Llg9 X5WIuWWoLZHZX6qG9yMAGIa/YMF29qm18CTj8k4NDttbCUK5B3X3acRlvxZ3c359XyG7 5wAvXpZqpWWAlVUuq9kaYo4+fC68RSsNOZs4BDYKytrOOR1fpi2/z/kA7c2cii5Aht45 4bjoTMvSN6lR0XliNhQQDm0qWX8X74ezqkv7nkztD+vZuDPg/J4n6RFFcvpVZtRiX3Ij K5yZQYCAa0D2xsMCVAM3bBZfsj2H9uybq1YBRquGqvztNTuvZeQtAswGX5he5Apd+3r0 tQoQ== X-Gm-Message-State: AOJu0YwjritIK3h58NsyRLteCx3tyjXtg+OxUNHnzeNvKMOrB5rsHTNj pu1QXK3zaH/RX3NZvicJTS8= X-Google-Smtp-Source: AGHT+IFrsVRlNTiCXtpQUwsT7QsAF5X3cdZpAdzHe0eMl5VW2o0+6UdijY5XH/JPuYingf6VqvaYRA== X-Received: by 2002:a17:902:d48e:b0:1bc:5b36:a2e8 with SMTP id c14-20020a170902d48e00b001bc5b36a2e8mr221860plg.34.1692387808961; Fri, 18 Aug 2023 12:43:28 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:28 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v10 10/18] scsi: scsi_debug: Support injecting unaligned write errors Date: Fri, 18 Aug 2023 12:34:13 -0700 Message-ID: <20230818193546.2014874-11-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Allow user space software, e.g. a blktests test, to inject unaligned write errors. Acked-by: Douglas Gilbert Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_debug.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 1ea4925d2c2f..164e82c218ff 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -181,6 +181,7 @@ static const char *sdebug_version_date = "20210520"; #define SDEBUG_OPT_NO_CDB_NOISE 0x4000 #define SDEBUG_OPT_HOST_BUSY 0x8000 #define SDEBUG_OPT_CMD_ABORT 0x10000 +#define SDEBUG_OPT_UNALIGNED_WRITE 0x20000 #define SDEBUG_OPT_ALL_NOISE (SDEBUG_OPT_NOISE | SDEBUG_OPT_Q_NOISE | \ SDEBUG_OPT_RESET_NOISE) #define SDEBUG_OPT_ALL_INJECTING (SDEBUG_OPT_RECOVERED_ERR | \ @@ -188,7 +189,8 @@ static const char *sdebug_version_date = "20210520"; SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR | \ SDEBUG_OPT_SHORT_TRANSFER | \ SDEBUG_OPT_HOST_BUSY | \ - SDEBUG_OPT_CMD_ABORT) + SDEBUG_OPT_CMD_ABORT | \ + SDEBUG_OPT_UNALIGNED_WRITE) #define SDEBUG_OPT_RECOV_DIF_DIX (SDEBUG_OPT_RECOVERED_ERR | \ SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR) @@ -3587,6 +3589,14 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) struct sdeb_store_info *sip = devip2sip(devip, true); u8 *cmd = scp->cmnd; + if (unlikely(sdebug_opts & SDEBUG_OPT_UNALIGNED_WRITE && + atomic_read(&sdeb_inject_pending))) { + atomic_set(&sdeb_inject_pending, 0); + mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE, + UNALIGNED_WRITE_ASCQ); + return check_condition_result; + } + switch (cmd[0]) { case WRITE_16: ei_lba = 0; From patchwork Fri Aug 18 19:34:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358295 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0889EE499D for ; Fri, 18 Aug 2023 19:47:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379827AbjHRTqm (ORCPT ); Fri, 18 Aug 2023 15:46:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379864AbjHRTqY (ORCPT ); Fri, 18 Aug 2023 15:46:24 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D50524685; Fri, 18 Aug 2023 12:45:58 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id d9443c01a7336-1bc63ef9959so10706805ad.2; Fri, 18 Aug 2023 12:45:58 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387838; x=1692992638; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=V9kjXpHnX8SSylVORGDS67iBhAPRKcB9tfPcWy3ggq4=; b=JhRvcjLBKBejfceMQd7M8oYEIk5X6RWZBe4La5+Bqr2B7yeyOMHGf3vO48W0MWsikU z2WYcoFRYV0MLwgWZW6fnV5SbbKYmRGVEFBlvlGPB3fUzZhogolOTo/aGPk0RZ3P0Zby gBf0Qn0w3h4fA5d/jeBmxiIcbnNiraDtl5LQI7juKuF2nTWyz20iPV2RtjQ6/9tGPmfl EsLEVWTD8nb6p4W5i4OD73IOxyresuUjwTQSTJAbUlx39xMPFUItFi0Y/j1iQa5BrLfY IocpGtM+khA/TxDbntyAs0em5meWC9kxfa72BPb4pPaLzeFphsrhw+lxbv1pGUuq3KPI 1MLA== X-Gm-Message-State: AOJu0YyyAH97MbkqTlMYeRCzq/9swFZ6wOc5odZFzrQKVGrgptr9thjm /t0jcDR5nD1oRWIe6SurWCk= X-Google-Smtp-Source: AGHT+IFBQiZR4RDdP+7V78pWQFaYjj/RYAEN9c3+1CFNCsEGV/eOAVCCZN8B0Z+iYcVLvuNxCnveHg== X-Received: by 2002:a17:903:1111:b0:1b9:de75:d5bb with SMTP id n17-20020a170903111100b001b9de75d5bbmr300486plh.7.1692387838261; Fri, 18 Aug 2023 12:43:58 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.43.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:43:57 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , "James E.J. Bottomley" , Bean Huo , Jinyoung Choi , Peter Wang , Lu Hongfei , Daniil Lunev , Stanley Chu , Manivannan Sadhasivam , Asutosh Das , Arthur Simchaev , zhanghui , Po-Wen Kao , Eric Biggers , Keoseong Park Subject: [PATCH v10 11/18] scsi: ufs: Change the return type of ufshcd_auto_hibern8_update() Date: Fri, 18 Aug 2023 12:34:14 -0700 Message-ID: <20230818193546.2014874-12-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org A later patch will introduce an error path in ufshcd_auto_hibern8_update(). Change the return type of that function before introducing calls to that function in the host drivers such that the host drivers only have to be modified once. Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. 
Nguyen Reviewed-by: Can Guo --- drivers/ufs/core/ufs-sysfs.c | 2 +- drivers/ufs/core/ufshcd-priv.h | 1 - drivers/ufs/core/ufshcd.c | 6 ++++-- include/ufs/ufshcd.h | 2 +- 4 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c index 6c72075750dd..a693dea1bd18 100644 --- a/drivers/ufs/core/ufs-sysfs.c +++ b/drivers/ufs/core/ufs-sysfs.c @@ -203,7 +203,7 @@ static ssize_t auto_hibern8_store(struct device *dev, goto out; } - ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); + ret = ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); out: up(&hba->host_sem); diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 0f3bd943b58b..a2b74fbc2056 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -60,7 +60,6 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector, u32 *attr_val); int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag, struct cq_entry *cqe); int ufshcd_mcq_init(struct ufs_hba *hba); diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 129446775796..053b82eac80f 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,13 +4337,13 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; bool update = false; if (!ufshcd_is_auto_hibern8_supported(hba)) - return; + return 0; spin_lock_irqsave(hba->host->host_lock, flags); if (hba->ahit != ahit) { @@ -4360,6 +4360,8 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + + return 0; } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 6dc11fa0ebb1..ffc6ffe45fe1 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1364,7 +1364,7 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) } void ufshcd_auto_hibern8_enable(struct ufs_hba *hba); -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); #define SD_ASCII_STD true From patchwork Fri Aug 18 19:34:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358296 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 116C3EE499F for ; Fri, 18 Aug 2023 19:47:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379830AbjHRTqn (ORCPT ); Fri, 18 Aug 2023 15:46:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45616 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379883AbjHRTqa (ORCPT ); Fri, 18 Aug 2023 15:46:30 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F13A3C3F; Fri, 18 Aug 2023 12:46:02 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id d9443c01a7336-1bf092a16c9so10949595ad.0; Fri, 18 Aug 2023 12:46:02 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387846; x=1692992646; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+HldWEGBvoxP6ozgprpbRkTLOv311yhKIPJB3nLy/L8=; b=CjU0WVo2d3oQb/JH+8g0iC4fHI77Ot1oNibliRgudmOwXt3FvJuyBkdsBthji89WXt 0E8pbb1IQwiTFa83PMQ43SXgVg/w71RMjuvbkRqyJ17gIjwXkXHfPh2XSyxZNa3U2+y/ 6Rwjy73VcjQqatCwYWEi20vaAnAm0J0fdAW0Y/m2BZbchd16xkt+qLsEdKIVruDLXAOU gnGAVhySZSF6m8/2NBZ+q2x/wOOH9IZQ93UJ/BJbCZJAK959xJBYiX10cVdSNWd+1U2v WVhiwyX8GjLAI1NPmwV/Pbb3HKXRHs3nGui+95FihGJyTknpI76mXEPCOalw/c/vSvIn rkSQ== X-Gm-Message-State: AOJu0YzBPSYw5Jx7fUNQ4fGkDePKURnkpIlO0yDnWW4QhxjVR1iDl6r/ 7xLayzuLSc16SNZgwvyuohKubOjOIQs= X-Google-Smtp-Source: AGHT+IH9MXj6cLK3F5+CQkJSxomRMN7h17MmalCOsl7SNgC/rZXl4UjZU+m9MjJpzW0DqAKzoM6ycQ== X-Received: by 2002:a17:903:24f:b0:1bf:4833:9c25 with SMTP id j15-20020a170903024f00b001bf48339c25mr248127plh.36.1692387846421; Fri, 18 Aug 2023 12:44:06 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.44.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:44:06 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Krzysztof Kozlowski Subject: [PATCH v10 12/18] scsi: ufs: hisi: Rework the code that disables auto-hibernation Date: Fri, 18 Aug 2023 12:34:15 -0700 Message-ID: <20230818193546.2014874-13-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org The host driver link startup callback is called indirectly by ufshcd_probe_hba(). That function applies the auto-hibernation settings by writing hba->ahit into the auto-hibernation control register. Simplify the code for disabling auto-hibernation by setting hba->ahit instead of writing into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. 
Nguyen --- drivers/ufs/host/ufs-hisi.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c index 5b3060cd0ab8..f2ec687121bb 100644 --- a/drivers/ufs/host/ufs-hisi.c +++ b/drivers/ufs/host/ufs-hisi.c @@ -142,7 +142,6 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) struct ufs_hisi_host *host = ufshcd_get_variant(hba); int err; uint32_t value; - uint32_t reg; /* Unipro VS_mphy_disable */ ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0xD0C1, 0x0), 0x1); @@ -232,9 +231,7 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) ufshcd_writel(hba, UFS_HCLKDIV_NORMAL_VALUE, UFS_REG_HCLKDIV); /* disable auto H8 */ - reg = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER); - reg = reg & (~UFS_AHIT_AH8ITV_MASK); - ufshcd_writel(hba, reg, REG_AUTO_HIBERNATE_IDLE_TIMER); + hba->ahit = 0; /* Unipro PA_Local_TX_LCC_Enable */ ufshcd_disable_host_tx_lcc(hba); From patchwork Fri Aug 18 19:34:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21321EE499E for ; Fri, 18 Aug 2023 19:47:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379832AbjHRTqo (ORCPT ); Fri, 18 Aug 2023 15:46:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45640 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379886AbjHRTqa (ORCPT ); Fri, 18 Aug 2023 15:46:30 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C1EB4222; Fri, 18 Aug 2023 12:46:03 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id d9443c01a7336-1bdf4752c3cso9655895ad.2; Fri, 18 Aug 2023 12:46:03 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387848; x=1692992648; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KZcvkVqEgcZDFQS81H9kWGylORDZIjImpoq5H3+tcVc=; b=GjIpXLlNXitfjsuDnjX4WP8ZQ4HOE5ie4xK/Nu7TfTxLEUJ5HRqnr5nY0dltLaPpjx A0loBtuxgub3YhtG8OYL8DvnQ45RS4F8Od905pq4E5im+4Ilb/o69A8OA/ZsIMm2Qdgx 8kDyFCfHxbYrge4wFYro+RPYGrCOYw6/MiNzpasOt4RldrCeMm7HFxWUFNkfgJHnV4nx QM1WYTfEzLQOaeKPdvB1Yl5o5EpTy+fEtSqIOQD1ZM9CuhptqEZblJqOgT2yMUmS+/Ip Jq2GI1Ucoc5XncnkQ1o9zvAF0jDH4zKw0xCRCKqacaDYf6q+0duhk2ZGITthXZzDonQH hH3g== X-Gm-Message-State: AOJu0Yx/Bhq/bIQzjlmaBk/MyOG6e/OWpoCYrcYcfSUPAKYGfoxlckfh FmOv0EXdx0Pir+tFwwI9lkU= X-Google-Smtp-Source: AGHT+IEV1C9zA43zMqSI+JkbfJmRb0iMdmLvh/+5xAajttb3+NI7TnYzzudwcf4SLdwlJOlRLyQjVg== X-Received: by 2002:a17:902:c409:b0:1bc:6266:d0e4 with SMTP id k9-20020a170902c40900b001bc6266d0e4mr144999plk.69.1692387847876; Fri, 18 Aug 2023 12:44:07 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.44.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:44:07 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, 
linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , Stanley Chu , "James E.J. Bottomley" , Matthias Brugger Subject: [PATCH v10 13/18] scsi: ufs: mediatek: Rework the code for disabling auto-hibernation Date: Fri, 18 Aug 2023 12:34:16 -0700 Message-ID: <20230818193546.2014874-14-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Call ufshcd_auto_hibern8_update() instead of writing directly into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. Nguyen --- drivers/ufs/host/ufs-mediatek.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c index e68b05976f9e..266898a877b0 100644 --- a/drivers/ufs/host/ufs-mediatek.c +++ b/drivers/ufs/host/ufs-mediatek.c @@ -1252,7 +1252,7 @@ static void ufs_mtk_auto_hibern8_disable(struct ufs_hba *hba) int ret; /* disable auto-hibern8 */ - ufshcd_writel(hba, 0, REG_AUTO_HIBERNATE_IDLE_TIMER); + WARN_ON_ONCE(ufshcd_auto_hibern8_update(hba, 0) != 0); /* wait host return to idle state when auto-hibern8 off */ ufs_mtk_wait_idle_state(hba, 5); From patchwork Fri Aug 18 19:34:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358267 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08696EE4999 for ; Fri, 18 Aug 2023 19:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357693AbjHRTqJ (ORCPT ); Fri, 18 Aug 2023 15:46:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379866AbjHRTpx (ORCPT ); Fri, 18 Aug 2023 15:45:53 -0400 Received: from mail-pl1-f181.google.com (mail-pl1-f181.google.com [209.85.214.181]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0949B44B5; Fri, 18 Aug 2023 12:45:32 -0700 (PDT) Received: by mail-pl1-f181.google.com with SMTP id d9443c01a7336-1bee82fad0fso9942025ad.2; Fri, 18 Aug 2023 12:45:32 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387860; x=1692992660; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FAIQKgLHklFdKNw8QYfKTGv+/r8dWNnrRjcsYXO8i4g=; b=Gyl5zbWTt8PaIJhBMVgG3kAL7gfMLAI32nlTJQ9eZQ3MXF0iUqHqcwVl6yrVbbGZpk BAC64ttGM6o2Aqc152xCeR1UvsZCGpqmH7rgVmbyztgohcFW4ppy486VY2DRomO3RrBK xq778UyCHMqSt1OxLga8TpzVFh49DSS99ZOCb/oHsV/dgQWoRNfGf5ZbSKRiGd+WwbWJ tUFImqBOHuTbrX/Qa+ffmqJRg4UH0GaSCBzV5PNk7kC+OjH+ZorWt02o/YlVKHACQ9Oo uCgKTq5wbNT1W8fzlBUXu6XY1CK2b3GcRIpcd7b8d38WXx1UipuH6ggICRhvSnvre6hR 0dBw== X-Gm-Message-State: AOJu0YwLKZkgrpj3vwxPivvSc0GqNx9x1RuLdMHu+lYiHjDDpMCBc2cv +Vtx0TE0YTMDzVGWB03bVhw= X-Google-Smtp-Source: 
AGHT+IGJSJR3wKxhjL++9jaUMFFqASt4AEGcsOW2dxNGxK55rffvi+frwODR8BinIKDeRJAWQlspnA== X-Received: by 2002:a17:902:da81:b0:1be:e7eb:32fe with SMTP id j1-20020a170902da8100b001bee7eb32femr47409plx.45.1692387859794; Fri, 18 Aug 2023 12:44:19 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.44.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:44:19 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , "James E.J. Bottomley" , Orson Zhai , Baolin Wang , Chunyan Zhang , Adrian Hunter , Zhe Wang Subject: [PATCH v10 14/18] scsi: ufs: sprd: Rework the code for disabling auto-hibernation Date: Fri, 18 Aug 2023 12:34:17 -0700 Message-ID: <20230818193546.2014874-15-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Call ufshcd_auto_hibern8_update() instead of writing directly into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. Nguyen --- drivers/ufs/host/ufs-sprd.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/drivers/ufs/host/ufs-sprd.c b/drivers/ufs/host/ufs-sprd.c index 2bad75dd6d58..15a1876656a7 100644 --- a/drivers/ufs/host/ufs-sprd.c +++ b/drivers/ufs/host/ufs-sprd.c @@ -180,15 +180,8 @@ static int sprd_ufs_pwr_change_notify(struct ufs_hba *hba, static int ufs_sprd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op, enum ufs_notify_change_status status) { - unsigned long flags; - - if (status == PRE_CHANGE) { - if (ufshcd_is_auto_hibern8_supported(hba)) { - spin_lock_irqsave(hba->host->host_lock, flags); - ufshcd_writel(hba, 0, REG_AUTO_HIBERNATE_IDLE_TIMER); - spin_unlock_irqrestore(hba->host->host_lock, flags); - } - } + if (status == PRE_CHANGE) + WARN_ON_ONCE(ufshcd_auto_hibern8_update(hba, 0) != 0); return 0; } From patchwork Fri Aug 18 19:34:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358269 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 769D5EE499E for ; Fri, 18 Aug 2023 19:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379803AbjHRTqM (ORCPT ); Fri, 18 Aug 2023 15:46:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43330 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379901AbjHRTqA (ORCPT ); Fri, 18 Aug 2023 15:46:00 -0400 Received: from mail-pl1-f179.google.com (mail-pl1-f179.google.com [209.85.214.179]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5320C421C; Fri, 18 Aug 2023 12:45:37 -0700 (PDT) Received: by mail-pl1-f179.google.com with SMTP id 
d9443c01a7336-1bdbf10333bso10697445ad.1; Fri, 18 Aug 2023 12:45:37 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387876; x=1692992676; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hgOAgWZfyUnXg0UmZ/VcOkHNMzTjWrEOvdI22N5O+Zo=; b=GVKYv5mi9BVsUwrOIgoKmWBx+tBo2N3r4IjXwOjcMgHcbfr3pr2WCmFwTvlvTLoBSY QoyEg6yWP8VVjXP7C+qUpApTQT+kHeT6d6eqoQ85PPTxMG/2j72LCOm0pJL95OV8XlFE VFKyid7syMdqJwLYsiDbQGqi/2qS+NRPd4ojhPlynMEf7WJdz+cxpPj0GXMqEapetcS7 Y2Ygk9JLuz1oqY0wsxLiHSnbULD33UD6lvD+a/Bgq5RzEMz8soMzxlpzgi7Z0tlw2xEH neDkfXZjeLvmhx+WNHDWqVJkKFbKVqhDeoiJ78p1jOzBorzIjBnTzcp64fsUG+sTREeu vLKg== X-Gm-Message-State: AOJu0Yzu/5kkdB4+oC4ul4M7+mAXEio5TNcsPANdjSTsKslVXs+pDmCM GqI+4yeUfgBLei96i+14QpY= X-Google-Smtp-Source: AGHT+IEWcLSnhP38RPZJZ48kalt2cffBfI8OjqEUDdBnjJV1qJK07mUdEfXxLFzJtmm5p+Jd9Ud8pA== X-Received: by 2002:a17:902:e80e:b0:1bf:423:957b with SMTP id u14-20020a170902e80e00b001bf0423957bmr258608plg.26.1692387875707; Fri, 18 Aug 2023 12:44:35 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.44.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:44:35 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev , Manivannan Sadhasivam , Po-Wen Kao , Eric Biggers , Keoseong Park , Daniil Lunev Subject: [PATCH v10 15/18] scsi: ufs: Rename ufshcd_auto_hibern8_enable() and make it static Date: Fri, 18 Aug 2023 12:34:18 -0700 Message-ID: <20230818193546.2014874-16-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Rename ufshcd_auto_hibern8_enable() into ufshcd_configure_auto_hibern8() since this function can enable or disable auto-hibernation. Since ufshcd_auto_hibern8_enable() is only used inside the UFSHCI driver core, declare it static. Additionally, move the definition of this function to just before its first caller. Suggested-by: Bao D. Nguyen Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. 
Nguyen Reviewed-by: Can Guo --- drivers/ufs/core/ufshcd.c | 24 +++++++++++------------- include/ufs/ufshcd.h | 1 - 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 053b82eac80f..a08924972b20 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,6 +4337,14 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) +{ + if (!ufshcd_is_auto_hibern8_supported(hba)) + return; + + ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); +} + int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; @@ -4356,7 +4364,7 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } @@ -4365,14 +4373,6 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba) -{ - if (!ufshcd_is_auto_hibern8_supported(hba)) - return; - - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); -} - /** * ufshcd_init_pwr_info - setting the POR (power on reset) * values in hba power info @@ -8817,8 +8817,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) if (hba->ee_usr_mask) ufshcd_write_ee_control(hba); - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshpb_toggle_state(hba, HPB_RESET, HPB_PRESENT); out: @@ -9811,8 +9810,7 @@ static int __ufshcd_wl_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); } - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshpb_resume(hba); goto out; diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index ffc6ffe45fe1..7ae071a6811c 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1363,7 +1363,6 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba); int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); From patchwork Fri Aug 18 19:34:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358298 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54C87EE4997 for ; Fri, 18 Aug 2023 19:48:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237689AbjHRTro (ORCPT ); Fri, 18 Aug 2023 15:47:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48474 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379862AbjHRTr1 (ORCPT ); Fri, 18 Aug 2023 15:47:27 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id C515D46A0; Fri, 18 Aug 2023 12:46:47 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id d9443c01a7336-1bf3a2f44f0so7514845ad.2; Fri, 18 Aug 2023 12:46:47 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387885; x=1692992685; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Uz9IhUw0fhLa+d31AmvvxEIaG6umYa3gXf+jIQrc2ys=; b=PhRffwHiAyj6u6YTl0ReefmOAkIBluNmMZwCgHnLHOuykbGBxja3DYnUXDr0xRhEJH Cq+Fof+W3RHa5nrjM8POjIPB3MKgR9CSOCIKVsNvLtq33Z7xYP2m8pB0PWH3JvYQUhtM JcSp+yHs6PGPFwDRl2pZzvr6iTtXEJ/JpgHXn4FjuwUBy+jB5aHRyRlpWNt5SEvWc9qc oxcbDByyNohfV4v0NDqBl9z4uTtQOhgCMbZ1pzxtJQHRPVpkR0t4XRMgij5sEPHTNKTr f/ePVWBy+R2DsOnt6QiUS+G96udMSbZMtOjF8Yhzu7J7oP8EQ7tdr03KHwPCq1HOBfDl QFXw== X-Gm-Message-State: AOJu0Yzo49EX5HEpMqL6Z7KXGD1sFvFQ5HD6awXv5GwW628XUCaI510p o7yaCw9QPaivHmtZL/AxXGs= X-Google-Smtp-Source: AGHT+IHlu8hqOTbdqsplVPmXn7e7E+Hc5Whx5dtj8RVMixWwtLZu6cu1FccsMZoLOVVJr8Khijpabw== X-Received: by 2002:a17:902:ee89:b0:1bd:e877:a7bc with SMTP id a9-20020a170902ee8900b001bde877a7bcmr226318pld.7.1692387884731; Fri, 18 Aug 2023 12:44:44 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.44.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:44:44 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v10 16/18] scsi: ufs: Simplify ufshcd_auto_hibern8_update() Date: Fri, 18 Aug 2023 12:34:19 -0700 Message-ID: <20230818193546.2014874-17-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Calls to ufshcd_auto_hibern8_update() are already serialized: this function is either called if user space software is not running (preparing to suspend) or from a single sysfs store callback function. Kernfs serializes sysfs .store() callbacks. No functionality is changed. This patch makes the next patch in this series easier to read. Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. 
Nguyen Reviewed-by: Can Guo --- drivers/ufs/core/ufshcd.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index a08924972b20..68bf8e94eee6 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4347,21 +4347,13 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { - unsigned long flags; - bool update = false; + const u32 cur_ahit = READ_ONCE(hba->ahit); - if (!ufshcd_is_auto_hibern8_supported(hba)) + if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return 0; - spin_lock_irqsave(hba->host->host_lock, flags); - if (hba->ahit != ahit) { - hba->ahit = ahit; - update = true; - } - spin_unlock_irqrestore(hba->host->host_lock, flags); - - if (update && - !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { + WRITE_ONCE(hba->ahit, ahit); + if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); ufshcd_configure_auto_hibern8(hba); From patchwork Fri Aug 18 19:34:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358293 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9816AEE4997 for ; Fri, 18 Aug 2023 19:47:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379821AbjHRTql (ORCPT ); Fri, 18 Aug 2023 15:46:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32952 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379680AbjHRTqV (ORCPT ); Fri, 18 Aug 2023 15:46:21 -0400 Received: from mail-pf1-f181.google.com (mail-pf1-f181.google.com [209.85.210.181]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F2AED44BA; Fri, 18 Aug 2023 12:45:55 -0700 (PDT) Received: by mail-pf1-f181.google.com with SMTP id d2e1a72fcca58-689fa6f94e1so1078081b3a.1; Fri, 18 Aug 2023 12:45:55 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387893; x=1692992693; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=3nGkbMr8f4Di9fmNTcg0EeI1U2lHMPKKJiTC7Gl/Le4=; b=Y92L83YMw/FmKVh5ovIFjwroTnBJruzPPY3jaGButfzCkPXBRyRjLZOA79MZ/hfckY XKBhjSV5xIGpsfCbNmsR6nkygIMntXDq3x7f6FKYWOcaf+hqlrfn1bNI69rHKcH+WyV3 7qyt9f6TysR35asNUj6PNJpxURl3uitpsY2w/2qDlJwiAA+QA4d0KftpJlzwb0nLsLLi hMA/Lw1YjtnP4RjvQ/ypEVsT2dalNv0LbsnB7tZnX73gi0NUehvjjg4klE+LAg//gZaS Eo54mYicPXc1cbH7RDUctxkbejIQYYBEflFgNj6LSZzXj1nWt15r9btw0Av5Nk5nwHp3 k+3w== X-Gm-Message-State: AOJu0Yx/wdxopz0G87nZsuEgDrUWPtJFwpiBvNkVsbfp63JiGr2RQWi7 th9FM3w9za/I1yZDLWF88xw= X-Google-Smtp-Source: AGHT+IENyR4lLjxKxnHJugBFI37tqAeFW35iUCCyJM4vlgSxkiPPnM+YQGedDS8bleV9sDwcAKw+oA== X-Received: by 2002:a05:6a20:548f:b0:143:b9c4:966 with SMTP id i15-20020a056a20548f00b00143b9c40966mr236838pzk.22.1692387892903; Fri, 18 Aug 2023 12:44:52 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.44.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 
bits=256/256); Fri, 18 Aug 2023 12:44:52 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v10 17/18] scsi: ufs: Forbid auto-hibernation without I/O scheduler Date: Fri, 18 Aug 2023 12:34:20 -0700 Message-ID: <20230818193546.2014874-18-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org UFSHCI 3.0 controllers do not preserve the write order if auto-hibernation is enabled. If the write order is not preserved, an I/O scheduler is required to serialize zoned writes. Hence do not allow auto-hibernation to be enabled without I/O scheduler if a zoned logical unit is present and if the controller is operating in legacy mode. This patch has been tested with the following shell script: show_ah8() { echo -n "auto_hibern8: " adb shell "cat /sys/devices/platform/13200000.ufs/auto_hibern8" } set_ah8() { local rc adb shell "echo $1 > /sys/devices/platform/13200000.ufs/auto_hibern8" rc=$? show_ah8 return $rc } set_iosched() { adb shell "echo $1 >/sys/class/block/$zoned_bdev/queue/scheduler && echo -n 'I/O scheduler: ' && cat /sys/class/block/sde/queue/scheduler" } adb root zoned_bdev=$(adb shell grep -lvw 0 /sys/class/block/sd*/queue/chunk_sectors |& sed 's|/sys/class/block/||g;s|/queue/chunk_sectors||g') [ -n "$zoned_bdev" ] show_ah8 set_ah8 0 set_iosched none if set_ah8 150000; then echo "Error: enabled AH8 without I/O scheduler" fi set_iosched mq-deadline set_ah8 150000 Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Reviewed-by: Can Guo --- drivers/ufs/core/ufshcd.c | 58 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 68bf8e94eee6..c28a362b5b99 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,6 +4337,30 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, + bool preserves_write_order) +{ + struct scsi_device *sdev; + + if (!preserves_write_order) { + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + /* + * Refuse to enable auto-hibernation if no I/O scheduler + * is present. This code does not check whether the + * attached I/O scheduler serializes zoned writes + * (ELEVATOR_F_ZBD_SEQ_WRITE) because this cannot be + * checked from outside the block layer core. + */ + if (blk_queue_is_zoned(q) && !q->elevator) + return -EPERM; + } + } + + return 0; +} + static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) { if (!ufshcd_is_auto_hibern8_supported(hba)) @@ -4345,13 +4369,40 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); } +/** + * ufshcd_auto_hibern8_update() - Modify the auto-hibernation control register + * @hba: per-adapter instance + * @ahit: New auto-hibernate settings. Includes the scale and the value of the + * auto-hibernation timer. 
See also the UFSHCI_AHIBERN8_TIMER_MASK and + * UFSHCI_AHIBERN8_SCALE_MASK constants. + * + * A note: UFSHCI controllers do not preserve the command order in legacy mode + * if auto-hibernation is enabled. If the command order is not preserved, an + * I/O scheduler that serializes zoned writes (mq-deadline) is required if a + * zoned logical unit is present. Enabling auto-hibernation without attaching + * the mq-deadline scheduler first may cause unaligned write errors for the + * zoned logical unit if a zoned logical unit is present. + */ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { const u32 cur_ahit = READ_ONCE(hba->ahit); + bool prev_state, new_state; + int ret; if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return 0; + prev_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, cur_ahit); + new_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, ahit); + + if (!is_mcq_enabled(hba) && !prev_state && new_state) { + /* + * Auto-hibernation will be enabled for legacy UFSHCI mode. + */ + ret = ufshcd_update_preserves_write_order(hba, false); + if (ret) + return ret; + } WRITE_ONCE(hba->ahit, ahit); if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); @@ -4360,6 +4411,13 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + if (!is_mcq_enabled(hba) && prev_state && !new_state) { + /* + * Auto-hibernation has been disabled. + */ + ret = ufshcd_update_preserves_write_order(hba, true); + WARN_ON_ONCE(ret); + } return 0; } From patchwork Fri Aug 18 19:34:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13358299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14C30EE498F for ; Fri, 18 Aug 2023 19:48:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379741AbjHRTro (ORCPT ); Fri, 18 Aug 2023 15:47:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46664 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379885AbjHRTrh (ORCPT ); Fri, 18 Aug 2023 15:47:37 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A876B421C; Fri, 18 Aug 2023 12:46:58 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id d9443c01a7336-1bdbbede5d4so10566595ad.2; Fri, 18 Aug 2023 12:46:58 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692387901; x=1692992701; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=G2eimpFm6MK7s06b9e2Yr8gGU9ocIPL07Hu4zNTtE8k=; b=icjYkjleRgprSdUBwxJldGgKPFPquCqEjZDmVVR7NRV+WZng7THctHt67Zl1vVKe2n yS03uMsebQ/NyEkoFaLkC7j3hZ2nJLaiRZzpYkK/P1HEOLY+CO5GtzU3XqolNaWGXMpI 8DR1mF6p6LNzuqijMWoyHMVm4axzVnFvcfo3p6U5TRUA1lghhuln1R+TxqQm4tlhzP2o xrg/Fx0HfbBStdJNJvUdogdPuD7zD7SlQd48ZK4vQTFi2tPhB7Fv7KugWu26MKx301yI Th21tGzRL6qO5dYL9Zyl2AJDmULuHeUocMQGN8mnPkrWRpavpcGSvgKuG2ot/6dUPyKj zoFg== X-Gm-Message-State: AOJu0YzPnXkiBFy4VVZrOlDkoZ/ESvEx7mp+UEhFkBnh/cEp2kZF4/Rn FZRE9UPeTQEMd6gckF82GpQ= X-Google-Smtp-Source: 
AGHT+IEzQyQxebBEERIXq39XaYdL9hcGXjvOR8wwBm3yITuIDviWJAlq8pG7+z7S6AnZS8ARxnK3sQ== X-Received: by 2002:a17:902:e809:b0:1bc:671d:6d28 with SMTP id u9-20020a170902e80900b001bc671d6d28mr265674plg.10.1692387901309; Fri, 18 Aug 2023 12:45:01 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:5012:5192:47aa:c304]) by smtp.gmail.com with ESMTPSA id u16-20020a170903125000b001bb8be10a84sm2115801plh.304.2023.08.18.12.45.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 18 Aug 2023 12:45:01 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Can Guo , Avri Altman , "Bao D . Nguyen" , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v10 18/18] scsi: ufs: Inform the block layer about write ordering Date: Fri, 18 Aug 2023 12:34:21 -0700 Message-ID: <20230818193546.2014874-19-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230818193546.2014874-1-bvanassche@acm.org> References: <20230818193546.2014874-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org From the UFSHCI 4.0 specification, about the legacy (single queue) mode: "The host controller always process transfer requests in-order according to the order submitted to the list. In case of multiple commands with single doorbell register ringing (batch mode), The dispatch order for these transfer requests by host controller will base on their index in the List. A transfer request with lower index value will be executed before a transfer request with higher index value." From the UFSHCI 4.0 specification, about the MCQ mode: "Command Submission 1. Host SW writes an Entry to SQ 2. Host SW updates SQ doorbell tail pointer Command Processing 3. After fetching the Entry, Host Controller updates SQ doorbell head pointer 4. Host controller sends COMMAND UPIU to UFS device" In other words, for both legacy and MCQ mode, UFS controllers are required to forward commands to the UFS device in the order these commands have been received from the host. Notes: - For legacy mode this is only correct if the host submits one command at a time. The UFS driver does this. - Also in legacy mode, the command order is not preserved if auto-hibernation is enabled in the UFS controller. Hence, enable zone write locking if auto-hibernation is enabled. This patch improves performance as follows on my test setup: - With the mq-deadline scheduler: 2.5x more IOPS for small writes. - When not using an I/O scheduler compared to using mq-deadline with zone locking: 4x more IOPS for small writes. Cc: Martin K. Petersen Cc: Can Guo Cc: Avri Altman Cc: Bao D. Nguyen Signed-off-by: Bart Van Assche Reviewed-by: Bao D. 
Nguyen Reviewed-by: Can Guo --- drivers/ufs/core/ufshcd.c | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index c28a362b5b99..a685058d4943 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4357,6 +4357,19 @@ static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, return -EPERM; } } + shost_for_each_device(sdev, hba->host) + blk_freeze_queue_start(sdev->request_queue); + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + blk_mq_freeze_queue_wait(q); + q->limits.driver_preserves_write_order = preserves_write_order; + blk_queue_required_elevator_features(q, + preserves_write_order ? 0 : ELEVATOR_F_ZBD_SEQ_WRITE); + if (q->disk) + disk_set_zoned(q->disk, q->limits.zoned); + blk_mq_unfreeze_queue(q); + } return 0; } @@ -4397,7 +4410,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) if (!is_mcq_enabled(hba) && !prev_state && new_state) { /* - * Auto-hibernation will be enabled for legacy UFSHCI mode. + * Auto-hibernation will be enabled for legacy UFSHCI mode. Tell + * the block layer that write requests may be reordered. */ ret = ufshcd_update_preserves_write_order(hba, false); if (ret) @@ -4413,7 +4427,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) } if (!is_mcq_enabled(hba) && prev_state && !new_state) { /* - * Auto-hibernation has been disabled. + * Auto-hibernation has been disabled. Tell the block layer that + * the order of write requests is preserved. */ ret = ufshcd_update_preserves_write_order(hba, true); WARN_ON_ONCE(ret); @@ -5191,6 +5206,9 @@ static int ufshcd_slave_configure(struct scsi_device *sdev) ufshcd_hpb_configure(hba, sdev); + q->limits.driver_preserves_write_order = + !ufshcd_is_auto_hibern8_supported(hba) || + FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) == 0; blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT) blk_queue_update_dma_alignment(q, SZ_4K - 1);
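The patches above all communicate write ordering to the block layer through the driver_preserves_write_order queue limit used by the scsi_debug and UFS hunks in this series. As an illustration only, the sketch below shows how a hypothetical SCSI low-level driver whose hardware keeps WRITE commands in submission order could report that from its ->slave_configure() callback, in the same style as those hunks. The example_* identifiers are made up for this sketch and are not part of the series; a real driver would derive the flag from what it knows about its hardware and transport.

/* Illustrative sketch only; example_* names are hypothetical. */
#include <linux/blkdev.h>
#include <scsi/scsi_device.h>

/* Assumed driver-specific knowledge: the hardware preserves write order. */
static bool example_preserves_write_order = true;

static int example_slave_configure(struct scsi_device *sdev)
{
	struct request_queue *q = sdev->request_queue;

	/*
	 * Tell the block layer that WRITE commands reach the device in
	 * submission order, so per-zone write serialization is not needed.
	 */
	q->limits.driver_preserves_write_order = example_preserves_write_order;

	return 0;
}

With the scsi_debug patch above, the same path can be exercised without real hardware by loading scsi_debug with preserves_write_order=1.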