From patchwork Tue Aug 22 19:16:56 2023 X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361367 From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K. Petersen", Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei Subject: [PATCH v11 01/16] block: Introduce more member variables related to zone write locking Date: Tue, 22 Aug 2023 12:16:56 -0700 Message-ID: <20230822191822.337080-2-bvanassche@acm.org> In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> X-Mailing-List: linux-block@vger.kernel.org

Many but not all storage controllers require serialization of zoned writes. Introduce two new request queue limit member variables related to write serialization. 'driver_preserves_write_order' allows block drivers to indicate that the order of write commands is preserved and hence that serialization of writes per zone is not required.
'use_zone_write_lock' is set by disk_set_zoned() if and only if the block device has zones and if the block driver does not preserve the order of write requests. Reviewed-by: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Hannes Reinecke Reviewed-by: Nitesh Shetty --- block/blk-settings.c | 15 +++++++++++++++ block/blk-zoned.c | 1 + include/linux/blkdev.h | 10 ++++++++++ 3 files changed, 26 insertions(+) diff --git a/block/blk-settings.c b/block/blk-settings.c index 0046b447268f..4c776c08f190 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -56,6 +56,8 @@ void blk_set_default_limits(struct queue_limits *lim) lim->alignment_offset = 0; lim->io_opt = 0; lim->misaligned = 0; + lim->driver_preserves_write_order = false; + lim->use_zone_write_lock = false; lim->zoned = BLK_ZONED_NONE; lim->zone_write_granularity = 0; lim->dma_alignment = 511; @@ -82,6 +84,8 @@ void blk_set_stacking_limits(struct queue_limits *lim) lim->max_dev_sectors = UINT_MAX; lim->max_write_zeroes_sectors = UINT_MAX; lim->max_zone_append_sectors = UINT_MAX; + /* Request-based stacking drivers do not reorder requests. */ + lim->driver_preserves_write_order = true; } EXPORT_SYMBOL(blk_set_stacking_limits); @@ -685,6 +689,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, b->max_secure_erase_sectors); t->zone_write_granularity = max(t->zone_write_granularity, b->zone_write_granularity); + t->driver_preserves_write_order = t->driver_preserves_write_order && + b->driver_preserves_write_order; + t->use_zone_write_lock = t->use_zone_write_lock || + b->use_zone_write_lock; t->zoned = max(t->zoned, b->zoned); return ret; } @@ -949,6 +957,13 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model) } q->limits.zoned = model; + /* + * Use the zone write lock only for zoned block devices and only if + * the block driver does not preserve the order of write commands. + */ + q->limits.use_zone_write_lock = model != BLK_ZONED_NONE && + !q->limits.driver_preserves_write_order; + if (model != BLK_ZONED_NONE) { /* * Set the zone write granularity to the device logical block diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 619ee41a51cc..112620985bff 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -631,6 +631,7 @@ void disk_clear_zone_settings(struct gendisk *disk) q->limits.chunk_sectors = 0; q->limits.zone_write_granularity = 0; q->limits.max_zone_append_sectors = 0; + q->limits.use_zone_write_lock = false; blk_mq_unfreeze_queue(q); } diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index c1421da8d45e..38a1a71b2bcc 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -316,6 +316,16 @@ struct queue_limits { unsigned char misaligned; unsigned char discard_misaligned; unsigned char raid_partial_stripes_expensive; + /* + * Whether or not the block driver preserves the order of write + * requests. Set by the block driver. + */ + bool driver_preserves_write_order; + /* + * Whether or not zone write locking should be used. Set by + * disk_set_zoned(). 
+ */ + bool use_zone_write_lock; enum blk_zoned_model zoned; /*

From patchwork Tue Aug 22 19:16:57 2023 X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361369 From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K. Petersen", Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei Subject: [PATCH v11 02/16] block: Only use write locking if necessary Date: Tue, 22 Aug 2023 12:16:57 -0700 Message-ID: <20230822191822.337080-3-bvanassche@acm.org> In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> X-Mailing-List: linux-block@vger.kernel.org

Make blk_req_needs_zone_write_lock() return false if q->limits.use_zone_write_lock is false.
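The interaction between the two new limits can be illustrated with a short sketch. This sketch is not part of the series: the driver name and callback below are hypothetical; only the queue_limits fields and the disk_set_zoned() behavior come from patches 01 and 02, and the way the flag is set mirrors what the scsi_debug patch later in this series does.

#include <linux/blkdev.h>

/* Hypothetical LLD init path for a controller that preserves write order. */
static void exampledrv_init_disk(struct gendisk *disk)
{
	struct request_queue *q = disk->queue;

	/* Tell the block layer that this driver preserves the order of writes. */
	q->limits.driver_preserves_write_order = true;

	/*
	 * disk_set_zoned() computes use_zone_write_lock as
	 * (model != BLK_ZONED_NONE) && !driver_preserves_write_order, so it
	 * stays false here and blk_req_needs_zone_write_lock() returns false
	 * for requests on this queue.
	 */
	disk_set_zoned(disk, BLK_ZONED_HM);
}

Request-based stacking drivers get the same effect implicitly because blk_set_stacking_limits() sets driver_preserves_write_order to true.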
Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Hannes Reinecke Reviewed-by: Nitesh Shetty --- block/blk-zoned.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 112620985bff..d8a80cce832f 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -53,11 +53,16 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond) EXPORT_SYMBOL_GPL(blk_zone_cond_str); /* - * Return true if a request is a write requests that needs zone write locking. + * Return true if a request is a write request that needs zone write locking. */ bool blk_req_needs_zone_write_lock(struct request *rq) { - if (!rq->q->disk->seq_zones_wlock) + struct request_queue *q = rq->q; + + if (!q->limits.use_zone_write_lock) + return false; + + if (!q->disk->seq_zones_wlock) return false; return blk_rq_is_seq_zoned_write(rq);

From patchwork Tue Aug 22 19:16:58 2023 X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361370 From: Bart Van Assche To: Jens Axboe
Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei Subject: [PATCH v11 03/16] block/mq-deadline: Only use zone locking if necessary Date: Tue, 22 Aug 2023 12:16:58 -0700 Message-ID: <20230822191822.337080-4-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Measurements have shown that limiting the queue depth to one per zone for zoned writes has a significant negative performance impact on zoned UFS devices. Hence this patch that disables zone locking by the mq-deadline scheduler if the storage controller preserves the command order. This patch is based on the following assumptions: - It happens infrequently that zoned write requests are reordered by the block layer. - The I/O priority of all write requests is the same per zone. - Either no I/O scheduler is used or an I/O scheduler is used that serializes write requests per zone. Reviewed-by: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche Reviewed-by: Nitesh Shetty --- block/mq-deadline.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/block/mq-deadline.c b/block/mq-deadline.c index f958e79277b8..082ccf3186f4 100644 --- a/block/mq-deadline.c +++ b/block/mq-deadline.c @@ -353,7 +353,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio, return NULL; rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next); - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -398,7 +398,7 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio, if (!rq) return NULL; - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -526,8 +526,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd, } /* - * For a zoned block device, if we only have writes queued and none of - * them can be dispatched, rq will be NULL. + * For a zoned block device that requires write serialization, if we + * only have writes queued and none of them can be dispatched, rq will + * be NULL. 
 */ if (!rq) return NULL; @@ -934,7 +935,7 @@ static void dd_finish_request(struct request *rq) atomic_inc(&per_prio->stats.completed); - if (blk_queue_is_zoned(q)) { + if (rq->q->limits.use_zone_write_lock) { unsigned long flags; spin_lock_irqsave(&dd->zone_lock, flags);

From patchwork Tue Aug 22 19:16:59 2023 X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361371 From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K. Petersen", Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei, "James E.J. Bottomley" Subject: [PATCH v11 04/16] scsi: core: Introduce a mechanism for reordering requests in the error handler Date: Tue, 22 Aug 2023 12:16:59 -0700 Message-ID: <20230822191822.337080-5-bvanassche@acm.org> In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> X-Mailing-List: linux-block@vger.kernel.org

Introduce the .eh_needs_prepare_resubmit and the .eh_prepare_resubmit function pointers in struct scsi_driver.
Make the error handler call .eh_prepare_resubmit() before resubmitting commands if any of the .eh_needs_prepare_resubmit() invocations return true. A later patch will use this functionality to sort SCSI commands by LBA from inside the SCSI disk driver before these are resubmitted by the error handler. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 65 ++++++++++++++++++++++++++++++++++++++ drivers/scsi/scsi_priv.h | 1 + include/scsi/scsi_driver.h | 2 ++ 3 files changed, 68 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c67cdcdc3ba8..c4d817f044a0 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -2186,6 +2187,68 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost, } EXPORT_SYMBOL_GPL(scsi_eh_ready_devs); +/* + * Returns true if .eh_prepare_resubmit should be called for the commands in + * @done_q. + */ +static bool scsi_needs_preparation(struct list_head *done_q) +{ + struct scsi_cmnd *scmd; + + list_for_each_entry(scmd, done_q, eh_entry) { + struct scsi_driver *uld = scsi_cmd_to_driver(scmd); + bool (*npr)(struct scsi_cmnd *) = uld->eh_needs_prepare_resubmit; + + if (npr && npr(scmd)) + return true; + } + + return false; +} + +/* + * Comparison function that allows to sort SCSI commands by ULD driver. + */ +static int scsi_cmp_uld(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + + /* See also the comment above the list_sort() definition. */ + return scsi_cmd_to_driver(a) > scsi_cmd_to_driver(b); +} + +void scsi_call_prepare_resubmit(struct list_head *done_q) +{ + struct scsi_cmnd *scmd, *next; + + if (!scsi_needs_preparation(done_q)) + return; + + /* Sort pending SCSI commands by ULD. */ + list_sort(NULL, done_q, scsi_cmp_uld); + + /* + * Call .eh_prepare_resubmit for each range of commands with identical + * ULD driver pointer. + */ + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { + struct scsi_driver *uld = scsi_cmd_to_driver(scmd); + struct list_head *prev, uld_cmd_list; + + while (&next->eh_entry != done_q && + scsi_cmd_to_driver(next) == uld) + next = list_next_entry(next, eh_entry); + if (!uld->eh_prepare_resubmit) + continue; + prev = scmd->eh_entry.prev; + list_cut_position(&uld_cmd_list, prev, next->eh_entry.prev); + uld->eh_prepare_resubmit(&uld_cmd_list); + list_splice(&uld_cmd_list, prev); + } +} + /** * scsi_eh_flush_done_q - finish processed commands or retry them. * @done_q: list_head of processed commands. 
@@ -2194,6 +2257,8 @@ void scsi_eh_flush_done_q(struct list_head *done_q) { struct scsi_cmnd *scmd, *next; + scsi_call_prepare_resubmit(done_q); + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { list_del_init(&scmd->eh_entry); if (scsi_device_online(scmd->device) && diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h index f42388ecb024..df4af4645430 100644 --- a/drivers/scsi/scsi_priv.h +++ b/drivers/scsi/scsi_priv.h @@ -101,6 +101,7 @@ int scsi_eh_get_sense(struct list_head *work_q, struct list_head *done_q); bool scsi_noretry_cmd(struct scsi_cmnd *scmd); void scsi_eh_done(struct scsi_cmnd *scmd); +void scsi_call_prepare_resubmit(struct list_head *done_q); /* scsi_lib.c */ extern int scsi_maybe_unblock_host(struct scsi_device *sdev); diff --git a/include/scsi/scsi_driver.h b/include/scsi/scsi_driver.h index 4ce1988b2ba0..00ffa470724a 100644 --- a/include/scsi/scsi_driver.h +++ b/include/scsi/scsi_driver.h @@ -18,6 +18,8 @@ struct scsi_driver { int (*done)(struct scsi_cmnd *); int (*eh_action)(struct scsi_cmnd *, int); void (*eh_reset)(struct scsi_cmnd *); + bool (*eh_needs_prepare_resubmit)(struct scsi_cmnd *cmd); + void (*eh_prepare_resubmit)(struct list_head *cmd_list); }; #define to_scsi_driver(drv) \ container_of((drv), struct scsi_driver, gendrv) From patchwork Tue Aug 22 19:17:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361372 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB791EE4993 for ; Tue, 22 Aug 2023 19:19:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229958AbjHVTTB (ORCPT ); Tue, 22 Aug 2023 15:19:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229864AbjHVTTA (ORCPT ); Tue, 22 Aug 2023 15:19:00 -0400 Received: from mail-pj1-f41.google.com (mail-pj1-f41.google.com [209.85.216.41]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A9886CFE; Tue, 22 Aug 2023 12:18:56 -0700 (PDT) Received: by mail-pj1-f41.google.com with SMTP id 98e67ed59e1d1-26d4e1ba2dbso2645500a91.1; Tue, 22 Aug 2023 12:18:56 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731936; x=1693336736; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=adWNOW49kVXe1MZ2NF8xJiy0tj9xGw91as9Acxixn3o=; b=CXjdYKXz1xnF9fU+id0lG7JfMIRnKPM+5GJ4KXXidwEGD+14M2WExY2jFgRLZOQDK0 JtBP9HbAk28A0KYg66VwPAwdkR2w+usjBx9iWWJBTT1swQg94yR4vS3Qy5feas84+P9m QbX3GyM+Zyz3DhoyWKWbEgw4+21e5kNyGyBIgU2tYyxRylNsobd3IIabuv7qt/A/JmmD ynnE8K+b38zH1s8/0YT2SFRFCChpJZ2JU7r+6zP5sZq9WiHAR3kqzQmJT1QvebFfvi3h jgSnuAudIO6u47uVdAc8STPLKvmTZRHrwZ5Yos8xZUJyAQIeh471aNRuFJ0o7DejbctT aw9g== X-Gm-Message-State: AOJu0YxIUFkxQeY3XGXOiX4Ziz5gZghGyYFP3kclLL8uNn8718OBTQrN BOf1I0KG8sdnglDq/HvUCdM= X-Google-Smtp-Source: AGHT+IFTgeHjinpgwEw5ewk3XZ0jajKlF2OI35vr5IvEsv/AYW2+ieYC9HfLRPTQRaCLz59K2E9DOw== X-Received: by 2002:a17:90b:124b:b0:263:9661:a35c with SMTP id gx11-20020a17090b124b00b002639661a35cmr7806746pjb.8.1692731935960; Tue, 22 Aug 2023 12:18:55 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com 
([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.18.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:18:55 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v11 05/16] scsi: core: Add unit tests for scsi_call_prepare_resubmit() Date: Tue, 22 Aug 2023 12:17:00 -0700 Message-ID: <20230822191822.337080-6-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Triggering all code paths in scsi_call_prepare_resubmit() via manual testing is difficult. Hence add unit tests for this function. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/Kconfig | 2 + drivers/scsi/Kconfig.kunit | 4 + drivers/scsi/Makefile | 2 + drivers/scsi/Makefile.kunit | 1 + drivers/scsi/scsi_error_test.c | 207 +++++++++++++++++++++++++++++++++ 5 files changed, 216 insertions(+) create mode 100644 drivers/scsi/Kconfig.kunit create mode 100644 drivers/scsi/Makefile.kunit create mode 100644 drivers/scsi/scsi_error_test.c diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 4962ce989113..fc288f8fb800 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -232,6 +232,8 @@ config SCSI_SCAN_ASYNC Note that this setting also affects whether resuming from system suspend will be performed asynchronously. +source "drivers/scsi/Kconfig.kunit" + menu "SCSI Transports" depends on SCSI diff --git a/drivers/scsi/Kconfig.kunit b/drivers/scsi/Kconfig.kunit new file mode 100644 index 000000000000..90984a6ec7cc --- /dev/null +++ b/drivers/scsi/Kconfig.kunit @@ -0,0 +1,4 @@ +config SCSI_ERROR_TEST + tristate "scsi_error.c unit tests" if !KUNIT_ALL_TESTS + depends on SCSI && KUNIT + default KUNIT_ALL_TESTS diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index f055bfd54a68..1c5c3afb6c6e 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -168,6 +168,8 @@ scsi_mod-$(CONFIG_PM) += scsi_pm.o scsi_mod-$(CONFIG_SCSI_DH) += scsi_dh.o scsi_mod-$(CONFIG_BLK_DEV_BSG) += scsi_bsg.o +include $(srctree)/drivers/scsi/Makefile.kunit + hv_storvsc-y := storvsc_drv.o sd_mod-objs := sd.o diff --git a/drivers/scsi/Makefile.kunit b/drivers/scsi/Makefile.kunit new file mode 100644 index 000000000000..3e98053b2709 --- /dev/null +++ b/drivers/scsi/Makefile.kunit @@ -0,0 +1 @@ +obj-$(CONFIG_SCSI_ERROR_TEST) += scsi_error_test.o diff --git a/drivers/scsi/scsi_error_test.c b/drivers/scsi/scsi_error_test.c new file mode 100644 index 000000000000..be0d25e1fb57 --- /dev/null +++ b/drivers/scsi/scsi_error_test.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2023 Google LLC + */ +#include +#include +#include +#include "scsi_priv.h" + +static struct kunit *kunit_test; + +static bool uld_needs_prepare_resubmit(struct scsi_cmnd *cmd) +{ + struct request *rq = scsi_cmd_to_rq(cmd); + + return !rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq); +} + +static void uld_prepare_resubmit(struct list_head *cmd_list) +{ + /* This function must not be called. 
*/ + KUNIT_EXPECT_TRUE(kunit_test, false); +} + +/* + * Verify that .eh_prepare_resubmit() is not called if use_zone_write_lock is + * true. + */ +static void test_prepare_resubmit1(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = false, + .use_zone_write_lock = true, + .zoned = BLK_ZONED_HM, + } + }; + static struct scsi_driver uld = { + .eh_needs_prepare_resubmit = uld_needs_prepare_resubmit, + .eh_prepare_resubmit = uld_prepare_resubmit, + }; + static struct scsi_device dev = { + .request_queue = &q, + .sdev_gendev.driver = &uld.gendrv, + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + disk.queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 2, + }, + .cmd.device = &dev, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 1; + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); +} + +static struct scsi_driver *uld1, *uld2, *uld3; + +static void uld1_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld1); +} + +static void uld2_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld2); +} + +static void test_prepare_resubmit2(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = true, + .use_zone_write_lock = false, + .zoned = BLK_ZONED_HM, + } + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2, cmd3, cmd4, cmd5, cmd6; + static struct scsi_device dev1, dev2, dev3; + struct scsi_driver *uld; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + uld = kzalloc(3 * sizeof(*uld), GFP_KERNEL); + uld1 = &uld[0]; + uld1->eh_needs_prepare_resubmit = uld_needs_prepare_resubmit; + uld1->eh_prepare_resubmit = uld1_prepare_resubmit; + uld2 = &uld[1]; + uld2->eh_needs_prepare_resubmit = uld_needs_prepare_resubmit; + uld2->eh_prepare_resubmit = uld2_prepare_resubmit; + uld3 = &uld[2]; + disk.queue = &q; + dev1.sdev_gendev.driver = &uld1->gendrv; + dev1.request_queue = &q; + dev2.sdev_gendev.driver = &uld2->gendrv; + dev2.request_queue = &q; + dev3.sdev_gendev.driver = &uld3->gendrv; + dev3.request_queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 3, + }, + .cmd.device = &dev1, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 4; + cmd3 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 1, + }, + .cmd.device = &dev2, + }; + cmd4 = cmd3; + cmd4.rq.__sector = 2, + cmd5 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = 
REQ_OP_WRITE, + .__sector = 5, + }, + .cmd.device = &dev3, + }; + cmd6 = cmd5; + cmd6.rq.__sector = 6; + list_add_tail(&cmd3.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd5.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd6.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd4.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + KUNIT_EXPECT_TRUE(test, uld1 < uld2); + KUNIT_EXPECT_TRUE(test, uld2 < uld3); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next, + &cmd3.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next, + &cmd4.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next, + &cmd5.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next->next, + &cmd6.cmd.eh_entry); + kfree(uld); +} + +static struct kunit_case prepare_resubmit_test_cases[] = { + KUNIT_CASE(test_prepare_resubmit1), + KUNIT_CASE(test_prepare_resubmit2), + {} +}; + +static struct kunit_suite prepare_resubmit_test_suite = { + .name = "prepare_resubmit", + .test_cases = prepare_resubmit_test_cases, +}; +kunit_test_suite(prepare_resubmit_test_suite); + +MODULE_DESCRIPTION("scsi_call_prepare_resubmit() unit tests"); +MODULE_AUTHOR("Bart Van Assche"); +MODULE_LICENSE("GPL"); From patchwork Tue Aug 22 19:17:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB1EAEE49AF for ; Tue, 22 Aug 2023 19:19:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230061AbjHVTTE (ORCPT ); Tue, 22 Aug 2023 15:19:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36564 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230078AbjHVTTD (ORCPT ); Tue, 22 Aug 2023 15:19:03 -0400 Received: from mail-pg1-f169.google.com (mail-pg1-f169.google.com [209.85.215.169]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D00C5E4B; Tue, 22 Aug 2023 12:18:57 -0700 (PDT) Received: by mail-pg1-f169.google.com with SMTP id 41be03b00d2f7-5656a5c6721so2770479a12.1; Tue, 22 Aug 2023 12:18:57 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731937; x=1693336737; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+kT0lODXnQs5o0Wfuh374uESQu7wFS2RjdWOYRnRcYk=; b=O10P9rITFhzR/IDw8rcB5cyDfKEMwXNBCPOx4+s4bgr5TEO6brKKxez+lqxAN5OZ9Z 3AcunZ9RakIf5dSxhnLp8jz8pTYN/naBiToXekpVHhjFLDVq0o5K41L/1Z6qWeTbi7u5 pLHH4d8o+H2eMls0EwLiYAI49+SV/Gw+HKFWJD9xQ5IPMgFCz5s4UHkQ0MaC10FHJ0rO NHxHSaVmSB4CIuMqXP+7tQbtoMrtnMUxk78NU9epBN21XobmZH5htudGgIrgJ79I0foH prkYSLApBvCj72BVUTN9c60gDFmTTcwyaemTbu4oK6hvYW43YjSucXBH+eXjcuPSGA2u MI+w== X-Gm-Message-State: AOJu0YxoRqK8Ey+dpoFHHRQzx/sSskPw2ZdYVNQAp8v3K3JZ6vUmAF4O ej/OXdxHIZclcUxTfF99nJ4= 
X-Google-Smtp-Source: AGHT+IGr6l0caQkI8ZEkUdDCB7rvs2JFMQrdCELV38EHMPouevX7/TpkMGh5dFYtWHJyk3y5qnq3Mg== X-Received: by 2002:a17:90a:694d:b0:263:f435:ef2d with SMTP id j13-20020a17090a694d00b00263f435ef2dmr6973173pjm.10.1692731937236; Tue, 22 Aug 2023 12:18:57 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.18.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:18:56 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v11 06/16] scsi: sd: Sort commands by LBA before resubmitting Date: Tue, 22 Aug 2023 12:17:01 -0700 Message-ID: <20230822191822.337080-7-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Sort SCSI commands by LBA before the SCSI error handler resubmits these commands. This is necessary when resubmitting zoned writes (REQ_OP_WRITE) if multiple writes have been queued for a single zone. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd.c | 45 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 3c668cfb146d..d4feac5de17a 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -47,6 +47,7 @@ #include #include #include +#include #include #include #include @@ -117,6 +118,8 @@ static void sd_uninit_command(struct scsi_cmnd *SCpnt); static int sd_done(struct scsi_cmnd *); static void sd_eh_reset(struct scsi_cmnd *); static int sd_eh_action(struct scsi_cmnd *, int); +static bool sd_needs_prepare_resubmit(struct scsi_cmnd *cmd); +static void sd_prepare_resubmit(struct list_head *cmd_list); static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer); static void scsi_disk_release(struct device *cdev); @@ -617,6 +620,8 @@ static struct scsi_driver sd_template = { .done = sd_done, .eh_action = sd_eh_action, .eh_reset = sd_eh_reset, + .eh_needs_prepare_resubmit = sd_needs_prepare_resubmit, + .eh_prepare_resubmit = sd_prepare_resubmit, }; /* @@ -2018,6 +2023,46 @@ static int sd_eh_action(struct scsi_cmnd *scmd, int eh_disp) return eh_disp; } +static bool sd_needs_prepare_resubmit(struct scsi_cmnd *cmd) +{ + struct request *rq = scsi_cmd_to_rq(cmd); + + return !rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq); +} + +static int sd_cmp_sector(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + struct request *rq_a = scsi_cmd_to_rq(a); + struct request *rq_b = scsi_cmd_to_rq(b); + bool use_zwl_a = rq_a->q->limits.use_zone_write_lock; + bool use_zwl_b = rq_b->q->limits.use_zone_write_lock; + + /* + * Order the commands that need zone write locking after the commands + * that do not need zone write locking. Order the commands that do not + * need zone write locking by LBA. Do not reorder the commands that + * need zone write locking. 
See also the comment above the list_sort() + * definition. + */ + if (use_zwl_a || use_zwl_b) + return use_zwl_a > use_zwl_b; + return blk_rq_pos(rq_a) > blk_rq_pos(rq_b); +} + +static void sd_prepare_resubmit(struct list_head *cmd_list) +{ + /* + * Sort pending SCSI commands in starting sector order. This is + * important if one of the SCSI devices associated with @shost is a + * zoned block device for which zone write locking is disabled. + */ + list_sort(NULL, cmd_list, sd_cmp_sector); +} + static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd) { struct request *req = scsi_cmd_to_rq(scmd); From patchwork Tue Aug 22 19:17:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361374 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB70EEE49B1 for ; Tue, 22 Aug 2023 19:19:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230084AbjHVTTH (ORCPT ); Tue, 22 Aug 2023 15:19:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229864AbjHVTTF (ORCPT ); Tue, 22 Aug 2023 15:19:05 -0400 Received: from mail-pj1-f44.google.com (mail-pj1-f44.google.com [209.85.216.44]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5ED97E50; Tue, 22 Aug 2023 12:18:59 -0700 (PDT) Received: by mail-pj1-f44.google.com with SMTP id 98e67ed59e1d1-26b41112708so3323265a91.3; Tue, 22 Aug 2023 12:18:59 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731939; x=1693336739; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=X0xCoxo/qg7/ntVVvz5KKadmaE1D/02ErCrJvy85BN4=; b=bCpZRBo9dM9ZsMUVFK11+Mv+qKx96GovCdvPTC3KimovUF7TDg2SgnwrJHAKnim7dw 8npLP4+gn9ybauoWgdRn3rCoWwIUSOCo0rvMFYXaDBb4sYOClL1mAgdTHoYUNH/afWd/ I1sWBvbIpgpWnh20/rb3alfnUBHctDjLyP+VUBK1P/Ph2FJ6/hi2xViK8w8jqJSIjBK0 TOAfTVDDQSTVntk03dYqx4b/vy6WqPXT2M/TyX13R0g9HB5TaiC0rojuzoXrevMgrDHp ciWWUoAwhSatpSiFEbBzX1greSIWGe0LH+sUTe+NR6vEQ5bICzR2uC0cCkISI8nCGdW9 s6+A== X-Gm-Message-State: AOJu0YyOE7kXlwRu7PenRQyTjx3wrE9BD3suopmeC2DvO8cig4U6mqaK zQkfAg+NPvztVXqgflOvOAM= X-Google-Smtp-Source: AGHT+IF7POkahMDXA55nJokVko2jX0frZeeJkeDCgmXsAjoozTCPpmUv8r73Hr8rvh+WssZFHdILRg== X-Received: by 2002:a17:90b:1b49:b0:267:f66a:f25f with SMTP id nv9-20020a17090b1b4900b00267f66af25fmr9382337pjb.11.1692731938684; Tue, 22 Aug 2023 12:18:58 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.18.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:18:58 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v11 07/16] scsi: core: Retry unaligned zoned writes Date: Tue, 22 Aug 2023 12:17:02 -0700 Message-ID: <20230822191822.337080-8-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org If zoned writes (REQ_OP_WRITE) for a sequential write required zone have a starting LBA that differs from the write pointer, e.g. because zoned writes have been reordered, then the storage device will respond with an UNALIGNED WRITE COMMAND error. Send commands that failed with an unaligned write error to the SCSI error handler if zone write locking is disabled. The SCSI error handler will sort SCSI commands per LBA before resubmitting these. If zone write locking is disabled, increase the number of retries for write commands sent to a sequential zone to the maximum number of outstanding commands because in the worst case the number of times reordered zoned writes have to be retried is (number of outstanding writes per sequential zone) - 1. Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 16 ++++++++++++++++ drivers/scsi/scsi_lib.c | 1 + drivers/scsi/sd.c | 6 ++++++ include/scsi/scsi.h | 1 + 4 files changed, 24 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c4d817f044a0..e393d78b921b 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -699,6 +699,22 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd) fallthrough; case ILLEGAL_REQUEST: + /* + * Unaligned write command. This may indicate that zoned writes + * have been received by the device in the wrong order. If zone + * write locking is disabled, retry after all pending commands + * have completed. + */ + if (sshdr.asc == 0x21 && sshdr.ascq == 0x04 && + !req->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(req)) { + SCSI_LOG_ERROR_RECOVERY(3, + sdev_printk(KERN_INFO, scmd->device, + "Retrying unaligned write at LBA %#llx.\n", + scsi_get_lba(scmd))); + return NEEDS_DELAYED_RETRY; + } + if (sshdr.asc == 0x20 || /* Invalid command operation code */ sshdr.asc == 0x21 || /* Logical block address out of range */ sshdr.asc == 0x22 || /* Invalid function */ diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 59176946ab56..69da8aee13df 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1443,6 +1443,7 @@ static void scsi_complete(struct request *rq) case ADD_TO_MLQUEUE: scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY); break; + case NEEDS_DELAYED_RETRY: default: scsi_eh_scmd_add(cmd); break; diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index d4feac5de17a..e96014caa34c 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -1240,6 +1240,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) cmd->transfersize = sdp->sector_size; cmd->underflow = nr_blocks << 9; cmd->allowed = sdkp->max_retries; + /* + * Increase the number of allowed retries for zoned writes if zone + * write locking is disabled. 
+ */ + if (!rq->q->limits.use_zone_write_lock && blk_rq_is_seq_zoned_write(rq)) + cmd->allowed += rq->q->nr_requests; cmd->sdb.length = nr_blocks * sdp->sector_size; SCSI_LOG_HLQUEUE(1, diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h index ec093594ba53..6600db046227 100644 --- a/include/scsi/scsi.h +++ b/include/scsi/scsi.h @@ -93,6 +93,7 @@ static inline int scsi_status_is_check_condition(int status) * Internal return values. */ enum scsi_disposition { + NEEDS_DELAYED_RETRY = 0x2000, NEEDS_RETRY = 0x2001, SUCCESS = 0x2002, FAILED = 0x2003, From patchwork Tue Aug 22 19:17:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 525B5EE49AF for ; Tue, 22 Aug 2023 19:19:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230081AbjHVTTK (ORCPT ); Tue, 22 Aug 2023 15:19:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36694 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229864AbjHVTTI (ORCPT ); Tue, 22 Aug 2023 15:19:08 -0400 Received: from mail-pj1-f47.google.com (mail-pj1-f47.google.com [209.85.216.47]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AAAD1E58; Tue, 22 Aug 2023 12:19:00 -0700 (PDT) Received: by mail-pj1-f47.google.com with SMTP id 98e67ed59e1d1-26d1a17ce06so2623114a91.0; Tue, 22 Aug 2023 12:19:00 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731940; x=1693336740; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KzaYxnCqQNUVR3i3khEB0a4wN2TqC5S5bjyD6Ki8qwY=; b=hAANT55DS4FJNI3IuUoo/gzHbVLLeaWTRNOOJeY5WIPOAA21Xhs89nhozm0ahGk0QC El52Xy/blj2FsFGm+oXWU/IiGcSJduBB2XpIUlJUd775FEb82ljSwhEChZF9Jy9M6cRT 6on28pAvYxA5j9ibaDuBM6YzLo71g3UltZxUFNmlBd3MYCiGVN7gaw48YNeSYAJv4QnB NJOX9xkkTtg5TuiYRmwYtY9GeshSD3jkzrp2UDuBA210CiQmUHBmYarbAZTQ25zqWedb ZyrQWl4DGPSSklA5Bl2fY0fLJ/F+x0/5YpfXMYvmvXyyTPXsp+Clw8nckF3aeL6uN1Ln LYKQ== X-Gm-Message-State: AOJu0YxnWfOK8ux8XlAv8HuzYygdphj9IkC6luwWNOvkuEzdAB1Y3MEU +Wd3D5qe86t46JDP7ZJyrN4= X-Google-Smtp-Source: AGHT+IFFrhNhX8cPPRiyPyqNnRiitqqSD/xaFBDpQWM0lselcoSgUxBylWwBo3gMs0rApvVtUB4jgw== X-Received: by 2002:a17:90a:5646:b0:26f:2998:fca9 with SMTP id d6-20020a17090a564600b0026f2998fca9mr6794178pji.28.1692731940043; Tue, 22 Aug 2023 12:19:00 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.18.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:18:59 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v11 08/16] scsi: sd_zbc: Only require an I/O scheduler if needed Date: Tue, 22 Aug 2023 12:17:03 -0700 Message-ID: <20230822191822.337080-9-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org An I/O scheduler that serializes zoned writes is only needed if the SCSI LLD does not preserve the write order. Hence only set ELEVATOR_F_ZBD_SEQ_WRITE if the LLD does not preserve the write order. Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd_zbc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index a25215507668..718b31bed878 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -955,7 +955,9 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, u8 buf[SD_BUF_SIZE]) /* The drive satisfies the kernel restrictions: set it up */ blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); - blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE); + if (!q->limits.driver_preserves_write_order) + blk_queue_required_elevator_features(q, + ELEVATOR_F_ZBD_SEQ_WRITE); if (sdkp->zones_max_open == U32_MAX) disk_set_max_open_zones(disk, 0); else From patchwork Tue Aug 22 19:17:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361376 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C58FCEE49AF for ; Tue, 22 Aug 2023 19:19:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230077AbjHVTTM (ORCPT ); Tue, 22 Aug 2023 15:19:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55074 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230087AbjHVTTL (ORCPT ); Tue, 22 Aug 2023 15:19:11 -0400 Received: from mail-pg1-f182.google.com (mail-pg1-f182.google.com [209.85.215.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4103BE5E; Tue, 22 Aug 2023 12:19:02 -0700 (PDT) Received: by mail-pg1-f182.google.com with SMTP id 41be03b00d2f7-55b0e7efb1cso2832203a12.1; Tue, 22 Aug 2023 12:19:02 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731941; x=1693336741; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=035k58Gc0xjZLYCOymyww+e8cIPTj80YBwmkGqX/6yA=; b=W1vR831sKOTl9lLSpxVVNq/cfnuZnijLJzsR3ZbfbAmSPhHDXUDy+GFKrGupktxbeL qDc0EsAOFANkTQYRNY0BWvskaOwJj32s1d1ecJFFP12k8nBiFHKAZqfuqf/VIprB1PAb lkrhDPjE2ss6sC71367R+WZYVZgAuwcXnR2UHiQdyke0KJI7xVw9ArrdePb8SLF3nnWS R7dx9VKMj1FM4TA5Lx0JRa4DOGiOyHNYIoYkXq2mRke8uUyIIqTczWqh/AX3tx+eAtrh K63OWSZA+RxFoWtlLWIfe5H91RNNIImzjDzG/+FaXGsKpAdI4RdV7/pBBKGOmqSeWiRe 8Ntw== X-Gm-Message-State: AOJu0YyKHZSoLp9LbIhgHYtK/4muYTPzevC49wmGmsdf5pvGMPfLO9a3 1WXS7ya9agwGBZpjh496AFQ= X-Google-Smtp-Source: AGHT+IF2x3gYFSOiPfJrEvGl5FGoBt1bgNbMH79VjAYmJGr1KXjj1AGjYkquukx904anzKhR2nWtvQ== X-Received: by 2002:a17:90a:88:b0:26d:2086:8c96 with SMTP id 
a8-20020a17090a008800b0026d20868c96mr7557378pja.43.1692731941546; Tue, 22 Aug 2023 12:19:01 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.19.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:19:01 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v11 09/16] scsi: scsi_debug: Add the preserves_write_order module parameter Date: Tue, 22 Aug 2023 12:17:04 -0700 Message-ID: <20230822191822.337080-10-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Zone write locking is not used for zoned devices if the block driver reports that it preserves the order of write commands. Make it easier to test not using zone write locking by adding support for setting the driver_preserves_write_order flag. Acked-by: Douglas Gilbert Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_debug.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 9c0af50501f9..1ea4925d2c2f 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -832,6 +832,7 @@ static int dix_reads; static int dif_errors; /* ZBC global data */ +static bool sdeb_preserves_write_order; static bool sdeb_zbc_in_use; /* true for host-aware and host-managed disks */ static int sdeb_zbc_zone_cap_mb; static int sdeb_zbc_zone_size_mb; @@ -5138,9 +5139,13 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev) static int scsi_debug_slave_alloc(struct scsi_device *sdp) { + struct request_queue *q = sdp->request_queue; + if (sdebug_verbose) pr_info("slave_alloc <%u %u %u %llu>\n", sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); + if (sdeb_preserves_write_order) + q->limits.driver_preserves_write_order = true; return 0; } @@ -5755,6 +5760,8 @@ module_param_named(statistics, sdebug_statistics, bool, S_IRUGO | S_IWUSR); module_param_named(strict, sdebug_strict, bool, S_IRUGO | S_IWUSR); module_param_named(submit_queues, submit_queues, int, S_IRUGO); module_param_named(poll_queues, poll_queues, int, S_IRUGO); +module_param_named(preserves_write_order, sdeb_preserves_write_order, bool, + S_IRUGO); module_param_named(tur_ms_to_ready, sdeb_tur_ms_to_ready, int, S_IRUGO); module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO); module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO); @@ -5812,6 +5819,8 @@ MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)"); MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)"); MODULE_PARM_DESC(no_rwlock, "don't protect user data reads+writes (def=0)"); MODULE_PARM_DESC(no_uld, "stop ULD (e.g. 
sd driver) attaching (def=0))"); +MODULE_PARM_DESC(preserves_write_order, + "Whether or not to inform the block layer that this driver preserves the order of WRITE commands (def=0)"); MODULE_PARM_DESC(num_parts, "number of partitions(def=0)"); MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)"); MODULE_PARM_DESC(opt_blks, "optimal transfer length in blocks (def=1024)"); From patchwork Tue Aug 22 19:17:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6953DEE49AF for ; Tue, 22 Aug 2023 19:19:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230089AbjHVTTR (ORCPT ); Tue, 22 Aug 2023 15:19:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55134 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230091AbjHVTTQ (ORCPT ); Tue, 22 Aug 2023 15:19:16 -0400 Received: from mail-pg1-f173.google.com (mail-pg1-f173.google.com [209.85.215.173]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F1DAE49; Tue, 22 Aug 2023 12:19:03 -0700 (PDT) Received: by mail-pg1-f173.google.com with SMTP id 41be03b00d2f7-565f86ff4d1so2752882a12.2; Tue, 22 Aug 2023 12:19:03 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731943; x=1693336743; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mKw3Aw2p8++zyZ3GJZp8ItEvankLwq1zpCMHoyF24eQ=; b=hvHm5q3HXFaTs8DrXwS2ogfcza6yOSIrD1rkB3JvS0g8G57Hc0ibSWniS835FF5YrL SQa6zVV4bBDRxfrlbwRHCGmTnx3NvPBnEMJE//LXKL/qWsKP1NJ+hG4frlP/7eVODdLS vGUu4NcJnCHdv4Kj6DeS4011rLsp5/txE9Pej0/6dwPLWPX+3BUu4L4mdmx1w2EyP4l4 a3KKNDqj85p5cjPRbHqwL9PNupo/eMea8SOu/u96HWrQer6K75cWDQHWESqcf6iN/pfn IHNAkkcjON1OApp9QrHTBSFa4xFdLvWtwiyQMsWUbI1GF/lUpT6XIWejJ4CnKNUZDfAb tSgw== X-Gm-Message-State: AOJu0YxeZ5uivIYDBRONYEhVCB+uUpzKAjAJkiHE0TeQMVguvasZ3b4i k40vwFj0SQIXzJW0cGZXrU8zPQMu74c= X-Google-Smtp-Source: AGHT+IFeJcX2lK7pGt/QV0MFYjUaxeiaObLZqdVAyd9MACGxLZgLlJzBrQGFEFtsBu8okajk2EZxYA== X-Received: by 2002:a17:90a:df07:b0:268:5bed:708e with SMTP id gp7-20020a17090adf0700b002685bed708emr7117519pjb.24.1692731942845; Tue, 22 Aug 2023 12:19:02 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.19.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:19:02 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v11 10/16] scsi: scsi_debug: Support injecting unaligned write errors Date: Tue, 22 Aug 2023 12:17:05 -0700 Message-ID: <20230822191822.337080-11-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Allow user space software, e.g. a blktests test, to inject unaligned write errors. Acked-by: Douglas Gilbert Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_debug.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 1ea4925d2c2f..164e82c218ff 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -181,6 +181,7 @@ static const char *sdebug_version_date = "20210520"; #define SDEBUG_OPT_NO_CDB_NOISE 0x4000 #define SDEBUG_OPT_HOST_BUSY 0x8000 #define SDEBUG_OPT_CMD_ABORT 0x10000 +#define SDEBUG_OPT_UNALIGNED_WRITE 0x20000 #define SDEBUG_OPT_ALL_NOISE (SDEBUG_OPT_NOISE | SDEBUG_OPT_Q_NOISE | \ SDEBUG_OPT_RESET_NOISE) #define SDEBUG_OPT_ALL_INJECTING (SDEBUG_OPT_RECOVERED_ERR | \ @@ -188,7 +189,8 @@ static const char *sdebug_version_date = "20210520"; SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR | \ SDEBUG_OPT_SHORT_TRANSFER | \ SDEBUG_OPT_HOST_BUSY | \ - SDEBUG_OPT_CMD_ABORT) + SDEBUG_OPT_CMD_ABORT | \ + SDEBUG_OPT_UNALIGNED_WRITE) #define SDEBUG_OPT_RECOV_DIF_DIX (SDEBUG_OPT_RECOVERED_ERR | \ SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR) @@ -3587,6 +3589,14 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) struct sdeb_store_info *sip = devip2sip(devip, true); u8 *cmd = scp->cmnd; + if (unlikely(sdebug_opts & SDEBUG_OPT_UNALIGNED_WRITE && + atomic_read(&sdeb_inject_pending))) { + atomic_set(&sdeb_inject_pending, 0); + mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE, + UNALIGNED_WRITE_ASCQ); + return check_condition_result; + } + switch (cmd[0]) { case WRITE_16: ei_lba = 0; From patchwork Tue Aug 22 19:17:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361378 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7CB70EE49AF for ; Tue, 22 Aug 2023 19:19:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230087AbjHVTTV (ORCPT ); Tue, 22 Aug 2023 15:19:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39740 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230092AbjHVTTU (ORCPT ); Tue, 22 Aug 2023 15:19:20 -0400 Received: from mail-pj1-f53.google.com (mail-pj1-f53.google.com [209.85.216.53]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8926ECFE; Tue, 22 Aug 2023 12:19:10 -0700 (PDT) Received: by mail-pj1-f53.google.com with SMTP id 98e67ed59e1d1-26d2b3860daso3348460a91.1; Tue, 22 Aug 2023 12:19:10 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731950; x=1693336750; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=XeTP9UxMC9xuYvRY29/KP6o+hE37UxPqFnd4pYMhYhI=; b=Mpcf4lqlgCU9aYHOTcO17zf/IQXdFAcWgZszKBgCHmTehu/G+vwaf9l9A0+f6R7YL2 sj4MnuwKcW3sToRCZv8G8YEtmK9vtGTF5Vghcof192JYhZ4GieehN4jd3q45FaRLJqzY ds4onFusKIQGFOcTLgHmtsU3VqDkHLRGlqKmE3YbIA3N6sVDLT+QqYXT4IPsA98GT+Mq yM2gHjTdJccnT5lGZWmWX7mcxngG2Luz9jU/WzoHCN9wW63aQYwPTKFV6jg/wmq07HcU R3aDeHD0i8CUOPpSYtJOxahYd36dfO0j9nZ621BCD+G0VC0/N+zxMxkEu6Vqz9nUpRYb TPWw== X-Gm-Message-State: AOJu0YyR/h3APchnOywJO2n2ui006D+yhUMVBQS9VqkVHcntwaAPM/tj lLz7mRMAUMpkoYJhABmntkI= X-Google-Smtp-Source: AGHT+IG/1WjbbniTpurPyqjHXV0CtS3tugahKa/qvwvkJNfmrNGsvTZ/2OnGeSANHLZyNfp7RR2FLQ== X-Received: by 2002:a17:90b:11cd:b0:26b:4e59:57e7 with SMTP id gv13-20020a17090b11cd00b0026b4e5957e7mr9530138pjb.43.1692731949971; Tue, 22 Aug 2023 12:19:09 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.19.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:19:09 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Adrian Hunter , Krzysztof Kozlowski Subject: [PATCH v11 11/16] scsi: ufs: hisi: Rework the code that disables auto-hibernation Date: Tue, 22 Aug 2023 12:17:06 -0700 Message-ID: <20230822191822.337080-12-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org The host driver link startup callback is called indirectly by ufshcd_probe_hba(). That function applies the auto-hibernation settings by writing hba->ahit into the auto-hibernation control register. Simplify the code for disabling auto-hibernation by setting hba->ahit instead of writing into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Reviewed-by: Bao D. Nguyen Cc: Martin K. 
Petersen Cc: Can Guo Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/host/ufs-hisi.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c index 5b3060cd0ab8..f2ec687121bb 100644 --- a/drivers/ufs/host/ufs-hisi.c +++ b/drivers/ufs/host/ufs-hisi.c @@ -142,7 +142,6 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) struct ufs_hisi_host *host = ufshcd_get_variant(hba); int err; uint32_t value; - uint32_t reg; /* Unipro VS_mphy_disable */ ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0xD0C1, 0x0), 0x1); @@ -232,9 +231,7 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) ufshcd_writel(hba, UFS_HCLKDIV_NORMAL_VALUE, UFS_REG_HCLKDIV); /* disable auto H8 */ - reg = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER); - reg = reg & (~UFS_AHIT_AH8ITV_MASK); - ufshcd_writel(hba, reg, REG_AUTO_HIBERNATE_IDLE_TIMER); + hba->ahit = 0; /* Unipro PA_Local_TX_LCC_Enable */ ufshcd_disable_host_tx_lcc(hba); From patchwork Tue Aug 22 19:17:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361379 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8AA70EE49AF for ; Tue, 22 Aug 2023 19:19:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230076AbjHVTTn (ORCPT ); Tue, 22 Aug 2023 15:19:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229620AbjHVTTm (ORCPT ); Tue, 22 Aug 2023 15:19:42 -0400 Received: from mail-pg1-f175.google.com (mail-pg1-f175.google.com [209.85.215.175]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 51035CD2; Tue, 22 Aug 2023 12:19:24 -0700 (PDT) Received: by mail-pg1-f175.google.com with SMTP id 41be03b00d2f7-56c250ff3d3so1769943a12.1; Tue, 22 Aug 2023 12:19:24 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731964; x=1693336764; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=pwEAUtiNqr22A3zGWmfi4hZZsvfrlkiLelTqHIaDPew=; b=Wtwnds9bDnRsHMcrwUxK+wON8CUIUO1wC/LItgOUvnUKAj7+qrTQbY6RkrKz/EP6Xh +arNU51mI99yLeSAZ9fxuKHqGfGI28T2lieBK4fDO1TLGC1LegNN6kKuOAueFSEo51n7 X96y7PC3q6GdWMN/pXvcUr4BKFVmeBxXAr3Gxg/ziQc6y9OtCP1L0msCpTSNwxuCzNXX NIOSmMVNxkEt6Jb1lXaLzshw+tGS1rcUasx0hHLLSwcPYWTYwOFHC3vGuRk6NrrV/jvH XLgZKKvkbR4AbdXeClKeFHLeB9Lx9h9VMun2+dAoOB3y0BJWADbMef90lo4xwQCbyAF2 tsYw== X-Gm-Message-State: AOJu0YzaYtzaaG9niEQBIVg6DAPsyE/aLVVYphfvbGcTtlAEG0iWkq++ HalYRsq5yEA31nlGrbJ7xb4= X-Google-Smtp-Source: AGHT+IEcCA3pwgCK1F67rngZP2UiojxiabGTROWjIk375kGm59IzVsOuRtkzL7j9HKqMKYbymK6dFw== X-Received: by 2002:a17:90b:950:b0:26b:36a4:feeb with SMTP id dw16-20020a17090b095000b0026b36a4feebmr14248789pjb.8.1692731963697; Tue, 22 Aug 2023 12:19:23 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.19.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:19:23 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: 
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev , Manivannan Sadhasivam , Po-Wen Kao , Eric Biggers , Keoseong Park , Daniil Lunev Subject: [PATCH v11 12/16] scsi: ufs: Rename ufshcd_auto_hibern8_enable() and make it static Date: Tue, 22 Aug 2023 12:17:07 -0700 Message-ID: <20230822191822.337080-13-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Rename ufshcd_auto_hibern8_enable() into ufshcd_configure_auto_hibern8() since this function can enable or disable auto-hibernation. Since ufshcd_auto_hibern8_enable() is only used inside the UFSHCI driver core, declare it static. Additionally, move the definition of this function to just before its first caller. Suggested-by: Bao D. Nguyen Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 24 +++++++++++------------- include/ufs/ufshcd.h | 1 - 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 129446775796..f1bba459b46f 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,6 +4337,14 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) +{ + if (!ufshcd_is_auto_hibern8_supported(hba)) + return; + + ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); +} + void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; @@ -4356,21 +4364,13 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba) -{ - if (!ufshcd_is_auto_hibern8_supported(hba)) - return; - - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); -} - /** * ufshcd_init_pwr_info - setting the POR (power on reset) * values in hba power info @@ -8815,8 +8815,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) if (hba->ee_usr_mask) ufshcd_write_ee_control(hba); - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshpb_toggle_state(hba, HPB_RESET, HPB_PRESENT); out: @@ -9809,8 +9808,7 @@ static int __ufshcd_wl_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); } - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshpb_resume(hba); goto out; diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 6dc11fa0ebb1..040d66d99912 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1363,7 +1363,6 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_enable(struct ufs_hba 
*hba); void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); From patchwork Tue Aug 22 19:17:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361380 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 088B7EE49B1 for ; Tue, 22 Aug 2023 19:20:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230078AbjHVTUH (ORCPT ); Tue, 22 Aug 2023 15:20:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48360 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230083AbjHVTUH (ORCPT ); Tue, 22 Aug 2023 15:20:07 -0400 Received: from mail-pj1-f43.google.com (mail-pj1-f43.google.com [209.85.216.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C80FBE4D; Tue, 22 Aug 2023 12:19:49 -0700 (PDT) Received: by mail-pj1-f43.google.com with SMTP id 98e67ed59e1d1-26ef24b8e5aso2333870a91.0; Tue, 22 Aug 2023 12:19:49 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731989; x=1693336789; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=IKOyE0ZGfmDNciokzBC/zSgBfKj2aIlN4m3IlTmArZ8=; b=hWf3JB06XMuR0L5guZ8nbFM/edBq0vAcmhJLRI2SRdS5tsjxVZZCVc09DqwJXd157S h/wFOtfSrzp4jFYe4pQIWUBkBQ9r8EhnX0rlo9sUvbnMfYObdMuc5At3TGiiOlhYJWVq MdJBCFijfSICff+ffXzpR0+ubDnZLr1TINioacTcOHTqj1gfw+03/9gdHgqICQ+sni33 g8Fc0OmM+wRnpmtiMwAN+uGwvUpQno3uh00FxpeO3/735IdAjN3RyKnDARRPcwy9nUVV HczONUWqFlVLyEK1eg/GfeYLvydL8e435aPy3eSTbaWN1XZfvJJYyAnd0YA3yhI8rGjA xhGA== X-Gm-Message-State: AOJu0YzPeHItWivZwZO5UpLjYHDWzNHa+sey3wIPOICjiIyC3g+9QG3r 6MSvp1oNKW12zrbBYKAyvN4= X-Google-Smtp-Source: AGHT+IGa7xOUzo11ogbMlU1jbAGyDWCqhs0dCnCLOOikw7upkIvf0Lnyom7hfKgB+o3FMqZ0xKxIWg== X-Received: by 2002:a17:90a:e006:b0:25f:20f:2f7d with SMTP id u6-20020a17090ae00600b0025f020f2f7dmr9770655pjy.2.1692731989218; Tue, 22 Aug 2023 12:19:49 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.19.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:19:48 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Peter Wang , Avri Altman , "James E.J. 
Bottomley" , Matthias Brugger , Bean Huo , Jinyoung Choi , Lu Hongfei , Daniil Lunev , Stanley Chu , Manivannan Sadhasivam , Asutosh Das , Arthur Simchaev , zhanghui , Po-Wen Kao , Eric Biggers , Keoseong Park Subject: [PATCH v11 13/16] scsi: ufs: Change the return type of ufshcd_auto_hibern8_update() Date: Tue, 22 Aug 2023 12:17:08 -0700 Message-ID: <20230822191822.337080-14-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org A later patch will introduce an error path in ufshcd_auto_hibern8_update(). Change the return type of that function before introducing calls to that function in the host drivers such that the host drivers only have to be modified once. Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Reviewed-by: Peter Wang Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufs-sysfs.c | 2 +- drivers/ufs/core/ufshcd-priv.h | 1 - drivers/ufs/core/ufshcd.c | 6 ++++-- include/ufs/ufshcd.h | 2 +- 4 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c index 6c72075750dd..a693dea1bd18 100644 --- a/drivers/ufs/core/ufs-sysfs.c +++ b/drivers/ufs/core/ufs-sysfs.c @@ -203,7 +203,7 @@ static ssize_t auto_hibern8_store(struct device *dev, goto out; } - ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); + ret = ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); out: up(&hba->host_sem); diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 0f3bd943b58b..a2b74fbc2056 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -60,7 +60,6 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector, u32 *attr_val); int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag, struct cq_entry *cqe); int ufshcd_mcq_init(struct ufs_hba *hba); diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index f1bba459b46f..a08924972b20 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4345,13 +4345,13 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); } -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; bool update = false; if (!ufshcd_is_auto_hibern8_supported(hba)) - return; + return 0; spin_lock_irqsave(hba->host->host_lock, flags); if (hba->ahit != ahit) { @@ -4368,6 +4368,8 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + + return 0; } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 040d66d99912..7ae071a6811c 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1363,7 +1363,7 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 
ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); #define SD_ASCII_STD true From patchwork Tue Aug 22 19:17:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12070EE4993 for ; Tue, 22 Aug 2023 19:20:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230100AbjHVTUK (ORCPT ); Tue, 22 Aug 2023 15:20:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48434 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230093AbjHVTUK (ORCPT ); Tue, 22 Aug 2023 15:20:10 -0400 Received: from mail-pj1-f53.google.com (mail-pj1-f53.google.com [209.85.216.53]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEE95E4C; Tue, 22 Aug 2023 12:19:57 -0700 (PDT) Received: by mail-pj1-f53.google.com with SMTP id 98e67ed59e1d1-26d1e5f2c35so3351519a91.2; Tue, 22 Aug 2023 12:19:57 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692731997; x=1693336797; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4HG3YoYAAnJdyYVAHKUCDxsEfi+t2LMdlh/ycmxSocs=; b=cwZWvOR9Qtekp3X9Gvom0HFsTP+/zvwwX0z2AZDibwLuY9z1kb+cxnExF50Tr3jJLW QBS8BIiW/qesywi4c57uefVvALnEMQkU6R6dNJAvCklk+0Y0PjzdfqigHeyLesBgRllC Z0GpZpke1F2xJoq8s4sPmYmBxBvhdYa5b8zEtXBdWye++6zXFvGUlq99nfZiysy2f1hn fFKCfc8mMc0rVB+uN1We2J4ulV2/Yr4m6fTEX9r0oOLSCIgoA4RA3kfNAGho4Vg9OKZQ jMsG4TILGz7MGi/MwsJmTldWbMIA8dnE3biBRXaiZdmKHrckiepha5iRXZIMiCpLguXH 4fUw== X-Gm-Message-State: AOJu0YyoH3DBJvgsHV5uK8xor7aZhF66NBHbn/yOcighdnp1+rKhmzcX pFSjW4AYG36fZhtCdNzwB8o= X-Google-Smtp-Source: AGHT+IFpPTp7U/8cEvni25HsdjjvUU5eacNSd9GCwyZvepjcuveqs7xnPE9/C5uBWNkC8WVxKYDAvA== X-Received: by 2002:a17:90a:ee87:b0:268:4485:c868 with SMTP id i7-20020a17090aee8700b002684485c868mr8343525pjz.49.1692731997070; Tue, 22 Aug 2023 12:19:57 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.19.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:19:56 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v11 14/16] scsi: ufs: Simplify ufshcd_auto_hibern8_update() Date: Tue, 22 Aug 2023 12:17:09 -0700 Message-ID: <20230822191822.337080-15-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Calls to ufshcd_auto_hibern8_update() are already serialized: this function is either called if user space software is not running (preparing to suspend) or from a single sysfs store callback function. 
Kernfs serializes sysfs .store() callbacks. No functionality is changed. This patch makes the next patch in this series easier to read. Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index a08924972b20..68bf8e94eee6 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4347,21 +4347,13 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { - unsigned long flags; - bool update = false; + const u32 cur_ahit = READ_ONCE(hba->ahit); - if (!ufshcd_is_auto_hibern8_supported(hba)) + if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return 0; - spin_lock_irqsave(hba->host->host_lock, flags); - if (hba->ahit != ahit) { - hba->ahit = ahit; - update = true; - } - spin_unlock_irqrestore(hba->host->host_lock, flags); - - if (update && - !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { + WRITE_ONCE(hba->ahit, ahit); + if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); ufshcd_configure_auto_hibern8(hba); From patchwork Tue Aug 22 19:17:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361382 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB523EE4993 for ; Tue, 22 Aug 2023 19:20:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230102AbjHVTUO (ORCPT ); Tue, 22 Aug 2023 15:20:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54178 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230093AbjHVTUN (ORCPT ); Tue, 22 Aug 2023 15:20:13 -0400 Received: from mail-pg1-f170.google.com (mail-pg1-f170.google.com [209.85.215.170]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B75951BE; Tue, 22 Aug 2023 12:20:05 -0700 (PDT) Received: by mail-pg1-f170.google.com with SMTP id 41be03b00d2f7-51b4ef5378bso3454661a12.1; Tue, 22 Aug 2023 12:20:05 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692732005; x=1693336805; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=bhEE5rTGdKfdoShqKbxndfVcHcv8Xl0IxiGnxC2ZmBM=; b=FImsoCyAN80Cac0D35wyiypJwn9EtBlGSPpH9US/witZrGZGvSeiAH+L3X5Fv3v4pO lSsQ2HaB77IBPyzrznHEcVQSSzuQQ5UDrISaoyeenjJoL6wF3bjEgwfTCsLa3HAO9UTQ Al+GAOHlxwUrfdegYY9kQSXKaIodrctAlfOM0OfSH+jwvMcR6CfvIbs32x+sCeS7RqsB GSgFEQ/7cRJ/RQu5LIo43K9FcTdNVLS9aL6KZr+KHTTwvAMtUBj54hIuDC0L6jb+36TD 7VGgH3uhndQH2hJFpmg9MwQzBUs27mrNYl4mhICiwV/45+5sSnZfeuVp+jAcITaqdi/1 1x2w== X-Gm-Message-State: AOJu0Yzcsskg4qVT4w2aWLEuXEohUkTUZlG5SoEBYAzM57hxZ9TryIjC 0mZO67zznQB2r0Rt9mqcEaz/vzfpde4= X-Google-Smtp-Source: AGHT+IER2E/nfQVQZwR3MkQEWNJmYW8Kw1VweZkdUVAFn55zSGqdeiQffpIyqEOtZf/YudS0tw4RzA== X-Received: by 2002:a17:90a:db15:b0:263:2335:594e with SMTP id g21-20020a17090adb1500b002632335594emr9697252pjv.38.1692732005075; Tue, 22 Aug 2023 12:20:05 -0700 (PDT) Received: from 
bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.20.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:20:04 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v11 15/16] scsi: ufs: Forbid auto-hibernation without I/O scheduler Date: Tue, 22 Aug 2023 12:17:10 -0700 Message-ID: <20230822191822.337080-16-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org UFSHCI 3.0 controllers do not preserve the write order if auto-hibernation is enabled. If the write order is not preserved, an I/O scheduler is required to serialize zoned writes. Hence do not allow auto-hibernation to be enabled without I/O scheduler if a zoned logical unit is present and if the controller is operating in legacy mode. This patch has been tested with the following shell script: show_ah8() { echo -n "auto_hibern8: " adb shell "cat /sys/devices/platform/13200000.ufs/auto_hibern8" } set_ah8() { local rc adb shell "echo $1 > /sys/devices/platform/13200000.ufs/auto_hibern8" rc=$? show_ah8 return $rc } set_iosched() { adb shell "echo $1 >/sys/class/block/$zoned_bdev/queue/scheduler && echo -n 'I/O scheduler: ' && cat /sys/class/block/sde/queue/scheduler" } adb root zoned_bdev=$(adb shell grep -lvw 0 /sys/class/block/sd*/queue/chunk_sectors |& sed 's|/sys/class/block/||g;s|/queue/chunk_sectors||g') [ -n "$zoned_bdev" ] show_ah8 set_ah8 0 set_iosched none if set_ah8 150000; then echo "Error: enabled AH8 without I/O scheduler" fi set_iosched mq-deadline set_ah8 150000 Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 58 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 68bf8e94eee6..c28a362b5b99 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4337,6 +4337,30 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, + bool preserves_write_order) +{ + struct scsi_device *sdev; + + if (!preserves_write_order) { + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + /* + * Refuse to enable auto-hibernation if no I/O scheduler + * is present. This code does not check whether the + * attached I/O scheduler serializes zoned writes + * (ELEVATOR_F_ZBD_SEQ_WRITE) because this cannot be + * checked from outside the block layer core. 
+ */ + if (blk_queue_is_zoned(q) && !q->elevator) + return -EPERM; + } + } + + return 0; +} + static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) { if (!ufshcd_is_auto_hibern8_supported(hba)) @@ -4345,13 +4369,40 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); } +/** + * ufshcd_auto_hibern8_update() - Modify the auto-hibernation control register + * @hba: per-adapter instance + * @ahit: New auto-hibernate settings. Includes the scale and the value of the + * auto-hibernation timer. See also the UFSHCI_AHIBERN8_TIMER_MASK and + * UFSHCI_AHIBERN8_SCALE_MASK constants. + * + * A note: UFSHCI controllers do not preserve the command order in legacy mode + * if auto-hibernation is enabled. If the command order is not preserved, an + * I/O scheduler that serializes zoned writes (mq-deadline) is required if a + * zoned logical unit is present. Enabling auto-hibernation without attaching + * the mq-deadline scheduler first may cause unaligned write errors for the + * zoned logical unit if a zoned logical unit is present. + */ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { const u32 cur_ahit = READ_ONCE(hba->ahit); + bool prev_state, new_state; + int ret; if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return 0; + prev_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, cur_ahit); + new_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, ahit); + + if (!is_mcq_enabled(hba) && !prev_state && new_state) { + /* + * Auto-hibernation will be enabled for legacy UFSHCI mode. + */ + ret = ufshcd_update_preserves_write_order(hba, false); + if (ret) + return ret; + } WRITE_ONCE(hba->ahit, ahit); if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); @@ -4360,6 +4411,13 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + if (!is_mcq_enabled(hba) && prev_state && !new_state) { + /* + * Auto-hibernation has been disabled. 
+ */ + ret = ufshcd_update_preserves_write_order(hba, true); + WARN_ON_ONCE(ret); + } return 0; } From patchwork Tue Aug 22 19:17:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13361383 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3C7DEE49B3 for ; Tue, 22 Aug 2023 19:20:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230101AbjHVTUT (ORCPT ); Tue, 22 Aug 2023 15:20:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230093AbjHVTUR (ORCPT ); Tue, 22 Aug 2023 15:20:17 -0400 Received: from mail-pg1-f170.google.com (mail-pg1-f170.google.com [209.85.215.170]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3704E47; Tue, 22 Aug 2023 12:20:13 -0700 (PDT) Received: by mail-pg1-f170.google.com with SMTP id 41be03b00d2f7-56c250ff3d3so1770394a12.1; Tue, 22 Aug 2023 12:20:13 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692732013; x=1693336813; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=S7we6QROJmCV/8TDowAkj10fIMIa6vF9cLXAVjoPLfg=; b=AiAEHg0JOa335VjGS5sb3ygg/lz8qVXvqWarpQSZlF9ytpHLrTOeQeKcXEAtvjculG 0ed5qeXfBPXYsQdnNdZezp2B24uEgVYEgFtkJREZtwLFW4JCoG8Fh70af22rMW5AOEB4 miMlmB60DKQ1sG/ysZCRebHw45y+37ulq4mwL4OOyWwDH48Z/a1t7WrZxdHBYMP2X1qf 0X0px1z69iml7K+AGqqvEljaxJfHMf9J4tUEYhBve2HrAHjcD70kjwofV28YJ62L6KKj RsWp2BN/VcuJ2LfaOgTZ+d0tvFMiXpHhWSFsLUf/eqrO6QyH7RQfKqsM6DtVX4tu7L0S OmmQ== X-Gm-Message-State: AOJu0YzqVsa1R0ckbyHGJazGe1aMSfT9fKuffme5Ls4zMbDdrfyzWXuo rZC3qMixi6kGDte/i1f9Wi4= X-Google-Smtp-Source: AGHT+IG52W/qHjA28sRpb5Sk+prL6Zjt6oJtfk9HOW9Pf2eoDLJnRaati81bpfu92hIp8it42zbJuw== X-Received: by 2002:a17:90a:d392:b0:269:3757:54bb with SMTP id q18-20020a17090ad39200b00269375754bbmr14923755pju.11.1692732013040; Tue, 22 Aug 2023 12:20:13 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:88be:bf57:de29:7cc]) by smtp.gmail.com with ESMTPSA id m11-20020a17090a414b00b002696bd123e4sm8081632pjg.46.2023.08.22.12.20.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Aug 2023 12:20:12 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v11 16/16] scsi: ufs: Inform the block layer about write ordering Date: Tue, 22 Aug 2023 12:17:11 -0700 Message-ID: <20230822191822.337080-17-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog In-Reply-To: <20230822191822.337080-1-bvanassche@acm.org> References: <20230822191822.337080-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org From the UFSHCI 4.0 specification, about the legacy (single queue) mode: "The host controller always process transfer requests in-order according to the order submitted to the list. 
In case of multiple commands with single doorbell register ringing (batch mode), The dispatch order for these transfer requests by host controller will base on their index in the List. A transfer request with lower index value will be executed before a transfer request with higher index value." From the UFSHCI 4.0 specification, about the MCQ mode: "Command Submission 1. Host SW writes an Entry to SQ 2. Host SW updates SQ doorbell tail pointer Command Processing 3. After fetching the Entry, Host Controller updates SQ doorbell head pointer 4. Host controller sends COMMAND UPIU to UFS device" In other words, for both legacy and MCQ mode, UFS controllers are required to forward commands to the UFS device in the order these commands have been received from the host. Notes: - For legacy mode this is only correct if the host submits one command at a time. The UFS driver does this. - Also in legacy mode, the command order is not preserved if auto-hibernation is enabled in the UFS controller. Hence, enable zone write locking if auto-hibernation is enabled. This patch improves performance as follows on my test setup: - With the mq-deadline scheduler: 2.5x more IOPS for small writes. - When not using an I/O scheduler compared to using mq-deadline with zone locking: 4x more IOPS for small writes. Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index c28a362b5b99..a685058d4943 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4357,6 +4357,19 @@ static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, return -EPERM; } } + shost_for_each_device(sdev, hba->host) + blk_freeze_queue_start(sdev->request_queue); + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + blk_mq_freeze_queue_wait(q); + q->limits.driver_preserves_write_order = preserves_write_order; + blk_queue_required_elevator_features(q, + preserves_write_order ? 0 : ELEVATOR_F_ZBD_SEQ_WRITE); + if (q->disk) + disk_set_zoned(q->disk, q->limits.zoned); + blk_mq_unfreeze_queue(q); + } return 0; } @@ -4397,7 +4410,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) if (!is_mcq_enabled(hba) && !prev_state && new_state) { /* - * Auto-hibernation will be enabled for legacy UFSHCI mode. + * Auto-hibernation will be enabled for legacy UFSHCI mode. Tell + * the block layer that write requests may be reordered. */ ret = ufshcd_update_preserves_write_order(hba, false); if (ret) @@ -4413,7 +4427,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) } if (!is_mcq_enabled(hba) && prev_state && !new_state) { /* - * Auto-hibernation has been disabled. + * Auto-hibernation has been disabled. Tell the block layer that + * the order of write requests is preserved. */ ret = ufshcd_update_preserves_write_order(hba, true); WARN_ON_ONCE(ret); @@ -5191,6 +5206,9 @@ static int ufshcd_slave_configure(struct scsi_device *sdev) ufshcd_hpb_configure(hba, sdev); + q->limits.driver_preserves_write_order = + !ufshcd_is_auto_hibern8_supported(hba) || + FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) == 0; blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT) blk_queue_update_dma_alignment(q, SZ_4K - 1);
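As an aside on how the new interface is meant to be consumed by drivers: below is a minimal sketch, assuming a hypothetical SCSI low-level driver (the mydrv_* names are invented for illustration; only queue_limits.driver_preserves_write_order comes from this series), of how a driver whose hardware keeps WRITE commands in submission order could report that fact to the block layer. It mirrors the scsi_debug change in patch 09/16 and the ufshcd_slave_configure() change in patch 16/16.

#include <linux/blkdev.h>
#include <scsi/scsi_device.h>

/* Hypothetical LLD callback; not part of this patch series. */
static int mydrv_slave_configure(struct scsi_device *sdev)
{
	struct request_queue *q = sdev->request_queue;

	/*
	 * Report that this driver preserves the order of write commands,
	 * so that zone write locking is not used for zoned logical units
	 * (compare the scsi_debug change in patch 09/16).
	 */
	q->limits.driver_preserves_write_order = true;

	return 0;
}

scsi_debug implements the same idea but keys it off the new preserves_write_order module parameter, while the UFS driver derives the value from whether auto-hibernation may reorder commands.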