From patchwork Wed Oct 18 17:54:23 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13427620
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K. Petersen", Christoph Hellwig, Bart Van Assche, Damien Le Moal, Hannes Reinecke, Nitesh Shetty, Ming Lei
Subject: [PATCH v13 01/18] block: Introduce more member variables related to zone write locking
Date: Wed, 18 Oct 2023 10:54:23 -0700
Message-ID: <20231018175602.2148415-2-bvanassche@acm.org>
In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org>
References: <20231018175602.2148415-1-bvanassche@acm.org>
X-Mailing-List: linux-block@vger.kernel.org

Many but not all storage controllers require serialization of zoned writes. Introduce two new request queue limit member variables related to write serialization. 'driver_preserves_write_order' allows block drivers to indicate that the order of write commands is preserved and hence that serialization of writes per zone is not required.
'use_zone_write_lock' is set by disk_set_zoned() if and only if the block device has zones and if the block driver does not preserve the order of write requests. Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Nitesh Shetty Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/blk-settings.c | 15 +++++++++++++++ block/blk-zoned.c | 1 + include/linux/blkdev.h | 10 ++++++++++ 3 files changed, 26 insertions(+) diff --git a/block/blk-settings.c b/block/blk-settings.c index 0046b447268f..4c776c08f190 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -56,6 +56,8 @@ void blk_set_default_limits(struct queue_limits *lim) lim->alignment_offset = 0; lim->io_opt = 0; lim->misaligned = 0; + lim->driver_preserves_write_order = false; + lim->use_zone_write_lock = false; lim->zoned = BLK_ZONED_NONE; lim->zone_write_granularity = 0; lim->dma_alignment = 511; @@ -82,6 +84,8 @@ void blk_set_stacking_limits(struct queue_limits *lim) lim->max_dev_sectors = UINT_MAX; lim->max_write_zeroes_sectors = UINT_MAX; lim->max_zone_append_sectors = UINT_MAX; + /* Request-based stacking drivers do not reorder requests. */ + lim->driver_preserves_write_order = true; } EXPORT_SYMBOL(blk_set_stacking_limits); @@ -685,6 +689,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, b->max_secure_erase_sectors); t->zone_write_granularity = max(t->zone_write_granularity, b->zone_write_granularity); + t->driver_preserves_write_order = t->driver_preserves_write_order && + b->driver_preserves_write_order; + t->use_zone_write_lock = t->use_zone_write_lock || + b->use_zone_write_lock; t->zoned = max(t->zoned, b->zoned); return ret; } @@ -949,6 +957,13 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model) } q->limits.zoned = model; + /* + * Use the zone write lock only for zoned block devices and only if + * the block driver does not preserve the order of write commands. + */ + q->limits.use_zone_write_lock = model != BLK_ZONED_NONE && + !q->limits.driver_preserves_write_order; + if (model != BLK_ZONED_NONE) { /* * Set the zone write granularity to the device logical block diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 619ee41a51cc..112620985bff 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -631,6 +631,7 @@ void disk_clear_zone_settings(struct gendisk *disk) q->limits.chunk_sectors = 0; q->limits.zone_write_granularity = 0; q->limits.max_zone_append_sectors = 0; + q->limits.use_zone_write_lock = false; blk_mq_unfreeze_queue(q); } diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index eef450f25982..b67bd8433225 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -316,6 +316,16 @@ struct queue_limits { unsigned char misaligned; unsigned char discard_misaligned; unsigned char raid_partial_stripes_expensive; + /* + * Whether or not the block driver preserves the order of write + * requests. Set by the block driver. + */ + bool driver_preserves_write_order; + /* + * Whether or not zone write locking should be used. Set by + * disk_set_zoned(). 
+ */ + bool use_zone_write_lock; enum blk_zoned_model zoned; /* From patchwork Wed Oct 18 17:54:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427621 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D4F8CDB486 for ; Wed, 18 Oct 2023 17:56:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231888AbjJRR4U (ORCPT ); Wed, 18 Oct 2023 13:56:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231929AbjJRR4R (ORCPT ); Wed, 18 Oct 2023 13:56:17 -0400 Received: from mail-oo1-f42.google.com (mail-oo1-f42.google.com [209.85.161.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3666B115; Wed, 18 Oct 2023 10:56:16 -0700 (PDT) Received: by mail-oo1-f42.google.com with SMTP id 006d021491bc7-57f02eeabcaso4353675eaf.0; Wed, 18 Oct 2023 10:56:16 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651775; x=1698256575; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=aH5XPQQCiAVF4r+XWGxE/2XmtFY/9NCJod7T1V555TM=; b=lFNtni6kIw4tl5KWaj+2yGFuVZ8Z4NB8b3u3555bdIyT38gKhw2pmztO9mn5BgcAoZ 44yMMIyp6yvW8/k4FHj6bHHgEHQJZ2g6p5TbZcfjTyikxgUZl9O9d4YE+hCT47ShS4H2 D94vtK1514dgFcq+M+7aQVSuYff2n6xpgLPIepZ/XlnJIiQaxBfCc4w0E/Y7RVCzDhyV QKQ2QDaKSU5atgiVQ9rehMEs0yQMBWZWIFuTLj6qFyOooJHvybkCvJ+sImxoDpZ9dlti rbVejTfydPZuZO7duNleD3kAz60ialuDagaPvuFks+act9doFH/OrGXXL2mwJSq3XB0k VK4w== X-Gm-Message-State: AOJu0Yw8rjSROPKPKRMn4311Kwq+ZTFmcxf3AKPy7MMPM1RVQEqJXGV4 j2NETv9wQ1t4jABEOe9tQ7YA0hc4uhQ= X-Google-Smtp-Source: AGHT+IH6CojMHFjCbI8d/JAhHQzcOiVFOjDlhE6WMOAHvIU5rkIWR/nuHJKnh/8G26WdsJ/tsq77tw== X-Received: by 2002:a05:6359:c87:b0:143:383e:5b22 with SMTP id go7-20020a0563590c8700b00143383e5b22mr6342514rwb.28.1697651775378; Wed, 18 Oct 2023 10:56:15 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:15 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Hannes Reinecke , Nitesh Shetty , Damien Le Moal , Ming Lei Subject: [PATCH v13 02/18] block: Only use write locking if necessary Date: Wed, 18 Oct 2023 10:54:24 -0700 Message-ID: <20231018175602.2148415-3-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Make blk_req_needs_zone_write_lock() return false if q->limits.use_zone_write_lock is false. 
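To make the combined policy of patches 01 and 02 easier to see outside the kernel tree, here is a minimal stand-alone C sketch. The struct and enum below are simplified stand-ins for the kernel's queue_limits and blk_zoned_model, not the real types; only the decision logic mirrors the patches.

    #include <assert.h>
    #include <stdbool.h>

    /* Simplified stand-ins for the kernel types touched by these patches. */
    enum zoned_model { ZONED_NONE, ZONED_HM };

    struct limits {
            bool driver_preserves_write_order;
            bool use_zone_write_lock;
            enum zoned_model zoned;
    };

    /* Models disk_set_zoned(): lock only zoned devices whose driver may reorder writes. */
    static void set_zoned(struct limits *lim, enum zoned_model model)
    {
            lim->zoned = model;
            lim->use_zone_write_lock = model != ZONED_NONE &&
                                       !lim->driver_preserves_write_order;
    }

    /* Models the early return added to blk_req_needs_zone_write_lock(). */
    static bool needs_zone_write_lock(const struct limits *lim, bool seq_zoned_write)
    {
            if (!lim->use_zone_write_lock)
                    return false;
            return seq_zoned_write;
    }

    int main(void)
    {
            struct limits a = { .driver_preserves_write_order = true };
            struct limits b = { .driver_preserves_write_order = false };

            set_zoned(&a, ZONED_HM);
            set_zoned(&b, ZONED_HM);
            assert(!needs_zone_write_lock(&a, true)); /* order preserved: no lock */
            assert(needs_zone_write_lock(&b, true));  /* order not preserved: lock */
            return 0;
    }

For stacked devices, blk_stack_limits() combines the two flags conservatively: driver_preserves_write_order is the logical AND of the stacked limits, while use_zone_write_lock is their logical OR.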
Reviewed-by: Hannes Reinecke Reviewed-by: Nitesh Shetty Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/blk-zoned.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 112620985bff..d8a80cce832f 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -53,11 +53,16 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond) EXPORT_SYMBOL_GPL(blk_zone_cond_str); /* - * Return true if a request is a write requests that needs zone write locking. + * Return true if a request is a write request that needs zone write locking. */ bool blk_req_needs_zone_write_lock(struct request *rq) { - if (!rq->q->disk->seq_zones_wlock) + struct request_queue *q = rq->q; + + if (!q->limits.use_zone_write_lock) + return false; + + if (!q->disk->seq_zones_wlock) return false; return blk_rq_is_seq_zoned_write(rq); From patchwork Wed Oct 18 17:54:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427622 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D5BBCDB47E for ; Wed, 18 Oct 2023 17:56:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231378AbjJRR4Y (ORCPT ); Wed, 18 Oct 2023 13:56:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231949AbjJRR4S (ORCPT ); Wed, 18 Oct 2023 13:56:18 -0400 Received: from mail-oi1-f171.google.com (mail-oi1-f171.google.com [209.85.167.171]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64545118; Wed, 18 Oct 2023 10:56:17 -0700 (PDT) Received: by mail-oi1-f171.google.com with SMTP id 5614622812f47-3af957bd7e9so4533720b6e.3; Wed, 18 Oct 2023 10:56:17 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651776; x=1698256576; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=YRTmtbNb958+HOpWsWyPv7WMdLRsuClrQv5UmN5LTCA=; b=NWY4dMFRH7p5yT4uagBKmpPy5RRZzpZxOzpJtmybpQVUVqdsYpwFzWjO7kMqc8hZNi XdfOfVOnDhscYtDeGPPfnChGqPiLJ0aLMXS+J2Qtc4lEgL+i3loDjGn4nU2HVwb696qw PiJ8eiJcoDxQngweFNcyIcpCY2hPF8W55jNXlsiqW+XZHlxUM3jYfg4l939nKy8w1OCF bTFlrOTcPGzB+BjGYl27Pyg7/OLMKQJeOC2CjCpZjyhq63noWFVaI5X3mSnwnUtuxmw7 SRSQmh5CFZIt93VFFUjC5YB++aG1ocTcR1ppCGB1SXidDVJAKFDo6KWnCeTV7D2m3HLy aJxw== X-Gm-Message-State: AOJu0YyJsokhM6qAIz6+bsoJA+7jaSbkhQLF4yIz4tNm/0Uy9jxv+2FW XiBOUIN7DHhg2JqPaEeLZOw= X-Google-Smtp-Source: AGHT+IEzEftZmMwd3uy8c4jFlRWiCMNewtVViUO8x56NGgTVIs1Slkt0YnO3C+U1f2T7BU7P13kVMQ== X-Received: by 2002:a05:6808:1710:b0:3ac:aae1:6d64 with SMTP id bc16-20020a056808171000b003acaae16d64mr7629540oib.2.1697651776618; Wed, 18 Oct 2023 10:56:16 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:16 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . 
Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , Hannes Reinecke Subject: [PATCH v13 03/18] block: Preserve the order of requeued zoned writes Date: Wed, 18 Oct 2023 10:54:25 -0700 Message-ID: <20231018175602.2148415-4-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org blk_mq_requeue_work() inserts requeued requests in front of other requests. This is fine for all request types except for sequential zoned writes. Hence this patch. Note: moving this functionality into the mq-deadline I/O scheduler is not an option because we want to be able to use zoned storage without I/O scheduler. Cc: Christoph Hellwig Cc: Damien Le Moal Cc: Ming Lei Cc: Hannes Reinecke Signed-off-by: Bart Van Assche --- block/blk-mq.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index 502dafa76716..ce6ddb249959 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1485,7 +1485,9 @@ static void blk_mq_requeue_work(struct work_struct *work) blk_mq_request_bypass_insert(rq, 0); } else { list_del_init(&rq->queuelist); - blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD); + blk_mq_insert_request(rq, + !blk_rq_is_seq_zoned_write(rq) ? + BLK_MQ_INSERT_AT_HEAD : 0); } } From patchwork Wed Oct 18 17:54:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427623 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C39E2CDB482 for ; Wed, 18 Oct 2023 17:56:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231453AbjJRR4Y (ORCPT ); Wed, 18 Oct 2023 13:56:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58968 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229744AbjJRR4U (ORCPT ); Wed, 18 Oct 2023 13:56:20 -0400 Received: from mail-pf1-f170.google.com (mail-pf1-f170.google.com [209.85.210.170]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3451EA; Wed, 18 Oct 2023 10:56:18 -0700 (PDT) Received: by mail-pf1-f170.google.com with SMTP id d2e1a72fcca58-6b497c8575aso5203262b3a.1; Wed, 18 Oct 2023 10:56:18 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651778; x=1698256578; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Qt1sHc5t/dL7mhPeX2SDNFeaW6muSPBn8kxtGzd5CRY=; b=DGhhTfbZRKm2e0uWbj9+IoM0VauYuTSrORJPwPAvHlEV7o/AO1oByxkqfc7yiTqwsW 1qQohxlmwnht/r+K/RptmB/NpdwBXYqnbwLbvv8DG4FE6b5nMSCn6y/G806U2ErtyYF5 Bbjx4eGVuEepL3stL0iKZ0H5pqFMwW1VIBHKrJv3RHf1w4F3JgGgpc5SvH1dnaRI02TU 2G5ylLXySSOwJ/93BjHarhYH+VaAViMQMTN2JAGc0swCUa+cGjbmsWxrYsmc7kRvNc9C t+1hFTWXQPdhrxjCbnUhOB6kwKzd6IO/6aSk8xhOb+pb7Nih5/D+7+SstvZoRkA20ESj Y4ng== X-Gm-Message-State: AOJu0YysCjOP11GOpTWCF+kc9s/jZDCD7EWb9aMxHZe3kKpUOmnrvXsG GVh/igELD3k4O6+nccn69NPukjBLADQ= X-Google-Smtp-Source: AGHT+IHcnR0CY/CXBx2s2SVvYMBx+axckIS4Bspn3uTESZHJD3rGlalk2Mc6QtoJohxQkNHhqtfwlw== X-Received: by 2002:a05:6a00:809:b0:690:1c1b:aefd with SMTP 
id m9-20020a056a00080900b006901c1baefdmr7148683pfk.5.1697651777835; Wed, 18 Oct 2023 10:56:17 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:17 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Hannes Reinecke , Nitesh Shetty , Ming Lei Subject: [PATCH v13 04/18] block/mq-deadline: Only use zone locking if necessary Date: Wed, 18 Oct 2023 10:54:26 -0700 Message-ID: <20231018175602.2148415-5-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Measurements have shown that limiting the queue depth to one per zone for zoned writes has a significant negative performance impact on zoned UFS devices. Hence this patch that disables zone locking by the mq-deadline scheduler if the storage controller preserves the command order. This patch is based on the following assumptions: - It happens infrequently that zoned write requests are reordered by the block layer. - The I/O priority of all write requests is the same per zone. - Either no I/O scheduler is used or an I/O scheduler is used that serializes write requests per zone. Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Nitesh Shetty Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- block/mq-deadline.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/block/mq-deadline.c b/block/mq-deadline.c index f958e79277b8..082ccf3186f4 100644 --- a/block/mq-deadline.c +++ b/block/mq-deadline.c @@ -353,7 +353,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio, return NULL; rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next); - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -398,7 +398,7 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio, if (!rq) return NULL; - if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q)) + if (data_dir == DD_READ || !rq->q->limits.use_zone_write_lock) return rq; /* @@ -526,8 +526,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd, } /* - * For a zoned block device, if we only have writes queued and none of - * them can be dispatched, rq will be NULL. + * For a zoned block device that requires write serialization, if we + * only have writes queued and none of them can be dispatched, rq will + * be NULL. 
*/ if (!rq) return NULL; @@ -934,7 +935,7 @@ static void dd_finish_request(struct request *rq) atomic_inc(&per_prio->stats.completed); - if (blk_queue_is_zoned(q)) { + if (rq->q->limits.use_zone_write_lock) { unsigned long flags; spin_lock_irqsave(&dd->zone_lock, flags); From patchwork Wed Oct 18 17:54:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427624 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E964DCDB486 for ; Wed, 18 Oct 2023 17:56:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232011AbjJRR40 (ORCPT ); Wed, 18 Oct 2023 13:56:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59022 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231929AbjJRR4X (ORCPT ); Wed, 18 Oct 2023 13:56:23 -0400 Received: from mail-oo1-f53.google.com (mail-oo1-f53.google.com [209.85.161.53]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1DC5711A; Wed, 18 Oct 2023 10:56:20 -0700 (PDT) Received: by mail-oo1-f53.google.com with SMTP id 006d021491bc7-57babef76deso3939524eaf.0; Wed, 18 Oct 2023 10:56:20 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651779; x=1698256579; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=brIl8BbMZiSDDShSYZ1n3Fk5rZQIHjD8DD65DedRTmw=; b=Fl9NUTE74/pNiZBDIlBSU1UERFL8htZ6sGx5O6IaqvUMVoDhaVol+f0SHRw/saVt0N nnwZQwp8NhRP2bCz1LHbbkI9AJEpT7WBMGPmrDNrTvvxm2Rhmr43/xLTYT8oy7OBNtYI pS+QuPSfc5cdp/Igg87LGF5FmkaVaK+aErKmnPg4+NgBUAH6VfI0AnZOn6zJnf4B/Qif xILbJkB9MCkGW6O09IToB6JXrtT/9rUEAteyQV22bD5GE+qSUFsD2mTvMjdG3bB79cZL cY2hLafzvjY7m6zgPvdOqgAk36ila5xldCaY8DVrp0yNmBLUtYT6NPQbEmo0Gn8fmW7y n2UA== X-Gm-Message-State: AOJu0Yw/ZkNaVp7s1qsHcWCbbMJBGBavDKly0h1TRkWDeAoO21czcyKa reo5qYXstmczdVV0XOXmX2U= X-Google-Smtp-Source: AGHT+IHgGVNmPWS2hKiIIOxkKivicFdJ7ipNBuKrYq2+dmhFQ7KTO4Bkk9kctTlNIgaLUXcOsdrJ2w== X-Received: by 2002:a05:6358:24a2:b0:168:9d60:e6b4 with SMTP id m34-20020a05635824a200b001689d60e6b4mr906058rwc.16.1697651779166; Wed, 18 Oct 2023 10:56:19 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:18 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v13 05/18] scsi: core: Introduce a mechanism for reordering requests in the error handler Date: Wed, 18 Oct 2023 10:54:27 -0700 Message-ID: <20231018175602.2148415-6-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Introduce the .eh_needs_prepare_resubmit and the .eh_prepare_resubmit function pointers in struct scsi_driver. 
Make the error handler call .eh_prepare_resubmit() before resubmitting commands if any of the .eh_needs_prepare_resubmit() invocations return true. A later patch will use this functionality to sort SCSI commands by LBA from inside the SCSI disk driver before these are resubmitted by the error handler. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 72 ++++++++++++++++++++++++++++++++++++++ drivers/scsi/scsi_priv.h | 1 + include/scsi/scsi_driver.h | 2 ++ 3 files changed, 75 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c67cdcdc3ba8..c877db23a72d 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -2186,6 +2187,75 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost, } EXPORT_SYMBOL_GPL(scsi_eh_ready_devs); +/* + * Returns true if .eh_prepare_resubmit should be called for the commands in + * @done_q. + */ +static bool scsi_needs_preparation(struct list_head *done_q) +{ + struct scsi_cmnd *scmd; + + list_for_each_entry(scmd, done_q, eh_entry) { + struct scsi_driver *uld; + bool (*npr)(struct scsi_cmnd *scmd); + + if (!scmd->device) + continue; + uld = scsi_cmd_to_driver(scmd); + if (!uld) + continue; + npr = uld->eh_needs_prepare_resubmit; + if (npr && npr(scmd)) + return true; + } + + return false; +} + +/* + * Comparison function that allows to sort SCSI commands by ULD driver. + */ +static int scsi_cmp_uld(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + + /* See also the comment above the list_sort() definition. */ + return scsi_cmd_to_driver(a) > scsi_cmd_to_driver(b); +} + +void scsi_call_prepare_resubmit(struct list_head *done_q) +{ + struct scsi_cmnd *scmd, *next; + + if (!scsi_needs_preparation(done_q)) + return; + + /* Sort pending SCSI commands by ULD. */ + list_sort(NULL, done_q, scsi_cmp_uld); + + /* + * Call .eh_prepare_resubmit for each range of commands with identical + * ULD driver pointer. + */ + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { + struct scsi_driver *uld = + scmd->device ? scsi_cmd_to_driver(scmd) : NULL; + struct list_head *prev, uld_cmd_list; + + while (&next->eh_entry != done_q && + scsi_cmd_to_driver(next) == uld) + next = list_next_entry(next, eh_entry); + if (!uld->eh_prepare_resubmit) + continue; + prev = scmd->eh_entry.prev; + list_cut_position(&uld_cmd_list, prev, next->eh_entry.prev); + uld->eh_prepare_resubmit(&uld_cmd_list); + list_splice(&uld_cmd_list, prev); + } +} + /** * scsi_eh_flush_done_q - finish processed commands or retry them. * @done_q: list_head of processed commands. 
@@ -2194,6 +2264,8 @@ void scsi_eh_flush_done_q(struct list_head *done_q) { struct scsi_cmnd *scmd, *next; + scsi_call_prepare_resubmit(done_q); + list_for_each_entry_safe(scmd, next, done_q, eh_entry) { list_del_init(&scmd->eh_entry); if (scsi_device_online(scmd->device) && diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h index 3f0dfb97db6b..64070e530c4d 100644 --- a/drivers/scsi/scsi_priv.h +++ b/drivers/scsi/scsi_priv.h @@ -101,6 +101,7 @@ int scsi_eh_get_sense(struct list_head *work_q, struct list_head *done_q); bool scsi_noretry_cmd(struct scsi_cmnd *scmd); void scsi_eh_done(struct scsi_cmnd *scmd); +void scsi_call_prepare_resubmit(struct list_head *done_q); /* scsi_lib.c */ extern void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd); diff --git a/include/scsi/scsi_driver.h b/include/scsi/scsi_driver.h index 4ce1988b2ba0..00ffa470724a 100644 --- a/include/scsi/scsi_driver.h +++ b/include/scsi/scsi_driver.h @@ -18,6 +18,8 @@ struct scsi_driver { int (*done)(struct scsi_cmnd *); int (*eh_action)(struct scsi_cmnd *, int); void (*eh_reset)(struct scsi_cmnd *); + bool (*eh_needs_prepare_resubmit)(struct scsi_cmnd *cmd); + void (*eh_prepare_resubmit)(struct list_head *cmd_list); }; #define to_scsi_driver(drv) \ container_of((drv), struct scsi_driver, gendrv) From patchwork Wed Oct 18 17:54:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427625 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E9E8C41513 for ; Wed, 18 Oct 2023 17:56:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232505AbjJRR4k (ORCPT ); Wed, 18 Oct 2023 13:56:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232131AbjJRR4c (ORCPT ); Wed, 18 Oct 2023 13:56:32 -0400 Received: from mail-pf1-f172.google.com (mail-pf1-f172.google.com [209.85.210.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6A32F11B; Wed, 18 Oct 2023 10:56:27 -0700 (PDT) Received: by mail-pf1-f172.google.com with SMTP id d2e1a72fcca58-6b709048d8eso3850075b3a.2; Wed, 18 Oct 2023 10:56:27 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651787; x=1698256587; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=U1JXLfB2JK3WrYNsCNEQ8eO7aaXYOrN4a02AXqgjqlw=; b=AWwUqjRME1hENhvOCJ5JLkMKQZ1qziXWA68dtfJkLZ4JuWzgXAI9EcZnPwXP+JsXY6 Nx6y5m5YAq3gQGBY8WMIcCxkzXC+LirbERx8f7yaTgCuy+uypi/6Tad0svnrNJSCFWV7 IjlUtRhBk0ySd5woIR7ycO+TO8nNRuiU8FfVIKQ2l23n9HLy64abQneuiOSWv/vb3tyH ryL8SEsG0nGk6OgboBCXSk4nNfRK5Ln0oVwt1Dhu9oCOdXKHz5e8O1dXOmHF6cDGCy9h kf+5cwmRX23o+gA7qPLfNSITCbZrUpkno7whytsx8TF5bw64S2pkTDE1XQkFQI04bUYH 06PQ== X-Gm-Message-State: AOJu0Yx+Lk3ukYUKPzqAn3J8Jwk/MXA9Pfg+3D7jNdyIVTBCaSxGbdcC DjahU/c06rZnLvt7M6R0pzc= X-Google-Smtp-Source: AGHT+IEI46XOpvOxctT8bngocbCLeOod6LS1ZY+SMiwJSn/ddib2df1Wg98ZFxtxLTH+wQjRppIrnw== X-Received: by 2002:a05:6a00:22d3:b0:6bc:c242:792e with SMTP id f19-20020a056a0022d300b006bcc242792emr6552248pfj.27.1697651786735; Wed, 18 Oct 2023 10:56:26 -0700 (PDT) Received: from 
bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:26 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v13 06/18] scsi: core: Add unit tests for scsi_call_prepare_resubmit() Date: Wed, 18 Oct 2023 10:54:28 -0700 Message-ID: <20231018175602.2148415-7-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Triggering all code paths in scsi_call_prepare_resubmit() via manual testing is difficult. Hence add unit tests for this function. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/Kconfig | 2 + drivers/scsi/Kconfig.kunit | 4 + drivers/scsi/Makefile | 2 + drivers/scsi/Makefile.kunit | 1 + drivers/scsi/scsi_error.c | 3 + drivers/scsi/scsi_error_test.c | 207 +++++++++++++++++++++++++++++++++ 6 files changed, 219 insertions(+) create mode 100644 drivers/scsi/Kconfig.kunit create mode 100644 drivers/scsi/Makefile.kunit create mode 100644 drivers/scsi/scsi_error_test.c diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 695a57d894cd..734a5b10e94c 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -232,6 +232,8 @@ config SCSI_SCAN_ASYNC Note that this setting also affects whether resuming from system suspend will be performed asynchronously. +source "drivers/scsi/Kconfig.kunit" + menu "SCSI Transports" depends on SCSI diff --git a/drivers/scsi/Kconfig.kunit b/drivers/scsi/Kconfig.kunit new file mode 100644 index 000000000000..90984a6ec7cc --- /dev/null +++ b/drivers/scsi/Kconfig.kunit @@ -0,0 +1,4 @@ +config SCSI_ERROR_TEST + tristate "scsi_error.c unit tests" if !KUNIT_ALL_TESTS + depends on SCSI && KUNIT + default KUNIT_ALL_TESTS diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index f055bfd54a68..1c5c3afb6c6e 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -168,6 +168,8 @@ scsi_mod-$(CONFIG_PM) += scsi_pm.o scsi_mod-$(CONFIG_SCSI_DH) += scsi_dh.o scsi_mod-$(CONFIG_BLK_DEV_BSG) += scsi_bsg.o +include $(srctree)/drivers/scsi/Makefile.kunit + hv_storvsc-y := storvsc_drv.o sd_mod-objs := sd.o diff --git a/drivers/scsi/Makefile.kunit b/drivers/scsi/Makefile.kunit new file mode 100644 index 000000000000..3e98053b2709 --- /dev/null +++ b/drivers/scsi/Makefile.kunit @@ -0,0 +1 @@ +obj-$(CONFIG_SCSI_ERROR_TEST) += scsi_error_test.o diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index c877db23a72d..76084e00b7bb 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -2255,6 +2255,9 @@ void scsi_call_prepare_resubmit(struct list_head *done_q) list_splice(&uld_cmd_list, prev); } } +#if IS_MODULE(CONFIG_SCSI_ERROR_TEST) +EXPORT_SYMBOL_GPL(scsi_call_prepare_resubmit); +#endif /** * scsi_eh_flush_done_q - finish processed commands or retry them. 
diff --git a/drivers/scsi/scsi_error_test.c b/drivers/scsi/scsi_error_test.c new file mode 100644 index 000000000000..be0d25e1fb57 --- /dev/null +++ b/drivers/scsi/scsi_error_test.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2023 Google LLC + */ +#include +#include +#include +#include "scsi_priv.h" + +static struct kunit *kunit_test; + +static bool uld_needs_prepare_resubmit(struct scsi_cmnd *cmd) +{ + struct request *rq = scsi_cmd_to_rq(cmd); + + return !rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq); +} + +static void uld_prepare_resubmit(struct list_head *cmd_list) +{ + /* This function must not be called. */ + KUNIT_EXPECT_TRUE(kunit_test, false); +} + +/* + * Verify that .eh_prepare_resubmit() is not called if use_zone_write_lock is + * true. + */ +static void test_prepare_resubmit1(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = false, + .use_zone_write_lock = true, + .zoned = BLK_ZONED_HM, + } + }; + static struct scsi_driver uld = { + .eh_needs_prepare_resubmit = uld_needs_prepare_resubmit, + .eh_prepare_resubmit = uld_prepare_resubmit, + }; + static struct scsi_device dev = { + .request_queue = &q, + .sdev_gendev.driver = &uld.gendrv, + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + disk.queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 2, + }, + .cmd.device = &dev, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 1; + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 2); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); +} + +static struct scsi_driver *uld1, *uld2, *uld3; + +static void uld1_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld1); +} + +static void uld2_prepare_resubmit(struct list_head *cmd_list) +{ + struct scsi_cmnd *cmd; + + KUNIT_EXPECT_EQ(kunit_test, list_count_nodes(cmd_list), 2); + list_for_each_entry(cmd, cmd_list, eh_entry) + KUNIT_EXPECT_PTR_EQ(kunit_test, scsi_cmd_to_driver(cmd), uld2); +} + +static void test_prepare_resubmit2(struct kunit *test) +{ + static struct gendisk disk; + static struct request_queue q = { + .disk = &disk, + .limits = { + .driver_preserves_write_order = true, + .use_zone_write_lock = false, + .zoned = BLK_ZONED_HM, + } + }; + static struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; + } cmd1, cmd2, cmd3, cmd4, cmd5, cmd6; + static struct scsi_device dev1, dev2, dev3; + struct scsi_driver *uld; + LIST_HEAD(cmd_list); + + BUILD_BUG_ON(scsi_cmd_to_rq(&cmd1.cmd) != &cmd1.rq); + + uld = kzalloc(3 * sizeof(*uld), GFP_KERNEL); + uld1 = &uld[0]; + uld1->eh_needs_prepare_resubmit = uld_needs_prepare_resubmit; + uld1->eh_prepare_resubmit = uld1_prepare_resubmit; + uld2 = &uld[1]; + uld2->eh_needs_prepare_resubmit = uld_needs_prepare_resubmit; + uld2->eh_prepare_resubmit = 
uld2_prepare_resubmit; + uld3 = &uld[2]; + disk.queue = &q; + dev1.sdev_gendev.driver = &uld1->gendrv; + dev1.request_queue = &q; + dev2.sdev_gendev.driver = &uld2->gendrv; + dev2.request_queue = &q; + dev3.sdev_gendev.driver = &uld3->gendrv; + dev3.request_queue = &q; + cmd1 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 3, + }, + .cmd.device = &dev1, + }; + cmd2 = cmd1; + cmd2.rq.__sector = 4; + cmd3 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 1, + }, + .cmd.device = &dev2, + }; + cmd4 = cmd3; + cmd4.rq.__sector = 2, + cmd5 = (struct rq_and_cmd){ + .rq = { + .q = &q, + .cmd_flags = REQ_OP_WRITE, + .__sector = 5, + }, + .cmd.device = &dev3, + }; + cmd6 = cmd5; + cmd6.rq.__sector = 6; + list_add_tail(&cmd3.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd1.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd5.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd6.cmd.eh_entry, &cmd_list); + list_add_tail(&cmd4.cmd.eh_entry, &cmd_list); + + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + kunit_test = test; + scsi_call_prepare_resubmit(&cmd_list); + kunit_test = NULL; + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 6); + KUNIT_EXPECT_TRUE(test, uld1 < uld2); + KUNIT_EXPECT_TRUE(test, uld2 < uld3); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd1.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd2.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next, + &cmd3.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next, + &cmd4.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next, + &cmd5.cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next->next->next, + &cmd6.cmd.eh_entry); + kfree(uld); +} + +static struct kunit_case prepare_resubmit_test_cases[] = { + KUNIT_CASE(test_prepare_resubmit1), + KUNIT_CASE(test_prepare_resubmit2), + {} +}; + +static struct kunit_suite prepare_resubmit_test_suite = { + .name = "prepare_resubmit", + .test_cases = prepare_resubmit_test_cases, +}; +kunit_test_suite(prepare_resubmit_test_suite); + +MODULE_DESCRIPTION("scsi_call_prepare_resubmit() unit tests"); +MODULE_AUTHOR("Bart Van Assche"); +MODULE_LICENSE("GPL"); From patchwork Wed Oct 18 17:54:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427626 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2514CDB482 for ; Wed, 18 Oct 2023 17:56:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231414AbjJRR4k (ORCPT ); Wed, 18 Oct 2023 13:56:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48808 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232144AbjJRR4c (ORCPT ); Wed, 18 Oct 2023 13:56:32 -0400 Received: from mail-oo1-f52.google.com (mail-oo1-f52.google.com [209.85.161.52]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2B0AC12D; Wed, 18 Oct 2023 10:56:29 -0700 (PDT) Received: by mail-oo1-f52.google.com with SMTP id 006d021491bc7-57babef76deso3939596eaf.0; Wed, 18 Oct 2023 10:56:29 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651788; 
x=1698256588; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NrLWXaeR/EgbK40UM+EivZPIK4WgHuTtooGjxb8KUx4=; b=B7zOlsJHBtT4Sv6dm+t4CzEyqfMRM11esxdgmu1NktVjNmCSy/9N8BxXvdq0yxOoZV kywbNCWXR+zRYZ/qCIdR6CC+Bjdmy0emfnUDZSFjfpsgqY6HPwbGLspdM4C7JA8TErmi OiFN1C3y5z1AW8PYCQGOrwPrKihE7fYeZq+bMQq5lqWxQ8grd9T+zZoVIii2l2AkaWvc sM6G2OqG3qOm61CelkhD/5OivZ2YQlkAroCTe71biYXiNJQKK6U85fh6uVJBJx/NGIid gwC8kFnLJPi7Qnkmb6kJ+t1zYAw20WV5ZjQqcAmqVHN6tvBASAnJMZxR1SgDVQUkrzoB Lf9Q== X-Gm-Message-State: AOJu0YxVRWyMBYfLAMLeN37d09DatQZwfs6LQHF3bxxxs8G7wOYOP1VQ UCgDi+pquyFfOuT3/HUNiWo= X-Google-Smtp-Source: AGHT+IFT9ksTs+qWdKnaoDns9RfCBRlMio7gHRw5ccFM4lLOWB36nCZmpkKtU9duJ2qpcnMo/Te+oA== X-Received: by 2002:a05:6358:44e:b0:14c:e2d3:fb2e with SMTP id 14-20020a056358044e00b0014ce2d3fb2emr5971591rwe.0.1697651788017; Wed, 18 Oct 2023 10:56:28 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:27 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v13 07/18] scsi: sd: Sort commands by LBA before resubmitting Date: Wed, 18 Oct 2023 10:54:29 -0700 Message-ID: <20231018175602.2148415-8-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Sort SCSI commands by LBA before the SCSI error handler resubmits these commands. This is necessary when resubmitting zoned writes (REQ_OP_WRITE) if multiple writes have been queued for a single zone. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd.c | 43 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index c92a317ba547..6f26d6d6f50b 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -47,6 +47,7 @@ #include #include #include +#include #include #include #include @@ -1979,6 +1980,46 @@ static int sd_eh_action(struct scsi_cmnd *scmd, int eh_disp) return eh_disp; } +static bool sd_needs_prepare_resubmit(struct scsi_cmnd *cmd) +{ + struct request *rq = scsi_cmd_to_rq(cmd); + + return !rq->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(rq); +} + +static int sd_cmp_sector(void *priv, const struct list_head *_a, + const struct list_head *_b) +{ + struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); + struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); + struct request *rq_a = scsi_cmd_to_rq(a); + struct request *rq_b = scsi_cmd_to_rq(b); + bool use_zwl_a = rq_a->q->limits.use_zone_write_lock; + bool use_zwl_b = rq_b->q->limits.use_zone_write_lock; + + /* + * Order the commands that need zone write locking after the commands + * that do not need zone write locking. Order the commands that do not + * need zone write locking by LBA. Do not reorder the commands that + * need zone write locking. See also the comment above the list_sort() + * definition. 
+ */ + if (use_zwl_a || use_zwl_b) + return use_zwl_a > use_zwl_b; + return blk_rq_pos(rq_a) > blk_rq_pos(rq_b); +} + +static void sd_prepare_resubmit(struct list_head *cmd_list) +{ + /* + * Sort pending SCSI commands in starting sector order. This is + * important if one of the SCSI devices associated with @shost is a + * zoned block device for which zone write locking is disabled. + */ + list_sort(NULL, cmd_list, sd_cmp_sector); +} + static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd) { struct request *req = scsi_cmd_to_rq(scmd); @@ -3915,6 +3956,8 @@ static struct scsi_driver sd_template = { .done = sd_done, .eh_action = sd_eh_action, .eh_reset = sd_eh_reset, + .eh_needs_prepare_resubmit = sd_needs_prepare_resubmit, + .eh_prepare_resubmit = sd_prepare_resubmit, }; /** From patchwork Wed Oct 18 17:54:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427627 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C65BCCDB47E for ; Wed, 18 Oct 2023 17:56:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230421AbjJRR4u (ORCPT ); Wed, 18 Oct 2023 13:56:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231864AbjJRR4q (ORCPT ); Wed, 18 Oct 2023 13:56:46 -0400 Received: from mail-oo1-f52.google.com (mail-oo1-f52.google.com [209.85.161.52]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3845218F; Wed, 18 Oct 2023 10:56:36 -0700 (PDT) Received: by mail-oo1-f52.google.com with SMTP id 006d021491bc7-57ad95c555eso4013366eaf.3; Wed, 18 Oct 2023 10:56:36 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651795; x=1698256595; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=tgkcpMeB8iCIlRd5RqoUIvAC4F49icgyF9qAednPXIQ=; b=dDTinihSb34rGKoULOp0I3BWM3nAq+hoJVQdH1cD/JwNtIY4pneKLwE2/+23k5oJFK +eio9NTtLy84AZOarl5aTEf1UuyKZNAN+m0L2P18pkXeqv0cVJOOjFvI3FeJm8KTUqK3 RvUDuYmjTBZCh0xW2OgOC5AAdmzsnYEiZ+ZXR9M7McB6fy/hGIIGLe0izjOvS4GggJjE 5NYPPU4O/ZXy1S5LDWF5rEybbamMhrRSI1VjWFcEFfSFevvXwQ7c/6IPVt2Af46R2/gO u37ShyFLszBu40dWYZwOqTRCONyk3TUfz9S2OQMCCGY8EKmmqwCDbtWMg3CEiV/Vm0dV aouA== X-Gm-Message-State: AOJu0YzfmL7Y6TVzW+HNvLhaduek5P3GFH0z+nqLQ+QRyZDW4WWlqNfs lNzEUZLLLc+IHEbSCsl+sWQ= X-Google-Smtp-Source: AGHT+IFartW3SZD3gzSDosUlH5fY5KYz+xtITfvMDuLKNeN1p+x00ehJDAuWiZsgdtTQtmTmum/Wsw== X-Received: by 2002:a05:6359:3119:b0:166:dc89:8c92 with SMTP id rh25-20020a056359311900b00166dc898c92mr5272939rwb.26.1697651795256; Wed, 18 Oct 2023 10:56:35 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:34 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v13 08/18] scsi: sd: Add a unit test for sd_cmp_sector() Date: Wed, 18 Oct 2023 10:54:30 -0700 Message-ID: <20231018175602.2148415-9-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Make it easier to test sd_cmp_sector() by adding a unit test for this function. Cc: Martin K. Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/Kconfig.kunit | 5 ++ drivers/scsi/Makefile.kunit | 1 + drivers/scsi/sd.c | 7 ++- drivers/scsi/sd.h | 2 + drivers/scsi/sd_test.c | 91 +++++++++++++++++++++++++++++++++++++ 5 files changed, 104 insertions(+), 2 deletions(-) create mode 100644 drivers/scsi/sd_test.c diff --git a/drivers/scsi/Kconfig.kunit b/drivers/scsi/Kconfig.kunit index 90984a6ec7cc..907798967b6f 100644 --- a/drivers/scsi/Kconfig.kunit +++ b/drivers/scsi/Kconfig.kunit @@ -2,3 +2,8 @@ config SCSI_ERROR_TEST tristate "scsi_error.c unit tests" if !KUNIT_ALL_TESTS depends on SCSI && KUNIT default KUNIT_ALL_TESTS + +config SD_TEST + tristate "sd.c unit tests" if !KUNIT_ALL_TESTS + depends on SCSI && BLK_DEV_SD && KUNIT + default KUNIT_ALL_TESTS diff --git a/drivers/scsi/Makefile.kunit b/drivers/scsi/Makefile.kunit index 3e98053b2709..dc0a21d7749f 100644 --- a/drivers/scsi/Makefile.kunit +++ b/drivers/scsi/Makefile.kunit @@ -1 +1,2 @@ obj-$(CONFIG_SCSI_ERROR_TEST) += scsi_error_test.o +obj-$(CONFIG_SD_TEST) += sd_test.o diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 6f26d6d6f50b..c4a89300d3f9 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -1988,8 +1988,8 @@ static bool sd_needs_prepare_resubmit(struct scsi_cmnd *cmd) blk_rq_is_seq_zoned_write(rq); } -static int sd_cmp_sector(void *priv, const struct list_head *_a, - const struct list_head *_b) +int sd_cmp_sector(void *priv, const struct list_head *_a, + const struct list_head *_b) { struct scsi_cmnd *a = list_entry(_a, typeof(*a), eh_entry); struct scsi_cmnd *b = list_entry(_b, typeof(*b), eh_entry); @@ -2009,6 +2009,9 @@ static int sd_cmp_sector(void *priv, const struct list_head *_a, return use_zwl_a > use_zwl_b; return blk_rq_pos(rq_a) > blk_rq_pos(rq_b); } +#if IS_MODULE(CONFIG_SD_TEST) +EXPORT_SYMBOL_GPL(scsi_call_prepare_resubmit); +#endif static void sd_prepare_resubmit(struct list_head *cmd_list) { diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h index 5eea762f84d1..35b7cdb7bf3b 100644 --- a/drivers/scsi/sd.h +++ b/drivers/scsi/sd.h @@ -292,6 +292,8 @@ static inline blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd, #endif /* CONFIG_BLK_DEV_ZONED */ +int sd_cmp_sector(void *priv, const struct list_head *_a, + const struct list_head *_b); void sd_print_sense_hdr(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr); void sd_print_result(const struct scsi_disk *sdkp, const char *msg, int result); diff --git a/drivers/scsi/sd_test.c b/drivers/scsi/sd_test.c new file mode 100644 index 000000000000..0dc0f4c67b96 --- /dev/null +++ b/drivers/scsi/sd_test.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2023 Google LLC + */ +#include +#include +#include +#include +#include +#include +#include "sd.h" + +#define ALLOC_Q(...) 
\ + ({ \ + struct request_queue *q; \ + q = kmalloc(sizeof(*q), GFP_KERNEL); \ + if (q) \ + *q = (struct request_queue){ __VA_ARGS__ }; \ + q; \ + }) + +#define ALLOC_CMD(...) \ + ({ \ + struct rq_and_cmd *cmd; \ + cmd = kmalloc(sizeof(*cmd), GFP_KERNEL); \ + if (cmd) \ + *cmd = (struct rq_and_cmd){ __VA_ARGS__ }; \ + cmd; \ + }) + +struct rq_and_cmd { + struct request rq; + struct scsi_cmnd cmd; +}; + +/* + * Verify that sd_cmp_sector() does what it is expected to do. + */ +static void test_sd_cmp_sector(struct kunit *test) +{ + struct request_queue *q1 __free(kfree) = + ALLOC_Q(.limits.use_zone_write_lock = true); + struct request_queue *q2 __free(kfree) = + ALLOC_Q(.limits.use_zone_write_lock = false); + struct rq_and_cmd *cmd1 __free(kfree) = ALLOC_CMD(.rq = { + .q = q1, + .__sector = 7, + }); + struct rq_and_cmd *cmd2 __free(kfree) = ALLOC_CMD(.rq = { + .q = q1, + .__sector = 5, + }); + struct rq_and_cmd *cmd3 __free(kfree) = ALLOC_CMD(.rq = { + .q = q2, + .__sector = 7, + }); + struct rq_and_cmd *cmd4 __free(kfree) = ALLOC_CMD(.rq = { + .q = q2, + .__sector = 5, + }); + LIST_HEAD(cmd_list); + + list_add_tail(&cmd1->cmd.eh_entry, &cmd_list); + list_add_tail(&cmd2->cmd.eh_entry, &cmd_list); + list_add_tail(&cmd3->cmd.eh_entry, &cmd_list); + list_add_tail(&cmd4->cmd.eh_entry, &cmd_list); + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 4); + list_sort(NULL, &cmd_list, sd_cmp_sector); + KUNIT_EXPECT_EQ(test, list_count_nodes(&cmd_list), 4); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next, &cmd4->cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next, &cmd3->cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next, + &cmd1->cmd.eh_entry); + KUNIT_EXPECT_PTR_EQ(test, cmd_list.next->next->next->next, + &cmd2->cmd.eh_entry); +} + +static struct kunit_case sd_test_cases[] = { + KUNIT_CASE(test_sd_cmp_sector), + {} +}; + +static struct kunit_suite sd_test_suite = { + .name = "sd", + .test_cases = sd_test_cases, +}; +kunit_test_suite(sd_test_suite); + +MODULE_DESCRIPTION("SCSI disk (sd) driver unit tests"); +MODULE_AUTHOR("Bart Van Assche"); +MODULE_LICENSE("GPL"); From patchwork Wed Oct 18 17:54:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427629 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8981DCDB483 for ; Wed, 18 Oct 2023 17:56:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232513AbjJRR4v (ORCPT ); Wed, 18 Oct 2023 13:56:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54498 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232360AbjJRR4q (ORCPT ); Wed, 18 Oct 2023 13:56:46 -0400 Received: from mail-oo1-f48.google.com (mail-oo1-f48.google.com [209.85.161.48]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AED7A19A; Wed, 18 Oct 2023 10:56:37 -0700 (PDT) Received: by mail-oo1-f48.google.com with SMTP id 006d021491bc7-57b5ef5b947so4587538eaf.0; Wed, 18 Oct 2023 10:56:37 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651797; x=1698256597; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=7StF+XzhGIuJN1x0+w93jPC2qHfbPVL0hvVi3cKoSP0=; b=Sj3ag2ArGB6kJprx+7l8AgXACUK3HpnbyGzVkgALvSOoO7oKU4YxOZYeCzF31RZfnx Sd0XZTOyzrMGuxO56K8rv2xVoLIuuIjDPipkb8y1mcHE33cDl+fBZIE6z9E4HqWj2fTT FgtmjrrEcPNHlv6EoNFY1UpBseBreLrBKHBMoD2+HOcW2ORLIUT3MAuEltA+FVfIJKrL HRQU92YGs6OeQ0yKAOWh7cesNAzfsCuw5wxyX5Ms/ndcdTl9T97Q4sr102hw+nEuNEZ/ TeO4cWwNMGDEk8OjKnZqJUw90CikYhGhCSotFFxHstg6TUV20cK1lG8/TKMWxxbBVd0X lPUQ== X-Gm-Message-State: AOJu0YwEcMw+CPw7b73WJDzExdg7/fteDfTjo7VshV0VnHlfLc6HF7ot sldq5IY16LHHr2M0ONtrOw4= X-Google-Smtp-Source: AGHT+IGkL+7P8h7z1Lc7tGw18GKjzPhDhxmWCJhmIS0DU4dh8E+FA+R4EjfNGK0Z5yFfY3Oz9lHaqA== X-Received: by 2002:a05:6359:6301:b0:166:caa8:d48a with SMTP id sf1-20020a056359630100b00166caa8d48amr5680933rwb.1.1697651796795; Wed, 18 Oct 2023 10:56:36 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:36 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v13 09/18] scsi: core: Retry unaligned zoned writes Date: Wed, 18 Oct 2023 10:54:31 -0700 Message-ID: <20231018175602.2148415-10-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org If zoned writes (REQ_OP_WRITE) for a sequential write required zone have a starting LBA that differs from the write pointer, e.g. because zoned writes have been reordered, then the storage device will respond with an UNALIGNED WRITE COMMAND error. Send commands that failed with an unaligned write error to the SCSI error handler if zone write locking is disabled. The SCSI error handler will sort SCSI commands per LBA before resubmitting these. If zone write locking is disabled, increase the number of retries for write commands sent to a sequential zone to the maximum number of outstanding commands because in the worst case the number of times reordered zoned writes have to be retried is (number of outstanding writes per sequential zone) - 1. Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_error.c | 16 ++++++++++++++++ drivers/scsi/scsi_lib.c | 1 + drivers/scsi/sd.c | 6 ++++++ include/scsi/scsi.h | 1 + 4 files changed, 24 insertions(+) diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index 76084e00b7bb..b8eb6030bd23 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -699,6 +699,22 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd) fallthrough; case ILLEGAL_REQUEST: + /* + * Unaligned write command. This may indicate that zoned writes + * have been received by the device in the wrong order. If zone + * write locking is disabled, retry after all pending commands + * have completed. 
+ */ + if (sshdr.asc == 0x21 && sshdr.ascq == 0x04 && + !req->q->limits.use_zone_write_lock && + blk_rq_is_seq_zoned_write(req) && + scmd->retries <= scmd->allowed) { + sdev_printk(KERN_INFO, scmd->device, + "Retrying unaligned write at LBA %#llx.\n", + scsi_get_lba(scmd)); + return NEEDS_DELAYED_RETRY; + } + if (sshdr.asc == 0x20 || /* Invalid command operation code */ sshdr.asc == 0x21 || /* Logical block address out of range */ sshdr.asc == 0x22 || /* Invalid function */ diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index c2f647a7c1b0..33a34693c8a2 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1443,6 +1443,7 @@ static void scsi_complete(struct request *rq) case ADD_TO_MLQUEUE: scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY); break; + case NEEDS_DELAYED_RETRY: default: scsi_eh_scmd_add(cmd); break; diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index c4a89300d3f9..0461eb9b61fa 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -1197,6 +1197,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) cmd->transfersize = sdp->sector_size; cmd->underflow = nr_blocks << 9; cmd->allowed = sdkp->max_retries; + /* + * Increase the number of allowed retries for zoned writes if zone + * write locking is disabled. + */ + if (!rq->q->limits.use_zone_write_lock && blk_rq_is_seq_zoned_write(rq)) + cmd->allowed += rq->q->nr_requests; cmd->sdb.length = nr_blocks * sdp->sector_size; SCSI_LOG_HLQUEUE(1, diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h index ec093594ba53..6600db046227 100644 --- a/include/scsi/scsi.h +++ b/include/scsi/scsi.h @@ -93,6 +93,7 @@ static inline int scsi_status_is_check_condition(int status) * Internal return values. */ enum scsi_disposition { + NEEDS_DELAYED_RETRY = 0x2000, NEEDS_RETRY = 0x2001, SUCCESS = 0x2002, FAILED = 0x2003, From patchwork Wed Oct 18 17:54:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427628 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80248C46CA1 for ; Wed, 18 Oct 2023 17:56:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230138AbjJRR4x (ORCPT ); Wed, 18 Oct 2023 13:56:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231865AbjJRR4q (ORCPT ); Wed, 18 Oct 2023 13:56:46 -0400 Received: from mail-oi1-f169.google.com (mail-oi1-f169.google.com [209.85.167.169]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE2FB116; Wed, 18 Oct 2023 10:56:38 -0700 (PDT) Received: by mail-oi1-f169.google.com with SMTP id 5614622812f47-3af957bd7e9so4533958b6e.3; Wed, 18 Oct 2023 10:56:38 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651798; x=1698256598; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KzaYxnCqQNUVR3i3khEB0a4wN2TqC5S5bjyD6Ki8qwY=; b=tD80rO3E+bjqq/Du5Acj4/x13yfk/4SjprnG+vwx+2sV2mKehu0fXZTDiAcV4nvU3B RuhoqMhWkojmC9j1jqDUWIp9dFrp3lzIV8dlsFLiD9Lvbsmop6yhbf5QLsizWFMPtup4 dI8z/pLNpVsadA12eXHDQ9B4DN5/1p/MI/lt6rGFP5GO5EskYM19VX/5PWMUp0pXNj6I 
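
As a worked example of the retry budget added to sd_setup_read_write_cmnd() above: with hypothetical values max_retries = 5 and nr_requests = 32 (these numbers are not taken from the patch), a reordered zoned write ends up with 37 allowed attempts, which covers the worst case of nr_requests - 1 other pending writes to the same zone completing ahead of it. A minimal sketch of that calculation, not the driver code itself:

#include <stdbool.h>

/* Hypothetical illustration of the retry budget for reordered zoned writes. */
static unsigned int zoned_write_retry_budget(bool use_zone_write_lock,
					     bool seq_zoned_write,
					     unsigned int max_retries,
					     unsigned int nr_requests)
{
	unsigned int allowed = max_retries;

	/* Extra retries are only needed when zoned writes may be reordered. */
	if (!use_zone_write_lock && seq_zoned_write)
		allowed += nr_requests;

	return allowed;
}

/* Example: zoned_write_retry_budget(false, true, 5, 32) == 37. */
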
kPUlBgyyZS3Ao78XU7NjCeEyCs3h3h00TtnvObYbMjXg3mK7DpZn5Gs+xhsjFv9TycD5 1/OLUDpUlBGyAnoe321yG7ntaqMSv2lpGgPYhkSRg6IYo8BZlRE4kuHKybQqCZNNhdPK DVnQ== X-Gm-Message-State: AOJu0YyucPi0laLVwYsbQc/PaOyWo9K9rtIk5LOnMSgK4qN3ja3cpPzg lGapff8Gtt+n/zuiB88GOhU= X-Google-Smtp-Source: AGHT+IHGlohMVbM21pVwttuFEXoYy8SCCGWVRtFC7bENvmPZLa62ixuBoEQpD7Zi0X4Nuoe7ZbFVKQ== X-Received: by 2002:a05:6808:499:b0:3b2:e0ac:b85e with SMTP id z25-20020a056808049900b003b2e0acb85emr4609117oid.5.1697651798012; Wed, 18 Oct 2023 10:56:38 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:37 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v13 10/18] scsi: sd_zbc: Only require an I/O scheduler if needed Date: Wed, 18 Oct 2023 10:54:32 -0700 Message-ID: <20231018175602.2148415-11-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org An I/O scheduler that serializes zoned writes is only needed if the SCSI LLD does not preserve the write order. Hence only set ELEVATOR_F_ZBD_SEQ_WRITE if the LLD does not preserve the write order. Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/sd_zbc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index a25215507668..718b31bed878 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -955,7 +955,9 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, u8 buf[SD_BUF_SIZE]) /* The drive satisfies the kernel restrictions: set it up */ blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); - blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE); + if (!q->limits.driver_preserves_write_order) + blk_queue_required_elevator_features(q, + ELEVATOR_F_ZBD_SEQ_WRITE); if (sdkp->zones_max_open == U32_MAX) disk_set_max_open_zones(disk, 0); else From patchwork Wed Oct 18 17:54:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427630 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D409CDB47E for ; Wed, 18 Oct 2023 17:57:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232653AbjJRR47 (ORCPT ); Wed, 18 Oct 2023 13:56:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41668 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232659AbjJRR4s (ORCPT ); Wed, 18 Oct 2023 13:56:48 -0400 Received: from mail-oo1-f48.google.com (mail-oo1-f48.google.com [209.85.161.48]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3B63B12F; Wed, 18 Oct 2023 10:56:40 -0700 (PDT) Received: by mail-oo1-f48.google.com with SMTP id 
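
The sd_zbc hunk above only requests ELEVATOR_F_ZBD_SEQ_WRITE when the LLD does not preserve the write order. For context, an I/O scheduler advertises that capability through its elevator_type; the declaration below is a hypothetical sketch (mq-deadline is the in-tree scheduler that sets this flag), and the header path and field names are assumptions based on the kernels this series targets.

#include <linux/module.h>
#include "elevator.h"	/* block-internal header that declares elevator_type */

/*
 * Hypothetical scheduler declaration: only the feature flag matters here.
 * The .ops callbacks a real scheduler must provide are omitted for brevity.
 */
static struct elevator_type demo_zoned_sched = {
	.elevator_name		= "demo-zoned",
	.elevator_features	= ELEVATOR_F_ZBD_SEQ_WRITE,
	.elevator_owner		= THIS_MODULE,
};

mq-deadline registers such a structure with elv_register(), which is how blk_queue_required_elevator_features() can later restrict which schedulers may be attached to the queue.
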
006d021491bc7-581ed744114so750129eaf.0; Wed, 18 Oct 2023 10:56:40 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651799; x=1698256599; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=035k58Gc0xjZLYCOymyww+e8cIPTj80YBwmkGqX/6yA=; b=eNcF/r2oh/tJYcuiuzSjJhzm35WwfZPAYuX6UZxKpOgwzfYtd95/mqyZm6q0pANpFR ghzhJrwJpBTrwe/LIir/cmpDCYZv4tj8btRypMfYf0teKEkz8MfNSeLcHzRoQ1MblUqR uUBBviUakgjQsTKekBUbuOIG53hkqn2t3Nc3Jc9mYi9nO20rah2KLPduXeyNCWLm9RaH 2dEy27CSJzAeT9jMzSCpPJVEqF3YD94w30csk65VYenIKRyXpBLXog25lhCtAvlgsVOp XtS3MlpPqy+skcfQbaqZglHi74/vSbgZAkL23z9I8P7cTCmQOak2Dkd9P0ikvAhlb1Yw fmWQ== X-Gm-Message-State: AOJu0Yw9Zf/xwpAq1XNL97BxNEc4mNqPWX2H3IPpMRZCEFF+hhQnuySY KUVTxikwXKVrV9iUz2tfYbY= X-Google-Smtp-Source: AGHT+IFwoWBuwBEsyPNdpDkTWGLbT4dcp55zLUuEJdwLdyVStxzW14WP/OWThzFA3AheRDVR8JaA9g== X-Received: by 2002:a05:6358:3a0f:b0:166:f348:a8b3 with SMTP id g15-20020a0563583a0f00b00166f348a8b3mr1954618rwe.28.1697651799305; Wed, 18 Oct 2023 10:56:39 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:38 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. Bottomley" Subject: [PATCH v13 11/18] scsi: scsi_debug: Add the preserves_write_order module parameter Date: Wed, 18 Oct 2023 10:54:33 -0700 Message-ID: <20231018175602.2148415-12-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Zone write locking is not used for zoned devices if the block driver reports that it preserves the order of write commands. Make it easier to test not using zone write locking by adding support for setting the driver_preserves_write_order flag. Acked-by: Douglas Gilbert Cc: Martin K. 
Petersen Cc: Damien Le Moal Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_debug.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 9c0af50501f9..1ea4925d2c2f 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -832,6 +832,7 @@ static int dix_reads; static int dif_errors; /* ZBC global data */ +static bool sdeb_preserves_write_order; static bool sdeb_zbc_in_use; /* true for host-aware and host-managed disks */ static int sdeb_zbc_zone_cap_mb; static int sdeb_zbc_zone_size_mb; @@ -5138,9 +5139,13 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev) static int scsi_debug_slave_alloc(struct scsi_device *sdp) { + struct request_queue *q = sdp->request_queue; + if (sdebug_verbose) pr_info("slave_alloc <%u %u %u %llu>\n", sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); + if (sdeb_preserves_write_order) + q->limits.driver_preserves_write_order = true; return 0; } @@ -5755,6 +5760,8 @@ module_param_named(statistics, sdebug_statistics, bool, S_IRUGO | S_IWUSR); module_param_named(strict, sdebug_strict, bool, S_IRUGO | S_IWUSR); module_param_named(submit_queues, submit_queues, int, S_IRUGO); module_param_named(poll_queues, poll_queues, int, S_IRUGO); +module_param_named(preserves_write_order, sdeb_preserves_write_order, bool, + S_IRUGO); module_param_named(tur_ms_to_ready, sdeb_tur_ms_to_ready, int, S_IRUGO); module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO); module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO); @@ -5812,6 +5819,8 @@ MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)"); MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)"); MODULE_PARM_DESC(no_rwlock, "don't protect user data reads+writes (def=0)"); MODULE_PARM_DESC(no_uld, "stop ULD (e.g. 
sd driver) attaching (def=0))"); +MODULE_PARM_DESC(preserves_write_order, + "Whether or not to inform the block layer that this driver preserves the order of WRITE commands (def=0)"); MODULE_PARM_DESC(num_parts, "number of partitions(def=0)"); MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)"); MODULE_PARM_DESC(opt_blks, "optimal transfer length in blocks (def=1024)"); From patchwork Wed Oct 18 17:54:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427631 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EF60DCDB482 for ; Wed, 18 Oct 2023 17:57:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232321AbjJRR5B (ORCPT ); Wed, 18 Oct 2023 13:57:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41714 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232704AbjJRR4s (ORCPT ); Wed, 18 Oct 2023 13:56:48 -0400 Received: from mail-oo1-f49.google.com (mail-oo1-f49.google.com [209.85.161.49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D0621A6; Wed, 18 Oct 2023 10:56:41 -0700 (PDT) Received: by mail-oo1-f49.google.com with SMTP id 006d021491bc7-581de3e691dso1096105eaf.3; Wed, 18 Oct 2023 10:56:41 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651801; x=1698256601; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mKw3Aw2p8++zyZ3GJZp8ItEvankLwq1zpCMHoyF24eQ=; b=VESfGr/N7ft37N+UQvXDE8btjtsKagrVvlOHvbbYPKBvgkIovLL0zbL88J6TylXfUx vP+l4XLCCO6VD1o84oXlZb8wdZsX82xlt0a9aOrJu84LlJK7an6sPYnpmXbi6Oo45xgO 9CbYRcEdjTqkZ0MdoLNSl7eU495pmMbeq3g5YCk6gHA02jCRtF+droDEg7pr6LsC95Uf vNN8g5snqiQwnDpjOBQczhenj6rw0X2gfiyUt4wuaMcm5uTccf2kD7aqo//lyEIVRf6p 14KDAeWGARddFeDdMpPsbLqqnsMIpQZakeNBwVHG1ux4nhrAUgvxGmgo5x5k1LJ6RwzN AI6Q== X-Gm-Message-State: AOJu0YxmAwZMV75ptHvHebyH0Y906+oNilOUe0ykFA7EGv99kE+Qndar jfjTiVT4w5DUYr779dDCuyEHNCNugCQ= X-Google-Smtp-Source: AGHT+IG0M1Qz+gPTY6LGiMPiNVNAssdBWcdwyBRChmCyK5k5PN9sBxiRKCJe5Va+iv6h7JLmT82MlA== X-Received: by 2002:a05:6359:29c1:b0:168:9f53:9d67 with SMTP id qf1-20020a05635929c100b001689f539d67mr329801rwb.20.1697651800637; Wed, 18 Oct 2023 10:56:40 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:40 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , Douglas Gilbert , Damien Le Moal , Ming Lei , "James E.J. 
Bottomley" Subject: [PATCH v13 12/18] scsi: scsi_debug: Support injecting unaligned write errors Date: Wed, 18 Oct 2023 10:54:34 -0700 Message-ID: <20231018175602.2148415-13-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Allow user space software, e.g. a blktests test, to inject unaligned write errors. Acked-by: Douglas Gilbert Reviewed-by: Damien Le Moal Cc: Martin K. Petersen Cc: Christoph Hellwig Cc: Ming Lei Signed-off-by: Bart Van Assche --- drivers/scsi/scsi_debug.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 1ea4925d2c2f..164e82c218ff 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -181,6 +181,7 @@ static const char *sdebug_version_date = "20210520"; #define SDEBUG_OPT_NO_CDB_NOISE 0x4000 #define SDEBUG_OPT_HOST_BUSY 0x8000 #define SDEBUG_OPT_CMD_ABORT 0x10000 +#define SDEBUG_OPT_UNALIGNED_WRITE 0x20000 #define SDEBUG_OPT_ALL_NOISE (SDEBUG_OPT_NOISE | SDEBUG_OPT_Q_NOISE | \ SDEBUG_OPT_RESET_NOISE) #define SDEBUG_OPT_ALL_INJECTING (SDEBUG_OPT_RECOVERED_ERR | \ @@ -188,7 +189,8 @@ static const char *sdebug_version_date = "20210520"; SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR | \ SDEBUG_OPT_SHORT_TRANSFER | \ SDEBUG_OPT_HOST_BUSY | \ - SDEBUG_OPT_CMD_ABORT) + SDEBUG_OPT_CMD_ABORT | \ + SDEBUG_OPT_UNALIGNED_WRITE) #define SDEBUG_OPT_RECOV_DIF_DIX (SDEBUG_OPT_RECOVERED_ERR | \ SDEBUG_OPT_DIF_ERR | SDEBUG_OPT_DIX_ERR) @@ -3587,6 +3589,14 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) struct sdeb_store_info *sip = devip2sip(devip, true); u8 *cmd = scp->cmnd; + if (unlikely(sdebug_opts & SDEBUG_OPT_UNALIGNED_WRITE && + atomic_read(&sdeb_inject_pending))) { + atomic_set(&sdeb_inject_pending, 0); + mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE, + UNALIGNED_WRITE_ASCQ); + return check_condition_result; + } + switch (cmd[0]) { case WRITE_16: ei_lba = 0; From patchwork Wed Oct 18 17:54:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427632 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F418CDB483 for ; Wed, 18 Oct 2023 17:57:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231864AbjJRR5K (ORCPT ); Wed, 18 Oct 2023 13:57:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54504 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232586AbjJRR4v (ORCPT ); Wed, 18 Oct 2023 13:56:51 -0400 Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com [209.85.210.171]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBD5812B; Wed, 18 Oct 2023 10:56:49 -0700 (PDT) Received: by mail-pf1-f171.google.com with SMTP id d2e1a72fcca58-6934202b8bdso6196655b3a.1; Wed, 18 Oct 2023 10:56:49 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651809; x=1698256609; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=XeTP9UxMC9xuYvRY29/KP6o+hE37UxPqFnd4pYMhYhI=; b=ZnYt8jBcP6MivTni4NP3Ccv5upl30fyYHLTTDgotrXJS6LTM2ooovOajOjeHHx00jF q8uQ+vm/C+aP8G5SnQAG0oB4UM+KEOfNxcYH3UTLGyVE5+13F9GNUkZBg5y/i0sYjk4r Mg6mHyf6ibJyPCB6kGnyHzbKrAcOKc6BJfqtG8TEP6b9J1ie5iAYOTZ1f+orLZ/dsh7u 6S+cYAqfcKrE3kinF5GQTt0BkqI5+qxypbcxaIqhqU+GD51cg1eW/T/c52xSKo95zAo+ nvEXk/5T2xEcu4J069QFSg1GkIgPJkp+gJTSxzfx7SV0EMRtJSLFELEP89w22DY976Vc gGqA== X-Gm-Message-State: AOJu0YzKxslAhp0cjN+oOClbynPgEZ8Ords+CLiT2wJfJPPOOsM2x5Yo ZCPh/OI/tqXikAkp3IVtaxc= X-Google-Smtp-Source: AGHT+IGPPsi4hLwOqm8sKAT84KTTE+joZI3ncYitA1DGN1iNK/jJh5qeljzIhxxuItGT0qvIosVVLg== X-Received: by 2002:a05:6a00:10c6:b0:6b6:e147:717 with SMTP id d6-20020a056a0010c600b006b6e1470717mr6816277pfu.23.1697651809048; Wed, 18 Oct 2023 10:56:49 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.56.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:56:48 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Krzysztof Kozlowski , Keoseong Park Subject: [PATCH v13 13/18] scsi: ufs: hisi: Rework the code that disables auto-hibernation Date: Wed, 18 Oct 2023 10:54:35 -0700 Message-ID: <20231018175602.2148415-14-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org The host driver link startup callback is called indirectly by ufshcd_probe_hba(). That function applies the auto-hibernation settings by writing hba->ahit into the auto-hibernation control register. Simplify the code for disabling auto-hibernation by setting hba->ahit instead of writing into the auto-hibernation control register. This patch is part of an effort to move all auto-hibernation register changes into the UFSHCI driver core. Reviewed-by: Bao D. Nguyen Cc: Martin K. 
Petersen Cc: Can Guo Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/host/ufs-hisi.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c index 5b3060cd0ab8..f2ec687121bb 100644 --- a/drivers/ufs/host/ufs-hisi.c +++ b/drivers/ufs/host/ufs-hisi.c @@ -142,7 +142,6 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) struct ufs_hisi_host *host = ufshcd_get_variant(hba); int err; uint32_t value; - uint32_t reg; /* Unipro VS_mphy_disable */ ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0xD0C1, 0x0), 0x1); @@ -232,9 +231,7 @@ static int ufs_hisi_link_startup_pre_change(struct ufs_hba *hba) ufshcd_writel(hba, UFS_HCLKDIV_NORMAL_VALUE, UFS_REG_HCLKDIV); /* disable auto H8 */ - reg = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER); - reg = reg & (~UFS_AHIT_AH8ITV_MASK); - ufshcd_writel(hba, reg, REG_AUTO_HIBERNATE_IDLE_TIMER); + hba->ahit = 0; /* Unipro PA_Local_TX_LCC_Enable */ ufshcd_disable_host_tx_lcc(hba); From patchwork Wed Oct 18 17:54:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427633 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 91B25CDB482 for ; Wed, 18 Oct 2023 17:57:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232677AbjJRR5Z (ORCPT ); Wed, 18 Oct 2023 13:57:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43772 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232684AbjJRR5F (ORCPT ); Wed, 18 Oct 2023 13:57:05 -0400 Received: from mail-oo1-f48.google.com (mail-oo1-f48.google.com [209.85.161.48]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BECAC18D; Wed, 18 Oct 2023 10:57:03 -0700 (PDT) Received: by mail-oo1-f48.google.com with SMTP id 006d021491bc7-57babef76deso3939892eaf.0; Wed, 18 Oct 2023 10:57:03 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651823; x=1698256623; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=V/rTfHgytuRXp+ZwQl4c9m7yojfusAtRvmfOp3Vdh2A=; b=PKssruaCJYU+sm5hYzbZIIfunT08ud//Y1844Qd/E0zZahrywkd2n+ltWK2DmfAMq/ Uj6vgpPiqio+wASumMfi/+p1jUq5Wc9gZHU2Rih0NSsyRenwEju9nJ1if90WBS9d99+z qFhGF9WgH/EcsRd0VMD6+hmjc3ztgf33VhyGpc9hJV30GgseWqwqLs+IcA7f7RovRwQN FyyW5WUQgPjO/8jaES4fxomS4GrnB6f25W6zPQmm/FiQmO2LUn354agUkRy3Kg8QTU6E Ry5yupwp/yqe/7qZb2h0v5o8fZgjObFxVkTcYYuNNEVWfigwCULfKhdV2Mar90FDKiPT aVgQ== X-Gm-Message-State: AOJu0Yx4ByIUe3jDHLHqV5j7jfuP6amAQx87oyEnRQ1LBCe+m9ju8nmA J5XSOFltfN3slomsj/kyaYM= X-Google-Smtp-Source: AGHT+IEosmTKJLu0U+WpRb6oNiszu/1t0EL43NLQlnSe1oitt49MhMz6fOX//BLMUX8z+8Re6zf9Cw== X-Received: by 2002:a05:6359:6b8a:b0:164:8252:260c with SMTP id ta10-20020a0563596b8a00b001648252260cmr5616822rwb.8.1697651822835; Wed, 18 Oct 2023 10:57:02 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.57.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:57:02 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: 
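
The hba->ahit = 0 assignment above disables auto-hibernation because the idle-timer field of that register value is zero, so later writes of hba->ahit into REG_AUTO_HIBERNATE_IDLE_TIMER keep the feature off. The snippet below restates that check; it is illustrative rather than driver code, and it assumes UFSHCI_AHIBERN8_TIMER_MASK from the UFSHCI header that later patches in this series also use.

#include <linux/bitfield.h>
#include <linux/types.h>
#include <ufs/ufshci.h>

/* Illustrative: auto-hibernation is effectively off when the timer is 0. */
static bool demo_ah8_enabled(u32 ahit)
{
	return FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, ahit) != 0;
}
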
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Manivannan Sadhasivam , Asutosh Das , Bean Huo , Arthur Simchaev , Po-Wen Kao , Eric Biggers , Keoseong Park Subject: [PATCH v13 14/18] scsi: ufs: Rename ufshcd_auto_hibern8_enable() and make it static Date: Wed, 18 Oct 2023 10:54:36 -0700 Message-ID: <20231018175602.2148415-15-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Rename ufshcd_auto_hibern8_enable() into ufshcd_configure_auto_hibern8() since this function can enable or disable auto-hibernation. Since ufshcd_auto_hibern8_enable() is only used inside the UFSHCI driver core, declare it static. Additionally, move the definition of this function to just before its first caller. Suggested-by: Bao D. Nguyen Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 24 +++++++++++------------- include/ufs/ufshcd.h | 1 - 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index c2df07545f96..38e79ce05545 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4305,6 +4305,14 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) +{ + if (!ufshcd_is_auto_hibern8_supported(hba)) + return; + + ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); +} + void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; @@ -4324,21 +4332,13 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba) -{ - if (!ufshcd_is_auto_hibern8_supported(hba)) - return; - - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); -} - /** * ufshcd_init_pwr_info - setting the POR (power on reset) * values in hba power info @@ -8757,8 +8757,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) if (hba->ee_usr_mask) ufshcd_write_ee_control(hba); - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); out: spin_lock_irqsave(hba->host->host_lock, flags); @@ -9744,8 +9743,7 @@ static int __ufshcd_wl_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); } - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); + ufshcd_configure_auto_hibern8(hba); goto out; diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 7d07b256e906..fceef91d186e 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1356,7 +1356,6 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_enable(struct ufs_hba *hba); void 
ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); From patchwork Wed Oct 18 17:54:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427634 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60008CDB47E for ; Wed, 18 Oct 2023 17:57:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232538AbjJRR5x (ORCPT ); Wed, 18 Oct 2023 13:57:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232761AbjJRR5h (ORCPT ); Wed, 18 Oct 2023 13:57:37 -0400 Received: from mail-pf1-f177.google.com (mail-pf1-f177.google.com [209.85.210.177]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 66766112; Wed, 18 Oct 2023 10:57:32 -0700 (PDT) Received: by mail-pf1-f177.google.com with SMTP id d2e1a72fcca58-6b20577ef7bso4468925b3a.3; Wed, 18 Oct 2023 10:57:32 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651851; x=1698256651; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SlC0BoedbyGdgzOhgrB/N96VnR6DBEr2m3fzKGpr6oQ=; b=nkJAvDImWxkFw8cKf/gAE6/nqCYD6FLdNixVJySvsNBTLfP3PJKbyRQhaOqXJJqBNR BX2gRTpkXf3YXHTjGxqNWFXsuYIGcamOLUR3kb3MhEu2NR6589WdVSGKYgFlVQwXa4dx Eew7IdJRmpVBzNVS6t+n/gs422jwh/7OdXkoIZXJyz3It/ltKvJXv2hV+FIlB66E/W9I hY94q5CXQle8cnUJJ+Eh27deODLEwKJlES8RhOqXMDF/PhF4u+LloLs/AE5mkdCD21rD yAm9e83LSLYnhKqztwH9NPxU4lh9GqAVoKb8rqYyEp9aeh5qlK5XI2MNaFE+gmhRXmdf QvNA== X-Gm-Message-State: AOJu0YzW2rGjQbcVrULeVyVP71UwDZ2Yx652mjPck4rxhzLzuoETaMzF vvXdDfSIXTLpS6DoNj79uvM= X-Google-Smtp-Source: AGHT+IGjesUfB53pu6vQMcCxZ3Op2MJPL98hOFKojEeVsi6W5cAS+D3Shh022RYWcgilQwSAUsugoQ== X-Received: by 2002:a05:6a00:1891:b0:68e:43ed:d30b with SMTP id x17-20020a056a00189100b0068e43edd30bmr6578242pfh.21.1697651851342; Wed, 18 Oct 2023 10:57:31 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.57.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:57:31 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Peter Wang , Avri Altman , "James E.J. 
Bottomley" , Matthias Brugger , Bean Huo , Arthur Simchaev , Lu Hongfei , Stanley Chu , Manivannan Sadhasivam , Asutosh Das , zhanghui , Po-Wen Kao , Eric Biggers , Keoseong Park Subject: [PATCH v13 15/18] scsi: ufs: Change the return type of ufshcd_auto_hibern8_update() Date: Wed, 18 Oct 2023 10:54:37 -0700 Message-ID: <20231018175602.2148415-16-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org A later patch will introduce an error path in ufshcd_auto_hibern8_update(). Change the return type of that function before introducing calls to that function in the host drivers such that the host drivers only have to be modified once. Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Reviewed-by: Peter Wang Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufs-sysfs.c | 2 +- drivers/ufs/core/ufshcd-priv.h | 1 - drivers/ufs/core/ufshcd.c | 6 ++++-- include/ufs/ufshcd.h | 2 +- 4 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c index c95906443d5f..a1554eac9bbc 100644 --- a/drivers/ufs/core/ufs-sysfs.c +++ b/drivers/ufs/core/ufs-sysfs.c @@ -203,7 +203,7 @@ static ssize_t auto_hibern8_store(struct device *dev, goto out; } - ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); + ret = ufshcd_auto_hibern8_update(hba, ufshcd_us_to_ahit(timer)); out: up(&hba->host_sem); diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index f42d99ce5bf1..de8e891da36a 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -60,7 +60,6 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector, u32 *attr_val); int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag, struct cq_entry *cqe); int ufshcd_mcq_init(struct ufs_hba *hba); diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 38e79ce05545..d3718fbe51b9 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4313,13 +4313,13 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); } -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { unsigned long flags; bool update = false; if (!ufshcd_is_auto_hibern8_supported(hba)) - return; + return 0; spin_lock_irqsave(hba->host->host_lock, flags); if (hba->ahit != ahit) { @@ -4336,6 +4336,8 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + + return 0; } EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index fceef91d186e..8fd95a5d5538 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1356,7 +1356,7 @@ static inline int ufshcd_disable_host_tx_lcc(struct ufs_hba *hba) return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0); } -void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); +int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); void 
ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups); #define SD_ASCII_STD true From patchwork Wed Oct 18 17:54:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427635 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F34B8C41513 for ; Wed, 18 Oct 2023 17:58:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232818AbjJRR6E (ORCPT ); Wed, 18 Oct 2023 13:58:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42600 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344716AbjJRR5r (ORCPT ); Wed, 18 Oct 2023 13:57:47 -0400 Received: from mail-ot1-f43.google.com (mail-ot1-f43.google.com [209.85.210.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1D3C3D61; Wed, 18 Oct 2023 10:57:40 -0700 (PDT) Received: by mail-ot1-f43.google.com with SMTP id 46e09a7af769-6cd0964a994so481996a34.0; Wed, 18 Oct 2023 10:57:40 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651860; x=1698256660; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BSMTtopnpnoVFRhHmUt+zlCmNcQ7/eLzIQS1hLfLLj0=; b=Q/WDLEoayRxaPF+2+7p9H97FaITlZ1LQ/EdiUTNSYBeziOfKWonP8YR7DyYuVeHfLl 83ut7AmPl07O86OuSP3lmcpqLCpEZd5EUfSHWqmCj6xufgM65EV6+qgwoklhiczRFgri P2v7wGr+LGMdo5AanisZ3jsDoEjavpL40KYql40zZ8vzy6B8GVZJXEN6oZgg7uZHqPjD QZW3aoQuFLb5mO0DaphVqQZd+99PYKAIQY7nydrfF/M/ET7xFdE2vFGRM57LPTM9iSkO 4q5V66uWlBxDUNOwRGgrHb28rVmKbFOOy0WvII3SSzIz1si/nemnwT7Da2TrTumbS4Hq 0aew== X-Gm-Message-State: AOJu0YzaVUPhf6yTb0e3MCCXAH6DuVqIJUTql6ghwUJv47cJh+YlV/CS hZa+y1pdf0sZE8aiK159N+I= X-Google-Smtp-Source: AGHT+IHs5W+Ov+53lF41ZQq4E0YEOZHaed5S3JFQx6klUUVo+9s2Do5fmUG/HZ2Dxn7ULE8mPFLwlw== X-Received: by 2002:a05:6830:2691:b0:6bf:2d48:f064 with SMTP id l17-20020a056830269100b006bf2d48f064mr7417403otu.13.1697651859925; Wed, 18 Oct 2023 10:57:39 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.57.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:57:39 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Manivannan Sadhasivam , Asutosh Das , Bean Huo , Arthur Simchaev Subject: [PATCH v13 16/18] scsi: ufs: Simplify ufshcd_auto_hibern8_update() Date: Wed, 18 Oct 2023 10:54:38 -0700 Message-ID: <20231018175602.2148415-17-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Calls to ufshcd_auto_hibern8_update() are already serialized: this function is either called if user space software is not running (preparing to suspend) or from a single sysfs store callback function. 
Kernfs serializes sysfs .store() callbacks. No functionality is changed. This patch makes the next patch in this series easier to read. Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index d3718fbe51b9..3fc33794ce1f 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4315,21 +4315,13 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { - unsigned long flags; - bool update = false; + const u32 cur_ahit = READ_ONCE(hba->ahit); - if (!ufshcd_is_auto_hibern8_supported(hba)) + if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return 0; - spin_lock_irqsave(hba->host->host_lock, flags); - if (hba->ahit != ahit) { - hba->ahit = ahit; - update = true; - } - spin_unlock_irqrestore(hba->host->host_lock, flags); - - if (update && - !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { + WRITE_ONCE(hba->ahit, ahit); + if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); ufshcd_hold(hba); ufshcd_configure_auto_hibern8(hba); From patchwork Wed Oct 18 17:54:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427636 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2EE2CDB482 for ; Wed, 18 Oct 2023 17:58:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231992AbjJRR6J (ORCPT ); Wed, 18 Oct 2023 13:58:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57020 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235164AbjJRR5z (ORCPT ); Wed, 18 Oct 2023 13:57:55 -0400 Received: from mail-ot1-f48.google.com (mail-ot1-f48.google.com [209.85.210.48]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6EA561B8; Wed, 18 Oct 2023 10:57:49 -0700 (PDT) Received: by mail-ot1-f48.google.com with SMTP id 46e09a7af769-6c4fa1c804bso4853288a34.2; Wed, 18 Oct 2023 10:57:49 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651868; x=1698256668; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MaWb7DT0DEjxXHAAOV/ImKft4gO+Lr1J7bQB17+rCQ0=; b=UWgwP6sG2TZ7rKFmd16Zy3oKYSpdq6yN2sttHBb+xjxYrly7+jog0mIsiW0JdzvUWs oWfJmseulH2lO/95TUhhbAo8EiAkdbl4aPeILzufQC3+vheGCPSFOm2FvXqdDLAQI9dR wKj1LVPZUnsCuGfZhw0C04Yn/kE7ojFbNQwJxQVs7Gnc8zv4jUfGnhsLHVjn86QdTlrN zc1TBSilGltnvjHKWXaoaF3sa0Lye6jxV1GLewUkvUdG71XigvZ44/mOv8FAn6HnDwK5 9g+bG372dVvu6s2Lf4r+ADTMDVpP/uhXf0U0QafO0AIfRVzSCMcrQb2LzglUVmp9rmk1 nFCA== X-Gm-Message-State: AOJu0YyAeSmhmrodhF05iC3D1CUf20NKVXKNe5A6p1ByIcMhfe7gsMvr r2KbIyGJj40Dtq5p9dQX50I= X-Google-Smtp-Source: AGHT+IFzhtJE6VnGzfEA/GOfRWyYhtuBrkeeBo0c6H8+EafyHfTbNU1IhEKTF6SRj+mPd1aLpvwKXQ== X-Received: by 2002:a05:6871:cc5:b0:1ea:3210:3b5d with SMTP id vg5-20020a0568710cc500b001ea32103b5dmr106750oab.40.1697651868478; Wed, 18 Oct 2023 10:57:48 -0700 (PDT) Received: from 
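
Because the ufshcd_auto_hibern8_update() path above now has exactly one serialized writer, the spinlock can be replaced by WRITE_ONCE()/READ_ONCE(): the lock was only needed to order concurrent writers, while marked accesses are sufficient to publish a word-sized value to concurrent readers. A minimal sketch of that single-writer pattern, using a hypothetical field rather than hba->ahit:

#include <linux/compiler.h>
#include <linux/types.h>

static u32 demo_val;	/* hypothetical; stands in for hba->ahit */

/* Callers are serialized, so this is the only writer of demo_val. */
static int demo_update(u32 new_val)
{
	if (READ_ONCE(demo_val) == new_val)
		return 0;

	WRITE_ONCE(demo_val, new_val);
	return 0;
}
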
bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.57.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:57:48 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v13 17/18] scsi: ufs: Forbid auto-hibernation without I/O scheduler Date: Wed, 18 Oct 2023 10:54:39 -0700 Message-ID: <20231018175602.2148415-18-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org UFSHCI controllers in legacy mode do not preserve the write order if auto-hibernation is enabled. If the write order is not preserved, an I/O scheduler is required to serialize zoned writes. Hence do not allow auto-hibernation to be enabled without I/O scheduler if a zoned logical unit is present and if the controller is operating in legacy mode. This patch has been tested with the following shell script: show_ah8() { echo -n "auto_hibern8: " adb shell "cat /sys/devices/platform/13200000.ufs/auto_hibern8" } set_ah8() { local rc adb shell "echo $1 > /sys/devices/platform/13200000.ufs/auto_hibern8" rc=$? show_ah8 return $rc } set_iosched() { adb shell "echo $1 >/sys/class/block/$zoned_bdev/queue/scheduler && echo -n 'I/O scheduler: ' && cat /sys/class/block/sde/queue/scheduler" } adb root zoned_bdev=$(adb shell grep -lvw 0 /sys/class/block/sd*/queue/chunk_sectors |& sed 's|/sys/class/block/||g;s|/queue/chunk_sectors||g') [ -n "$zoned_bdev" ] show_ah8 set_ah8 0 set_iosched none if set_ah8 150000; then echo "Error: enabled AH8 without I/O scheduler" fi set_iosched mq-deadline set_ah8 150000 Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 60 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 3fc33794ce1f..0a21ea9d7576 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4305,6 +4305,30 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba) } EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit); +static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, + bool preserves_write_order) +{ + struct scsi_device *sdev; + + if (!preserves_write_order) { + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + /* + * Refuse to enable auto-hibernation if no I/O scheduler + * is present. This code does not check whether the + * attached I/O scheduler serializes zoned writes + * (ELEVATOR_F_ZBD_SEQ_WRITE) because this cannot be + * checked from outside the block layer core. 
+ */ + if (blk_queue_is_zoned(q) && !q->elevator) + return -EPERM; + } + } + + return 0; +} + static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) { if (!ufshcd_is_auto_hibern8_supported(hba)) @@ -4313,13 +4337,42 @@ static void ufshcd_configure_auto_hibern8(struct ufs_hba *hba) ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); } +/** + * ufshcd_auto_hibern8_update() - Modify the auto-hibernation control register + * @hba: per-adapter instance + * @ahit: New auto-hibernate settings. Includes the scale and the value of the + * auto-hibernation timer. See also the UFSHCI_AHIBERN8_TIMER_MASK and + * UFSHCI_AHIBERN8_SCALE_MASK constants. + * + * Notes: + * - UFSHCI controllers do not preserve the command order in legacy mode + * if auto-hibernation is enabled. If the command order is not preserved, an + * I/O scheduler that serializes zoned writes (mq-deadline) is required if a + * zoned logical unit is present. Enabling auto-hibernation without attaching + * the mq-deadline scheduler first may cause unaligned write errors for the + * zoned logical unit if a zoned logical unit is present. + * - Calls of this function must be serialized. + */ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) { const u32 cur_ahit = READ_ONCE(hba->ahit); + bool prev_state, new_state; + int ret; if (!ufshcd_is_auto_hibern8_supported(hba) || cur_ahit == ahit) return 0; + prev_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, cur_ahit); + new_state = FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, ahit); + + if (!is_mcq_enabled(hba) && !prev_state && new_state) { + /* + * Auto-hibernation will be enabled for legacy UFSHCI mode. + */ + ret = ufshcd_update_preserves_write_order(hba, false); + if (ret) + return ret; + } WRITE_ONCE(hba->ahit, ahit); if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { ufshcd_rpm_get_sync(hba); @@ -4328,6 +4381,13 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) ufshcd_release(hba); ufshcd_rpm_put_sync(hba); } + if (!is_mcq_enabled(hba) && prev_state && !new_state) { + /* + * Auto-hibernation has been disabled. 
+ */ + ret = ufshcd_update_preserves_write_order(hba, true); + WARN_ON_ONCE(ret); + } return 0; } From patchwork Wed Oct 18 17:54:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 13427637 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DFCFCDB484 for ; Wed, 18 Oct 2023 17:58:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232819AbjJRR6U (ORCPT ); Wed, 18 Oct 2023 13:58:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232875AbjJRR6C (ORCPT ); Wed, 18 Oct 2023 13:58:02 -0400 Received: from mail-pf1-f181.google.com (mail-pf1-f181.google.com [209.85.210.181]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5EA5CD4B; Wed, 18 Oct 2023 10:57:58 -0700 (PDT) Received: by mail-pf1-f181.google.com with SMTP id d2e1a72fcca58-6b1ef786b7fso5233458b3a.3; Wed, 18 Oct 2023 10:57:58 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1697651878; x=1698256678; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BUqdMkk9eHE4OExeCBttL2effDTy7rv0XrVt/f4A/lc=; b=dSs09uy3M4sjDEx/UCDmTO0RV56W4qdPFewu/wUZHus6qNrT6cE/oC4jEP5FqP9ypU ZmpJag2jTonE+sJi8+KF0b8aUIN6Xnjf6DQxRFYRerPLnopisgvR0YeYkcr+Vs+yg2Ox RDoarSKzv5dend5PtSbPoRYToootocYypXMESr6beM2w092lAtFI9/vdSjpDvLwnoJty 8sg08UhInZMZhWmRF16rB3NEy0k/vqw7sIywMqwQDc8eYkWhVZI4V5+F/cFIr7jxWCsN iF7UhMz17G4DpUogCX17iyBLoKYKv7Sj5nxVl7jk6tABzS1+qbv6f1JgqOjOzRTn4Dj4 R38g== X-Gm-Message-State: AOJu0YwHgA8ISFPOJvXSvH4bfDey/oqDASZylkB/krzKySZ0eSpKgavM 4xEoWg7tn78w2qAG4kagalk= X-Google-Smtp-Source: AGHT+IHoo9hDh1Ld+Voh0eiiyRtsWXSc8vXxrWBFuJ2vjiYo5wptm2ZFae1U0GOo6GffyxKAM422LA== X-Received: by 2002:a05:6a00:18a3:b0:6b8:a6d6:f51a with SMTP id x35-20020a056a0018a300b006b8a6d6f51amr6167914pfh.31.1697651877530; Wed, 18 Oct 2023 10:57:57 -0700 (PDT) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:66c1:dd00:1e1e:add3]) by smtp.gmail.com with ESMTPSA id p15-20020aa7860f000000b00690cd981652sm3628612pfn.61.2023.10.18.10.57.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Oct 2023 10:57:57 -0700 (PDT) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, "Martin K . Petersen" , Christoph Hellwig , Bart Van Assche , "Bao D . Nguyen" , Can Guo , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , Asutosh Das , Arthur Simchaev Subject: [PATCH v13 18/18] scsi: ufs: Inform the block layer about write ordering Date: Wed, 18 Oct 2023 10:54:40 -0700 Message-ID: <20231018175602.2148415-19-bvanassche@acm.org> X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org> References: <20231018175602.2148415-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org From the UFSHCI 4.0 specification, about the legacy (single queue) mode: "The host controller always process transfer requests in-order according to the order submitted to the list. 
In case of multiple commands with single doorbell register ringing (batch mode), The dispatch order for these transfer requests by host controller will base on their index in the List. A transfer request with lower index value will be executed before a transfer request with higher index value." From the UFSHCI 4.0 specification, about the MCQ mode: "Command Submission 1. Host SW writes an Entry to SQ 2. Host SW updates SQ doorbell tail pointer Command Processing 3. After fetching the Entry, Host Controller updates SQ doorbell head pointer 4. Host controller sends COMMAND UPIU to UFS device" In other words, for both legacy and MCQ mode, UFS controllers are required to forward commands to the UFS device in the order these commands have been received from the host. Notes: - For legacy mode this is only correct if the host submits one command at a time. The UFS driver does this. - Also in legacy mode, the command order is not preserved if auto-hibernation is enabled in the UFS controller. Hence, enable zone write locking if auto-hibernation is enabled. This patch improves performance as follows on my test setup: - With the mq-deadline scheduler: 2.5x more IOPS for small writes. - When not using an I/O scheduler compared to using mq-deadline with zone locking: 4x more IOPS for small writes. Reviewed-by: Bao D. Nguyen Reviewed-by: Can Guo Cc: Martin K. Petersen Cc: Avri Altman Signed-off-by: Bart Van Assche --- drivers/ufs/core/ufshcd.c | 24 ++++++++++++++++++++++-- 1 file changed, 22 insertions(+), 2 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 0a21ea9d7576..b1bed617e8a2 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -4325,6 +4325,20 @@ static int ufshcd_update_preserves_write_order(struct ufs_hba *hba, return -EPERM; } } + shost_for_each_device(sdev, hba->host) + blk_freeze_queue_start(sdev->request_queue); + shost_for_each_device(sdev, hba->host) { + struct request_queue *q = sdev->request_queue; + + blk_mq_freeze_queue_wait(q); + q->limits.driver_preserves_write_order = preserves_write_order; + blk_queue_required_elevator_features(q, + !preserves_write_order && blk_queue_is_zoned(q) ? + ELEVATOR_F_ZBD_SEQ_WRITE : 0); + if (q->disk) + disk_set_zoned(q->disk, q->limits.zoned); + blk_mq_unfreeze_queue(q); + } return 0; } @@ -4367,7 +4381,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) if (!is_mcq_enabled(hba) && !prev_state && new_state) { /* - * Auto-hibernation will be enabled for legacy UFSHCI mode. + * Auto-hibernation will be enabled for legacy UFSHCI mode. Tell + * the block layer that write requests may be reordered. */ ret = ufshcd_update_preserves_write_order(hba, false); if (ret) @@ -4383,7 +4398,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) } if (!is_mcq_enabled(hba) && prev_state && !new_state) { /* - * Auto-hibernation has been disabled. + * Auto-hibernation has been disabled. Tell the block layer that + * the order of write requests is preserved. 
*/ ret = ufshcd_update_preserves_write_order(hba, true); WARN_ON_ONCE(ret); @@ -5151,6 +5167,10 @@ static int ufshcd_slave_configure(struct scsi_device *sdev) struct ufs_hba *hba = shost_priv(sdev->host); struct request_queue *q = sdev->request_queue; + q->limits.driver_preserves_write_order = + !ufshcd_is_auto_hibern8_supported(hba) || + FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) == 0; + blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT) blk_queue_update_dma_alignment(q, SZ_4K - 1);
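
The two conditions assigned in ufshcd_slave_configure() above can be read as a single rule: a UFS host preserves the write order exactly when auto-hibernation is either unsupported or currently disabled. A hypothetical helper restating that rule (not part of the patch, which open-codes the expression):

#include <linux/bitfield.h>
#include <ufs/ufshcd.h>
#include <ufs/ufshci.h>

/* Hypothetical restatement of the rule applied in ufshcd_slave_configure(). */
static bool demo_preserves_write_order(struct ufs_hba *hba)
{
	return !ufshcd_is_auto_hibern8_supported(hba) ||
	       FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) == 0;
}
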