From patchwork Fri Aug 11 21:35:43 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13351483
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 "Martin K. Petersen", Christoph Hellwig, Bart Van Assche, Can Guo,
 Avri Altman, Damien Le Moal, Ming Lei, "James E.J. Bottomley",
 Stanley Chu, Bean Huo, Asutosh Das, "Bao D. Nguyen", Arthur Simchaev
Subject: [PATCH v8 9/9] scsi: ufs: Inform the block layer about write ordering
Date: Fri, 11 Aug 2023 14:35:43 -0700
Message-ID: <20230811213604.548235-10-bvanassche@acm.org>
In-Reply-To: <20230811213604.548235-1-bvanassche@acm.org>
References: <20230811213604.548235-1-bvanassche@acm.org>

From the UFSHCI 4.0 specification, about the legacy (single queue) mode:
"The host controller always process transfer requests in-order according
to the order submitted to the list. In case of multiple commands with
single doorbell register ringing (batch mode), The dispatch order for
these transfer requests by host controller will base on their index in
the List. A transfer request with lower index value will be executed
before a transfer request with higher index value."

From the UFSHCI 4.0 specification, about the MCQ mode:
"Command Submission
1. Host SW writes an Entry to SQ
2. Host SW updates SQ doorbell tail pointer

Command Processing
3. After fetching the Entry, Host Controller updates SQ doorbell head
   pointer
4. Host controller sends COMMAND UPIU to UFS device"

In other words, in both legacy and MCQ mode, UFS controllers are
required to forward commands to the UFS device in the order in which
these commands were received from the host.

Notes:
- For legacy mode this only holds if the host submits one command at a
  time. The UFS driver does this.
- Also in legacy mode, the command order is not preserved if
  auto-hibernation is enabled in the UFS controller. Hence, keep zone
  write locking enabled while auto-hibernation is enabled.

On my test setup this patch improves performance as follows:
- With the mq-deadline scheduler: 2.5x more IOPS for small writes.
- Without an I/O scheduler, compared to mq-deadline with zone write
  locking: 4x more IOPS for small writes.

Cc: Martin K. Petersen
Cc: Can Guo
Cc: Avri Altman
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
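Note for reviewers, below the fold on purpose so it stays out of the
commit message: the in-order guarantee quoted above can be illustrated
with a toy model of an MCQ submission queue. This is a standalone
userspace sketch, not UFSHCI or kernel code, and every name in it is
invented for illustration. The controller consumes entries strictly in
index order between the head and tail doorbell pointers, which is why
commands reach the device in submission order:

/* Toy model of an MCQ submission queue; illustration only. */
#include <stdio.h>

#define SQ_DEPTH 8

struct sq_entry { int cmd_tag; };

static struct sq_entry sq[SQ_DEPTH];
static unsigned int sq_head, sq_tail;	/* model doorbell registers */

/* Spec steps 1-2: host SW writes an entry, then updates the tail
 * pointer. No full-queue check; the demo submits only three entries. */
static void host_submit(int tag)
{
	sq[sq_tail % SQ_DEPTH].cmd_tag = tag;
	sq_tail++;
}

/* Spec steps 3-4: the controller fetches entries in index order,
 * advances the head pointer and sends one COMMAND UPIU per entry. */
static void controller_process(void)
{
	while (sq_head != sq_tail) {
		printf("COMMAND UPIU for tag %d\n",
		       sq[sq_head % SQ_DEPTH].cmd_tag);
		sq_head++;
	}
}

int main(void)
{
	host_submit(1);
	host_submit(2);
	host_submit(3);
	controller_process();	/* prints tags 1, 2, 3: submission order */
	return 0;
}

Any C compiler will do; the program prints the three tags in submission
order, mirroring the numbered steps quoted from the spec.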
 drivers/ufs/core/ufshcd.c | 35 ++++++++++++++++++++++++++++++++++-
 1 file changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index ae7b868f9c26..71cee10c75ad 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -4337,23 +4337,48 @@ int ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
 }
 EXPORT_SYMBOL_GPL(ufshcd_uic_hibern8_exit);
 
+static void ufshcd_update_preserves_write_order(struct ufs_hba *hba,
+						bool preserves_write_order)
+{
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host)
+		blk_freeze_queue_start(sdev->request_queue);
+	shost_for_each_device(sdev, hba->host) {
+		struct request_queue *q = sdev->request_queue;
+
+		blk_mq_freeze_queue_wait(q);
+		q->limits.driver_preserves_write_order = preserves_write_order;
+		blk_mq_unfreeze_queue(q);
+	}
+}
+
 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 {
 	unsigned long flags;
-	bool update = false;
+	bool prev_state, new_state, update = false;
 
 	if (!ufshcd_is_auto_hibern8_supported(hba))
 		return;
 
 	spin_lock_irqsave(hba->host->host_lock, flags);
+	prev_state = ufshcd_is_auto_hibern8_enabled(hba);
 	if (hba->ahit != ahit) {
 		hba->ahit = ahit;
 		update = true;
 	}
+	new_state = ufshcd_is_auto_hibern8_enabled(hba);
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
 
 	if (!update)
 		return;
 
+	if (!is_mcq_enabled(hba) && !prev_state && new_state) {
+		/*
+		 * Auto-hibernation will be enabled for legacy UFSHCI mode. Tell
+		 * the block layer that write requests may be reordered.
+		 */
+		ufshcd_update_preserves_write_order(hba, false);
+	}
 	if (!pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) {
 		ufshcd_rpm_get_sync(hba);
 		ufshcd_hold(hba);
@@ -4361,6 +4386,13 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 		ufshcd_release(hba);
 		ufshcd_rpm_put_sync(hba);
 	}
+	if (!is_mcq_enabled(hba) && prev_state && !new_state) {
+		/*
+		 * Auto-hibernation has been disabled. Tell the block layer that
+		 * the order of write requests is preserved.
+		 */
+		ufshcd_update_preserves_write_order(hba, true);
+	}
 }
 EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update);
 
@@ -5140,6 +5172,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 
 	ufshcd_hpb_configure(hba, sdev);
 
+	q->limits.driver_preserves_write_order = true;
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
 		blk_queue_update_dma_alignment(q, SZ_4K - 1);
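
One more note on the implementation: ufshcd_update_preserves_write_order()
walks the SCSI device list twice on purpose. The first loop only starts a
freeze on each request queue; the second loop waits for the freeze to
complete, updates the limit and unfreezes. Starting every freeze before
waiting on any of them lets all queues drain in parallel, whereas a single
freeze-wait-unfreeze loop would drain them one at a time. The idiom in
isolation looks like this (a sketch only; update_limit_on_all_queues() is
a made-up name, the block layer calls are the real ones used above):

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Hypothetical helper showing the two-phase freeze idiom. */
static void update_limit_on_all_queues(struct Scsi_Host *shost, bool value)
{
	struct scsi_device *sdev;

	/* Phase 1: kick off all freezes so the queues drain in parallel. */
	shost_for_each_device(sdev, shost)
		blk_freeze_queue_start(sdev->request_queue);

	/* Phase 2: wait for each drain, update the limit, unfreeze. */
	shost_for_each_device(sdev, shost) {
		struct request_queue *q = sdev->request_queue;

		blk_mq_freeze_queue_wait(q);
		q->limits.driver_preserves_write_order = value;
		blk_mq_unfreeze_queue(q);
	}
}

Freezing guarantees that no requests are in flight while
driver_preserves_write_order changes, so the submission path never
observes a half-applied update.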