From patchwork Wed Oct 18 17:54:40 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13427637
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    "Martin K . Petersen", Christoph Hellwig, Bart Van Assche,
    "Bao D . Nguyen", Can Guo, Avri Altman, "James E.J. Bottomley",
    Stanley Chu, Bean Huo, Asutosh Das, Arthur Simchaev
Subject: [PATCH v13 18/18] scsi: ufs: Inform the block layer about write ordering
Date: Wed, 18 Oct 2023 10:54:40 -0700
Message-ID: <20231018175602.2148415-19-bvanassche@acm.org>
X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog
In-Reply-To: <20231018175602.2148415-1-bvanassche@acm.org>
References: <20231018175602.2148415-1-bvanassche@acm.org>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-block@vger.kernel.org

From the UFSHCI 4.0 specification, about the legacy (single queue) mode:

"The host controller always process transfer requests in-order according
to the order submitted to the list. In case of multiple commands with
single doorbell register ringing (batch mode), The dispatch order for
these transfer requests by host controller will base on their index in
the List. A transfer request with lower index value will be executed
before a transfer request with higher index value."

From the UFSHCI 4.0 specification, about the MCQ mode:

"Command Submission
1. Host SW writes an Entry to SQ
2. Host SW updates SQ doorbell tail pointer

Command Processing
3. After fetching the Entry, Host Controller updates SQ doorbell head
   pointer
4. Host controller sends COMMAND UPIU to UFS device"

In other words, for both legacy and MCQ mode, UFS controllers are required
to forward commands to the UFS device in the order these commands have
been received from the host.

Notes:
- For legacy mode this is only correct if the host submits one command at
  a time. The UFS driver does this.
- Also in legacy mode, the command order is not preserved if
  auto-hibernation is enabled in the UFS controller. Hence, enable zone
  write locking if auto-hibernation is enabled.

This patch improves performance as follows on my test setup:
- With the mq-deadline scheduler: 2.5x more IOPS for small writes.
- When not using an I/O scheduler compared to using mq-deadline with zone
  locking: 4x more IOPS for small writes.
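The ordering decision described above boils down to a single predicate on
the controller state. The helper below is only an illustrative sketch,
written as if it lived in drivers/ufs/core/ufshcd.c so that the symbols it
uses are in scope; its name is made up here, while
ufshcd_is_auto_hibern8_supported(), UFSHCI_AHIBERN8_TIMER_MASK and
hba->ahit are taken from the diff further down:

/*
 * Illustrative sketch, not part of this patch: the write order is
 * preserved as long as auto-hibernation is either unsupported or
 * disabled (idle timer field of the AHIT register is zero).
 */
static bool ufs_preserves_write_order(struct ufs_hba *hba)
{
	return !ufshcd_is_auto_hibern8_supported(hba) ||
	       FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) == 0;
}

This is the value that the ufshcd_slave_configure() hunk below assigns to
q->limits.driver_preserves_write_order when a logical unit is configured.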
Reviewed-by: Bao D. Nguyen
Reviewed-by: Can Guo
Cc: Martin K. Petersen
Cc: Avri Altman
Signed-off-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 0a21ea9d7576..b1bed617e8a2 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -4325,6 +4325,20 @@ static int ufshcd_update_preserves_write_order(struct ufs_hba *hba,
 			return -EPERM;
 		}
 	}
+	shost_for_each_device(sdev, hba->host)
+		blk_freeze_queue_start(sdev->request_queue);
+	shost_for_each_device(sdev, hba->host) {
+		struct request_queue *q = sdev->request_queue;
+
+		blk_mq_freeze_queue_wait(q);
+		q->limits.driver_preserves_write_order = preserves_write_order;
+		blk_queue_required_elevator_features(q,
+			!preserves_write_order && blk_queue_is_zoned(q) ?
+			ELEVATOR_F_ZBD_SEQ_WRITE : 0);
+		if (q->disk)
+			disk_set_zoned(q->disk, q->limits.zoned);
+		blk_mq_unfreeze_queue(q);
+	}
 
 	return 0;
 }
@@ -4367,7 +4381,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 
 	if (!is_mcq_enabled(hba) && !prev_state && new_state) {
 		/*
-		 * Auto-hibernation will be enabled for legacy UFSHCI mode.
+		 * Auto-hibernation will be enabled for legacy UFSHCI mode. Tell
+		 * the block layer that write requests may be reordered.
 		 */
 		ret = ufshcd_update_preserves_write_order(hba, false);
 		if (ret)
@@ -4383,7 +4398,8 @@ int ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 	}
 	if (!is_mcq_enabled(hba) && prev_state && !new_state) {
 		/*
-		 * Auto-hibernation has been disabled.
+		 * Auto-hibernation has been disabled. Tell the block layer that
+		 * the order of write requests is preserved.
 		 */
 		ret = ufshcd_update_preserves_write_order(hba, true);
 		WARN_ON_ONCE(ret);
@@ -5151,6 +5167,10 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 	struct ufs_hba *hba = shost_priv(sdev->host);
 	struct request_queue *q = sdev->request_queue;
 
+	q->limits.driver_preserves_write_order =
+		!ufshcd_is_auto_hibern8_supported(hba) ||
+		FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) == 0;
+
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
 		blk_queue_update_dma_alignment(q, SZ_4K - 1);
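For reference, the queue-update pattern added by the first hunk can be
reduced to the sketch below. It is not part of the patch (the wrapper name
is hypothetical), and it omits the error handling and the disk_set_zoned()
refresh of the real code. It only shows the two-pass freeze, which starts
all freezes before waiting on any of them so that the freezes can proceed
in parallel, and the rule that mq-deadline's zone write locking is only
required for zoned logical units whose write order is not preserved:

/*
 * Condensed sketch of ufshcd_update_preserves_write_order(): change the
 * queue limit and the required elevator features only while no requests
 * are in flight on the frozen queue.
 */
static void ufs_set_preserves_write_order(struct ufs_hba *hba, bool preserved)
{
	struct scsi_device *sdev;

	/* Start all freezes first so they make progress in parallel. */
	shost_for_each_device(sdev, hba->host)
		blk_freeze_queue_start(sdev->request_queue);

	shost_for_each_device(sdev, hba->host) {
		struct request_queue *q = sdev->request_queue;

		blk_mq_freeze_queue_wait(q);
		q->limits.driver_preserves_write_order = preserved;
		blk_queue_required_elevator_features(q,
			!preserved && blk_queue_is_zoned(q) ?
			ELEVATOR_F_ZBD_SEQ_WRITE : 0);
		blk_mq_unfreeze_queue(q);
	}
}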