From patchwork Tue Mar 2 13:24:54 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112179
From: Avri Altman
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v5 01/10] scsi: ufshpb: Cache HPB Control mode on init Date: Tue, 2 Mar 2021 15:24:54 +0200 Message-Id: <20210302132503.224670-2-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com> References: <20210302132503.224670-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org We will use it later, when we'll need to differentiate between device and host control modes. Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshcd.h | 2 ++ drivers/scsi/ufs/ufshpb.c | 8 +++++--- drivers/scsi/ufs/ufshpb.h | 2 ++ 3 files changed, 9 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 3ea7e88f5bff..2d589ee18875 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -656,6 +656,7 @@ struct ufs_hba_variant_params { * @hpb_disabled: flag to check if HPB is disabled * @max_hpb_single_cmd: maximum size of single HPB command * @is_legacy: flag to check HPB 1.0 + * @control_mode: either host or device */ struct ufshpb_dev_info { int num_lu; @@ -665,6 +666,7 @@ struct ufshpb_dev_info { bool hpb_disabled; int max_hpb_single_cmd; bool is_legacy; + u8 control_mode; }; #endif diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index f89714a9785c..d9ea0cddc3c4 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -1624,6 +1624,9 @@ static void ufshpb_lu_parameter_init(struct ufs_hba *hba, % (hpb->srgn_mem_size / HPB_ENTRY_SIZE); hpb->pages_per_srgn = DIV_ROUND_UP(hpb->srgn_mem_size, PAGE_SIZE); + + if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) + hpb->is_hcm = true; } static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) @@ -2308,11 +2311,10 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) { struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; int version, ret; - u8 hpb_mode; u32 max_hpb_sigle_cmd = 0; - hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; - if (hpb_mode == HPB_HOST_CONTROL) { + hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; + if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) { dev_err(hba->dev, "%s: host control mode is not supported.\n", __func__); hpb_dev_info->hpb_disabled = true; diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 88f424250dd9..14b7ba9bda3a 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -227,6 +227,8 @@ struct ufshpb_lu { u32 entries_per_srgn_shift; u32 pages_per_srgn; + bool is_hcm; + struct ufshpb_stats stats; struct ufshpb_params params; From patchwork Tue Mar 2 13:24:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12112187 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: 
From: Avri Altman
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v5 02/10] scsi: ufshpb: Add host control mode support to rsp_upiu Date: Tue, 2 Mar 2021 15:24:55 +0200 Message-Id: <20210302132503.224670-3-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com> References: <20210302132503.224670-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In device control mode, the device may recommend the host to either activate or inactivate a region, and the host should follow. Meaning those are not actually recommendations, but more of instructions. On the contrary, in host control mode, the recommendation protocol is slightly changed: a) The device may only recommend the host to update a subregion of an already-active region. And, b) The device may *not* recommend to inactivate a region. Furthermore, in host control mode, the host may choose not to follow any of the device's recommendations. However, in case of a recommendation to update an active and clean subregion, it is better to follow those recommendation because otherwise the host has no other way to know that some internal relocation took place. Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 34 +++++++++++++++++++++++++++++++++- drivers/scsi/ufs/ufshpb.h | 2 ++ 2 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index d9ea0cddc3c4..044fec9854a0 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -166,6 +166,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, else set_bit_len = cnt; + set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + if (rgn->rgn_state != HPB_RGN_INACTIVE && srgn->srgn_state == HPB_SRGN_VALID) bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len); @@ -235,6 +237,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, return false; } +static inline bool is_rgn_dirty(struct ufshpb_region *rgn) +{ + return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); +} + static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb, struct ufshpb_map_ctx *mctx, int pos, int len, u64 *ppn_buf) @@ -717,6 +724,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, struct ufshpb_subregion *srgn) { + struct ufshpb_region *rgn; u32 num_entries = hpb->entries_per_srgn; if (!srgn->mctx) { @@ -730,6 +738,10 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, num_entries = hpb->last_srgn_entries; bitmap_zero(srgn->mctx->ppn_dirty, num_entries); + + rgn = hpb->rgn_tbl + srgn->rgn_idx; + clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + return 0; } @@ -1257,6 +1269,18 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, srgn_i = be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn); + rgn = hpb->rgn_tbl + rgn_i; + if (hpb->is_hcm && + (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) { + /* + * in host control mode, subregion activation + * recommendations are only allowed to active regions. 
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 34 +++++++++++++++++++++++++++++++++-
 drivers/scsi/ufs/ufshpb.h | 2 ++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index d9ea0cddc3c4..044fec9854a0 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -166,6 +166,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
         else
             set_bit_len = cnt;

+        set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+
         if (rgn->rgn_state != HPB_RGN_INACTIVE &&
             srgn->srgn_state == HPB_SRGN_VALID)
             bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
@@ -235,6 +237,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
     return false;
 }

+static inline bool is_rgn_dirty(struct ufshpb_region *rgn)
+{
+    return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+}
+
 static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb,
                      struct ufshpb_map_ctx *mctx, int pos,
                      int len, u64 *ppn_buf)
@@ -717,6 +724,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
 static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
                      struct ufshpb_subregion *srgn)
 {
+    struct ufshpb_region *rgn;
     u32 num_entries = hpb->entries_per_srgn;

     if (!srgn->mctx) {
@@ -730,6 +738,10 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
         num_entries = hpb->last_srgn_entries;

     bitmap_zero(srgn->mctx->ppn_dirty, num_entries);
+
+    rgn = hpb->rgn_tbl + srgn->rgn_idx;
+    clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+
     return 0;
 }

@@ -1257,6 +1269,18 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
         srgn_i =
             be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn);

+        rgn = hpb->rgn_tbl + rgn_i;
+        if (hpb->is_hcm &&
+            (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) {
+            /*
+             * in host control mode, subregion activation
+             * recommendations are only allowed to active regions.
+             * Also, ignore recommendations for dirty regions - the
+             * host will make decisions concerning those by himself
+             */
+            continue;
+        }
+
         dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
             "activate(%d) region %d - %d\n", i, rgn_i, srgn_i);

@@ -1264,7 +1288,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
         ufshpb_update_active_info(hpb, rgn_i, srgn_i);
         spin_unlock(&hpb->rsp_list_lock);

-        rgn = hpb->rgn_tbl + rgn_i;
         srgn = rgn->srgn_tbl + srgn_i;

         /* blocking HPB_READ */
@@ -1275,6 +1298,14 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
         hpb->stats.rb_active_cnt++;
     }

+    if (hpb->is_hcm) {
+        /*
+         * in host control mode the device is not allowed to inactivate
+         * regions
+         */
+        goto out;
+    }
+
     for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
         rgn_i = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
         dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1299,6 +1330,7 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
         hpb->stats.rb_inactive_cnt++;
     }

+out:
     dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
         rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt);

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 14b7ba9bda3a..8119b1a3d1e5 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -119,6 +119,8 @@ struct ufshpb_region {

     /* below information is used by lru */
     struct list_head list_lru_rgn;
+    unsigned long rgn_flags;
+#define RGN_FLAG_DIRTY 0
 };

 #define for_each_sub_region(rgn, i, srgn) \

From patchwork Tue Mar 2 13:24:56 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112185
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen", linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park, alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang, Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 03/10] scsi: ufshpb: Add region's reads counter
Date: Tue, 2 Mar 2021 15:24:56 +0200
Message-Id: <20210302132503.224670-4-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

In host control mode, reads are the major source of activation trials.
Keep track of those read counters, for both active and inactive
regions.

We reset the read counter upon write - we are only interested in
"clean" reads. Less intuitively, we also reset it upon a region's
deactivation. Region deactivation is often the result of an eviction:
one region becomes active at the expense of another, which happens once
the max-active-regions limit has been crossed. If we did not reset the
counter, a few reads (or even one) to the deactivated region would
trigger a re-activation trial and cause a lot of thrashing of the HPB
database.

Keep those counters normalized, as we use the reads as a comparative
score to make various decisions. If, during consecutive normalizations,
an active region has exhausted its reads - inactivate it.
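A minimal sketch of the counting and normalization scheme described
above (userspace C, illustrative only; ACTIVATION_THRESHOLD mirrors the
value introduced by this patch, everything else is a simplified
stand-in):

#include <stdio.h>

#define ACTIVATION_THRESHOLD 4    /* reads that trigger an activation trial */

struct region {
    unsigned int reads;
    int active;
};

/* A clean read bumps the counter; crossing the threshold asks for activation. */
static int note_read(struct region *rgn)
{
    return ++rgn->reads == ACTIVATION_THRESHOLD;
}

/* A write (or a deactivation) throws away the "clean reads" history. */
static void note_write(struct region *rgn)
{
    rgn->reads = 0;
}

/*
 * Periodic normalization halves every counter so that reads stay a
 * comparative score; an active region whose score decays to zero is an
 * inactivation candidate.
 */
static int normalize(struct region *rgn)
{
    rgn->reads >>= 1;
    return rgn->active && rgn->reads == 0;    /* 1 => inactivate */
}

int main(void)
{
    struct region r = { .reads = 0, .active = 1 };

    for (int i = 0; i < ACTIVATION_THRESHOLD; i++)
        if (note_read(&r))
            printf("activation trial after read %d\n", i + 1);
    note_write(&r);
    printf("inactivate after normalization: %d\n", normalize(&r));
    return 0;
}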
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 102 ++++++++++++++++++++++++++++++++------
 drivers/scsi/ufs/ufshpb.h |   5 ++
 2 files changed, 92 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 044fec9854a0..a8f8d13af21a 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,8 @@
 #include "ufshpb.h"
 #include "../sd.h"

+#define ACTIVATION_THRESHOLD 4 /* 4 IOs */
+
 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
 static mempool_t *ufshpb_mctx_pool;
@@ -554,6 +556,21 @@ static int ufshpb_issue_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd,
     return ret;
 }

+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+                      int srgn_idx)
+{
+    struct ufshpb_region *rgn;
+    struct ufshpb_subregion *srgn;
+
+    rgn = hpb->rgn_tbl + rgn_idx;
+    srgn = rgn->srgn_tbl + srgn_idx;
+
+    list_del_init(&rgn->list_inact_rgn);
+
+    if (list_empty(&srgn->list_act_srgn))
+        list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
 /*
  * This function will set up HPB read command using host-side L2P map data.
  */
@@ -600,12 +617,44 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
         ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
                      transfer_len);
         spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+        if (hpb->is_hcm) {
+            spin_lock_irqsave(&rgn->rgn_lock, flags);
+            rgn->reads = 0;
+            spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+        }
+
         return 0;
     }

     if (!ufshpb_is_support_chunk(hpb, transfer_len))
         return 0;

+    if (hpb->is_hcm) {
+        bool activate = false;
+        /*
+         * in host control mode, reads are the main source for
+         * activation trials.
+         */
+        spin_lock_irqsave(&rgn->rgn_lock, flags);
+        rgn->reads++;
+        if (rgn->reads == ACTIVATION_THRESHOLD)
+            activate = true;
+        spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+        if (activate) {
+            spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+            ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+            hpb->stats.rb_active_cnt++;
+            spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+            dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+                "activate region %d-%d\n", rgn_idx, srgn_idx);
+        }
+
+        /* keep those counters normalized */
+        if (rgn->reads > hpb->entries_per_srgn)
+            schedule_work(&hpb->ufshpb_normalization_work);
+    }
+
     spin_lock_irqsave(&hpb->rgn_state_lock, flags);
     if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
                    transfer_len)) {
@@ -745,21 +794,6 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
     return 0;
 }

-static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
-                      int srgn_idx)
-{
-    struct ufshpb_region *rgn;
-    struct ufshpb_subregion *srgn;
-
-    rgn = hpb->rgn_tbl + rgn_idx;
-    srgn = rgn->srgn_tbl + srgn_idx;
-
-    list_del_init(&rgn->list_inact_rgn);
-
-    if (list_empty(&srgn->list_act_srgn))
-        list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
-}
-
 static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
 {
     struct ufshpb_region *rgn;
@@ -1079,6 +1113,14 @@ static void __ufshpb_evict_region(struct ufshpb_lu *hpb,

     ufshpb_cleanup_lru_info(lru_info, rgn);

+    if (hpb->is_hcm) {
+        unsigned long flags;
+
+        spin_lock_irqsave(&rgn->rgn_lock, flags);
+        rgn->reads = 0;
+        spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+    }
+
     for_each_sub_region(rgn, srgn_idx, srgn)
         ufshpb_purge_active_subregion(hpb, srgn);
 }
@@ -1523,6 +1565,31 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
     spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
 }

+static void ufshpb_normalization_work_handler(struct work_struct *work)
+{
+    struct ufshpb_lu *hpb;
+    int rgn_idx;
+    unsigned long flags;
+
+    hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work);
+
+    for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+        struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
+
+        spin_lock_irqsave(&rgn->rgn_lock, flags);
+        rgn->reads = (rgn->reads >> 1);
+        spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+
+        if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads)
+            continue;
+
+        /* if region is active but has no reads - inactivate it */
+        spin_lock(&hpb->rsp_list_lock);
+        ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+        spin_unlock(&hpb->rsp_list_lock);
+    }
+}
+
 static void ufshpb_map_work_handler(struct work_struct *work)
 {
     struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
@@ -1913,6 +1980,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
     INIT_LIST_HEAD(&hpb->list_hpb_lu);

     INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+    if (hpb->is_hcm)
+        INIT_WORK(&hpb->ufshpb_normalization_work,
+              ufshpb_normalization_work_handler);

     hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
               sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2012,6 +2082,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
+    if (hpb->is_hcm)
+        cancel_work_sync(&hpb->ufshpb_normalization_work);
     cancel_work_sync(&hpb->map_work);
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 8119b1a3d1e5..bd4308010466 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -121,6 +121,10 @@ struct ufshpb_region {
     struct list_head list_lru_rgn;
     unsigned long rgn_flags;
 #define RGN_FLAG_DIRTY 0
+
+    /* region reads - for host mode */
+    spinlock_t rgn_lock;
+    unsigned int reads;
 };

 #define for_each_sub_region(rgn, i, srgn) \
@@ -211,6 +215,7 @@ struct ufshpb_lu {

     /* for selecting victim */
     struct victim_select_info lru_info;
+    struct work_struct ufshpb_normalization_work;

     /* pinned region information */
     u32 lu_pinned_start;

From patchwork Tue Mar 2 13:24:57 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112195
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen", linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park, alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang, Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 04/10] scsi: ufshpb: Make eviction depends on region's reads
Date: Tue, 2 Mar 2021 15:24:57 +0200
Message-Id: <20210302132503.224670-5-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

In host mode, eviction is considered an extreme measure. Verify that
the entering region has enough reads, and that the exiting region has
far fewer reads.
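The two checks described above, as a self-contained sketch
(illustrative only; the thresholds mirror the ones used by this series,
the helpers are stand-ins):

#include <stdbool.h>
#include <stdio.h>

#define ACTIVATION_THRESHOLD    4
#define EVICTION_THRESHOLD    (ACTIVATION_THRESHOLD << 6)    /* 256 reads */

/* The entering region must be hot enough to justify an eviction at all. */
static bool entering_allowed(unsigned int entering_reads)
{
    return entering_reads >= EVICTION_THRESHOLD;
}

/* The victim must be comparatively cold. */
static bool victim_allowed(unsigned int victim_reads)
{
    return victim_reads <= EVICTION_THRESHOLD / 2;
}

int main(void)
{
    printf("enter(300)=%d victim(100)=%d victim(200)=%d\n",
           entering_allowed(300), victim_allowed(100), victim_allowed(200));
    return 0;    /* prints: enter(300)=1 victim(100)=1 victim(200)=0 */
}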
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index a8f8d13af21a..6f4fd22eaf2f 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -17,6 +17,7 @@
 #include "../sd.h"

 #define ACTIVATION_THRESHOLD 4 /* 4 IOs */
+#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 6) /* 256 IOs */

 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
@@ -1050,6 +1051,13 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
         if (ufshpb_check_srgns_issue_state(hpb, rgn))
             continue;

+        /*
+         * in host control mode, verify that the exiting region
+         * has less reads
+         */
+        if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1))
+            continue;
+
         victim_rgn = rgn;
         break;
     }
@@ -1235,7 +1243,7 @@ static int ufshpb_issue_map_req(struct ufshpb_lu *hpb,

 static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 {
-    struct ufshpb_region *victim_rgn;
+    struct ufshpb_region *victim_rgn = NULL;
     struct victim_select_info *lru_info = &hpb->lru_info;
     unsigned long flags;
     int ret = 0;
@@ -1263,6 +1271,16 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
              * because the device could detect this region
              * by not issuing HPB_READ
              */
+
+            /*
+             * in host control mode, verify that the entering
+             * region has enough reads
+             */
+            if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) {
+                ret = -EACCES;
+                goto out;
+            }
+
             victim_rgn = ufshpb_victim_lru_info(hpb);
             if (!victim_rgn) {
                 dev_warn(&hpb->sdev_ufs_lu->sdev_dev,

From patchwork Tue Mar 2 13:24:58 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112189
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen", linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park, alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang, Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 05/10] scsi: ufshpb: Region inactivation in host mode
Date: Tue, 2 Mar 2021 15:24:58 +0200
Message-Id: <20210302132503.224670-6-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

In host mode, the host is expected to send an HPB-WRITE-BUFFER with
buffer-id = 0x1 when it inactivates a region. Use the map-requests
pool, as there is no point in assigning a designated cache for
umap-requests.
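A sketch of the eviction flow this patch adds, in simplified form
(illustrative only; issue_unmap_single() is a hypothetical stand-in for
sending HPB WRITE BUFFER with buffer-id 0x1 through the map-request
machinery):

#include <stdbool.h>
#include <stdio.h>

struct region { int idx; };

/* Pretend to tell the device to drop its copy of one region's L2P map. */
static bool issue_unmap_single(struct region *rgn)
{
    printf("HPB WRITE BUFFER, buffer-id 0x1, region %d\n", rgn->idx);
    return true;    /* assume the device accepted the request */
}

/* In host control mode the device is told first; only then is the region evicted. */
static void evict_region(bool is_hcm, struct region *rgn)
{
    if (is_hcm && !issue_unmap_single(rgn))
        return;        /* keep the region if the request failed */
    printf("region %d removed from the active list\n", rgn->idx);
}

int main(void)
{
    struct region r = { .idx = 7 };

    evict_region(true, &r);
    return 0;
}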
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 14 ++++++++++++++
 drivers/scsi/ufs/ufshpb.h |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 6f4fd22eaf2f..0744feb4d484 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -907,6 +907,7 @@ static int ufshpb_execute_umap_req(struct ufshpb_lu *hpb,
     blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_umap_req_compl_fn);

+    hpb->stats.umap_req_cnt++;
     return 0;
 }

@@ -1103,6 +1104,12 @@ static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb,
     return -EAGAIN;
 }

+static int ufshpb_issue_umap_single_req(struct ufshpb_lu *hpb,
+                    struct ufshpb_region *rgn)
+{
+    return ufshpb_issue_umap_req(hpb, rgn);
+}
+
 static int ufshpb_issue_umap_all_req(struct ufshpb_lu *hpb)
 {
     return ufshpb_issue_umap_req(hpb, NULL);
@@ -1115,6 +1122,10 @@ static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
     struct ufshpb_subregion *srgn;
     int srgn_idx;

+
+    if (hpb->is_hcm && ufshpb_issue_umap_single_req(hpb, rgn))
+        return;
+
     lru_info = &hpb->lru_info;

     dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx);
@@ -1855,6 +1866,7 @@ ufshpb_sysfs_attr_show_func(rb_noti_cnt);
 ufshpb_sysfs_attr_show_func(rb_active_cnt);
 ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
 ufshpb_sysfs_attr_show_func(map_req_cnt);
+ufshpb_sysfs_attr_show_func(umap_req_cnt);

 static struct attribute *hpb_dev_stat_attrs[] = {
     &dev_attr_hit_cnt.attr,
@@ -1863,6 +1875,7 @@ static struct attribute *hpb_dev_stat_attrs[] = {
     &dev_attr_rb_active_cnt.attr,
     &dev_attr_rb_inactive_cnt.attr,
     &dev_attr_map_req_cnt.attr,
+    &dev_attr_umap_req_cnt.attr,
     NULL,
 };

@@ -1978,6 +1991,7 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
     hpb->stats.rb_active_cnt = 0;
     hpb->stats.rb_inactive_cnt = 0;
     hpb->stats.map_req_cnt = 0;
+    hpb->stats.umap_req_cnt = 0;
 }

 static void ufshpb_param_init(struct ufshpb_lu *hpb)
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index bd4308010466..84598a317897 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -186,6 +186,7 @@ struct ufshpb_stats {
     u64 rb_inactive_cnt;
     u64 map_req_cnt;
     u64 pre_req_cnt;
+    u64 umap_req_cnt;
 };

 struct ufshpb_lu {

From patchwork Tue Mar 2 13:24:59 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112193
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen", linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park, alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang, Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 06/10] scsi: ufshpb: Add hpb dev reset response
Date: Tue, 2 Mar 2021 15:24:59 +0200
Message-Id: <20210302132503.224670-7-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

The spec does not define what the host's recommended response is when
the device sends an hpb dev reset response (oper 0x2). We will update
all active hpb regions: mark them, and do the actual update on the next
read.
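The mark-now, update-on-next-read scheme described above, as a small
sketch (illustrative only; the flag plays the role of RGN_FLAG_UPDATE
from this patch):

#include <stdbool.h>
#include <stdio.h>

#define NR_REGIONS 4

struct region {
    bool active;
    bool needs_update;    /* set on dev-reset, consumed on next read */
};

/* On an HPB dev reset response, mark every active region. */
static void on_dev_reset(struct region *tbl, int nr)
{
    for (int i = 0; i < nr; i++)
        if (tbl[i].active)
            tbl[i].needs_update = true;
}

/* On the next read of a marked region, trigger a fresh activation/map request. */
static bool on_read(struct region *rgn)
{
    if (!rgn->needs_update)
        return false;
    rgn->needs_update = false;
    return true;    /* caller queues the region for re-mapping */
}

int main(void)
{
    struct region tbl[NR_REGIONS] = { { true, false }, { false, false },
                      { true, false }, { true, false } };

    on_dev_reset(tbl, NR_REGIONS);
    printf("region 0 re-maps on read: %d\n", on_read(&tbl[0]));
    printf("region 1 re-maps on read: %d\n", on_read(&tbl[1]));
    return 0;
}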
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 47 ++++++++++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshpb.h |  2 ++
 2 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 0744feb4d484..0034fa03fdc6 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -642,7 +642,8 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
         if (rgn->reads == ACTIVATION_THRESHOLD)
             activate = true;
         spin_unlock_irqrestore(&rgn->rgn_lock, flags);
-        if (activate) {
+        if (activate ||
+            test_and_clear_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags)) {
             spin_lock_irqsave(&hpb->rsp_list_lock, flags);
             ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
             hpb->stats.rb_active_cnt++;
@@ -1480,6 +1481,20 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
     case HPB_RSP_DEV_RESET:
         dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
              "UFS device lost HPB information during PM.\n");
+
+        if (hpb->is_hcm) {
+            struct scsi_device *sdev;
+
+            __shost_for_each_device(sdev, hba->host) {
+                struct ufshpb_lu *h = sdev->hostdata;
+
+                if (!h)
+                    continue;
+
+                schedule_work(&hpb->ufshpb_lun_reset_work);
+            }
+        }
+
         break;
     default:
         dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1594,6 +1609,25 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
     spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
 }

+static void ufshpb_reset_work_handler(struct work_struct *work)
+{
+    struct ufshpb_lu *hpb;
+    struct victim_select_info *lru_info;
+    struct ufshpb_region *rgn;
+    unsigned long flags;
+
+    hpb = container_of(work, struct ufshpb_lu, ufshpb_lun_reset_work);
+
+    lru_info = &hpb->lru_info;
+
+    spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+    list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn)
+        set_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags);
+
+    spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+}
+
 static void ufshpb_normalization_work_handler(struct work_struct *work)
 {
     struct ufshpb_lu *hpb;
@@ -1798,6 +1832,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
         } else {
             rgn->rgn_state = HPB_RGN_INACTIVE;
         }
+
+        rgn->rgn_flags = 0;
     }

     return 0;
@@ -2012,9 +2048,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
     INIT_LIST_HEAD(&hpb->list_hpb_lu);

     INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
-    if (hpb->is_hcm)
+    if (hpb->is_hcm) {
         INIT_WORK(&hpb->ufshpb_normalization_work,
               ufshpb_normalization_work_handler);
+        INIT_WORK(&hpb->ufshpb_lun_reset_work,
+              ufshpb_reset_work_handler);
+    }

     hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
               sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2114,8 +2153,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
-    if (hpb->is_hcm)
+    if (hpb->is_hcm) {
+        cancel_work_sync(&hpb->ufshpb_lun_reset_work);
         cancel_work_sync(&hpb->ufshpb_normalization_work);
+    }
     cancel_work_sync(&hpb->map_work);
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 84598a317897..37c1b0ea0c0a 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -121,6 +121,7 @@ struct ufshpb_region {
     struct list_head list_lru_rgn;
     unsigned long rgn_flags;
 #define RGN_FLAG_DIRTY 0
+#define RGN_FLAG_UPDATE 1

     /* region reads - for host mode */
     spinlock_t rgn_lock;
@@ -217,6 +218,7 @@ struct ufshpb_lu {
     /* for selecting victim */
     struct victim_select_info lru_info;
     struct work_struct ufshpb_normalization_work;
+    struct work_struct ufshpb_lun_reset_work;
     /* pinned region information */
     u32 lu_pinned_start;

From patchwork Tue Mar 2 13:25:00 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112191
From: Avri Altman
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v5 07/10] scsi: ufshpb: Add "Cold" regions timer Date: Tue, 2 Mar 2021 15:25:00 +0200 Message-Id: <20210302132503.224670-8-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com> References: <20210302132503.224670-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In order not to hang on to “cold” regions, we shall inactivate a region that has no READ access for a predefined amount of time - READ_TO_MS. For that purpose we shall monitor the active regions list, polling it on every POLLING_INTERVAL_MS. On timeout expiry we shall add the region to the "to-be-inactivated" list, unless it is clean and did not exhaust its READ_TO_EXPIRIES - another parameter. All this does not apply to pinned regions. Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 65 +++++++++++++++++++++++++++++++++++++++ drivers/scsi/ufs/ufshpb.h | 6 ++++ 2 files changed, 71 insertions(+) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 0034fa03fdc6..89a930e72cff 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -18,6 +18,9 @@ #define ACTIVATION_THRESHOLD 4 /* 4 IOs */ #define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 6) /* 256 IOs */ +#define READ_TO_MS 1000 +#define READ_TO_EXPIRIES 100 +#define POLLING_INTERVAL_MS 200 /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -1024,12 +1027,61 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb, return 0; } +static void ufshpb_read_to_handler(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct ufshpb_lu *hpb; + struct victim_select_info *lru_info; + struct ufshpb_region *rgn; + unsigned long flags; + LIST_HEAD(expired_list); + + hpb = container_of(dwork, struct ufshpb_lu, ufshpb_read_to_work); + + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + + lru_info = &hpb->lru_info; + + list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) { + bool timedout = ktime_after(ktime_get(), rgn->read_timeout); + + if (timedout) { + rgn->read_timeout_expiries--; + if (is_rgn_dirty(rgn) || + rgn->read_timeout_expiries == 0) + list_add(&rgn->list_expired_rgn, &expired_list); + else + rgn->read_timeout = ktime_add_ms(ktime_get(), + READ_TO_MS); + } + } + + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + + list_for_each_entry(rgn, &expired_list, list_expired_rgn) { + list_del_init(&rgn->list_expired_rgn); + spin_lock_irqsave(&hpb->rsp_list_lock, flags); + ufshpb_update_inactive_info(hpb, rgn->rgn_idx); + hpb->stats.rb_inactive_cnt++; + spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); + } + + ufshpb_kick_map_work(hpb); + + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); +} + static void ufshpb_add_lru_info(struct victim_select_info *lru_info, struct ufshpb_region *rgn) { rgn->rgn_state = HPB_RGN_ACTIVE; list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); atomic_inc(&lru_info->active_cnt); + if (rgn->hpb->is_hcm) { + rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS); + rgn->read_timeout_expiries = READ_TO_EXPIRIES; + } } static void ufshpb_hit_lru_info(struct 
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 65 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshpb.h |  6 ++++
 2 files changed, 71 insertions(+)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 0034fa03fdc6..89a930e72cff 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -18,6 +18,9 @@

 #define ACTIVATION_THRESHOLD 4 /* 4 IOs */
 #define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 6) /* 256 IOs */
+#define READ_TO_MS 1000
+#define READ_TO_EXPIRIES 100
+#define POLLING_INTERVAL_MS 200

 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
@@ -1024,12 +1027,61 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb,
     return 0;
 }

+static void ufshpb_read_to_handler(struct work_struct *work)
+{
+    struct delayed_work *dwork = to_delayed_work(work);
+    struct ufshpb_lu *hpb;
+    struct victim_select_info *lru_info;
+    struct ufshpb_region *rgn;
+    unsigned long flags;
+    LIST_HEAD(expired_list);
+
+    hpb = container_of(dwork, struct ufshpb_lu, ufshpb_read_to_work);
+
+    spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+    lru_info = &hpb->lru_info;
+
+    list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) {
+        bool timedout = ktime_after(ktime_get(), rgn->read_timeout);
+
+        if (timedout) {
+            rgn->read_timeout_expiries--;
+            if (is_rgn_dirty(rgn) ||
+                rgn->read_timeout_expiries == 0)
+                list_add(&rgn->list_expired_rgn, &expired_list);
+            else
+                rgn->read_timeout = ktime_add_ms(ktime_get(),
+                                 READ_TO_MS);
+        }
+    }
+
+    spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+    list_for_each_entry(rgn, &expired_list, list_expired_rgn) {
+        list_del_init(&rgn->list_expired_rgn);
+        spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+        ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+        hpb->stats.rb_inactive_cnt++;
+        spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+    }
+
+    ufshpb_kick_map_work(hpb);
+
+    schedule_delayed_work(&hpb->ufshpb_read_to_work,
+                  msecs_to_jiffies(POLLING_INTERVAL_MS));
+}
+
 static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
                 struct ufshpb_region *rgn)
 {
     rgn->rgn_state = HPB_RGN_ACTIVE;
     list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
     atomic_inc(&lru_info->active_cnt);
+    if (rgn->hpb->is_hcm) {
+        rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
+        rgn->read_timeout_expiries = READ_TO_EXPIRIES;
+    }
 }

 static void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
@@ -1813,6 +1865,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)

         INIT_LIST_HEAD(&rgn->list_inact_rgn);
         INIT_LIST_HEAD(&rgn->list_lru_rgn);
+        INIT_LIST_HEAD(&rgn->list_expired_rgn);

         if (rgn_idx == hpb->rgns_per_lu - 1) {
             srgn_cnt = ((hpb->srgns_per_lu - 1) %
@@ -1834,6 +1887,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
         }

         rgn->rgn_flags = 0;
+        rgn->hpb = hpb;
     }

     return 0;
@@ -2053,6 +2107,8 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
               ufshpb_normalization_work_handler);
         INIT_WORK(&hpb->ufshpb_lun_reset_work,
               ufshpb_reset_work_handler);
+        INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work,
+                  ufshpb_read_to_handler);
     }

     hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
@@ -2087,6 +2143,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
     ufshpb_stat_init(hpb);
     ufshpb_param_init(hpb);

+    if (hpb->is_hcm)
+        schedule_delayed_work(&hpb->ufshpb_read_to_work,
+                      msecs_to_jiffies(POLLING_INTERVAL_MS));
+
     return 0;

 release_pre_req_mempool:
@@ -2154,6 +2214,7 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
     if (hpb->is_hcm) {
+        cancel_delayed_work_sync(&hpb->ufshpb_read_to_work);
         cancel_work_sync(&hpb->ufshpb_lun_reset_work);
         cancel_work_sync(&hpb->ufshpb_normalization_work);
     }
@@ -2264,6 +2325,10 @@ void ufshpb_resume(struct ufs_hba *hba)
             continue;
         ufshpb_set_state(hpb, HPB_PRESENT);
         ufshpb_kick_map_work(hpb);
+        if (hpb->is_hcm)
+            schedule_delayed_work(&hpb->ufshpb_read_to_work,
+                          msecs_to_jiffies(POLLING_INTERVAL_MS));
+
     }
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 37c1b0ea0c0a..b49e9a34267f 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -109,6 +109,7 @@ struct ufshpb_subregion {
 };

 struct ufshpb_region {
+    struct ufshpb_lu *hpb;
     struct ufshpb_subregion *srgn_tbl;
     enum HPB_RGN_STATE rgn_state;
     int rgn_idx;
@@ -126,6 +127,10 @@ struct ufshpb_region {
     /* region reads - for host mode */
     spinlock_t rgn_lock;
     unsigned int reads;
+    /* region "cold" timer - for host mode */
+    ktime_t read_timeout;
+    unsigned int read_timeout_expiries;
+    struct list_head list_expired_rgn;
 };

 #define for_each_sub_region(rgn, i, srgn) \
@@ -219,6 +224,7 @@ struct ufshpb_lu {
     struct victim_select_info lru_info;
     struct work_struct ufshpb_normalization_work;
     struct work_struct ufshpb_lun_reset_work;
+    struct delayed_work ufshpb_read_to_work;

     /* pinned region information */
     u32 lu_pinned_start;

From patchwork Tue Mar 2 13:25:01 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112177
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen", linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park, alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang, Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 08/10] scsi: ufshpb: Limit the number of inflight map requests
Date: Tue, 2 Mar 2021 15:25:01 +0200
Message-Id: <20210302132503.224670-9-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

In host control mode the host is the originator of map requests. To
avoid flooding the device with map requests, use a simple throttling
mechanism that limits the number of inflight map requests.
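A minimal sketch of the throttle (illustrative only;
THROTTLE_MAP_REQ_DEFAULT mirrors the value added by this patch, and the
locking the driver needs is omitted):

#include <stdbool.h>
#include <stdio.h>

#define THROTTLE_MAP_REQ_DEFAULT 1    /* at most one inflight map request */

struct hpb_lu {
    int num_inflight_map_req;
};

/* Returns false when a new map request has to be deferred. */
static bool get_map_req(struct hpb_lu *hpb)
{
    if (hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT)
        return false;        /* throttled - retried later */
    hpb->num_inflight_map_req++;
    return true;
}

/* Called on completion; frees up a slot. */
static void put_map_req(struct hpb_lu *hpb)
{
    hpb->num_inflight_map_req--;
}

int main(void)
{
    struct hpb_lu lu = { 0 };

    printf("first: %d, second: %d\n", get_map_req(&lu), get_map_req(&lu));
    put_map_req(&lu);
    printf("after a completion: %d\n", get_map_req(&lu));
    return 0;
}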
Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 11 +++++++++++ drivers/scsi/ufs/ufshpb.h | 1 + 2 files changed, 12 insertions(+) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 89a930e72cff..74da69727340 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -21,6 +21,7 @@ #define READ_TO_MS 1000 #define READ_TO_EXPIRIES 100 #define POLLING_INTERVAL_MS 200 +#define THROTTLE_MAP_REQ_DEFAULT 1 /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -750,6 +751,14 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct ufshpb_req *map_req; struct bio *bio; + if (hpb->is_hcm && + hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) { + dev_info(&hpb->sdev_ufs_lu->sdev_dev, + "map_req throttle. inflight %d throttle %d", + hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT); + return NULL; + } + map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN); if (!map_req) return NULL; @@ -764,6 +773,7 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, map_req->rb.srgn_idx = srgn->srgn_idx; map_req->rb.mctx = srgn->mctx; + hpb->num_inflight_map_req++; return map_req; } @@ -773,6 +783,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, { bio_put(map_req->bio); ufshpb_put_req(hpb, map_req); + hpb->num_inflight_map_req--; } static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index b49e9a34267f..d83ab488688a 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -212,6 +212,7 @@ struct ufshpb_lu { struct ufshpb_req *pre_req; int num_inflight_pre_req; int throttle_pre_req; + int num_inflight_map_req; struct list_head lh_pre_req_free; int cur_read_id; int pre_req_min_tr_len;
From patchwork Tue Mar 2 13:25:02 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112175
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 09/10] scsi: ufshpb: Add support for host control mode
Date: Tue, 2 Mar 2021 15:25:02 +0200
Message-Id: <20210302132503.224670-10-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

Support devices that report they are using host control mode.
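For context, a stand-alone sketch of what "supporting" host control mode amounts to in ufshpb_get_dev_info(): the control-mode byte read from the device descriptor is now only cached instead of disabling HPB, and LU init later derives is_hcm from it. The sketch is illustrative only - the struct, helper names and the numeric values of the two mode macros are assumptions for the example, not the driver's actual definitions.

#include <stdbool.h>
#include <stdio.h>

/* Assumed values for the example only; the driver defines these elsewhere. */
#define HPB_HOST_CONTROL   0
#define HPB_DEVICE_CONTROL 1

struct dev_info_sketch {
	unsigned char control_mode;
	bool hpb_disabled;
	bool is_hcm;
};

/* Old behaviour: a host-controlled device disabled HPB entirely. */
static void get_dev_info_before(struct dev_info_sketch *d, unsigned char mode)
{
	d->control_mode = mode;
	if (d->control_mode == HPB_HOST_CONTROL)
		d->hpb_disabled = true;
}

/* New behaviour: the mode is only cached; LU init later sets is_hcm from it. */
static void get_dev_info_after(struct dev_info_sketch *d, unsigned char mode)
{
	d->control_mode = mode;
	d->is_hcm = (d->control_mode == HPB_HOST_CONTROL);
}

int main(void)
{
	struct dev_info_sketch before = { 0 }, after = { 0 };

	get_dev_info_before(&before, HPB_HOST_CONTROL);
	get_dev_info_after(&after, HPB_HOST_CONTROL);
	printf("before: hpb_disabled=%d  after: is_hcm=%d\n",
	       before.hpb_disabled, after.is_hcm);
	return 0;
}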
Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 74da69727340..7b749b95accb 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -2567,12 +2567,6 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) u32 max_hpb_sigle_cmd = 0; hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; - if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) { - dev_err(hba->dev, "%s: host control mode is not supported.\n", - __func__); - hpb_dev_info->hpb_disabled = true; - return; - } version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); if ((version != HPB_SUPPORT_VERSION) &&
From patchwork Tue Mar 2 13:25:03 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12112181
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v5 10/10] scsi: ufshpb: Make host mode parameters configurable
Date: Tue, 2 Mar 2021 15:25:03 +0200
Message-Id: <20210302132503.224670-11-avri.altman@wdc.com>
In-Reply-To: <20210302132503.224670-1-avri.altman@wdc.com>
References: <20210302132503.224670-1-avri.altman@wdc.com>

Use this commit to elaborate some more on the host control mode logic, explaining the role that each and every variable plays. While at it, make those parameters configurable.

Signed-off-by: Avri Altman --- Documentation/ABI/testing/sysfs-driver-ufs | 74 ++++++ drivers/scsi/ufs/ufshpb.c | 290 +++++++++++++++++++-- drivers/scsi/ufs/ufshpb.h | 20 ++ 3 files changed, 368 insertions(+), 16 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs index 0017eaf89cbe..b91a4d018b3e 100644 --- a/Documentation/ABI/testing/sysfs-driver-ufs +++ b/Documentation/ABI/testing/sysfs-driver-ufs @@ -1322,3 +1322,77 @@ Description: This entry shows the maximum HPB data size for using single HPB === ======== The file is read only. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, reads are the major source of activation trials. Once this threshold has been met, the region is added to the "to-be-activated" list. Since we reset the read counter upon write, this includes sending an rb command that updates the region ppn as well. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, we think of the regions as "buckets". Those buckets are filled with reads and emptied on write. We use entries_per_srgn - the number of blocks in a subregion - as our bucket size. This applies because HPB1.0 only concerns single-block reads. Once the bucket size is crossed, we trigger the normalization work - not only to avoid overflow, but mainly because we want to keep those counters normalized, as we use those reads as a comparative score to make various decisions. The normalization divides (shifts right) the read counter by the normalization_factor. If during consecutive normalizations an active region has exhausted its reads - inactivate it.
+ +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter +Date: February 2021 +Contact: Avri Altman +Description: Region deactivation is often due to the fact that eviction took place: a region becomes active at the expense of another. This happens when the max-active-regions limit has been crossed. In host mode, eviction is considered an extreme measure. We want to verify that the entering region has enough reads, and that the exiting region has far fewer reads. eviction_thld_enter is the minimum number of reads that a region must have in order to be considered a candidate for evicting another region. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit +Date: February 2021 +Contact: Avri Altman +Description: Same as above, for the exiting region. A region is considered a candidate for eviction only if it has fewer reads than eviction_thld_exit. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms +Date: February 2021 +Contact: Avri Altman +Description: In order not to hang on to “cold” regions, we shall inactivate a region that has had no READ access for a predefined amount of time - read_timeout_ms. If read_timeout_ms has expired and the region is dirty, it is less likely that we can make any use of HPB-READing it, so we inactivate it. Still, deactivation has its overhead, and we may still benefit from HPB-READing this region if it is clean - see read_timeout_expiries. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries +Date: February 2021 +Contact: Avri Altman +Description: If the region read timeout has expired but the region is clean, just re-wind its timer for another spin. Do that as long as it is clean and has not exhausted its read_timeout_expiries threshold. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms +Date: February 2021 +Contact: Avri Altman +Description: The frequency at which the delayed worker that checks the read_timeouts is awakened. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, the host is the originator of map requests. To avoid flooding the device with map requests, use a simple throttling mechanism that limits the number of inflight map requests. diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 7b749b95accb..dce94a069c85 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -17,7 +17,6 @@ #include "../sd.h" #define ACTIVATION_THRESHOLD 4 /* 4 IOs */ -#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 6) /* 256 IOs */ #define READ_TO_MS 1000 #define READ_TO_EXPIRIES 100 #define POLLING_INTERVAL_MS 200 @@ -643,7 +642,7 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) */ spin_lock_irqsave(&rgn->rgn_lock, flags); rgn->reads++; - if (rgn->reads == ACTIVATION_THRESHOLD) + if (rgn->reads == hpb->params.activation_thld) activate = true; spin_unlock_irqrestore(&rgn->rgn_lock, flags); if (activate || @@ -752,10 +751,11 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct bio *bio; if (hpb->is_hcm && - hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) { + hpb->num_inflight_map_req >= hpb->params.inflight_map_req) { dev_info(&hpb->sdev_ufs_lu->sdev_dev, "map_req throttle. 
inflight %d throttle %d", - hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT); + hpb->num_inflight_map_req, + hpb->params.inflight_map_req); return NULL; } @@ -1045,6 +1045,7 @@ static void ufshpb_read_to_handler(struct work_struct *work) struct victim_select_info *lru_info; struct ufshpb_region *rgn; unsigned long flags; + unsigned int poll; LIST_HEAD(expired_list); hpb = container_of(dwork, struct ufshpb_lu, ufshpb_read_to_work); @@ -1063,7 +1064,7 @@ static void ufshpb_read_to_handler(struct work_struct *work) list_add(&rgn->list_expired_rgn, &expired_list); else rgn->read_timeout = ktime_add_ms(ktime_get(), - READ_TO_MS); + hpb->params.read_timeout_ms); } } @@ -1079,8 +1080,9 @@ static void ufshpb_read_to_handler(struct work_struct *work) ufshpb_kick_map_work(hpb); + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); } static void ufshpb_add_lru_info(struct victim_select_info *lru_info, @@ -1090,8 +1092,11 @@ static void ufshpb_add_lru_info(struct victim_select_info *lru_info, list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); atomic_inc(&lru_info->active_cnt); if (rgn->hpb->is_hcm) { - rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS); - rgn->read_timeout_expiries = READ_TO_EXPIRIES; + rgn->read_timeout = + ktime_add_ms(ktime_get(), + rgn->hpb->params.read_timeout_ms); + rgn->read_timeout_expiries = + rgn->hpb->params.read_timeout_expiries; } } @@ -1120,7 +1125,8 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb) * in host control mode, verify that the exiting region * has less reads */ - if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1)) + if (hpb->is_hcm && + rgn->reads > hpb->params.eviction_thld_exit) continue; victim_rgn = rgn; @@ -1351,7 +1357,8 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) * in host control mode, verify that the entering * region has enough reads */ - if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) { + if (hpb->is_hcm && + rgn->reads < hpb->params.eviction_thld_enter) { ret = -EACCES; goto out; } @@ -1696,14 +1703,16 @@ static void ufshpb_normalization_work_handler(struct work_struct *work) struct ufshpb_lu *hpb; int rgn_idx; unsigned long flags; + u8 factor; hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work); + factor = hpb->params.normalization_factor; for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) { struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx; spin_lock_irqsave(&rgn->rgn_lock, flags); - rgn->reads = (rgn->reads >> 1); + rgn->reads = (rgn->reads >> factor); spin_unlock_irqrestore(&rgn->rgn_lock, flags); if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads) @@ -2022,8 +2031,248 @@ requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RW(requeue_timeout_ms); +ufshpb_sysfs_param_show_func(activation_thld); +static ssize_t +activation_thld_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.activation_thld = val; + + return count; +} +static DEVICE_ATTR_RW(activation_thld); + +ufshpb_sysfs_param_show_func(normalization_factor); +static ssize_t 
+normalization_factor_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > ilog2(hpb->entries_per_srgn)) + return -EINVAL; + + hpb->params.normalization_factor = val; + + return count; +} +static DEVICE_ATTR_RW(normalization_factor); + +ufshpb_sysfs_param_show_func(eviction_thld_enter); +static ssize_t +eviction_thld_enter_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.eviction_thld_exit) + return -EINVAL; + + hpb->params.eviction_thld_enter = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_enter); + +ufshpb_sysfs_param_show_func(eviction_thld_exit); +static ssize_t +eviction_thld_exit_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.activation_thld) + return -EINVAL; + + hpb->params.eviction_thld_exit = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_exit); + +ufshpb_sysfs_param_show_func(read_timeout_ms); +static ssize_t +read_timeout_ms_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* read_timeout >> timeout_polling_interval */ + if (val < hpb->params.timeout_polling_interval_ms * 2) + return -EINVAL; + + hpb->params.read_timeout_ms = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_ms); + +ufshpb_sysfs_param_show_func(read_timeout_expiries); +static ssize_t +read_timeout_expiries_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.read_timeout_expiries = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_expiries); + +ufshpb_sysfs_param_show_func(timeout_polling_interval_ms); +static ssize_t +timeout_polling_interval_ms_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* timeout_polling_interval << read_timeout */ + if (val <= 0 || val > hpb->params.read_timeout_ms / 2) + return -EINVAL; + + 
hpb->params.timeout_polling_interval_ms = val; + + return count; +} +static DEVICE_ATTR_RW(timeout_polling_interval_ms); + +ufshpb_sysfs_param_show_func(inflight_map_req); +static ssize_t inflight_map_req_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > hpb->sdev_ufs_lu->queue_depth - 1) + return -EINVAL; + + hpb->params.inflight_map_req = val; + + return count; +} +static DEVICE_ATTR_RW(inflight_map_req); + + +static void ufshpb_hcm_param_init(struct ufshpb_lu *hpb) +{ + hpb->params.activation_thld = ACTIVATION_THRESHOLD; + hpb->params.normalization_factor = 1; + hpb->params.eviction_thld_enter = (ACTIVATION_THRESHOLD << 6); + hpb->params.eviction_thld_exit = (ACTIVATION_THRESHOLD << 5); + hpb->params.read_timeout_ms = READ_TO_MS; + hpb->params.read_timeout_expiries = READ_TO_EXPIRIES; + hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS; + hpb->params.inflight_map_req = THROTTLE_MAP_REQ_DEFAULT; +} + static struct attribute *hpb_dev_param_attrs[] = { &dev_attr_requeue_timeout_ms.attr, + &dev_attr_activation_thld.attr, + &dev_attr_normalization_factor.attr, + &dev_attr_eviction_thld_enter.attr, + &dev_attr_eviction_thld_exit.attr, + &dev_attr_read_timeout_ms.attr, + &dev_attr_read_timeout_expiries.attr, + &dev_attr_timeout_polling_interval_ms.attr, + &dev_attr_inflight_map_req.attr, NULL, }; @@ -2098,6 +2347,8 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb) static void ufshpb_param_init(struct ufshpb_lu *hpb) { hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS; + if (hpb->is_hcm) + ufshpb_hcm_param_init(hpb); } static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) @@ -2154,9 +2405,13 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) ufshpb_stat_init(hpb); ufshpb_param_init(hpb); - if (hpb->is_hcm) + if (hpb->is_hcm) { + unsigned int poll; + + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); + } return 0; @@ -2336,10 +2591,13 @@ void ufshpb_resume(struct ufs_hba *hba) continue; ufshpb_set_state(hpb, HPB_PRESENT); ufshpb_kick_map_work(hpb); - if (hpb->is_hcm) - schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + if (hpb->is_hcm) { + unsigned int poll = + hpb->params.timeout_polling_interval_ms; + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(poll)); + } } } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index d83ab488688a..91151593faad 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -180,8 +180,28 @@ struct victim_select_info { atomic_t active_cnt; }; +/** + * ufshpb_params - ufs hpb parameters + * @requeue_timeout_ms - requeue threshold of wb command (0x2) + * @activation_thld - min reads [IOs] to activate/update a region + * @normalization_factor - shift right the region's reads + * @eviction_thld_enter - min reads [IOs] for the entering region in eviction + * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction + * @read_timeout_ms - timeout [ms] from the last read IO to the region + * @read_timeout_expiries - amount of allowable timeout expireis + * 
@timeout_polling_interval_ms - frequency at which timeouts are checked + * @inflight_map_req - number of inflight map requests + */ struct ufshpb_params { unsigned int requeue_timeout_ms; + unsigned int activation_thld; + unsigned int normalization_factor; + unsigned int eviction_thld_enter; + unsigned int eviction_thld_exit; + unsigned int read_timeout_ms; + unsigned int read_timeout_expiries; + unsigned int timeout_polling_interval_ms; + unsigned int inflight_map_req; }; struct ufshpb_stats {