From patchwork Mon Jun 7 06:13:50 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12302559
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . 
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 01/12] scsi: ufshpb: Cache HPB Control mode on init Date: Mon, 7 Jun 2021 09:13:50 +0300 Message-Id: <20210607061401.58884-2-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org We will use it later, when we'll need to differentiate between device and host control modes. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshcd.h | 2 ++ drivers/scsi/ufs/ufshpb.c | 8 +++++--- drivers/scsi/ufs/ufshpb.h | 2 ++ 3 files changed, 9 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index d902414e4a6f..ccb1bccf6380 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -654,6 +654,7 @@ struct ufs_hba_variant_params { * @hpb_disabled: flag to check if HPB is disabled * @max_hpb_single_cmd: maximum size of single HPB command * @is_legacy: flag to check HPB 1.0 + * @control_mode: either host or device */ struct ufshpb_dev_info { int num_lu; @@ -663,6 +664,7 @@ struct ufshpb_dev_info { bool hpb_disabled; int max_hpb_single_cmd; bool is_legacy; + u8 control_mode; }; #endif diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 1e60772be40c..d45343a00e9f 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -1614,6 +1614,9 @@ static void ufshpb_lu_parameter_init(struct ufs_hba *hba, % (hpb->srgn_mem_size / HPB_ENTRY_SIZE); hpb->pages_per_srgn = DIV_ROUND_UP(hpb->srgn_mem_size, PAGE_SIZE); + + if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) + hpb->is_hcm = true; } static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) @@ -2305,11 +2308,10 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) { struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; int version, ret; - u8 hpb_mode; u32 max_hpb_single_cmd = HPB_MULTI_CHUNK_LOW; - hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; - if (hpb_mode == HPB_HOST_CONTROL) { + hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; + if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) { dev_err(hba->dev, "%s: host control mode is not supported.\n", __func__); hpb_dev_info->hpb_disabled = true; diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index b1128b0ce486..7df30340386a 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -228,6 +228,8 @@ struct ufshpb_lu { u32 entries_per_srgn_shift; u32 pages_per_srgn; + bool is_hcm; + struct ufshpb_stats stats; struct ufshpb_params params; From patchwork Mon Jun 7 06:13:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no 
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . 
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 02/12] scsi: ufshpb: Add host control mode support to rsp_upiu Date: Mon, 7 Jun 2021 09:13:51 +0300 Message-Id: <20210607061401.58884-3-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In device control mode, the device may recommend the host to either activate or inactivate a region, and the host should follow. Meaning those are not actually recommendations, but more of instructions. On the contrary, in host control mode, the recommendation protocol is slightly changed: a) The device may only recommend the host to update a subregion of an already-active region. And, b) The device may *not* recommend to inactivate a region. Furthermore, in host control mode, the host may choose not to follow any of the device's recommendations. However, in case of a recommendation to update an active and clean subregion, it is better to follow those recommendation because otherwise the host has no other way to know that some internal relocation took place. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 34 +++++++++++++++++++++++++++++++++- drivers/scsi/ufs/ufshpb.h | 2 ++ 2 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index d45343a00e9f..6f2e3b3c9252 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -166,6 +166,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, else set_bit_len = cnt; + set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + if (rgn->rgn_state != HPB_RGN_INACTIVE && srgn->srgn_state == HPB_SRGN_VALID) bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len); @@ -235,6 +237,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, return false; } +static inline bool is_rgn_dirty(struct ufshpb_region *rgn) +{ + return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); +} + static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb, struct ufshpb_map_ctx *mctx, int pos, int len, __be64 *ppn_buf) @@ -712,6 +719,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, struct ufshpb_subregion *srgn) { + struct ufshpb_region *rgn; u32 num_entries = hpb->entries_per_srgn; if (!srgn->mctx) { @@ -725,6 +733,10 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, num_entries = hpb->last_srgn_entries; bitmap_zero(srgn->mctx->ppn_dirty, num_entries); + + rgn = hpb->rgn_tbl + srgn->rgn_idx; + clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + return 0; } @@ -1244,6 +1256,18 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, srgn_i = be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn); + rgn = hpb->rgn_tbl + rgn_i; + if (hpb->is_hcm && + (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) { + /* + * in host control mode, subregion activation + * recommendations are only allowed to active regions. 
+ * Also, ignore recommendations for dirty regions - the + * host will make decisions concerning those by himself + */ + continue; + } + dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "activate(%d) region %d - %d\n", i, rgn_i, srgn_i); @@ -1251,7 +1275,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, ufshpb_update_active_info(hpb, rgn_i, srgn_i); spin_unlock(&hpb->rsp_list_lock); - rgn = hpb->rgn_tbl + rgn_i; srgn = rgn->srgn_tbl + srgn_i; /* blocking HPB_READ */ @@ -1262,6 +1285,14 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, hpb->stats.rb_active_cnt++; } + if (hpb->is_hcm) { + /* + * in host control mode the device is not allowed to inactivate + * regions + */ + goto out; + } + for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) { rgn_i = be16_to_cpu(rsp_field->hpb_inactive_field[i]); dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, @@ -1286,6 +1317,7 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, hpb->stats.rb_inactive_cnt++; } +out: dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n", rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt); diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 7df30340386a..032672114881 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -121,6 +121,8 @@ struct ufshpb_region { /* below information is used by lru */ struct list_head list_lru_rgn; + unsigned long rgn_flags; +#define RGN_FLAG_DIRTY 0 }; #define for_each_sub_region(rgn, i, srgn) \ From patchwork Mon Jun 7 06:13:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05C29C47082 for ; Mon, 7 Jun 2021 06:14:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DB9AF61164 for ; Mon, 7 Jun 2021 06:14:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230237AbhFGGQl (ORCPT ); Mon, 7 Jun 2021 02:16:41 -0400 Received: from esa6.hgst.iphmx.com ([216.71.154.45]:24858 "EHLO esa6.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229993AbhFGGQl (ORCPT ); Mon, 7 Jun 2021 02:16:41 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046491; x=1654582491; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=um+wQRj2JYLUF5LWiN+y2eGDfcfnvZDTGj3nbheNpZs=; b=a1wTQT81/YVIt0ZYYceSMiefpxCYXlcu9hTIoIuY7CTR9K66HXf+6FGj lTDovg9dkh0876K6UQJccvqo0sIGKUe6NxpqjrildylnzYuIlghXuSKoW ao1J9VUn2sHeR9fqcJcXrWxJRiUFVwhSymbxkepNCG2wh5VIk0nKaihMf SwPXc2+IOEgPESAcy22RsSscbv1g8H3zynl7dpgFfCyCe1efnFAS1f2iJ 3opG11UBY69O64Gq3Aq0wmUA9C326kOyINOfF9Z+8+/xv7D78WamPOS/Z zVUjpnk3CVUET2PdXFUQoVhv0TXjJ/jlJnE0f9gjEA2SKZBWBBYCM6reY w==; IronPort-SDR: nh1xG4cHH8VII9sYr9FemQbv1bB/kvQT1KGFo3raqtQ1j/+QexURcG9QJkdngZTxN08U736Wo8 wIeRYOvbT+MyNF/2P+rIOD9z49wFV0J5sD90KxRBuNLBUNStCstEdxaG7P2UEiw0JbHADhnRra 
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v10 03/12] scsi: ufshpb: Transform set_dirty to iterate_rgn
Date: Mon, 7 Jun 2021 09:13:52 +0300
Message-Id: <20210607061401.58884-4-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Given a transfer length, set_dirty meticulously runs over all the entries, across subregions and regions if needed. Currently its only use is to mark dirty blocks, but soon HCM may profit from it as well, when managing its read counters.
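[Editor's note: the sketch below is an illustration added for this archive, not part of the patch. It shows the walk the commit message describes - consuming a transfer of `cnt` entries across subregion and region boundaries. The names iterate_rgn_sketch, entries_per_srgn and srgns_per_rgn are simplified stand-ins for the driver's per-LU geometry, and the loop form replaces the goto-based form used in the actual ufshpb_iterate_rgn().]

/*
 * Sketch: walk `cnt` HPB entries starting at (rgn_idx, srgn_idx, srgn_offset),
 * crossing subregion and region boundaries as needed.
 */
static void iterate_rgn_sketch(int rgn_idx, int srgn_idx, int srgn_offset,
                               int cnt, int entries_per_srgn, int srgns_per_rgn)
{
        while (cnt > 0) {
                /* entries that still fit in the current subregion */
                int len = entries_per_srgn - srgn_offset;

                if (len > cnt)
                        len = cnt;

                /* <act on `len` entries of (rgn_idx, srgn_idx) here> */

                cnt -= len;
                srgn_offset = 0;
                if (++srgn_idx == srgns_per_rgn) {
                        /* crossed into the next region */
                        srgn_idx = 0;
                        rgn_idx++;
                }
        }
}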
Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 6f2e3b3c9252..01a4efa37db8 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -144,13 +144,14 @@ static bool ufshpb_is_hpb_rsp_valid(struct ufs_hba *hba, return true; } -static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, - int srgn_idx, int srgn_offset, int cnt) +static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, + int srgn_offset, int cnt, bool set_dirty) { struct ufshpb_region *rgn; struct ufshpb_subregion *srgn; int set_bit_len; int bitmap_len; + unsigned long flags; next_srgn: rgn = hpb->rgn_tbl + rgn_idx; @@ -166,11 +167,14 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, else set_bit_len = cnt; - set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + if (set_dirty) + set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); - if (rgn->rgn_state != HPB_RGN_INACTIVE && + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + if (set_dirty && rgn->rgn_state != HPB_RGN_INACTIVE && srgn->srgn_state == HPB_SRGN_VALID) bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len); + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); srgn_offset = 0; if (++srgn_idx == hpb->srgns_per_rgn) { @@ -590,10 +594,8 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) /* If command type is WRITE or DISCARD, set bitmap as drity */ if (ufshpb_is_write_or_discard(cmd)) { - spin_lock_irqsave(&hpb->rgn_state_lock, flags); - ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset, - transfer_len); - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset, + transfer_len, true); return 0; } From patchwork Mon Jun 7 06:13:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E9ADC47082 for ; Mon, 7 Jun 2021 06:14:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 719566121D for ; Mon, 7 Jun 2021 06:14:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230252AbhFGGQs (ORCPT ); Mon, 7 Jun 2021 02:16:48 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:4757 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230191AbhFGGQr (ORCPT ); Mon, 7 Jun 2021 02:16:47 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046497; x=1654582497; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/uVpI9cCRF77MVsFcmUIyS2US+7d5GjqQu2LbdJilI4=; b=rqU4QVRQdU8JoZOgR+vQqM4GYr7JcLuFwf3yGc4nhmrZmDKppBd/SCKD UT7cDqKvT4jwjhWjVIBL/Z3JHRtJpbIOKru9ZgQIxpNMsIa+c3XV1bTmQ lekkynvMCe0ykB2UFjR54FT8wp4gFVaNKYV+RPjbjZTraBSTC6aZRjlFh 
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v10 04/12] scsi: ufshpb: Add reads counter
Date: Mon, 7 Jun 2021 09:13:53 +0300
Message-Id: <20210607061401.58884-5-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>
X-Mailing-List: linux-scsi@vger.kernel.org

In host control mode, reads are the major source of activation trials. Keep track of those read counters, for both active and inactive regions. We reset the read counter upon write - we are only interested in "clean" reads. Keep those counters normalized, as we are using those reads as a comparative score to make various decisions. If, during consecutive normalizations, an active region has exhausted its reads - inactivate it. While at it, protect the {active,inactive}_count stats by adding them into the applicable handler.
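[Editor's note: the sketch below is an illustration added for this archive, not part of the patch. It spells out the normalization step described above with made-up stand-in types (rgn_sketch, srgn_sketch); in the actual patch the counters live in struct ufshpb_region / struct ufshpb_subregion, are protected by rgn_lock, and zero-read active regions are queued through ufshpb_update_inactive_info().]

/*
 * Sketch: halve every subregion counter, re-derive the region total, and
 * flag active regions whose total has dropped to zero for inactivation.
 */
struct srgn_sketch { unsigned int reads; };

struct rgn_sketch {
        unsigned int reads;
        int active;
        struct srgn_sketch *srgn_tbl;
        int srgns_per_rgn;
};

static void normalize_reads_sketch(struct rgn_sketch *rgn_tbl, int rgns_per_lu)
{
        for (int i = 0; i < rgns_per_lu; i++) {
                struct rgn_sketch *rgn = &rgn_tbl[i];

                rgn->reads = 0;
                for (int j = 0; j < rgn->srgns_per_rgn; j++) {
                        rgn->srgn_tbl[j].reads >>= 1;   /* decay */
                        rgn->reads += rgn->srgn_tbl[j].reads;
                }
                if (rgn->active && rgn->reads == 0) {
                        /* queue region i on the "to-be-inactivated" list */
                }
        }
}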
Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 94 ++++++++++++++++++++++++++++++++++++--- drivers/scsi/ufs/ufshpb.h | 9 ++++ 2 files changed, 97 insertions(+), 6 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 01a4efa37db8..b080bd9ca35a 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -16,6 +16,8 @@ #include "ufshpb.h" #include "../sd.h" +#define ACTIVATION_THRESHOLD 8 /* 8 IOs */ + /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; static mempool_t *ufshpb_mctx_pool; @@ -26,6 +28,9 @@ static int tot_active_srgn_pages; static struct workqueue_struct *ufshpb_wq; +static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx, + int srgn_idx); + bool ufshpb_is_allowed(struct ufs_hba *hba) { return !(hba->ufshpb_dev.hpb_disabled); @@ -148,7 +153,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, int srgn_offset, int cnt, bool set_dirty) { struct ufshpb_region *rgn; - struct ufshpb_subregion *srgn; + struct ufshpb_subregion *srgn, *prev_srgn = NULL; int set_bit_len; int bitmap_len; unsigned long flags; @@ -167,15 +172,39 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, else set_bit_len = cnt; - if (set_dirty) - set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); - spin_lock_irqsave(&hpb->rgn_state_lock, flags); if (set_dirty && rgn->rgn_state != HPB_RGN_INACTIVE && srgn->srgn_state == HPB_SRGN_VALID) bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len); spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + if (hpb->is_hcm && prev_srgn != srgn) { + bool activate = false; + + spin_lock(&rgn->rgn_lock); + if (set_dirty) { + rgn->reads -= srgn->reads; + srgn->reads = 0; + set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + } else { + srgn->reads++; + rgn->reads++; + if (srgn->reads == ACTIVATION_THRESHOLD) + activate = true; + } + spin_unlock(&rgn->rgn_lock); + + if (activate) { + spin_lock_irqsave(&hpb->rsp_list_lock, flags); + ufshpb_update_active_info(hpb, rgn_idx, srgn_idx); + spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); + dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, + "activate region %d-%d\n", rgn_idx, srgn_idx); + } + + prev_srgn = srgn; + } + srgn_offset = 0; if (++srgn_idx == hpb->srgns_per_rgn) { srgn_idx = 0; @@ -604,6 +633,19 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) WARN_ON_ONCE(transfer_len > HPB_MULTI_CHUNK_HIGH); + if (hpb->is_hcm) { + /* + * in host control mode, reads are the main source for + * activation trials. 
+ */ + ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset, + transfer_len, false); + + /* keep those counters normalized */ + if (rgn->reads > hpb->entries_per_srgn) + schedule_work(&hpb->ufshpb_normalization_work); + } + spin_lock_irqsave(&hpb->rgn_state_lock, flags); if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset, transfer_len)) { @@ -755,6 +797,8 @@ static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx, if (list_empty(&srgn->list_act_srgn)) list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn); + + hpb->stats.rb_active_cnt++; } static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx) @@ -770,6 +814,8 @@ static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx) if (list_empty(&rgn->list_inact_rgn)) list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn); + + hpb->stats.rb_inactive_cnt++; } static void ufshpb_activate_subregion(struct ufshpb_lu *hpb, @@ -1090,6 +1136,7 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) rgn->rgn_idx); goto out; } + if (!list_empty(&rgn->list_lru_rgn)) { if (ufshpb_check_srgns_issue_state(hpb, rgn)) { ret = -EBUSY; @@ -1284,7 +1331,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, if (srgn->srgn_state == HPB_SRGN_VALID) srgn->srgn_state = HPB_SRGN_INVALID; spin_unlock(&hpb->rgn_state_lock); - hpb->stats.rb_active_cnt++; } if (hpb->is_hcm) { @@ -1316,7 +1362,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, } spin_unlock(&hpb->rgn_state_lock); - hpb->stats.rb_inactive_cnt++; } out: @@ -1515,6 +1560,36 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb) spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); } +static void ufshpb_normalization_work_handler(struct work_struct *work) +{ + struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, + ufshpb_normalization_work); + int rgn_idx; + + for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) { + struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx; + int srgn_idx; + + spin_lock(&rgn->rgn_lock); + rgn->reads = 0; + for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) { + struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx; + + srgn->reads >>= 1; + rgn->reads += srgn->reads; + } + spin_unlock(&rgn->rgn_lock); + + if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads) + continue; + + /* if region is active but has no reads - inactivate it */ + spin_lock(&hpb->rsp_list_lock); + ufshpb_update_inactive_info(hpb, rgn->rgn_idx); + spin_unlock(&hpb->rsp_list_lock); + } +} + static void ufshpb_map_work_handler(struct work_struct *work) { struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work); @@ -1673,6 +1748,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) rgn = rgn_table + rgn_idx; rgn->rgn_idx = rgn_idx; + spin_lock_init(&rgn->rgn_lock); + INIT_LIST_HEAD(&rgn->list_inact_rgn); INIT_LIST_HEAD(&rgn->list_lru_rgn); @@ -1912,6 +1989,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) INIT_LIST_HEAD(&hpb->list_hpb_lu); INIT_WORK(&hpb->map_work, ufshpb_map_work_handler); + if (hpb->is_hcm) + INIT_WORK(&hpb->ufshpb_normalization_work, + ufshpb_normalization_work_handler); hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache", sizeof(struct ufshpb_req), 0, 0, NULL); @@ -2011,6 +2091,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb) static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb) { + if (hpb->is_hcm) + 
cancel_work_sync(&hpb->ufshpb_normalization_work); cancel_work_sync(&hpb->map_work); } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 032672114881..87495e59fcf1 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -106,6 +106,10 @@ struct ufshpb_subregion { int rgn_idx; int srgn_idx; bool is_last; + + /* subregion reads - for host mode */ + unsigned int reads; + /* below information is used by rsp_list */ struct list_head list_act_srgn; }; @@ -123,6 +127,10 @@ struct ufshpb_region { struct list_head list_lru_rgn; unsigned long rgn_flags; #define RGN_FLAG_DIRTY 0 + + /* region reads - for host mode */ + spinlock_t rgn_lock; + unsigned int reads; }; #define for_each_sub_region(rgn, i, srgn) \ @@ -212,6 +220,7 @@ struct ufshpb_lu { /* for selecting victim */ struct victim_select_info lru_info; + struct work_struct ufshpb_normalization_work; /* pinned region information */ u32 lu_pinned_start; From patchwork Mon Jun 7 06:13:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302567 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CF79CC47082 for ; Mon, 7 Jun 2021 06:15:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B8E6461164 for ; Mon, 7 Jun 2021 06:15:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230127AbhFGGQz (ORCPT ); Mon, 7 Jun 2021 02:16:55 -0400 Received: from esa6.hgst.iphmx.com ([216.71.154.45]:24879 "EHLO esa6.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230191AbhFGGQy (ORCPT ); Mon, 7 Jun 2021 02:16:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046504; x=1654582504; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7Xj0TsC+kDWP3MaOssYqB1aCtL21OM+5VdbGe0c9GsI=; b=JD+zJg1m0l2ED7U4bJdQPuNWineqVpm3jlCnfy6+9ZFC9Apz1F7Ac9nf XWB5SeVJJOQR208pJzz6E4vK8ss6uyaSXrk6/Tu0AG5d+EQmmuFKwya+/ u0fTSYAEojV6rsfLnCMBzHcp4NKVzYGhPtN7W47q10jef+fP0EcT2ISPL mnROtzptIl6y40MT4QAodg9Uga3qiNenHbtRxhNK6xgVXgNb2A7gKpJJG hymoJLY5YC1nQC5JTfy+3w9BnVTczqV7HA0JVyvJU30CEsxqIlhhm1acZ v/RDxg8G4v3WkD/anpcvA1guz1q4v5HxATagJKqGQIDYsF4syH8r9UFOd w==; IronPort-SDR: 7mq/T1lEonA3mUiuJxqkhGmGIADtpKonfCEHTx49FmSYe1L2qd0gs90Z9DxTwVMyZSlPMHF5uJ nlM2R34a0d6sGVhfcZ25osvTMI0nqflLJnxRwTekxyzTOUA1D1Uy2eFY9vFNM2AFmfOPGjL1o5 a2Ef1SLxq/Xd1PtpvqJRIawECqhk8bZc574nX0wvHJD4TZEHr4MZF5+EWFU/l+kFkbY1028yHx jad3FCbjk4Vtngpklp5nQYxXh8aqCjZQVbjJdn8halDGoOivsKRdFxBPx155T6hhyw/oMIQh/w 7fI= X-IronPort-AV: E=Sophos;i="5.83,254,1616428800"; d="scan'208";a="171530283" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 07 Jun 2021 14:15:04 +0800 IronPort-SDR: lv0FYk8FgdfdFQwR31Zrr3bJ/2Da/O0pDCzRl7MLEY1/qCq/mS6ClpA6Xuro2UP0+X7Eg0yN4w ixKz1rcick9D6qFAortM0/qJP0LFqksuSEW8J/J3nQrwBvklLobYfnvNMzt6qAEoEJ6LhHxorl 
wqufmBWpa6Ej/IcqRECQdZEvG+0xGX0BOYmsNuxAPCj6ndB7nVq80nr5XVjVmqZ2iHE4oT9pX0 cmb61OzUsbfqTEk4xprInL02Kaf68Z/S9QZIOhkiHkh+bAWNAqEDs2dk1Amb1o88RPFYFsMDw3 rNgi1VIOBLXgLrEYuEmBfSx7 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2021 22:54:12 -0700 IronPort-SDR: gd1OIIr7v/Bf7TTFhdlhaqTjcP5UBQ2veTwCMmqShR+WsxaJR6Pa7XpwslM57f0VUUXLM3557d IyQChCvlpnWu20mhfGa5wyCpEGN919XQFeJbgcF5pIKJOSit/dQpBCipthLQhBDhZG6IzKna6x QPk5tEb/O7L1nA0phwThANEsYDo0sAqm8/nXEADGtAliVtqzOoR9se08yu1Aq3akc0q6s5MZaH a8wKKdqO3t0Y9qv9ShKX4sdwZuWTFDzkuvvST5lPTu3gZP+3sOqqTRzZvtlArDlSClDonkLyfr OOc= WDCIronportException: Internal Received: from bxygm33.sdcorp.global.sandisk.com ([10.0.231.247]) by uls-op-cesaip02.wdc.com with ESMTP; 06 Jun 2021 23:15:00 -0700 From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 05/12] scsi: ufshpb: Make eviction depends on region's reads Date: Mon, 7 Jun 2021 09:13:54 +0300 Message-Id: <20210607061401.58884-6-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In host mode, eviction is considered an extreme measure. verify that the entering region has enough reads, and the exiting region has much less reads. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index b080bd9ca35a..f9efef35316e 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -17,6 +17,7 @@ #include "../sd.h" #define ACTIVATION_THRESHOLD 8 /* 8 IOs */ +#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */ /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -1057,6 +1058,13 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb) if (ufshpb_check_srgns_issue_state(hpb, rgn)) continue; + /* + * in host control mode, verify that the exiting region + * has less reads + */ + if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1)) + continue; + victim_rgn = rgn; break; } @@ -1229,7 +1237,7 @@ static int ufshpb_issue_map_req(struct ufshpb_lu *hpb, static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) { - struct ufshpb_region *victim_rgn; + struct ufshpb_region *victim_rgn = NULL; struct victim_select_info *lru_info = &hpb->lru_info; unsigned long flags; int ret = 0; @@ -1256,7 +1264,15 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) * It is okay to evict the least recently used region, * because the device could detect this region * by not issuing HPB_READ + * + * in host control mode, verify that the entering + * region has enough reads */ + if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) { + ret = -EACCES; + goto out; + } + victim_rgn = ufshpb_victim_lru_info(hpb); if (!victim_rgn) { dev_warn(&hpb->sdev_ufs_lu->sdev_dev, From patchwork Mon Jun 7 06:13:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12302569
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . 
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 06/12] scsi: ufshpb: Region inactivation in host mode Date: Mon, 7 Jun 2021 09:13:55 +0300 Message-Id: <20210607061401.58884-7-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In host mode, the host is expected to send HPB-WRITE-BUFFER with buffer-id = 0x1 when it inactivates a region. Use the map-requests pool as there is no point in assigning a designated cache for umap-requests. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 46 +++++++++++++++++++++++++++++++++------ drivers/scsi/ufs/ufshpb.h | 1 + 2 files changed, 40 insertions(+), 7 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index f9efef35316e..0ef46aa71045 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -691,7 +691,8 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) } static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb, - int rgn_idx, enum req_opf dir) + int rgn_idx, enum req_opf dir, + bool atomic) { struct ufshpb_req *rq; struct request *req; @@ -705,7 +706,7 @@ static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb, req = blk_get_request(hpb->sdev_ufs_lu->request_queue, dir, BLK_MQ_REQ_NOWAIT); - if ((PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) { + if (!atomic && (PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) { usleep_range(3000, 3100); goto retry; } @@ -736,7 +737,7 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct ufshpb_req *map_req; struct bio *bio; - map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN); + map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN, false); if (!map_req) return NULL; @@ -914,6 +915,7 @@ static int ufshpb_execute_umap_req(struct ufshpb_lu *hpb, blk_execute_rq_nowait(NULL, req, 1, ufshpb_umap_req_compl_fn); + hpb->stats.umap_req_cnt++; return 0; } @@ -1091,12 +1093,13 @@ static void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb, } static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb, - struct ufshpb_region *rgn) + struct ufshpb_region *rgn, + bool atomic) { struct ufshpb_req *umap_req; int rgn_idx = rgn ? 
rgn->rgn_idx : 0; - umap_req = ufshpb_get_req(hpb, rgn_idx, REQ_OP_SCSI_OUT); + umap_req = ufshpb_get_req(hpb, rgn_idx, REQ_OP_SCSI_OUT, atomic); if (!umap_req) return -ENOMEM; @@ -1110,13 +1113,19 @@ static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb, return -EAGAIN; } +static int ufshpb_issue_umap_single_req(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn) +{ + return ufshpb_issue_umap_req(hpb, rgn, true); +} + static int ufshpb_issue_umap_all_req(struct ufshpb_lu *hpb) { - return ufshpb_issue_umap_req(hpb, NULL); + return ufshpb_issue_umap_req(hpb, NULL, false); } static void __ufshpb_evict_region(struct ufshpb_lu *hpb, - struct ufshpb_region *rgn) + struct ufshpb_region *rgn) { struct victim_select_info *lru_info; struct ufshpb_subregion *srgn; @@ -1151,6 +1160,14 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) goto out; } + if (hpb->is_hcm) { + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + ret = ufshpb_issue_umap_single_req(hpb, rgn); + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + if (ret) + goto out; + } + __ufshpb_evict_region(hpb, rgn); } out: @@ -1285,6 +1302,18 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) "LRU full (%d), choose victim %d\n", atomic_read(&lru_info->active_cnt), victim_rgn->rgn_idx); + + if (hpb->is_hcm) { + spin_unlock_irqrestore(&hpb->rgn_state_lock, + flags); + ret = ufshpb_issue_umap_single_req(hpb, + victim_rgn); + spin_lock_irqsave(&hpb->rgn_state_lock, + flags); + if (ret) + goto out; + } + __ufshpb_evict_region(hpb, victim_rgn); } @@ -1853,6 +1882,7 @@ ufshpb_sysfs_attr_show_func(rb_noti_cnt); ufshpb_sysfs_attr_show_func(rb_active_cnt); ufshpb_sysfs_attr_show_func(rb_inactive_cnt); ufshpb_sysfs_attr_show_func(map_req_cnt); +ufshpb_sysfs_attr_show_func(umap_req_cnt); static struct attribute *hpb_dev_stat_attrs[] = { &dev_attr_hit_cnt.attr, @@ -1861,6 +1891,7 @@ static struct attribute *hpb_dev_stat_attrs[] = { &dev_attr_rb_active_cnt.attr, &dev_attr_rb_inactive_cnt.attr, &dev_attr_map_req_cnt.attr, + &dev_attr_umap_req_cnt.attr, NULL, }; @@ -1985,6 +2016,7 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb) hpb->stats.rb_active_cnt = 0; hpb->stats.rb_inactive_cnt = 0; hpb->stats.map_req_cnt = 0; + hpb->stats.umap_req_cnt = 0; } static void ufshpb_param_init(struct ufshpb_lu *hpb) diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 87495e59fcf1..1ea58c17a4de 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -191,6 +191,7 @@ struct ufshpb_stats { u64 rb_inactive_cnt; u64 map_req_cnt; u64 pre_req_cnt; + u64 umap_req_cnt; }; struct ufshpb_lu { From patchwork Mon Jun 7 06:13:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302571 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CDC0C47082 for ; Mon, 7 Jun 2021 06:15:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 70F68611AD for ; Mon, 7 Jun 2021 06:15:37 
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v10 07/12] scsi: ufshpb: Add hpb dev reset response
Date: Mon, 7 Jun 2021 09:13:56 +0300
Message-Id: <20210607061401.58884-8-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>
X-Mailing-List: linux-scsi@vger.kernel.org

The spec does not define the host's recommended response when the device sends an HPB dev reset response (oper 0x2). We will update all active HPB regions: mark them now and do the update on the next read.
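[Editor's note: the sketch below is an illustration added for this archive, not part of the patch. It pictures the "mark now, act on the next read" scheme described above; RGN_UPDATE_SKETCH and the two helpers are made-up names. The actual patch sets an RGN_FLAG_UPDATE bit on every region in the active (LRU) list and test-and-clears it on the read path to re-queue an activation.]

/*
 * Sketch: on a dev-reset notification, flag every active region; a later
 * read that touches a flagged region clears the flag and requests activation.
 */
#define RGN_UPDATE_SKETCH       0x1UL

struct rgn_dr_sketch { unsigned long flags; };

static void mark_active_regions_sketch(struct rgn_dr_sketch *active_rgns, int n)
{
        for (int i = 0; i < n; i++)
                active_rgns[i].flags |= RGN_UPDATE_SKETCH;
}

static int reactivate_on_read_sketch(struct rgn_dr_sketch *rgn)
{
        if (rgn->flags & RGN_UPDATE_SKETCH) {
                rgn->flags &= ~RGN_UPDATE_SKETCH;       /* test and clear */
                return 1;       /* caller queues a map (activation) request */
        }
        return 0;
}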
Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 32 +++++++++++++++++++++++++++++++- drivers/scsi/ufs/ufshpb.h | 1 + 2 files changed, 32 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 0ef46aa71045..1a29fe491c62 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -195,7 +195,8 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, } spin_unlock(&rgn->rgn_lock); - if (activate) { + if (activate || + test_and_clear_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags)) { spin_lock_irqsave(&hpb->rsp_list_lock, flags); ufshpb_update_active_info(hpb, rgn_idx, srgn_idx); spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); @@ -1417,6 +1418,20 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, queue_work(ufshpb_wq, &hpb->map_work); } +static void ufshpb_dev_reset_handler(struct ufshpb_lu *hpb) +{ + struct victim_select_info *lru_info = &hpb->lru_info; + struct ufshpb_region *rgn; + unsigned long flags; + + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + + list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) + set_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags); + + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); +} + /* * This function will parse recommended active subregion information in sense * data field of response UPIU with SAM_STAT_GOOD state. @@ -1491,6 +1506,18 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) case HPB_RSP_DEV_RESET: dev_warn(&hpb->sdev_ufs_lu->sdev_dev, "UFS device lost HPB information during PM.\n"); + + if (hpb->is_hcm) { + struct scsi_device *sdev; + + __shost_for_each_device(sdev, hba->host) { + struct ufshpb_lu *h = sdev->hostdata; + + if (h) + ufshpb_dev_reset_handler(h); + } + } + break; default: dev_notice(&hpb->sdev_ufs_lu->sdev_dev, @@ -1816,6 +1843,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) } else { rgn->rgn_state = HPB_RGN_INACTIVE; } + + rgn->rgn_flags = 0; } return 0; @@ -2141,6 +2170,7 @@ static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb) { if (hpb->is_hcm) cancel_work_sync(&hpb->ufshpb_normalization_work); + cancel_work_sync(&hpb->map_work); } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 1ea58c17a4de..b863540e28d6 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -127,6 +127,7 @@ struct ufshpb_region { struct list_head list_lru_rgn; unsigned long rgn_flags; #define RGN_FLAG_DIRTY 0 +#define RGN_FLAG_UPDATE 1 /* region reads - for host mode */ spinlock_t rgn_lock; From patchwork Mon Jun 7 06:13:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302573 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60541C47082 for ; Mon, 7 Jun 2021 06:15:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 43D366121E for ; Mon, 7 Jun 2021 06:15:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by 
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v10 08/12] scsi: ufshpb: Add "Cold" regions timer
Date: Mon, 7 Jun 2021 09:13:57 +0300
Message-Id: <20210607061401.58884-9-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>
X-Mailing-List: linux-scsi@vger.kernel.org

In order not to hang on to “cold” regions, we shall inactivate a region that has no READ access for a predefined amount of time - READ_TO_MS. For that purpose we shall monitor the active regions list, polling it every POLLING_INTERVAL_MS. 
On timeout expiry we shall add the region to the "to-be-inactivated" list, unless it is clean and did not exhaust its READ_TO_EXPIRIES - another parameter. All this does not apply to pinned regions. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 74 +++++++++++++++++++++++++++++++++++++-- drivers/scsi/ufs/ufshpb.h | 8 +++++ 2 files changed, 79 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 1a29fe491c62..a31a9a6979de 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -18,6 +18,9 @@ #define ACTIVATION_THRESHOLD 8 /* 8 IOs */ #define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */ +#define READ_TO_MS 1000 +#define READ_TO_EXPIRIES 100 +#define POLLING_INTERVAL_MS 200 /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -1032,12 +1035,63 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb, return 0; } +static void ufshpb_read_to_handler(struct work_struct *work) +{ + struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, + ufshpb_read_to_work.work); + struct victim_select_info *lru_info = &hpb->lru_info; + struct ufshpb_region *rgn, *next_rgn; + unsigned long flags; + LIST_HEAD(expired_list); + + if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits)) + return; + + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + + list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn, + list_lru_rgn) { + bool timedout = ktime_after(ktime_get(), rgn->read_timeout); + + if (timedout) { + rgn->read_timeout_expiries--; + if (is_rgn_dirty(rgn) || + rgn->read_timeout_expiries == 0) + list_add(&rgn->list_expired_rgn, &expired_list); + else + rgn->read_timeout = ktime_add_ms(ktime_get(), + READ_TO_MS); + } + } + + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + + list_for_each_entry_safe(rgn, next_rgn, &expired_list, + list_expired_rgn) { + list_del_init(&rgn->list_expired_rgn); + spin_lock_irqsave(&hpb->rsp_list_lock, flags); + ufshpb_update_inactive_info(hpb, rgn->rgn_idx); + spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); + } + + ufshpb_kick_map_work(hpb); + + clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits); + + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); +} + static void ufshpb_add_lru_info(struct victim_select_info *lru_info, struct ufshpb_region *rgn) { rgn->rgn_state = HPB_RGN_ACTIVE; list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); atomic_inc(&lru_info->active_cnt); + if (rgn->hpb->is_hcm) { + rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS); + rgn->read_timeout_expiries = READ_TO_EXPIRIES; + } } static void ufshpb_hit_lru_info(struct victim_select_info *lru_info, @@ -1824,6 +1878,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) INIT_LIST_HEAD(&rgn->list_inact_rgn); INIT_LIST_HEAD(&rgn->list_lru_rgn); + INIT_LIST_HEAD(&rgn->list_expired_rgn); if (rgn_idx == hpb->rgns_per_lu - 1) { srgn_cnt = ((hpb->srgns_per_lu - 1) % @@ -1845,6 +1900,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) } rgn->rgn_flags = 0; + rgn->hpb = hpb; } return 0; @@ -2066,9 +2122,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) INIT_LIST_HEAD(&hpb->list_hpb_lu); INIT_WORK(&hpb->map_work, ufshpb_map_work_handler); - if (hpb->is_hcm) + if (hpb->is_hcm) { INIT_WORK(&hpb->ufshpb_normalization_work, ufshpb_normalization_work_handler); + INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work, 
+ ufshpb_read_to_handler); + } hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache", sizeof(struct ufshpb_req), 0, 0, NULL); @@ -2102,6 +2161,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) ufshpb_stat_init(hpb); ufshpb_param_init(hpb); + if (hpb->is_hcm) + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); + return 0; release_pre_req_mempool: @@ -2168,9 +2231,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb) static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb) { - if (hpb->is_hcm) + if (hpb->is_hcm) { + cancel_delayed_work_sync(&hpb->ufshpb_read_to_work); cancel_work_sync(&hpb->ufshpb_normalization_work); - + } cancel_work_sync(&hpb->map_work); } @@ -2278,6 +2342,10 @@ void ufshpb_resume(struct ufs_hba *hba) continue; ufshpb_set_state(hpb, HPB_PRESENT); ufshpb_kick_map_work(hpb); + if (hpb->is_hcm) + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); + } } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index b863540e28d6..448062a94760 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -115,6 +115,7 @@ struct ufshpb_subregion { }; struct ufshpb_region { + struct ufshpb_lu *hpb; struct ufshpb_subregion *srgn_tbl; enum HPB_RGN_STATE rgn_state; int rgn_idx; @@ -132,6 +133,10 @@ struct ufshpb_region { /* region reads - for host mode */ spinlock_t rgn_lock; unsigned int reads; + /* region "cold" timer - for host mode */ + ktime_t read_timeout; + unsigned int read_timeout_expiries; + struct list_head list_expired_rgn; }; #define for_each_sub_region(rgn, i, srgn) \ @@ -223,6 +228,9 @@ struct ufshpb_lu { /* for selecting victim */ struct victim_select_info lru_info; struct work_struct ufshpb_normalization_work; + struct delayed_work ufshpb_read_to_work; + unsigned long work_data_bits; +#define TIMEOUT_WORK_RUNNING 0 /* pinned region information */ u32 lu_pinned_start; From patchwork Mon Jun 7 06:13:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302575 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D97ABC47082 for ; Mon, 7 Jun 2021 06:15:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BA99E611AD for ; Mon, 7 Jun 2021 06:15:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230351AbhFGGRr (ORCPT ); Mon, 7 Jun 2021 02:17:47 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:4838 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230353AbhFGGRq (ORCPT ); Mon, 7 Jun 2021 02:17:46 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046555; x=1654582555; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Eg7FuxGEda1N/nS5nNBZ3iFh8zPGLfT1LdyYWIqHRXM=; b=DeN0TUpV57QjXGkdFv2mpimKMtAfZkqLZ87kBeFGVCHy6fUPH605JbrC 
RJx4xZzBS8HrRjfop/mPljXjr6laKqptajA5c5G6XWXVaG/EqW4wTbbFY 0uRPly27dA3CUp6BqxFSC5YcKF283blrbvb3u7yL4PsPNheU4QPsJO8yr 03Y90HY+vnaMw2J9Px/bgspvQF3BmlljN43rjLvmEoUT6D5dBdnpU0cMl wsiI1h4xl353iiVqArAqieVQLl7Zpjlfl7p0yxhIKHoql0/B2OtlSepDA vCCSqhBBUq/PeA1/KFu4KVR7n+1SW6SkjOwknuZqhQ9amCm3j7EpikRgd Q==; IronPort-SDR: 2kwnSo+e1h+enGEWPZHFGsYQ2SJnChc3jQQyiPFRP17Xx7WXrpj1gdn4A97QZASA3/X4C/Fr3E CYo+0qTQDYP2xc8Y4w0TOOjeARck2Z6obk6nlsNOC9V4elZW+PAAWwD+p6xc4s3b/97m6v7r5O iedWtjrwW+gBD9tHz7MHAt8Rv60yjyRRrDepagOgUrjDUO1yNSWzLw8Ho860D5L7I25Xp9N5K5 N6ihrQUyCGCqxoE7vgYG9TNbm46VMebuvtrnGvQmcVApYRx6QzGRFaFdGQYA49aZziNCIbXWZg wk0= X-IronPort-AV: E=Sophos;i="5.83,254,1616428800"; d="scan'208";a="170991635" Received: from uls-op-cesaip01.wdc.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 07 Jun 2021 14:15:55 +0800 IronPort-SDR: t1CIkNPH6bbCvbqXlZbWBmbr38ukQm3Qc6d/Uaw9TfJMkbX52mp32p63oQtdmuSv6kTrhBG5ZC 8P09AzwIRz16pCbMBSEZJ3tiznhB78iWowGlh8sNrwL7ZI3tKIr5E28+Cq78iJj11SKdfcZ6wf 1LzHGunUeyqUkz78OiH3sYZqKqfyC0uR8yQKilumAFKdNc9f3HoBL2eICovyhsE/ZkqJGOhp/N MZ3B0vIhe7KK8buYEDTqCLLX49RWrVyUAWHnayd+aZABuu+AIg3qJz0BYpdzUKagUnJ3C1IAUZ 7TdXQ4cQ4KD51Fbdddzm7EwZ Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2021 22:55:04 -0700 IronPort-SDR: iXTA1paGG1tWlPlQhUrMONaZWE0WhnTOAS9gY6q6F3ni8J/lCb5D8AockGbyiw/nYmCKWss9bu 2XvgKGry4XdfS9q/egY03hLrjBmRNKOW+kUIWaNGbMGPmspjOrowUhyq5ewLsedSIFq+/VxN66 91+oWIxXJw3Lu9Dad36qsy934DU9eD/X8B/UKJPWWCokFlqG66IudX97kGrKsOYDBk2gp/RI7o lfgIP3O21lH7u910eplFzzFjQbu/owC4csJ5jPVmzJEKvqML+1deMPjbxh/7PISE0PmQjRwA3E rWI= WDCIronportException: Internal Received: from bxygm33.sdcorp.global.sandisk.com ([10.0.231.247]) by uls-op-cesaip02.wdc.com with ESMTP; 06 Jun 2021 23:15:52 -0700 From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 09/12] scsi: ufshpb: Limit the number of inflight map requests Date: Mon, 7 Jun 2021 09:13:58 +0300 Message-Id: <20210607061401.58884-10-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In host control mode the host is the originator of map requests. To not flood the device with map requests, use a simple throttling mechanism that limits the number of inflight map requests. 
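To make the mechanism concrete, here is a minimal userspace sketch of such a counting throttle, assuming a single default slot; the struct and function names below are illustrative only and are not the ufshpb driver's API.

/*
 * Minimal sketch of a counting throttle (illustrative names, not the
 * driver's API): admit a new map request only while the inflight
 * counter is below the limit, and release the slot on completion.
 */
#include <stdbool.h>
#include <stdio.h>

#define THROTTLE_MAP_REQ_DEFAULT 1

struct demo_lu {
	int num_inflight_map_req;
	int inflight_limit;
};

static bool demo_get_map_req(struct demo_lu *lu)
{
	if (lu->num_inflight_map_req >= lu->inflight_limit) {
		printf("map_req throttle. inflight %d throttle %d\n",
		       lu->num_inflight_map_req, lu->inflight_limit);
		return false;
	}
	lu->num_inflight_map_req++;
	return true;
}

static void demo_put_map_req(struct demo_lu *lu)
{
	lu->num_inflight_map_req--;
}

int main(void)
{
	struct demo_lu lu = { .inflight_limit = THROTTLE_MAP_REQ_DEFAULT };

	if (demo_get_map_req(&lu))	/* first request admitted */
		printf("map request 1 issued\n");
	if (!demo_get_map_req(&lu))	/* second one throttled */
		printf("map request 2 deferred\n");
	demo_put_map_req(&lu);		/* completion frees the slot */
	if (demo_get_map_req(&lu))
		printf("map request 3 issued\n");
	return 0;
}

The limit here mirrors the THROTTLE_MAP_REQ_DEFAULT of 1 introduced by this patch; the final patch in the series exposes it through sysfs so it can be raised up to the LU queue depth minus one.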
Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 11 +++++++++++ drivers/scsi/ufs/ufshpb.h | 1 + 2 files changed, 12 insertions(+) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index a31a9a6979de..bcfdf338244b 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -21,6 +21,7 @@ #define READ_TO_MS 1000 #define READ_TO_EXPIRIES 100 #define POLLING_INTERVAL_MS 200 +#define THROTTLE_MAP_REQ_DEFAULT 1 /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -741,6 +742,14 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct ufshpb_req *map_req; struct bio *bio; + if (hpb->is_hcm && + hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) { + dev_info(&hpb->sdev_ufs_lu->sdev_dev, + "map_req throttle. inflight %d throttle %d", + hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT); + return NULL; + } + map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN, false); if (!map_req) return NULL; @@ -755,6 +764,7 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, map_req->rb.srgn_idx = srgn->srgn_idx; map_req->rb.mctx = srgn->mctx; + hpb->num_inflight_map_req++; return map_req; } @@ -764,6 +774,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, { bio_put(map_req->bio); ufshpb_put_req(hpb, map_req); + hpb->num_inflight_map_req--; } static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 448062a94760..cfa0abac21db 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -217,6 +217,7 @@ struct ufshpb_lu { struct ufshpb_req *pre_req; int num_inflight_pre_req; int throttle_pre_req; + int num_inflight_map_req; struct list_head lh_pre_req_free; int cur_read_id; int pre_req_min_tr_len; From patchwork Mon Jun 7 06:13:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302577 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 738BFC47082 for ; Mon, 7 Jun 2021 06:16:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5CAD1611AD for ; Mon, 7 Jun 2021 06:16:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230373AbhFGGRy (ORCPT ); Mon, 7 Jun 2021 02:17:54 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:27093 "EHLO esa1.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230353AbhFGGRx (ORCPT ); Mon, 7 Jun 2021 02:17:53 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046563; x=1654582563; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZJQ/NRNaJlybQ/31KgdHbsj2CZRmUMaLw3D3Q66Q6kY=; b=fBY7t35/5eKDT06HWpkVjNO43HQqSSXFtjJ8HMNBSbXn4gEgsAypyQaX u0ZTE9h+kCgbeEgl61ODSfNYHqjCFSDwSG6/cBjKHtc+zzNxlf1q9A4m2 3o62TyDPMzLnH11r0ULRlWIJ7VMZ9s7QAA+VBbhyXinEtWdxJVidH1iwg 
f4BxBUySlwHgpMlGw3POtAQlvFa3p6+56LzpR7Egpa86LcOAl4V+dTasL iS6uK/TNIU+T7fF/dOfQeWYM5n1ryrypQq3z6MbmBu1kePrFOM0XkwC4M F8voEcY5bGj+YxSaZYAw4E2qSJJG8VAHKQigYtyAnwOUu+Kqn+nIw3Nbv A==; IronPort-SDR: RG8t/6tvLy1wHUj6y3G/Y9NB9BKHZI9zEe0aRAcH+WL2FSAlMLFLQ5WDEs2btB8hUtg7DEgY4r pd/Vcg0o/uXR3RIicL6dKTYE7Nn4rxtjydYw2YszZxlcsos0mBZbwdG4kfIUYlBNC720qdpiFX oyFcebjVWiTqfeGwySQqUb0znzOhGGeeVH+wZ4Bk70lqEa8EeFtpGIq/qR8EEHTSAVIAfXV/Gd Mr4zeBuE4uhowmw+6QxJv377HfzfbxFLTGgR/R6DGj6pJkcj1P9gi0oZODI4sM9Y+/Rw3rp+4c dWk= X-IronPort-AV: E=Sophos;i="5.83,254,1616428800"; d="scan'208";a="282406603" Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 07 Jun 2021 14:16:02 +0800 IronPort-SDR: MnCh90/+xdYfMuiresTvgLhnmqFQslpZgXFVI2Ux3HG1DH9RSZzD53RxG5BoYESgD29FzdrdWm YRRq9Ghn72W2+aC/oG7rg0F7RIMGfIVn79PBCYGlZt+C6ORCmtdr0xAkSYb65nnV2P+Xr+aYRQ IRly5OB3yaJoWlZZZ9iEpF6KmP3NXkp5j0zPpQSkFKcXMUWK95laUR476eGDF5CRAoA9uL5le+ EkL7gXtcdEe1LGdrObr0dmu6r7Elju6odlbOwoHIWcCgF34I0hGPGUJ8Us0PK92t8B+HJoUj1d pu+PUsqn0ieKGo3VIm6Wbn/k Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2021 22:53:45 -0700 IronPort-SDR: ERVSPo22VbVw8JD79DfhN4RDe02aOv6xnDSELMzfcmnkj84WA023GjVqFgYN7n/JLgWteiVClV NZ0KBjGtie3XtkR4h9X54LoxgQLbK0Zg5wzH+0lPaU7oNFmA/JmrqczF7mBzLw8RFt6mgCQTuW lXcQt/SOLiwWSuIfSJ4hos1Up2y6G09g/ohMMC0tzaTW4G4BDSnlOKvKKvDIqC6DGzJp9fi5Vy g0XcLY+gRB0oKecXGMKr7Htq9sdkYHSglx3pEvMIWeMYoIWm0ntYonLS7NyN6+uG40IMPUOdsg DuE= WDCIronportException: Internal Received: from bxygm33.sdcorp.global.sandisk.com ([10.0.231.247]) by uls-op-cesaip02.wdc.com with ESMTP; 06 Jun 2021 23:15:59 -0700 From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 10/12] scsi: ufshpb: Do not send umap_all in host control mode Date: Mon, 7 Jun 2021 09:13:59 +0300 Message-Id: <20210607061401.58884-11-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org HPB-WRITE-BUFFER with buffer-id = 0x3h is supported in device control mode only. 
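A small illustrative sketch of the resulting behavior (plain C, not the driver's code): the LU-prepared path branches on the cached control mode, and only device control mode gets the unmap-all request.

/*
 * Illustrative sketch only: HPB-WRITE-BUFFER with buffer-id 0x03
 * ("unmap all regions") is valid in device control mode only, so the
 * setup path must skip it when the LU runs in host control mode.
 */
#include <stdio.h>

enum demo_hpb_control_mode { DEMO_HPB_DEVICE_CONTROL, DEMO_HPB_HOST_CONTROL };

static void demo_issue_umap_all(void)
{
	printf("issue HPB-WRITE-BUFFER, buffer-id 0x03 (unmap all)\n");
}

static void demo_lu_prepared(enum demo_hpb_control_mode mode)
{
	if (mode == DEMO_HPB_HOST_CONTROL) {
		/* host mode: regions are inactivated individually instead */
		printf("host control mode: skip unmap-all\n");
		return;
	}
	demo_issue_umap_all();
}

int main(void)
{
	demo_lu_prepared(DEMO_HPB_DEVICE_CONTROL);
	demo_lu_prepared(DEMO_HPB_HOST_CONTROL);
	return 0;
}

In host control mode, inactivation is instead driven per region by the host-side eviction and read-timeout logic added earlier in this series.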
Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index bcfdf338244b..98c107ca4a4e 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -2461,7 +2461,8 @@ static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba) ufshpb_set_state(hpb, HPB_PRESENT); if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0) queue_work(ufshpb_wq, &hpb->map_work); - ufshpb_issue_umap_all_req(hpb); + if (!hpb->is_hcm) + ufshpb_issue_umap_all_req(hpb); } else { dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun); ufshpb_destroy_lu(hba, sdev); From patchwork Mon Jun 7 06:14:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302579 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 731BFC47082 for ; Mon, 7 Jun 2021 06:16:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5FFFA611AD for ; Mon, 7 Jun 2021 06:16:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230387AbhFGGSB (ORCPT ); Mon, 7 Jun 2021 02:18:01 -0400 Received: from esa6.hgst.iphmx.com ([216.71.154.45]:2423 "EHLO esa6.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229498AbhFGGSA (ORCPT ); Mon, 7 Jun 2021 02:18:00 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046571; x=1654582571; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uCM6ztR68ONhBFMAqxH6M4v+IG6pYst0Aq+gii77JqU=; b=MwqlLs6HneA/h/MgKUSzZ044NaLiWievVitT+QXAruk4ll9B065c/qca Ac99tk2ScUwNAfm1N43yDtm0Ykdjt9L2GmNXf9kUlgNBMVhyqx1+FFQUW hxgjdtrBmVgO3UReFPlznnk1u1/zSuaJ5iz82u84uEPWlPa2Cy965h5Q2 f9peNX8jOgYBlRZMD3/M4nv383DBm4Q61urHNnOFJ61IekjqYdK4pYWzT RJ7VnO0hq9YGqIhDVVfo7WqHivQuM5sQfOQJgp57zsqJ/flFO57khO3fo bpBGwnCXS92o1BtbxElSJWnGeQHEk0+WEfHWffCvhkLGL7CiERsdOhvlN w==; IronPort-SDR: ZFRykyaBeIPK4myNKAc8ypIgH06q5FnLDc086AWZyPhfw1y82csm0YYuU86RaEWxijYQ2iWVaZ EWAVKhoHuxkpkGm6De4rs8EKSN0E/Qw6jmeTgVnG8bkqIfouDzuISugRFNZy4trGqbWgVa+aCD ifDJfXFaz0qiQaV9yOhzceYjynboHF6RGs/NXofjVVZ8gvVPwdgf6ncENCfFIVfH3d4jqMhLl7 TmEiDX84HSoyr7llG+G4m7AZyEQkNI78jK+sDWrdnmN7wgE3Gt0aykFtKZII9HLMD+12cEQ4+F Y00= X-IronPort-AV: E=Sophos;i="5.83,254,1616428800"; d="scan'208";a="171530375" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 07 Jun 2021 14:16:10 +0800 IronPort-SDR: g8WINktT4FtABmVnZ6u4ECdgYQvcSSSYmoa+6s6nGGTsxInAOqR8Ecjv+GgRcO15Ceg+B6dCqO G7KLZGm48JJNjGpRlqacsKuQgSL6GgJv+zUgx8T7eBh3mlSxTqYN7UZAblNhnjroHLp7QSiIwq YeFwDj8DOkEMC70lF9JBUbsxcGVHOjDeFLQsjaL9XA/tIU8fBRkiXbAqoEQlRRuzc28PVsZ+GL xM8EofcUwtpAmylA30ogVjDnLbUcyTE8YTRnZau/TBjS5pWr2W9JuaIEDPkVtihiGzwl7x+Szs jSXJ6yEUYcRhPcXFgAHpO+Ns Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 
2021 22:53:52 -0700 IronPort-SDR: OYmOalzL6EWpSLMfhD/iKqrre6Oy9nq2CAxrIAiqjNTvKqB8qger+s6maFJ8+sW74NpIGjPE2f pHx1p0WDAbtQigvj2ivXaeU73zRtlfnzorh3iIbzy+NpFSAUfmJnS3Y36QtnCWzIzW0zq78xPA gf3XBPJMjlJH7A5Go7c+aDhRwP90loJ2WKtMReNrCzZ3/JcCtjNrLm9F4N7xitas+TJs7jYzAc 8nqhzB8RZZ1hoCAcaNPAS8OFadOuVfkj4X1d+7HhbQuCU/BKe5KS2xk+Qy4U0mlq2JFanWgAm3 MGQ= WDCIronportException: Internal Received: from bxygm33.sdcorp.global.sandisk.com ([10.0.231.247]) by uls-op-cesaip02.wdc.com with ESMTP; 06 Jun 2021 23:16:06 -0700 From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 11/12] scsi: ufshpb: Add support for host control mode Date: Mon, 7 Jun 2021 09:14:00 +0300 Message-Id: <20210607061401.58884-12-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Support devices that report they are using host control mode. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 98c107ca4a4e..53f94ad5e7a3 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -2585,12 +2585,6 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) u32 max_hpb_single_cmd = HPB_MULTI_CHUNK_LOW; hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; - if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) { - dev_err(hba->dev, "%s: host control mode is not supported.\n", - __func__); - hpb_dev_info->hpb_disabled = true; - return; - } version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); if ((version != HPB_SUPPORT_VERSION) && From patchwork Mon Jun 7 06:14:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12302581 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B47DC47082 for ; Mon, 7 Jun 2021 06:16:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 11A1260C3F for ; Mon, 7 Jun 2021 06:16:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230410AbhFGGSK (ORCPT ); Mon, 7 Jun 2021 02:18:10 -0400 Received: from esa2.hgst.iphmx.com ([68.232.143.124]:14086 "EHLO esa2.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230396AbhFGGSI (ORCPT ); Mon, 7 Jun 2021 02:18:08 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1623046588; x=1654582588; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=eMBmmlv9OIKlJjChb+ZLGvGEKVN46GYejU3GC9QRMbI=; b=C1ioDxBhfVVURqCHlXZ557jnpdL6jswkrNMaC0/bUTE1P1rP270xjzD6 vATAJl5T79YJkS3TrD0794O1XfxkyoiMitiJYpaYIOYmprzudAMKbPYMG D4JWA9r9OvWIAuH/gZUKaAsbJutaYcQcoGx8llh/E5sJpy4rFKPxq4qYQ GFSNN4fZ+F9307LEr3zZ0OspkKOA2kwXWg+0uOIWvpeQZKXk8aXaL5y6q m4+9+0Ffax855/yhay/2shwhekCNwGNsDXefC4+siVly9chQvlFqCKwug c0z7j+VhG/L4jv5ejmaMwLjNoch8StFi4pRR/Mg5Z8o5f3ronthqAQvfd w==; IronPort-SDR: S24iRhsa1FVhzQTCzqKDMQST2mgIIh53RA7nuf5Yc3UZrOhguMFR0Hjxm05c2wSJF8kYhOtvmh yjNOjOIAQyoUAtF4QnPPki+hfJhKTZ9MjCauWa4CPpvcORPDalrr99l0c+2wcRkYyEE163qm3r eNkRUonVygIDGHhp4Qqwl3IFYRPTGqqarMsPtR8Igx9LGviHFPruvPcP8KXsUzMw5fhpk6PvTN iWMHeUvdr7r2hmj+pACTLdv2bduJQTavhWJwSaW52NcqWBWH2lhee8lx+JRG+zMrtmTcf7gVst +oU= X-IronPort-AV: E=Sophos;i="5.83,254,1616428800"; d="scan'208";a="274818303" Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 07 Jun 2021 14:16:26 +0800 IronPort-SDR: PSAlg/Fae6YWBLrV8eQ7YXKKkKbcldis9UNJPAjWBnl3NvOUaDganbKpkDcrKbxVbz4s5Bqcuv tPs1rQufjnV98JJsmhCFdS8S5xpn++g0glItLljNWt0gAhgEBfYAQBhhp4ZvvUjbh0pqvYCUEu 3HTUl29ZanPS27hwAt1QzCTbYsxzj4lpRNDykxW3HgkwknmcqvoOnX8lsKplni/GTzKPf7lqbJ ch9vXuzZ1QM1ACjwsZoxszJY7NFID60wZOjtR7liAAlSv4GDiE4nnG4++h80Ka7Srq4A1AGIOf M6if9o7jHYpOcVr2cLxVeBi7 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2021 22:53:58 -0700 IronPort-SDR: w8c/e0FsIO61s/vOlxHZjaXz/5+chhls5RZIxRRWZXWTQ8rNx7yKrIZVhwk6cV3KW0uxfgc26F qMBMaoeXGH7WsR2xhId+T58TupeiVa3vdaU7kQ7c+PycLQEWBh/KeK7fI9dtVAfV01aMUT/OCq V9rT3L3ssgD976ypTIFN7FPk+9qfybvnPVXkV8R3UednrVYbvBDJ3C3ru5RLEo+fGFwM5dlAMZ rg6d9DfBIlTad6uVjaUFaawdmIZc6GvrdFIuaZ3VOGb3lhFn4YL6LIgLatu+xbeeKYZg/3799X rDE= WDCIronportException: Internal Received: from bxygm33.sdcorp.global.sandisk.com ([10.0.231.247]) by uls-op-cesaip02.wdc.com with ESMTP; 06 Jun 2021 23:16:12 -0700 From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 12/12] scsi: ufshpb: Make host mode parameters configurable Date: Mon, 7 Jun 2021 09:14:01 +0300 Message-Id: <20210607061401.58884-13-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org We can make use of this commit, to elaborate some more of the host control mode logic, explaining what role play each and every variable. While at it, allow those parameters to be configurable. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- Documentation/ABI/testing/sysfs-driver-ufs | 76 +++++- drivers/scsi/ufs/ufshpb.c | 288 +++++++++++++++++++-- drivers/scsi/ufs/ufshpb.h | 20 ++ 3 files changed, 367 insertions(+), 17 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs index 2501bad03487..ef2399bfc2e2 100644 --- a/Documentation/ABI/testing/sysfs-driver-ufs +++ b/Documentation/ABI/testing/sysfs-driver-ufs @@ -1449,7 +1449,7 @@ Description: This entry shows the maximum HPB data size for using single HPB The file is read only. 
-What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_enable +What: /sys/bus/platform/drivers/ufshcd/*/flags/hpb_enable Date: June 2021 Contact: Daejun Park Description: This entry shows the status of HPB. @@ -1460,3 +1460,77 @@ Description: This entry shows the status of HPB. == ============================ The file is read only. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, reads are the major source of activation + trials. Once this threshold has been met, the region is added to the + "to-be-activated" list. Since we reset the read counter upon + write, this includes sending an rb command updating the region + ppn as well. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, we think of the regions as "buckets". + Those buckets are being filled with reads, and emptied on write. + We use entries_per_srgn - the number of blocks in a subregion - as + our bucket size. This applies because HPB1.0 only concerns + single-block reads. Once the bucket size is crossed, we trigger + the normalization work - not only to avoid overflow, but mainly + because we want to keep those counters normalized, as we are + using those reads as a comparative score, to make various decisions. + The normalization divides (shifts right) the read counter by + the normalization_factor. If during consecutive normalizations + an active region has exhausted its reads - inactivate it. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter +Date: February 2021 +Contact: Avri Altman +Description: Region deactivation is often due to the fact that eviction took + place: a region becomes active at the expense of another. This + happens when the max-active-regions limit has been crossed. + In host mode, eviction is considered an extreme measure. We + want to verify that the entering region has enough reads, and + the exiting region has far fewer reads. eviction_thld_enter is + the minimum number of reads that a region must have in order to be considered + as a candidate to evict another region. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit +Date: February 2021 +Contact: Avri Altman +Description: Same as above, for the exiting region. A region is considered + a candidate for eviction only if it has fewer reads than + eviction_thld_exit. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms +Date: February 2021 +Contact: Avri Altman +Description: In order not to hang on to “cold” regions, we shall inactivate + a region that has no READ access for a predefined amount of + time - read_timeout_ms. If read_timeout_ms has expired, and the + region is dirty - it is less likely that we can make any use of + HPB-READing it. So we inactivate it. Still, deactivation has + its overhead, and we may still benefit from HPB-READing this + region if it is clean - see read_timeout_expiries. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries +Date: February 2021 +Contact: Avri Altman +Description: If the region read timeout has expired but the region is clean, + just re-wind its timer for another spin. Do that as long as it + is clean and has not exhausted its read_timeout_expiries threshold. 
+ +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms +Date: February 2021 +Contact: Avri Altman +Description: the frequency in which the delayed worker that checks the + read_timeouts is awaken. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req +Date: February 2021 +Contact: Avri Altman +Description: in host control mode the host is the originator of map requests. + To not flood the device with map requests, use a simple throttling + mechanism that limits the number of inflight map requests. diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 53f94ad5e7a3..1fe776470911 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -17,7 +17,6 @@ #include "../sd.h" #define ACTIVATION_THRESHOLD 8 /* 8 IOs */ -#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */ #define READ_TO_MS 1000 #define READ_TO_EXPIRIES 100 #define POLLING_INTERVAL_MS 200 @@ -194,7 +193,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, } else { srgn->reads++; rgn->reads++; - if (srgn->reads == ACTIVATION_THRESHOLD) + if (srgn->reads == hpb->params.activation_thld) activate = true; } spin_unlock(&rgn->rgn_lock); @@ -743,10 +742,11 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct bio *bio; if (hpb->is_hcm && - hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) { + hpb->num_inflight_map_req >= hpb->params.inflight_map_req) { dev_info(&hpb->sdev_ufs_lu->sdev_dev, "map_req throttle. inflight %d throttle %d", - hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT); + hpb->num_inflight_map_req, + hpb->params.inflight_map_req); return NULL; } @@ -1053,6 +1053,7 @@ static void ufshpb_read_to_handler(struct work_struct *work) struct victim_select_info *lru_info = &hpb->lru_info; struct ufshpb_region *rgn, *next_rgn; unsigned long flags; + unsigned int poll; LIST_HEAD(expired_list); if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits)) @@ -1071,7 +1072,7 @@ static void ufshpb_read_to_handler(struct work_struct *work) list_add(&rgn->list_expired_rgn, &expired_list); else rgn->read_timeout = ktime_add_ms(ktime_get(), - READ_TO_MS); + hpb->params.read_timeout_ms); } } @@ -1089,8 +1090,9 @@ static void ufshpb_read_to_handler(struct work_struct *work) clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits); + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); } static void ufshpb_add_lru_info(struct victim_select_info *lru_info, @@ -1100,8 +1102,11 @@ static void ufshpb_add_lru_info(struct victim_select_info *lru_info, list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); atomic_inc(&lru_info->active_cnt); if (rgn->hpb->is_hcm) { - rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS); - rgn->read_timeout_expiries = READ_TO_EXPIRIES; + rgn->read_timeout = + ktime_add_ms(ktime_get(), + rgn->hpb->params.read_timeout_ms); + rgn->read_timeout_expiries = + rgn->hpb->params.read_timeout_expiries; } } @@ -1130,7 +1135,8 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb) * in host control mode, verify that the exiting region * has less reads */ - if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1)) + if (hpb->is_hcm && + rgn->reads > hpb->params.eviction_thld_exit) continue; victim_rgn = rgn; @@ -1351,7 +1357,8 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) * in host control 
mode, verify that the entering * region has enough reads */ - if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) { + if (hpb->is_hcm && + rgn->reads < hpb->params.eviction_thld_enter) { ret = -EACCES; goto out; } @@ -1702,6 +1709,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work) struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work); int rgn_idx; + u8 factor = hpb->params.normalization_factor; for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) { struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx; @@ -1712,7 +1720,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work) for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) { struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx; - srgn->reads >>= 1; + srgn->reads >>= factor; rgn->reads += srgn->reads; } spin_unlock(&rgn->rgn_lock); @@ -2033,8 +2041,247 @@ requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RW(requeue_timeout_ms); +ufshpb_sysfs_param_show_func(activation_thld); +static ssize_t +activation_thld_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.activation_thld = val; + + return count; +} +static DEVICE_ATTR_RW(activation_thld); + +ufshpb_sysfs_param_show_func(normalization_factor); +static ssize_t +normalization_factor_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > ilog2(hpb->entries_per_srgn)) + return -EINVAL; + + hpb->params.normalization_factor = val; + + return count; +} +static DEVICE_ATTR_RW(normalization_factor); + +ufshpb_sysfs_param_show_func(eviction_thld_enter); +static ssize_t +eviction_thld_enter_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.eviction_thld_exit) + return -EINVAL; + + hpb->params.eviction_thld_enter = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_enter); + +ufshpb_sysfs_param_show_func(eviction_thld_exit); +static ssize_t +eviction_thld_exit_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.activation_thld) + return -EINVAL; + + hpb->params.eviction_thld_exit = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_exit); + +ufshpb_sysfs_param_show_func(read_timeout_ms); +static ssize_t +read_timeout_ms_store(struct device *dev, struct device_attribute *attr, + const char *buf, 
size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* read_timeout >> timeout_polling_interval */ + if (val < hpb->params.timeout_polling_interval_ms * 2) + return -EINVAL; + + hpb->params.read_timeout_ms = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_ms); + +ufshpb_sysfs_param_show_func(read_timeout_expiries); +static ssize_t +read_timeout_expiries_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.read_timeout_expiries = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_expiries); + +ufshpb_sysfs_param_show_func(timeout_polling_interval_ms); +static ssize_t +timeout_polling_interval_ms_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* timeout_polling_interval << read_timeout */ + if (val <= 0 || val > hpb->params.read_timeout_ms / 2) + return -EINVAL; + + hpb->params.timeout_polling_interval_ms = val; + + return count; +} +static DEVICE_ATTR_RW(timeout_polling_interval_ms); + +ufshpb_sysfs_param_show_func(inflight_map_req); +static ssize_t inflight_map_req_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > hpb->sdev_ufs_lu->queue_depth - 1) + return -EINVAL; + + hpb->params.inflight_map_req = val; + + return count; +} +static DEVICE_ATTR_RW(inflight_map_req); + +static void ufshpb_hcm_param_init(struct ufshpb_lu *hpb) +{ + hpb->params.activation_thld = ACTIVATION_THRESHOLD; + hpb->params.normalization_factor = 1; + hpb->params.eviction_thld_enter = (ACTIVATION_THRESHOLD << 5); + hpb->params.eviction_thld_exit = (ACTIVATION_THRESHOLD << 4); + hpb->params.read_timeout_ms = READ_TO_MS; + hpb->params.read_timeout_expiries = READ_TO_EXPIRIES; + hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS; + hpb->params.inflight_map_req = THROTTLE_MAP_REQ_DEFAULT; +} + static struct attribute *hpb_dev_param_attrs[] = { &dev_attr_requeue_timeout_ms.attr, + &dev_attr_activation_thld.attr, + &dev_attr_normalization_factor.attr, + &dev_attr_eviction_thld_enter.attr, + &dev_attr_eviction_thld_exit.attr, + &dev_attr_read_timeout_ms.attr, + &dev_attr_read_timeout_expiries.attr, + &dev_attr_timeout_polling_interval_ms.attr, + &dev_attr_inflight_map_req.attr, NULL, }; @@ -2118,6 +2365,8 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb) static void ufshpb_param_init(struct ufshpb_lu *hpb) { hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS; + if (hpb->is_hcm) + ufshpb_hcm_param_init(hpb); } static int ufshpb_lu_hpb_init(struct ufs_hba 
*hba, struct ufshpb_lu *hpb) @@ -2172,9 +2421,13 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) ufshpb_stat_init(hpb); ufshpb_param_init(hpb); - if (hpb->is_hcm) + if (hpb->is_hcm) { + unsigned int poll; + + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); + } return 0; @@ -2353,10 +2606,13 @@ void ufshpb_resume(struct ufs_hba *hba) continue; ufshpb_set_state(hpb, HPB_PRESENT); ufshpb_kick_map_work(hpb); - if (hpb->is_hcm) - schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + if (hpb->is_hcm) { + unsigned int poll = + hpb->params.timeout_polling_interval_ms; + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(poll)); + } } } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index cfa0abac21db..68a5af0ff682 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -185,8 +185,28 @@ struct victim_select_info { atomic_t active_cnt; }; +/** + * ufshpb_params - ufs hpb parameters + * @requeue_timeout_ms - requeue threshold of wb command (0x2) + * @activation_thld - min reads [IOs] to activate/update a region + * @normalization_factor - shift right the region's reads + * @eviction_thld_enter - min reads [IOs] for the entering region in eviction + * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction + * @read_timeout_ms - timeout [ms] from the last read IO to the region + * @read_timeout_expiries - amount of allowable timeout expireis + * @timeout_polling_interval_ms - frequency in which timeouts are checked + * @inflight_map_req - number of inflight map requests + */ struct ufshpb_params { unsigned int requeue_timeout_ms; + unsigned int activation_thld; + unsigned int normalization_factor; + unsigned int eviction_thld_enter; + unsigned int eviction_thld_exit; + unsigned int read_timeout_ms; + unsigned int read_timeout_expiries; + unsigned int timeout_polling_interval_ms; + unsigned int inflight_map_req; }; struct ufshpb_stats {