From patchwork Wed Aug 5 04:58:48 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 11701319
Subject: [PATCH v7 1/4] scsi: ufs: Add UFS feature related parameter
From: Daejun Park
Reply-To: daejun7.park@samsung.com
To: Daejun Park, "avri.altman@wdc.com", "jejb@linux.ibm.com",
 "martin.petersen@oracle.com", "asutoshd@codeaurora.org",
 "beanhuo@micron.com", "stanley.chu@mediatek.com", "cang@codeaurora.org",
 "bvanassche@acm.org", "tomas.winkler@intel.com", ALIM AKHTAR
CC: "linux-scsi@vger.kernel.org", "linux-kernel@vger.kernel.org",
 Sang-yoon Oh, Sung-Jun Park, yongmyung lee, Jinyoung CHOI, Adel Choi,
 BoRam Shin
Date: Wed, 05 Aug 2020 13:58:48 +0900
Message-ID: <231786897.01596604981993.JavaMail.epsvc@epcpadp2>

This patch adds parameters to be used by the UFS feature layer and the
HPB module.
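As context (an editorial sketch, not part of this patch): the new
definitions are consumed later in this series, where the core driver
caches bUFSFeaturesSupport from the device descriptor and tests the HPB
bit before scheduling HPB initialization. Roughly:

	/* sketch: probing HPB support with the parameters added here */
	dev_info->b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT];

	if (dev_info->b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)
		/* the device advertises HPB (bit 7) */
		ufshpb_scan_feature(hba);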
Reviewed-by: Can Guo
Tested-by: Bean Huo
Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/ufs.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index f8ab16f30fdc..ae557b8d3eba 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -122,6 +122,7 @@ enum flag_idn {
 	QUERY_FLAG_IDN_WB_EN = 0x0E,
 	QUERY_FLAG_IDN_WB_BUFF_FLUSH_EN = 0x0F,
 	QUERY_FLAG_IDN_WB_BUFF_FLUSH_DURING_HIBERN8 = 0x10,
+	QUERY_FLAG_IDN_HPB_RESET = 0x11,
 };
 
 /* Attribute idn for Query requests */
@@ -195,6 +196,9 @@ enum unit_desc_param {
 	UNIT_DESC_PARAM_PHY_MEM_RSRC_CNT = 0x18,
 	UNIT_DESC_PARAM_CTX_CAPABILITIES = 0x20,
 	UNIT_DESC_PARAM_LARGE_UNIT_SIZE_M1 = 0x22,
+	UNIT_DESC_HPB_LU_MAX_ACTIVE_REGIONS = 0x23,
+	UNIT_DESC_HPB_LU_PIN_REGION_START_OFFSET = 0x25,
+	UNIT_DESC_HPB_LU_NUM_PIN_REGIONS = 0x27,
 	UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS = 0x29,
 };
 
@@ -235,6 +239,8 @@ enum device_desc_param {
 	DEVICE_DESC_PARAM_PSA_MAX_DATA = 0x25,
 	DEVICE_DESC_PARAM_PSA_TMT = 0x29,
 	DEVICE_DESC_PARAM_PRDCT_REV = 0x2A,
+	DEVICE_DESC_PARAM_HPB_VER = 0x40,
+	DEVICE_DESC_PARAM_HPB_CONTROL = 0x42,
 	DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F,
 	DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53,
 	DEVICE_DESC_PARAM_WB_TYPE = 0x54,
@@ -283,6 +289,10 @@ enum geometry_desc_param {
 	GEOMETRY_DESC_PARAM_ENM4_MAX_NUM_UNITS = 0x3E,
 	GEOMETRY_DESC_PARAM_ENM4_CAP_ADJ_FCTR = 0x42,
 	GEOMETRY_DESC_PARAM_OPT_LOG_BLK_SIZE = 0x44,
+	GEOMETRY_DESC_HPB_REGION_SIZE = 0x48,
+	GEOMETRY_DESC_HPB_NUMBER_LU = 0x49,
+	GEOMETRY_DESC_HPB_SUBREGION_SIZE = 0x4A,
+	GEOMETRY_DESC_HPB_DEVICE_MAX_ACTIVE_REGIONS = 0x4B,
 	GEOMETRY_DESC_PARAM_WB_MAX_ALLOC_UNITS = 0x4F,
 	GEOMETRY_DESC_PARAM_WB_MAX_WB_LUNS = 0x53,
 	GEOMETRY_DESC_PARAM_WB_BUFF_CAP_ADJ = 0x54,
@@ -327,6 +337,7 @@ enum {
 
 /* Possible values for dExtendedUFSFeaturesSupport */
 enum {
+	UFS_DEV_HPB_SUPPORT = BIT(7),
 	UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
 };
 
@@ -537,6 +548,7 @@ struct ufs_dev_info {
 	u8 *model;
 	u16 wspecversion;
 	u32 clk_gating_wait_us;
+	u8 b_ufs_feature_sup;
 	u32 d_ext_ufs_feature_sup;
 	u8 b_wb_buffer_type;
 	u32 d_wb_alloc_units;

From patchwork Wed Aug 5 05:25:24 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 11701335
Subject: [PATCH v7 2/4] scsi: ufs: Introduce HPB feature
From: Daejun Park
Reply-To: daejun7.park@samsung.com
In-Reply-To: <231786897.01596604981993.JavaMail.epsvc@epcpadp2>
Date: Wed, 05 Aug 2020 14:25:24 +0900
Message-ID: <963815509.21596605402187.JavaMail.epsvc@epcpadp1>

This is a patch for the HPB feature: it adds the HPB function calls to
the UFS core driver. The minimum size of the memory pool used by the
HPB module is a Kconfig parameter (SCSI_UFS_HPB_HOST_MEM), so that it
can be configured.

Reviewed-by: Can Guo
Tested-by: Bean Huo
Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/Kconfig  |  18 +
 drivers/scsi/ufs/Makefile |   1 +
 drivers/scsi/ufs/ufshcd.c |  42 +++
 drivers/scsi/ufs/ufshcd.h |   9 +
 drivers/scsi/ufs/ufshpb.c | 738 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshpb.h | 169 +++++++++
 6 files changed, 977 insertions(+)
 create mode 100644 drivers/scsi/ufs/ufshpb.c
 create mode 100644 drivers/scsi/ufs/ufshpb.h

diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index f6394999b98c..33296478f411 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -182,3 +182,21 @@ config SCSI_UFS_CRYPTO
 	  Enabling this makes it possible for the kernel to use the crypto
 	  capabilities of the UFS device (if present) to perform crypto
 	  operations on data being transferred to/from the device.
+
+config SCSI_UFS_HPB
+	bool "Support UFS Host Performance Booster"
+	depends on SCSI_UFSHCD
+	help
+	  The UFS HPB feature improves random read performance. It caches
+	  the L2P map of the UFS device in host DRAM. The driver uses the
+	  HPB read command, which piggybacks the physical page number, to
+	  bypass the FTL's L2P address translation.
+
+config SCSI_UFS_HPB_HOST_MEM
+	int "Host-side cached memory size (KB) for HPB support"
+	default 32
+	depends on SCSI_UFS_HPB
+	help
+	  The minimum size of the memory pool used by the HPB module. It
+	  can be configured by the user. If this value is larger than the
+	  required memory size, the kernel resizes the cached memory.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 4679af1b564e..663e17cee359 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -11,6 +11,7 @@ obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
 ufshcd-core-y				+= ufshcd.o ufs-sysfs.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG)	+= ufs_bsg.o
 ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO)	+= ufshcd-crypto.o
+ufshcd-core-$(CONFIG_SCSI_UFS_HPB)	+= ufshpb.o
 obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index cdff7e5ee588..a99afdcf8dc0 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -234,6 +234,17 @@ static int ufshcd_wb_ctrl(struct ufs_hba *hba, bool enable);
 static int ufshcd_wb_toggle_flush_during_h8(struct ufs_hba *hba, bool set);
 static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable);
 
+#ifndef CONFIG_SCSI_UFS_HPB
+static void ufshpb_resume(struct ufs_hba *hba) {}
+static void ufshpb_suspend(struct ufs_hba *hba) {}
+static void ufshpb_reset(struct ufs_hba *hba) {}
+static void ufshpb_reset_host(struct ufs_hba *hba) {}
+static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
+static void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
+static void ufshpb_remove(struct ufs_hba *hba) {}
+static void ufshpb_scan_feature(struct ufs_hba *hba) {}
+#endif
+
 static inline bool ufshcd_valid_tag(struct ufs_hba *hba, int tag)
 {
 	return tag >= 0 && tag < hba->nutrs;
@@ -2559,6 +2570,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 
 	ufshcd_comp_scsi_upiu(hba, lrbp);
 
+	ufshpb_prep(hba, lrbp);
+
 	err = ufshcd_map_sg(hba, lrbp);
 	if (err) {
 		lrbp->cmd = NULL;
@@ -4681,6 +4694,19 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
 	return scsi_change_queue_depth(sdev, depth);
 }
 
+static void ufshcd_hpb_configure(struct ufs_hba *hba, struct scsi_device *sdev)
+{
+	/* skip well-known LU */
+	if (sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID)
+		return;
+
+	if (!(hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT))
+		return;
+
+	atomic_inc(&hba->ufsf.slave_conf_cnt);
+	wake_up(&hba->ufsf.sdev_wait);
+}
+
 /**
  * ufshcd_slave_configure - adjust SCSI device configurations
  * @sdev: pointer to SCSI device
@@ -4690,6 +4716,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 	struct ufs_hba *hba = shost_priv(sdev->host);
 	struct request_queue *q = sdev->request_queue;
 
+	ufshcd_hpb_configure(hba, sdev);
+
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 
 	if (ufshcd_is_rpm_autosuspend_allowed(hba))
@@ -4818,6 +4846,9 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 				 */
 				pm_runtime_get_noresume(hba->dev);
 			}
+
+			if (scsi_status == SAM_STAT_GOOD)
+				ufshpb_rsp_upiu(hba, lrbp);
 			break;
 		case UPIU_TRANSACTION_REJECT_UPIU:
 			/* TODO: handle Reject UPIU Response */
@@ -6569,6 +6600,8 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
 	 * Stop the host controller and complete the requests
 	 * cleared by h/w
 	 */
+	ufshpb_reset_host(hba);
+
 	ufshcd_hba_stop(hba);
 	spin_lock_irqsave(hba->host->host_lock, flags);
@@ -7003,6 +7036,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
 	/* getting Specification Version in big endian format */
 	dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 |
 				 desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1];
+ dev_info->b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT]; model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME]; @@ -7373,6 +7407,7 @@ static int ufshcd_add_lus(struct ufs_hba *hba) } ufs_bsg_probe(hba); + ufshpb_scan_feature(hba); scsi_scan_host(hba->host); pm_runtime_put_sync(hba->dev); @@ -7461,6 +7496,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async) /* Enable Auto-Hibernate if configured */ ufshcd_auto_hibern8_enable(hba); + ufshpb_reset(hba); out: trace_ufshcd_init(dev_name(hba->dev), ret, @@ -8218,6 +8254,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) req_link_state = UIC_LINK_OFF_STATE; } + ufshpb_suspend(hba); + /* * If we can't transition into any of the low power modes * just gate the clocks. @@ -8339,6 +8377,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) hba->clk_gating.is_suspended = false; hba->dev_info.b_rpm_dev_flush_capable = false; ufshcd_release(hba); + ufshpb_resume(hba); out: if (hba->dev_info.b_rpm_dev_flush_capable) { schedule_delayed_work(&hba->rpm_dev_flush_recheck_work, @@ -8435,6 +8474,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) /* Enable Auto-Hibernate if configured */ ufshcd_auto_hibern8_enable(hba); + ufshpb_resume(hba); + if (hba->dev_info.b_rpm_dev_flush_capable) { hba->dev_info.b_rpm_dev_flush_capable = false; cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); @@ -8659,6 +8700,7 @@ EXPORT_SYMBOL(ufshcd_shutdown); void ufshcd_remove(struct ufs_hba *hba) { ufs_bsg_remove(hba); + ufshpb_remove(hba); ufs_sysfs_remove_nodes(hba->dev); blk_cleanup_queue(hba->tmf_queue); blk_mq_free_tag_set(&hba->tmf_tag_set); diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index b2ef18f1b746..904c19796e09 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -47,6 +47,9 @@ #include "ufs.h" #include "ufs_quirks.h" #include "ufshci.h" +#ifdef CONFIG_SCSI_UFS_HPB +#include "ufshpb.h" +#endif #define UFSHCD "ufshcd" #define UFSHCD_DRIVER_VERSION "0.2" @@ -579,6 +582,11 @@ struct ufs_hba_variant_params { u32 wb_flush_threshold; }; +struct ufsf_feature_info { + atomic_t slave_conf_cnt; + wait_queue_head_t sdev_wait; +}; + /** * struct ufs_hba - per adapter private structure * @mmio_base: UFSHCI base register address @@ -757,6 +765,7 @@ struct ufs_hba { bool wb_enabled; struct delayed_work rpm_dev_flush_recheck_work; + struct ufsf_feature_info ufsf; #ifdef CONFIG_SCSI_UFS_CRYPTO union ufs_crypto_capabilities crypto_capabilities; union ufs_crypto_cap_entry *crypto_cap_array; diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c new file mode 100644 index 000000000000..e1f9c68ae415 --- /dev/null +++ b/drivers/scsi/ufs/ufshpb.c @@ -0,0 +1,738 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Universal Flash Storage Host Performance Booster + * + * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd. 
+ * + * Authors: + * Yongmyung Lee + * Jinyoung Choi + */ + +#include +#include + +#include "ufshcd.h" +#include "ufshpb.h" + +/* SYSFS functions */ +#define ufshpb_sysfs_attr_show_func(__name) \ +static ssize_t __name##_show(struct ufshpb_lu *hpb, char *buf) \ +{ \ + return snprintf(buf, PAGE_SIZE, "%d\n", \ + atomic_read(&hpb->stats.__name)); \ +} + +#define HPB_ATTR_RO(_name) \ + struct ufshpb_sysfs_entry hpb_attr_##_name = __ATTR_RO(_name) + +/* HPB enabled lu list */ +static LIST_HEAD(lh_hpb_lu); + +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb); + +static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn, + struct ufshpb_subregion *srgn) +{ + return rgn->rgn_state != HPB_RGN_INACTIVE && + srgn->srgn_state == HPB_SRGN_VALID; +} + +static inline int ufshpb_get_state(struct ufshpb_lu *hpb) +{ + return atomic_read(&hpb->hpb_state); +} + +static inline void ufshpb_set_state(struct ufshpb_lu *hpb, int state) +{ + atomic_set(&hpb->hpb_state, state); +} + +void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) +{ +} + +void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) +{ +} + +static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn) +{ + int srgn_idx; + + for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) { + struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx; + + srgn->rgn_idx = rgn->rgn_idx; + srgn->srgn_idx = srgn_idx; + srgn->srgn_state = HPB_SRGN_UNUSED; + } +} + +static inline int ufshpb_alloc_subregion_tbl(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn, + int srgn_cnt) +{ + rgn->srgn_tbl = kvcalloc(srgn_cnt, sizeof(struct ufshpb_subregion), + GFP_KERNEL); + if (!rgn->srgn_tbl) + return -ENOMEM; + + rgn->srgn_cnt = srgn_cnt; + return 0; +} + +static void ufshpb_init_lu_parameter(struct ufs_hba *hba, + struct ufshpb_lu *hpb, + struct ufshpb_dev_info *hpb_dev_info, + struct ufshpb_lu_info *hpb_lu_info) +{ + u32 entries_per_rgn; + u64 rgn_mem_size; + + hpb->lu_pinned_start = hpb_lu_info->pinned_start; + hpb->lu_pinned_end = hpb_lu_info->num_pinned ? 
+ (hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1) + : PINNED_NOT_SET; + + rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT + / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE; + hpb->srgn_mem_size = (1ULL << hpb_dev_info->srgn_size) + * HPB_RGN_SIZE_UNIT / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE; + + entries_per_rgn = rgn_mem_size / HPB_ENTRY_SIZE; + hpb->entries_per_rgn_shift = ilog2(entries_per_rgn); + hpb->entries_per_rgn_mask = entries_per_rgn - 1; + + hpb->entries_per_srgn = hpb->srgn_mem_size / HPB_ENTRY_SIZE; + hpb->entries_per_srgn_shift = ilog2(hpb->entries_per_srgn); + hpb->entries_per_srgn_mask = hpb->entries_per_srgn - 1; + + hpb->srgns_per_rgn = rgn_mem_size / hpb->srgn_mem_size; + + hpb->rgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks, + (rgn_mem_size / HPB_ENTRY_SIZE)); + hpb->srgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks, + (hpb->srgn_mem_size / HPB_ENTRY_SIZE)); + + hpb->pages_per_srgn = hpb->srgn_mem_size / PAGE_SIZE; + + dev_info(hba->dev, "ufshpb(%d): region memory size - %llu (bytes)\n", + hpb->lun, rgn_mem_size); + dev_info(hba->dev, "ufshpb(%d): subregion memory size - %u (bytes)\n", + hpb->lun, hpb->srgn_mem_size); + dev_info(hba->dev, "ufshpb(%d): total blocks per lu - %d\n", + hpb->lun, hpb_lu_info->num_blocks); + dev_info(hba->dev, "ufshpb(%d): subregions per region - %d, regions per lu - %u", + hpb->lun, hpb->srgns_per_rgn, hpb->rgns_per_lu); +} + +static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) +{ + struct ufshpb_region *rgn_table, *rgn; + int rgn_idx, i; + int ret = 0; + + rgn_table = kvcalloc(hpb->rgns_per_lu, sizeof(struct ufshpb_region), + GFP_KERNEL); + if (!rgn_table) + return -ENOMEM; + + hpb->rgn_tbl = rgn_table; + + for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) { + int srgn_cnt = hpb->srgns_per_rgn; + + rgn = rgn_table + rgn_idx; + rgn->rgn_idx = rgn_idx; + + if (rgn_idx == hpb->rgns_per_lu - 1) + srgn_cnt = ((hpb->srgns_per_lu - 1) % + hpb->srgns_per_rgn) + 1; + + ret = ufshpb_alloc_subregion_tbl(hpb, rgn, srgn_cnt); + if (ret) + goto release_srgn_table; + ufshpb_init_subregion_tbl(hpb, rgn); + + rgn->rgn_state = HPB_RGN_INACTIVE; + } + + return 0; + +release_srgn_table: + for (i = 0; i < rgn_idx; i++) { + rgn = rgn_table + i; + if (rgn->srgn_tbl) + kvfree(rgn->srgn_tbl); + } + kvfree(rgn_table); + return ret; +} + +static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn) +{ + int srgn_idx; + + for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) { + struct ufshpb_subregion *srgn; + + srgn = rgn->srgn_tbl + srgn_idx; + srgn->srgn_state = HPB_SRGN_UNUSED; + } +} + +static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb) +{ + int rgn_idx; + + for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) { + struct ufshpb_region *rgn; + + rgn = hpb->rgn_tbl + rgn_idx; + if (rgn->rgn_state != HPB_RGN_INACTIVE) { + rgn->rgn_state = HPB_RGN_INACTIVE; + + ufshpb_destroy_subregion_tbl(hpb, rgn); + } + + kvfree(rgn->srgn_tbl); + } + + kvfree(hpb->rgn_tbl); +} + +static void ufshpb_stat_init(struct ufshpb_lu *hpb) +{ + atomic_set(&hpb->stats.hit_cnt, 0); + atomic_set(&hpb->stats.miss_cnt, 0); + atomic_set(&hpb->stats.rb_noti_cnt, 0); + atomic_set(&hpb->stats.rb_active_cnt, 0); + atomic_set(&hpb->stats.rb_inactive_cnt, 0); + atomic_set(&hpb->stats.map_req_cnt, 0); +} + +struct ufshpb_sysfs_entry { + struct attribute attr; + ssize_t (*show)(struct ufshpb_lu *hpb, char *page); + ssize_t (*store)(struct ufshpb_lu *hpb, const char *page, size_t len); +}; + 
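+/*
+ * Editorial aside, not part of the original patch: each
+ * ufshpb_sysfs_attr_show_func(name) line below expands, via the macro
+ * defined at the top of this file, to a read-only show helper; for
+ * hit_cnt it is roughly:
+ *
+ *	static ssize_t hit_cnt_show(struct ufshpb_lu *hpb, char *buf)
+ *	{
+ *		return snprintf(buf, PAGE_SIZE, "%d\n",
+ *				atomic_read(&hpb->stats.hit_cnt));
+ *	}
+ */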
+ufshpb_sysfs_attr_show_func(hit_cnt); +ufshpb_sysfs_attr_show_func(miss_cnt); +ufshpb_sysfs_attr_show_func(rb_noti_cnt); +ufshpb_sysfs_attr_show_func(rb_active_cnt); +ufshpb_sysfs_attr_show_func(rb_inactive_cnt); +ufshpb_sysfs_attr_show_func(map_req_cnt); + +static HPB_ATTR_RO(hit_cnt); +static HPB_ATTR_RO(miss_cnt); +static HPB_ATTR_RO(rb_noti_cnt); +static HPB_ATTR_RO(rb_active_cnt); +static HPB_ATTR_RO(rb_inactive_cnt); +static HPB_ATTR_RO(map_req_cnt); + +static struct attribute *hpb_dev_attrs[] = { + &hpb_attr_hit_cnt.attr, + &hpb_attr_miss_cnt.attr, + &hpb_attr_rb_noti_cnt.attr, + &hpb_attr_rb_active_cnt.attr, + &hpb_attr_rb_inactive_cnt.attr, + &hpb_attr_map_req_cnt.attr, + NULL, +}; + +static struct attribute_group ufshpb_sysfs_group = { + .attrs = hpb_dev_attrs, +}; + +static ssize_t ufshpb_attr_show(struct kobject *kobj, struct attribute *attr, + char *page) +{ + struct ufshpb_sysfs_entry *entry; + struct ufshpb_lu *hpb; + ssize_t error; + + entry = container_of(attr, struct ufshpb_sysfs_entry, attr); + hpb = container_of(kobj, struct ufshpb_lu, kobj); + + if (!entry->show) + return -EIO; + + mutex_lock(&hpb->sysfs_lock); + error = entry->show(hpb, page); + mutex_unlock(&hpb->sysfs_lock); + return error; +} + +static ssize_t ufshpb_attr_store(struct kobject *kobj, struct attribute *attr, + const char *page, size_t len) +{ + struct ufshpb_sysfs_entry *entry; + struct ufshpb_lu *hpb; + ssize_t error; + + entry = container_of(attr, struct ufshpb_sysfs_entry, attr); + hpb = container_of(kobj, struct ufshpb_lu, kobj); + + if (!entry->store) + return -EIO; + + mutex_lock(&hpb->sysfs_lock); + error = entry->store(hpb, page, len); + mutex_unlock(&hpb->sysfs_lock); + return error; +} + +static const struct sysfs_ops ufshpb_sysfs_ops = { + .show = ufshpb_attr_show, + .store = ufshpb_attr_store, +}; + +static struct kobj_type ufshpb_ktype = { + .sysfs_ops = &ufshpb_sysfs_ops, + .release = NULL, +}; + +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb) +{ + int ret; + + ufshpb_stat_init(hpb); + + kobject_init(&hpb->kobj, &ufshpb_ktype); + mutex_init(&hpb->sysfs_lock); + + ret = kobject_add(&hpb->kobj, kobject_get(&hba->dev->kobj), + "ufshpb_lu%d", hpb->lun); + + if (ret) + return ret; + + ret = sysfs_create_group(&hpb->kobj, &ufshpb_sysfs_group); + + if (ret) { + dev_err(hba->dev, "ufshpb_lu%d create file error\n", hpb->lun); + return ret; + } + + dev_info(hba->dev, "ufshpb_lu%d sysfs adds uevent", hpb->lun); + kobject_uevent(&hpb->kobj, KOBJ_ADD); + + return 0; +} + +static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb, + struct ufshpb_dev_info *hpb_dev_info) +{ + int ret; + + spin_lock_init(&hpb->hpb_state_lock); + + ret = ufshpb_alloc_region_tbl(hba, hpb); + if (ret) + return ret; + + ret = ufshpb_create_sysfs(hba, hpb); + if (ret) + goto release_rgn_table; + + return 0; + +release_rgn_table: + ufshpb_destroy_region_tbl(hpb); + return ret; +} + +static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun, + struct ufshpb_dev_info *hpb_dev_info, + struct ufshpb_lu_info *hpb_lu_info) +{ + struct ufshpb_lu *hpb; + int ret; + + hpb = kzalloc(sizeof(struct ufshpb_lu), GFP_KERNEL); + if (!hpb) + return NULL; + + hpb->ufsf = &hba->ufsf; + hpb->lun = lun; + + ufshpb_init_lu_parameter(hba, hpb, hpb_dev_info, hpb_lu_info); + + ret = ufshpb_lu_hpb_init(hba, hpb, hpb_dev_info); + if (ret) { + dev_err(hba->dev, "hpb lu init failed. 
ret %d", ret); + goto release_hpb; + } + + return hpb; + +release_hpb: + kfree(hpb); + return NULL; +} + +static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba) +{ + int err; + int retries; + + for (retries = 0; retries < HPB_RESET_REQ_RETRIES; retries++) { + err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG, + QUERY_FLAG_IDN_HPB_RESET, 0, NULL); + if (err) + dev_dbg(hba->dev, + "%s: failed with error %d, retries %d\n", + __func__, err, retries); + else + break; + } + + if (err) { + dev_err(hba->dev, + "%s setting fHpbReset flag failed with error %d\n", + __func__, err); + return; + } +} + +static void ufshpb_check_hpb_reset_query(struct ufs_hba *hba) +{ + int err; + bool flag_res = true; + int try = 0; + + /* wait for the device to complete HPB reset query */ + do { + if (++try == HPB_RESET_REQ_RETRIES) + break; + + dev_info(hba->dev, + "%s start flag reset polling %d times\n", + __func__, try); + + /* Poll fHpbReset flag to be cleared */ + err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG, + QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res); + usleep_range(1000, 1100); + } while (flag_res); + + if (err) { + dev_err(hba->dev, + "%s reading fHpbReset flag failed with error %d\n", + __func__, err); + return; + } + + if (flag_res) { + dev_err(hba->dev, + "%s fHpbReset was not cleared by the device\n", + __func__); + } +} + +void ufshpb_reset(struct ufs_hba *hba) +{ + struct ufshpb_lu *hpb; + + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) + ufshpb_set_state(hpb, HPB_PRESENT); +} + +void ufshpb_reset_host(struct ufs_hba *hba) +{ + struct ufshpb_lu *hpb; + + dev_info(hba->dev, "ufshpb run reset_host"); + + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) + ufshpb_set_state(hpb, HPB_RESET); +} + +void ufshpb_suspend(struct ufs_hba *hba) +{ + struct ufshpb_lu *hpb; + + dev_info(hba->dev, "ufshpb goto suspend"); + + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) + ufshpb_set_state(hpb, HPB_SUSPEND); +} + +void ufshpb_resume(struct ufs_hba *hba) +{ + struct ufshpb_lu *hpb; + + dev_info(hba->dev, "ufshpb resume"); + + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) + ufshpb_set_state(hpb, HPB_PRESENT); +} + +static int ufshpb_read_desc(struct ufs_hba *hba, u8 desc_id, u8 desc_index, + u8 selector, u8 *desc_buf) +{ + int err = 0; + int size; + + ufshcd_map_desc_id_to_length(hba, desc_id, &size); + + pm_runtime_get_sync(hba->dev); + + err = ufshcd_query_descriptor_retry(hba, UPIU_QUERY_OPCODE_READ_DESC, + desc_id, desc_index, + selector, + desc_buf, &size); + if (err) + dev_err(hba->dev, "read desc failed: %d, id %d, idx %d\n", + err, desc_id, desc_index); + + pm_runtime_put_sync(hba->dev); + + return err; +} + +static int ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf, + struct ufshpb_dev_info *hpb_dev_info) +{ + int hpb_device_max_active_rgns = 0; + int hpb_num_lu; + + hpb_num_lu = geo_buf[GEOMETRY_DESC_HPB_NUMBER_LU]; + if (hpb_num_lu == 0) { + dev_err(hba->dev, "No HPB LU supported\n"); + return -ENODEV; + } + + hpb_dev_info->rgn_size = geo_buf[GEOMETRY_DESC_HPB_REGION_SIZE]; + hpb_dev_info->srgn_size = geo_buf[GEOMETRY_DESC_HPB_SUBREGION_SIZE]; + hpb_device_max_active_rgns = + get_unaligned_be16(geo_buf + + GEOMETRY_DESC_HPB_DEVICE_MAX_ACTIVE_REGIONS); + + if (hpb_dev_info->rgn_size == 0 || hpb_dev_info->srgn_size == 0 || + hpb_device_max_active_rgns == 0) { + dev_err(hba->dev, "No HPB supported device\n"); + return -ENODEV; + } + + return 0; +} + +static int ufshpb_get_dev_info(struct ufs_hba *hba, + struct ufshpb_dev_info *hpb_dev_info, + u8 *desc_buf) +{ + int ret; + 
int version; + u8 hpb_mode; + + ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_DEVICE, 0, 0, desc_buf); + if (ret) { + dev_err(hba->dev, "%s: idn: %d query request failed\n", + __func__, QUERY_DESC_IDN_DEVICE); + return -ENODEV; + } + + hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; + if (hpb_mode == HPB_HOST_CONTROL) { + dev_err(hba->dev, "%s: host control mode is not supported.\n", + __func__); + return -ENODEV; + } + + version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); + if (version != HPB_SUPPORT_VERSION) { + dev_err(hba->dev, "%s: HPB %x version is not supported.\n", + __func__, version); + return -ENODEV; + } + + /* + * Get the number of user logical unit to check whether all + * scsi_device finish initialization + */ + hpb_dev_info->num_lu = desc_buf[DEVICE_DESC_PARAM_NUM_LU]; + + ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_GEOMETRY, 0, 0, desc_buf); + if (ret) { + dev_err(hba->dev, "%s: idn: %d query request failed\n", + __func__, QUERY_DESC_IDN_DEVICE); + return ret; + } + + ret = ufshpb_get_geo_info(hba, desc_buf, hpb_dev_info); + if (ret) + return ret; + + return 0; +} + +static int ufshpb_get_lu_info(struct ufs_hba *hba, int lun, + struct ufshpb_lu_info *hpb_lu_info, + u8 *desc_buf) +{ + u16 max_active_rgns; + u8 lu_enable; + int ret; + + ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_UNIT, lun, 0, desc_buf); + if (ret) { + dev_err(hba->dev, + "%s: idn: %d lun: %d query request failed", + __func__, QUERY_DESC_IDN_UNIT, lun); + return ret; + } + + lu_enable = desc_buf[UNIT_DESC_PARAM_LU_ENABLE]; + if (lu_enable != LU_ENABLED_HPB_FUNC) + return -ENODEV; + + max_active_rgns = get_unaligned_be16( + desc_buf + UNIT_DESC_HPB_LU_MAX_ACTIVE_REGIONS); + if (!max_active_rgns) { + dev_err(hba->dev, + "lun %d wrong number of max active regions\n", lun); + return -ENODEV; + } + + hpb_lu_info->num_blocks = get_unaligned_be64( + desc_buf + UNIT_DESC_PARAM_LOGICAL_BLK_COUNT); + hpb_lu_info->pinned_start = get_unaligned_be16( + desc_buf + UNIT_DESC_HPB_LU_PIN_REGION_START_OFFSET); + hpb_lu_info->num_pinned = get_unaligned_be16( + desc_buf + UNIT_DESC_HPB_LU_NUM_PIN_REGIONS); + hpb_lu_info->max_active_rgns = max_active_rgns; + + return 0; +} + +static void ufshpb_scan_hpb_lu(struct ufs_hba *hba, + struct ufshpb_dev_info *hpb_dev_info, + u8 *desc_buf) +{ + struct scsi_device *sdev; + struct ufshpb_lu *hpb; + int find_hpb_lu = 0; + int ret; + + shost_for_each_device(sdev, hba->host) { + struct ufshpb_lu_info hpb_lu_info = { 0 }; + int lun = sdev->lun; + + if (lun >= hba->dev_info.max_lu_supported) + continue; + + ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info, desc_buf); + if (ret) + continue; + + hpb = ufshpb_alloc_hpb_lu(hba, lun, hpb_dev_info, + &hpb_lu_info); + if (!hpb) + continue; + + hpb->sdev_ufs_lu = sdev; + sdev->hostdata = hpb; + + list_add_tail(&hpb->list_hpb_lu, &lh_hpb_lu); + find_hpb_lu++; + } + + if (!find_hpb_lu) + return; + + ufshpb_check_hpb_reset_query(hba); + + list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) { + dev_info(hba->dev, "set state to present\n"); + ufshpb_set_state(hpb, HPB_PRESENT); + } +} + +static void ufshpb_init(void *data, async_cookie_t cookie) +{ + struct ufsf_feature_info *ufsf = (struct ufsf_feature_info *)data; + struct ufs_hba *hba; + struct ufshpb_dev_info hpb_dev_info = { 0 }; + char *desc_buf; + int ret; + + hba = container_of(ufsf, struct ufs_hba, ufsf); + + desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL); + if (!desc_buf) + goto release_desc_buf; + + ret = ufshpb_get_dev_info(hba, &hpb_dev_info, desc_buf); + if (ret) + goto 
release_desc_buf; + + /* + * Because HPB driver uses scsi_device data structure, + * we should wait at this point until finishing initialization of all + * scsi devices. Even if timeout occurs, HPB driver will search + * the scsi_device list on struct scsi_host (shost->__host list_head) + * and can find out HPB logical units in all scsi_devices + */ + wait_event_timeout(hba->ufsf.sdev_wait, + (atomic_read(&hba->ufsf.slave_conf_cnt) + == hpb_dev_info.num_lu), + SDEV_WAIT_TIMEOUT); + + ufshpb_issue_hpb_reset_query(hba); + + dev_dbg(hba->dev, "ufshpb: slave count %d, lu count %d\n", + atomic_read(&hba->ufsf.slave_conf_cnt), hpb_dev_info.num_lu); + + ufshpb_scan_hpb_lu(hba, &hpb_dev_info, desc_buf); + +release_desc_buf: + kfree(desc_buf); +} + +static inline void ufshpb_remove_sysfs(struct ufshpb_lu *hpb) +{ + kobject_uevent(&hpb->kobj, KOBJ_REMOVE); + dev_info(&hpb->sdev_ufs_lu->sdev_dev, + "ufshpb removes sysfs lu %d %p", hpb->lun, &hpb->kobj); + kobject_del(&hpb->kobj); +} + +void ufshpb_remove(struct ufs_hba *hba) +{ + struct ufshpb_lu *hpb, *n_hpb; + struct ufsf_feature_info *ufsf; + struct scsi_device *sdev; + + ufsf = &hba->ufsf; + + list_for_each_entry_safe(hpb, n_hpb, &lh_hpb_lu, list_hpb_lu) { + ufshpb_set_state(hpb, HPB_FAILED); + + sdev = hpb->sdev_ufs_lu; + sdev->hostdata = NULL; + + ufshpb_destroy_region_tbl(hpb); + + list_del_init(&hpb->list_hpb_lu); + ufshpb_remove_sysfs(hpb); + + kfree(hpb); + } + + dev_info(hba->dev, "ufshpb: remove success\n"); +} + +void ufshpb_scan_feature(struct ufs_hba *hba) +{ + init_waitqueue_head(&hba->ufsf.sdev_wait); + atomic_set(&hba->ufsf.slave_conf_cnt, 0); + + if (hba->dev_info.wspecversion >= HPB_SUPPORT_VERSION && + (hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) + async_schedule(ufshpb_init, &hba->ufsf); +} diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h new file mode 100644 index 000000000000..b91b447ed0c8 --- /dev/null +++ b/drivers/scsi/ufs/ufshpb.h @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Universal Flash Storage Host Performance Booster + * + * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd. 
+ *
+ * Authors:
+ *	Yongmyung Lee
+ *	Jinyoung Choi
+ */
+
+#ifndef _UFSHPB_H_
+#define _UFSHPB_H_
+
+/* hpb response UPIU macro */
+#define MAX_ACTIVE_NUM			2
+#define MAX_INACTIVE_NUM		2
+#define HPB_RSP_NONE			0x00
+#define HPB_RSP_REQ_REGION_UPDATE	0x01
+#define HPB_RSP_DEV_RESET		0x02
+#define DEV_DATA_SEG_LEN		0x14
+#define DEV_SENSE_SEG_LEN		0x12
+#define DEV_DES_TYPE			0x80
+#define DEV_ADDITIONAL_LEN		0x10
+
+/* hpb map & entries macro */
+#define HPB_RGN_SIZE_UNIT		512
+#define HPB_ENTRY_BLOCK_SIZE		4096
+#define HPB_ENTRY_SIZE			0x8
+#define PINNED_NOT_SET			U32_MAX
+
+/* hpb support chunk size */
+#define HPB_MULTI_CHUNK_HIGH		1
+
+/* hpb vendor defined opcode */
+#define UFSHPB_READ			0xF8
+#define UFSHPB_READ_BUFFER		0xF9
+#define UFSHPB_READ_BUFFER_ID		0x01
+#define HPB_READ_BUFFER_CMD_LENGTH	10
+#define LU_ENABLED_HPB_FUNC		0x02
+
+#define SDEV_WAIT_TIMEOUT		(10 * HZ)
+#define MAP_REQ_TIMEOUT			(30 * HZ)
+#define HPB_RESET_REQ_RETRIES		10
+#define HPB_RESET_REQ_MSLEEP		2
+
+#define HPB_SUPPORT_VERSION		0x100
+
+enum UFSHPB_MODE {
+	HPB_HOST_CONTROL,
+	HPB_DEVICE_CONTROL,
+};
+
+enum UFSHPB_STATE {
+	HPB_PRESENT = 1,
+	HPB_SUSPEND,
+	HPB_FAILED,
+	HPB_RESET,
+};
+
+enum HPB_RGN_STATE {
+	HPB_RGN_INACTIVE,
+	HPB_RGN_ACTIVE,
+	/* pinned regions are always active */
+	HPB_RGN_PINNED,
+};
+
+enum HPB_SRGN_STATE {
+	HPB_SRGN_UNUSED,
+	HPB_SRGN_INVALID,
+	HPB_SRGN_VALID,
+	HPB_SRGN_ISSUED,
+};
+
+/**
+ * struct ufshpb_dev_info - UFSHPB device related info
+ * @num_lu: the number of user logical units; used to check whether all
+ *	LUs have finished initialization
+ * @rgn_size: device reported HPB region size
+ * @srgn_size: device reported HPB sub-region size
+ */
+struct ufshpb_dev_info {
+	int num_lu;
+	int rgn_size;
+	int srgn_size;
+};
+
+/**
+ * struct ufshpb_lu_info - UFSHPB logical unit related info
+ * @num_blocks: the number of logical blocks
+ * @pinned_start: the start region number of the pinned region
+ * @num_pinned: the number of pinned regions
+ * @max_active_rgns: maximum number of active regions
+ */
+struct ufshpb_lu_info {
+	int num_blocks;
+	int pinned_start;
+	int num_pinned;
+	int max_active_rgns;
+};
+
+struct ufshpb_subregion {
+	enum HPB_SRGN_STATE srgn_state;
+	int rgn_idx;
+	int srgn_idx;
+};
+
+struct ufshpb_region {
+	struct ufshpb_subregion *srgn_tbl;
+	enum HPB_RGN_STATE rgn_state;
+	int rgn_idx;
+	int srgn_cnt;
+};
+
+struct ufshpb_stats {
+	atomic_t hit_cnt;
+	atomic_t miss_cnt;
+	atomic_t rb_noti_cnt;
+	atomic_t rb_active_cnt;
+	atomic_t rb_inactive_cnt;
+	atomic_t map_req_cnt;
+};
+
+struct ufshpb_lu {
+	int lun;
+	struct scsi_device *sdev_ufs_lu;
+	struct ufshpb_region *rgn_tbl;
+
+	struct kobject kobj;
+	struct mutex sysfs_lock;
+
+	spinlock_t hpb_state_lock;
+	atomic_t hpb_state; /* hpb_state_lock */
+
+	/* pinned region information */
+	u32 lu_pinned_start;
+	u32 lu_pinned_end;
+
+	/* HPB related configuration */
+	u32 rgns_per_lu;
+	u32 srgns_per_lu;
+	int srgns_per_rgn;
+	u32 srgn_mem_size;
+	u32 entries_per_rgn_mask;
+	u32 entries_per_rgn_shift;
+	u32 entries_per_srgn;
+	u32 entries_per_srgn_mask;
+	u32 entries_per_srgn_shift;
+	u32 pages_per_srgn;
+
+	struct ufshpb_stats stats;
+
+	struct ufsf_feature_info *ufsf;
+	struct list_head list_hpb_lu;
+};
+
+struct ufs_hba;
+struct ufshcd_lrb;
+
+void ufshpb_resume(struct ufs_hba *hba);
+void ufshpb_suspend(struct ufs_hba *hba);
+void ufshpb_reset(struct ufs_hba *hba);
+void ufshpb_reset_host(struct ufs_hba *hba);
+void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+void ufshpb_scan_feature(struct ufs_hba *hba);
+void ufshpb_remove(struct ufs_hba *hba);
+
+#endif /* End of Header */

From patchwork Wed Aug 5 05:36:59 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 11701349
Subject: [PATCH v7 3/4] scsi: ufs: L2P map management for HPB read
From: Daejun Park
Reply-To: daejun7.park@samsung.com
In-Reply-To: <963815509.21596605402187.JavaMail.epsvc@epcpadp1>
Date: Wed, 05 Aug 2020 14:36:59 +0900
Message-ID: <231786897.01596607201961.JavaMail.epsvc@epcpadp1>

This is a patch for managing the L2P map in the HPB module.

HPB divides logical addresses into several regions, and a region
consists of several sub-regions. The sub-region is the basic unit in
which L2P mapping is managed: the driver loads the L2P mapping data of
each sub-region, and a loaded sub-region is called active-state. The
HPB driver unloads L2P mapping data per region; an unloaded region is
called inactive-state.

Sub-region/region candidates to be loaded and unloaded are delivered by
the UFS device: the device recommends sub-regions to activate and
regions to inactivate through sense data, and the HPB module performs
L2P map management on the host based on the delivered information.

A pinned region is a pre-set region on the UFS device that is always in
active-state.

The data structures for map data requests and the L2P map use the
mempool API, minimizing allocation overhead while avoiding static
allocation.

The map_work manages activation/inactivation via two "to-do" lists that
each HPB LU maintains: hpb->lh_inact_rgn - regions to be inactivated,
and hpb->lh_act_srgn - sub-regions to be activated. Those lists are
maintained on I/O completion.

Reviewed-by: Can Guo
Tested-by: Bean Huo
Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/ufshpb.c | 973 +++++++++++++++++++++++++++++++++++++-
 drivers/scsi/ufs/ufshpb.h |  72 +++
 2 files changed, 1039 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index e1f9c68ae415..25cd7153f102 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -26,6 +26,14 @@ static ssize_t __name##_show(struct ufshpb_lu *hpb, char *buf) \
 #define HPB_ATTR_RO(_name) \
 	struct ufshpb_sysfs_entry hpb_attr_##_name = __ATTR_RO(_name)
 
+/* memory management */
+static struct kmem_cache *ufshpb_mctx_cache;
+static mempool_t *ufshpb_mctx_pool;
+static mempool_t *ufshpb_page_pool;
+static unsigned int ufshpb_host_map_kbytes;
+
+static struct workqueue_struct *ufshpb_wq;
+
 /* HPB enabled lu list */
 static LIST_HEAD(lh_hpb_lu);
 
@@ -38,6 +46,62 @@ static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
 		srgn->srgn_state == HPB_SRGN_VALID;
 }
 
+static inline bool ufshpb_is_general_lun(int lun)
+{
+	return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
+}
+
+static inline bool
+ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx)
+{
+	if (hpb->lu_pinned_end != PINNED_NOT_SET &&
+	    rgn_idx >= hpb->lu_pinned_start &&
+	    rgn_idx <= hpb->lu_pinned_end)
+		return true;
+
+	return false;
+}
+
+static bool ufshpb_is_empty_rsp_lists(struct ufshpb_lu *hpb)
+{
+	bool ret = true;
+	unsigned long flags;
+
+	spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+	if (!list_empty(&hpb->lh_inact_rgn) || !list_empty(&hpb->lh_act_srgn))
+		ret = false;
+	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+	return ret;
+}
+
+static inline int ufshpb_may_field_valid(struct ufs_hba *hba,
+					 struct ufshcd_lrb *lrbp,
+					 struct ufshpb_rsp_field *rsp_field)
+{
+	if (be16_to_cpu(rsp_field->sense_data_len) != DEV_SENSE_SEG_LEN ||
+	    rsp_field->desc_type != DEV_DES_TYPE ||
+	    rsp_field->additional_len != DEV_ADDITIONAL_LEN ||
+	    rsp_field->hpb_type == HPB_RSP_NONE ||
+	    rsp_field->active_rgn_cnt > MAX_ACTIVE_NUM ||
+	    rsp_field->inactive_rgn_cnt > MAX_INACTIVE_NUM ||
+	    (!rsp_field->active_rgn_cnt && !rsp_field->inactive_rgn_cnt))
+		return -EINVAL;
+
+	if (!ufshpb_is_general_lun(lrbp->lun)) {
+		dev_warn(hba->dev, "ufshpb: lun(%d) not supported\n",
+			 lrbp->lun);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline struct ufshpb_lu *ufshpb_get_hpb_data(struct scsi_cmnd
*cmd) +{ + return cmd->device->hostdata; +} + static inline int ufshpb_get_state(struct ufshpb_lu *hpb) { return atomic_read(&hpb->hpb_state); @@ -52,8 +116,737 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) { } +static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, + struct ufshpb_subregion *srgn) +{ + struct ufshpb_req *map_req; + struct request *req; + struct bio *bio; + + map_req = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL); + if (!map_req) + return NULL; + + req = blk_get_request(hpb->sdev_ufs_lu->request_queue, + REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT); + if (IS_ERR(req)) + goto free_map_req; + + bio = bio_alloc(GFP_KERNEL, hpb->pages_per_srgn); + if (!bio) { + blk_put_request(req); + goto free_map_req; + } + + map_req->hpb = hpb; + map_req->req = req; + map_req->bio = bio; + + map_req->rgn_idx = srgn->rgn_idx; + map_req->srgn_idx = srgn->srgn_idx; + map_req->mctx = srgn->mctx; + map_req->lun = hpb->lun; + + return map_req; + +free_map_req: + kmem_cache_free(hpb->map_req_cache, map_req); + return NULL; +} + +static inline void ufshpb_put_map_req(struct ufshpb_lu *hpb, + struct ufshpb_req *map_req) +{ + bio_put(map_req->bio); + blk_put_request(map_req->req); + kmem_cache_free(hpb->map_req_cache, map_req); +} + +static inline int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, + struct ufshpb_subregion *srgn) +{ + WARN_ON(!srgn->mctx); + bitmap_zero(srgn->mctx->ppn_dirty, hpb->entries_per_srgn); + return 0; +} + +static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx, + int srgn_idx) +{ + struct ufshpb_region *rgn; + struct ufshpb_subregion *srgn; + + rgn = hpb->rgn_tbl + rgn_idx; + srgn = rgn->srgn_tbl + srgn_idx; + + list_del_init(&rgn->list_inact_rgn); + + if (list_empty(&srgn->list_act_srgn)) + list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn); +} + +static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx) +{ + struct ufshpb_region *rgn; + struct ufshpb_subregion *srgn; + int srgn_idx; + + rgn = hpb->rgn_tbl + rgn_idx; + + for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) { + srgn = rgn->srgn_tbl + srgn_idx; + + list_del_init(&srgn->list_act_srgn); + } + + if (list_empty(&rgn->list_inact_rgn)) + list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn); +} + +static void ufshpb_activate_subregion(struct ufshpb_lu *hpb, + struct ufshpb_subregion *srgn) +{ + struct ufshpb_region *rgn; + + /* + * If there is no mctx in subregion + * after I/O progress for HPB_READ_BUFFER, the region to which the + * subregion belongs was evicted. 
+	 * Make sure the region is not evicted while I/O is in progress
+	 */
+	WARN_ON(!srgn->mctx);
+
+	rgn = hpb->rgn_tbl + srgn->rgn_idx;
+
+	if (unlikely(rgn->rgn_state == HPB_RGN_INACTIVE)) {
+		dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+			"region %d subregion %d evicted\n",
+			srgn->rgn_idx, srgn->srgn_idx);
+		return;
+	}
+	srgn->srgn_state = HPB_SRGN_VALID;
+}
+
+static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
+{
+	struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;
+	struct ufshpb_lu *hpb = map_req->hpb;
+	struct ufshpb_subregion *srgn;
+	unsigned long flags;
+
+	srgn = hpb->rgn_tbl[map_req->rgn_idx].srgn_tbl +
+		map_req->srgn_idx;
+
+	spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+	ufshpb_activate_subregion(hpb, srgn);
+	spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+
+	ufshpb_put_map_req(map_req->hpb, map_req);
+}
+
+static inline void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx,
+					   int srgn_idx, int srgn_mem_size)
+{
+	cdb[0] = UFSHPB_READ_BUFFER;
+	cdb[1] = UFSHPB_READ_BUFFER_ID;
+
+	put_unaligned_be16(rgn_idx, &cdb[2]);
+	put_unaligned_be16(srgn_idx, &cdb[4]);
+	put_unaligned_be24(srgn_mem_size, &cdb[6]);
+
+	cdb[9] = 0x00;
+}
+
+static int ufshpb_map_req_add_bio_page(struct ufshpb_lu *hpb,
+				       struct request_queue *q, struct bio *bio,
+				       struct ufshpb_map_ctx *mctx)
+{
+	int i, ret = 0;
+
+	for (i = 0; i < hpb->pages_per_srgn; i++) {
+		ret = bio_add_pc_page(q, bio, mctx->m_page[i], PAGE_SIZE, 0);
+		if (ret != PAGE_SIZE) {
+			dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+				   "bio_add_pc_page fail %d\n", ret);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
+				  struct ufshpb_req *map_req)
+{
+	struct request_queue *q;
+	struct request *req;
+	struct scsi_request *rq;
+	int ret = 0;
+
+	q = hpb->sdev_ufs_lu->request_queue;
+	ret = ufshpb_map_req_add_bio_page(hpb, q, map_req->bio,
+					  map_req->mctx);
+	if (ret) {
+		dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+			   "map_req_add_bio_page fail %d - %d\n",
+			   map_req->rgn_idx, map_req->srgn_idx);
+		return ret;
+	}
+
+	req = map_req->req;
+
+	blk_rq_append_bio(req, &map_req->bio);
+
+	req->timeout = 0;
+	req->end_io_data = (void *)map_req;
+
+	rq = scsi_req(req);
+	ufshpb_set_read_buf_cmd(rq->cmd, map_req->rgn_idx,
+				map_req->srgn_idx, hpb->srgn_mem_size);
+	rq->cmd_len = HPB_READ_BUFFER_CMD_LENGTH;
+
+	blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_map_req_compl_fn);
+
+	atomic_inc(&hpb->stats.map_req_cnt);
+	return 0;
+}
+
+static struct ufshpb_map_ctx *ufshpb_get_map_ctx(struct ufshpb_lu *hpb)
+{
+	struct ufshpb_map_ctx *mctx;
+	int i, j;
+
+	mctx = mempool_alloc(ufshpb_mctx_pool, GFP_KERNEL);
+	if (!mctx)
+		return NULL;
+
+	mctx->m_page = kmem_cache_alloc(hpb->m_page_cache, GFP_KERNEL);
+	if (!mctx->m_page)
+		goto release_mctx;
+
+	mctx->ppn_dirty = bitmap_zalloc(hpb->entries_per_srgn, GFP_KERNEL);
+	if (!mctx->ppn_dirty)
+		goto release_m_page;
+
+	for (i = 0; i < hpb->pages_per_srgn; i++) {
+		mctx->m_page[i] = mempool_alloc(ufshpb_page_pool, GFP_KERNEL);
+		if (!mctx->m_page[i]) {
+			for (j = 0; j < i; j++)
+				mempool_free(mctx->m_page[j], ufshpb_page_pool);
+			goto release_ppn_dirty;
+		}
+		clear_page(page_address(mctx->m_page[i]));
+	}
+
+	return mctx;
+
+release_ppn_dirty:
+	bitmap_free(mctx->ppn_dirty);
+release_m_page:
+	kmem_cache_free(hpb->m_page_cache, mctx->m_page);
+release_mctx:
+	mempool_free(mctx, ufshpb_mctx_pool);
+	return NULL;
+}
+
+static inline void ufshpb_put_map_ctx(struct ufshpb_lu *hpb,
+				      struct ufshpb_map_ctx *mctx)
+{
+	int i;
+
+	for
(i = 0; i < hpb->pages_per_srgn; i++) + mempool_free(mctx->m_page[i], ufshpb_page_pool); + + bitmap_free(mctx->ppn_dirty); + kmem_cache_free(hpb->m_page_cache, mctx->m_page); + mempool_free(mctx, ufshpb_mctx_pool); +} + +static int ufshpb_check_issue_state_srgns(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn) +{ + struct ufshpb_subregion *srgn; + int srgn_idx; + + for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) { + srgn = rgn->srgn_tbl + srgn_idx; + + if (srgn->srgn_state == HPB_SRGN_ISSUED) + return -EPERM; + } + return 0; +} + +static inline void ufshpb_add_lru_info(struct victim_select_info *lru_info, + struct ufshpb_region *rgn) +{ + rgn->rgn_state = HPB_RGN_ACTIVE; + list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); + atomic_inc(&lru_info->active_cnt); +} + +static inline void ufshpb_hit_lru_info(struct victim_select_info *lru_info, + struct ufshpb_region *rgn) +{ + list_move_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); +} + +static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb) +{ + struct victim_select_info *lru_info = &hpb->lru_info; + struct ufshpb_region *rgn, *victim_rgn = NULL; + + list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) { + WARN_ON(!rgn); + if (ufshpb_check_issue_state_srgns(hpb, rgn)) + continue; + + victim_rgn = rgn; + break; + } + + return victim_rgn; +} + +static inline void ufshpb_cleanup_lru_info(struct victim_select_info *lru_info, + struct ufshpb_region *rgn) +{ + list_del_init(&rgn->list_lru_rgn); + rgn->rgn_state = HPB_RGN_INACTIVE; + atomic_dec(&lru_info->active_cnt); +} + +static inline void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb, + struct ufshpb_subregion *srgn) +{ + if (srgn->srgn_state != HPB_SRGN_UNUSED) { + ufshpb_put_map_ctx(hpb, srgn->mctx); + srgn->srgn_state = HPB_SRGN_UNUSED; + srgn->mctx = NULL; + } +} + +static void __ufshpb_evict_region(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn) +{ + struct victim_select_info *lru_info; + struct ufshpb_subregion *srgn; + int srgn_idx; + + lru_info = &hpb->lru_info; + + dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx); + + ufshpb_cleanup_lru_info(lru_info, rgn); + + for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) { + srgn = rgn->srgn_tbl + srgn_idx; + + ufshpb_purge_active_subregion(hpb, srgn); + } +} + +static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) +{ + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&hpb->hpb_state_lock, flags); + if (rgn->rgn_state == HPB_RGN_PINNED) { + dev_warn(&hpb->sdev_ufs_lu->sdev_dev, + "pinned region cannot drop-out. 
region %d\n", + rgn->rgn_idx); + goto out; + } + if (!list_empty(&rgn->list_lru_rgn)) { + if (ufshpb_check_issue_state_srgns(hpb, rgn)) { + ret = -EBUSY; + goto out; + } + + __ufshpb_evict_region(hpb, rgn); + } +out: + spin_unlock_irqrestore(&hpb->hpb_state_lock, flags); + return ret; +} + +static inline struct +ufshpb_rsp_field *ufshpb_get_hpb_rsp(struct ufshcd_lrb *lrbp) +{ + return (struct ufshpb_rsp_field *)&lrbp->ucd_rsp_ptr->sr.sense_data_len; +} + +static int ufshpb_issue_map_req(struct ufshpb_lu *hpb, + struct ufshpb_region *rgn, + struct ufshpb_subregion *srgn) +{ + struct ufshpb_req *map_req; + unsigned long flags; + int ret; + int err = -EAGAIN; + bool alloc_required = false; + enum HPB_SRGN_STATE state = HPB_SRGN_INVALID; + + spin_lock_irqsave(&hpb->hpb_state_lock, flags); + /* + * Since the region state change occurs only in the map_work, + * the state of the region cannot HPB_RGN_INACTIVE at this point. + * The region state must be changed in the map_work + */ + WARN_ON(rgn->rgn_state == HPB_RGN_INACTIVE); + + if (srgn->srgn_state == HPB_SRGN_UNUSED) + alloc_required = true; + + /* + * If the subregion is already ISSUED state, + * a specific event (e.g., GC or wear-leveling, etc.) occurs in + * the device and HPB response for map loading is received. + * In this case, after finishing the HPB_READ_BUFFER, + * the next HPB_READ_BUFFER is performed again to obtain the latest + * map data. + */ + if (srgn->srgn_state == HPB_SRGN_ISSUED) + goto unlock_out; + + srgn->srgn_state = HPB_SRGN_ISSUED; + spin_unlock_irqrestore(&hpb->hpb_state_lock, flags); + + if (alloc_required) { + WARN_ON(srgn->mctx); + srgn->mctx = ufshpb_get_map_ctx(hpb); + if (!srgn->mctx) { + dev_notice(&hpb->sdev_ufs_lu->sdev_dev, + "get map_ctx failed. region %d - %d\n", + rgn->rgn_idx, srgn->srgn_idx); + state = HPB_SRGN_UNUSED; + goto change_srgn_state; + } + } + + ufshpb_clear_dirty_bitmap(hpb, srgn); + map_req = ufshpb_get_map_req(hpb, srgn); + if (!map_req) + goto change_srgn_state; + + ret = ufshpb_execute_map_req(hpb, map_req); + if (ret) { + dev_notice(&hpb->sdev_ufs_lu->sdev_dev, + "%s: issue map_req failed: %d, region %d - %d\n", + __func__, ret, srgn->rgn_idx, srgn->srgn_idx); + goto free_map_req; + } + return 0; + +free_map_req: + ufshpb_put_map_req(hpb, map_req); +change_srgn_state: + spin_lock_irqsave(&hpb->hpb_state_lock, flags); + srgn->srgn_state = state; +unlock_out: + spin_unlock_irqrestore(&hpb->hpb_state_lock, flags); + return err; +} + +static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) +{ + struct ufshpb_region *victim_rgn; + struct victim_select_info *lru_info = &hpb->lru_info; + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&hpb->hpb_state_lock, flags); + /* + * If region belongs to lru_list, just move the region + * to the front of lru list. because the state of the region + * is already active-state + */ + if (!list_empty(&rgn->list_lru_rgn)) { + ufshpb_hit_lru_info(lru_info, rgn); + goto out; + } + + if (rgn->rgn_state == HPB_RGN_INACTIVE) { + if (atomic_read(&lru_info->active_cnt) + == lru_info->max_lru_active_cnt) { + /* + * If the maximum number of active regions + * is exceeded, evict the least recently used region. + * This case may occur when the device responds + * to the eviction information late. 
+             * It is okay to evict the least recently used
+             * region, because the device can detect that the
+             * region was evicted when no HPB_READ is issued
+             * for it.
+             */
+            victim_rgn = ufshpb_victim_lru_info(hpb);
+            if (!victim_rgn) {
+                dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+                     "cannot get a victim region\n");
+                ret = -ENOMEM;
+                goto out;
+            }
+
+            dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+                "LRU full (%d), choose victim %d\n",
+                atomic_read(&lru_info->active_cnt),
+                victim_rgn->rgn_idx);
+            __ufshpb_evict_region(hpb, victim_rgn);
+        }
+
+        /*
+         * When a region is added to the lru_info list_head, it is
+         * guaranteed that all of its subregions have been assigned
+         * an mctx. If that failed, try to get the mctx again later
+         * without adding the region to the lru_info list_head.
+         */
+        ufshpb_add_lru_info(lru_info, rgn);
+    }
+out:
+    spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+    return ret;
+}
+
+static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
+                     struct ufshpb_rsp_field *rsp_field)
+{
+    int i, rgn_idx, srgn_idx;
+
+    /*
+     * If the active region and the inactive region are the same,
+     * we will inactivate this region.
+     * The device can detect this (region inactivated) and
+     * will respond with the proper active region information.
+     */
+    spin_lock(&hpb->rsp_list_lock);
+    for (i = 0; i < rsp_field->active_rgn_cnt; i++) {
+        rgn_idx =
+            be16_to_cpu(rsp_field->hpb_active_field[i].active_rgn);
+        srgn_idx =
+            be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn);
+
+        dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+            "activate(%d) region %d - %d\n", i, rgn_idx, srgn_idx);
+        ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+        atomic_inc(&hpb->stats.rb_active_cnt);
+    }
+
+    for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
+        rgn_idx = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
+        dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+            "inactivate(%d) region %d\n", i, rgn_idx);
+        ufshpb_update_inactive_info(hpb, rgn_idx);
+        atomic_inc(&hpb->stats.rb_inactive_cnt);
+    }
+    spin_unlock(&hpb->rsp_list_lock);
+
+    dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
+        rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt);
+
+    queue_work(ufshpb_wq, &hpb->map_work);
+}
+
+/* routine : isr (ufs) */
 void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 {
+    struct ufshpb_lu *hpb;
+    struct ufshpb_rsp_field *rsp_field;
+    int data_seg_len;
+
+    data_seg_len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2)
+        & MASK_RSP_UPIU_DATA_SEG_LEN;
+
+    /* To flush the remaining rsp_list, we queue the map_work task */
+    if (!data_seg_len) {
+        if (!ufshpb_is_general_lun(lrbp->lun))
+            return;
+
+        hpb = ufshpb_get_hpb_data(lrbp->cmd);
+        if (!hpb)
+            return;
+
+        if (!ufshpb_is_empty_rsp_lists(hpb))
+            queue_work(ufshpb_wq, &hpb->map_work);
+        return;
+    }
+
+    /* Check HPB_UPDATE_ALERT */
+    if (!(lrbp->ucd_rsp_ptr->header.dword_2 &
+          UPIU_HEADER_DWORD(0, 2, 0, 0)))
+        return;
+
+    rsp_field = ufshpb_get_hpb_rsp(lrbp);
+    if (ufshpb_may_field_valid(hba, lrbp, rsp_field))
+        return;
+
+    hpb = ufshpb_get_hpb_data(lrbp->cmd);
+    if (!hpb)
+        return;
+
+    atomic_inc(&hpb->stats.rb_noti_cnt);
+
+    switch (rsp_field->hpb_type) {
+    case HPB_RSP_REQ_REGION_UPDATE:
+        WARN_ON(data_seg_len != DEV_DATA_SEG_LEN);
+        ufshpb_rsp_req_region_update(hpb, rsp_field);
+        break;
+    case HPB_RSP_DEV_RESET:
+        dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
+             "UFS device lost HPB information during PM.\n");
+        break;
+    default:
+        dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+               "hpb_type is not available: %d\n",
+               rsp_field->hpb_type);
+        break;
+    }
+}
+
+static void ufshpb_add_active_list(struct ufshpb_lu *hpb,
+                   struct ufshpb_region *rgn,
+                   struct ufshpb_subregion *srgn)
+{
+    if (!list_empty(&rgn->list_inact_rgn))
+        return;
+
+    if (!list_empty(&srgn->list_act_srgn)) {
+        list_move(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+        return;
+    }
+
+    list_add(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
+static void ufshpb_add_pending_evict_list(struct ufshpb_lu *hpb,
+                      struct ufshpb_region *rgn,
+                      struct list_head *pending_list)
+{
+    struct ufshpb_subregion *srgn;
+    int srgn_idx;
+
+    if (!list_empty(&rgn->list_inact_rgn))
+        return;
+
+    for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+        srgn = rgn->srgn_tbl + srgn_idx;
+
+        if (!list_empty(&srgn->list_act_srgn))
+            return;
+    }
+
+    list_add_tail(&rgn->list_inact_rgn, pending_list);
+}
+
+static void ufshpb_run_active_subregion_list(struct ufshpb_lu *hpb)
+{
+    struct ufshpb_region *rgn;
+    struct ufshpb_subregion *srgn;
+    unsigned long flags;
+    int ret = 0;
+
+    spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+    while ((srgn = list_first_entry_or_null(&hpb->lh_act_srgn,
+                        struct ufshpb_subregion,
+                        list_act_srgn))) {
+        list_del_init(&srgn->list_act_srgn);
+        spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+        rgn = hpb->rgn_tbl + srgn->rgn_idx;
+        ret = ufshpb_add_region(hpb, rgn);
+        if (ret)
+            goto active_failed;
+
+        ret = ufshpb_issue_map_req(hpb, rgn, srgn);
+        if (ret) {
+            dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+                   "issue map_req failed. ret %d, region %d - %d\n",
+                   ret, rgn->rgn_idx, srgn->srgn_idx);
+            goto active_failed;
+        }
+        spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+    }
+    spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+    return;
+
+active_failed:
+    dev_notice(&hpb->sdev_ufs_lu->sdev_dev, "region %d - %d, will retry\n",
+           rgn->rgn_idx, srgn->srgn_idx);
+    spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+    ufshpb_add_active_list(hpb, rgn, srgn);
+    spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
+{
+    struct ufshpb_region *rgn;
+    unsigned long flags;
+    int ret;
+    LIST_HEAD(pending_list);
+
+    spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+    while ((rgn = list_first_entry_or_null(&hpb->lh_inact_rgn,
+                           struct ufshpb_region,
+                           list_inact_rgn))) {
+        list_del_init(&rgn->list_inact_rgn);
+        spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+
+        ret = ufshpb_evict_region(hpb, rgn);
+        if (ret) {
+            spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+            ufshpb_add_pending_evict_list(hpb, rgn, &pending_list);
+            spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+        }
+
+        spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+    }
+
+    list_splice(&pending_list, &hpb->lh_inact_rgn);
+    spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static void ufshpb_map_work_handler(struct work_struct *work)
+{
+    struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
+
+    ufshpb_run_inactive_region_list(hpb);
+    ufshpb_run_active_subregion_list(hpb);
+}
+
+/*
+ * This function does not need to hold any lock (hpb_state_lock,
+ * rsp_list_lock, etc.) because it is only called during init.
+ */
+static int ufshpb_init_pinned_active_region(struct ufs_hba *hba,
+                        struct ufshpb_lu *hpb,
+                        struct ufshpb_region *rgn)
+{
+    struct ufshpb_subregion *srgn;
+    int srgn_idx, i;
+    int err = -ENOMEM;
+
+    for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+        srgn = rgn->srgn_tbl + srgn_idx;
+
+        srgn->mctx = ufshpb_get_map_ctx(hpb);
+        srgn->srgn_state = HPB_SRGN_INVALID;
+        if (!srgn->mctx) {
+            dev_err(hba->dev,
+                "alloc mctx for pinned region failed\n");
+            goto release;
+        }
+
+        list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+    }
+
+    rgn->rgn_state = HPB_RGN_PINNED;
+    return 0;
+
+release:
+    for (i = 0; i < srgn_idx; i++) {
+        srgn = rgn->srgn_tbl + i;
+        ufshpb_put_map_ctx(hpb, srgn->mctx);
+    }
+    return err;
 }
 
 static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
@@ -64,6 +857,8 @@ static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
 	for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
 		struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
 
+		INIT_LIST_HEAD(&srgn->list_act_srgn);
+
 		srgn->rgn_idx = rgn->rgn_idx;
 		srgn->srgn_idx = srgn_idx;
 		srgn->srgn_state = HPB_SRGN_UNUSED;
@@ -95,6 +890,8 @@ static void ufshpb_init_lu_parameter(struct ufs_hba *hba,
 	hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
 		(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
 		: PINNED_NOT_SET;
+	hpb->lru_info.max_lru_active_cnt =
+		hpb_lu_info->max_active_rgns - hpb_lu_info->num_pinned;
 
 	rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
 			/ HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
@@ -147,6 +944,9 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 		rgn = rgn_table + rgn_idx;
 		rgn->rgn_idx = rgn_idx;
 
+		INIT_LIST_HEAD(&rgn->list_inact_rgn);
+		INIT_LIST_HEAD(&rgn->list_lru_rgn);
+
 		if (rgn_idx == hpb->rgns_per_lu - 1)
 			srgn_cnt = ((hpb->srgns_per_lu - 1) %
 				    hpb->srgns_per_rgn) + 1;
@@ -156,7 +956,13 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 			goto release_srgn_table;
 		ufshpb_init_subregion_tbl(hpb, rgn);
 
-		rgn->rgn_state = HPB_RGN_INACTIVE;
+		if (ufshpb_is_pinned_region(hpb, rgn_idx)) {
+			ret = ufshpb_init_pinned_active_region(hba, hpb, rgn);
+			if (ret)
+				goto release_srgn_table;
+		} else {
+			rgn->rgn_state = HPB_RGN_INACTIVE;
+		}
 	}
 
 	return 0;
@@ -180,7 +986,10 @@ static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
 		struct ufshpb_subregion *srgn;
 
 		srgn = rgn->srgn_tbl + srgn_idx;
-		srgn->srgn_state = HPB_SRGN_UNUSED;
+		if (srgn->srgn_state != HPB_SRGN_UNUSED) {
+			srgn->srgn_state = HPB_SRGN_UNUSED;
+			ufshpb_put_map_ctx(hpb, srgn->mctx);
+		}
 	}
 }
 
@@ -330,10 +1139,36 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb,
 	int ret;
 
 	spin_lock_init(&hpb->hpb_state_lock);
+	spin_lock_init(&hpb->rsp_list_lock);
+
+	INIT_LIST_HEAD(&hpb->lru_info.lh_lru_rgn);
+	INIT_LIST_HEAD(&hpb->lh_act_srgn);
+	INIT_LIST_HEAD(&hpb->lh_inact_rgn);
+	INIT_LIST_HEAD(&hpb->list_hpb_lu);
+
+	INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+
+	hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
+			sizeof(struct ufshpb_req), 0, 0, NULL);
+	if (!hpb->map_req_cache) {
+		dev_err(hba->dev, "ufshpb(%d) ufshpb_req_cache create fail",
+			hpb->lun);
+		return -ENOMEM;
+	}
+
+	hpb->m_page_cache = kmem_cache_create("ufshpb_m_page_cache",
+			sizeof(struct page *) * hpb->pages_per_srgn,
+			0, 0, NULL);
+	if (!hpb->m_page_cache) {
+		dev_err(hba->dev, "ufshpb(%d) ufshpb_m_page_cache create fail",
+			hpb->lun);
+		ret = -ENOMEM;
+		goto release_req_cache;
+	}
 
 	ret = ufshpb_alloc_region_tbl(hba, hpb);
 	if (ret)
-		return ret;
+		goto release_m_page_cache;
 
 	ret = ufshpb_create_sysfs(hba, hpb);
 	if (ret)
@@ -343,6 +1178,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb,
 
 release_rgn_table:
 	ufshpb_destroy_region_tbl(hpb);
+release_m_page_cache:
+	kmem_cache_destroy(hpb->m_page_cache);
+release_req_cache:
+	kmem_cache_destroy(hpb->map_req_cache);
 	return ret;
 }
 
@@ -375,6 +1214,33 @@ static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
 	return NULL;
 }
 
+static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
+{
+	struct ufshpb_region *rgn, *next_rgn;
+	struct ufshpb_subregion *srgn, *next_srgn;
+	unsigned long flags;
+
+	/*
+	 * If a device reset occurred, the remaining HPB region information
+	 * may be stale. Therefore, discarding the HPB response lists that
+	 * remained after the reset prevents unnecessary work.
+	 */
+	spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+	list_for_each_entry_safe(rgn, next_rgn, &hpb->lh_inact_rgn,
+				 list_inact_rgn)
+		list_del_init(&rgn->list_inact_rgn);
+
+	list_for_each_entry_safe(srgn, next_srgn, &hpb->lh_act_srgn,
+				 list_act_srgn)
+		list_del_init(&srgn->list_act_srgn);
+	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+}
+
+static inline void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
+{
+	cancel_work_sync(&hpb->map_work);
+}
+
 static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba)
 {
 	int err;
@@ -448,8 +1314,11 @@ void ufshpb_reset_host(struct ufs_hba *hba)
 
 	dev_info(hba->dev, "ufshpb run reset_host");
 
-	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
 		ufshpb_set_state(hpb, HPB_RESET);
+		ufshpb_cancel_jobs(hpb);
+		ufshpb_discard_rsp_lists(hpb);
+	}
 }
 
 void ufshpb_suspend(struct ufs_hba *hba)
@@ -458,8 +1327,10 @@ void ufshpb_suspend(struct ufs_hba *hba)
 
 	dev_info(hba->dev, "ufshpb goto suspend");
 
-	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
 		ufshpb_set_state(hpb, HPB_SUSPEND);
+		ufshpb_cancel_jobs(hpb);
+	}
 }
 
 void ufshpb_resume(struct ufs_hba *hba)
@@ -468,8 +1339,11 @@ void ufshpb_resume(struct ufs_hba *hba)
 
 	dev_info(hba->dev, "ufshpb resume");
 
-	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu)
+	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
 		ufshpb_set_state(hpb, HPB_PRESENT);
+		if (!ufshpb_is_empty_rsp_lists(hpb))
+			queue_work(ufshpb_wq, &hpb->map_work);
+	}
 }
 
 static int ufshpb_read_desc(struct ufs_hba *hba, u8 desc_id, u8 desc_index,
@@ -617,6 +1491,8 @@ static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
 	struct scsi_device *sdev;
 	struct ufshpb_lu *hpb;
 	int find_hpb_lu = 0;
+	int tot_active_srgn_pages = 0;
+	int pool_size;
 	int ret;
 
 	shost_for_each_device(sdev, hba->host) {
@@ -635,6 +1511,9 @@ static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
 		if (!hpb)
 			continue;
 
+		tot_active_srgn_pages += hpb_lu_info.max_active_rgns *
+			hpb->srgns_per_rgn * hpb->pages_per_srgn;
+
 		hpb->sdev_ufs_lu = sdev;
 		sdev->hostdata = hpb;
 
@@ -647,10 +1526,78 @@ static void ufshpb_scan_hpb_lu(struct ufs_hba *hba,
 
 	ufshpb_check_hpb_reset_query(hba);
 
+	pool_size = DIV_ROUND_UP(ufshpb_host_map_kbytes * 1024, PAGE_SIZE);
+	if (pool_size > tot_active_srgn_pages) {
+		dev_info(hba->dev,
+			 "reset pool_size to %lu KB.\n",
+			 tot_active_srgn_pages * PAGE_SIZE / 1024);
+		mempool_resize(ufshpb_mctx_pool, tot_active_srgn_pages);
+		mempool_resize(ufshpb_page_pool, tot_active_srgn_pages);
+	}
+
 	list_for_each_entry(hpb, &lh_hpb_lu, list_hpb_lu) {
 		dev_info(hba->dev, "set state to present\n");
 		ufshpb_set_state(hpb, HPB_PRESENT);
+
+		if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0) {
+			dev_info(hba->dev,
+				 "loading pinned regions %d - %d\n",
+				 hpb->lu_pinned_start, hpb->lu_pinned_end);
+			queue_work(ufshpb_wq, &hpb->map_work);
+		}
+	}
+}
+
+static int ufshpb_init_mem_wq(void)
+{
+	int ret;
+	unsigned int pool_size;
+
+	ufshpb_mctx_cache = kmem_cache_create("ufshpb_mctx_cache",
+					sizeof(struct ufshpb_map_ctx),
+					0, 0, NULL);
+	if (!ufshpb_mctx_cache) {
+		pr_err("ufshpb: cannot init mctx cache\n");
+		return -ENOMEM;
+	}
+
+	ufshpb_host_map_kbytes = CONFIG_SCSI_UFS_HPB_HOST_MEM;
+	pool_size = DIV_ROUND_UP(ufshpb_host_map_kbytes * 1024, PAGE_SIZE);
+	pr_info("%s:%d ufshpb_host_map_kbytes %u pool_size %u\n",
+		__func__, __LINE__, ufshpb_host_map_kbytes, pool_size);
+
+	ufshpb_mctx_pool = mempool_create_slab_pool(pool_size,
+						    ufshpb_mctx_cache);
+	if (!ufshpb_mctx_pool) {
+		pr_err("ufshpb: cannot init mctx pool\n");
+		ret = -ENOMEM;
+		goto release_mctx_cache;
+	}
+
+	ufshpb_page_pool = mempool_create_page_pool(pool_size, 0);
+	if (!ufshpb_page_pool) {
+		pr_err("ufshpb: cannot init page pool\n");
+		ret = -ENOMEM;
+		goto release_mctx_pool;
+	}
+
+	ufshpb_wq = alloc_workqueue("ufshpb-wq",
+				    WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
+	if (!ufshpb_wq) {
+		pr_err("ufshpb: alloc workqueue failed\n");
+		ret = -ENOMEM;
+		goto release_page_pool;
+	}
+
+	return 0;
+
+release_page_pool:
+	mempool_destroy(ufshpb_page_pool);
+release_mctx_pool:
+	mempool_destroy(ufshpb_mctx_pool);
+release_mctx_cache:
+	kmem_cache_destroy(ufshpb_mctx_cache);
+	return ret;
+}
 
 static void ufshpb_init(void *data, async_cookie_t cookie)
@@ -661,6 +1608,9 @@ static void ufshpb_init(void *data, async_cookie_t cookie)
 	char *desc_buf;
 	int ret;
 
+	if (ufshpb_init_mem_wq())
+		return;
+
 	hba = container_of(ufsf, struct ufs_hba, ufsf);
 
 	desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL);
@@ -716,14 +1666,25 @@ void ufshpb_remove(struct ufs_hba *hba)
 		sdev = hpb->sdev_ufs_lu;
 		sdev->hostdata = NULL;
 
+		ufshpb_cancel_jobs(hpb);
+
 		ufshpb_destroy_region_tbl(hpb);
+
+		kmem_cache_destroy(hpb->map_req_cache);
+		kmem_cache_destroy(hpb->m_page_cache);
+
 		list_del_init(&hpb->list_hpb_lu);
 
 		ufshpb_remove_sysfs(hpb);
 		kfree(hpb);
 	}
 
+	mempool_destroy(ufshpb_page_pool);
+	mempool_destroy(ufshpb_mctx_pool);
+	kmem_cache_destroy(ufshpb_mctx_cache);
+
+	destroy_workqueue(ufshpb_wq);
+
 	dev_info(hba->dev, "ufshpb: remove success\n");
 }
 
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index b91b447ed0c8..4ed091f5bd57 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -99,10 +99,36 @@ struct ufshpb_lu_info {
 	int max_active_rgns;
 };
 
+struct ufshpb_active_field {
+	__be16 active_rgn;
+	__be16 active_srgn;
+} __packed;
+
+struct ufshpb_rsp_field {
+	__be16 sense_data_len;
+	u8 desc_type;
+	u8 additional_len;
+	u8 hpb_type;
+	u8 reserved;
+	u8 active_rgn_cnt;
+	u8 inactive_rgn_cnt;
+	struct ufshpb_active_field hpb_active_field[2];
+	__be16 hpb_inactive_field[2];
+} __packed;
+
+struct ufshpb_map_ctx {
+	struct page **m_page;
+	unsigned long *ppn_dirty;
+};
+
 struct ufshpb_subregion {
+	struct ufshpb_map_ctx *mctx;
 	enum HPB_SRGN_STATE srgn_state;
 	int rgn_idx;
 	int srgn_idx;
+
+	/* below information is used by rsp_list */
+	struct list_head list_act_srgn;
 };
 
 struct ufshpb_region {
@@ -110,6 +136,39 @@ struct ufshpb_region {
 	enum HPB_RGN_STATE rgn_state;
 	int rgn_idx;
 	int srgn_cnt;
+
+	/* below information is used by rsp_list */
+	struct list_head list_inact_rgn;
+
+	/* below information is used by lru */
+	struct list_head list_lru_rgn;
+};
+
+/**
+ * struct ufshpb_req - UFSHPB READ BUFFER (for caching map) request structure
+ * @req: block layer request for READ BUFFER
+ * @bio: bio for holding map page
+ * @hpb: ufshpb_lu structure that is related to the L2P map
+ * @mctx: L2P map information
+ * @rgn_idx: target region index
+ * @srgn_idx: target sub-region index
+ * @lun: target logical unit number
+ */
+struct ufshpb_req {
+	struct request *req;
+	struct bio *bio;
+	struct ufshpb_lu *hpb;
+	struct ufshpb_map_ctx *mctx;
+
+	unsigned int rgn_idx;
+	unsigned int srgn_idx;
+	unsigned int lun;
+};
+
+struct victim_select_info {
+	struct list_head lh_lru_rgn;
+	int max_lru_active_cnt; /* supported hpb #region - pinned #region */
+	atomic_t active_cnt;
+};
 
 struct ufshpb_stats {
@@ -132,6 +191,16 @@ struct ufshpb_lu {
 	spinlock_t hpb_state_lock;
 	atomic_t hpb_state; /* hpb_state_lock */
 
+	spinlock_t rsp_list_lock;
+	struct list_head lh_act_srgn; /* rsp_list_lock */
+	struct list_head lh_inact_rgn; /* rsp_list_lock */
+
+	/* cached L2P map management worker */
+	struct work_struct map_work;
+
+	/* for selecting victim */
+	struct victim_select_info lru_info;
+
 	/* pinned region information */
 	u32 lu_pinned_start;
 	u32 lu_pinned_end;
@@ -150,6 +219,9 @@ struct ufshpb_lu {
 
 	struct ufshpb_stats stats;
 
+	struct kmem_cache *map_req_cache;
+	struct kmem_cache *m_page_cache;
+
 	struct ufsf_feature_info *ufsf;
 	struct list_head list_hpb_lu;
 };

From patchwork Wed Aug 5 06:00:37 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 11701353
Subject: [PATCH v7 4/4] scsi: ufs: Prepare HPB read for cached sub-region
Reply-To: daejun7.park@samsung.com
From: Daejun Park
To: Daejun Park, "avri.altman@wdc.com", "jejb@linux.ibm.com",
 "martin.petersen@oracle.com", "asutoshd@codeaurora.org",
 "beanhuo@micron.com", "stanley.chu@mediatek.com", "cang@codeaurora.org",
 "bvanassche@acm.org", "tomas.winkler@intel.com", ALIM AKHTAR
CC: "linux-scsi@vger.kernel.org", "linux-kernel@vger.kernel.org",
 Sang-yoon Oh, Sung-Jun Park, yongmyung lee, Jinyoung CHOI,
 Adel Choi, BoRam Shin
In-Reply-To: <231786897.01596607201961.JavaMail.epsvc@epcpadp1>
Message-ID: <231786897.01596607982045.JavaMail.epsvc@epcpadp2>
Date: Wed, 05 Aug 2020 15:00:37 +0900
References: <231786897.01596607201961.JavaMail.epsvc@epcpadp1>
 <963815509.21596605402187.JavaMail.epsvc@epcpadp1>
 <231786897.01596604981993.JavaMail.epsvc@epcpadp2>
 <231786897.01596600181895.JavaMail.epsvc@epcpadp2>

This patch converts read I/O into HPB read I/O. If the logical address of
a read I/O belongs to an active sub-region, the HPB driver rewrites the
read command as an HPB READ. It modifies the UFS UPIU command instead of
the existing SCSI command. In HPB version 1.0, the maximum read I/O size
that can be converted to an HPB read is 4KB. The dirty map of the active
sub-region prevents an incorrect HPB read that would use a stale physical
page number invalidated by a previous write I/O.
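For illustration only (not part of the patch), here is a minimal standalone
sketch of the two steps described above: decomposing a logical page number
(LPN) into region, sub-region, and offset, and rewriting the CDB into an
HPB READ. The geometry fields and the 0xF8 opcode value are assumptions
made for the sketch; the driver derives the real values from the device
descriptors, as the diff below shows.

    #include <stdint.h>

    /* Assumed opcode for the sketch; the patch uses the UFSHPB_READ macro. */
    #define SKETCH_UFSHPB_READ 0xF8

    struct sketch_geom {
        unsigned int entries_per_rgn_shift;   /* log2(L2P entries per region) */
        unsigned long entries_per_rgn_mask;
        unsigned int entries_per_srgn_shift;  /* log2(entries per sub-region) */
        unsigned long entries_per_srgn_mask;
    };

    /* Split an LPN into region index, sub-region index, and entry offset. */
    static void sketch_lpn_to_pos(const struct sketch_geom *g, unsigned long lpn,
                                  int *rgn_idx, int *srgn_idx, int *offset)
    {
        unsigned long rgn_offset;

        *rgn_idx = lpn >> g->entries_per_rgn_shift;
        rgn_offset = lpn & g->entries_per_rgn_mask;
        *srgn_idx = rgn_offset >> g->entries_per_srgn_shift;
        *offset = rgn_offset & g->entries_per_srgn_mask;
    }

    /*
     * Rewrite a READ(10) CDB into an HPB READ: opcode in byte 0, the cached
     * big-endian PPN in bytes 6..13, and the transfer length in byte 14.
     */
    static void sketch_build_hpb_read_cdb(uint8_t *cdb, uint64_t ppn,
                                          uint8_t transfer_len)
    {
        int i;

        cdb[0] = SKETCH_UFSHPB_READ;
        for (i = 0; i < 8; i++)
            cdb[6 + i] = (uint8_t)(ppn >> (8 * (7 - i)));
        cdb[14] = transfer_len;   /* at most one 4KB entry in HPB 1.0 */
    }

The sketch mirrors ufshpb_get_pos_from_lpn() and ufshpb_set_hpb_read_to_upiu()
in the diff that follows.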
Tested-by: Bean Huo
Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/ufshpb.c | 227 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 227 insertions(+)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 25cd7153f102..fe24b2277621 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -46,6 +46,22 @@ static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
 		srgn->srgn_state == HPB_SRGN_VALID;
 }
 
+static inline bool ufshpb_is_read_cmd(struct scsi_cmnd *cmd)
+{
+	return req_op(cmd->request) == REQ_OP_READ;
+}
+
+static inline bool ufshpb_is_write_discard_cmd(struct scsi_cmnd *cmd)
+{
+	return op_is_write(req_op(cmd->request)) ||
+	       op_is_discard(req_op(cmd->request));
+}
+
+static inline bool ufshpb_is_support_chunk(int transfer_len)
+{
+	return transfer_len <= HPB_MULTI_CHUNK_HIGH;
+}
+
 static inline bool ufshpb_is_general_lun(int lun)
 {
 	return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
@@ -112,8 +128,219 @@ static inline void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
 	atomic_set(&hpb->hpb_state, state);
 }
 
+static inline u32 ufshpb_get_lpn(struct scsi_cmnd *cmnd)
+{
+	return blk_rq_pos(cmnd->request) >>
+		(ilog2(cmnd->device->sector_size) - 9);
+}
+
+static inline unsigned int ufshpb_get_len(struct scsi_cmnd *cmnd)
+{
+	return blk_rq_sectors(cmnd->request) >>
+		(ilog2(cmnd->device->sector_size) - 9);
+}
+
+static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
+				 int srgn_idx, int srgn_offset, int cnt)
+{
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+	int set_bit_len;
+	int bitmap_len = hpb->entries_per_srgn;
+
+next_srgn:
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	if ((srgn_offset + cnt) > bitmap_len)
+		set_bit_len = bitmap_len - srgn_offset;
+	else
+		set_bit_len = cnt;
+
+	if (rgn->rgn_state != HPB_RGN_INACTIVE &&
+	    srgn->srgn_state == HPB_SRGN_VALID)
+		bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
+
+	srgn_offset = 0;
+	if (++srgn_idx == hpb->srgns_per_rgn) {
+		srgn_idx = 0;
+		rgn_idx++;
+	}
+
+	cnt -= set_bit_len;
+	if (cnt > 0)
+		goto next_srgn;
+
+	WARN_ON(cnt < 0);
+}
+
+static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
+				  int srgn_idx, int srgn_offset, int cnt)
+{
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+	int bitmap_len = hpb->entries_per_srgn;
+	int bit_len;
+
+next_srgn:
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	if (!ufshpb_is_valid_srgn(rgn, srgn))
+		return true;
+
+	/*
+	 * If the region state is active, mctx must be allocated.
+	 * In this case, check whether the region was evicted or
+	 * the mctx allocation failed.
+	 */
+	WARN_ON(!srgn->mctx);
+
+	if ((srgn_offset + cnt) > bitmap_len)
+		bit_len = bitmap_len - srgn_offset;
+	else
+		bit_len = cnt;
+
+	if (find_next_bit(srgn->mctx->ppn_dirty,
+			  bit_len + srgn_offset, srgn_offset) <
+	    bit_len + srgn_offset)
+		return true;
+
+	srgn_offset = 0;
+	if (++srgn_idx == hpb->srgns_per_rgn) {
+		srgn_idx = 0;
+		rgn_idx++;
+	}
+
+	cnt -= bit_len;
+	if (cnt > 0)
+		goto next_srgn;
+
+	return false;
+}
+
+static u64 ufshpb_get_ppn(struct ufshpb_lu *hpb,
+			  struct ufshpb_map_ctx *mctx, int pos, int *error)
+{
+	u64 *ppn_table;
+	struct page *page;
+	int index, offset;
+
+	index = pos / (PAGE_SIZE / HPB_ENTRY_SIZE);
+	offset = pos % (PAGE_SIZE / HPB_ENTRY_SIZE);
+
+	page = mctx->m_page[index];
+	if (unlikely(!page)) {
+		*error = -ENOMEM;
+		dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+			"error. cannot find page in mctx\n");
+		return 0;
+	}
+
+	ppn_table = page_address(page);
+	if (unlikely(!ppn_table)) {
+		*error = -ENOMEM;
+		dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+			"error. cannot get ppn_table\n");
+		return 0;
+	}
+
+	return ppn_table[offset];
+}
+
+static inline void
+ufshpb_get_pos_from_lpn(struct ufshpb_lu *hpb, unsigned long lpn, int *rgn_idx,
+			int *srgn_idx, int *offset)
+{
+	int rgn_offset;
+
+	*rgn_idx = lpn >> hpb->entries_per_rgn_shift;
+	rgn_offset = lpn & hpb->entries_per_rgn_mask;
+	*srgn_idx = rgn_offset >> hpb->entries_per_srgn_shift;
+	*offset = rgn_offset & hpb->entries_per_srgn_mask;
+}
+
+static void
+ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
+			    u32 lpn, u64 ppn, unsigned int transfer_len)
+{
+	unsigned char *cdb = lrbp->ucd_req_ptr->sc.cdb;
+
+	cdb[0] = UFSHPB_READ;
+
+	put_unaligned_be64(ppn, &cdb[6]);
+	cdb[14] = transfer_len;
+}
+
+/* routine : READ10 -> HPB_READ */
 void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 {
+	struct ufshpb_lu *hpb;
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+	struct scsi_cmnd *cmd = lrbp->cmd;
+	u32 lpn;
+	u64 ppn;
+	unsigned long flags;
+	int transfer_len, rgn_idx, srgn_idx, srgn_offset;
+	int err = 0;
+
+	hpb = ufshpb_get_hpb_data(cmd);
+	if (!hpb)
+		return;
+
+	WARN_ON(hpb->lun != cmd->device->lun);
+	if (!ufshpb_is_write_discard_cmd(cmd) &&
+	    !ufshpb_is_read_cmd(cmd))
+		return;
+
+	transfer_len = ufshpb_get_len(cmd);
+	if (unlikely(!transfer_len))
+		return;
+
+	lpn = ufshpb_get_lpn(cmd);
+	ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset);
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	/* If the command type is WRITE or DISCARD, set the bitmap as dirty */
+	if (ufshpb_is_write_discard_cmd(cmd)) {
+		spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+		ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
+				     transfer_len);
+		spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+		return;
+	}
+
+	WARN_ON(!ufshpb_is_read_cmd(cmd));
+
+	if (!ufshpb_is_support_chunk(transfer_len))
+		return;
+
+	spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+	if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
+				  transfer_len)) {
+		atomic_inc(&hpb->stats.miss_cnt);
+		spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+		return;
+	}
+
+	ppn = ufshpb_get_ppn(hpb, srgn->mctx, srgn_offset, &err);
+	spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+	if (unlikely(err)) {
+		/*
+		 * In this case, the region state is active,
+		 * but the ppn table is not allocated.
+		 * The ppn table must always be allocated while the
+		 * region is in the active state.
+		 */
+		WARN_ON(true);
+		dev_err(hba->dev, "ufshpb_get_ppn failed. err %d\n", err);
+		return;
+	}
+
+	ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len);
+
+	atomic_inc(&hpb->stats.hit_cnt);
 }
 
 static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,