From patchwork Wed Oct 18 23:16:21 2023
X-Patchwork-Submitter: "David E. Box"
X-Patchwork-Id: 13428083
From: "David E. Box"
To: linux-kernel@vger.kernel.org, platform-driver-x86@vger.kernel.org,
	ilpo.jarvinen@linux.intel.com, rajvi.jingar@linux.intel.com
Subject: [PATCH V4 14/17] platform/x86/intel/pmc: Retrieve LPM information using Intel PMT
Date: Wed, 18 Oct 2023 16:16:21 -0700
Message-Id: <20231018231624.1044633-15-david.e.box@linux.intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231018231624.1044633-1-david.e.box@linux.intel.com>
References: <20231018231624.1044633-1-david.e.box@linux.intel.com>
X-Mailing-List: platform-driver-x86@vger.kernel.org

From: Xi Pardee

On supported platforms, the low power mode (LPM) requirements for
entering each idle substate are described in Platform Monitoring
Technology (PMT) telemetry entries. Provide a function that platform
code can use to find and read these requirements from the telemetry
entries.

Signed-off-by: Xi Pardee
Signed-off-by: David E. Box
Reviewed-by: Ilpo Järvinen
---
V4 - no change
V3 - no change
V2 - remove extra parens
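
For illustration only (not part of this series): a minimal sketch of how a
platform init path might consume the new helper. example_core_probe() is a
hypothetical name; pmc_core_ssram_init() and pmc_core_ssram_get_lpm_reqs()
are the driver functions touched by this series.

#include "core.h"	/* struct pmc_dev and the driver prototypes */

/* Hypothetical caller sketch, not a definitive implementation. */
static int example_core_probe(struct pmc_dev *pmcdev)
{
	int ret;

	/* Discover the PMC SSRAM PCI device and register maps first. */
	ret = pmc_core_ssram_init(pmcdev);
	if (ret)
		return ret;

	/*
	 * Fill pmc->lpm_req_regs for each PMC from PMT telemetry.
	 * -ENODEV means no SSRAM device was found; -EPROBE_DEFER means
	 * the telemetry endpoint is not registered yet, so propagate
	 * the error to allow a later retry.
	 */
	return pmc_core_ssram_get_lpm_reqs(pmcdev);
}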
 drivers/platform/x86/intel/pmc/core.h       |   3 +
 drivers/platform/x86/intel/pmc/core_ssram.c | 135 ++++++++++++++++++++
 2 files changed, 138 insertions(+)

diff --git a/drivers/platform/x86/intel/pmc/core.h b/drivers/platform/x86/intel/pmc/core.h
index edaa70067e41..85b6f6ae4995 100644
--- a/drivers/platform/x86/intel/pmc/core.h
+++ b/drivers/platform/x86/intel/pmc/core.h
@@ -320,6 +320,7 @@ struct pmc_reg_map {
 	const u32 lpm_status_offset;
 	const u32 lpm_live_status_offset;
 	const u32 etr3_offset;
+	const u8 *lpm_reg_index;
 };
 
 /**
@@ -329,6 +330,7 @@ struct pmc_reg_map {
  * specific attributes
  */
 struct pmc_info {
+	u32 guid;
 	u16 devid;
 	const struct pmc_reg_map *map;
 };
@@ -486,6 +488,7 @@ extern const struct pmc_bit_map *mtl_ioem_lpm_maps[];
 extern const struct pmc_reg_map mtl_ioem_reg_map;
 
 extern void pmc_core_get_tgl_lpm_reqs(struct platform_device *pdev);
+extern int pmc_core_ssram_get_lpm_reqs(struct pmc_dev *pmcdev);
 extern int pmc_core_send_ltr_ignore(struct pmc_dev *pmcdev, u32 value);
 
 int pmc_core_resume_common(struct pmc_dev *pmcdev);
diff --git a/drivers/platform/x86/intel/pmc/core_ssram.c b/drivers/platform/x86/intel/pmc/core_ssram.c
index 936aa0d5f452..f007964156bc 100644
--- a/drivers/platform/x86/intel/pmc/core_ssram.c
+++ b/drivers/platform/x86/intel/pmc/core_ssram.c
@@ -24,6 +24,140 @@
 #define SSRAM_IOE_OFFSET		0x68
 #define SSRAM_DEVID_OFFSET		0x70
 
+/* PCH query */
+#define LPM_HEADER_OFFSET	1
+#define LPM_REG_COUNT		28
+#define LPM_MODE_OFFSET		1
+
+static u32 pmc_core_find_guid(struct pmc_info *list, const struct pmc_reg_map *map)
+{
+	for (; list->map; ++list)
+		if (list->map == map)
+			return list->guid;
+
+	return 0;
+}
+
+static int pmc_core_get_lpm_req(struct pmc_dev *pmcdev, struct pmc *pmc)
+{
+	struct telem_endpoint *ep;
+	const u8 *lpm_indices;
+	int num_maps, mode_offset = 0;
+	int ret, mode, i;
+	int lpm_size;
+	u32 guid;
+
+	lpm_indices = pmc->map->lpm_reg_index;
+	num_maps = pmc->map->lpm_num_maps;
+	lpm_size = LPM_MAX_NUM_MODES * num_maps;
+
+	guid = pmc_core_find_guid(pmcdev->regmap_list, pmc->map);
+	if (!guid)
+		return -ENXIO;
+
+	ep = pmt_telem_find_and_register_endpoint(pmcdev->ssram_pcidev, guid, 0);
+	if (IS_ERR(ep)) {
+		dev_dbg(&pmcdev->pdev->dev, "couldn't get telem endpoint %ld",
+			PTR_ERR(ep));
+		return -EPROBE_DEFER;
+	}
+
+	pmc->lpm_req_regs = devm_kzalloc(&pmcdev->pdev->dev,
+					 lpm_size * sizeof(u32),
+					 GFP_KERNEL);
+	if (!pmc->lpm_req_regs) {
+		ret = -ENOMEM;
+		goto unregister_ep;
+	}
+
+	/*
+	 * PMC Low Power Mode (LPM) table
+	 *
+	 * In telemetry space, the LPM table contains a 4 byte header followed
+	 * by 8 consecutive mode blocks (one for each LPM mode). Each block
+	 * has a 4 byte header followed by a set of registers that describe the
+	 * IP state requirements for the given mode. The IP mapping is platform
+	 * specific but the same for each block, making for easy analysis.
+	 * Platforms only use a subset of the space to track the requirements
+	 * for their IPs. Callers provide the requirement registers they use as
+	 * a list of indices. Each requirement register is associated with an
+	 * IP map that's maintained by the caller.
+	 *
+	 * Header
+	 * +----+----------------------------+----------------------------+
+	 * |  0 |          REVISION          |       ENABLED MODES        |
+	 * +----+--------------+-------------+-------------+--------------+
+	 *
+	 * Low Power Mode 0 Block
+	 * +----+--------------+-------------+-------------+--------------+
+	 * |  1 |    SUB ID    |     SIZE    |    MAJOR    |    MINOR     |
+	 * +----+--------------+-------------+-------------+--------------+
+	 * |  2 |                  LPM0 Requirements 0                    |
+	 * +----+---------------------------------------------------------+
+	 * |    |                           ...                           |
+	 * +----+---------------------------------------------------------+
+	 * | 29 |                  LPM0 Requirements 27                   |
+	 * +----+---------------------------------------------------------+
+	 *
+	 * ...
+	 *
+	 * Low Power Mode 7 Block
+	 * +----+--------------+-------------+-------------+--------------+
+	 * |    |    SUB ID    |     SIZE    |    MAJOR    |    MINOR     |
+	 * +----+--------------+-------------+-------------+--------------+
+	 * | 60 |                  LPM7 Requirements 0                    |
+	 * +----+---------------------------------------------------------+
+	 * |    |                           ...                           |
+	 * +----+---------------------------------------------------------+
+	 * | 87 |                  LPM7 Requirements 27                   |
+	 * +----+---------------------------------------------------------+
+	 *
+	 */
+	mode_offset = LPM_HEADER_OFFSET + LPM_MODE_OFFSET;
+	pmc_for_each_mode(i, mode, pmcdev) {
+		u32 *req_offset = pmc->lpm_req_regs + (mode * num_maps);
+		int m;
+
+		for (m = 0; m < num_maps; m++) {
+			u8 sample_id = lpm_indices[m] + mode_offset;
+
+			ret = pmt_telem_read32(ep, sample_id, req_offset, 1);
+			if (ret) {
+				dev_err(&pmcdev->pdev->dev,
+					"couldn't read Low Power Mode requirements: %d\n", ret);
+				devm_kfree(&pmcdev->pdev->dev, pmc->lpm_req_regs);
+				goto unregister_ep;
+			}
+			++req_offset;
+		}
+		mode_offset += LPM_REG_COUNT + LPM_MODE_OFFSET;
+	}
+
+unregister_ep:
+	pmt_telem_unregister_endpoint(ep);
+
+	return ret;
+}
+
+int pmc_core_ssram_get_lpm_reqs(struct pmc_dev *pmcdev)
+{
+	int ret, i;
+
+	if (!pmcdev->ssram_pcidev)
+		return -ENODEV;
+
+	for (i = 0; i < ARRAY_SIZE(pmcdev->pmcs); ++i) {
+		if (!pmcdev->pmcs[i])
+			continue;
+
+		ret = pmc_core_get_lpm_req(pmcdev, pmcdev->pmcs[i]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static void pmc_add_pmt(struct pmc_dev *pmcdev, u64 ssram_base,
			 void __iomem *ssram)
 {
@@ -215,3 +349,4 @@ int pmc_core_ssram_init(struct pmc_dev *pmcdev)
 	return ret;
 }
 MODULE_IMPORT_NS(INTEL_VSEC);
+MODULE_IMPORT_NS(INTEL_PMT_TELEMETRY);
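
As an aside on the layout comment above, a minimal sketch (not part of the
patch) of the telemetry sample-id arithmetic that pmc_core_get_lpm_req()
performs. lpm_sample_id() is a hypothetical name; nth_mode is the position
of the mode in the pmc_for_each_mode() iteration order, matching how
mode_offset advances in the loop.

/*
 * Sample id of requirement register reg_index within the nth iterated
 * mode block: one sample for the table header, one block header per
 * mode up to and including this one, and LPM_REG_COUNT requirement
 * samples for each earlier mode block.
 */
static inline u32 lpm_sample_id(unsigned int nth_mode, u8 reg_index)
{
	return LPM_HEADER_OFFSET + LPM_MODE_OFFSET +
	       nth_mode * (LPM_REG_COUNT + LPM_MODE_OFFSET) + reg_index;
}

For the first mode and reg_index 0 this gives sample 2, matching the
"LPM0 Requirements 0" row in the layout comment.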