From patchwork Fri Jun 16 06:32:08 2023
X-Patchwork-Submitter: Eric Lin
X-Patchwork-Id: 13282100
X-Patchwork-Delegate: mail@conchuod.ie
From: Eric Lin
To: conor@kernel.org, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org,
    palmer@dabbelt.com, paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
    maz@kernel.org, chenhuacai@kernel.org, baolu.lu@linux.intel.com,
    will@kernel.org, kan.liang@linux.intel.com, nnac123@linux.ibm.com,
    pierre.gondois@arm.com, huangguangbin2@huawei.com, jgross@suse.com,
    chao.gao@intel.com, maobibo@loongson.cn, linux-riscv@lists.infradead.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, dslin1010@gmail.com
Cc: Eric Lin, Nick Hu, Zong Li
Subject: [PATCH 1/3] soc: sifive: Add SiFive private L2 cache support
Date: Fri, 16 Jun 2023 14:32:08 +0800
Message-Id: <20230616063210.19063-2-eric.lin@sifive.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230616063210.19063-1-eric.lin@sifive.com>
References: <20230616063210.19063-1-eric.lin@sifive.com>

Add the SiFive private L2 cache driver. It prints the cache configuration
at boot and registers CPU hotplug callbacks that save and restore the
per-hart cache state.
Signed-off-by: Eric Lin
Signed-off-by: Nick Hu
Reviewed-by: Zong Li
---
 drivers/soc/sifive/Kconfig            |   8 +
 drivers/soc/sifive/Makefile           |   1 +
 drivers/soc/sifive/sifive_pl2.h       |  25 ++++
 drivers/soc/sifive/sifive_pl2_cache.c | 202 ++++++++++++++++++++++++++
 include/linux/cpuhotplug.h            |   1 +
 5 files changed, 237 insertions(+)
 create mode 100644 drivers/soc/sifive/sifive_pl2.h
 create mode 100644 drivers/soc/sifive/sifive_pl2_cache.c

diff --git a/drivers/soc/sifive/Kconfig b/drivers/soc/sifive/Kconfig
index e86870be34c9..573564295058 100644
--- a/drivers/soc/sifive/Kconfig
+++ b/drivers/soc/sifive/Kconfig
@@ -7,4 +7,12 @@ config SIFIVE_CCACHE
 	help
 	  Support for the composable cache controller on SiFive platforms.
 
+config SIFIVE_PL2
+	bool "SiFive private L2 cache controller"
+	help
+	  Support for the private L2 cache controller on SiFive platforms.
+	  The SiFive Private L2 Cache Controller is per hart and communicates
+	  with both the upstream L1 caches and downstream L3 cache or memory,
+	  enabling a high-performance cache subsystem.
+
 endif
diff --git a/drivers/soc/sifive/Makefile b/drivers/soc/sifive/Makefile
index 1f5dc339bf82..707493e1c691 100644
--- a/drivers/soc/sifive/Makefile
+++ b/drivers/soc/sifive/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 
 obj-$(CONFIG_SIFIVE_CCACHE)	+= sifive_ccache.o
+obj-$(CONFIG_SIFIVE_PL2)	+= sifive_pl2_cache.o
diff --git a/drivers/soc/sifive/sifive_pl2.h b/drivers/soc/sifive/sifive_pl2.h
new file mode 100644
index 000000000000..57aa1019d5ed
--- /dev/null
+++ b/drivers/soc/sifive/sifive_pl2.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 SiFive, Inc.
+ *
+ */
+
+#ifndef _SIFIVE_PL2_H
+#define _SIFIVE_PL2_H
+
+#define SIFIVE_PL2_CONFIG1_OFFSET	0x1000
+#define SIFIVE_PL2_CONFIG0_OFFSET	0x1008
+#define SIFIVE_PL2_PMCLIENT_OFFSET	0x2800
+
+struct sifive_pl2_state {
+	void __iomem *pl2_base;
+	u32 config1;
+	u32 config0;
+	u64 pmclientfilter;
+};
+
+int sifive_pl2_pmu_init(void);
+int sifive_pl2_pmu_probe(struct device_node *pl2_node,
+			 void __iomem *pl2_base, int cpu);
+
+#endif /* _SIFIVE_PL2_H */
diff --git a/drivers/soc/sifive/sifive_pl2_cache.c b/drivers/soc/sifive/sifive_pl2_cache.c
new file mode 100644
index 000000000000..aeb51d576af9
--- /dev/null
+++ b/drivers/soc/sifive/sifive_pl2_cache.c
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * SiFive private L2 cache controller driver
+ *
+ * Copyright (C) 2018-2023 SiFive, Inc.
+ */
+
+#define pr_fmt(fmt) "pL2CACHE: " fmt
+
+#include <linux/cpu_pm.h>
+#include <linux/cpuhotplug.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include "sifive_pl2.h"
+
+static DEFINE_PER_CPU(struct sifive_pl2_state, sifive_pl2_state);
+
+static void sifive_pl2_state_save(struct sifive_pl2_state *pl2_state)
+{
+	void __iomem *pl2_base = pl2_state->pl2_base;
+
+	if (!pl2_base)
+		return;
+
+	pl2_state->config1 = readl(pl2_base + SIFIVE_PL2_CONFIG1_OFFSET);
+	pl2_state->config0 = readl(pl2_base + SIFIVE_PL2_CONFIG0_OFFSET);
+	pl2_state->pmclientfilter = readq(pl2_base + SIFIVE_PL2_PMCLIENT_OFFSET);
+}
+
+static void sifive_pl2_state_restore(struct sifive_pl2_state *pl2_state)
+{
+	void __iomem *pl2_base = pl2_state->pl2_base;
+
+	if (!pl2_base)
+		return;
+
+	writel(pl2_state->config1, pl2_base + SIFIVE_PL2_CONFIG1_OFFSET);
+	writel(pl2_state->config0, pl2_base + SIFIVE_PL2_CONFIG0_OFFSET);
+	writeq(pl2_state->pmclientfilter, pl2_base + SIFIVE_PL2_PMCLIENT_OFFSET);
+}
+
+/*
+ * CPU hotplug callback functions
+ */
+static int sifive_pl2_online_cpu(unsigned int cpu)
+{
+	struct sifive_pl2_state *pl2_state = this_cpu_ptr(&sifive_pl2_state);
+
+	sifive_pl2_state_restore(pl2_state);
+
+	return 0;
+}
+
+static int sifive_pl2_offline_cpu(unsigned int cpu)
+{
+	struct sifive_pl2_state *pl2_state = this_cpu_ptr(&sifive_pl2_state);
+
+	/* Save the pL2 state */
+	sifive_pl2_state_save(pl2_state);
+
+	return 0;
+}
+
+/*
+ * PM notifier for suspend-to-RAM
+ */
+#ifdef CONFIG_CPU_PM
+static int sifive_pl2_pm_notify(struct notifier_block *b, unsigned long cmd,
+				void *v)
+{
+	struct sifive_pl2_state *pl2_state = this_cpu_ptr(&sifive_pl2_state);
+
+	switch (cmd) {
+	case CPU_PM_ENTER:
+		/* Save the pL2 state */
+		sifive_pl2_state_save(pl2_state);
+		break;
+	case CPU_PM_ENTER_FAILED:
+	case CPU_PM_EXIT:
+		sifive_pl2_state_restore(pl2_state);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block sifive_pl2_pm_notifier_block = {
+	.notifier_call = sifive_pl2_pm_notify,
+};
+
+static inline void sifive_pl2_pm_init(void)
+{
+	cpu_pm_register_notifier(&sifive_pl2_pm_notifier_block);
+}
+
+#else
+static inline void sifive_pl2_pm_init(void) { }
+#endif /* CONFIG_CPU_PM */
+
+static const struct of_device_id sifive_pl2_cache_of_ids[] = {
+	{ .compatible = "sifive,pL2Cache0" },
+	{ .compatible = "sifive,pL2Cache1" },
+	{ /* sentinel value */ }
+};
+
+static void pl2_config_read(void __iomem *pl2_base, int cpu)
+{
+	u32 regval, bank, way, set, cacheline;
+
+	regval = readl(pl2_base);
+	bank = regval & 0xff;
+	pr_info("CPU: %d\n", cpu);
+	pr_info("No. of banks in the cache: %d\n", bank);
+	way = (regval & 0xff00) >> 8;
+	pr_info("No. of ways per bank: %d\n", way);
+	set = (regval & 0xff0000) >> 16;
+	pr_info("Total sets: %llu\n", (uint64_t)1 << set);
+	cacheline = (regval & 0xff000000) >> 24;
+	pr_info("Bytes per cache block: %llu\n", (uint64_t)1 << cacheline);
+	pr_info("Size: %d\n", way << (set + cacheline));
+}
+
+static int sifive_pl2_cache_dev_probe(struct platform_device *pdev)
+{
+	struct resource *res;
+	int cpu, ret = -EINVAL;
+	struct device_node *cpu_node, *pl2_node;
+	struct sifive_pl2_state *pl2_state = NULL;
+	void __iomem *pl2_base;
+
+	/* Traverse all CPU nodes to find the one mapping to this pL2 node. */
+	for_each_cpu(cpu, cpu_possible_mask) {
+		cpu_node = of_cpu_device_node_get(cpu);
+		pl2_node = of_parse_phandle(cpu_node, "next-level-cache", 0);
+
+		/* Found it! */
+		if (dev_of_node(&pdev->dev) == pl2_node) {
+			/* Use cpu to get its percpu data sifive_pl2_state. */
+			pl2_state = per_cpu_ptr(&sifive_pl2_state, cpu);
+			break;
+		}
+	}
+
+	if (!pl2_state) {
+		pr_err("Could not find the corresponding cpu_node in the devicetree.\n");
+		goto early_err;
+	}
+
+	/* Set base address of select and counter registers. */
+	pl2_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+	if (IS_ERR(pl2_base)) {
+		ret = PTR_ERR(pl2_base);
+		goto early_err;
+	}
+
+	/* Print pL2 configs. */
+	pl2_config_read(pl2_base, cpu);
+	pl2_state->pl2_base = pl2_base;
+
+	return 0;
+
+early_err:
+	return ret;
+}
+
+static struct platform_driver sifive_pl2_cache_driver = {
+	.driver = {
+		.name = "SiFive-pL2-cache",
+		.of_match_table = sifive_pl2_cache_of_ids,
+	},
+	.probe = sifive_pl2_cache_dev_probe,
+};
+
+static int __init sifive_pl2_cache_init(void)
+{
+	int ret;
+
+	ret = cpuhp_setup_state(CPUHP_AP_RISCV_SIFIVE_PL2_ONLINE,
+				"soc/sifive/pl2:online",
+				sifive_pl2_online_cpu,
+				sifive_pl2_offline_cpu);
+	if (ret < 0) {
+		pr_err("Failed to register CPU hotplug notifier: %d\n", ret);
+		return ret;
+	}
+
+	ret = platform_driver_register(&sifive_pl2_cache_driver);
+	if (ret) {
+		pr_err("Failed to register sifive_pl2_cache_driver: %d\n", ret);
+		return ret;
+	}
+
+	sifive_pl2_pm_init();
+
+	return ret;
+}
+
+device_initcall(sifive_pl2_cache_init);
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 0f1001dca0e0..35cd5ba0030b 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -207,6 +207,7 @@ enum cpuhp_state {
 	CPUHP_AP_IRQ_AFFINITY_ONLINE,
 	CPUHP_AP_BLK_MQ_ONLINE,
 	CPUHP_AP_ARM_MVEBU_SYNC_CLOCKS,
+	CPUHP_AP_RISCV_SIFIVE_PL2_ONLINE,
 	CPUHP_AP_X86_INTEL_EPB_ONLINE,
 	CPUHP_AP_PERF_ONLINE,
 	CPUHP_AP_PERF_X86_ONLINE,

From patchwork Fri Jun 16 06:32:09 2023
X-Patchwork-Submitter: Eric Lin
X-Patchwork-Id: 13282101
X-Patchwork-Delegate: mail@conchuod.ie
From: Eric Lin
To: conor@kernel.org, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org,
    palmer@dabbelt.com, paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
    maz@kernel.org, chenhuacai@kernel.org, baolu.lu@linux.intel.com,
    will@kernel.org, kan.liang@linux.intel.com, nnac123@linux.ibm.com,
    pierre.gondois@arm.com, huangguangbin2@huawei.com, jgross@suse.com,
    chao.gao@intel.com, maobibo@loongson.cn, linux-riscv@lists.infradead.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, dslin1010@gmail.com
Cc: Greentime Hu, Eric Lin, Zong Li, Nick Hu
Subject: [PATCH 2/3] soc: sifive: Add SiFive private L2 cache PMU driver
Date: Fri, 16 Jun 2023 14:32:09 +0800
Message-Id: <20230616063210.19063-3-eric.lin@sifive.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230616063210.19063-1-eric.lin@sifive.com>
References: <20230616063210.19063-1-eric.lin@sifive.com>

From: Greentime Hu

Add the SiFive private L2 cache PMU driver. Users can profile with the
perf tool by event name or by raw event ID.
Example:

$ perf stat -C 0 -e /sifive_pl2_pmu/inner_acquire_block_btot/ \
	-e /sifive_pl2_pmu/inner_acquire_block_ntob/ \
	-e /sifive_pl2_pmu/inner_acquire_block_ntot/ ls

 Performance counter stats for 'CPU(s) 0':

               300      sifive_pl2_pmu/inner_acquire_block_btot/
             17801      sifive_pl2_pmu/inner_acquire_block_ntob/
              5253      sifive_pl2_pmu/inner_acquire_block_ntot/

       0.088917326 seconds time elapsed

$ perf stat -C 0 -e /sifive_pl2_pmu/event=0x10001/ \
	-e /sifive_pl2_pmu/event=0x4001/ \
	-e /sifive_pl2_pmu/event=0x8001/ ls

 Performance counter stats for 'CPU(s) 0':

               251      sifive_pl2_pmu/event=0x10001/
              2620      sifive_pl2_pmu/event=0x4001/
               644      sifive_pl2_pmu/event=0x8001/

       0.092827110 seconds time elapsed

Signed-off-by: Greentime Hu
Signed-off-by: Eric Lin
Reviewed-by: Zong Li
Reviewed-by: Nick Hu
---
 drivers/soc/sifive/Kconfig            |   9 +
 drivers/soc/sifive/Makefile           |   1 +
 drivers/soc/sifive/sifive_pl2.h       |  20 +
 drivers/soc/sifive/sifive_pl2_cache.c |  16 +
 drivers/soc/sifive/sifive_pl2_pmu.c   | 669 ++++++++++++++++++++++++++
 include/linux/cpuhotplug.h            |   1 +
 6 files changed, 716 insertions(+)
 create mode 100644 drivers/soc/sifive/sifive_pl2_pmu.c

diff --git a/drivers/soc/sifive/Kconfig b/drivers/soc/sifive/Kconfig
index 573564295058..deeb752287c7 100644
--- a/drivers/soc/sifive/Kconfig
+++ b/drivers/soc/sifive/Kconfig
@@ -15,4 +15,13 @@ config SIFIVE_PL2
 	  with both the upstream L1 caches and downstream L3 cache or memory,
 	  enabling a high-performance cache subsystem.
 
+config SIFIVE_PL2_PMU
+	bool "SiFive private L2 cache PMU"
+	depends on SIFIVE_PL2 && PERF_EVENTS
+	default y
+	help
+	  Support for the private L2 cache controller performance monitor unit
+	  (PMU) on SiFive platforms. The SiFive private L2 PMU can monitor
+	  each hart's L2 cache performance and consists of a set of
+	  programmable event counters and their event selector registers.
 endif
diff --git a/drivers/soc/sifive/Makefile b/drivers/soc/sifive/Makefile
index 707493e1c691..4bb3f97ef3f8 100644
--- a/drivers/soc/sifive/Makefile
+++ b/drivers/soc/sifive/Makefile
@@ -2,3 +2,4 @@
 obj-$(CONFIG_SIFIVE_CCACHE)	+= sifive_ccache.o
 obj-$(CONFIG_SIFIVE_PL2)	+= sifive_pl2_cache.o
+obj-$(CONFIG_SIFIVE_PL2_PMU)	+= sifive_pl2_pmu.o
diff --git a/drivers/soc/sifive/sifive_pl2.h b/drivers/soc/sifive/sifive_pl2.h
index 57aa1019d5ed..21207b0d6092 100644
--- a/drivers/soc/sifive/sifive_pl2.h
+++ b/drivers/soc/sifive/sifive_pl2.h
@@ -7,10 +7,16 @@
 #ifndef _SIFIVE_PL2_H
 #define _SIFIVE_PL2_H
 
+#define SIFIVE_PL2_PMU_MAX_COUNTERS	64
+#define SIFIVE_PL2_SELECT_BASE_OFFSET	0x2000
+#define SIFIVE_PL2_COUNTER_BASE_OFFSET	0x3000
+
 #define SIFIVE_PL2_CONFIG1_OFFSET	0x1000
 #define SIFIVE_PL2_CONFIG0_OFFSET	0x1008
 #define SIFIVE_PL2_PMCLIENT_OFFSET	0x2800
 
+#define SIFIVE_PL2_COUNTER_MASK	GENMASK_ULL(63, 0)
+
 struct sifive_pl2_state {
 	void __iomem *pl2_base;
 	u32 config1;
@@ -18,6 +24,20 @@ struct sifive_pl2_state {
 	u64 pmclientfilter;
 };
 
+struct sifive_pl2_pmu_event {
+	struct perf_event **events;
+	void __iomem *event_counter_base;
+	void __iomem *event_select_base;
+	u32 counters;
+	DECLARE_BITMAP(used_mask, SIFIVE_PL2_PMU_MAX_COUNTERS);
+};
+
+struct sifive_pl2_pmu {
+	struct pmu *pmu;
+	struct hlist_node node;
+	cpumask_t cpumask;
+};
+
 int sifive_pl2_pmu_init(void);
 int sifive_pl2_pmu_probe(struct device_node *pl2_node,
 			 void __iomem *pl2_base, int cpu);
diff --git a/drivers/soc/sifive/sifive_pl2_cache.c b/drivers/soc/sifive/sifive_pl2_cache.c
index aeb51d576af9..56d67879de54 100644
--- a/drivers/soc/sifive/sifive_pl2_cache.c
+++ b/drivers/soc/sifive/sifive_pl2_cache.c
@@ -161,6 +161,14 @@ static int sifive_pl2_cache_dev_probe(struct platform_device *pdev)
 	pl2_config_read(pl2_base, cpu);
 	pl2_state->pl2_base = pl2_base;
 
+	if (IS_ENABLED(CONFIG_SIFIVE_PL2_PMU)) {
+		ret = sifive_pl2_pmu_probe(pl2_node, pl2_base, cpu);
+		if (ret) {
+			pr_err("Failed to probe sifive_pl2_pmu driver.\n");
+			goto early_err;
+		}
+	}
+
 	return 0;
 
 early_err:
@@ -196,6 +204,14 @@ static int __init sifive_pl2_cache_init(void)
 
 	sifive_pl2_pm_init();
 
+	if (IS_ENABLED(CONFIG_SIFIVE_PL2_PMU)) {
+		ret = sifive_pl2_pmu_init();
+		if (ret) {
+			pr_err("Failed to init sifive_pl2_pmu driver.\n");
+			return ret;
+		}
+	}
+
 	return ret;
 }
diff --git a/drivers/soc/sifive/sifive_pl2_pmu.c b/drivers/soc/sifive/sifive_pl2_pmu.c
new file mode 100644
index 000000000000..848f0445437a
--- /dev/null
+++ b/drivers/soc/sifive/sifive_pl2_pmu.c
@@ -0,0 +1,669 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * SiFive private L2 cache controller PMU driver
+ *
+ * Copyright (C) 2018-2023 SiFive, Inc.
+ */
+
+#define pr_fmt(fmt) "pL2CACHE_PMU: " fmt
+
+#include <linux/bitmap.h>
+#include <linux/cpuhotplug.h>
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/mod_devicetable.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/sysfs.h>
+#include "sifive_pl2.h"
+
+static bool pl2pmu_init_done;
+static struct sifive_pl2_pmu sifive_pl2_pmu;
+static DEFINE_PER_CPU(struct sifive_pl2_pmu_event, sifive_pl2_pmu_event);
+
+#ifndef readq
+static inline unsigned long long readq(void __iomem *addr)
+{
+	return readl(addr) | (((unsigned long long)readl(addr + 4)) << 32LL);
+}
+#endif
+
+#ifndef writeq
+static inline void writeq(unsigned long long v, void __iomem *addr)
+{
+	writel(lower_32_bits(v), addr);
+	writel(upper_32_bits(v), addr + 4);
+}
+#endif
+
+/*
+ * Add sysfs attributes
+ *
+ * We export:
+ * - formats, used by perf user space and other tools to configure events
+ * - events, used by perf user space and other tools to create events
+ *   symbolically, e.g.:
+ *     perf stat -a -e sifive_pl2_pmu/inner_put_partial_data_hit/ ls
+ *     perf stat -a -e sifive_pl2_pmu/event=0x101/ ls
+ * - cpumask, used by perf user space and other tools to know on which CPUs
+ *   the events may be monitored
+ */
+
+/* cpumask */
+static ssize_t cpumask_show(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &sifive_pl2_pmu.cpumask);
+}
+
+static DEVICE_ATTR_RO(cpumask);
+
+static struct attribute *sifive_pl2_pmu_cpumask_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static const struct attribute_group sifive_pl2_pmu_cpumask_attr_group = {
+	.attrs = sifive_pl2_pmu_cpumask_attrs,
+};
+
+/* formats */
+static ssize_t sifive_pl2_pmu_format_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct dev_ext_attribute *eattr;
+
+	eattr = container_of(attr, struct dev_ext_attribute, attr);
+	return sysfs_emit(buf, "%s\n", (char *)eattr->var);
+}
+
+#define SIFIVE_PL2_PMU_PMU_FORMAT_ATTR(_name, _config)			\
+	(&((struct dev_ext_attribute[]) {				\
+		{ .attr = __ATTR(_name, 0444, sifive_pl2_pmu_format_show, NULL),\
+		  .var = (void *)_config, }				\
+	})[0].attr.attr)
+
+static struct attribute *sifive_pl2_pmu_formats[] = {
+	SIFIVE_PL2_PMU_PMU_FORMAT_ATTR(event, "config:0-63"),
+	NULL,
+};
+
+static struct attribute_group sifive_pl2_pmu_format_group = {
+	.name = "format",
+	.attrs = sifive_pl2_pmu_formats,
+};
+
+/* events */
+
+static ssize_t sifive_pl2_pmu_event_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *page)
+{
+	struct perf_pmu_events_attr *pmu_attr;
+
+	pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+	return sysfs_emit(page, "event=0x%02llx\n", pmu_attr->id);
+}
+
+#define SET_EVENT_SELECT(_event, _set)	(((u64)1 << ((_event) + 8)) | (_set))
+#define PL2_PMU_EVENT_ATTR(_name, _event, _set)				\
+	PMU_EVENT_ATTR_ID(_name, sifive_pl2_pmu_event_show,		\
+			  SET_EVENT_SELECT(_event, _set))
+
+enum pl2_pmu_event_set1 {
+	INNER_PUT_FULL_DATA = 0,
+	INNER_PUT_PARTIAL_DATA,
+	INNER_ARITHMETIC_DATA,
+	INNER_GET,
+	INNER_PREFETCH_READ,
+	INNER_PREFETCH_WRITE,
+	INNER_ACQUIRE_BLOCK_NTOB,
+	INNER_ACQUIRE_BLOCK_NTOT,
+	INNER_ACQUIRE_BLOCK_BTOT,
+	INNER_ACQUIRE_PERM_NTOT,
+	INNER_ACQUIRE_PERM_BTOT,
+	INNER_RELEASE_TTOB,
+	INNER_RELEASE_TTON,
+	INNER_RELEASE_BTON,
+	INNER_RELEASE_DATA_TTOB,
+	INNER_RELEASE_DATA_TTON,
+	INNER_RELEASE_DATA_BTON,
+	INNER_RELEASE_DATA_TTOT,
+	INNER_PROBE_BLOCK_TOT,
+	INNER_PROBE_BLOCK_TOB,
+	INNER_PROBE_BLOCK_TON,
+	INNER_PROBE_PERM_TON,
+	INNER_PROBE_ACK_TTOB,
+	INNER_PROBE_ACK_TTON,
+	INNER_PROBE_ACK_BTON,
+	INNER_PROBE_ACK_TTOT,
+	INNER_PROBE_ACK_BTOB,
+	INNER_PROBE_ACK_NTON,
+	INNER_PROBE_ACK_DATA_TTOB,
+	INNER_PROBE_ACK_DATA_TTON,
+	INNER_PROBE_ACK_DATA_TTOT,
+	PL2_PMU_MAX_EVENT1_IDX
+};
+
+enum pl2_pmu_event_set2 {
+	INNER_PUT_FULL_DATA_HIT = 0,
+	INNER_PUT_PARTIAL_DATA_HIT,
+	INNER_ARITHMETIC_DATA_HIT,
+	INNER_GET_HIT,
+	INNER_PREFETCH_READ_HIT,
+	INNER_ACQUIRE_BLOCK_NTOB_HIT,
+	INNER_ACQUIRE_PERM_NTOT_HIT,
+	INNER_RELEASE_TTOB_HIT,
+	INNER_RELEASE_DATA_TTOB_HIT,
+	OUTER_PROBE_BLOCK_TOT_HIT,
+	INNER_PUT_FULL_DATA_HIT_SHARED,
+	INNER_PUT_PARTIAL_DATA_HIT_SHARED,
+	INNER_ARITHMETIC_DATA_HIT_SHARED,
+	INNER_GET_HIT_SHARED,
+	INNER_PREFETCH_READ_HIT_SHARED,
+	INNER_ACQUIRE_BLOCK_HIT_SHARED,
+	INNER_ACQUIRE_PERM_NTOT_HIT_SHARED,
+	OUTER_PROBE_BLOCK_TOT_HIT_SHARED,
+	OUTER_PROBE_BLOCK_TOT_HIT_DIRTY,
+	PL2_PMU_MAX_EVENT2_IDX
+};
+
+enum pl2_pmu_event_set3 {
+	OUTER_PUT_FULL_DATA = 0,
+	OUTER_PUT_PARTIAL_DATA,
+	OUTER_ARITHMETIC_DATA,
+	OUTER_GET,
+	OUTER_PREFETCH_READ,
+	OUTER_PREFETCH_WRITE,
+	OUTER_ACQUIRE_BLOCK_NTOB,
+	OUTER_ACQUIRE_BLOCK_NTOT,
+	OUTER_ACQUIRE_BLOCK_BTOT,
+	OUTER_ACQUIRE_PERM_NTOT,
+	OUTER_ACQUIRE_PERM_BTOT,
+	OUTER_RELEASE_TTOB,
+	OUTER_RELEASE_TTON,
+	OUTER_RELEASE_BTON,
+	OUTER_RELEASE_DATA_TTOB,
+	OUTER_RELEASE_DATA_TTON,
+	OUTER_RELEASE_DATA_BTON,
+	OUTER_RELEASE_DATA_TTOT,
+	OUTER_PROBE_BLOCK_TOT,
+	OUTER_PROBE_BLOCK_TOB,
+	OUTER_PROBE_BLOCK_TON,
+	OUTER_PROBE_PERM_TON,
+	OUTER_PROBE_ACK_TTOB,
+	OUTER_PROBE_ACK_TTON,
+	OUTER_PROBE_ACK_BTON,
+	OUTER_PROBE_ACK_TTOT,
+	OUTER_PROBE_ACK_BTOB,
+	OUTER_PROBE_ACK_NTON,
+	OUTER_PROBE_ACK_DATA_TTOB,
+	OUTER_PROBE_ACK_DATA_TTON,
+	OUTER_PROBE_ACK_DATA_TTOT,
+	PL2_PMU_MAX_EVENT3_IDX
+};
+
+enum pl2_pmu_event_set4 {
+	INNER_HINT_HITS_MSHR = 0,
+	INNER_READ_HITS_MSHR,
+	INNER_WRITE_HITS_MSHR,
+	INNER_READ_REPLAY,
+	INNER_WRITE_REPLAY,
+	OUTER_PROBE_REPLAY,
+	PL2_PMU_MAX_EVENT4_IDX
+};
+
+static struct attribute *sifive_pl2_pmu_events[] = {
+	PL2_PMU_EVENT_ATTR(inner_put_full_data, INNER_PUT_FULL_DATA, 1),
+	PL2_PMU_EVENT_ATTR(inner_put_partial_data, INNER_PUT_PARTIAL_DATA, 1),
+	PL2_PMU_EVENT_ATTR(inner_arithmetic_data, INNER_ARITHMETIC_DATA, 1),
+	PL2_PMU_EVENT_ATTR(inner_get, INNER_GET, 1),
+	PL2_PMU_EVENT_ATTR(inner_prefetch_read, INNER_PREFETCH_READ, 1),
+	PL2_PMU_EVENT_ATTR(inner_prefetch_write, INNER_PREFETCH_WRITE, 1),
+	PL2_PMU_EVENT_ATTR(inner_acquire_block_ntob, INNER_ACQUIRE_BLOCK_NTOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_acquire_block_ntot, INNER_ACQUIRE_BLOCK_NTOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_acquire_block_btot, INNER_ACQUIRE_BLOCK_BTOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_acquire_perm_ntot, INNER_ACQUIRE_PERM_NTOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_acquire_perm_btot, INNER_ACQUIRE_PERM_BTOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_ttob, INNER_RELEASE_TTOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_tton, INNER_RELEASE_TTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_bton, INNER_RELEASE_BTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_data_ttob, INNER_RELEASE_DATA_TTOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_data_tton, INNER_RELEASE_DATA_TTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_data_bton, INNER_RELEASE_DATA_BTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_release_data_ttot, INNER_RELEASE_DATA_TTOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_block_tot, INNER_PROBE_BLOCK_TOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_block_tob, INNER_PROBE_BLOCK_TOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_block_ton, INNER_PROBE_BLOCK_TON, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_perm_ton, INNER_PROBE_PERM_TON, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_ttob, INNER_PROBE_ACK_TTOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_tton, INNER_PROBE_ACK_TTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_bton, INNER_PROBE_ACK_BTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_ttot, INNER_PROBE_ACK_TTOT, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_btob, INNER_PROBE_ACK_BTOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_nton, INNER_PROBE_ACK_NTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_data_ttob, INNER_PROBE_ACK_DATA_TTOB, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_data_tton, INNER_PROBE_ACK_DATA_TTON, 1),
+	PL2_PMU_EVENT_ATTR(inner_probe_ack_data_ttot, INNER_PROBE_ACK_DATA_TTOT, 1),
+
+	PL2_PMU_EVENT_ATTR(inner_put_full_data_hit, INNER_PUT_FULL_DATA_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_put_partial_data_hit, INNER_PUT_PARTIAL_DATA_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_arithmetic_data_hit, INNER_ARITHMETIC_DATA_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_get_hit, INNER_GET_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_prefetch_read_hit, INNER_PREFETCH_READ_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_acquire_block_ntob_hit, INNER_ACQUIRE_BLOCK_NTOB_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_acquire_perm_ntot_hit, INNER_ACQUIRE_PERM_NTOT_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_release_ttob_hit, INNER_RELEASE_TTOB_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_release_data_ttob_hit, INNER_RELEASE_DATA_TTOB_HIT, 2),
+	PL2_PMU_EVENT_ATTR(outer_probe_block_tot_hit, OUTER_PROBE_BLOCK_TOT_HIT, 2),
+	PL2_PMU_EVENT_ATTR(inner_put_full_data_hit_shared, INNER_PUT_FULL_DATA_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(inner_put_partial_data_hit_shared, INNER_PUT_PARTIAL_DATA_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(inner_arithmetic_data_hit_shared, INNER_ARITHMETIC_DATA_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(inner_get_hit_shared, INNER_GET_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(inner_prefetch_read_hit_shared, INNER_PREFETCH_READ_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(inner_acquire_block_hit_shared, INNER_ACQUIRE_BLOCK_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(inner_acquire_perm_hit_shared, INNER_ACQUIRE_PERM_NTOT_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(outer_probe_block_tot_hit_shared, OUTER_PROBE_BLOCK_TOT_HIT_SHARED, 2),
+	PL2_PMU_EVENT_ATTR(outer_probe_block_tot_hit_dirty, OUTER_PROBE_BLOCK_TOT_HIT_DIRTY, 2),
+
+	PL2_PMU_EVENT_ATTR(outer_put_full_data, OUTER_PUT_FULL_DATA, 3),
+	PL2_PMU_EVENT_ATTR(outer_put_partial_data, OUTER_PUT_PARTIAL_DATA, 3),
+	PL2_PMU_EVENT_ATTR(outer_arithmetic_data, OUTER_ARITHMETIC_DATA, 3),
+	PL2_PMU_EVENT_ATTR(outer_get, OUTER_GET, 3),
+	PL2_PMU_EVENT_ATTR(outer_prefetch_read, OUTER_PREFETCH_READ, 3),
+	PL2_PMU_EVENT_ATTR(outer_prefetch_write, OUTER_PREFETCH_WRITE, 3),
+	PL2_PMU_EVENT_ATTR(outer_acquire_block_ntob, OUTER_ACQUIRE_BLOCK_NTOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_acquire_block_ntot, OUTER_ACQUIRE_BLOCK_NTOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_acquire_block_btot, OUTER_ACQUIRE_BLOCK_BTOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_acquire_perm_ntot, OUTER_ACQUIRE_PERM_NTOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_acquire_perm_btot, OUTER_ACQUIRE_PERM_BTOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_ttob, OUTER_RELEASE_TTOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_tton, OUTER_RELEASE_TTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_bton, OUTER_RELEASE_BTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_data_ttob, OUTER_RELEASE_DATA_TTOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_data_tton, OUTER_RELEASE_DATA_TTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_data_bton, OUTER_RELEASE_DATA_BTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_release_data_ttot, OUTER_RELEASE_DATA_TTOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_block_tot, OUTER_PROBE_BLOCK_TOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_block_tob, OUTER_PROBE_BLOCK_TOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_block_ton, OUTER_PROBE_BLOCK_TON, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_perm_ton, OUTER_PROBE_PERM_TON, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_ttob, OUTER_PROBE_ACK_TTOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_tton, OUTER_PROBE_ACK_TTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_bton, OUTER_PROBE_ACK_BTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_ttot, OUTER_PROBE_ACK_TTOT, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_btob, OUTER_PROBE_ACK_BTOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_nton, OUTER_PROBE_ACK_NTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_data_ttob, OUTER_PROBE_ACK_DATA_TTOB, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_data_tton, OUTER_PROBE_ACK_DATA_TTON, 3),
+	PL2_PMU_EVENT_ATTR(outer_probe_ack_data_ttot, OUTER_PROBE_ACK_DATA_TTOT, 3),
+
+	PL2_PMU_EVENT_ATTR(inner_hint_hits_mshr, INNER_HINT_HITS_MSHR, 4),
+	PL2_PMU_EVENT_ATTR(inner_read_hits_mshr, INNER_READ_HITS_MSHR, 4),
+	PL2_PMU_EVENT_ATTR(inner_write_hits_mshr, INNER_WRITE_HITS_MSHR, 4),
+	PL2_PMU_EVENT_ATTR(inner_read_replay, INNER_READ_REPLAY, 4),
+	PL2_PMU_EVENT_ATTR(inner_write_replay, INNER_WRITE_REPLAY, 4),
+	PL2_PMU_EVENT_ATTR(outer_probe_replay, OUTER_PROBE_REPLAY, 4),
+	NULL
+};
+
+static struct attribute_group sifive_pl2_pmu_events_group = {
+	.name = "events",
+	.attrs = sifive_pl2_pmu_events,
+};
+
+/*
+ * Per PMU device attribute groups
+ */
+
+static const struct attribute_group *sifive_pl2_pmu_attr_grps[] = {
+	&sifive_pl2_pmu_format_group,
+	&sifive_pl2_pmu_events_group,
+	&sifive_pl2_pmu_cpumask_attr_group,
+	NULL,
+};
+
+/*
+ * Low-level functions: reading and writing counters
+ */
+
+static inline u64 read_counter(int idx)
+{
+	struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event);
+
+	if (WARN_ON_ONCE(idx < 0 || idx >= ptr->counters))
+		return -EINVAL;
+
+	return readq(ptr->event_counter_base + idx * 8);
+}
+
+static inline void write_counter(int idx, u64 val)
+{
+	struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event);
+
+	writeq(val, ptr->event_counter_base + idx * 8);
+}
+
+/*
+ * pmu->read: read and update the counter
+ */
+static void sifive_pl2_pmu_read(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count;
+	u64 oldval;
+	int idx = hwc->idx;
+	u64 delta;
+
+	do {
+		prev_raw_count = local64_read(&hwc->prev_count);
+		new_raw_count = read_counter(idx);
+
+		oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+					 new_raw_count);
+	} while (oldval != prev_raw_count);
+
+	/* delta is the value to update the counter we maintain in the kernel.
*/ + delta = (new_raw_count - prev_raw_count) & SIFIVE_PL2_COUNTER_MASK; + local64_add(delta, &event->count); +} + +/* + * State transition functions: + * + * stop()/start() & add()/del() + */ + +/* + * pmu->stop: stop the counter + */ +static void sifive_pl2_pmu_stop(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event); + + /* Disable this counter to count events */ + writeq(0, ptr->event_select_base + (hwc->idx * 8)); + + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED); + hwc->state |= PERF_HES_STOPPED; + + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) { + sifive_pl2_pmu_read(event); + hwc->state |= PERF_HES_UPTODATE; + } +} + +/* + * pmu->start: start the event. + */ +static void sifive_pl2_pmu_start(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event); + + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED))) + return; + + if (flags & PERF_EF_RELOAD) + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE)); + + hwc->state = 0; + perf_event_update_userpage(event); + + /* Set initial value 0 */ + local64_set(&hwc->prev_count, 0); + write_counter(hwc->idx, 0); + + /* Enable counter to count these events */ + writeq(hwc->config, ptr->event_select_base + (hwc->idx * 8)); +} + +/* + * pmu->add: add the event to PMU. + */ +static int sifive_pl2_pmu_add(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event); + int idx; + u64 config = event->attr.config; + u64 set = config & 0xff; + u64 ev_type = config >> 8; + + /* Check if this is a valid set and event. 
*/ + switch (set) { + case 1: + if (ev_type >= (BIT_ULL(PL2_PMU_MAX_EVENT1_IDX))) + return -ENOENT; + break; + case 2: + if (ev_type >= (BIT_ULL(PL2_PMU_MAX_EVENT2_IDX))) + return -ENOENT; + break; + case 3: + if (ev_type >= (BIT_ULL(PL2_PMU_MAX_EVENT3_IDX))) + return -ENOENT; + break; + case 4: + if (ev_type >= (BIT_ULL(PL2_PMU_MAX_EVENT4_IDX))) + return -ENOENT; + break; + case 0: + default: + return -ENOENT; + } + + idx = find_first_zero_bit(ptr->used_mask, ptr->counters); + /* The counters are all in use. */ + if (idx == ptr->counters) + return -EAGAIN; + + set_bit(idx, ptr->used_mask); + + /* Found an available counter idx for this event. */ + hwc->idx = idx; + ptr->events[hwc->idx] = event; + + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED; + + if (flags & PERF_EF_START) + sifive_pl2_pmu_start(event, PERF_EF_RELOAD); + + perf_event_update_userpage(event); + return 0; +} + +/* + * pmu->del: delete the event from PMU. + */ +static void sifive_pl2_pmu_del(struct perf_event *event, int flags) +{ + struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event); + struct hw_perf_event *hwc = &event->hw; + + /* Stop the counter and release this counter. */ + ptr->events[hwc->idx] = NULL; + sifive_pl2_pmu_stop(event, PERF_EF_UPDATE); + clear_bit(hwc->idx, ptr->used_mask); + perf_event_update_userpage(event); +} + +/* + * Event Initialization/Finalization + */ + +static int sifive_pl2_pmu_event_init(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + /* Don't allocate hw counter yet. 
*/ + hwc->idx = -1; + hwc->config = event->attr.config; + + return 0; +} + +/* + * Initialization + */ + +static struct pmu sifive_pl2_generic_pmu = { + .name = "sifive_pl2_pmu", + .task_ctx_nr = perf_invalid_context, + .event_init = sifive_pl2_pmu_event_init, + .add = sifive_pl2_pmu_add, + .del = sifive_pl2_pmu_del, + .start = sifive_pl2_pmu_start, + .stop = sifive_pl2_pmu_stop, + .read = sifive_pl2_pmu_read, + .attr_groups = sifive_pl2_pmu_attr_grps, + .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT, +}; + +static struct sifive_pl2_pmu sifive_pl2_pmu = { + .pmu = &sifive_pl2_generic_pmu, +}; + +/* + * CPU Hotplug call back function + */ +static int sifive_pl2_pmu_online_cpu(unsigned int cpu, struct hlist_node *node) +{ + struct sifive_pl2_pmu *ptr = hlist_entry_safe(node, struct sifive_pl2_pmu, node); + + if (!cpumask_test_cpu(cpu, &ptr->cpumask)) + cpumask_set_cpu(cpu, &ptr->cpumask); + + return 0; +} + +static int sifive_pl2_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) +{ + struct sifive_pl2_pmu *ptr = hlist_entry_safe(node, struct sifive_pl2_pmu, node); + + /* Clear this cpu in cpumask */ + cpumask_test_and_clear_cpu(cpu, &ptr->cpumask); + + return 0; +} + +/* + * PM notifer for suspend to ram + */ +#ifdef CONFIG_CPU_PM +static int sifive_pl2_pmu_pm_notify(struct notifier_block *b, unsigned long cmd, + void *v) +{ + struct sifive_pl2_pmu_event *ptr = this_cpu_ptr(&sifive_pl2_pmu_event); + struct perf_event *event; + int idx; + int enabled_event = bitmap_weight(ptr->used_mask, ptr->counters); + + if (!enabled_event) + return NOTIFY_OK; + + for (idx = 0; idx < ptr->counters; idx++) { + event = ptr->events[idx]; + if (!event) + continue; + + switch (cmd) { + case CPU_PM_ENTER: + /* Stop and update the counter */ + sifive_pl2_pmu_stop(event, PERF_EF_UPDATE); + break; + case CPU_PM_ENTER_FAILED: + case CPU_PM_EXIT: + /* + * Restore and enable the counter. 
+ * + * Requires RCU read locking to be functional, + * wrap the call within RCU_NONIDLE to make the + * RCU subsystem aware this cpu is not idle from + * an RCU perspective for the sifive_pl2_pmu_start() call + * duration. + */ + RCU_NONIDLE(sifive_pl2_pmu_start(event, PERF_EF_RELOAD)); + break; + default: + break; + } + } + + return NOTIFY_OK; +} + +static struct notifier_block sifive_pl2_pmu_pm_notifier_block = { + .notifier_call = sifive_pl2_pmu_pm_notify, +}; + +static inline void sifive_pl2_pmu_pm_init(void) +{ + cpu_pm_register_notifier(&sifive_pl2_pmu_pm_notifier_block); +} + +#else +static inline void sifive_pl2_pmu_pm_init(void) { } +#endif /* CONFIG_CPU_PM */ + +int sifive_pl2_pmu_probe(struct device_node *pl2_node, + void __iomem *pl2_base, int cpu) +{ + struct sifive_pl2_pmu_event *ptr = per_cpu_ptr(&sifive_pl2_pmu_event, cpu); + int ret = -EINVAL; + + /* Get counter numbers. */ + ret = of_property_read_u32(pl2_node, "sifive,perfmon-counters", &ptr->counters); + if (ret) { + pr_err("Not found sifive,perfmon-counters property\n"); + goto early_err; + } + pr_info("perfmon-counters: %d for CPU %d\n", ptr->counters, cpu); + + /* Allocate perf_event. 
*/ + ptr->events = kcalloc(ptr->counters, sizeof(struct perf_event), GFP_KERNEL); + if (!ptr->events) + return -ENOMEM; + + ptr->event_select_base = pl2_base + SIFIVE_PL2_SELECT_BASE_OFFSET; + ptr->event_counter_base = pl2_base + SIFIVE_PL2_COUNTER_BASE_OFFSET; + + if (!pl2pmu_init_done) { + ret = perf_pmu_register(sifive_pl2_pmu.pmu, sifive_pl2_pmu.pmu->name, -1); + if (ret) { + cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_SIFIVE_PL2_PMU_ONLINE, + &sifive_pl2_pmu.node); + pr_err("Failed to register sifive_pl2_pmu.pmu: %d\n", ret); + } + sifive_pl2_pmu_pm_init(); + pl2pmu_init_done = true; + } + + return 0; + +early_err: + return ret; +} + +int sifive_pl2_pmu_init(void) +{ + int ret = 0; + + ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_RISCV_SIFIVE_PL2_PMU_ONLINE, + "perf/sifive/pl2pmu:online", + sifive_pl2_pmu_online_cpu, + sifive_pl2_pmu_offline_cpu); + if (ret) + pr_err("Failed to register CPU hotplug notifier %d\n", ret); + + ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_SIFIVE_PL2_PMU_ONLINE, + &sifive_pl2_pmu.node); + if (ret) + pr_err("Failed to add hotplug instance: %d\n", ret); + + return ret; +} diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 35cd5ba0030b..9c1a91c8cdaa 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -243,6 +243,7 @@ enum cpuhp_state { CPUHP_AP_PERF_POWERPC_HV_24x7_ONLINE, CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE, CPUHP_AP_PERF_CSKY_ONLINE, + CPUHP_AP_PERF_RISCV_SIFIVE_PL2_PMU_ONLINE, CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, CPUHP_AP_RANDOM_ONLINE, From patchwork Fri Jun 16 06:32:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Lin X-Patchwork-Id: 13282102 X-Patchwork-Delegate: mail@conchuod.ie Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4409CEB64DA for ; Fri, 16 Jun 2023 06:33:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Vcfy7SOC0LfUixsPK/uD9BjsjNw9vnhbgE/ZS/yLJIU=; b=xAJv6Pn1sOfnuw K1+iPXA1xNWe/4ka9mdnRElOoP9HYhl1JBHAovPahi+iJAjxeNBKLpCmtq+1zgCuKHv+CaSk/xIN+ irUQUNFKsuZyTk7EtRSosi2QcdbVQvqRYpfVO7hO9581XTXux1xalfh0+QrWFVke92dDqjG1T9Rdm DpBBHNe70hzeRGN9HmnXBBa1DxAoztBafwokY4sldIT6eHgw5Pn3L6sY8if5N08XmfAIbgG7OJaFb I4H8GWC20MjHvr+q1lYZLgxcf9cO37yFyRRZRYTZbeirIB9e3NZW9RaBsdQ4zEkNYCtAt2qMgPJg6 rQ84ZFYEEe5rZdxhhjZQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qA31R-00H31U-12; Fri, 16 Jun 2023 06:33:25 +0000 Received: from mail-pg1-x52d.google.com ([2607:f8b0:4864:20::52d]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qA31O-00H30C-1C for linux-riscv@lists.infradead.org; Fri, 16 Jun 2023 06:33:23 +0000 Received: by mail-pg1-x52d.google.com with SMTP id 41be03b00d2f7-52cb8e5e9f5so342369a12.0 for ; Thu, 15 Jun 2023 23:33:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1686897201; x=1689489201; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+E9GNUVpev13YAg9DBkxIqHR4HyT2vWr1j4jNPBgL6s=; b=bZE3yEb4sbWGRu5PWYoEkhv0ZERLgUP1HmCsCGGPh270f/mYXpWtkHpOQtb5eHnQRf 
AfyibpT6bBu0TbdxD6Oy60lR0/Ccf/VOmVii6vt1dCBQT+zqn16499+oMEw75PNVuT6I o59WdRbKIO/KexXHUiWXdgmq1OG44e4RnZiU0EH8mKd9pCnqOC+fezodViMU2rEFQVYf bdG8TGqnpy4wHXYMPttBzJw8LQE3unM3f9ORI4DL1HfxnWut+AfKirXU1HFOoXCB6j91 R19uQzzAkEuF1YbyIwkyYv8sy2xOOMyLtdRb3ROivlfTluJYUfdyb9cfrIeUuVnpIzL3 4nJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1686897201; x=1689489201; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+E9GNUVpev13YAg9DBkxIqHR4HyT2vWr1j4jNPBgL6s=; b=lxDrkwUIbMrdnFkZr0M5JkqdvE5lXj3wFDIR59phNhyKDYxK8h0DwE3PaYHQH9Kwlh eLIpC9Iz8f66ohLNHZauB1CQ1WWah29/Vhv6ABRHWfYAi87Rf5Lz1Y4W7+6jFbew7JsN 1LLGfb8y3/n1UFSXUgJk13WqLBZQpz67HhzHEdGsRhC3YDJPa/C+sfWfm6A9Lcu8Cn/W X9PBMUdDyj7XH7akVkDU/WMUs9Qc1HkxY99zhjXhqNC61rYxXWoQZI30hDm/IJ1PacaL fJylIkNDlYHDnBZiQqaCSGoajCvBBUNAC3c+krwSAx+tYq1sxxJv3uwkf/r4I8AFGOe/ tVnQ== X-Gm-Message-State: AC+VfDy7cobdcKzxGRm68MgFjlUr682aKcca345sHX3THoLo+wB4+WGj TY6i/b+W5BVf3T+XNYXTSZjM1g== X-Google-Smtp-Source: ACHHUZ5XeuETTVg3HS7ANsWZZaudTqsBAvzVANoKKQGWACp3oqIesLrX/wtyl3tWrLQLkk82CTT+AA== X-Received: by 2002:a17:90a:ad89:b0:25e:a9d1:8ad4 with SMTP id s9-20020a17090aad8900b0025ea9d18ad4mr1319918pjq.17.1686897201373; Thu, 15 Jun 2023 23:33:21 -0700 (PDT) Received: from hsinchu16.internal.sifive.com (59-124-168-89.hinet-ip.hinet.net. 
[59.124.168.89]) by smtp.gmail.com with ESMTPSA id u11-20020a17090a410b00b0025023726fc4sm617596pjf.26.2023.06.15.23.33.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 15 Jun 2023 23:33:20 -0700 (PDT) From: Eric Lin To: conor@kernel.org, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, palmer@dabbelt.com, paul.walmsley@sifive.com, aou@eecs.berkeley.edu, maz@kernel.org, chenhuacai@kernel.org, baolu.lu@linux.intel.com, will@kernel.org, kan.liang@linux.intel.com, nnac123@linux.ibm.com, pierre.gondois@arm.com, huangguangbin2@huawei.com, jgross@suse.com, chao.gao@intel.com, maobibo@loongson.cn, linux-riscv@lists.infradead.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, dslin1010@gmail.com Cc: Eric Lin , Zong Li , Nick Hu Subject: [PATCH 3/3] dt-bindings: riscv: sifive: Add SiFive Private L2 cache controller Date: Fri, 16 Jun 2023 14:32:10 +0800 Message-Id: <20230616063210.19063-4-eric.lin@sifive.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230616063210.19063-1-eric.lin@sifive.com> References: <20230616063210.19063-1-eric.lin@sifive.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230615_233322_413966_76DF3457 X-CRM114-Status: GOOD ( 11.00 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org This add YAML DT binding documentation for SiFive Private L2 cache controller Signed-off-by: Eric Lin Reviewed-by: Zong Li Reviewed-by: Nick Hu --- .../bindings/riscv/sifive,pL2Cache0.yaml | 81 +++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 Documentation/devicetree/bindings/riscv/sifive,pL2Cache0.yaml diff --git a/Documentation/devicetree/bindings/riscv/sifive,pL2Cache0.yaml 
b/Documentation/devicetree/bindings/riscv/sifive,pL2Cache0.yaml new file mode 100644 index 000000000000..b5d8d4a39dde --- /dev/null +++ b/Documentation/devicetree/bindings/riscv/sifive,pL2Cache0.yaml @@ -0,0 +1,81 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +# Copyright (C) 2023 SiFive, Inc. +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/riscv/sifive,pL2Cache0.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: SiFive Private L2 Cache Controller + +maintainers: + - Greentime Hu + - Eric Lin + +description: + The SiFive Private L2 Cache Controller is per hart and communicates with both the upstream + L1 caches and downstream L3 cache or memory, enabling a high-performance cache subsystem. + All the properties in ePAPR/DeviceTree specification applies for this platform. + +allOf: + - $ref: /schemas/cache-controller.yaml# + +select: + properties: + compatible: + contains: + enum: + - sifive,pL2Cache0 + - sifive,pL2Cache1 + + required: + - compatible + +properties: + compatible: + items: + - enum: + - sifive,pL2Cache0 + - sifive,pL2Cache1 + + cache-block-size: + const: 64 + + cache-level: + const: 2 + + cache-sets: + const: 512 + + cache-size: + const: 262144 + + cache-unified: true + + reg: + maxItems: 1 + + next-level-cache: true + +additionalProperties: false + +required: + - compatible + - cache-block-size + - cache-level + - cache-sets + - cache-size + - cache-unified + - reg + +examples: + - | + pl2@10104000 { + compatible = "sifive,pL2Cache0"; + cache-block-size = <64>; + cache-level = <2>; + cache-sets = <512>; + cache-size = <262144>; + cache-unified; + reg = <0x10104000 0x4000>; + next-level-cache = <&L4>; + };