From patchwork Thu Mar 20 17:41:13 2025
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 14024220
From: Jonathan Cameron
To: Yicong Yang
CC: Yushan Wang, Lorenzo Pieralisi, Mark Rutland, Catalin Marinas, Will Deacon, Dan Williams
Subject: [RFC PATCH 1/6] memregion: Support fine grained invalidate by cpu_cache_invalidate_memregion()
Date: Thu, 20 Mar 2025 17:41:13 +0000
Message-ID: <20250320174118.39173-2-Jonathan.Cameron@huawei.com>
In-Reply-To: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
References: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Yicong Yang

Extend cpu_cache_invalidate_memregion() to support invalidating a specific range of memory. Control of the type of invalidation is left until use cases turn up; for now everything is Clean and Invalidate.
Signed-off-by: Yicong Yang
---
 arch/x86/mm/pat/set_memory.c | 2 +-
 drivers/cxl/core/region.c    | 6 +++++-
 drivers/nvdimm/region.c      | 3 ++-
 drivers/nvdimm/region_devs.c | 3 ++-
 include/linux/memregion.h    | 8 ++++++--
 5 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ef4514d64c05..5d2adec1b0d4 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -354,7 +354,7 @@ bool cpu_cache_has_invalidate_memregion(void)
 }
 EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, "DEVMEM");

-int cpu_cache_invalidate_memregion(int res_desc)
+int cpu_cache_invalidate_memregion(int res_desc, phys_addr_t start, size_t len)
 {
 	if (WARN_ON_ONCE(!cpu_cache_has_invalidate_memregion()))
 		return -ENXIO;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index e8d11a988fd9..6d1c9eee4cd9 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -238,7 +238,11 @@ static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
 		}
 	}

-	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
+	if (!cxlr->params.res)
+		return -ENXIO;
+	cpu_cache_invalidate_memregion(IORES_DESC_CXL,
+				       cxlr->params.res->start,
+				       resource_size(cxlr->params.res));

 	return 0;
 }
diff --git a/drivers/nvdimm/region.c b/drivers/nvdimm/region.c
index 88dc062af5f8..033e40f4dc52 100644
--- a/drivers/nvdimm/region.c
+++ b/drivers/nvdimm/region.c
@@ -110,7 +110,8 @@ static void nd_region_remove(struct device *dev)
 	 * here is ok.
 	 */
 	if (cpu_cache_has_invalidate_memregion())
-		cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY);
+		cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY,
+					       0, -1);
 }

 static int child_notify(struct device *dev, void *data)
diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index 37417ce5ec7b..eefd294a0f13 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -90,7 +90,8 @@ static int nd_region_invalidate_memregion(struct nd_region *nd_region)
 		}
 	}

-	cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY);
+	cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY,
+				       0, -1);
 out:
 	for (i = 0; i < nd_region->ndr_mappings; i++) {
 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
diff --git a/include/linux/memregion.h b/include/linux/memregion.h
index c01321467789..91d088ee3695 100644
--- a/include/linux/memregion.h
+++ b/include/linux/memregion.h
@@ -28,6 +28,9 @@ static inline void memregion_free(int id)
  * cpu_cache_invalidate_memregion - drop any CPU cached data for
  *     memregions described by @res_desc
  * @res_desc: one of the IORES_DESC_* types
+ * @start: start physical address of the target memory region.
+ * @len: length of the target memory region. -1 for all the regions of
+ *     the target type.
  *
  * Perform cache maintenance after a memory event / operation that
  * changes the contents of physical memory in a cache-incoherent manner.
@@ -46,7 +49,7 @@ static inline void memregion_free(int id)
 * the cache maintenance.
 */
 #ifdef CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
-int cpu_cache_invalidate_memregion(int res_desc);
+int cpu_cache_invalidate_memregion(int res_desc, phys_addr_t start, size_t len);
 bool cpu_cache_has_invalidate_memregion(void);
 #else
 static inline bool cpu_cache_has_invalidate_memregion(void)
@@ -54,7 +57,8 @@ static inline bool cpu_cache_has_invalidate_memregion(void)
 {
 	return false;
 }

-static inline int cpu_cache_invalidate_memregion(int res_desc)
+static inline int cpu_cache_invalidate_memregion(int res_desc,
+						 phys_addr_t start, size_t len)
 {
 	WARN_ON_ONCE("CPU cache invalidation required");
 	return -ENXIO;

From patchwork Thu Mar 20 17:41:14 2025
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 14024228
From: Jonathan Cameron
Subject: [RFC PATCH 2/6] arm64: Support ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
Date: Thu, 20 Mar 2025 17:41:14 +0000
Message-ID: <20250320174118.39173-3-Jonathan.Cameron@huawei.com>
In-Reply-To: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
References: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>

From: Yicong Yang

ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION provides the mechanism for invalidating certain memory regions in a cache-incoherent manner. It is currently used by NVDIMM and CXL memory. The invalidation is mainly done by a system component and is implementation defined, per the relevant specifications.
Provide a method for platforms to register their own invalidate method and implement ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION.

Signed-off-by: Yicong Yang
Signed-off-by: Jonathan Cameron
---
 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/cacheflush.h | 14 ++++++++++
 arch/arm64/mm/flush.c               | 42 +++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 940343beb3d4..11ecd20ec3b8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -21,6 +21,7 @@ config ARM64
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_CC_PLATFORM
+	select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
 	select ARCH_HAS_CRC32
 	select ARCH_HAS_CRC_T10DIF if KERNEL_MODE_NEON
 	select ARCH_HAS_CURRENT_STACK_POINTER
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 28ab96e808ef..b8eb8738c965 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -139,6 +139,20 @@ static __always_inline void icache_inval_all_pou(void)
 	dsb(ish);
 }

+#ifdef CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
+
+#include
+
+struct system_cache_flush_method {
+	int (*invalidate_memregion)(int res_desc,
+				    phys_addr_t start, size_t len);
+};
+
+void arm64_set_sys_cache_flush_method(const struct system_cache_flush_method *method);
+void arm64_clr_sys_cache_flush_method(const struct system_cache_flush_method *method);
+
+#endif /* CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION */
+
 #include

 #endif /* __ASM_CACHEFLUSH_H */
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 013eead9b695..d822406d925d 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -100,3 +101,44 @@ void arch_invalidate_pmem(void *addr, size_t size)
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
+
+#ifdef CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
+
+static const
struct system_cache_flush_method *scfm_data;
+DEFINE_SPINLOCK(scfm_lock);
+
+void arm64_set_sys_cache_flush_method(const struct system_cache_flush_method *method)
+{
+	guard(spinlock_irqsave)(&scfm_lock);
+	if (scfm_data || !method || !method->invalidate_memregion)
+		return;
+
+	scfm_data = method;
+}
+EXPORT_SYMBOL_GPL(arm64_set_sys_cache_flush_method);
+
+void arm64_clr_sys_cache_flush_method(const struct system_cache_flush_method *method)
+{
+	guard(spinlock_irqsave)(&scfm_lock);
+	if (scfm_data && scfm_data == method)
+		scfm_data = NULL;
+}
+
+int cpu_cache_invalidate_memregion(int res_desc, phys_addr_t start, size_t len)
+{
+	guard(spinlock_irqsave)(&scfm_lock);
+	if (!scfm_data)
+		return -EOPNOTSUPP;
+
+	return scfm_data->invalidate_memregion(res_desc, start, len);
+}
+EXPORT_SYMBOL_NS_GPL(cpu_cache_invalidate_memregion, "DEVMEM");
+
+bool cpu_cache_has_invalidate_memregion(void)
+{
+	guard(spinlock_irqsave)(&scfm_lock);
+	return !!scfm_data;
+}
+EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, "DEVMEM");
+
+#endif /* CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION */

From patchwork Thu Mar 20 17:41:15 2025
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 14024229
From: Jonathan Cameron
Subject: [RFC PATCH 3/6] cache: coherency device class
Date: Thu, 20 Mar 2025 17:41:15 +0000
Message-ID: <20250320174118.39173-4-Jonathan.Cameron@huawei.com>
In-Reply-To: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
References: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
Some systems contain devices that are capable of issuing certain cache invalidation messages to all components in the system. A typical use is when the memory backing a physical address is being changed, such as when a region is configured for CXL or a dynamic capacity event occurs. These perform a similar action to the WBINVD instruction on x86 but may provide finer granularity and/or a richer set of actions.

Add a tiny device class to which drivers may register. Each driver provides an ops structure; the first op is Write Back and Invalidate by PA Range. The driver may over-invalidate. An optional completion check is also provided; if present, it should be called to ensure that the action has finished.

If multiple agents are present in the system, each should register with this class and the core code will issue the invalidate to all of them before checking for completion on each. This avoids the need for filtering in the core code, which can become complex when interleave, potentially across different cache coherency hardware, is going on; it is easier to tell everyone and let those who don't care do nothing.

RFC questions:
1. Location. Conor, do you mind us putting this in drivers/cache? I'm more than happy to add a MAINTAINERS entry and not make this your problem!
2. How do we figure out whether all agents have turned up yet? No idea how to ensure this.
3. Is just iterating over devices registered with the class the simplest solution?
4. Given we have a nice device class, to which we can add sysfs attributes, what info might be useful to userspace? It might be useful to indicate how expensive a flush might be, for example, or where in the topology of the system it is being triggered.
Signed-off-by: Jonathan Cameron
---
 drivers/cache/Kconfig           |   6 ++
 drivers/cache/Makefile          |   2 +
 drivers/cache/coherency_core.c  | 130 ++++++++++++++++++++++++++++++++
 include/linux/cache_coherency.h |  60 +++++++++++++++
 4 files changed, 198 insertions(+)

diff --git a/drivers/cache/Kconfig b/drivers/cache/Kconfig
index db51386c663a..45dc4b2935e7 100644
--- a/drivers/cache/Kconfig
+++ b/drivers/cache/Kconfig
@@ -1,6 +1,12 @@
 # SPDX-License-Identifier: GPL-2.0
 menu "Cache Drivers"

+config CACHE_COHERENCY_CLASS
+	bool "Cache coherency control class"
+	help
+	  Class to which coherency control drivers register allowing core kernel
+	  subsystems to issue invalidations and similar coherency operations.
+
 config AX45MP_L2_CACHE
 	bool "Andes Technology AX45MP L2 Cache controller"
 	depends on RISCV
diff --git a/drivers/cache/Makefile b/drivers/cache/Makefile
index 55c5e851034d..b72b20f4248f 100644
--- a/drivers/cache/Makefile
+++ b/drivers/cache/Makefile
@@ -3,3 +3,5 @@
 obj-$(CONFIG_AX45MP_L2_CACHE)	+= ax45mp_cache.o
 obj-$(CONFIG_SIFIVE_CCACHE)	+= sifive_ccache.o
 obj-$(CONFIG_STARFIVE_STARLINK_CACHE)	+= starfive_starlink_cache.o
+
+obj-$(CONFIG_CACHE_COHERENCY_CLASS)	+= coherency_core.o
diff --git a/drivers/cache/coherency_core.c b/drivers/cache/coherency_core.c
new file mode 100644
index 000000000000..52cb4ceae00c
--- /dev/null
+++ b/drivers/cache/coherency_core.c
@@ -0,0 +1,130 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Class to manage OS controlled coherency agents within the system.
+ * Specifically to enable operations such as write back and invalidate.
+ *
+ * Copyright: Huawei 2025
+ * Some elements based on fwctl class as an example of a modern
+ * lightweight class.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+static DEFINE_IDA(cache_coherency_ida);
+
+static void cache_coherency_device_release(struct device *device)
+{
+	struct cache_coherency_device *ccd =
+		container_of(device, struct cache_coherency_device, dev);
+
+	ida_free(&cache_coherency_ida, ccd->id);
+}
+
+static struct class cache_coherency_class = {
+	.name = "cache_coherency",
+	.dev_release = cache_coherency_device_release,
+};
+
+static int cache_inval_one(struct device *dev, void *data)
+{
+	struct cache_coherency_device *ccd =
+		container_of(dev, struct cache_coherency_device, dev);
+
+	if (!ccd->ops)
+		return -EINVAL;
+
+	return ccd->ops->wbinv(ccd, data);
+}
+
+static int cache_inval_done_one(struct device *dev, void *data)
+{
+	struct cache_coherency_device *ccd =
+		container_of(dev, struct cache_coherency_device, dev);
+
+	if (!ccd->ops)
+		return -EINVAL;
+
+	return ccd->ops->done(ccd);
+}
+
+static int cache_invalidate_memregion(int res_desc,
+				      phys_addr_t addr, size_t size)
+{
+	int ret;
+	struct cc_inval_params params = {
+		.addr = addr,
+		.size = size,
+	};
+
+	ret = class_for_each_device(&cache_coherency_class, NULL, &params,
+				    cache_inval_one);
+	if (ret)
+		return ret;
+
+	return class_for_each_device(&cache_coherency_class, NULL, NULL,
+				     cache_inval_done_one);
+}
+
+static const struct system_cache_flush_method cache_flush_method = {
+	.invalidate_memregion = cache_invalidate_memregion,
+};
+
+struct cache_coherency_device *
+_cache_coherency_alloc_device(struct device *parent,
+			      const struct coherency_ops *ops, size_t size)
+{
+	if (!ops || !ops->wbinv)
+		return NULL;
+
+	struct cache_coherency_device *ccd __free(kfree) = kzalloc(size, GFP_KERNEL);
+
+	if (!ccd)
+		return NULL;
+
+	ccd->dev.class = &cache_coherency_class;
+	ccd->dev.parent = parent;
+	ccd->ops = ops;
+	ccd->id = ida_alloc(&cache_coherency_ida, GFP_KERNEL);
+
+	if (dev_set_name(&ccd->dev, "cache_coherency%d", ccd->id))
+		return NULL;
+
+	device_initialize(&ccd->dev);
+
+	return_ptr(ccd);
+}
+EXPORT_SYMBOL_NS_GPL(_cache_coherency_alloc_device, "CACHE_COHERENCY");
+
+int cache_coherency_device_register(struct cache_coherency_device *ccd)
+{
+	return device_add(&ccd->dev);
+}
+EXPORT_SYMBOL_NS_GPL(cache_coherency_device_register, "CACHE_COHERENCY");
+
+void cache_coherency_device_unregister(struct cache_coherency_device *ccd)
+{
+	device_del(&ccd->dev);
+}
+EXPORT_SYMBOL_NS_GPL(cache_coherency_device_unregister, "CACHE_COHERENCY");
+
+static int __init cache_coherency_init(void)
+{
+	int ret;
+
+	ret = class_register(&cache_coherency_class);
+	if (ret)
+		return ret;
+
+	//TODO: generalize
+	arm64_set_sys_cache_flush_method(&cache_flush_method);
+
+	return 0;
+}
+subsys_initcall(cache_coherency_init);
diff --git a/include/linux/cache_coherency.h b/include/linux/cache_coherency.h
new file mode 100644
index 000000000000..d49be27ad4f1
--- /dev/null
+++ b/include/linux/cache_coherency.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Cache coherency device drivers
+ *
+ * Copyright Huawei 2025
+ */
+#ifndef _LINUX_CACHE_COHERENCY_H_
+#define _LINUX_CACHE_COHERENCY_H_
+
+#include
+#include
+
+struct cache_coherency_device;
+
+struct cc_inval_params {
+	phys_addr_t addr;
+	size_t size;
+};
+
+struct coherency_ops {
+	int (*wbinv)(struct cache_coherency_device *ccd, struct cc_inval_params *invp);
+	int (*done)(struct cache_coherency_device *ccd);
+};
+
+struct cache_coherency_device {
+	struct device dev;
+	const struct coherency_ops *ops;
+	int id;
+};
+
+int cache_coherency_device_register(struct cache_coherency_device *ccd);
+void cache_coherency_device_unregister(struct cache_coherency_device *ccd);
+
+struct cache_coherency_device *
+_cache_coherency_alloc_device(struct device *parent,
+			      const struct coherency_ops *ops, size_t size);
+
+#define cache_coherency_alloc_device(parent, ops, drv_struct, member)	\
+	({								\
+		static_assert(__same_type(struct cache_coherency_device, \
+					  ((drv_struct *)NULL)->member)); \
+		static_assert(offsetof(drv_struct, member) == 0);	\
+		(drv_struct *)_cache_coherency_alloc_device(parent, ops, \
+							    sizeof(drv_struct)); \
+	})
+
+static inline struct cache_coherency_device *
+cache_coherency_device_get(struct cache_coherency_device *ccd)
+{
+	get_device(&ccd->dev);
+	return ccd;
+}
+
+static inline void cache_coherency_device_put(struct cache_coherency_device *ccd)
+{
+	put_device(&ccd->dev);
+}
+
+DEFINE_FREE(cache_coherency_device, struct cache_coherency_device *,
+	    if (_T) cache_coherency_device_put(_T));
+
+#endif

From patchwork Thu Mar 20 17:41:16 2025
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 14024230
From: Jonathan Cameron
Subject: [RFC PATCH 4/6] cache: Support cache maintenance for HiSilicon SoC Hydra Home Agent
Date: Thu, 20 Mar 2025 17:41:16 +0000
Message-ID: <20250320174118.39173-5-Jonathan.Cameron@huawei.com>
In-Reply-To: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
References: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>

From: Yicong Yang

The Hydra Home Agent (HHA) is a device used for maintaining cache coherency. Add support for its cache maintenance operations. As with the HiSilicon L3 cache and L3 cache PMU, the memory resource of the HHA conflicts with that of the HHA PMU. The same workaround is implemented here to get past the resource conflict check.
Signed-off-by: Yicong Yang Co-developed-by: Yushan Wang Signed-off-by: Yushan Wang Signed-off-by: Jonathan Cameron --- drivers/cache/Kconfig | 12 +++ drivers/cache/Makefile | 1 + drivers/cache/hisi_soc_hha.c | 193 +++++++++++++++++++++++++++++++++++ 3 files changed, 206 insertions(+) diff --git a/drivers/cache/Kconfig b/drivers/cache/Kconfig index 45dc4b2935e7..6dab014bc581 100644 --- a/drivers/cache/Kconfig +++ b/drivers/cache/Kconfig @@ -6,6 +6,18 @@ config CACHE_COHERENCY_CLASS help Class to which coherency control drivers register allowing core kernel subsystems to issue invalidations and similar coherency operations. +if CACHE_COHERENCY_CLASS + +config HISI_SOC_HHA + tristate "HiSilicon Hydra Home Agent (HHA) device driver" + depends on ARM64 && ACPI || COMPILE_TEST + help + The Hydra Home Agent (HHA) is responsible of cache coherency + on SoC. This drivers provides cache maintenance functions of HHA. + + This driver can be built as a module. If so, the module will be + called hisi_soc_hha. +endif config AX45MP_L2_CACHE bool "Andes Technology AX45MP L2 Cache controller" diff --git a/drivers/cache/Makefile b/drivers/cache/Makefile index b72b20f4248f..48b319984b45 100644 --- a/drivers/cache/Makefile +++ b/drivers/cache/Makefile @@ -5,3 +5,4 @@ obj-$(CONFIG_SIFIVE_CCACHE) += sifive_ccache.o obj-$(CONFIG_STARFIVE_STARLINK_CACHE) += starfive_starlink_cache.o obj-$(CONFIG_CACHE_COHERENCY_CLASS) += coherency_core.o +obj-$(CONFIG_HISI_SOC_HHA) += hisi_soc_hha.o diff --git a/drivers/cache/hisi_soc_hha.c b/drivers/cache/hisi_soc_hha.c new file mode 100644 index 000000000000..c2af06e5839e --- /dev/null +++ b/drivers/cache/hisi_soc_hha.c @@ -0,0 +1,193 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for HiSilicon Hydra Home Agent (HHA). + * + * Copyright (c) 2024 HiSilicon Technologies Co., Ltd. 
+ * Author: Yicong Yang + * Yushan Wang + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define HISI_HHA_CTRL 0x5004 +#define HISI_HHA_CTRL_EN BIT(0) +#define HISI_HHA_CTRL_RANGE BIT(1) +#define HISI_HHA_CTRL_TYPE GENMASK(3, 2) +#define HISI_HHA_START_L 0x5008 +#define HISI_HHA_START_H 0x500c +#define HISI_HHA_LEN_L 0x5010 +#define HISI_HHA_LEN_H 0x5014 + +/* The maintain operation performs in a 128 Byte granularity */ +#define HISI_HHA_MAINT_ALIGN 128 + +#define HISI_HHA_POLL_GAP_US 10 +#define HISI_HHA_POLL_TIMEOUT_US 50000 + +struct hisi_soc_hha { + struct cache_coherency_device ccd; + /* Locks HHA instance to forbid overlapping access. */ + spinlock_t lock; + void __iomem *base; +}; + +static bool hisi_hha_cache_maintain_wait_finished(struct hisi_soc_hha *soc_hha) +{ + u32 val; + + return !readl_poll_timeout_atomic(soc_hha->base + HISI_HHA_CTRL, val, + !(val & HISI_HHA_CTRL_EN), + HISI_HHA_POLL_GAP_US, + HISI_HHA_POLL_TIMEOUT_US); +} + +static int hisi_hha_cache_do_maintain(struct hisi_soc_hha *soc_hha, + phys_addr_t addr, size_t size) +{ + int ret = 0; + u32 reg; + + if (!size || !IS_ALIGNED(size, HISI_HHA_MAINT_ALIGN)) + return -EINVAL; + + /* + * Hardware will maintain in a range of + * [addr, addr + size + HISI_HHA_MAINT_ALIGN] + */ + size -= HISI_HHA_MAINT_ALIGN; + + guard(spinlock)(&soc_hha->lock); + + if (!hisi_hha_cache_maintain_wait_finished(soc_hha)) + return -EBUSY; + + writel(lower_32_bits(addr), soc_hha->base + HISI_HHA_START_L); + writel(upper_32_bits(addr), soc_hha->base + HISI_HHA_START_H); + writel(lower_32_bits(size), soc_hha->base + HISI_HHA_LEN_L); + writel(upper_32_bits(size), soc_hha->base + HISI_HHA_LEN_H); + + reg = FIELD_PREP(HISI_HHA_CTRL_TYPE, 1); /* Clean Invalid */ + reg |= HISI_HHA_CTRL_RANGE | HISI_HHA_CTRL_EN; + writel(reg, soc_hha->base + HISI_HHA_CTRL); + + return ret; +} + +static int 
hisi_hha_cache_poll_maintain_done(struct hisi_soc_hha *soc_hha, + phys_addr_t addr, size_t size) +{ + guard(spinlock)(&soc_hha->lock); + + if (!hisi_hha_cache_maintain_wait_finished(soc_hha)) + return -ETIMEDOUT; + + return 0; +} + +static int hisi_soc_hha_wbinv(struct cache_coherency_device *ccd, struct cc_inval_params *invp) +{ + struct hisi_soc_hha *hha = container_of(ccd, struct hisi_soc_hha, ccd); + + return hisi_hha_cache_do_maintain(hha, invp->addr, invp->size); +} + +static int hisi_soc_hha_done(struct cache_coherency_device *ccd) +{ + struct hisi_soc_hha *hha = container_of(ccd, struct hisi_soc_hha, ccd); + + return hisi_hha_cache_poll_maintain_done(hha, 0, 0); +} + +static const struct coherency_ops hha_ops = { + .wbinv = hisi_soc_hha_wbinv, + .done = hisi_soc_hha_done, +}; + +DEFINE_FREE(hisi_soc_hha, struct hisi_soc_hha *, if (_T) cache_coherency_device_put(&_T->ccd)) +static int hisi_soc_hha_probe(struct platform_device *pdev) +{ + struct resource *mem; + int ret; + + struct hisi_soc_hha *soc_hha __free(hisi_soc_hha) = + cache_coherency_alloc_device(&pdev->dev, &hha_ops, + struct hisi_soc_hha, ccd); + if (!soc_hha) + return -ENOMEM; + + platform_set_drvdata(pdev, soc_hha); + + spin_lock_init(&soc_hha->lock); + + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!mem) + return -ENODEV; + + /* + * HHA cache driver share the same register region with HHA uncore PMU + * driver in hardware's perspective, none of them should reserve the + * resource to itself only. Here exclusive access verification is + * avoided by calling devm_ioremap instead of devm_ioremap_resource to + * allow both drivers to exist at the same time. 
+	 */
+	soc_hha->base = ioremap(mem->start, resource_size(mem));
+	if (!soc_hha->base)
+		return dev_err_probe(&pdev->dev, -ENOMEM,
+				     "failed to remap io memory");
+
+	ret = cache_coherency_device_register(&soc_hha->ccd);
+	if (ret) {
+		iounmap(soc_hha->base);
+		return ret;
+	}
+
+	platform_set_drvdata(pdev, no_free_ptr(soc_hha));
+	return 0;
+}
+
+static void hisi_soc_hha_remove(struct platform_device *pdev)
+{
+	struct hisi_soc_hha *soc_hha = platform_get_drvdata(pdev);
+
+	cache_coherency_device_unregister(&soc_hha->ccd);
+	iounmap(soc_hha->base);
+	cache_coherency_device_put(&soc_hha->ccd);
+}
+
+static const struct acpi_device_id hisi_soc_hha_ids[] = {
+	{ "HISI0511", },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, hisi_soc_hha_ids);
+
+static struct platform_driver hisi_soc_hha_driver = {
+	.driver = {
+		.name = "hisi_soc_hha",
+		.acpi_match_table = hisi_soc_hha_ids,
+	},
+	.probe = hisi_soc_hha_probe,
+	.remove = hisi_soc_hha_remove,
+};
+
+module_platform_driver(hisi_soc_hha_driver);
+
+MODULE_IMPORT_NS("CACHE_COHERENCY");
+MODULE_DESCRIPTION("Hisilicon Hydra Home Agent driver supporting cache maintenance");
+MODULE_AUTHOR("Yicong Yang ");
+MODULE_AUTHOR("Yushan Wang ");
+MODULE_LICENSE("GPL");

From patchwork Thu Mar 20 17:41:17 2025
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 14024231
From: Jonathan Cameron
To: , , , , Yicong Yang ,
CC: , , Yushan Wang , , , Lorenzo Pieralisi , Mark Rutland , Catalin Marinas , Will Deacon , Dan Williams
Subject: [RFC PATCH 5/6] acpi: PoC of Cache control via ACPI0019 and _DSM
Date: Thu, 20 Mar 2025 17:41:17 +0000
Message-ID: <20250320174118.39173-6-Jonathan.Cameron@huawei.com>
In-Reply-To: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
References: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>

Do not merge. This is the bare outline of what may become an ACPI
code-first specification proposal.

From various discussions, it has become clear that there is some desire
not to end up needing a cache flushing driver for every host that
supports the flush-by-PA-range functionality needed for CXL and similar
disaggregated memory pools, where the host PA mapping to actual memory
may change over time, and where various races with prefetchers make it
hard to use CPU instructions to flush all stale data out.

There was once an ARM PSCI specification [1] that defined a firmware
interface to solve this problem. However, that meant dropping into a
more privileged mode, or talking to external firmware, which is overkill
for systems that provide a simple MMIO register interface for these
operations. That specification never made it beyond alpha level.

For the typical class of machine that actually uses these disaggregated
pools, ACPI can potentially provide the same benefits with a great deal
more flexibility. A _DSM in the DSDT, via operation regions, may be used
to do any of:
1) Make firmware calls.
2) Operate a register-based state machine.
3) Most other things you might dream of.

This was prototyped against an implementation of the ARM specification
in [1], wrapped up in _DSM magic. That was chosen to give a second
(albeit abandoned) example of how this cache control class can be used.

DSDT blob example that uses PSCI underneath:
Device (CCT0)
{
    Name (_HID, "ACPI0019")  // _HID: Hardware ID
    Name (_UID, Zero)  // _UID: Unique ID
    OperationRegion (AFFH, FFixedHW, One, 0x90)
    Field (AFFH, BufferAcc, NoLock, Preserve)
    {
        AFFX,   1152
    }

    Method (_DSM, 4, Serialized)  // _DSM: Device-Specific Method
    {
        If ((Arg0 == ToUUID ("61fdc7d5-1468-4807-b565-515bf6b75319")))
        {
            If ((Arg2 == Zero))
            {
                Return (Buffer (One)
                {
                     0x0F
                })
            }

            If ((Arg2 == One))
            {
                Name (CCR1, Buffer (0x08)
                {
                     0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
                })
                CreateDWordField (CCR1, Zero, RRR1)
                CCR1 = AFFX = Buffer (0x10)
                {
                    /* 0000 */  0x17, 0x00, 0x00, 0xC4, 0x00, 0x00, 0x00, 0x00,
                    /* 0008 */  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
                }
                Name (CCOP, Zero)
                CCOP = RRR1 /* \_SB_.CCT0._DSM.RRR1 */
                CCR1 = AFFX = Buffer (0x10)
                {
                    /* 0000 */  0x17, 0x00, 0x00, 0xC4, 0x00, 0x00, 0x00, 0x00,
                    /* 0008 */  0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
                }
                Name (CCLA, Zero)
                CCLA = RRR1 /* \_SB_.CCT0._DSM.RRR1 */
                CCR1 = AFFX = Buffer (0x10)
                {
                    /* 0000 */  0x17, 0x00, 0x00, 0xC4, 0x00, 0x00, 0x00, 0x00,
                    /* 0008 */  0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
                }
                Name (CCRL, Zero)
                CCRL = RRR1 /* \_SB_.CCT0._DSM.RRR1 */
                Return (Package (0x04)
                {
                    CCOP, CCLA, CCRL, One
                })
            }

            If ((Arg2 == 0x02))
            {
                If (!((ObjectType (Arg3) == 0x04) && (SizeOf (Arg3) == 0x03)))
                {
                    Return (One)
                }

                Name (BUFF, Buffer (0x28)
                {
                    /* 0000 */  0x16, 0x00, 0x00, 0xC4, 0x00, 0x00, 0x00, 0x00,
                    /* 0008 */  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                    /* 0010 */  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                    /* 0018 */  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                    /* 0020 */  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
                })
                CreateQWordField (BUFF, 0x08, BFBA)
                CreateQWordField (BUFF, 0x10, BFSZ)
                CreateQWordField (BUFF, 0x20, BFFL)
                Local0 = Arg3 [Zero]
                BFBA = DerefOf (Local0)
                Local0 = Arg3 [One]
                BFSZ = DerefOf (Local0)
                Local0 = Arg3 [0x02]
                Local1 = DerefOf (Local0)
                If ((Local1 != Zero))
                {
                    Return (One)
                }

                AFFX = BUFF /* \_SB_.CCT0._DSM.BUFF */
                Return (Zero)
            }

            If ((Arg2 == 0x03))
            {
                Return (Zero)
            }
        }
    }
}

Link: https://developer.arm.com/documentation/den0022/falp1/?lang=en [1]
Signed-off-by: Jonathan Cameron
---
 drivers/acpi/Makefile              |   1 +
 drivers/cache/Kconfig              |   8 ++
 drivers/cache/Makefile             |   1 +
 drivers/cache/acpi_cache_control.c | 157 +++++++++++++++++++++++++++++
 4 files changed, 167 insertions(+)

diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index 40208a0f5dfb..fcb6635d27d8 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -129,3 +129,4 @@
 obj-$(CONFIG_ACPI_VIOT)		+= viot.o
 obj-$(CONFIG_RISCV)		+= riscv/
 obj-$(CONFIG_X86)		+= x86/
+
diff --git a/drivers/cache/Kconfig b/drivers/cache/Kconfig
index 6dab014bc581..5569af0903c4 100644
--- a/drivers/cache/Kconfig
+++ b/drivers/cache/Kconfig
@@ -17,6 +17,14 @@ config HISI_SOC_HHA
 	  This driver can be built as a module. If so, the module
 	  will be called hisi_soc_hha.
 
+config ACPI_CACHE_CONTROL
+	tristate "ACPI cache maintenance"
+	depends on ARM64
+	help
+	  ACPI0019 device ID in DSDT identifies an interface that may be used
+	  to carry out certain forms of cache flush operation.
+
 endif
 
 config AX45MP_L2_CACHE
diff --git a/drivers/cache/Makefile b/drivers/cache/Makefile
index 48b319984b45..9f0679a6dab1 100644
--- a/drivers/cache/Makefile
+++ b/drivers/cache/Makefile
@@ -6,3 +6,4 @@
 obj-$(CONFIG_STARFIVE_STARLINK_CACHE)	+= starfive_starlink_cache.o
 obj-$(CONFIG_CACHE_COHERENCY_CLASS)	+= coherency_core.o
 obj-$(CONFIG_HISI_SOC_HHA)		+= hisi_soc_hha.o
+obj-$(CONFIG_ACPI_CACHE_CONTROL)	+= acpi_cache_control.o
diff --git a/drivers/cache/acpi_cache_control.c b/drivers/cache/acpi_cache_control.c
new file mode 100644
index 000000000000..d5b91f505a26
--- /dev/null
+++ b/drivers/cache/acpi_cache_control.c
@@ -0,0 +1,157 @@
+
+#include
+#include
+#include
+#include
+
+struct acpi_cache_control {
+	struct cache_coherency_device ccd;
+	/* Stuff */
+};
+
+static const guid_t testguid =
+	GUID_INIT(0x61FDC7D5, 0x1468, 0x4807,
+		  0xB5, 0x65, 0x51, 0x5B, 0xF6, 0xB7, 0x53, 0x19);
+
+static int acpi_cache_control_query(struct acpi_device *device)
+{
+	union acpi_object *out_obj;
+
+	out_obj = acpi_evaluate_dsm(device->handle, &testguid, 1, 1, NULL);
+	if (!out_obj)
+		return -EINVAL;
+
+	if (out_obj->package.count < 4) {
+		printk("Only partial capabilities received\n");
+		ACPI_FREE(out_obj);
+		return -EINVAL;
+	}
+	for (int i = 0; i < out_obj->package.count; i++) {
+		if (out_obj->package.elements[i].type != ACPI_TYPE_INTEGER) {
+			printk("Element %d not integer\n", i);
+			ACPI_FREE(out_obj);
+			return -EINVAL;
+		}
+	}
+	switch (out_obj->package.elements[0].integer.value) {
+	case 0:
+		printk("Supports range\n");
+		break;
+	case 1:
+		printk("Full flush only\n");
+		break;
+	default:
+		printk("unknown op type %llx\n",
+		       out_obj->package.elements[0].integer.value);
+		break;
+	}
+
+	printk("Latency is %lld msecs\n",
+	       out_obj->package.elements[1].integer.value);
+	printk("Min delay between calls is %lld msecs\n",
+	       out_obj->package.elements[2].integer.value);
+
+	if (out_obj->package.elements[3].integer.value & BIT(0))
+		printk("CLEAN_INVALIDATE\n");
+	if (out_obj->package.elements[3].integer.value & BIT(1))
+		printk("CLEAN\n");
+	if (out_obj->package.elements[3].integer.value & BIT(2))
+		printk("INVALIDATE\n");
+	ACPI_FREE(out_obj);
+
+	return 0;
+}
+
+static int acpi_cache_control_inval(struct acpi_device *device, u64 base,
+				    u64 size)
+{
+	union acpi_object *out_obj;
+	union acpi_object in_array[] = {
+		[0].integer = { ACPI_TYPE_INTEGER, base },
+		[1].integer = { ACPI_TYPE_INTEGER, size },
+		[2].integer = { ACPI_TYPE_INTEGER, 0 }, /* Clean and invalidate */
+	};
+	union acpi_object in_obj = {
+		.package = {
+			.type = ACPI_TYPE_PACKAGE,
+			.count = ARRAY_SIZE(in_array),
+			.elements = in_array,
+		},
+	};
+
+	out_obj = acpi_evaluate_dsm(device->handle, &testguid, 1, 2, &in_obj);
+	if (!out_obj)
+		return -EINVAL;
+
+	printk("out type %d\n", out_obj->type);
+	printk("out val %lld\n", out_obj->integer.value);
+	ACPI_FREE(out_obj);
+
+	return 0;
+}
+
+static int acpi_cc_wbinv(struct cache_coherency_device *ccd,
+			 struct cc_inval_params *invp)
+{
+	struct acpi_device *acpi_dev =
+		container_of(ccd->dev.parent, struct acpi_device, dev);
+
+	return acpi_cache_control_inval(acpi_dev, invp->addr, invp->size);
+}
+
+static int acpi_cc_done(struct cache_coherency_device *ccd)
+{
+	/* Todo */
+	return 0;
+}
+
+static const struct coherency_ops acpi_cc_ops = {
+	.wbinv = acpi_cc_wbinv,
+	.done = acpi_cc_done,
+};
+
+DEFINE_FREE(acpi_cache_control, struct acpi_cache_control *,
+	    if (_T) cache_coherency_device_put(&_T->ccd));
+
+static int acpi_cache_control_add(struct acpi_device *device)
+{
+	int ret;
+
+	printk("ACPI device bound\n");
+
+	ret = acpi_cache_control_query(device);
+	if (ret)
+		return ret;
+
+	struct acpi_cache_control *acpi_cc __free(acpi_cache_control) =
+		cache_coherency_alloc_device(&device->dev, &acpi_cc_ops,
+					     struct acpi_cache_control, ccd);
+	if (!acpi_cc)
+		return -ENOMEM;
+
+	ret = cache_coherency_device_register(&acpi_cc->ccd);
+	if (ret)
+		return ret;
+
+	dev_set_drvdata(&device->dev, no_free_ptr(acpi_cc));
+	return 0;
+}
+
+static void acpi_cache_control_del(struct acpi_device *device)
+{
+	struct acpi_cache_control *acpi_cc =
+		dev_get_drvdata(&device->dev);
+
+	cache_coherency_device_unregister(&acpi_cc->ccd);
+	cache_coherency_device_put(&acpi_cc->ccd);
+}
+
+static const struct acpi_device_id acpi_cache_control_ids[] = {
+	{ "ACPI0019" },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, acpi_cache_control_ids);
+
+static struct acpi_driver acpi_cache_control_driver = {
+	.name = "acpi_cache_control",
+	.ids = acpi_cache_control_ids,
+	.ops = {
+		.add = acpi_cache_control_add,
+		.remove = acpi_cache_control_del,
+	},
+};
+
+module_acpi_driver(acpi_cache_control_driver);
+
+MODULE_IMPORT_NS("CACHE_COHERENCY");
+MODULE_AUTHOR("Jonathan Cameron ");
+MODULE_DESCRIPTION("HACKS HACKS HACKS");
+MODULE_LICENSE("GPL");

From patchwork Thu Mar 20 17:41:18 2025
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 14024232
From: Jonathan Cameron
To: , , , , Yicong Yang ,
CC: , , Yushan Wang , , , Lorenzo Pieralisi , Mark Rutland , Catalin Marinas , Will Deacon , Dan Williams
Subject: [RFC PATCH 6/6] Hack: Pretend we have PSCI 1.2
Date: Thu, 20 Mar 2025 17:41:18 +0000
Message-ID: <20250320174118.39173-7-Jonathan.Cameron@huawei.com>
In-Reply-To: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>
References: <20250320174118.39173-1-Jonathan.Cameron@huawei.com>

QEMU needs updating to advertise PSCI 1.2; in the meantime, lie about
the version. This is just here to aid testing, not for review!

Signed-off-by: Jonathan Cameron
---
 drivers/firmware/psci/psci.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
index a1ebbe9b73b1..6a7a6e91fcc4 100644
--- a/drivers/firmware/psci/psci.c
+++ b/drivers/firmware/psci/psci.c
@@ -646,6 +646,8 @@ static void __init psci_init_smccc(void)
 		}
 	}
 
+	/* Hack until the QEMU version handling is updated */
+	arm_smccc_version_init(ARM_SMCCC_VERSION_1_2, psci_conduit);
 	/*
 	 * Conveniently, the SMCCC and PSCI versions are encoded the
 	 * same way. No, this isn't accidental.