
[2/3] riscv: Implement Zicbom-based cache management operations

Message ID 20220610004308.1903626-3-heiko@sntech.de (mailing list archive)
State New, archived
Series riscv: implement Zicbom-based CMO instructions + the t-head variant

Commit Message

Heiko Stuebner June 10, 2022, 12:43 a.m. UTC
The Zicbom ISA-extension was ratified in November 2021
and introduces instructions for dcache invalidate, clean
and flush operations.

Implement cache management operations based on them.

Of course not all cores will support this, so implement an
alternatives-based mechanism that replaces a block of nops
with the relevant Zicbom instructions at runtime.

We're using prebuilt instructions for the Zicbom instructions
for now, to not require a bleeding-edge compiler (gcc-12)
for these somewhat simple instructions.

Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Guo Ren <guoren@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Guo Ren <guoren@kernel.org>
---
 arch/riscv/Kconfig                   | 15 +++++
 arch/riscv/include/asm/cacheflush.h  |  6 ++
 arch/riscv/include/asm/errata_list.h | 36 ++++++++++-
 arch/riscv/include/asm/hwcap.h       |  1 +
 arch/riscv/kernel/cpu.c              |  1 +
 arch/riscv/kernel/cpufeature.c       | 18 ++++++
 arch/riscv/kernel/setup.c            |  2 +
 arch/riscv/mm/Makefile               |  1 +
 arch/riscv/mm/dma-noncoherent.c      | 93 ++++++++++++++++++++++++++++
 9 files changed, 172 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/mm/dma-noncoherent.c

Comments

Randy Dunlap June 10, 2022, 3:22 a.m. UTC | #1
On 6/9/22 17:43, Heiko Stuebner wrote:
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 32ffef9f6e5b..384d0c15f2b6 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -376,6 +376,21 @@ config RISCV_ISA_SVPBMT
>  
>  	   If you don't know what to do here, say Y.
>  
> +config RISCV_ISA_ZICBOM
> +	bool "Zicbom extension support for non-coherent dma operation"
> +	select ARCH_HAS_DMA_PREP_COHERENT
> +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> +	select ARCH_HAS_SYNC_DMA_FOR_CPU
> +	select ARCH_HAS_SETUP_DMA_OPS
> +	select DMA_DIRECT_REMAP
> +	select RISCV_ALTERNATIVE

Since RISCV_ALTERNATIVE depends on !XIP_KERNEL and since 'select' does not
follow any dependency chains, this config also needs to depend on
!XIP_KERNEL.

> +	default y
> +	help
> +	   Adds support to dynamically detect the presence of the ZICBOM extension
> +	   (Cache Block Management Operations) and enable its usage.
> +
> +	   If you don't know what to do here, say Y.
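A sketch of the fix Randy is asking for - an explicit dependency next to the selects so the `select RISCV_ALTERNATIVE` can never fire under XIP_KERNEL (hypothetical; the exact placement is up to the next revision):

```kconfig
config RISCV_ISA_ZICBOM
	bool "Zicbom extension support for non-coherent dma operation"
	depends on !XIP_KERNEL
	select ARCH_HAS_DMA_PREP_COHERENT
	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
	select ARCH_HAS_SYNC_DMA_FOR_CPU
	select ARCH_HAS_SETUP_DMA_OPS
	select DMA_DIRECT_REMAP
	select RISCV_ALTERNATIVE
	default y
```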
Christoph Hellwig June 10, 2022, 5:56 a.m. UTC | #2
On Fri, Jun 10, 2022 at 02:43:07AM +0200, Heiko Stuebner wrote:
> +config RISCV_ISA_ZICBOM
> +	bool "Zicbom extension support for non-coherent dma operation"
> +	select ARCH_HAS_DMA_PREP_COHERENT
> +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> +	select ARCH_HAS_SYNC_DMA_FOR_CPU
> +	select ARCH_HAS_SETUP_DMA_OPS
> +	select DMA_DIRECT_REMAP
> +	select RISCV_ALTERNATIVE
> +	default y
> +	help
> +	   Adds support to dynamically detect the presence of the ZICBOM extension

Overly long line here.

> +	   (Cache Block Management Operations) and enable its usage.
> +
> +	   If you don't know what to do here, say Y.

But more importantly I think the whole text here is not very helpful.
What users care about is non-coherent DMA support.  What extension is
used for that is rather secondary.  Also please capitalize DMA.

> +void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> +{
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		ALT_CMO_OP(CLEAN, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	case DMA_FROM_DEVICE:
> +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	case DMA_BIDIRECTIONAL:
> +		ALT_CMO_OP(FLUSH, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	default:
> +		break;
> +	}

Please avoid all these crazy long lines.  And use a local variable
for the virtual address.  And why do you pass that virtual address
as an unsigned long to ALT_CMO_OP?  You're going to make your life
much easier if you simply always pass a pointer.

Last but not least, does clean in RISC-V mean writeback and flush mean
writeback plus invalidate?  If so the code is correct, but the choice
of names in the RISC-V spec is extremely unfortunate.

> +void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> +{
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		break;
> +	case DMA_FROM_DEVICE:
> +	case DMA_BIDIRECTIONAL:
> +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	default:
> +		break;
> +	}
> +}

Same comment here and in few other places.

> +
> +void arch_dma_prep_coherent(struct page *page, size_t size)
> +{
> +	void *flush_addr = page_address(page);
> +
> +	memset(flush_addr, 0, size);
> +	ALT_CMO_OP(FLUSH, (unsigned long)flush_addr, size, riscv_cbom_block_size);
> +}

arch_dma_prep_coherent should never zero the memory; that is left
for the upper layers.

> +void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
> +		const struct iommu_ops *iommu, bool coherent)
> +{
> +	/* If a specific device is dma-coherent, set it here */

This comment isn't all that useful.

> +	dev->dma_coherent = coherent;
> +}

But more importantly, this assumes that once this code is built all
devices are non-coherent by default.  I.e. with this patch applied
and the config option enabled we'll now suddenly start doing cache
management operations or setups that didn't do it before.
Samuel Holland June 12, 2022, 7:15 p.m. UTC | #3
On 6/9/22 7:43 PM, Heiko Stuebner wrote:
> The Zicbom ISA-extension was ratified in November 2021
> and introduces instructions for dcache invalidate, clean
> and flush operations.
> 
> Implement cache management operations based on them.
> 
> Of course not all cores will support this, so implement an
> alternatives-based mechanism that replaces a block of nops
> with the relevant Zicbom instructions at runtime.
> 
> We're using prebuilt instructions for the Zicbom instructions
> for now, to not require a bleeding-edge compiler (gcc-12)
> for these somewhat simple instructions.
> 
> Signed-off-by: Heiko Stuebner <heiko@sntech.de>
> Reviewed-by: Anup Patel <anup@brainfault.org>
> Reviewed-by: Guo Ren <guoren@kernel.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Atish Patra <atish.patra@wdc.com>
> Cc: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/Kconfig                   | 15 +++++
>  arch/riscv/include/asm/cacheflush.h  |  6 ++
>  arch/riscv/include/asm/errata_list.h | 36 ++++++++++-
>  arch/riscv/include/asm/hwcap.h       |  1 +
>  arch/riscv/kernel/cpu.c              |  1 +
>  arch/riscv/kernel/cpufeature.c       | 18 ++++++
>  arch/riscv/kernel/setup.c            |  2 +
>  arch/riscv/mm/Makefile               |  1 +
>  arch/riscv/mm/dma-noncoherent.c      | 93 ++++++++++++++++++++++++++++
>  9 files changed, 172 insertions(+), 1 deletion(-)
>  create mode 100644 arch/riscv/mm/dma-noncoherent.c
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 32ffef9f6e5b..384d0c15f2b6 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -376,6 +376,21 @@ config RISCV_ISA_SVPBMT
>  
>  	   If you don't know what to do here, say Y.
>  
> +config RISCV_ISA_ZICBOM
> +	bool "Zicbom extension support for non-coherent dma operation"
> +	select ARCH_HAS_DMA_PREP_COHERENT
> +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> +	select ARCH_HAS_SYNC_DMA_FOR_CPU
> +	select ARCH_HAS_SETUP_DMA_OPS

ARCH_HAS_SETUP_DMA_OPS needs to be separate from the non-coherent DMA option,
because iommu_setup_dma_ops() will need to be called from arch_setup_dma_ops()
even on a fully-coherent system. (But this change is not strictly necessary for
this series.)

> +	select DMA_DIRECT_REMAP
> +	select RISCV_ALTERNATIVE
> +	default y
> +	help
> +	   Adds support to dynamically detect the presence of the ZICBOM extension
> +	   (Cache Block Management Operations) and enable its usage.
> +
> +	   If you don't know what to do here, say Y.
> +
>  config FPU
>  	bool "FPU support"
>  	default y
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 23ff70350992..eb12d014b158 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -42,6 +42,12 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
>  
>  #endif /* CONFIG_SMP */
>  
> +#ifdef CONFIG_RISCV_ISA_ZICBOM
> +void riscv_init_cbom_blocksize(void);
> +#else
> +static inline void riscv_init_cbom_blocksize(void) { }
> +#endif
> +
>  /*
>   * Bits in sys_riscv_flush_icache()'s flags argument.
>   */
> diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
> index 398e351e7002..2e80a75b5241 100644
> --- a/arch/riscv/include/asm/errata_list.h
> +++ b/arch/riscv/include/asm/errata_list.h
> @@ -20,7 +20,8 @@
>  #endif
>  
>  #define	CPUFEATURE_SVPBMT 0
> -#define	CPUFEATURE_NUMBER 1
> +#define	CPUFEATURE_CMO 1

This seems like it should be CPUFEATURE_ZICBOM for consistency.

> +#define	CPUFEATURE_NUMBER 2
>  
>  #ifdef __ASSEMBLY__
>  
> @@ -87,6 +88,39 @@ asm volatile(ALTERNATIVE(						\
>  #define ALT_THEAD_PMA(_val)
>  #endif
>  
> +/*
> + * cbo.clean rs1
> + * | 31 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
> + *    0...01     rs1       010      00000  0001111
> + *
> + * cbo.flush rs1
> + * | 31 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
> + *    0...10     rs1       010      00000  0001111
> + *
> + * cbo.inval rs1
> + * | 31 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
> + *    0...00     rs1       010      00000  0001111
> + */
> +#define CBO_INVAL_A0	".long 0x15200F"
> +#define CBO_CLEAN_A0	".long 0x25200F"
> +#define CBO_FLUSH_A0	".long 0x05200F"
> +
> +#define ALT_CMO_OP(_op, _start, _size, _cachesize)			\
> +asm volatile(ALTERNATIVE(						\
> +	__nops(5),							\
> +	"mv a0, %1\n\t"							\
> +	"j 2f\n\t"							\
> +	"3:\n\t"							\
> +	CBO_##_op##_A0 "\n\t"						\
> +	"add a0, a0, %0\n\t"						\
> +	"2:\n\t"							\
> +	"bltu a0, %2, 3b\n\t", 0,					\
> +		CPUFEATURE_CMO, CONFIG_RISCV_ISA_ZICBOM)		\
> +	: : "r"(_cachesize),						\
> +	    "r"(ALIGN((_start), (_cachesize))),				\
> +	    "r"(ALIGN((_start) + (_size), (_cachesize)))		\

Here, the starting address needs to be truncated, not aligned upward. Otherwise,
the first partial cache line will be missed. And since the starting address is
known to be a multiple of the cache line size, the ending address does not need
to be rounded up either:

-           "r"(ALIGN((_start), (_cachesize))),          \
-           "r"(ALIGN((_start) + (_size), (_cachesize))) \
+           "r"((_start) & ~((_cachesize) - 1UL)),       \
+           "r"((_start) + (_size))                      \

Then to prevent causing any collateral damage, you need to make sure slab
allocations cannot share a cache block:

--- a/arch/riscv/include/asm/cache.h
+++ b/arch/riscv/include/asm/cache.h
@@ -11,6 +11,8 @@

 #define L1_CACHE_BYTES         (1 << L1_CACHE_SHIFT)

+#define ARCH_DMA_MINALIGN      L1_CACHE_BYTES
+
 /*
  * RISC-V requires the stack pointer to be 16-byte aligned, so ensure that
  * the flat loader aligns it accordingly.

> +	: "a0")
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
> index 4e2486881840..6044e402003d 100644
> --- a/arch/riscv/include/asm/hwcap.h
> +++ b/arch/riscv/include/asm/hwcap.h
> @@ -53,6 +53,7 @@ extern unsigned long elf_hwcap;
>  enum riscv_isa_ext_id {
>  	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
>  	RISCV_ISA_EXT_SVPBMT,
> +	RISCV_ISA_EXT_ZICBOM,
>  	RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
>  };
>  
> diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
> index fba9e9f46a8c..0365557f7122 100644
> --- a/arch/riscv/kernel/cpu.c
> +++ b/arch/riscv/kernel/cpu.c
> @@ -89,6 +89,7 @@ int riscv_of_parent_hartid(struct device_node *node)
>  static struct riscv_isa_ext_data isa_ext_arr[] = {
>  	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
>  	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
> +	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
>  	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
>  };
>  
> diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> index 6a40cb8134bd..e956f4d763ec 100644
> --- a/arch/riscv/kernel/cpufeature.c
> +++ b/arch/riscv/kernel/cpufeature.c
> @@ -199,6 +199,7 @@ void __init riscv_fill_hwcap(void)
>  			} else {
>  				SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
>  				SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
> +				SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
>  			}
>  #undef SET_ISA_EXT_MAP
>  		}
> @@ -259,6 +260,20 @@ static bool __init_or_module cpufeature_probe_svpbmt(unsigned int stage)
>  	return false;
>  }
>  
> +static bool __init_or_module cpufeature_probe_cmo(unsigned int stage)
> +{
> +#ifdef CONFIG_RISCV_ISA_ZICBOM
> +	switch (stage) {
> +	case RISCV_ALTERNATIVES_EARLY_BOOT:
> +		return false;
> +	default:
> +		return riscv_isa_extension_available(NULL, ZICBOM);
> +	}
> +#endif
> +
> +	return false;
> +}
> +
>  /*
>   * Probe presence of individual extensions.
>   *
> @@ -273,6 +288,9 @@ static u32 __init_or_module cpufeature_probe(unsigned int stage)
>  	if (cpufeature_probe_svpbmt(stage))
>  		cpu_req_feature |= (1U << CPUFEATURE_SVPBMT);
>  
> +	if (cpufeature_probe_cmo(stage))
> +		cpu_req_feature |= (1U << CPUFEATURE_CMO);
> +
>  	return cpu_req_feature;
>  }
>  
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index f0f36a4a0e9b..95ef6e2bf45c 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -22,6 +22,7 @@
>  #include <linux/crash_dump.h>
>  
>  #include <asm/alternative.h>
> +#include <asm/cacheflush.h>
>  #include <asm/cpu_ops.h>
>  #include <asm/early_ioremap.h>
>  #include <asm/pgtable.h>
> @@ -296,6 +297,7 @@ void __init setup_arch(char **cmdline_p)
>  #endif
>  
>  	riscv_fill_hwcap();
> +	riscv_init_cbom_blocksize();
>  	apply_boot_alternatives();
>  }
>  
> diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
> index ac7a25298a04..548f2f3c00e9 100644
> --- a/arch/riscv/mm/Makefile
> +++ b/arch/riscv/mm/Makefile
> @@ -30,3 +30,4 @@ endif
>  endif
>  
>  obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
> +obj-$(CONFIG_RISCV_ISA_ZICBOM) += dma-noncoherent.o
> diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
> new file mode 100644
> index 000000000000..f77f4c529835
> --- /dev/null
> +++ b/arch/riscv/mm/dma-noncoherent.c
> @@ -0,0 +1,93 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * RISC-V specific functions to support DMA for non-coherent devices
> + *
> + * Copyright (c) 2021 Western Digital Corporation or its affiliates.
> + */
> +
> +#include <linux/dma-direct.h>
> +#include <linux/dma-map-ops.h>
> +#include <linux/init.h>
> +#include <linux/io.h>
> +#include <linux/libfdt.h>
> +#include <linux/mm.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <asm/cacheflush.h>
> +
> +static unsigned int riscv_cbom_block_size = L1_CACHE_BYTES;
> +
> +void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> +{
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		ALT_CMO_OP(CLEAN, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	case DMA_FROM_DEVICE:
> +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;

arch_sync_dma_for_device(DMA_FROM_DEVICE) is a no-op from the CPU's perspective.
Invalidating the CPU's cache goes in arch_sync_dma_for_cpu(DMA_FROM_DEVICE).

> +	case DMA_BIDIRECTIONAL:
> +		ALT_CMO_OP(FLUSH, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> +void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> +{
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		break;
> +	case DMA_FROM_DEVICE:
> +	case DMA_BIDIRECTIONAL:
> +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);

For arch_sync_dma_for_cpu(DMA_BIDIRECTIONAL), we expect the CPU to have written
to the buffer, so this should flush, not invalidate.

Regards,
Samuel

> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> +void arch_dma_prep_coherent(struct page *page, size_t size)
> +{
> +	void *flush_addr = page_address(page);
> +
> +	memset(flush_addr, 0, size);
> +	ALT_CMO_OP(FLUSH, (unsigned long)flush_addr, size, riscv_cbom_block_size);
> +}
> +
> +void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
> +		const struct iommu_ops *iommu, bool coherent)
> +{
> +	/* If a specific device is dma-coherent, set it here */
> +	dev->dma_coherent = coherent;
> +}
> +
> +void riscv_init_cbom_blocksize(void)
> +{
> +	struct device_node *node;
> +	int ret;
> +	u32 val;
> +
> +	for_each_of_cpu_node(node) {
> +		int hartid = riscv_of_processor_hartid(node);
> +		int cbom_hartid;
> +
> +		if (hartid < 0)
> +			continue;
> +
> +		/* set block-size for cbom extension if available */
> +		ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
> +		if (ret)
> +			continue;
> +
> +		if (!riscv_cbom_block_size) {
> +			riscv_cbom_block_size = val;
> +			cbom_hartid = hartid;
> +		} else {
> +			if (riscv_cbom_block_size != val)
> +				pr_warn("cbom-block-size mismatched between harts %d and %d\n",
> +					cbom_hartid, hartid);
> +		}
> +	}
> +}
>
Christoph Hellwig June 13, 2022, 5:50 a.m. UTC | #4
On Sun, Jun 12, 2022 at 02:15:00PM -0500, Samuel Holland wrote:
> > +config RISCV_ISA_ZICBOM
> > +	bool "Zicbom extension support for non-coherent dma operation"
> > +	select ARCH_HAS_DMA_PREP_COHERENT
> > +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> > +	select ARCH_HAS_SYNC_DMA_FOR_CPU
> > +	select ARCH_HAS_SETUP_DMA_OPS
> 
> ARCH_HAS_SETUP_DMA_OPS needs to be separate from the non-coherent DMA option,
> because iommu_setup_dma_ops() will need to be called from arch_setup_dma_ops()
> even on a fully-coherent system. (But this change is not strictly necessary for
> this series.)

It doesn't need to be separate, you can just add another select for
the symbol.

> > +	case DMA_FROM_DEVICE:
> > +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> > +		break;
> 
> arch_sync_dma_for_device(DMA_FROM_DEVICE) is a no-op from the CPU's perspective.
> Invalidating the CPU's cache goes in arch_sync_dma_for_cpu(DMA_FROM_DEVICE).

Only if you guarantee that there is never any speculation.  See:

https://lore.kernel.org/lkml/20180518175004.GF17671@n2100.armlinux.org.uk
Heiko Stuebner June 15, 2022, 4:56 p.m. UTC | #5
Hi Christoph,

On Friday, 10 June 2022 at 07:56:08 CEST, Christoph Hellwig wrote:
> On Fri, Jun 10, 2022 at 02:43:07AM +0200, Heiko Stuebner wrote:
> > +config RISCV_ISA_ZICBOM
> > +	bool "Zicbom extension support for non-coherent dma operation"
> > +	select ARCH_HAS_DMA_PREP_COHERENT
> > +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> > +	select ARCH_HAS_SYNC_DMA_FOR_CPU
> > +	select ARCH_HAS_SETUP_DMA_OPS
> > +	select DMA_DIRECT_REMAP
> > +	select RISCV_ALTERNATIVE
> > +	default y
> > +	help
> > +	   Adds support to dynamically detect the presence of the ZICBOM extension
> 
> Overly long line here.

fixed

> 
> > +	   (Cache Block Management Operations) and enable its usage.
> > +
> > +	   If you don't know what to do here, say Y.
> 
> But more importantly I think the whole text here is not very helpful.
> What users care about is non-coherent DMA support.  What extension is
> used for that is rather secondary.

I guess it might make sense to split that in some way.
I.e. Zicbom provides one implementation for handling non-coherence,
the D1 uses different (but very similar) instructions, while the SoC on the
Beagle-V does something completely different.

So I guess it could make sense to have a general DMA_NONCOHERENT option
which gets selected by the relevant users.

This also fixes the issue that Zicbom needs a very new binutils,
but if Beagle-V support happens, that wouldn't need it.


> Also please capitalize DMA.

fixed

> > +void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> > +{
> > +	switch (dir) {
> > +	case DMA_TO_DEVICE:
> > +		ALT_CMO_OP(CLEAN, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> > +		break;
> > +	case DMA_FROM_DEVICE:
> > +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> > +		break;
> > +	case DMA_BIDIRECTIONAL:
> > +		ALT_CMO_OP(FLUSH, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> > +		break;
> > +	default:
> > +		break;
> > +	}
> 
> Please avoid all these crazy long lines.  And use a local variable
> for the virtual address.  And why do you pass that virtual address
> as an unsigned long to ALT_CMO_OP?  You're going to make your life
> much easier if you simply always pass a pointer.

fixed all of those.
And of course you're right, not having the cast when calling ALT_CMO_OP
makes things definitely a lot nicer looking.

> Last but not least, does clean in RISC-V mean writeback and flush mean
> writeback plus invalidate?  If so the code is correct, but the choice
> of names in the RISC-V spec is extremely unfortunate.

clean: 
    makes data [...] visible to a set of non-coherent agents [...] by
    performing a write transfer of a copy of a cache block [...]

flush:
    performs a clean followed by an invalidate

So that's a yes to your question

> > +void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> > +{
> > +	switch (dir) {
> > +	case DMA_TO_DEVICE:
> > +		break;
> > +	case DMA_FROM_DEVICE:
> > +	case DMA_BIDIRECTIONAL:
> > +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> > +		break;
> > +	default:
> > +		break;
> > +	}
> > +}
> 
> Same comment here and in few other places.

fixed

> > +
> > +void arch_dma_prep_coherent(struct page *page, size_t size)
> > +{
> > +	void *flush_addr = page_address(page);
> > +
> > +	memset(flush_addr, 0, size);
> > +	ALT_CMO_OP(FLUSH, (unsigned long)flush_addr, size, riscv_cbom_block_size);
> > +}
> 
> arch_dma_prep_coherent should never zero the memory, that is left
> for the upper layers.`

fixed

> > +void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
> > +		const struct iommu_ops *iommu, bool coherent)
> > +{
> > +	/* If a specific device is dma-coherent, set it here */
> 
> This comment isn't all that useful.

ok, I've dropped it

> > +	dev->dma_coherent = coherent;
> > +}
> 
> But more importantly, this assums that once this code is built all
> devices are non-coherent by default.  I.e. with this patch applied
> and the config option enabled we'll now suddenly start doing cache
> management operations or setups that didn't do it before.

If I'm reading things correctly [0], the default for those functions
is for them to be empty - but defined - in the coherent case.

When you look at the definition of ALT_CMO_OP

#define ALT_CMO_OP(_op, _start, _size, _cachesize)                      \
asm volatile(ALTERNATIVE_2(                                             \
        __nops(6),                                                      \

 you'll see that its default variant is to do nothing, and the
non-coherency voodoo is only patched in if the Zicbom extension
(or T-Head errata) is detected at runtime.

So in the coherent case (with the memset removed as you suggested),
the arch_sync_dma_* and arch_dma_prep_coherent functions end up as
something like

void arch_dma_prep_coherent(struct page *page, size_t size)
{
        void *flush_addr = page_address(page);

        nops(6);
}

which is very much like the defaults [0] I guess, or am I
overlooking something?

Thanks for taking the time for that review
Heiko


[0] https://elixir.bootlin.com/linux/latest/source/include/linux/dma-map-ops.h#L293
Christoph Hellwig June 15, 2022, 5:49 p.m. UTC | #6
On Wed, Jun 15, 2022 at 06:56:40PM +0200, Heiko Stübner wrote:
> If I'm reading things correctly [0], the default for those functions
> is for those to be empty - but defined in the coherent case.

That's not the point.

Zicbom is just an extension that allows the CPU to support managing
cache state.  Non-coherent DMA is just one of the use cases; there
are others like persistent memory.  And when a CPU core supports
Zicbom it might or might not have any non-coherent peripherals.  Or
even some coherent and some non-coherent ones, something that
is pretty common in arm/arm64 CPUs, where PCIe is usually cache
coherent, but some other cheap peripherals might not be.

That is why Linux ports require the platform (usually through
DT or ACPI) to mark which devices are coherent and which ones
are not.
Heiko Stuebner June 16, 2022, 9:46 a.m. UTC | #7
Hi,

On Wednesday, 15 June 2022 at 19:49:10 CEST, Christoph Hellwig wrote:
> On Wed, Jun 15, 2022 at 06:56:40PM +0200, Heiko Stübner wrote:
> > If I'm reading things correctly [0], the default for those functions
> > is for those to be empty - but defined in the coherent case.
> 
> That's not the point.
> 
> Zicbom is just an extension that allows the CPU to support managing
> cache state.  Non-coherent DMA is just one of the use cases there
> are others like persistent memory.  And when a CPU core supports
> Zicbom it might or might not have any non-coherent peripherals.  Or
> even some coherent and some non-coherent ones, something that
> is pretty common in arm/arm64 CPUs, where PCIe is usually cache
> coherent, but some other cheap peripherals might not be.
> 
> That is why Linux ports require the platform (usually through
> DT or ACPI) to mark which devices are coherent and which ones
> are not.

I "get" it now I think. I was somewhat struggling with what you were
aiming at, but that was a case of not seeing the forest for the trees on
my part. And of course you were right in recognizing that issue :-) .

Without CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE and friends,
dev_is_dma_coherent() will always return true; with them it returns the
dma_coherent attribute. Hence the "coherent" value for every system not
managing things will suddenly show as non-coherent where it showed as
coherent before.


As we already have detection-points for non-coherent systems (zicbom
detection, t-head errata detection) I guess just also switching some boolean
might solve that, so that arch_setup_dma_ops() will set the dma_coherent
attribute to true always except when some non-coherent system is detected.


void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
                const struct iommu_ops *iommu, bool coherent)
{
        /* only track coherent attributes, if cache-management is available */
        if (enable_noncoherency)
                dev->dma_coherent = coherent;
        else
                dev->dma_coherent = true;
}


Heiko
Christoph Hellwig June 16, 2022, 11:53 a.m. UTC | #8
On Thu, Jun 16, 2022 at 11:46:45AM +0200, Heiko Stübner wrote:
> Without CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE and friends
> dev_is_dma_coherent() will always return true otherwise the dma_coherent
> attribute. Hence the "coherent" value for every system not managing things
> will suddenly show as non-coherent where it showed as coherent before.

Yes.

> As we already have detection-points for non-coherent systems (zicbom
> detection, t-head errata detection)

No, we don't.  There are plenty of reasons to support Zicbom without
ever having any non-coherent DMA peripherals.  Or just some non-coherent
ones.  So that alone is not a good indicator at all.

The proper interface for that in DT-based setups is of_dma_is_coherent(),
which looks at the dma-coherent DT property.  And given that RISC-V
started out as all coherent we need to follow the powerpc way
(CONFIG_OF_DMA_DEFAULT_COHERENT) and default to coherent with an
explicit property for non-coherent devices, and not the arm/arm64 way
where non-coherent is the default and coherent devices need the property.
Heiko Stuebner June 16, 2022, 12:09 p.m. UTC | #9
On Thursday, 16 June 2022 at 13:53:42 CEST, Christoph Hellwig wrote:
> On Thu, Jun 16, 2022 at 11:46:45AM +0200, Heiko Stübner wrote:
> > Without CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE and friends
> > dev_is_dma_coherent() will always return true otherwise the dma_coherent
> > attribute. Hence the "coherent" value for every system not managing things
> > will suddenly show as non-coherent where it showed as coherent before.
> 
> Yes.
> 
> > As we already have detection-points for non-coherent systems (zicbom
> > detection, t-head errata detection)
> 
> No, we don't.  There are plenty of reasons to support Zicbom without
> ever having any non-coherent DMA peripherals.  Or just some non-coherent
> ones.  So that alone is not a good indicator at all.
> 
> The proper interface for that in DT-based setups is of_dma_is_coherent(),
> which looks at the dma-coherent DT property.  And given that RISC-V
> started out as all coherent we need to follow the powerpc way
> (CONFIG_OF_DMA_DEFAULT_COHERENT) and default to coherent with an
> explicit property for non-coherent devices, and not the arm/arm64 way
> where non-coherent is the default and coherent devices need the property.

I did look at the dma-coherent-property -> of_dma_is_coherent()
	 -> of_dma_configure_id() -> arch_setup_dma_ops() chain yesterday
which sets up the value dev_is_dma_coherent() returns.

The Zicbom extension will only be in new platforms, so none of the currently
supported ones will have that. So my thinking was that we can default to
true in arch_setup_dma_ops() and only use the coherency value read from
DT when actual cache-management operations are defined.

My guess was that new platforms implementing cache-management will want
to be non-coherent by default?

So if the kernel is running on a platform not implementing cache-management,
dev_is_dma_coherent() would always return true as it does now, while on a
new platform implementing cache-management we'd default to non-coherent
and expect that new devicetree/firmware to specify the coherent devices.
Christoph Hellwig June 16, 2022, 12:11 p.m. UTC | #10
On Thu, Jun 16, 2022 at 02:09:47PM +0200, Heiko Stübner wrote:
> My guess was that new platforms implementing cache-management will want
> to be non-coherent by default?

No.  Cache incoherent DMA is absolutely horrible and almost impossible
to get right for the corner cases.  It is a cost cutting measure seen on
cheap SOCs and mostly avoided for more enterprise grade products.
Heiko Stuebner June 17, 2022, 8:30 a.m. UTC | #11
On Thursday, 16 June 2022 at 14:11:57 CEST, Christoph Hellwig wrote:
> On Thu, Jun 16, 2022 at 02:09:47PM +0200, Heiko Stübner wrote:
> > My guess was that new platforms implementing cache-management will want
> > to be non-coherent by default?
> 
> No.  Cache incoherent DMA is absolutely horrible and almost impossible
> to get right for the corner cases.  It is a cost cutting measure seen on
> cheap SOCs and mostly avoided for more enterprise grade products.

ok, then we'll do it the other way around as suggested :-) .
Coherent by default and marking the non-coherent parts.

DT people on IRC yesterday were also open to adding a dma-noncoherent
property for that case.

Heiko
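As a purely hypothetical illustration of that outcome - coherent by default, with only the exception marked - a devicetree fragment for such a platform might look like this (the node names, compatibles and the dma-noncoherent property are assumptions for illustration, not something defined by this series):

```dts
/ {
	soc {
		/* Coherent by default: nothing to declare here. */
		ethernet@10090000 {
			compatible = "vendor,example-gmac"; /* made-up */
			reg = <0x10090000 0x2000>;
		};

		/* The odd one out is marked explicitly. */
		mmc@16000000 {
			compatible = "vendor,example-mmc"; /* made-up */
			reg = <0x16000000 0x1000>;
			dma-noncoherent;
		};
	};
};
```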

Patch

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 32ffef9f6e5b..384d0c15f2b6 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -376,6 +376,21 @@  config RISCV_ISA_SVPBMT
 
 	   If you don't know what to do here, say Y.
 
+config RISCV_ISA_ZICBOM
+	bool "Zicbom extension support for non-coherent dma operation"
+	select ARCH_HAS_DMA_PREP_COHERENT
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SETUP_DMA_OPS
+	select DMA_DIRECT_REMAP
+	select RISCV_ALTERNATIVE
+	default y
+	help
+	   Adds support to dynamically detect the presence of the ZICBOM extension
+	   (Cache Block Management Operations) and enable its usage.
+
+	   If you don't know what to do here, say Y.
+
 config FPU
 	bool "FPU support"
 	default y
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 23ff70350992..eb12d014b158 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -42,6 +42,12 @@  void flush_icache_mm(struct mm_struct *mm, bool local);
 
 #endif /* CONFIG_SMP */
 
+#ifdef CONFIG_RISCV_ISA_ZICBOM
+void riscv_init_cbom_blocksize(void);
+#else
+static inline void riscv_init_cbom_blocksize(void) { }
+#endif
+
 /*
  * Bits in sys_riscv_flush_icache()'s flags argument.
  */
diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
index 398e351e7002..2e80a75b5241 100644
--- a/arch/riscv/include/asm/errata_list.h
+++ b/arch/riscv/include/asm/errata_list.h
@@ -20,7 +20,8 @@ 
 #endif
 
 #define	CPUFEATURE_SVPBMT 0
-#define	CPUFEATURE_NUMBER 1
+#define	CPUFEATURE_CMO 1
+#define	CPUFEATURE_NUMBER 2
 
 #ifdef __ASSEMBLY__
 
@@ -87,6 +88,39 @@  asm volatile(ALTERNATIVE(						\
 #define ALT_THEAD_PMA(_val)
 #endif
 
+/*
+ * cbo.clean rs1
+ * | 31 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
+ *    0...01     rs1       010      00000  0001111
+ *
+ * cbo.flush rs1
+ * | 31 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
+ *    0...10     rs1       010      00000  0001111
+ *
+ * cbo.inval rs1
+ * | 31 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
+ *    0...00     rs1       010      00000  0001111
+ */
+#define CBO_INVAL_A0	".long 0x05200F"
+#define CBO_CLEAN_A0	".long 0x15200F"
+#define CBO_FLUSH_A0	".long 0x25200F"
+
+#define ALT_CMO_OP(_op, _start, _size, _cachesize)			\
+asm volatile(ALTERNATIVE(						\
+	__nops(5),							\
+	"mv a0, %1\n\t"							\
+	"j 2f\n\t"							\
+	"3:\n\t"							\
+	CBO_##_op##_A0 "\n\t"						\
+	"add a0, a0, %0\n\t"						\
+	"2:\n\t"							\
+	"bltu a0, %2, 3b\n\t", 0,					\
+		CPUFEATURE_CMO, CONFIG_RISCV_ISA_ZICBOM)		\
+	: : "r"(_cachesize),						\
+	    "r"(ALIGN((_start), (_cachesize))),				\
+	    "r"(ALIGN((_start) + (_size), (_cachesize)))		\
+	: "a0")
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 4e2486881840..6044e402003d 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -53,6 +53,7 @@  extern unsigned long elf_hwcap;
 enum riscv_isa_ext_id {
 	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
 	RISCV_ISA_EXT_SVPBMT,
+	RISCV_ISA_EXT_ZICBOM,
 	RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
 };
 
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index fba9e9f46a8c..0365557f7122 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -89,6 +89,7 @@  int riscv_of_parent_hartid(struct device_node *node)
 static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
+	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
 	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
 };
 
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 6a40cb8134bd..e956f4d763ec 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -199,6 +199,7 @@  void __init riscv_fill_hwcap(void)
 			} else {
 				SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
 				SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
+				SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
 			}
 #undef SET_ISA_EXT_MAP
 		}
@@ -259,6 +260,20 @@  static bool __init_or_module cpufeature_probe_svpbmt(unsigned int stage)
 	return false;
 }
 
+static bool __init_or_module cpufeature_probe_cmo(unsigned int stage)
+{
+#ifdef CONFIG_RISCV_ISA_ZICBOM
+	switch (stage) {
+	case RISCV_ALTERNATIVES_EARLY_BOOT:
+		return false;
+	default:
+		return riscv_isa_extension_available(NULL, ZICBOM);
+	}
+#endif
+
+	return false;
+}
+
 /*
  * Probe presence of individual extensions.
  *
@@ -273,6 +288,9 @@  static u32 __init_or_module cpufeature_probe(unsigned int stage)
 	if (cpufeature_probe_svpbmt(stage))
 		cpu_req_feature |= (1U << CPUFEATURE_SVPBMT);
 
+	if (cpufeature_probe_cmo(stage))
+		cpu_req_feature |= (1U << CPUFEATURE_CMO);
+
 	return cpu_req_feature;
 }
 
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index f0f36a4a0e9b..95ef6e2bf45c 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -22,6 +22,7 @@ 
 #include <linux/crash_dump.h>
 
 #include <asm/alternative.h>
+#include <asm/cacheflush.h>
 #include <asm/cpu_ops.h>
 #include <asm/early_ioremap.h>
 #include <asm/pgtable.h>
@@ -296,6 +297,7 @@  void __init setup_arch(char **cmdline_p)
 #endif
 
 	riscv_fill_hwcap();
+	riscv_init_cbom_blocksize();
 	apply_boot_alternatives();
 }
 
diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
index ac7a25298a04..548f2f3c00e9 100644
--- a/arch/riscv/mm/Makefile
+++ b/arch/riscv/mm/Makefile
@@ -30,3 +30,4 @@  endif
 endif
 
 obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
+obj-$(CONFIG_RISCV_ISA_ZICBOM) += dma-noncoherent.o
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
new file mode 100644
index 000000000000..f77f4c529835
--- /dev/null
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -0,0 +1,93 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * RISC-V specific functions to support DMA for non-coherent devices
+ *
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ */
+
+#include <linux/dma-direct.h>
+#include <linux/dma-map-ops.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/libfdt.h>
+#include <linux/mm.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <asm/cacheflush.h>
+
+static unsigned int riscv_cbom_block_size = L1_CACHE_BYTES;
+
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		ALT_CMO_OP(CLEAN, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
+		break;
+	case DMA_FROM_DEVICE:
+		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
+		break;
+	case DMA_BIDIRECTIONAL:
+		ALT_CMO_OP(FLUSH, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
+		break;
+	default:
+		break;
+	}
+}
+
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		break;
+	case DMA_FROM_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
+		break;
+	default:
+		break;
+	}
+}
+
+void arch_dma_prep_coherent(struct page *page, size_t size)
+{
+	void *flush_addr = page_address(page);
+
+	memset(flush_addr, 0, size);
+	ALT_CMO_OP(FLUSH, (unsigned long)flush_addr, size, riscv_cbom_block_size);
+}
+
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+		const struct iommu_ops *iommu, bool coherent)
+{
+	/* If a specific device is dma-coherent, set it here */
+	dev->dma_coherent = coherent;
+}
+
+void riscv_init_cbom_blocksize(void)
+{
+	struct device_node *node;
+	int cbom_hartid = -1;
+	int ret;
+	u32 val;
+
+	for_each_of_cpu_node(node) {
+		int hartid = riscv_of_processor_hartid(node);
+
+		if (hartid < 0)
+			continue;
+
+		/* set block-size for cbom extension if available */
+		ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
+		if (ret)
+			continue;
+
+		if (cbom_hartid < 0) {
+			riscv_cbom_block_size = val;
+			cbom_hartid = hartid;
+		} else {
+			if (riscv_cbom_block_size != val)
+				pr_warn("cbom-block-size mismatched between harts %d and %d\n",
+					cbom_hartid, hartid);
+		}
+	}
+}