
[v2,1/2] riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS

Message ID 20231203135753.1575-2-jszhang@kernel.org (mailing list archive)
State Superseded
Series riscv: enable EFFICIENT_UNALIGNED_ACCESS and DCACHE_WORD_ACCESS

Checks

Context Check Description
conchuod/vmtest-for-next-PR fail PR summary
conchuod/patch-1-test-1 success .github/scripts/patches/build_rv32_defconfig.sh
conchuod/patch-1-test-2 success .github/scripts/patches/build_rv64_clang_allmodconfig.sh
conchuod/patch-1-test-3 success .github/scripts/patches/build_rv64_gcc_allmodconfig.sh
conchuod/patch-1-test-4 success .github/scripts/patches/build_rv64_nommu_k210_defconfig.sh
conchuod/patch-1-test-5 success .github/scripts/patches/build_rv64_nommu_virt_defconfig.sh
conchuod/patch-1-test-6 success .github/scripts/patches/checkpatch.sh
conchuod/patch-1-test-7 success .github/scripts/patches/dtb_warn_rv64.sh
conchuod/patch-1-test-8 success .github/scripts/patches/header_inline.sh
conchuod/patch-1-test-9 success .github/scripts/patches/kdoc.sh
conchuod/patch-1-test-10 success .github/scripts/patches/module_param.sh
conchuod/patch-1-test-11 success .github/scripts/patches/verify_fixes.sh
conchuod/patch-1-test-12 success .github/scripts/patches/verify_signedoff.sh

Commit Message

Jisheng Zhang Dec. 3, 2023, 1:57 p.m. UTC
Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
support efficient unaligned access, so for performance reasons we want
to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To avoid
performance regressions on platforms without efficient unaligned
access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected globally.

Runtime code patching based on the detected access speed would solve
this problem, but it is not easy: it involves a lot of work to modify
various subsystems such as net, mm and lib, and can only be done step
by step.

So let's take an easier approach: add support for efficient unaligned
access and hide it behind NONPORTABLE.

Introduce RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends on
NONPORTABLE. If users know at config time that the kernel will only
run on hardware platforms with efficient unaligned access, they can
enable it. Obviously, a generic unified kernel Image shouldn't enable
it.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
---
 arch/riscv/Kconfig | 12 ++++++++++++
 1 file changed, 12 insertions(+)

Comments

Charlie Jenkins Dec. 4, 2023, 7:15 p.m. UTC | #1
On Sun, Dec 03, 2023 at 09:57:52PM +0800, Jisheng Zhang wrote:
> Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
> support efficient unaligned access, for performance reason we want
> to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To
> avoid performance regressions on other non efficient unaligned access
> platforms, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be globally selected.
> 
> To solve this problem, runtime code patching based on the detected
> speed is a good solution. But that's not easy, it involves lots of
> work to modify vairous subsystems such as net, mm, lib and so on.
> This can be done step by step.
> 
> So let's take an easier solution: add support to efficient unaligned
> access and hide the support under NONPORTABLE.
> 
> Now let's introduce RISCV_EFFICIENT_UNALIGNED_ACCESS which depends on
> NONPORTABLE, if users know during config time that the kernel will be
> only run on those efficient unaligned access hw platforms, they can
> enable it. Obviously, generic unified kernel Image shouldn't enable it.
> 
> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
> ---
>  arch/riscv/Kconfig | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 7f8aa25457ba..0a76209e9b02 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -654,6 +654,18 @@ config RISCV_MISALIGNED
>  	  load/store for both kernel and userspace. When disable, misaligned
>  	  accesses will generate SIGBUS in userspace and panic in kernel.
>  
> +config RISCV_EFFICIENT_UNALIGNED_ACCESS

hwprobe already exists for this purpose. If kernel code wants to
leverage the hardware's efficient unaligned accesses, it can use static
keys. I have a patch that sets such a static key when the hardware is
detected to have fast unaligned accesses:

https://lore.kernel.org/linux-riscv/20231117-optimize_checksum-v11-2-7d9d954fe361@rivosinc.com/
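
For reference, a minimal sketch of that static-key pattern (the key and
helper names below are illustrative, not taken from the linked patch;
assumes <linux/jump_label.h> and <linux/types.h>):

/* Minimal sketch of the static-key pattern; the key name is made up. */
DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_key);

/* Called once when probing finds fast unaligned accesses. */
static void note_fast_unaligned_access(void)
{
	static_branch_enable(&fast_unaligned_access_key);
}

/* Callers branch on the key with near-zero overhead once it is patched. */
static inline bool has_fast_unaligned_access(void)
{
	return static_branch_likely(&fast_unaligned_access_key);
}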

- Charlie

> +	bool "Use unaligned access for some functions"
> +	depends on NONPORTABLE
> +	select HAVE_EFFICIENT_UNALIGNED_ACCESS
> +	default n
> +	help
> +	  Say Y here if you want the kernel only run on hardware platforms which
> +	  support efficient unaligned access, then unaligned access will be used
> +	  in some functions for optimized performance.
> +
> +	  If unsure what to do here, say N.
> +
>  endmenu # "Platform type"
>  
>  menu "Kernel features"
> -- 
> 2.42.0
> 
> 
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
Eric Biggers Dec. 5, 2023, 2:14 a.m. UTC | #2
On Mon, Dec 04, 2023 at 11:15:28AM -0800, Charlie Jenkins wrote:
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index 7f8aa25457ba..0a76209e9b02 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -654,6 +654,18 @@ config RISCV_MISALIGNED
> >  	  load/store for both kernel and userspace. When disable, misaligned
> >  	  accesses will generate SIGBUS in userspace and panic in kernel.
> >  
> > +config RISCV_EFFICIENT_UNALIGNED_ACCESS
> 
> There already exists hwprobe for this purpose. If kernel code wants to
> leverage the efficient unaligned accesses of hardware, it can use static
> keys. I have a patch that will set this static key if the hardware was
> detected to have fast unaligned accesses:
> 
> https://lore.kernel.org/linux-riscv/20231117-optimize_checksum-v11-2-7d9d954fe361@rivosinc.com/

Is the plan to make the get_unaligned* and put_unaligned* macros expand to code
for both cases, and select between them using a static key?  Note that there are
a very large number of callers of these macros in the kernel.  And what about
kernel code that checks CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS directly?
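
Purely to illustrate the question -- this is not existing kernel code -- a
dual-path helper driven by a static key might look roughly like this
(assumes <linux/types.h>, <linux/jump_label.h> and <asm/byteorder.h>; the
key name is invented):

/* Hypothetical sketch only: a runtime-selected unaligned 32-bit load. */
static inline u32 get_unaligned_le32_runtime(const void *p)
{
	const u8 *b = p;

	if (static_branch_likely(&fast_unaligned_access_key))
		return le32_to_cpup((const __le32 *)p); /* one (possibly misaligned) load */

	/* byte-by-byte fallback where misaligned loads trap or are slow */
	return b[0] | b[1] << 8 | b[2] << 16 | (u32)b[3] << 24;
}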

AFAIK, no other Linux architecture supports kernel images where the unaligned
access support is unknown at compile time.  It's not clear to me that such an
approach is feasible.  A static key can easily be provided, but it's unclear
what code would use it, given that currently lots of kernel code assumes that
unaligned access support is known at compile time.

Meanwhile, there are people building kernels they know will only be deployed on
systems where unaligned accesses are supported.  To me, it seems useful to
provide a kconfig option for them to build a more efficient kernel.

- Eric
Qingfang Deng Dec. 5, 2023, 8:39 a.m. UTC | #3
Hi,

You may as well drop -mstrict-align from the CFLAGS in the Makefile when
this option is enabled:

--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -108,7 +108,9 @@ KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
 # unaligned accesses.  While unaligned accesses are explicitly allowed in the
 # RISC-V ISA, they're emulated by machine mode traps on all extant
 # architectures.  It's faster to have GCC emit only aligned accesses.
+ifneq ($(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS),y)
 KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+endif
 
 ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y)
 prepare: stack_protector_prepare
Jisheng Zhang Dec. 5, 2023, 1:53 p.m. UTC | #4
On Mon, Dec 04, 2023 at 06:14:06PM -0800, Eric Biggers wrote:
> On Mon, Dec 04, 2023 at 11:15:28AM -0800, Charlie Jenkins wrote:
> > > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > > index 7f8aa25457ba..0a76209e9b02 100644
> > > --- a/arch/riscv/Kconfig
> > > +++ b/arch/riscv/Kconfig
> > > @@ -654,6 +654,18 @@ config RISCV_MISALIGNED
> > >  	  load/store for both kernel and userspace. When disable, misaligned
> > >  	  accesses will generate SIGBUS in userspace and panic in kernel.
> > >  
> > > +config RISCV_EFFICIENT_UNALIGNED_ACCESS
> > 
> > There already exists hwprobe for this purpose. If kernel code wants to
> > leverage the efficient unaligned accesses of hardware, it can use static
> > keys. I have a patch that will set this static key if the hardware was
> > detected to have fast unaligned accesses:
> > 
> > https://lore.kernel.org/linux-riscv/20231117-optimize_checksum-v11-2-7d9d954fe361@rivosinc.com/
> 
> Is the plan to make the get_unaligned* and put_unaligned* macros expand to code
> for both cases, and select between them using a static key?  Note that there are
> a very large number of callers of these macros in the kernel.  And what about
> kernel code that checks CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS directly?
> 
> AFAIK, no other Linux architecture supports kernel images where the unaligned
> access support is unknown at compile time.  It's not clear to me that such an
> approach is feasible.  A static key can easily be provided, but it's unclear
> what code would use it, given that currently lots of kernel code assumes that
> unaligned access support is known at compile time.
> 
> Meanwhile, there are people building kernels they know will only be deployed on
> systems where unaligned accesses are supported.  To me, it seems useful to
> provide a kconfig option for them to build a more efficient kernel.

Generally, I agree with Eric's points above. Various subsystems such as net,
mm and lib have different code paths for CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS,
while Charlie's patch only touches part of arch/riscv. Even if those subsystem
maintainers agree to dynamic code patching (and I still believe persuading
them won't be easy), it remains a huge task that needs to be done step by
step. So until then, we'd better let this series be merged and benefit all
riscv systems with efficient unaligned access. Once that huge task is
completed, we can remove the config option.
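
To illustrate what those code paths typically look like (an illustrative
pattern, not a quote of any particular file; assumes <linux/types.h>,
<linux/string.h> and <asm/unaligned.h>):

/* The kind of compile-time branch spread across net/mm/lib, which a
 * runtime approach would have to replace. */
static u32 load_word(const void *p)
{
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	return get_unaligned((const u32 *)p);	/* compiler may emit a single load */
#else
	u32 v;

	memcpy(&v, p, sizeof(v));		/* always byte-safe */
	return v;
#endif
}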

Thanks
Charlie Jenkins Dec. 5, 2023, 8:56 p.m. UTC | #5
On Tue, Dec 05, 2023 at 09:53:50PM +0800, Jisheng Zhang wrote:
> On Mon, Dec 04, 2023 at 06:14:06PM -0800, Eric Biggers wrote:
> > On Mon, Dec 04, 2023 at 11:15:28AM -0800, Charlie Jenkins wrote:
> > > > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > > > index 7f8aa25457ba..0a76209e9b02 100644
> > > > --- a/arch/riscv/Kconfig
> > > > +++ b/arch/riscv/Kconfig
> > > > @@ -654,6 +654,18 @@ config RISCV_MISALIGNED
> > > >  	  load/store for both kernel and userspace. When disable, misaligned
> > > >  	  accesses will generate SIGBUS in userspace and panic in kernel.
> > > >  
> > > > +config RISCV_EFFICIENT_UNALIGNED_ACCESS
> > > 
> > > There already exists hwprobe for this purpose. If kernel code wants to
> > > leverage the efficient unaligned accesses of hardware, it can use static
> > > keys. I have a patch that will set this static key if the hardware was
> > > detected to have fast unaligned accesses:
> > > 
> > > https://lore.kernel.org/linux-riscv/20231117-optimize_checksum-v11-2-7d9d954fe361@rivosinc.com/
> > 
> > Is the plan to make the get_unaligned* and put_unaligned* macros expand to code
> > for both cases, and select between them using a static key?  Note that there are
> > a very large number of callers of these macros in the kernel.  And what about
> > kernel code that checks CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS directly?
> > 
> > AFAIK, no other Linux architecture supports kernel images where the unaligned
> > access support is unknown at compile time.  It's not clear to me that such an
> > approach is feasible.  A static key can easily be provided, but it's unclear
> > what code would use it, given that currently lots of kernel code assumes that
> > unaligned access support is known at compile time.
> > 
> > Meanwhile, there are people building kernels they know will only be deployed on
> > systems where unaligned accesses are supported.  To me, it seems useful to
> > provide a kconfig option for them to build a more efficient kernel.
> 
> Generally, I agree with Eric's above points. Various subsystem such as net, mm,
> lib and so on have different code path for CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS,
> while Charlie's patch only touch partial code of arch/riscv, and even if those
> subsystem maintainers agree with dynamic code patching(I still believe
> persuading those subsystem maintainers is not easy), that's still a
> huge task which needs to be done step by step. So before that, we'd
> better let this series merged and benefit all efficient unaligned access
> riscv systems. When the huge task is completed, we can remove the config
> option.
> 
> Thanks

It would be best to enable all of the paths that leverage
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS at runtime (using hwprobe)
instead of using a compile-time flag to do so. However, as you say, that
is a large task and doesn't need to be done immediately. For now I agree
it is sufficient to use this new RISCV_EFFICIENT_UNALIGNED_ACCESS
config.

- Charlie

Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Charles Lohr Dec. 6, 2023, 12:05 a.m. UTC | #6
The automatic detection code has become a bit of a thorn both for
folks like me who use the kernel for some fast spin-up aodr
virtualization (where check_unaligned_access soaks up 1/4 to 1/3 of
the total boot time and unaligned accesses are always fast) and for
FPGA soft-core development, where people easily know ahead of time
what the situation is going to be.  It would be extremely welcome if
the detection could always be overridden with a config value that
forces unaligned access either on or off and skips the check function
permanently.  For some of us, I don't see a world where we would ever
want autodetection on.  In the RISC-V arena, we are often dealing with
very small systems where the marginal cost of dead code is rather
high.

On Tue, Dec 5, 2023 at 12:57 PM Charlie Jenkins <charlie@rivosinc.com> wrote:
>
> On Tue, Dec 05, 2023 at 09:53:50PM +0800, Jisheng Zhang wrote:
> > On Mon, Dec 04, 2023 at 06:14:06PM -0800, Eric Biggers wrote:
> > > On Mon, Dec 04, 2023 at 11:15:28AM -0800, Charlie Jenkins wrote:
> > > > > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > > > > index 7f8aa25457ba..0a76209e9b02 100644
> > > > > --- a/arch/riscv/Kconfig
> > > > > +++ b/arch/riscv/Kconfig
> > > > > @@ -654,6 +654,18 @@ config RISCV_MISALIGNED
> > > > >           load/store for both kernel and userspace. When disable, misaligned
> > > > >           accesses will generate SIGBUS in userspace and panic in kernel.
> > > > >
> > > > > +config RISCV_EFFICIENT_UNALIGNED_ACCESS
> > > >
> > > > There already exists hwprobe for this purpose. If kernel code wants to
> > > > leverage the efficient unaligned accesses of hardware, it can use static
> > > > keys. I have a patch that will set this static key if the hardware was
> > > > detected to have fast unaligned accesses:
> > > >
> > > > https://lore.kernel.org/linux-riscv/20231117-optimize_checksum-v11-2-7d9d954fe361@rivosinc.com/
> > >
> > > Is the plan to make the get_unaligned* and put_unaligned* macros expand to code
> > > for both cases, and select between them using a static key?  Note that there are
> > > a very large number of callers of these macros in the kernel.  And what about
> > > kernel code that checks CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS directly?
> > >
> > > AFAIK, no other Linux architecture supports kernel images where the unaligned
> > > access support is unknown at compile time.  It's not clear to me that such an
> > > approach is feasible.  A static key can easily be provided, but it's unclear
> > > what code would use it, given that currently lots of kernel code assumes that
> > > unaligned access support is known at compile time.
> > >
> > > Meanwhile, there are people building kernels they know will only be deployed on
> > > systems where unaligned accesses are supported.  To me, it seems useful to
> > > provide a kconfig option for them to build a more efficient kernel.
> >
> > Generally, I agree with Eric's above points. Various subsystem such as net, mm,
> > lib and so on have different code path for CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS,
> > while Charlie's patch only touch partial code of arch/riscv, and even if those
> > subsystem maintainers agree with dynamic code patching(I still believe
> > persuading those subsystem maintainers is not easy), that's still a
> > huge task which needs to be done step by step. So before that, we'd
> > better let this series merged and benefit all efficient unaligned access
> > riscv systems. When the huge task is completed, we can remove the config
> > option.
> >
> > Thanks
>
> It would be best to enable all of the paths that leverage
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS at runtime (using hwprobe)
> instead of using a compile-time flag to do so. However, as you say, that
> is large task and doesn't need to be done immediately. For now I agree
> it is sufficient to use this new RISCV_EFFICIENT_UNALIGNED_ACCESS
> config.
>
> - Charlie
>
> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
Palmer Dabbelt Dec. 6, 2023, 4:19 p.m. UTC | #7
On Tue, 05 Dec 2023 16:05:27 PST (-0800), lohr85@gmail.com wrote:
> The automatic detection code has become a bit of a thorn both for
> folks like me who use the kernel for some fast-spin up aodr
> virtualization (where check_unaligned_access soaks up 1/4 to 1/3 of
> the total boot time and unaligned accesses are always fast) as well as
> causing issues for the FPGA soft core development where they easily
> know ahead of time what the situation is going to be.  It would be
> extremely welcome if the access could always be overridden with a
> config value that could either force on or force off unaligned access
> and avoid execution of the check function permanently.  I don't see a
> world where for some of us, we would ever want autodetection on.  In
> the RISC-V arena, many times we're dealing with very small systems
> where the marginal cost of dead code is rather high.

That seems generally reasonable to me.

We'd talked about putting misaligned access performance information in 
the DT at some point, but we went with probing instead.  So I think our 
options are a Kconfig option or a kernel command line argument; both 
seem generally useful to me, so I'd be fine with either (or both).

So I think someone should send a patch... ;)
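
Very roughly, a command-line override could be wired up with early_param();
this is only a sketch, and the parameter name and helper variables are
invented (assumes <linux/init.h> and <linux/string.h>):

/* Sketch only: a hypothetical boot parameter to force the result and
 * skip probing entirely. */
static bool force_fast_unaligned __initdata;
static bool skip_unaligned_probe __initdata;

static int __init unaligned_access_param(char *str)
{
	if (str && !strcmp(str, "fast"))
		force_fast_unaligned = skip_unaligned_probe = true;
	else if (str && !strcmp(str, "slow"))
		skip_unaligned_probe = true;
	return 0;
}
early_param("unaligned_access", unaligned_access_param);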

Also: I think it's not really a blocker for this patch set, as the 
probing behavior is there already.  IIUC it's really the probing that's 
the problem here due to the boot time performance impact, so even if we 
did nothing with the probed information it'd still be causing your 
issues.

> On Tue, Dec 5, 2023 at 12:57 PM Charlie Jenkins <charlie@rivosinc.com> wrote:
>>
>> On Tue, Dec 05, 2023 at 09:53:50PM +0800, Jisheng Zhang wrote:
>> > On Mon, Dec 04, 2023 at 06:14:06PM -0800, Eric Biggers wrote:
>> > > On Mon, Dec 04, 2023 at 11:15:28AM -0800, Charlie Jenkins wrote:
>> > > > > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> > > > > index 7f8aa25457ba..0a76209e9b02 100644
>> > > > > --- a/arch/riscv/Kconfig
>> > > > > +++ b/arch/riscv/Kconfig
>> > > > > @@ -654,6 +654,18 @@ config RISCV_MISALIGNED
>> > > > >           load/store for both kernel and userspace. When disable, misaligned
>> > > > >           accesses will generate SIGBUS in userspace and panic in kernel.
>> > > > >
>> > > > > +config RISCV_EFFICIENT_UNALIGNED_ACCESS
>> > > >
>> > > > There already exists hwprobe for this purpose. If kernel code wants to
>> > > > leverage the efficient unaligned accesses of hardware, it can use static
>> > > > keys. I have a patch that will set this static key if the hardware was
>> > > > detected to have fast unaligned accesses:
>> > > >
>> > > > https://lore.kernel.org/linux-riscv/20231117-optimize_checksum-v11-2-7d9d954fe361@rivosinc.com/
>> > >
>> > > Is the plan to make the get_unaligned* and put_unaligned* macros expand to code
>> > > for both cases, and select between them using a static key?  Note that there are
>> > > a very large number of callers of these macros in the kernel.  And what about
>> > > kernel code that checks CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS directly?
>> > >
>> > > AFAIK, no other Linux architecture supports kernel images where the unaligned
>> > > access support is unknown at compile time.  It's not clear to me that such an
>> > > approach is feasible.  A static key can easily be provided, but it's unclear
>> > > what code would use it, given that currently lots of kernel code assumes that
>> > > unaligned access support is known at compile time.

I agree we won't be able to get everything, but there are some focused 
routines like memcpy() where having runtime-variant behavior can make 
things measurably faster.  I'd guess there's some of this in crypto land 
as well.  We'd have to really look into the benefits, though: not only 
do we end up with a bunch of complexity, but using ALTERNATIVE() also 
tends to cause lower-quality codegen because of all the inline assembly 
trickery.
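
Roughly the shape of that whole-function approach, as a conceptual sketch
only (all function names are invented; assumes <linux/init.h> and
<linux/types.h>):

/* Pick a whole memcpy variant once at boot, based on the probed
 * unaligned-access speed. */
static void *(*riscv_memcpy)(void *, const void *, size_t) = memcpy_byte_aligned;

static int __init select_memcpy_variant(void)
{
	if (unaligned_access_is_fast())		/* assumed probe result */
		riscv_memcpy = memcpy_fast_unaligned;
	return 0;
}
arch_initcall(select_memcpy_variant);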

All of that is really based on replacing a whole function at runtime, 
though.  I don't think we're going to be able to do anything dynamic for 
the more general case of misaligned access support -- that's really in 
the realm of fine-grained compiler code generation, and trying to do 
that at runtime with the alternative-type approach is just going to lead 
to a bunch of poor-quality codegen and patched-in NOPs.  We'd 
essentially be trying to build a full JIT inside the kernel at that 
point.

It's essentially the same problem as with things like CMOV and bitmanip.

>> > > Meanwhile, there are people building kernels they know will only be deployed on
>> > > systems where unaligned accesses are supported.  To me, it seems useful to
>> > > provide a kconfig option for them to build a more efficient kernel.

I agree.  We've got a bit of a mess in Kconfig land where we don't 
differentiate between "build a kernel that tries to probe for $FEATURE" 
and "build a kernel that requires HW that supports $FEATURE".  We need 
to clean that up at some point, but there are enough of them already 
that I'm OK taking one more.

>> > Generally, I agree with Eric's above points. Various subsystem such as net, mm,
>> > lib and so on have different code path for CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS,
>> > while Charlie's patch only touch partial code of arch/riscv, and even if those
>> > subsystem maintainers agree with dynamic code patching(I still believe
>> > persuading those subsystem maintainers is not easy), that's still a
>> > huge task which needs to be done step by step. So before that, we'd
>> > better let this series merged and benefit all efficient unaligned access
>> > riscv systems. When the huge task is completed, we can remove the config
>> > option.
>> >
>> > Thanks
>>
>> It would be best to enable all of the paths that leverage
>> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS at runtime (using hwprobe)
>> instead of using a compile-time flag to do so. However, as you say, that
>> is large task and doesn't need to be done immediately. For now I agree
>> it is sufficient to use this new RISCV_EFFICIENT_UNALIGNED_ACCESS
>> config.

We've got a lot more JIT-ish stuff in the RISC-V port than other ports 
do; it's kind of ugly, but that's just the nature of the ISA.
It's kind of the same spot we're in with things like CMOV or the 
bitmanip extensions: there'll be some specific routines where the 
feature makes a big difference and we can provide an alternative (string 
and crypto stuff, for example), but trying to do it everywhere is just 
going to lead to chaos (and probably worse performance).

So I don't know exactly where the line is, but we're always going to 
have some amount of compile-time performance tuning -- at least until we 
just replace the whole kernel with BPF ;)

>>
>> - Charlie
>>
>> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
>>
>>
>> _______________________________________________
>> linux-riscv mailing list
>> linux-riscv@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-riscv
Eric Biggers Dec. 22, 2023, 5:04 a.m. UTC | #8
On Tue, Dec 05, 2023 at 04:39:24PM +0800, Qingfang DENG wrote:
> Hi,
> 
> You may as well remove the -mstrict-align CFLAGS in the Makefile, if
> this option is enabled:
> 
> --- a/arch/riscv/Makefile
> +++ b/arch/riscv/Makefile
> @@ -108,7 +108,9 @@ KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
>  # unaligned accesses.  While unaligned accesses are explicitly allowed in the
>  # RISC-V ISA, they're emulated by machine mode traps on all extant
>  # architectures.  It's faster to have GCC emit only aligned accesses.
> +ifneq ($(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS),y)
>  KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
> +endif
>  

Agreed.  When CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y, we shouldn't use
-mstrict-align, so that the compiler can actually use unaligned memory accesses.
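
A tiny example of where this matters: with -mstrict-align the compiler has
to assemble the field below from byte loads and shifts, while without it
the compiler is free to emit a single (possibly misaligned) load.  This is
illustrative only:

struct __attribute__((packed)) hdr {
	unsigned char tag;
	unsigned int  len;	/* 4-byte field at offset 1: a misaligned access */
};

unsigned int read_len(const struct hdr *h)
{
	return h->len;
}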

If I understand correctly, beyond the change requested above, people seem to be
happy with this patch.  Jisheng, can you resend it with the above feedback
addressed?  Thanks!

- Eric

Patch

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 7f8aa25457ba..0a76209e9b02 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -654,6 +654,18 @@  config RISCV_MISALIGNED
 	  load/store for both kernel and userspace. When disable, misaligned
 	  accesses will generate SIGBUS in userspace and panic in kernel.
 
+config RISCV_EFFICIENT_UNALIGNED_ACCESS
+	bool "Use unaligned access for some functions"
+	depends on NONPORTABLE
+	select HAVE_EFFICIENT_UNALIGNED_ACCESS
+	default n
+	help
+	  Say Y here if you want the kernel to run only on hardware platforms
+	  that support efficient unaligned access; unaligned accesses will then
+	  be used in some functions for better performance.
+
+	  If unsure what to do here, say N.
+
 endmenu # "Platform type"
 
 menu "Kernel features"