
[1/7] mips: dmi: Fix early remap on MIPS32

Message ID 20231122182419.30633-2-fancer.lancer@gmail.com (mailing list archive)
State Superseded
Series MIPS: mm: Fix some memory-related issues

Commit Message

Serge Semin Nov. 22, 2023, 6:23 p.m. UTC
dmi_early_remap() has been defined as ioremap_cache(), which on MIPS32 ends
up being converted to a VM-based mapping. DMI early remapping is performed
at the setup_arch() stage, when no VM is available yet. So calling
dmi_early_remap() on MIPS32 causes the system to crash early at boot time.
Fix that by converting dmi_early_remap() to the uncached remapping, which
is always available on both 32-bit and 64-bit MIPS systems.

Fixes: be8fa1cb444c ("MIPS: Add support for Desktop Management Interface (DMI)")
Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
---
 arch/mips/include/asm/dmi.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
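
The diff itself (quoted verbatim in the replies below) is a one-line change
of the early-remap macro:

diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
index 27415a288adf..525aad1572d1 100644
--- a/arch/mips/include/asm/dmi.h
+++ b/arch/mips/include/asm/dmi.h
@@ -5,7 +5,7 @@
 #include <linux/io.h>
 #include <linux/memblock.h>
 
-#define dmi_early_remap(x, l)		ioremap_cache(x, l)
+#define dmi_early_remap(x, l)		ioremap_uc(x, l)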

Comments

Arnd Bergmann Nov. 22, 2023, 7:35 p.m. UTC | #1
On Wed, Nov 22, 2023, at 19:23, Serge Semin wrote:
> dmi_early_remap() has been defined as ioremap_cache() which on MIPS32 gets
> to be converted to the VM-based mapping. DMI early remapping is performed
> at the setup_arch() stage with no VM available. So calling the
> dmi_early_remap() for MIPS32 causes the system to crash at the early boot
> time. Fix that by converting dmi_early_remap() to the uncached remapping
> which is always available on both 32 and 64-bits MIPS systems.
>
> Fixes: be8fa1cb444c ("MIPS: Add support for Desktop Management Interface (DMI)")
> Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
> ---
>  arch/mips/include/asm/dmi.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
> index 27415a288adf..525aad1572d1 100644
> --- a/arch/mips/include/asm/dmi.h
> +++ b/arch/mips/include/asm/dmi.h
> @@ -5,7 +5,7 @@
>  #include <linux/io.h>
>  #include <linux/memblock.h>
> 
> -#define dmi_early_remap(x, l)		ioremap_cache(x, l)
> +#define dmi_early_remap(x, l)		ioremap_uc(x, l)

Please don't use ioremap_uc() in new code: we are in the (long)
process of removing it from the kernel for everything except
x86-32, and it already returns NULL on most other architectures.

Would the normal ioremap() work for you here? It seems to
do the same thing as ioremap_uc() on mips and a couple of 
other architectures that have not yet killed it off.

   Arnd
Serge Semin Nov. 23, 2023, 9:32 a.m. UTC | #2
Hi Arnd

On Wed, Nov 22, 2023 at 08:35:01PM +0100, Arnd Bergmann wrote:
> On Wed, Nov 22, 2023, at 19:23, Serge Semin wrote:
> > dmi_early_remap() has been defined as ioremap_cache() which on MIPS32 gets
> > to be converted to the VM-based mapping. DMI early remapping is performed
> > at the setup_arch() stage with no VM available. So calling the
> > dmi_early_remap() for MIPS32 causes the system to crash at the early boot
> > time. Fix that by converting dmi_early_remap() to the uncached remapping
> > which is always available on both 32 and 64-bits MIPS systems.
> >
> > Fixes: be8fa1cb444c ("MIPS: Add support for Desktop Management Interface (DMI)")
> > Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
> > ---
> >  arch/mips/include/asm/dmi.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
> > index 27415a288adf..525aad1572d1 100644
> > --- a/arch/mips/include/asm/dmi.h
> > +++ b/arch/mips/include/asm/dmi.h
> > @@ -5,7 +5,7 @@
> >  #include <linux/io.h>
> >  #include <linux/memblock.h>
> > 
> > -#define dmi_early_remap(x, l)		ioremap_cache(x, l)
> > +#define dmi_early_remap(x, l)		ioremap_uc(x, l)
> 

> Please don't use ioremap_uc() in new code, we are in the (long)
> process of removing it from the kernel for everything except
> x86-32, and it already returns NULL on most of them.
> 
> Would the normal ioremap() work for you here? It seems to
> do the same thing as ioremap_uc() on mips and a couple of 
> other architectures that have not yet killed it off.

Ok. Thanks for the heads-up. I'll fix the patch to use ioremap()
in v2. ioremap_uc() is just a macro-alias of ioremap() on MIPS.
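
In other words, the v2 would presumably boil down to this one-liner
(a sketch of the intended change, not the actual v2 patch):

--- a/arch/mips/include/asm/dmi.h
+++ b/arch/mips/include/asm/dmi.h
@@ -5,7 +5,7 @@
 #include <linux/io.h>
 #include <linux/memblock.h>
 
-#define dmi_early_remap(x, l)		ioremap_cache(x, l)
+#define dmi_early_remap(x, l)		ioremap(x, l)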

-Serge(y)

> 
>    Arnd
Jiaxun Yang Nov. 23, 2023, 12:13 p.m. UTC | #3
On November 23, 2023, at 9:32 AM, Serge Semin wrote:
> Hi Arnd
>
> On Wed, Nov 22, 2023 at 08:35:01PM +0100, Arnd Bergmann wrote:
>> On Wed, Nov 22, 2023, at 19:23, Serge Semin wrote:
>> > dmi_early_remap() has been defined as ioremap_cache() which on MIPS32 gets
>> > to be converted to the VM-based mapping. DMI early remapping is performed
>> > at the setup_arch() stage with no VM available. So calling the
>> > dmi_early_remap() for MIPS32 causes the system to crash at the early boot
>> > time. Fix that by converting dmi_early_remap() to the uncached remapping
>> > which is always available on both 32 and 64-bits MIPS systems.
>> >
>> > Fixes: be8fa1cb444c ("MIPS: Add support for Desktop Management Interface (DMI)")
>> > Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
>> > ---
>> >  arch/mips/include/asm/dmi.h | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
>> > index 27415a288adf..525aad1572d1 100644
>> > --- a/arch/mips/include/asm/dmi.h
>> > +++ b/arch/mips/include/asm/dmi.h
>> > @@ -5,7 +5,7 @@
>> >  #include <linux/io.h>
>> >  #include <linux/memblock.h>
>> > 
>> > -#define dmi_early_remap(x, l)		ioremap_cache(x, l)
>> > +#define dmi_early_remap(x, l)		ioremap_uc(x, l)
>> 
>
>> Please don't use ioremap_uc() in new code, we are in the (long)
>> process of removing it from the kernel for everything except
>> x86-32, and it already returns NULL on most of them.
>> 
>> Would the normal ioremap() work for you here? It seems to
>> do the same thing as ioremap_uc() on mips and a couple of 
>> other architectures that have not yet killed it off.
>
> Ok. Thanks for the heads up. I'll fix the patch to be using ioremap()
> in v2. ioremap_uc() is just an macro-alias of ioremap() on MIPS.

Perhaps we need to fix ioremap_cache so it can give a KSEG1 address?
AFAIK on Loongson the DMI table is located in cached memory, so using
ioremap_uc blindly will cause inconsistency.

Thanks
- Jiaxun

>
> -Serge(y)
>
>> 
>>    Arnd
Thomas Bogendoerfer Nov. 23, 2023, 12:29 p.m. UTC | #4
On Thu, Nov 23, 2023 at 12:13:11PM +0000, Jiaxun Yang wrote:
> > Ok. Thanks for the heads up. I'll fix the patch to be using ioremap()
> > in v2. ioremap_uc() is just an macro-alias of ioremap() on MIPS.
> 
> Perhaps we need to fix ioremap_cache so it can give a KSEG1 address?

KSEG0 ?

> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
> blindly will cause inconsistency.

why ?

Thomas.
Jiaxun Yang Nov. 23, 2023, 3:07 p.m. UTC | #5
On November 23, 2023, at 12:29 PM, Thomas Bogendoerfer wrote:
> On Thu, Nov 23, 2023 at 12:13:11PM +0000, Jiaxun Yang wrote:
>> > Ok. Thanks for the heads up. I'll fix the patch to be using ioremap()
>> > in v2. ioremap_uc() is just an macro-alias of ioremap() on MIPS.
>> 
>> Perhaps we need to fix ioremap_cache so it can give a KSEG1 address?
>
> KSEG0 ?

Ah yes it's KSEG0.

>
>> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
>> blindly will cause inconsistency.
>
> why ?

Firmware sometimes does not flush those tables from cache back to memory.
For Loongson systems (as well as most MTI systems) cache is enabled by
firmware.

Thanks.
>
> Thomas.
>
> -- 
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea.                                                [ RFC1925, 2.3 ]
Thomas Bogendoerfer Nov. 23, 2023, 4:07 p.m. UTC | #6
On Thu, Nov 23, 2023 at 03:07:09PM +0000, Jiaxun Yang wrote:
> 
> 
> On November 23, 2023, at 12:29 PM, Thomas Bogendoerfer wrote:
> > On Thu, Nov 23, 2023 at 12:13:11PM +0000, Jiaxun Yang wrote:
> >> > Ok. Thanks for the heads up. I'll fix the patch to be using ioremap()
> >> > in v2. ioremap_uc() is just an macro-alias of ioremap() on MIPS.
> >> 
> >> Perhaps we need to fix ioremap_cache so it can give a KSEG1 address?
> >
> > KSEG0 ?
> 
> Ah yes it's KSEG0.

the problem with all 32bit unmapped segments is their limited size.
But there is always room to try the unmapped segment first and fall back
to a mapped one, if it doesn't work. But I doubt anybody is going to
implement that.

> >> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
> >> blindly will cause inconsistency.
> >
> > why ?
> 
> Firmware sometimes does not flush those tables from cache back to memory.
> For Loongson systems (as well as most MTI systems) cache is enabled by
> firmware.

kernel flushes all caches on startup, so there shouldn't be a problem.

Thomas.
Jiaxun Yang Nov. 23, 2023, 5:33 p.m. UTC | #7
On November 23, 2023, at 4:07 PM, Thomas Bogendoerfer wrote:
> On Thu, Nov 23, 2023 at 03:07:09PM +0000, Jiaxun Yang wrote:
>> 
[...]
>
> the problem with all 32bit unmapped segments is their limitations in
> size. But there is always room to try to use unmapped and fall back
> to mapped, if it doesn't work. But I doubt anybody is going to
> implement that.

Yep, I guess such a fallback should be implemented for ioremap_cache as well.

>
>> >> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
>> >> blindly will cause inconsistency.
>> >
>> > why ?
>> 
>> Firmware sometimes does not flush those tables from cache back to memory.
>> For Loongson systems (as well as most MTI systems) cache is enabled by
>> firmware.
>
> kernel flushes all caches on startup, so there shouldn't be a problem.

Actually dmi_setup() is called before cpu_cache_init().

Thanks
>
> Thomas.
>
> -- 
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea.                                                [ RFC1925, 2.3 ]
Serge Semin Nov. 24, 2023, 6:52 p.m. UTC | #8
On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
> 
> 
> On November 23, 2023, at 4:07 PM, Thomas Bogendoerfer wrote:
> > On Thu, Nov 23, 2023 at 03:07:09PM +0000, Jiaxun Yang wrote:
> >> 
> [...]
> >
> > the problem with all 32bit unmapped segments is their limitations in
> > size. But there is always room to try to use unmapped and fall back
> > to mapped, if it doesn't work. But I doubt anybody is going to
> > implement that.
> 
> Yep, I guess fallback should be implemented for ioremap_cache as well.
> 
> >
> >> >> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
> >> >> blindly will cause inconsistency.
> >> >
> >> > why ?
> >> 
> >> Firmware sometimes does not flush those tables from cache back to memory.
> >> For Loongson systems (as well as most MTI systems) cache is enabled by
> >> firmware.
> >
> > kernel flushes all caches on startup, so there shouldn't be a problem.
> 
> Actually dmi_setup() is called before cpu_cache_init().

To preliminarily sum up the discussion: indeed, there can be issues on the
platforms which have DMI initialized in a cached region. Here are
several solutions and the additional difficulties I think may be caused by
implementing them:

1. Use the unmapped cached region in the MIPS32 ioremap_prot()
method.
This solution is a bit clumsier than it looks at first glance.
ioremap_prot() can be used for various types of cacheability
mapping. Currently that's the default-cacheable CA preserved in the
_page_cachable_default variable and the write-combined CA saved in
boot_cpu_data.writecombine. Based on that, the unmapped cached region
would have to be used only for the IO-remaps called with the
"_page_cachable_default" mapping flags passed. The rest of the IO
range mappings, including the write-combined ones, would still be
handled by VM means. This would make ioremap_prot() a bit
less maintainable, but still wouldn't be that hard to implement (unless I
miss something):
--- a/arch/mips/mm/ioremap.c
+++ b/arch/mips/mm/ioremap.c
        /*
-        * Map uncached objects in the low 512mb of address space using KSEG1,
-        * otherwise map using page tables.
+        * Map uncached/default-cached objects in the low 512mb of address
+        * space using KSEG1/KSEG0, otherwise map using page tables.
         */
-       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
-           flags == _CACHE_UNCACHED)
-               return (void __iomem *) CKSEG1ADDR(phys_addr);
+       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
+               if (flags == _CACHE_UNCACHED)
+                       return (void __iomem *) CKSEG1ADDR(phys_addr);
+               else if (flags == _page_cachable_default)
+                       return (void __iomem *) CKSEG0ADDR(phys_addr);
+       }

Currently I can't figure out what obvious problems it may cause. But
it seems suspicious that the cacheable IO-mapping wasn't implemented
via the unmapped cacheable region in the first place. In any case this
solution looks more dangerous than solution 2, because it
affects all the MIPS32 platforms at once.

2. Convert dmi_early_remap() to ioremap_uc() (actually just ioremap(),
as noted by Arnd).
As Jiaxun correctly noted, this may cause problems on the platforms
which don't flush the caches before jumping to the kernel. Thomas said
that the kernel flushes the caches early on boot, but Jiaxun noted that
it's still done after the early DMI setup. So the issue with solution 2 is
that the setup_arch() method calls dmi_setup() before it flushes the
caches by means of the cpu_cache_init() method. I guess it can be
fixed just by moving the dmi_setup() invocation to after
cpu_cache_init() is called. This solution looks much less invasive
than solution 1.
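
Roughly speaking, that would mean a reordering along these lines in
arch/mips/kernel/setup.c (a simplified sketch with the unrelated calls
in between skipped; the exact call sequence would need double-checking):

 	arch_mem_init(cmdline_p);
-	dmi_setup();
 
 	/* ... unrelated setup calls skipped ... */
 
 	cpu_cache_init();
+	dmi_setup();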

So what do you think? Which solution do you prefer? Perhaps an
alternative?

-Serge(y)

> 
> Thanks
> >
> > Thomas.
> >
> > -- 
> > Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> > good idea.                                                [ RFC1925, 2.3 ]
> 
> -- 
> - Jiaxun
Jiaxun Yang Nov. 24, 2023, 10:03 p.m. UTC | #9
On November 24, 2023, at 6:52 PM, Serge Semin wrote:
> On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
>> 
[...]
>> Actually dmi_setup() is called before cpu_cache_init().
>
> To preliminary sum the discussion, indeed there can be issues on the
> platforms which have DMI initialized on the cached region. Here are
> several solutions and additional difficulties I think may be caused by
> implementing them:

Thanks for such a detailed conclusion!
I'd prefer to go with solution 1, with the comments below.
>
> 1. Use unmapped cached region utilization in the MIPS32 ioremap_prot()
> method.
> This solution a bit clumsy than it looks on the first glance.
> ioremap_prot() can be used for various types of the cachability
> mapping. Currently it's a default-cacheable CA preserved in the
> _page_cachable_default variable and Write-combined CA saved in
> boot_cpu_data.writecombine. Based on that we would have needed to use
> the unmapped cached region utilized for the IO-remaps called with the
> "_page_cachable_default" mapping flags passed only. The rest of the IO
> range mappings, including the write-combined ones, would have been
> handled by VM means. This would have made the ioremap_prot() a bit
> less maintainable, but still won't be that hard to implement (unless I
> miss something):
> --- a/arch/mips/mm/ioremap.c
> +++ b/arch/mips/mm/ioremap.c
>         /*
> -        * Map uncached objects in the low 512mb of address space using KSEG1,
> -        * otherwise map using page tables.
> +        * Map uncached/default-cached objects in the low 512mb of address
> +        * space using KSEG1/KSEG0, otherwise map using page tables.
>          */
> -       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
> -           flags == _CACHE_UNCACHED)
> -               return (void __iomem *) CKSEG1ADDR(phys_addr);
> +       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
> +               if (flags == _CACHE_UNCACHED)
> +                       return (void __iomem *) CKSEG1ADDR(phys_addr);
> +               else if (flags == _page_cachable_default)
> +                       return (void __iomem *) CKSEG0ADDR(phys_addr);
> +       }
>
> Currently I can't figure out what obvious problems it may cause. But
> It seems suspicious that the cacheable IO-mapping hasn't been
> implemented by the unmapped cacheable region in the first place. In
> anyway this solution looks more dangerous than solution 2. because it
> affects all the MIPS32 platforms at once.

I just made a quick grep in the tree, and it seems like we don't have many
users of ioremap_cache (as well as ioremap_uc/wc) here, so I think it is
a safe assumption.

>
> 2. Convert dmi_remap_early() to ioremap_uc() (actually just ioremap()
> as noted by Arnd).
> As Jiaxun correctly noted this may cause problems on the platforms
> which don't flush caches before jumping out to the kernel. Thomas said
> that kernel flushes the caches early on boot, but Jiaxun noted that
> it's still done after early DMI setup. So the issue with solution 2 is
> that the setup_arch() method calls dmi_setup() before it flushes the
> caches by means of the cpu_cache_init() method. I guess it can be
> fixed just by moving the dmi_setup() method invocation to be after the
> cpu_cache_init() is called. This solution looks much less invasive
> than solution 1.

I recall Tiezhu placed dmi_setup() here for a couple of reasons. The first is
that the DMI table is placed in memory that is not reserved, so it may get
clobbered after mm is up. The second is that we may have some early quirks
that depend on DMI information.

Thanks.
>
> So what do you think? What solution do you prefer? Perhaps
> alternative?
>
> -Serge(y)
>
>> 
>> Thanks
>> >
>> > Thomas.
>> >
>> > -- 
>> > Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
>> > good idea.                                                [ RFC1925, 2.3 ]
>> 
>> -- 
>> - Jiaxun
Jiaxun Yang Nov. 24, 2023, 10:34 p.m. UTC | #10
On November 24, 2023, at 6:52 PM, Serge Semin wrote:
> On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
>> 
>> 
>> On November 23, 2023, at 4:07 PM, Thomas Bogendoerfer wrote:
>> > On Thu, Nov 23, 2023 at 03:07:09PM +0000, Jiaxun Yang wrote:
>> >> 
>> [...]
>> >
>> > the problem with all 32bit unmapped segments is their limitations in
>> > size. But there is always room to try to use unmapped and fall back
>> > to mapped, if it doesn't work. But I doubt anybody is going to
>> > implement that.
>> 
>> Yep, I guess fallback should be implemented for ioremap_cache as well.
>> 
>> >
>> >> >> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
>> >> >> blindly will cause inconsistency.
>> >> >
>> >> > why ?
>> >> 
>> >> Firmware sometimes does not flush those tables from cache back to memory.
>> >> For Loongson systems (as well as most MTI systems) cache is enabled by
>> >> firmware.
>> >
>> > kernel flushes all caches on startup, so there shouldn't be a problem.
>> 
>> Actually dmi_setup() is called before cpu_cache_init().
>
> To preliminary sum the discussion, indeed there can be issues on the
> platforms which have DMI initialized on the cached region. Here are
> several solutions and additional difficulties I think may be caused by
> implementing them:
>
> 1. Use unmapped cached region utilization in the MIPS32 ioremap_prot()
> method.
> This solution a bit clumsy than it looks on the first glance.
> ioremap_prot() can be used for various types of the cachability
> mapping. Currently it's a default-cacheable CA preserved in the
> _page_cachable_default variable and Write-combined CA saved in
> boot_cpu_data.writecombine. Based on that we would have needed to use
> the unmapped cached region utilized for the IO-remaps called with the
> "_page_cachable_default" mapping flags passed only. The rest of the IO
> range mappings, including the write-combined ones, would have been
> handled by VM means. This would have made the ioremap_prot() a bit
> less maintainable, but still won't be that hard to implement (unless I
> miss something):
> --- a/arch/mips/mm/ioremap.c
> +++ b/arch/mips/mm/ioremap.c
>         /*
> -        * Map uncached objects in the low 512mb of address space using KSEG1,
> -        * otherwise map using page tables.
> +        * Map uncached/default-cached objects in the low 512mb of address
> +        * space using KSEG1/KSEG0, otherwise map using page tables.
>          */
> -       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
> -           flags == _CACHE_UNCACHED)
> -               return (void __iomem *) CKSEG1ADDR(phys_addr);
> +       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
> +               if (flags == _CACHE_UNCACHED)
> +                       return (void __iomem *) CKSEG1ADDR(phys_addr);
> +               else if (flags == _page_cachable_default)
> +                       return (void __iomem *) CKSEG0ADDR(phys_addr);
> +       }
>
A nit: _page_cachable_default is set in cpu_cache_init() as well. We'd
better move it to cpu-probe.c, or give it a reasonable default value.

Thanks
Serge Semin Nov. 27, 2023, 4:23 p.m. UTC | #11
Hi Jiaxun

On Fri, Nov 24, 2023 at 10:03:49PM +0000, Jiaxun Yang wrote:
> On November 24, 2023, at 6:52 PM, Serge Semin wrote:
> > On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
> >> 
> [...]
> >> Actually dmi_setup() is called before cpu_cache_init().
> >
> > To preliminary sum the discussion, indeed there can be issues on the
> > platforms which have DMI initialized on the cached region. Here are
> > several solutions and additional difficulties I think may be caused by
> > implementing them:
> 
> Thanks for such detailed conclusion!
> I'd prefer go solution 1, with comments below.
> >
> > 1. Use unmapped cached region utilization in the MIPS32 ioremap_prot()
> > method.
> > This solution a bit clumsy than it looks on the first glance.
> > ioremap_prot() can be used for various types of the cachability
> > mapping. Currently it's a default-cacheable CA preserved in the
> > _page_cachable_default variable and Write-combined CA saved in
> > boot_cpu_data.writecombine. Based on that we would have needed to use
> > the unmapped cached region utilized for the IO-remaps called with the
> > "_page_cachable_default" mapping flags passed only. The rest of the IO
> > range mappings, including the write-combined ones, would have been
> > handled by VM means. This would have made the ioremap_prot() a bit
> > less maintainable, but still won't be that hard to implement (unless I
> > miss something):
> > --- a/arch/mips/mm/ioremap.c
> > +++ b/arch/mips/mm/ioremap.c
> >         /*
> > -        * Map uncached objects in the low 512mb of address space using KSEG1,
> > -        * otherwise map using page tables.
> > +        * Map uncached/default-cached objects in the low 512mb of address
> > +        * space using KSEG1/KSEG0, otherwise map using page tables.
> >          */
> > -       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
> > -           flags == _CACHE_UNCACHED)
> > -               return (void __iomem *) CKSEG1ADDR(phys_addr);
> > +       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
> > +               if (flags == _CACHE_UNCACHED)
> > +                       return (void __iomem *) CKSEG1ADDR(phys_addr);
> > +               else if (flags == _page_cachable_default)
> > +                       return (void __iomem *) CKSEG0ADDR(phys_addr);
> > +       }
> >
> > Currently I can't figure out what obvious problems it may cause. But
> > It seems suspicious that the cacheable IO-mapping hasn't been
> > implemented by the unmapped cacheable region in the first place. In
> > anyway this solution looks more dangerous than solution 2. because it
> > affects all the MIPS32 platforms at once.
> 
> I just made a quick grep in tree, and it seems like we don't have much
> user of ioremap_cache (as well as ioremap_uc/wc) here so I think it is
> a safe assumption.

I wouldn't say there aren't many users. ioremap_wc() and its
devm-version are widely utilized in the GPU, network and some other
subsystems. ioremap_cache() isn't widespread indeed. In any case even a
single user must be able to safely call the method if it's
provided by the arch code, otherwise the method could be considered
just a bogus stub that exists to have the kernel successfully built. I bet
you'll agree with that. But that's not the point in this case.

A bit later you also noted:

On Fri, Nov 24, 2023 at 10:34:49PM +0000, Jiaxun Yang wrote:
> A nip, _page_cachable_default is set in cpu_cache_init() as well. We'd
> better move it to cpu-probe.c, or give it a reasonable default value.

Right, thanks. To be honest I hadn't noticed that before your
message. _page_cachable_default is indeed initialized in the
cpu_cache_init() method, several steps after it would be used in the
framework of dmi_early_remap(). On the other hand, ioremap_cache() is
defined as ioremap_prot() with the _page_cachable_default variable
passed. So my code will still work correctly unless
_page_cachable_default is pre-initialized with something other than
zero. On the other hand, we can't easily change its default value
because it will affect and likely break the r3k (CPU_R3000) and Octeon
based platforms, since it's utilized to initialize the
protection-map table. Of course we could fix the r3k_cache_init() and
octeon_cache_init() methods too so they would set the
_page_cachable_default variable back to zero, but that would also make
things around it more complicated.

Also note, moving the _page_cachable_default initialization to
earlier stages like cpu_probe() won't work any better, because the
value may get changed, for instance, in the framework of the smp_setup()
function (see cps_smp_setup()).

So after all the considerations above this solution now looks even
clumsier than before. :( Any idea how to make it better?

> 
> >
> > 2. Convert dmi_remap_early() to ioremap_uc() (actually just ioremap()
> > as noted by Arnd).
> > As Jiaxun correctly noted this may cause problems on the platforms
> > which don't flush caches before jumping out to the kernel. Thomas said
> > that kernel flushes the caches early on boot, but Jiaxun noted that
> > it's still done after early DMI setup. So the issue with solution 2 is
> > that the setup_arch() method calls dmi_setup() before it flushes the
> > caches by means of the cpu_cache_init() method. I guess it can be
> > fixed just by moving the dmi_setup() method invocation to be after the
> > cpu_cache_init() is called. This solution looks much less invasive
> > than solution 1.
> 
> I recall Tiezhu made dmi_setup() here for reasons. The first reason is that
> DMI is placed at memory space that is not reserved, so it may get clobbered
> after mm is up.

Note the memory might be clobbered even before dmi_setup(), for
instance by means of the early_memtest() method. In any case it would be
better if the system bootloader had already reserved the DMI
memory (in the DTB) or if that were done by the platform-specific
plat_mem_setup() method.

> The second is we may have some early quirks depends on DMI
> information.

Which quirks do you mean that would depend on DMI between the current
dmi_setup() call site and the cpu_cache_init() method invocation?

-Serge(y)

> 
> Thanks.
> >
> > So what do you think? What solution do you prefer? Perhaps
> > alternative?
> >
> > -Serge(y)
> >
> >> 
> >> Thanks
> >> >
> >> > Thomas.
> >> >
> >> > -- 
> >> > Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> >> > good idea.                                                [ RFC1925, 2.3 ]
> >> 
> >> -- 
> >> - Jiaxun
> 
> -- 
> - Jiaxun
Jiaxun Yang Nov. 27, 2023, 9:08 p.m. UTC | #12
On November 27, 2023, at 4:23 PM, Serge Semin wrote:
[...]
>> I just made a quick grep in tree, and it seems like we don't have much
>> user of ioremap_cache (as well as ioremap_uc/wc) here so I think it is
>> a safe assumption.
>
> I wouldn't say there aren't much users. ioremap_wc() and it's
> devm-version is widely utilized in the GPU and network and some other
> subsystems. ioremap_cache() isn't widespread indeed. In anyway even a
> single user must be supported in safely calling the method if it's
> provided by the arch-code, otherwise the method could be considered as
> just a bogus stub to have the kernel successfully built. I bet you'll
> agree with that. But that's not the point in this case,
>
> A bit later you also noted:
>
> On Fri, Nov 24, 2023 at 10:34:49PM +0000, Jiaxun Yang wrote:
>> A nip, _page_cachable_default is set in cpu_cache_init() as well. We'd
>> better move it to cpu-probe.c, or give it a reasonable default value.
>
> Right. Thanks. To be honest I haven't noticed that before your
> message. _page_cachable_default is indeed initialized in the
> cpu_cache_init() method, several steps after it would be used in the
> framework of dmi_remap_early(). On the other hand ioremap_cache() is
> defined as ioremap_prot() with the _page_cachable_default variable
> passed. So my code will still correctly work unless
> _page_cachable_default is pre-initialized with something other than
> zero. On the other hand we can't easily change its default value
> because it will affect and likely break the r3k (CPU_R3000) and Octeon
> based platforms, because it's utilized to initialize the
> protection-map table. Of course we can fix the r3k_cache_init() and
> octeon_cache_init() methods too so they would get the
> _page_cachable_default variable back to zero, but it will also make
> things around it more complicated.
>
> Also note, moving the _page_cachable_default initialization to the
> earlier stages like cpu_probe() won't work better because the field
> value may get change for instance in the framework of the smp_setup()
> function (see cps_smp_setup()).
>
> So after all the considerations above this solution now looks even
> clumsier than before.( Any idea how to make it better?

I think the best solution may be to just use CKSEG0 to set up the mapping here.

Btw I was thinking about 64-bit here: I thought that for 64-bit we would
just embed the prot into XKPHYS, however I quickly figured out that
ioremap_cache was never implemented properly on 64-bit systems,
and neither was ioremap_wc:

> u64 base = (flags == _CACHE_UNCACHED ? IO_BASE : UNCAC_BASE);

which is always an uncached mapping.

>> 
[...]
>
> Note the memory might be clobbered even before dmi_setup() for
> instance by means of the early_memtest() method. In anyway it would be
> better if the system booloader would have already reserved the DMI
> memory (in DTB) or it would have been done by the platform-specific
> plat_mem_setup() method.

Unfortunately, too many machines are shipped with such badly designed
firmware. We rely on the dmi_setup() code to scan for and preserve the DMI
table from a random location in memory.

>
>> The second is we may have some early quirks depends on DMI
>> information.
>
> Which quirks do you mean to be dependent in between the current
> dmi_setup() call place and the cpu_cache_init() method invocation?

I think we don't have any for now.
Serge Semin Nov. 28, 2023, 11:34 a.m. UTC | #13
On Mon, Nov 27, 2023 at 09:08:11PM +0000, Jiaxun Yang wrote:
> 
> 
> On November 27, 2023, at 4:23 PM, Serge Semin wrote:
> [...]
> >> I just made a quick grep in tree, and it seems like we don't have much
> >> user of ioremap_cache (as well as ioremap_uc/wc) here so I think it is
> >> a safe assumption.
> >
> > I wouldn't say there aren't much users. ioremap_wc() and it's
> > devm-version is widely utilized in the GPU and network and some other
> > subsystems. ioremap_cache() isn't widespread indeed. In anyway even a
> > single user must be supported in safely calling the method if it's
> > provided by the arch-code, otherwise the method could be considered as
> > just a bogus stub to have the kernel successfully built. I bet you'll
> > agree with that. But that's not the point in this case,
> >
> > A bit later you also noted:
> >
> > On Fri, Nov 24, 2023 at 10:34:49PM +0000, Jiaxun Yang wrote:
> >> A nip, _page_cachable_default is set in cpu_cache_init() as well. We'd
> >> better move it to cpu-probe.c, or give it a reasonable default value.
> >
> > Right. Thanks. To be honest I haven't noticed that before your
> > message. _page_cachable_default is indeed initialized in the
> > cpu_cache_init() method, several steps after it would be used in the
> > framework of dmi_remap_early(). On the other hand ioremap_cache() is
> > defined as ioremap_prot() with the _page_cachable_default variable
> > passed. So my code will still correctly work unless
> > _page_cachable_default is pre-initialized with something other than
> > zero. On the other hand we can't easily change its default value
> > because it will affect and likely break the r3k (CPU_R3000) and Octeon
> > based platforms, because it's utilized to initialize the
> > protection-map table. Of course we can fix the r3k_cache_init() and
> > octeon_cache_init() methods too so they would get the
> > _page_cachable_default variable back to zero, but it will also make
> > things around it more complicated.
> >
> > Also note, moving the _page_cachable_default initialization to the
> > earlier stages like cpu_probe() won't work better because the field
> > value may get change for instance in the framework of the smp_setup()
> > function (see cps_smp_setup()).
> >
> > So after all the considerations above this solution now looks even
> > clumsier than before.( Any idea how to make it better?
> 

> I think the best solution maybe just use CKSEG0 to setup map here.
> 
> Btw I was thinking about 64 bit here, I thought for 64bit we would
> just embedded prot into XKPHYS, however I quickly figure out
> ioremap_cache was never implemented properly on 64-bit system,
> so does ioremap_wc.
> 
> > u64 base = (flags == _CACHE_UNCACHED ? IO_BASE : UNCAC_BASE);
> 
> Which is always uncached mapping.

Indeed. Thanks for pointing that out. Over the last few days I looked at
that line several times and for some reason UNCAC_BASE read as
CAC_BASE to me. :) Seeing that both IO_BASE and UNCAC_BASE are defined
as the uncached region anyway, it should be safe for any
currently supported MIPS64 platform (including the Loongson ones) to use
ioremap() in place of dmi_early_remap(). So basically my current patch in the
subject won't change the method semantics. Let's not try to fix a
problem which doesn't exist then, and keep the patch as is, especially
seeing that the alternatives might still cause some trouble. Will you
be ok with that?

-Serge(y)

> 
> >> 
> [...]
> >
> > Note the memory might be clobbered even before dmi_setup() for
> > instance by means of the early_memtest() method. In anyway it would be
> > better if the system booloader would have already reserved the DMI
> > memory (in DTB) or it would have been done by the platform-specific
> > plat_mem_setup() method.
> 
> Unfortunately, too many machines are shipped with those badly designed
> firmware. We rely on dmi_setup code to scan and preserve dmi table from
> random location in memory.
> 
> >
> >> The second is we may have some early quirks depends on DMI
> >> information.
> >
> > Which quirks do you mean to be dependent in between the current
> > dmi_setup() call place and the cpu_cache_init() method invocation?
> 
> I think we don't have any for now.
> 
> -- 
> - Jiaxun
Arnd Bergmann Nov. 28, 2023, 12:41 p.m. UTC | #14
On Mon, Nov 27, 2023, at 17:23, Serge Semin wrote:
> On Fri, Nov 24, 2023 at 10:03:49PM +0000, Jiaxun Yang wrote:
>> On November 24, 2023, at 6:52 PM, Serge Semin wrote:
>> > On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
>> >> 
>> [...]
>> >> Actually dmi_setup() is called before cpu_cache_init().
>> >
>> > To preliminary sum the discussion, indeed there can be issues on the
>> > platforms which have DMI initialized on the cached region. Here are
>> > several solutions and additional difficulties I think may be caused by
>> > implementing them:
>> 
>> Thanks for such detailed conclusion!
>> I'd prefer go solution 1, with comments below.
>> >
>> > 1. Use unmapped cached region utilization in the MIPS32 ioremap_prot()
>> > method.
>> > This solution a bit clumsy than it looks on the first glance.
>> > ioremap_prot() can be used for various types of the cachability
>> > mapping. Currently it's a default-cacheable CA preserved in the
>> > _page_cachable_default variable and Write-combined CA saved in
>> > boot_cpu_data.writecombine. Based on that we would have needed to use
>> > the unmapped cached region utilized for the IO-remaps called with the
>> > "_page_cachable_default" mapping flags passed only. The rest of the IO
>> > range mappings, including the write-combined ones, would have been
>> > handled by VM means. This would have made the ioremap_prot() a bit
>> > less maintainable, but still won't be that hard to implement (unless I
>> > miss something):
>> > --- a/arch/mips/mm/ioremap.c
>> > +++ b/arch/mips/mm/ioremap.c
>> >         /*
>> > -        * Map uncached objects in the low 512mb of address space using KSEG1,
>> > -        * otherwise map using page tables.
>> > +        * Map uncached/default-cached objects in the low 512mb of address
>> > +        * space using KSEG1/KSEG0, otherwise map using page tables.
>> >          */
>> > -       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
>> > -           flags == _CACHE_UNCACHED)
>> > -               return (void __iomem *) CKSEG1ADDR(phys_addr);
>> > +       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
>> > +               if (flags == _CACHE_UNCACHED)
>> > +                       return (void __iomem *) CKSEG1ADDR(phys_addr);
>> > +               else if (flags == _page_cachable_default)
>> > +                       return (void __iomem *) CKSEG0ADDR(phys_addr);
>> > +       }
>> >
>> > Currently I can't figure out what obvious problems it may cause. But
>> > It seems suspicious that the cacheable IO-mapping hasn't been
>> > implemented by the unmapped cacheable region in the first place. In
>> > anyway this solution looks more dangerous than solution 2. because it
>> > affects all the MIPS32 platforms at once.
>> 
>> I just made a quick grep in tree, and it seems like we don't have much
>> user of ioremap_cache (as well as ioremap_uc/wc) here so I think it is
>> a safe assumption.
>
> I wouldn't say there aren't much users. ioremap_wc() and it's
> devm-version is widely utilized in the GPU and network and some other
> subsystems. ioremap_cache() isn't widespread indeed. In anyway even a
> single user must be supported in safely calling the method if it's
> provided by the arch-code, otherwise the method could be considered as
> just a bogus stub to have the kernel successfully built. I bet you'll
> agree with that. But that's not the point in this case.

ioremap_wc() is useful for mapping PCI attached memory such as frame
buffers, but ioremap_cache() is generally underspecified because the
resulting pointer is neither safe to dereference nor to pass into
readl()/writel()/memcpy_fromio() on all architectures.

There was an effort to convert the remaining ioremap_cache() calls
into memremap() a few years ago, not sure if that's still being worked
on but it would be the right thing to do.

     Arnd
Serge Semin Nov. 28, 2023, 1:52 p.m. UTC | #15
Hi Arnd

On Tue, Nov 28, 2023 at 01:41:51PM +0100, Arnd Bergmann wrote:
> On Mon, Nov 27, 2023, at 17:23, Serge Semin wrote:
> > On Fri, Nov 24, 2023 at 10:03:49PM +0000, Jiaxun Yang wrote:
> >> On November 24, 2023, at 6:52 PM, Serge Semin wrote:
> >> > On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
> >> >> 
> >> [...]
> >> >> Actually dmi_setup() is called before cpu_cache_init().
> >> >
> >> > To preliminary sum the discussion, indeed there can be issues on the
> >> > platforms which have DMI initialized on the cached region. Here are
> >> > several solutions and additional difficulties I think may be caused by
> >> > implementing them:
> >> 
> >> Thanks for such detailed conclusion!
> >> I'd prefer go solution 1, with comments below.
> >> >
> >> > 1. Use unmapped cached region utilization in the MIPS32 ioremap_prot()
> >> > method.
> >> > This solution a bit clumsy than it looks on the first glance.
> >> > ioremap_prot() can be used for various types of the cachability
> >> > mapping. Currently it's a default-cacheable CA preserved in the
> >> > _page_cachable_default variable and Write-combined CA saved in
> >> > boot_cpu_data.writecombine. Based on that we would have needed to use
> >> > the unmapped cached region utilized for the IO-remaps called with the
> >> > "_page_cachable_default" mapping flags passed only. The rest of the IO
> >> > range mappings, including the write-combined ones, would have been
> >> > handled by VM means. This would have made the ioremap_prot() a bit
> >> > less maintainable, but still won't be that hard to implement (unless I
> >> > miss something):
> >> > --- a/arch/mips/mm/ioremap.c
> >> > +++ b/arch/mips/mm/ioremap.c
> >> >         /*
> >> > -        * Map uncached objects in the low 512mb of address space using KSEG1,
> >> > -        * otherwise map using page tables.
> >> > +        * Map uncached/default-cached objects in the low 512mb of address
> >> > +        * space using KSEG1/KSEG0, otherwise map using page tables.
> >> >          */
> >> > -       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
> >> > -           flags == _CACHE_UNCACHED)
> >> > -               return (void __iomem *) CKSEG1ADDR(phys_addr);
> >> > +       if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
> >> > +               if (flags == _CACHE_UNCACHED)
> >> > +                       return (void __iomem *) CKSEG1ADDR(phys_addr);
> >> > +               else if (flags == _page_cachable_default)
> >> > +                       return (void __iomem *) CKSEG0ADDR(phys_addr);
> >> > +       }
> >> >
> >> > Currently I can't figure out what obvious problems it may cause. But
> >> > It seems suspicious that the cacheable IO-mapping hasn't been
> >> > implemented by the unmapped cacheable region in the first place. In
> >> > anyway this solution looks more dangerous than solution 2. because it
> >> > affects all the MIPS32 platforms at once.
> >> 
> >> I just made a quick grep in tree, and it seems like we don't have much
> >> user of ioremap_cache (as well as ioremap_uc/wc) here so I think it is
> >> a safe assumption.
> >
> > I wouldn't say there aren't much users. ioremap_wc() and it's
> > devm-version is widely utilized in the GPU and network and some other
> > subsystems. ioremap_cache() isn't widespread indeed. In anyway even a
> > single user must be supported in safely calling the method if it's
> > provided by the arch-code, otherwise the method could be considered as
> > just a bogus stub to have the kernel successfully built. I bet you'll
> > agree with that. But that's not the point in this case.
> 

> ioremap_wc() is useful for mapping PCI attached memory such as frame
> buffers, 

Thanks for the clarification. That's actually the reason why I originally
added ioremap_wc() support to the MIPS32 arch. In one of the
projects we had SM750/SM768-based graphics cards attached to a
MIPS32-based SoC. Using ioremap_wc() for the framebuffer significantly
improved the graphics subsystem performance indeed. It was mostly
required for the SM750 chips though, which provide only a narrow and slow
PCIe Gen.1 x1 interface.

> but ioremap_cache() is generally underspecified because the
> resulting pointer is neither safe to dereference nor to pass into
> readl()/writel()/memcpy_fromio() on all architectures.

I don't know about ARM64 (which for instance utilizes it to access
the DMI region), but at least in the case of MIPS32 (a fortiori MIPS64,
seeing that the ioremap_cache() method actually returns a pointer to the
uncached region) I don't see a reason why it wouldn't be safe in both
cases you described. All IO and memory regions are accessed by the
generic load and store instructions. The only difference is that the
MMIO-space accessors normally imply additional barriers, which just
slow down the execution but shouldn't cause any other problems. Could
you clarify why you think otherwise?

> 
> There was an effort to convert the remaining ioremap_cache() calls
> into memremap() a few years ago, not sure if that's still being worked
> on but it would be the right thing to do.

I see. Thanks for pointing that out. I guess it could be done
for MIPS too (at least on our MIPS32 platform DMI is just a memory
region pre-initialized by the bootloader), but the conversion would
require much effort. Alas, currently I can't afford to get it
implemented in the framework of this patchset. (I saved your note in
my MIPS TODO list though. Let's hope I'll eventually be able to get
back to this topic.)
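
For the record, such a conversion would presumably boil down to something
like the following in the MIPS asm/dmi.h (an untested sketch mirroring what
some other architectures already do, not an actual patch):

#define dmi_early_remap(x, l)	memremap(x, l, MEMREMAP_WB)
#define dmi_early_unmap(x, l)	memunmap(x)
#define dmi_remap(x, l)		memremap(x, l, MEMREMAP_WB)
#define dmi_unmap(x)		memunmap(x)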

-Serge(y)

> 
>      Arnd
Jiaxun Yang Nov. 28, 2023, 3:46 p.m. UTC | #16
On November 28, 2023, at 11:34 AM, Serge Semin wrote:
> On Mon, Nov 27, 2023 at 09:08:11PM +0000, Jiaxun Yang wrote:
[...]
>
> Indeed. Thanks for pointing that out. In the last days several times I
> was looking at that line and for some reason UNCAC_BASE seemed as
> CAC_BASE to me.) Based on what both IO_BASE and UNCAC_BASE are defined
> as of the uncached region anyway, then it should be safe for any
> currently supported MIPS64 (including the Loongson's) to use ioremap()
> in place of dmi_early_remap(). So basically my current patch in the
> subject won't change the method semantics. Let's not to try to fix a
> problem which doesn't exist then, and keep the patch as is especially
> seeing that the alternatives might still cause some troubles. Will you
> be ok with that?

I'd say the safest option is to use CKSEG0 or TO_CAC here, but I'm fine
with ioremap as long as the semantics remain uncached on Loongson.

Thanks.
>
> -Serge(y)
>
>> 
[...]
Arnd Bergmann Nov. 28, 2023, 9:59 p.m. UTC | #17
On Tue, Nov 28, 2023, at 14:52, Serge Semin wrote:
> On Tue, Nov 28, 2023 at 01:41:51PM +0100, Arnd Bergmann wrote:
>> On Mon, Nov 27, 2023, at 17:23, Serge Semin wrote:
>> > On Fri, Nov 24, 2023 at 10:03:49PM +0000, Jiaxun Yang wrote:
>> but ioremap_cache() is generally underspecified because the
>> resulting pointer is neither safe to dereference nor to pass into
>> readl()/writel()/memcpy_fromio() on all architectures.
>
> I don't know about ARM64 (which for instance has it utilized to access
> the DMI region), but at least in case of MIPS32 (a fortiori MIPS64
> seeing the ioremap_cache() method actually returns a pointer to the
> uncached region) I don't see a reason why it wouldn't be safe in both
> cases described by you. All IO and memory regions are accessed by the
> generic load and store instructions. The only difference is that the
> MMIO-space accessors normally implies additional barriers, which just
> slow down the execution, but shouldn't cause any other problem. Could
> you clarify why do you think otherwise?

On arch/powerpc, CONFIG_PPC_INDIRECT_MMIO makes all ioremap()
type functions return a token that can be passed into the readl/writel
family but that is not a pointer you can dereference.

On s390, the mechanism is different, but similarly __iomem
tokens are not pointers at all.

>> There was an effort to convert the remaining ioremap_cache() calls
>> into memremap() a few years ago, not sure if that's still being worked
>> on but it would be the right thing to do.
>
> I see. Thanks for the pointing out to that. I guess it could be done
> for MIPS too (at least on our MIPS32 platform DMI is just a memory
> region pre-initialized by the bootloader), but the conversion would
> require much efforts. Alas currently I can't afford to get it
> implemented in the framework of this patchset. (I saved your note in
> my MIPS TODO list though. Let's hope eventually I'll be able to get
> back to this topic.)

I just noticed that the only architectures that actually provide
ioremap_cache() are x86, arm, arm64, mips, loongarch, powerpc, sh
and xtensa. The ones that have ACPI support still definitely
need it, most of the other ones can probably be fixed without
too much trouble.

     Arnd
Serge Semin Nov. 30, 2023, 7:16 p.m. UTC | #18
On Tue, Nov 28, 2023 at 03:46:37PM +0000, Jiaxun Yang wrote:
> 
> 
> On November 28, 2023, at 11:34 AM, Serge Semin wrote:
> > On Mon, Nov 27, 2023 at 09:08:11PM +0000, Jiaxun Yang wrote:
> [...]
> >
> > Indeed. Thanks for pointing that out. In the last days several times I
> > was looking at that line and for some reason UNCAC_BASE seemed as
> > CAC_BASE to me.) Based on what both IO_BASE and UNCAC_BASE are defined
> > as of the uncached region anyway, then it should be safe for any
> > currently supported MIPS64 (including the Loongson's) to use ioremap()
> > in place of dmi_early_remap(). So basically my current patch in the
> > subject won't change the method semantics. Let's not to try to fix a
> > problem which doesn't exist then, and keep the patch as is especially
> > seeing that the alternatives might still cause some troubles. Will you
> > be ok with that?
> 

> I'd say the safest option is to use CKSEG0 or TO_CAC here, 

I would have agreed with you if MIPS didn't have that special
_page_cachable_default variable, which is undefined for some platforms
and which might be re-defined during the boot-up process, and if
MIPS64 didn't have ioremap_prot() always mapping to the uncached
region. But IMO updating ioremap_prot() currently seems more risky
than just converting dmi_early_remap() to the uncached version,
especially seeing that it won't change anything. MIPS64 always has IO
remapped to the uncached region. MIPS32 won't be able to have a cached
mapping until the VM is available and paging and slabs are initialized.
So at the early MIPS32 bootup stages ioremap_cache() wouldn't have
worked anyway.

> but I'm fine
> with ioremap as long as the semantic remains uncached on Loongson.

Ok. Thanks.

-Serge(y)

> 
> Thanks.
> >
> > -Serge(y)
> >
> >> 
> [...]
> -- 
> - Jiaxun
Serge Semin Nov. 30, 2023, 7:26 p.m. UTC | #19
On Tue, Nov 28, 2023 at 10:59:10PM +0100, Arnd Bergmann wrote:
> On Tue, Nov 28, 2023, at 14:52, Serge Semin wrote:
> > On Tue, Nov 28, 2023 at 01:41:51PM +0100, Arnd Bergmann wrote:
> >> On Mon, Nov 27, 2023, at 17:23, Serge Semin wrote:
> >> > On Fri, Nov 24, 2023 at 10:03:49PM +0000, Jiaxun Yang wrote:
> >> but ioremap_cache() is generally underspecified because the
> >> resulting pointer is neither safe to dereference nor to pass into
> >> readl()/writel()/memcpy_fromio() on all architectures.
> >
> > I don't know about ARM64 (which for instance has it utilized to access
> > the DMI region), but at least in case of MIPS32 (a fortiori MIPS64
> > seeing the ioremap_cache() method actually returns a pointer to the
> > uncached region) I don't see a reason why it wouldn't be safe in both
> > cases described by you. All IO and memory regions are accessed by the
> > generic load and store instructions. The only difference is that the
> > MMIO-space accessors normally implies additional barriers, which just
> > slow down the execution, but shouldn't cause any other problem. Could
> > you clarify why do you think otherwise?
> 
> On arch/powerpc, CONFIG_PPC_INDIRECT_MMIO makes all ioremap()
> type functions return a token that can be passed into the readl/writel
> family but that is not a pointer you can dereference.
> 
> On s390, the mechanism is different, but similarly __iomem
> tokens are not pointers at all.

Ah, you meant that it was not generically safe. Then you were correct
for sure. I was talking about the MIPS arch, which doesn't
differentiate normal and IO memory pointers: all of them are accessed
by the same instructions. So ioremap_prot() returns just a normal
pointer there, which can be safely dereferenced.

> 
> >> There was an effort to convert the remaining ioremap_cache() calls
> >> into memremap() a few years ago, not sure if that's still being worked
> >> on but it would be the right thing to do.
> >
> > I see. Thanks for the pointing out to that. I guess it could be done
> > for MIPS too (at least on our MIPS32 platform DMI is just a memory
> > region pre-initialized by the bootloader), but the conversion would
> > require much efforts. Alas currently I can't afford to get it
> > implemented in the framework of this patchset. (I saved your note in
> > my MIPS TODO list though. Let's hope eventually I'll be able to get
> > back to this topic.)
> 

> I just noticed that the only architectures that actually provide
> ioremap_cache() are x86, arm, arm64, mips, loongarch, powerpc, sh
> and xtensa. The ones that have ACPI support still definitely
> need it, most of the other ones can probably be fixed without
> too much trouble.

Ok. Thanks. I'll have a look at that in my free time.

-Serge(y)

> 
>      Arnd
Jiaxun Yang Dec. 1, 2023, 12:13 a.m. UTC | #20
On November 30, 2023, at 7:16 PM, Serge Semin wrote:
> On Tue, Nov 28, 2023 at 03:46:37PM +0000, Jiaxun Yang wrote:
[...]
>
>> I'd say the safest option is to use CKSEG0 or TO_CAC here, 
>
> I would have agreed with you if MIPS didn't have that special
> _page_cachable_default variable which is undefined for some platforms
> and which might be re-defined during the boot-up process, and if
> MIPS64 didn't have ioremap_prot() always mapping to the uncached
> region.  But IMO updating ioremap_prot() currently seems more risky
> than just converting dmi_early_remap() to the uncached version
> especially seeing it won't change anything. MIPS64 always have IO
> remapped to the uncached region. MIPS32 won't be able to have cached
> mapping until VM is available, and paging and slabs are initialized.
> So on the early MIPS32 bootup stages ioremap_cache() wouldn't have
> worked anyway.

I really didn't get that; using CKSEG0 on a 32-bit system and TO_CAC
on a 64-bit system won't hurt.

Something like:
#ifdef CONFIG_64BIT
#define dmi_remap(x, l)		(void *)TO_CAC(x)
#else
#define dmi_remap(x, l)		(void *)CKSEG0ADDR(x)
#endif

That can help us avoid all the hassle, since it always ensures we are
using the same CCA to access the DMI tables. We can always trust the Config.K0
left by the firmware in this case.

You may add some sanity check on 32-bit to avoid generating an invalid
pointer. (And perhaps implement it as ioremap_early...)

Thanks
Serge Semin Dec. 1, 2023, 2:54 p.m. UTC | #21
On Fri, Dec 01, 2023 at 12:13:22AM +0000, Jiaxun Yang wrote:
> 
> 
> On November 30, 2023, at 7:16 PM, Serge Semin wrote:
> > On Tue, Nov 28, 2023 at 03:46:37PM +0000, Jiaxun Yang wrote:
> [...]
> >
> >> I'd say the safest option is to use CKSEG0 or TO_CAC here, 
> >
> > I would have agreed with you if MIPS didn't have that special
> > _page_cachable_default variable which is undefined for some platforms
> > and which might be re-defined during the boot-up process, and if
> > MIPS64 didn't have ioremap_prot() always mapping to the uncached
> > region.  But IMO updating ioremap_prot() currently seems more risky
> > than just converting dmi_early_remap() to the uncached version
> > especially seeing it won't change anything. MIPS64 always have IO
> > remapped to the uncached region. MIPS32 won't be able to have cached
> > mapping until VM is available, and paging and slabs are initialized.
> > So on the early MIPS32 bootup stages ioremap_cache() wouldn't have
> > worked anyway.
> 

> I really didn't get that, using CKSEG0 on 32bit system and TO_CAC
> on 64bit system won't hurt.
> 
> Something like:
> #ifdef CONFIG_64BIT
> #define dmi_remap(x, l)		(void *)TO_CAC(x)
> #else
> #define dmi_remap(x, l)		(void *)CKSEG0(x)
> #endif
> 
> Can help us avoid all the hassle. Since it always ensures we are
> using same CCA to access DMI tables. We can always trust Config.K0
> left by firmware in this case.

Please note my only concern is about dmi_early_remap(), not
dmi_remap(). The latter can be safely left backed by the
ioremap_cache() method, because at the stage it's utilized the MIPS32
version of ioremap_prot() will be able to create any mapping it's
requested to. The dmi_early_remap() function is called very early, with
no paging, VM or even cache stuff initialized. So currently, AFAICS,
it just doesn't work on _all_ _MIPS32_ platforms, because
ioremap_prot() relies on the VM and slab being available to create any
cacheable mapping, and they aren't available at the moment of the
dmi_setup() function invocation. Seeing that ioremap_cache() is just a
stub on MIPS64 which always performs the uncached mapping, it will be
completely safe to just convert dmi_early_remap() to ioremap() with
no risk of breaking anything. The dmi_early_remap() semantics won't
actually change: it will work as before on MIPS64 and will be fixed
on MIPS32. This (AFAICS) is a completely safe fix of the problem, with
just a few affected platforms around.

Getting back to what you suggest. You want to change the
ioremap_prot() semantics so that it would return a pointer to the
cached unmapped region for the ioremap_cache() case. First of all,
ioremap_cache() doesn't define what type of cached mapping it needs,
but merely relies on the _page_cachable_default variable value. That
variable is uninitialized at the early stages and is then initialized
only for the r4k platforms (which also makes me think that
ioremap_cache() doesn't properly work for the r3k and Octeon
platforms). Thus we would need to have it initialized with some value
until cpu_cache_init() is called, and to have the r3k and Octeon cache
init functions fixed to get it back to the uninitialized zero value.
Moreover, all the _CACHE_* field values are already occupied. What
default value should be used then for _page_cachable_default? You say
to read Config.K0 earlier, but Config.K0 may be changed later in the
framework of the cps_smp_setup() method, and actually in
cpu_cache_init() for r4k if the 'cca' kernel parameter is specified.
So would _page_cachable_default need to be re-initialized then?..
There might be some other hidden pitfalls in the fix you suggest. But
all of that already makes your solution much riskier than the one
described above.
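For context, the ioremap_cache() backend in question is wired up in
arch/mips/include/asm/io.h roughly like this (paraphrased from memory,
so check the actual tree for the exact form):

#define ioremap(offset, size)					\
	ioremap_prot((offset), (size), _CACHE_UNCACHED)

#define ioremap_cache(offset, size)				\
	ioremap_prot((offset), (size), _page_cachable_default)

That is, the cacheability of an ioremap_cache() mapping is entirely
defined by whatever _page_cachable_default happens to hold at the
moment of the call.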

Howbeit, if you still think that none of the concerns listed above is
worth worrying about that much, then please note that your solution is
mainly targeted at fixing ioremap_cache(), while this patch is about
the DMI region mapping. So if ioremap_cache() needs to be fixed in the
way you suggest, it's better done as a separate patch. But considering
the possible problems it may cause, I wouldn't risk having it
backported to the stable kernels.

-Serge(y)

> 
> You may add some sanity check on 32 bit to avoid generating invalid
> pointer. (And perhaps implement it as ioremap_early.....) 
> 
> Thanks
> -- 
> - Jiaxun
Jiaxun Yang Dec. 1, 2023, 3:10 p.m. UTC | #22
On December 1, 2023, at 2:54 PM, Serge Semin wrote:
> On Fri, Dec 01, 2023 at 12:13:22AM +0000, Jiaxun Yang wrote:
>> 
>> 
>> On November 30, 2023, at 7:16 PM, Serge Semin wrote:
>> > On Tue, Nov 28, 2023 at 03:46:37PM +0000, Jiaxun Yang wrote:
>> [...]
>> >
>> >> I'd say the safest option is to use CKSEG0 or TO_CAC here, 
>> >
>> > I would have agreed with you if MIPS didn't have that special
>> > _page_cachable_default variable which is undefined for some platforms
>> > and which might be re-defined during the boot-up process, and if
>> > MIPS64 didn't have ioremap_prot() always mapping to the uncached
>> > region.  But IMO updating ioremap_prot() currently seems more risky
>> > than just converting dmi_early_remap() to the uncached version
>> > especially seeing it won't change anything. MIPS64 always have IO
>> > remapped to the uncached region. MIPS32 won't be able to have cached
>> > mapping until VM is available, and paging and slabs are initialized.
>> > So on the early MIPS32 bootup stages ioremap_cache() wouldn't have
>> > worked anyway.
>> 
>
>> I really didn't get that, using CKSEG0 on 32bit system and TO_CAC
>> on 64bit system won't hurt.
>> 
>> Something like:
>> #ifdef CONFIG_64BIT
>> #define dmi_remap(x, l)		(void *)TO_CAC(x)
>> #else
>> #define dmi_remap(x, l)		(void *)CKSEG0(x)
>> #endif
>> 
>> Can help us avoid all the hassle. Since it always ensures we are
>> using same CCA to access DMI tables. We can always trust Config.K0
>> left by firmware in this case.
>
> Please note my only concern is about dmi_early_remap(), not
> dmi_remap(). The later one can be safely left backended by the
> ioremap_cache() method because at the stage it's utilized MIPS32
> version of ioremap_prot() will be able to create any mapping it's
> requested to. The dmi_early_remap() function is called very early with
> no paging or VM or even cache stuff initialized. So currently AFAICS
> it just doesn't work on _all_ _MIPS32_ platform, because
> ioremap_prot() relies on VM and slab being available to have any
> cacheable mapping, which aren't at the moment of the dmi_setup()
> function invocation. Seeing the ioremap_cache() is just a stub on
> MIPS64 which always performs the uncached mapping, it will be
> completely safe to just convert dmi_early_remap() to ioremap() with
> no risk to beak anything. dmi_early_remap() semantics won't be
> actually changed, it will work as before on MIPS64 and will be fixed
> on MIPS32. This (AFAICS) is a completely safe fix of the problem with
> just a few affected platforms around.
>

The only platform that enables DMI in the upstream kernel is
Loongson64, and I'm perfectly sure that the mapping for its DMI tables
*should* be cached. It is an accident that ioremap_cache is misused
here, which is why I'm proposing to replace it with CKSEG0/TO_CAC.
Also, as per the MIPS UHI spec, all the data passed from the
bootloader to the kernel should lie in KSEG0; please let me know if
your platform is an exception here.

Using ioremap_cache at dmi_early_remap does not sound safe to me
either. What if the DMI code tried to remap something beyond the KSEG
range at this point?

The safest option here is just bypassing the ioremap framework, which
gives you no advantage here, only burden.

I'll propose a patch later.
Serge Semin Dec. 1, 2023, 6:26 p.m. UTC | #23
On Fri, Dec 01, 2023 at 03:10:13PM +0000, Jiaxun Yang wrote:
> 
> 
> On December 1, 2023, at 2:54 PM, Serge Semin wrote:
> > On Fri, Dec 01, 2023 at 12:13:22AM +0000, Jiaxun Yang wrote:
> >> 
> >> 
> >> On November 30, 2023, at 7:16 PM, Serge Semin wrote:
> >> > On Tue, Nov 28, 2023 at 03:46:37PM +0000, Jiaxun Yang wrote:
> >> [...]
> >> >
> >> >> I'd say the safest option is to use CKSEG0 or TO_CAC here, 
> >> >
> >> > I would have agreed with you if MIPS didn't have that special
> >> > _page_cachable_default variable which is undefined for some platforms
> >> > and which might be re-defined during the boot-up process, and if
> >> > MIPS64 didn't have ioremap_prot() always mapping to the uncached
> >> > region.  But IMO updating ioremap_prot() currently seems more risky
> >> > than just converting dmi_early_remap() to the uncached version
> >> > especially seeing it won't change anything. MIPS64 always have IO
> >> > remapped to the uncached region. MIPS32 won't be able to have cached
> >> > mapping until VM is available, and paging and slabs are initialized.
> >> > So on the early MIPS32 bootup stages ioremap_cache() wouldn't have
> >> > worked anyway.
> >> 
> >
> >> I really didn't get that, using CKSEG0 on 32bit system and TO_CAC
> >> on 64bit system won't hurt.
> >> 
> >> Something like:
> >> #ifdef CONFIG_64BIT
> >> #define dmi_remap(x, l)		(void *)TO_CAC(x)
> >> #else
> >> #define dmi_remap(x, l)		(void *)CKSEG0(x)
> >> #endif
> >> 
> >> Can help us avoid all the hassle. Since it always ensures we are
> >> using same CCA to access DMI tables. We can always trust Config.K0
> >> left by firmware in this case.
> >
> > Please note my only concern is about dmi_early_remap(), not
> > dmi_remap(). The later one can be safely left backended by the
> > ioremap_cache() method because at the stage it's utilized MIPS32
> > version of ioremap_prot() will be able to create any mapping it's
> > requested to. The dmi_early_remap() function is called very early with
> > no paging or VM or even cache stuff initialized. So currently AFAICS
> > it just doesn't work on _all_ _MIPS32_ platform, because
> > ioremap_prot() relies on VM and slab being available to have any
> > cacheable mapping, which aren't at the moment of the dmi_setup()
> > function invocation. Seeing the ioremap_cache() is just a stub on
> > MIPS64 which always performs the uncached mapping, it will be
> > completely safe to just convert dmi_early_remap() to ioremap() with
> > no risk to beak anything. dmi_early_remap() semantics won't be
> > actually changed, it will work as before on MIPS64 and will be fixed
> > on MIPS32. This (AFAICS) is a completely safe fix of the problem with
> > just a few affected platforms around.
> >
> 

> The only platform enabled DMI in upstream kernel is Loongson64, which
> I'm perfectly sure that the mapping for DMI tables *should* be Cached.

Then it looks like it must have been broken in the first place.

> It is an accident that ioremap_cache is misused here, so I'm proposing
> to replace it with CKSEG0/TO_CAC. Also as per MIPS UHI spec, all the
> data passed from bootloader to firmware should lay in KSEG0,

I failed to find the MIPS UHI spec. The only link Google freely
provides is
http://prplfoundation.org/wiki/MIPS_documentation
but it's broken. Could you please share the doc somehow?
Anyway, AFAICS from the MIPS arch code, it only concerns the dtb
pointer passed to the kernel. Does it really mandate all the data
being in KSEG0?

> please
> let me know if your platform is an exception here.

No, it's not. U-Boot is executed in KSEG0 anyway.

> 
> Using ioremap_cache at dmi_early_remap does not sound safe to me as well.
> What if DMI code tried to remap something beyond KSEG range at this place?

Right. I've found out that, for safety's sake, the generic version of
ioremap_prot() has recently been updated not to do any mapping if the
slab hasn't been initialized:
Link: https://lore.kernel.org/lkml/20230515090848.833045-7-bhe@redhat.com/
I'll add a similar fix to the MIPS version of the ioremap_prot()
method to make sure no early cached remapping is attempted.
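Roughly, the kind of guard meant here looks as follows (just a sketch
under the assumption that the actual MIPS fix will be shaped around
ioremap_prot(); the helper name is made up for illustration):

#include <linux/io.h>
#include <linux/slab.h>

static void __iomem *cached_remap_or_bail(phys_addr_t pa, size_t size)
{
	/*
	 * A cacheable mapping needs a VM area, so it can't be set up
	 * before the slab allocator is ready.
	 */
	if (!slab_is_available())
		return NULL;

	return ioremap_cache(pa, size);
}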

> 
> The safest option here is just bypassing ioremap framework, which does
> not give you any advantage but only burden.
> 
> I'll propose a patch later.

Ok, I see. I'll resubmit my series today as is then. Should you need
the problem fixed differently, please either rebase your patch on top
of it, or add an explicit comment that you'll have a better fix, so
Thomas can consider postponing the merge of this patch until your fix
is ready.

-Serge(y)

> -- 
> - Jiaxun
diff mbox series

Patch

diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
index 27415a288adf..525aad1572d1 100644
--- a/arch/mips/include/asm/dmi.h
+++ b/arch/mips/include/asm/dmi.h
@@ -5,7 +5,7 @@ 
 #include <linux/io.h>
 #include <linux/memblock.h>
 
-#define dmi_early_remap(x, l)		ioremap_cache(x, l)
+#define dmi_early_remap(x, l)		ioremap_uc(x, l)
 #define dmi_early_unmap(x, l)		iounmap(x)
 #define dmi_remap(x, l)			ioremap_cache(x, l)
 #define dmi_unmap(x)			iounmap(x)