
[v2,04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's

Message ID 20190528161026.13193-5-steve.capper@arm.com (mailing list archive)
State New, archived
Series 52-bit kernel + user VAs

Commit Message

Steve Capper May 28, 2019, 4:10 p.m. UTC
In order to prepare for a variable VA_BITS we need to account for a
variable-size VMEMMAP, which in turn means the position of the fixed map
is no longer a compile-time constant.

Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
position with BUG_ON's.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
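
For context, BUILD_BUG_ON() only works when its condition is an expression the
compiler can evaluate at build time, whereas BUG_ON() evaluates its condition at
runtime and panics if it is true. A minimal sketch of the distinction (illustrative
only, not part of this series; va_bits is a hypothetical runtime value):

	#include <linux/bug.h>
	#include <linux/init.h>

	static unsigned long va_bits;	/* hypothetical: discovered during early boot */

	static void __init check_layout(void)
	{
		/* Fine: the condition is a compile-time constant expression. */
		BUILD_BUG_ON(sizeof(unsigned long) != 8);

		/* Would break the build: va_bits is not a constant expression. */
		/* BUILD_BUG_ON(va_bits > 52); */

		/* Runtime alternative: the check happens (and can panic) at boot. */
		BUG_ON(va_bits > 52);
	}

With a runtime-determined VA_BITS the fixmap checks fall into the second
category, hence the conversion in this patch.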

Comments

Ard Biesheuvel May 28, 2019, 5:07 p.m. UTC | #1
On Tue, 28 May 2019 at 18:10, Steve Capper <steve.capper@arm.com> wrote:
>
> In order to prepare for a variable VA_BITS we need to account for a
> variable size VMEMMAP which in turn means the position of the fixed map
> is variable at compile time.
>
> Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
> position with BUG_ON's.
>
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Do we still need this patch now that VMEMMAP_SIZE is a compile time
constant again? Or am I missing something?

> ---
>  arch/arm64/mm/mmu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b0401b2ec4da..1b88d9d81954 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -846,7 +846,7 @@ void __init early_fixmap_init(void)
>          * The boot-ioremap range spans multiple pmds, for which
>          * we are not prepared:
>          */
> -       BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> +       BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
>                      != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
>
>         if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
> @@ -914,9 +914,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
>          * On 4k pages, we'll use section mappings for the FDT so we only
>          * have to be in the same PUD.
>          */
> -       BUILD_BUG_ON(dt_virt_base % SZ_2M);
> +       BUG_ON(dt_virt_base % SZ_2M);
>
> -       BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
> +       BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
>                      __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
>
>         offset = dt_phys % SWAPPER_BLOCK_SIZE;
> --
> 2.20.1
>
Ard Biesheuvel May 28, 2019, 5:11 p.m. UTC | #2
On Tue, 28 May 2019 at 19:07, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
> On Tue, 28 May 2019 at 18:10, Steve Capper <steve.capper@arm.com> wrote:
> >
> > In order to prepare for a variable VA_BITS we need to account for a
> > variable size VMEMMAP which in turn means the position of the fixed map
> > is variable at compile time.
> >
> > Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
> > position with BUG_ON's.
> >
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
>
> Do we still need this patch now that VMEMMAP_SIZE is a compile time
> constant again? Or am I missing something?
>

Never mind. You are moving the start of the linear region and the
start of the vmemmap region.

I wonder if this is really necessary, though. On a 48-bit-VA-only
system, the linear region could still start at 0xfff0_.... as long as
we don't put any physical memory there (and we already move around our
physical memory inside the linear region when KASLR is enabled).
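
A rough sketch of that idea (hypothetical names, not the actual arm64
implementation): keep the linear region based at the address needed for the
largest supported VA size, and on a CPU that only implements 48-bit VAs simply
never place physical memory in the part of it that such a CPU cannot reach:

	#define VA_BITS_MAX	52
	/* Linear region always starts here: 0xfff0000000000000 for 52-bit VAs. */
	#define LINEAR_START	(-(1UL << VA_BITS_MAX))

	/* Lowest linear-map address this CPU can actually use. */
	static inline unsigned long linear_usable_start(unsigned int runtime_va_bits)
	{
		return -(1UL << runtime_va_bits);	/* 0xffff000000000000 for 48-bit VAs */
	}

The compile-time layout then stays fixed, and only the placement of memory
within the linear region depends on the VA size detected at boot.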


> > ---
> >  arch/arm64/mm/mmu.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index b0401b2ec4da..1b88d9d81954 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -846,7 +846,7 @@ void __init early_fixmap_init(void)
> >          * The boot-ioremap range spans multiple pmds, for which
> >          * we are not prepared:
> >          */
> > -       BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> > +       BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> >                      != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
> >
> >         if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
> > @@ -914,9 +914,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
> >          * On 4k pages, we'll use section mappings for the FDT so we only
> >          * have to be in the same PUD.
> >          */
> > -       BUILD_BUG_ON(dt_virt_base % SZ_2M);
> > +       BUG_ON(dt_virt_base % SZ_2M);
> >
> > -       BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
> > +       BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
> >                      __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
> >
> >         offset = dt_phys % SWAPPER_BLOCK_SIZE;
> > --
> > 2.20.1
> >
Steve Capper May 29, 2019, 9:28 a.m. UTC | #3
On Tue, May 28, 2019 at 07:11:22PM +0200, Ard Biesheuvel wrote:
> On Tue, 28 May 2019 at 19:07, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >
> > On Tue, 28 May 2019 at 18:10, Steve Capper <steve.capper@arm.com> wrote:
> > >
> > > In order to prepare for a variable VA_BITS we need to account for a
> > > variable size VMEMMAP which in turn means the position of the fixed map
> > > is variable at compile time.
> > >
> > > Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
> > > position with BUG_ON's.
> > >
> > > Signed-off-by: Steve Capper <steve.capper@arm.com>
> >
> > Do we still need this patch now that VMEMMAP_SIZE is a compile time
> > constant again? Or am I missing something?
> >
> 
> Never mind. You are moving the start of the linear region and the
> start of the vmemmap region.
> 
> I wonder if this is really necessary, though. On a 48-bit-VA-only
> system, the linear region could still start at 0xfff0_.... as long as
> we don't put any physical memory there (and we already move around our
> physical memory inside the linear region when KASLR is enabled).
> 
>

Thanks Ard,
This patch is actually superfluous; it was needed in earlier versions of
the series, but VMEMMAP_SIZE is now fixed at compile time.

I have removed this patch from the series. I will also see if I can
simplify the de-constifying patches and bitmask replacement logic.

Cheers,

Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b0401b2ec4da..1b88d9d81954 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -846,7 +846,7 @@  void __init early_fixmap_init(void)
 	 * The boot-ioremap range spans multiple pmds, for which
 	 * we are not prepared:
 	 */
-	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+	BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
 
 	if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
@@ -914,9 +914,9 @@  void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	 * On 4k pages, we'll use section mappings for the FDT so we only
 	 * have to be in the same PUD.
 	 */
-	BUILD_BUG_ON(dt_virt_base % SZ_2M);
+	BUG_ON(dt_virt_base % SZ_2M);
 
-	BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
+	BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
 		     __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
 
 	offset = dt_phys % SWAPPER_BLOCK_SIZE;