
[v2,01/12] arm/arm64: KVM: Formalise end of direct linear map

Message ID 20190528161026.13193-2-steve.capper@arm.com (mailing list archive)
State New, archived
Series 52-bit kernel + user VAs

Commit Message

Steve Capper May 28, 2019, 4:10 p.m. UTC
We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the address space of the kernel is re-arranged.

This patch introduces a new constant, PAGE_OFFSET_END, for both arm and
arm64 and defines it to be ~0UL.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm/include/asm/memory.h   | 1 +
 arch/arm64/include/asm/memory.h | 1 +
 virt/kvm/arm/mmu.c              | 4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

Comments

Marc Zyngier May 28, 2019, 4:27 p.m. UTC | #1
Hi Steve,

On 28/05/2019 17:10, Steve Capper wrote:
> We assume that the direct linear map ends at ~0 in the KVM HYP map

Do we? This has stopped being the case since ed57cac83e05f ("arm64: KVM:
Introduce EL2 VA randomisation").

> intersection checking code. This assumption will become invalid later on
> for arm64 when the address space of the kernel is re-arranged.
> 
> This patch introduces a new constant PAGE_OFFSET_END for both arm and
> arm64 and defines it to be ~0UL
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm/include/asm/memory.h   | 1 +
>  arch/arm64/include/asm/memory.h | 1 +
>  virt/kvm/arm/mmu.c              | 4 ++--
>  3 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index ed8fd0d19a3e..45c211fd50da 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -24,6 +24,7 @@
>  
>  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
>  #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
> +#define PAGE_OFFSET_END		(~0UL)
>  
>  #ifdef CONFIG_MMU
>  
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 8ffcf5a512bb..9fd387a63b9b 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -52,6 +52,7 @@
>  	(UL(1) << VA_BITS) + 1)
>  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
>  	(UL(1) << (VA_BITS - 1)) + 1)
> +#define PAGE_OFFSET_END		(~0UL)
>  #define KIMAGE_VADDR		(MODULES_END)
>  #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
>  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 74b6582eaa3c..e1a777275b37 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
>  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
>  	kvm_debug("HYP VA range: %lx:%lx\n",
>  		  kern_hyp_va(PAGE_OFFSET),
> -		  kern_hyp_va((unsigned long)high_memory - 1));
> +		  kern_hyp_va(PAGE_OFFSET_END));
>  
>  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> -	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
> +	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
>  	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
>  		/*
>  		 * The idmap page is intersecting with the VA space,
> 

This definitely looks like a move in the wrong direction (reverting part
of the above commit). Is it that this is just an old patch that should
have been dropped? Or am I completely missing the point?

Thanks,

	M.
Steve Capper May 28, 2019, 5:01 p.m. UTC | #2
On Tue, May 28, 2019 at 05:27:17PM +0100, Marc Zyngier wrote:
> Hi Steve,

Hi Marc,

> 
> On 28/05/2019 17:10, Steve Capper wrote:
> > We assume that the direct linear map ends at ~0 in the KVM HYP map
> 
> Do we? This has stopped being the case since ed57cac83e05f ("arm64: KVM:
> Introduce EL2 VA randomisation").
> 
> > [...]
> 
> This definitely looks like a move in the wrong direction (reverting part
> of the above commit). Is it that this is just an old patch that should
> have been dropped? Or am I completely missing the point?

I suspect this is a case of me rebasing my series... poorly.
I'll re-examine the logic here and either update the patch or the commit
log to make it clearer.

Cheers,
Steve Capper May 29, 2019, 9:26 a.m. UTC | #3
On Tue, May 28, 2019 at 05:01:29PM +0000, Steve Capper wrote:
> On Tue, May 28, 2019 at 05:27:17PM +0100, Marc Zyngier wrote:
> > Hi Steve,
> 
> Hi Marc,
> 
> > 
> > On 28/05/2019 17:10, Steve Capper wrote:
> > > We assume that the direct linear map ends at ~0 in the KVM HYP map
> > 
> > Do we? This has stopped being the case since ed57cac83e05f ("arm64: KVM:
> > Introduce EL2 VA randomisation").
> > 
> > > [...]
> > 
> > This definitely looks like a move in the wrong direction (reverting part
> > of the above commit). Is it that this is just an old patch that should
> > have been dropped? Or am I completely missing the point?
> 
> I suspect this is a case of me rebasing my series... poorly.
> I'll re-examine the logic here and either update the patch or the commit
> log to make it clearer.
>

Hi Marc,
Thanks, this was indeed an overzealous rebase. I've removed this patch from the
series and kvmtool is happy booting guests (when rebased to v5.2-rc2).

Cheers,

Patch

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d19a3e..45c211fd50da 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -24,6 +24,7 @@
 
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+#define PAGE_OFFSET_END		(~0UL)
 
 #ifdef CONFIG_MMU
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8ffcf5a512bb..9fd387a63b9b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -52,6 +52,7 @@
 	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET_END		(~0UL)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 74b6582eaa3c..e1a777275b37 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
-		  kern_hyp_va((unsigned long)high_memory - 1));
+		  kern_hyp_va(PAGE_OFFSET_END));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
+	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,