
[05/12] arm64: mm: Remove VMALLOC checks from update_mapping_prot(.)

Message ID 20171204141313.31604-6-steve.capper@arm.com (mailing list archive)
State New, archived

Commit Message

Steve Capper Dec. 4, 2017, 2:13 p.m. UTC
update_mapping_prot() assumes that it will be used on the VA for the
kernel .text section (via the check virt >= VMALLOC_START).

Recent kdump patches employ this function to modify the protection of
the direct linear mapping (which is strictly speaking outside of this
area), via mark_linear_text_alias_ro(.).

We "get away" with this as the direct linear mapping currently follows
the VA for the kernel text, so the check passes.

This patch removes the check in update_mapping_prot, allowing us to move
the kernel VA layout without spuriously firing the warning.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/mmu.c | 6 ------
 1 file changed, 6 deletions(-)
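
For context, the kdump series exercises this path through
mark_linear_text_alias_ro(), which passes a linear-map alias of the kernel
text to update_mapping_prot(). Paraphrased from the arch/arm64/mm/mmu.c of
that era (exact form may differ between kernel versions):

void __init mark_linear_text_alias_ro(void)
{
	/*
	 * Remove the write permissions from the linear alias of
	 * .text/.rodata. Note that lm_alias(_text) is a linear-map
	 * address, not a kernel-image address, yet it must still pass
	 * the virt >= VMALLOC_START check in update_mapping_prot().
	 */
	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
			    (unsigned long)__init_begin - (unsigned long)_text,
			    PAGE_KERNEL_RO);
}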

Comments

Ard Biesheuvel Dec. 4, 2017, 4:01 p.m. UTC | #1
On 4 December 2017 at 14:13, Steve Capper <steve.capper@arm.com> wrote:
> update_mapping_prot() assumes that it will be used on the VA for the
> kernel .text section (via the check virt >= VMALLOC_START).
>
> Recent kdump patches employ this function to modify the protection of
> the direct linear mapping (which is strictly speaking outside of this
> area), via mark_linear_text_alias_ro(.).
>

Isn't that a bug? Is it guaranteed that those protection attributes
can be modified without splitting live page tables and risking the
resulting TLB conflicts?

> We "get away" with this as the direct linear mapping currently follows
> the VA for the kernel text, so the check passes.
>
> This patch removes the check in update_mapping_prot, allowing us to move
> the kernel VA layout without spuriously firing the warning.
>
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm64/mm/mmu.c | 6 ------
>  1 file changed, 6 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 58b1ed6fd7ec..c8f486384fe3 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -383,12 +383,6 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>                                 phys_addr_t size, pgprot_t prot)
>  {
> -       if (virt < VMALLOC_START) {
> -               pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
> -                       &phys, virt);
> -               return;
> -       }
> -
>         __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
>                              NO_CONT_MAPPINGS);
>
> --
> 2.11.0
>
Steve Capper Dec. 12, 2017, 3:39 p.m. UTC | #2
Hi Ard,

On Mon, Dec 04, 2017 at 04:01:09PM +0000, Ard Biesheuvel wrote:
> On 4 December 2017 at 14:13, Steve Capper <steve.capper@arm.com> wrote:
> > update_mapping_prot() assumes that it will be used on the VA for the
> > kernel .text section (via the check virt >= VMALLOC_START).
> >
> > Recent kdump patches employ this function to modify the protection of
> > the direct linear mapping (which is strictly speaking outside of this
> > area), via mark_linear_text_alias_ro(.).
> >
> 
> Isn't that a bug? Is it guaranteed that those protection attributes
> can be modified without splitting live page tables and risking the
> resulting TLB conflicts?

IIUC there is a bug in my earlier patch to flip the kernel VA space
round: it should allow addresses from the linear mapping to be processed
by update_mapping_prot(). I'll update the logic in the VA flip patch, so
this patch can be removed from the series.

It is not apparent to me how mark_linear_text_alias_ro(.) guarantees
that no page table entries for the linear map are split, though.

Cheers,
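
For reference, the form of this check that eventually landed upstream
with the VA flip keeps the warning but accepts linear-map addresses too.
A sketch of that shape, using names from later kernels (the exact bound
has varied across versions, so treat this as illustrative rather than
definitive):

static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
				phys_addr_t size, pgprot_t prot)
{
	/*
	 * With the flipped layout the linear map occupies the bottom of
	 * the kernel VA space, so only the hole between the end of the
	 * linear region and the start of the vmalloc region is rejected.
	 */
	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
			&phys, virt);
		return;
	}

	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
			     NO_CONT_MAPPINGS);

	/* Flush the TLBs after updating live kernel mappings. */
	flush_tlb_kernel_range(virt, virt + size);
}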
Catalin Marinas Dec. 13, 2017, 4:04 p.m. UTC | #3
On Tue, Dec 12, 2017 at 03:39:23PM +0000, Steve Capper wrote:
> It is not apparent to me how mark_linear_text_alias_ro(.) guarantees
> that no page table entries for the linear map are split, though.

map_mem() ensures that, when the kernel text region is mapped via
__map_memblock(), no contiguous entries are created for it. Also, both
ends of the _text..__init_begin range are mapped at the granularity
permitted by their alignment, so that no later splitting is necessary.
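
To make this concrete, the relevant part of map_mem() maps the
[_text, __init_begin) interval separately with NO_CONT_MAPPINGS, so that
its permissions can later be tightened without splitting live entries.
Abridged and simplified from the map_mem() of that era (not line-for-line;
details vary by version):

static void __init map_mem(pgd_t *pgd)
{
	phys_addr_t kernel_start = __pa_symbol(_text);
	phys_addr_t kernel_end = __pa_symbol(__init_begin);
	struct memblock_region *reg;

	/*
	 * Mark [_text, __init_begin) NOMAP so the loop below skips it;
	 * otherwise it could end up covered by contiguous entries that
	 * would need splitting later.
	 */
	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);

	/* Map all other memory banks with the default flags. */
	for_each_memblock(memory, reg) {
		if (memblock_is_nomap(reg))
			continue;
		__map_memblock(pgd, reg->base, reg->base + reg->size,
			       PAGE_KERNEL, 0);
	}

	/*
	 * Map the linear alias of the kernel text/rodata separately and
	 * without contiguous entries, so mark_linear_text_alias_ro() can
	 * later change its permissions in place.
	 */
	__map_memblock(pgd, kernel_start, kernel_end,
		       PAGE_KERNEL, NO_CONT_MAPPINGS);
	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
}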

Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 58b1ed6fd7ec..c8f486384fe3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -383,12 +383,6 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
-		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
-			&phys, virt);
-		return;
-	}
-
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
 			     NO_CONT_MAPPINGS);