From patchwork Thu Nov 19 17:37:36 2015
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Leif Lindholm, Ard Biesheuvel
Subject: [PATCH] arm64: efi: correctly align vaddr for runtime maps
Date: Thu, 19 Nov 2015 17:37:36 +0000
Message-Id: <1447954656-10435-1-git-send-email-mark.rutland@arm.com>
X-Patchwork-Id: 7659381

The kernel may use a page granularity of 4K, 16K, or 64K depending on
configuration.

When mapping EFI runtime regions, we use memrange_efi_to_native to round
the physical base address of a region down to a granule-aligned boundary,
and to round the size up to a granule-aligned boundary. However, we fail
to similarly round the virtual base address down to a granule-aligned
boundary. The virtual base address may therefore be up to PAGE_SIZE - 4K
above where it should be, and in create_pgd_mapping we may erroneously
map an additional page at the end of any region whose virtual base
address is not granule-aligned.
Depending on the memory map, this page may be in a region we are not
intended or permitted to map, or may clash with a different region that
we wish to map.

Prevent this issue by rounding the virtual base address down to the
kernel page granularity, matching what we already do for the physical
base address.

Signed-off-by: Mark Rutland
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Leif Lindholm
Cc: Will Deacon
---
 arch/arm64/kernel/efi.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

I spotted this by playing with Will's break-before-make checker [1],
which detected an erroneously created PTE being overwritten with a
different output address.

It looks like the VA bug was introduced in commit f3cdfd239da56a4c
("arm64/efi: move SetVirtualAddressMap() to UEFI stub"). Prior to commit
60305db9884515ca ("arm64/efi: move virtmap init to early initcall") the
relevant code lived elsewhere, so applying this there needs manual fixup,
but the logic fix is the same.

Mark.

[1] https://git.kernel.org/cgit/linux/kernel/git/will/linux.git/commit/?h=aarch64/devel&id=372f39220ad35fa39a75419f2221ffeb6ffd78d3

diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index de46b50..7855b69 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -225,7 +225,7 @@ static bool __init efi_virtmap_init(void)
 	efi_memory_desc_t *md;
 
 	for_each_efi_memory_desc(&memmap, md) {
-		u64 paddr, npages, size;
+		u64 paddr, vaddr, npages, size;
 		pgprot_t prot;
 
 		if (!(md->attribute & EFI_MEMORY_RUNTIME))
@@ -237,6 +237,8 @@ static bool __init efi_virtmap_init(void)
 		npages = md->num_pages;
 		memrange_efi_to_native(&paddr, &npages);
 		size = npages << PAGE_SHIFT;
+		vaddr = md->virt_addr;
+		vaddr &= PAGE_MASK;
 
 		pr_info("  EFI remap 0x%016llx => %p\n",
 			md->phys_addr, (void *)md->virt_addr);
@@ -254,7 +256,7 @@ static bool __init efi_virtmap_init(void)
 		else
 			prot = PAGE_KERNEL;
 
-		create_pgd_mapping(&efi_mm, paddr, md->virt_addr, size, prot);
+		create_pgd_mapping(&efi_mm, paddr, vaddr, size, prot);
 	}
 	return true;
 }
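
For illustration only, here is a minimal standalone sketch of the
alignment arithmetic described in the commit message; it is not kernel
code (align_region and the EXAMPLE_* macros are invented for this
example, and a PAGE_SHIFT of 16 simply stands in for a 64K-page
configuration). It shows the physical-base/size rounding that
memrange_efi_to_native is described as doing, plus the virtual-base
rounding this patch adds.

/* Standalone sketch (not kernel code) of the rounding described above:
 * EFI describes regions in 4K pages; a kernel with a larger native page
 * size must round the bases down and the size up to its own granule. */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SHIFT	16			/* e.g. a 64K-page kernel */
#define EXAMPLE_PAGE_SIZE	(1ULL << EXAMPLE_PAGE_SHIFT)
#define EXAMPLE_PAGE_MASK	(~(EXAMPLE_PAGE_SIZE - 1))
#define EFI_PAGE_SHIFT		12			/* EFI pages are always 4K */

static void align_region(uint64_t *paddr, uint64_t *vaddr,
			 uint64_t efi_npages, uint64_t *size)
{
	/* End of the region as described by the EFI memory map. */
	uint64_t end = *paddr + (efi_npages << EFI_PAGE_SHIFT);

	end = (end + EXAMPLE_PAGE_SIZE - 1) & EXAMPLE_PAGE_MASK; /* round end up */
	*paddr &= EXAMPLE_PAGE_MASK;	/* round physical base down */
	*vaddr &= EXAMPLE_PAGE_MASK;	/* round virtual base down (this patch) */
	*size = end - *paddr;		/* granule-aligned mapping size */
}

int main(void)
{
	/* A runtime region whose base is 4K-aligned but not 64K-aligned. */
	uint64_t paddr = 0x80001000, vaddr = 0xfffffe0000001000, size;

	align_region(&paddr, &vaddr, 4 /* EFI pages */, &size);
	printf("paddr=0x%llx vaddr=0x%llx size=0x%llx\n",
	       (unsigned long long)paddr, (unsigned long long)vaddr,
	       (unsigned long long)size);
	return 0;
}

With 4K kernel pages the masking is a no-op, since UEFI already describes
regions at 4K granularity; the spurious extra page can only be mapped
when the kernel uses 16K or 64K pages.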