From patchwork Thu Mar 9 20:52:05 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9614181
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
	keescook@chromium.org, labbott@fedoraproject.org
Cc: kernel-hardening@lists.openwall.com, will.deacon@arm.com,
	catalin.marinas@arm.com, Ard Biesheuvel
Date: Thu, 9 Mar 2017 21:52:05 +0100
Message-Id: <1489092729-16871-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1489092729-16871-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1489092729-16871-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [PATCH v6 07/11] arm64/mmu: ignore debug_pagealloc for kernel segments

The debug_pagealloc facility manipulates kernel mappings in the linear
region at page granularity to detect out-of-bounds or use-after-free
accesses. Since the kernel segments are not allocated dynamically,
there is no point in taking the debug_pagealloc_enabled flag into
account for them, and we can use block mappings unconditionally.

Note that this applies equally to the linear alias of text/rodata: we
will never have dynamic allocations there given that the same memory
is statically in use by the kernel image.
Reviewed-by: Mark Rutland
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/mmu.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index bb9179084217..ec23aec6433f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -328,8 +328,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 		return;
 	}
 
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
-			     NULL, debug_pagealloc_enabled());
+	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, false);
 
 	/* flush the TLBs after updating live kernel mappings */
 	flush_tlb_kernel_range(virt, virt + size);
@@ -381,7 +380,7 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 	 */
 	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
 			     kernel_end - kernel_start, PAGE_KERNEL,
-			     early_pgtable_alloc, debug_pagealloc_enabled());
+			     early_pgtable_alloc, false);
 }
 
 void __init mark_linear_text_alias_ro(void)
@@ -437,7 +436,7 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(size));
 
 	__create_pgd_mapping(pgd, pa_start, (unsigned long)va_start, size, prot,
-			     early_pgtable_alloc, debug_pagealloc_enabled());
+			     early_pgtable_alloc, false);
 
 	vma->addr	= va_start;
 	vma->phys_addr	= pa_start;