From patchwork Wed Apr 15 15:34:22 2015
From: Ard Biesheuvel
To: mark.rutland@arm.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 11/13] arm64: map linear region as non-executable
Date: Wed, 15 Apr 2015 17:34:22 +0200
Message-Id: <1429112064-19952-12-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>

Now that we moved the kernel text out of the linear region, there is no
longer a reason to map the linear region as executable.
This also allows us to completely get rid of the __map_memblock() variant
that only maps some of it executable if CONFIG_DEBUG_RODATA is selected.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/mmu.c | 41 ++---------------------------------------
 1 file changed, 2 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b457b7e425cc..c07ba8bdd8ed 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -303,47 +303,10 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 			     phys, virt, size, prot, late_alloc);
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 {
-	/*
-	 * Set up the executable regions using the existing section mappings
-	 * for now. This will get more fine grained later once all memory
-	 * is mapped
-	 */
-	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
-
-	if (end < kernel_x_start) {
-		create_mapping(start, __phys_to_virt(start),
-			end - start, PAGE_KERNEL);
-	} else if (start >= kernel_x_end) {
-		create_mapping(start, __phys_to_virt(start),
-			end - start, PAGE_KERNEL);
-	} else {
-		if (start < kernel_x_start)
-			create_mapping(start, __phys_to_virt(start),
-				kernel_x_start - start,
-				PAGE_KERNEL);
-		create_mapping(kernel_x_start,
-				__phys_to_virt(kernel_x_start),
-				kernel_x_end - kernel_x_start,
-				PAGE_KERNEL_EXEC);
-		if (kernel_x_end < end)
-			create_mapping(kernel_x_end,
-				__phys_to_virt(kernel_x_end),
-				end - kernel_x_end,
-				PAGE_KERNEL);
-	}
-
-}
-#else
-static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
-{
-	create_mapping(start, __phys_to_virt(start), end - start,
-			PAGE_KERNEL_EXEC);
+	create_mapping(start, __phys_to_virt(start), end - start, PAGE_KERNEL);
 }
-#endif
 
 struct bootstrap_pgtables {
 	pte_t pte[PTRS_PER_PTE];
@@ -429,7 +392,7 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 #endif
 	create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
-		       PAGE_KERNEL_EXEC);
+		       PAGE_KERNEL);
 
 	/*
 	 * Temporarily limit the memblock range. We need to do this as