From patchwork Tue Feb 14 20:52:35 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9572863
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, labbott@fedoraproject.org
Cc: kernel-hardening@lists.openwall.com, kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com, andre.przywara@arm.com, suzuki.poulose@arm.com, james.morse@arm.com, keescook@chromium.org, Ard Biesheuvel
Date: Tue, 14 Feb 2017 20:52:35 +0000
Message-Id: <1487105558-21897-3-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1487105558-21897-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1487105558-21897-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [PATCH v3 2/5] arm64: mmu: move TLB maintenance from callers to create_mapping_late()

In preparation for refactoring the kernel mapping logic so that text regions
are never mapped writable, which would require adding explicit TLB maintenance
to new call sites of create_mapping_late() (currently invoked twice from the
same function), move the TLB maintenance from the call site into
create_mapping_late() itself, and change it from a full TLB flush into a flush
by VA, which is more appropriate here.
Also, given that create_mapping_late() has evolved into a routine that only
updates protection bits on existing mappings, rename it to
update_mapping_prot().

Reviewed-by: Mark Rutland
Tested-by: Mark Rutland
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/mmu.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2131521ddc24..a98419b72a09 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -345,17 +345,20 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			     pgd_pgtable_alloc, page_mappings_only);
 }
 
-static void create_mapping_late(phys_addr_t phys, unsigned long virt,
-				  phys_addr_t size, pgprot_t prot)
+static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
+				phys_addr_t size, pgprot_t prot)
 {
 	if (virt < VMALLOC_START) {
-		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
+		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
 	}
 
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
 			     debug_pagealloc_enabled());
+
+	/* flush the TLBs after updating live kernel mappings */
+	flush_tlb_kernel_range(virt, virt + size);
 }
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
@@ -428,19 +431,16 @@ void mark_rodata_ro(void)
 	unsigned long section_size;
 
 	section_size = (unsigned long)_etext - (unsigned long)_text;
-	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
+	update_mapping_prot(__pa_symbol(_text), (unsigned long)_text,
 			    section_size, PAGE_KERNEL_ROX);
 
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
-	create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
+	update_mapping_prot(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 
-	/* flush the TLBs after updating live kernel mappings */
-	flush_tlb_all();
-
 	debug_checkwx();
 }
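The control flow the patch produces - update the live mapping first, then flush only the affected VA range, and skip both for addresses outside the kernel range - can be sketched as a small user-space model. The stub bodies, the recorded-flush bookkeeping, and the VMALLOC_START value below are illustrative assumptions for demonstration only; the real implementations live in arch/arm64/mm/mmu.c and the arm64 TLB code.

```c
#include <stdio.h>
#include <assert.h>

typedef unsigned long phys_addr_t;
typedef unsigned long pgprot_t;

/* illustrative placeholder, not the real arm64 value */
#define VMALLOC_START 0xffff000008000000UL

/* record the last flushed VA span so the behaviour can be checked */
static unsigned long flush_start, flush_end;
static int flush_calls;

/* stand-in for the arm64 page-table walk that rewrites the PTEs */
static void __create_pgd_mapping(phys_addr_t phys, unsigned long virt,
				 phys_addr_t size, pgprot_t prot)
{
	printf("remap %#lx..%#lx (phys %#lx) with prot %#lx\n",
	       virt, virt + size, phys, prot);
}

/* stand-in for flush_tlb_kernel_range(): a by-VA flush, not a full flush */
static void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	flush_start = start;
	flush_end = end;
	flush_calls++;
}

static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
				phys_addr_t size, pgprot_t prot)
{
	if (virt < VMALLOC_START) {
		printf("BUG: not updating mapping for %#lx at %#016lx - outside kernel range\n",
		       phys, virt);
		return;
	}

	__create_pgd_mapping(phys, virt, size, prot);

	/* flush the TLBs after updating live kernel mappings */
	flush_tlb_kernel_range(virt, virt + size);
}
```

With the flush inside update_mapping_prot(), every future caller gets the TLB maintenance for free, and each call invalidates only the VA range it touched instead of the whole TLB, which is what made flush_tlb_all() at the mark_rodata_ro() call site both redundant and overly broad.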