From patchwork Wed Jul 11 17:04:26 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10520243
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-hardening@lists.openwall.com
Cc: mark.rutland@arm.com, keescook@chromium.org,
    Ard Biesheuvel <ard.biesheuvel@linaro.org>, catalin.marinas@arm.com,
    will.deacon@arm.com, james.morse@arm.com, labbott@redhat.com
Subject: [PATCH v2] arm64/mm: unmap the linear alias of module allocations
Date: Wed, 11 Jul 2018 19:04:26 +0200
Message-Id: <20180711170426.11133-1-ard.biesheuvel@linaro.org>

When CONFIG_STRICT_KERNEL_RWX=y (the default on arm64), we take great
care to ensure that the mappings of modules in the vmalloc space are
locked down as much as possible: executable code is mapped read-only,
read-only data is mapped read-only and non-executable, and read-write
data is mapped non-executable as well.

However, due to the way we map the linear region (aka the kernel direct
mapping), those module regions are aliased by read-write mappings, and
it is possible for an attacker to modify code or data that was assumed
to be immutable in this configuration.

So let's ensure that the linear alias of module memory is remapped
read-only upon allocation and remapped read-write when it is freed. The
latter requires some special handling involving a workqueue, since
module_memfree() may be called in softirq context, where calling
find_vm_area() is unsafe.

Note that this requires the entire linear region to be mapped down to
pages, which may result in a performance regression in some hardware
configurations.
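To illustrate the aliasing at issue, here is a minimal sketch (not part
of the patch; vmalloc_to_page() and page_address() are existing kernel
APIs). A module_alloc() region can be translated to its linear-map
alias, and without this patch a write through that alias succeeds even
when the vmalloc mapping of the module's text is read-only:

  #include <linux/mm.h>

  /* Return the linear-map (direct mapping) alias of a vmalloc address. */
  static void *linear_alias_of(const void *vmalloc_addr)
  {
  	struct page *page = vmalloc_to_page(vmalloc_addr);

  	return page ? page_address(page) : NULL;
  }
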
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
Changes since RFC/v1:
- remap linear alias read-only rather than invalid
- honour rodata_enabled, i.e., 'rodata=off' will disable this
  functionality
- use VMA nr_pages field rather than size to obtain the number of pages
  to remap

 arch/arm64/include/asm/module.h |  6 +++
 arch/arm64/kernel/module.c      | 42 ++++++++++++++++++++
 arch/arm64/mm/mmu.c             |  2 +-
 arch/arm64/mm/pageattr.c        | 19 +++++++++
 4 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index 97d0ef12e2ff..f16f0073642b 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -91,4 +91,10 @@ static inline bool plt_entries_equal(const struct plt_entry *a,
 	       a->mov2 == b->mov2;
 }
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+void remap_linear_module_alias(void *module_region, bool ro);
+#else
+static inline void remap_linear_module_alias(void *module_region, bool ro) {}
+#endif
+
 #endif /* __ASM_MODULE_H */
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index f0f27aeefb73..2b61ca0285eb 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -26,10 +26,51 @@
 #include <linux/mm.h>
 #include <linux/moduleloader.h>
 #include <linux/vmalloc.h>
+#include <linux/workqueue.h>
 #include <asm/alternative.h>
 #include <asm/insn.h>
 #include <asm/sections.h>
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+
+static struct workqueue_struct *module_free_wq;
+
+static int init_workqueue(void)
+{
+	module_free_wq = alloc_ordered_workqueue("module_free_wq", 0);
+	WARN_ON(!module_free_wq);
+
+	return 0;
+}
+pure_initcall(init_workqueue);
+
+static void module_free_wq_worker(struct work_struct *work)
+{
+	remap_linear_module_alias(work, false);
+	vfree(work);
+}
+
+void module_memfree(void *module_region)
+{
+	struct work_struct *work;
+
+	if (!module_region)
+		return;
+
+	/*
+	 * At this point, module_region is a pointer to an allocation of at
+	 * least PAGE_SIZE bytes that is mapped read-write. So instead of
+	 * allocating memory for a data structure containing a work_struct
+	 * instance and a copy of the value of module_region, just reuse the
+	 * allocation directly.
+	 */
+	work = module_region;
+	INIT_WORK(work, module_free_wq_worker);
+	queue_work(module_free_wq, work);
+}
+
+#endif
+
 void *module_alloc(unsigned long size)
 {
 	gfp_t gfp_mask = GFP_KERNEL;
@@ -65,6 +106,7 @@ void *module_alloc(unsigned long size)
 		return NULL;
 	}
 
+	remap_linear_module_alias(p, true);
 	return p;
 }
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 493ff75670ff..5492b691aafd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -432,7 +432,7 @@ static void __init map_mem(pgd_t *pgdp)
 	struct memblock_region *reg;
 	int flags = 0;
 
-	if (debug_pagealloc_enabled())
+	if (rodata_enabled || debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index a56359373d8b..cc04be572660 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -138,6 +138,25 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 				      __pgprot(PTE_VALID));
 }
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+void remap_linear_module_alias(void *module_region, bool ro)
+{
+	struct vm_struct *vm;
+	int i;
+
+	if (!rodata_enabled)
+		return;
+
+	vm = find_vm_area(module_region);
+	WARN_ON(!vm || !vm->pages);
+
+	for (i = 0; i < vm->nr_pages; i++)
+		__change_memory_common((u64)page_address(vm->pages[i]), PAGE_SIZE,
+				ro ? __pgprot(PTE_RDONLY) : __pgprot(PTE_WRITE),
+				ro ? __pgprot(PTE_WRITE) : __pgprot(PTE_RDONLY));
+}
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
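
For reference, the deferred-free technique used by module_memfree()
above, restated as a standalone sketch (the names free_wq and
deferred_free are illustrative, not from the patch; the ordered
workqueue is assumed to have been created at init, as in the patch):

  #include <linux/vmalloc.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *free_wq;	/* ordered workqueue, set up at init */

  static void deferred_free_worker(struct work_struct *work)
  {
  	/* 'work' points at the start of the region being freed */
  	remap_linear_module_alias(work, false);	/* make the linear alias RW again */
  	vfree(work);
  }

  /*
   * May be called from softirq context: the region itself (at least
   * PAGE_SIZE, still mapped read-write) doubles as the work_struct,
   * and the find_vm_area() walk is deferred to process context.
   */
  static void deferred_free(void *region)
  {
  	struct work_struct *work = region;

  	INIT_WORK(work, deferred_free_worker);
  	queue_work(free_wq, work);
  }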