From patchwork Tue Jun 26 16:54:55 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10489569
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, keescook@chromium.org, will.deacon@arm.com,
 catalin.marinas@arm.com, james.morse@arm.com, labbott@fedoraproject.org,
 kernel-hardening@lists.openwall.com, yaojun8558363@gmail.com,
 Ard Biesheuvel
Subject: [RFC PATCH] arm64/mm: unmap the linear alias of module allocations
Date: Tue, 26 Jun 2018 18:54:55 +0200
Message-Id: <20180626165455.22636-1-ard.biesheuvel@linaro.org>

When CONFIG_STRICT_KERNEL_RWX=y (which is the default on arm64), we
take great care to ensure that the mappings of modules in the vmalloc
space are locked down as much as possible, i.e., executable code is
mapped read-only, read-only data is mapped read-only and non-executable,
and read-write data is mapped non-executable as well.

However, due to the way we map the linear region (aka the kernel direct
mapping), those module regions are aliased by read-write mappings, and
it is possible for an attacker to modify code or data that was assumed
to be immutable in this configuration.
So let's ensure that the linear alias of module memory is unmapped upon
allocation and remapped when it is freed. The latter requires some
special handling involving a workqueue due to the fact that it may be
called in softirq context, at which time calling find_vm_area() is
unsafe.

Note that this requires the entire linear region to be mapped down to
pages, which may result in a performance regression in some
configurations.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
For this RFC, I simply reused set_memory_valid() to do the unmap/remap,
but I am aware that this likely breaks hibernation, and perhaps some
other things as well, so we should probably remap r/o instead.

 arch/arm64/kernel/module.c | 57 ++++++++++++++++++++
 arch/arm64/mm/mmu.c        |  2 +-
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 155fd91e78f4..4a1d3c7486f5 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -26,10 +26,66 @@
 #include
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+
+static struct workqueue_struct *module_free_wq;
+
+static int init_workqueue(void)
+{
+	module_free_wq = alloc_ordered_workqueue("module_free_wq", 0);
+	WARN_ON(!module_free_wq);
+
+	return 0;
+}
+pure_initcall(init_workqueue);
+
+static void remap_linear_module_alias(void *module_region, int enable)
+{
+	struct vm_struct *vm = find_vm_area(module_region);
+	struct page **p;
+	unsigned long size;
+
+	WARN_ON(!vm || !vm->pages);
+
+	for (p = vm->pages, size = vm->size; size > 0; size -= PAGE_SIZE)
+		set_memory_valid((u64)page_address(*p++), 1, enable);
+}
+
+static void module_free_wq_worker(struct work_struct *work)
+{
+	remap_linear_module_alias(work, true);
+	vfree(work);
+}
+
+void module_memfree(void *module_region)
+{
+	struct work_struct *work;
+
+	if (!module_region)
+		return;
+
+	/*
+	 * At this point, module_region is a pointer to an allocation of at
+	 * least PAGE_SIZE bytes that is mapped read-write. So instead of
+	 * allocating memory for a data structure containing a work_struct
+	 * instance and a copy of the value of module_region, just reuse the
+	 * allocation directly.
+	 */
+	work = module_region;
+	INIT_WORK(work, module_free_wq_worker);
+	queue_work(module_free_wq, work);
+}
+
+#else
+static void remap_linear_module_alias(void *module_region, int enable) {}
+#endif
+
 void *module_alloc(unsigned long size)
 {
 	gfp_t gfp_mask = GFP_KERNEL;
@@ -65,6 +121,7 @@ void *module_alloc(unsigned long size)
 		return NULL;
 	}
 
+	remap_linear_module_alias(p, false);
 	return p;
 }
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 493ff75670ff..e1057ebb672d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -432,7 +432,7 @@ static void __init map_mem(pgd_t *pgdp)
 	struct memblock_region *reg;
 	int flags = 0;
 
-	if (debug_pagealloc_enabled())
+	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) || debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*