From patchwork Mon Jun 2 20:57:35 2014
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 4283851
From: Laura Abbott
To: Will Deacon, Catalin Marinas
Cc: Steve Capper, Laura Abbott, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCHv2 1/4] arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
Date: Mon, 2 Jun 2014 13:57:35 -0700
Message-Id: <1401742658-11841-2-git-send-email-lauraa@codeaurora.org>
In-Reply-To: <1401742658-11841-1-git-send-email-lauraa@codeaurora.org>
References: <1401742658-11841-1-git-send-email-lauraa@codeaurora.org>

In a similar fashion to other architectures, add the infrastructure and
Kconfig option to enable DEBUG_SET_MODULE_RONX support. When enabled,
module ranges will be marked read-only or non-executable as appropriate.
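
For context, the main consumer of this infrastructure is the generic
module loader: with CONFIG_DEBUG_SET_MODULE_RONX enabled, kernel/module.c
applies the set_memory_* helpers below to each module's core and init
mappings. The following is a minimal caller-side sketch only; the function
name and the page-count math are illustrative, not code from this series:

    #include <linux/mm.h>           /* PAGE_ALIGN, PAGE_SHIFT */
    #include <asm/cacheflush.h>     /* set_memory_* prototypes added by this patch */

    /*
     * Illustrative sketch: protect one module mapping whose layout places
     * text at the start, followed by data/rodata, as the module loader does.
     */
    static void protect_module_mapping(unsigned long base,
                                       unsigned long text_size,
                                       unsigned long total_size)
    {
            unsigned long text_end = base + PAGE_ALIGN(text_size);

            /* Text pages stay executable but become read-only. */
            set_memory_ro(base, PAGE_ALIGN(text_size) >> PAGE_SHIFT);

            /* Pages after the text (data, rodata) become non-executable. */
            set_memory_nx(text_end,
                          (PAGE_ALIGN(total_size) - PAGE_ALIGN(text_size))
                                  >> PAGE_SHIFT);
    }
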
Signed-off-by: Laura Abbott
---
 arch/arm64/Kconfig.debug            |  11 ++++
 arch/arm64/include/asm/cacheflush.h |   4 ++
 arch/arm64/mm/Makefile              |   2 +-
 arch/arm64/mm/pageattr.c            | 121 ++++++++++++++++++++++++++++++++++++
 4 files changed, 137 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/mm/pageattr.c

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index d10ec33..53979ac 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -37,4 +37,15 @@ config PID_IN_CONTEXTIDR
 	  instructions during context switch. Say Y here only if you are
 	  planning to use hardware trace tools with this kernel.
 
+config DEBUG_SET_MODULE_RONX
+	bool "Set loadable kernel module data as NX and text as RO"
+	depends on MODULES
+	help
+	  This option helps catch unintended modifications to loadable
+	  kernel module's text and read-only data. It also prevents execution
+	  of module data. Such protection may interfere with run-time code
+	  patching and dynamic kernel tracing - and they might also protect
+	  against certain classes of kernel exploits.
+	  If in doubt, say "N".
+
 endmenu
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 4c60e64..c12f837 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -157,4 +157,8 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
 }
 
+int set_memory_ro(unsigned long addr, int numpages);
+int set_memory_rw(unsigned long addr, int numpages);
+int set_memory_x(unsigned long addr, int numpages);
+int set_memory_nx(unsigned long addr, int numpages);
 #endif
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index b51d364..25b1114 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -1,5 +1,5 @@
 obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   cache.o copypage.o flush.o \
 				   ioremap.o mmap.o pgd.o mmu.o \
-				   context.o tlb.o proc.o
+				   context.o tlb.o proc.o pageattr.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
new file mode 100644
index 0000000..d8ab747
--- /dev/null
+++ b/arch/arm64/mm/pageattr.c
@@ -0,0 +1,121 @@
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+static pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
+{
+	pte_val(pte) &= ~pgprot_val(prot);
+	return pte;
+}
+
+static pte_t set_pte_bit(pte_t pte, pgprot_t prot)
+{
+	pte_val(pte) |= pgprot_val(prot);
+	return pte;
+}
+
+static int __change_memory(pte_t *ptep, pgtable_t token, unsigned long addr,
+			pgprot_t prot, bool set)
+{
+	pte_t pte;
+
+	if (set)
+		pte = set_pte_bit(*ptep, prot);
+	else
+		pte = clear_pte_bit(*ptep, prot);
+	set_pte(ptep, pte);
+	return 0;
+}
+
+static int set_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	pgprot_t prot = (pgprot_t)data;
+
+	return __change_memory(ptep, token, addr, prot, true);
+}
+
+static int clear_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	pgprot_t prot = (pgprot_t)data;
+
+	return __change_memory(ptep, token, addr, prot, false);
+}
+
+static int change_memory_common(unsigned long addr, int numpages,
+				pgprot_t prot, bool set)
+{
+	unsigned long start = addr;
+	unsigned long size = PAGE_SIZE*numpages;
+	unsigned long end = start + size;
+	int ret;
+
+	if (start < MODULES_VADDR || start >= MODULES_END)
+		return -EINVAL;
+
+	if (end < MODULES_VADDR || end >= MODULES_END)
+		return -EINVAL;
+
+	if (set)
+		ret = apply_to_page_range(&init_mm, start, size,
+					set_page_range, (void *)prot);
+	else
+		ret = apply_to_page_range(&init_mm, start, size,
+					clear_page_range, (void *)prot);
+
+	flush_tlb_kernel_range(start, end);
+	isb();
+	return ret;
+}
+
+static int change_memory_set_bit(unsigned long addr, int numpages,
+					pgprot_t prot)
+{
+	return change_memory_common(addr, numpages, prot, true);
+}
+
+static int change_memory_clear_bit(unsigned long addr, int numpages,
+					pgprot_t prot)
+{
+	return change_memory_common(addr, numpages, prot, false);
+}
+
+int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_set_bit(addr, numpages, __pgprot(PTE_RDONLY));
+}
+EXPORT_SYMBOL_GPL(set_memory_ro);
+
+int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_clear_bit(addr, numpages, __pgprot(PTE_RDONLY));
+}
+EXPORT_SYMBOL_GPL(set_memory_rw);
+
+int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_set_bit(addr, numpages, __pgprot(PTE_PXN));
+}
+EXPORT_SYMBOL_GPL(set_memory_nx);
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_clear_bit(addr, numpages, __pgprot(PTE_PXN));
+}
+EXPORT_SYMBOL_GPL(set_memory_x);
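
Usage note: change_memory_common() above deliberately rejects any range
outside the module area, so these helpers only apply to module
allocations, and callers are expected to check the return value. A
hypothetical built-in caller toggling one page might look like the
following (illustrative only, not part of the patch):

    /*
     * Hypothetical caller: addr must lie in [MODULES_VADDR, MODULES_END),
     * e.g. memory obtained via module_alloc(), or the helpers return -EINVAL.
     */
    static int toggle_page_ro(unsigned long addr)
    {
            int ret;

            ret = set_memory_ro(addr, 1);   /* set PTE_RDONLY on one page */
            if (ret)
                    return ret;             /* -EINVAL for non-module addresses */

            return set_memory_rw(addr, 1);  /* clear PTE_RDONLY again */
    }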