From patchwork Wed Feb 26 06:23:56 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405341
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Christophe Leroy, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH v4 1/8] powerpc/mm: Implement set_memory() routines
Date: Wed, 26 Feb 2020 17:23:56 +1100
Message-Id: <20200226062403.63790-2-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

From: Christophe Leroy

The set_memory_{ro/rw/nx/x}() functions are required for
STRICT_MODULE_RWX, and are generally useful primitives to have.  This
implementation is designed to be completely generic across powerpc's
many MMUs.

It's possible that this could be optimised to be faster for specific
MMUs, but the focus is on having a generic and safe implementation for
now.

This implementation does not handle cases where the caller is
attempting to change the mapping of the page it is executing from, or
if another CPU is concurrently using the page being altered.  These
cases likely shouldn't happen, but a more complex implementation with
MMU-specific code could safely handle them, so that is left as a TODO
for now.

Signed-off-by: Russell Currey
Signed-off-by: Christophe Leroy
---
 arch/powerpc/Kconfig                  |  1 +
 arch/powerpc/include/asm/set_memory.h | 32 ++++++++++++
 arch/powerpc/mm/Makefile              |  2 +-
 arch/powerpc/mm/pageattr.c            | 74 +++++++++++++++++++++++++++
 4 files changed, 108 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 497b7d0b2d7e..bd074246e34e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -129,6 +129,7 @@ config PPC
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 000000000000..64011ea444b4
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SET_MEMORY_H
+#define _ASM_POWERPC_SET_MEMORY_H
+
+#define SET_MEMORY_RO	0
+#define SET_MEMORY_RW	1
+#define SET_MEMORY_NX	2
+#define SET_MEMORY_X	3
+
+int change_memory_attr(unsigned long addr, int numpages, long action);
+
+static inline int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
+}
+
+static inline int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
+}
+
+static inline int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
+}
+
+static inline int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_X);
+}
+
+#endif
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 5e147986400d..a998fdac52f9 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -5,7 +5,7 @@
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
-obj-y				:= fault.o mem.o pgtable.o mmap.o \
+obj-y				:= fault.o mem.o pgtable.o mmap.o pageattr.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   pgtable-frag.o ioremap.o ioremap_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
new file mode 100644
index 000000000000..2b573768a7f7
--- /dev/null
+++ b/arch/powerpc/mm/pageattr.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * MMU-generic set_memory implementation for powerpc
+ *
+ * Copyright 2019, IBM Corporation.
+ */
+
+#include
+#include
+
+#include
+#include
+#include
+
+
+/*
+ * Updates the attributes of a page in three steps:
+ *
+ * 1. invalidate the page table entry
+ * 2. flush the TLB
+ * 3. install the new entry with the updated attributes
+ *
+ * This is unsafe if the caller is attempting to change the mapping of the
+ * page it is executing from, or if another CPU is concurrently using the
+ * page being altered.
+ *
+ * TODO make the implementation resistant to this.
+ */
+static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	long action = (long)data;
+	pte_t pte;
+
+	spin_lock(&init_mm.page_table_lock);
+
+	/* invalidate the PTE so it's safe to modify */
+	pte = ptep_get_and_clear(&init_mm, addr, ptep);
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	/* modify the PTE bits as desired, then apply */
+	switch (action) {
+	case SET_MEMORY_RO:
+		pte = pte_wrprotect(pte);
+		break;
+	case SET_MEMORY_RW:
+		pte = pte_mkwrite(pte);
+		break;
+	case SET_MEMORY_NX:
+		pte = pte_exprotect(pte);
+		break;
+	case SET_MEMORY_X:
+		pte = pte_mkexec(pte);
+		break;
+	default:
+		break;
+	}
+
+	set_pte_at(&init_mm, addr, ptep, pte);
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+int change_memory_attr(unsigned long addr, int numpages, long action)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long sz = numpages * PAGE_SIZE;
+
+	if (!numpages)
+		return 0;
+
+	return apply_to_page_range(&init_mm, start, sz, change_page_attr, (void *)action);
+}
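
As an illustration of the caller-facing API added above, here is a minimal,
hypothetical sketch of how a user of these routines might look.  Only the
set_memory_*() prototypes come from the patch; the function name, the buffer
and the call sites are invented for illustration and are not part of the
series.

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>

static int example_protect_buffer(void)
{
	void *buf = vmalloc(PAGE_SIZE);	/* vmalloc memory is page aligned */

	if (!buf)
		return -ENOMEM;

	memset(buf, 0, PAGE_SIZE);		/* populate while still writable */

	/* addr must be page aligned, numpages counts PAGE_SIZE units */
	set_memory_ro((unsigned long)buf, 1);

	/* ... use the now read-only data ... */

	set_memory_rw((unsigned long)buf, 1);	/* make writable again before freeing */
	vfree(buf);

	return 0;
}

Note that, per the commit message, the region being changed must not be the
page the caller is executing from, and no other CPU should be using it while
the attributes are updated.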
From patchwork Wed Feb 26 06:23:57 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405343
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Christophe Leroy, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH v4 2/8] powerpc/kprobes: Mark newly allocated probes as RO
Date: Wed, 26 Feb 2020 17:23:57 +1100
Message-Id: <20200226062403.63790-3-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

From: Christophe Leroy

With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
W+X page at boot by default.  This can be tested with
CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
kernel log during boot.

powerpc doesn't implement its own alloc() for kprobes like other
architectures do, but we couldn't immediately mark RO anyway since we
do a memcpy to the page we allocate later.  After that, nothing should
be allowed to modify the page, and write permissions are removed well
before the kprobe is armed.

The memcpy() would fail if >1 probes were allocated, so use
patch_instruction() instead which is safe for RO.

Reviewed-by: Daniel Axtens
Signed-off-by: Russell Currey
Signed-off-by: Christophe Leroy
---
 arch/powerpc/kernel/kprobes.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 2d27ec4feee4..bfab91ded234 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -24,6 +24,8 @@
 #include
 #include
 #include
+#include
+#include
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -102,6 +104,16 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 	return addr;
 }
 
+void *alloc_insn_page(void)
+{
+	void *page = vmalloc_exec(PAGE_SIZE);
+
+	if (page)
+		set_memory_ro((unsigned long)page, 1);
+
+	return page;
+}
+
 int arch_prepare_kprobe(struct kprobe *p)
 {
 	int ret = 0;
@@ -124,11 +136,8 @@ int arch_prepare_kprobe(struct kprobe *p)
 	}
 
 	if (!ret) {
-		memcpy(p->ainsn.insn, p->addr,
-		       MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+		patch_instruction(p->ainsn.insn, *p->addr);
 		p->opcode = *p->addr;
-		flush_icache_range((unsigned long)p->ainsn.insn,
-			(unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
 	}
 
 	p->ainsn.boostable = 0;
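
To make the intended flow above explicit, a hypothetical sketch of how a
probe copy now works under this scheme follows.  It is not code from the
patch: the function name is invented, and it simply restates the pattern the
patch establishes (allocate an executable page that is immediately made
read-only, then write into it only via patch_instruction(), since a plain
memcpy() into a read-only page would fault).

#include <linux/kprobes.h>
#include <asm/code-patching.h>

static kprobe_opcode_t *example_copy_probe(kprobe_opcode_t *src)
{
	/* vmalloc_exec() + set_memory_ro(), as in alloc_insn_page() above */
	kprobe_opcode_t *slot = alloc_insn_page();

	if (!slot)
		return NULL;

	/* RO-safe store of the probed instruction into the slot */
	patch_instruction(slot, *src);

	return slot;
}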
From patchwork Wed Feb 26 06:23:58 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405345
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Russell Currey, christophe.leroy@c-s.fr, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com
Subject: [PATCH v4 3/8] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
Date: Wed, 26 Feb 2020 17:23:58 +1100
Message-Id: <20200226062403.63790-4-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

Very rudimentary, just echo 1 > [debugfs]/check_wx_pages and check the
kernel log.  Useful for testing strict module RWX.

Updated the Kconfig entry to reflect this.

Also fixed a typo.

Signed-off-by: Russell Currey
---
 arch/powerpc/Kconfig.debug      |  6 ++++--
 arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++++++++++++++++-
 2 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 0b063830eea8..e37960ef68c6 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -370,7 +370,7 @@ config PPC_PTDUMP
 	  If you are unsure, say N.
 
 config PPC_DEBUG_WX
-	bool "Warn on W+X mappings at boot"
+	bool "Warn on W+X mappings at boot & enable manual checks at runtime"
 	depends on PPC_PTDUMP && STRICT_KERNEL_RWX
 	help
 	  Generate a warning if any W+X mappings are found at boot.
@@ -384,7 +384,9 @@ config PPC_DEBUG_WX
 	  of other unfixed kernel bugs easier.
 
 	  There is no runtime or memory usage effect of this option
-	  once the kernel has booted up - it's a one time check.
+	  once the kernel has booted up, it only automatically checks once.
+
+	  Enables the "check_wx_pages" debugfs entry for checking at runtime.
 
 	  If in doubt, say "Y".
 
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 206156255247..a15e19a3b14e 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -4,7 +4,7 @@
  *
  * This traverses the kernel pagetables and dumps the
  * information about the used sections of memory to
- * /sys/kernel/debug/kernel_pagetables.
+ * /sys/kernel/debug/kernel_page_tables.
  *
  * Derived from the arm64 implementation:
  * Copyright (c) 2014, The Linux Foundation, Laura Abbott.
@@ -413,6 +413,25 @@ void ptdump_check_wx(void)
 	else
 		pr_info("Checked W+X mappings: passed, no W+X pages found\n");
 }
+
+static int check_wx_debugfs_set(void *data, u64 val)
+{
+	if (val != 1ULL)
+		return -EINVAL;
+
+	ptdump_check_wx();
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, NULL, check_wx_debugfs_set, "%llu\n");
+
+static int ptdump_check_wx_init(void)
+{
+	return debugfs_create_file("check_wx_pages", 0200, NULL,
+				   NULL, &check_wx_fops) ? 0 : -ENOMEM;
+}
+device_initcall(ptdump_check_wx_init);
 #endif
 
 static int ptdump_init(void)
From patchwork Wed Feb 26 06:23:59 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405347
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Russell Currey, christophe.leroy@c-s.fr, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com
Subject: [PATCH v4 4/8] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
Date: Wed, 26 Feb 2020 17:23:59 +1100
Message-Id: <20200226062403.63790-5-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

To enable strict module RWX on powerpc, set:

    CONFIG_STRICT_MODULE_RWX=y

You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
security benefit.

ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
makes STRICT_MODULE_RWX *on by default* in configurations where
STRICT_KERNEL_RWX is *unavailable*.

Since this doesn't make much sense, and module RWX without kernel RWX
doesn't make much sense, having the same dependencies as kernel RWX
works around this problem.

Signed-off-by: Russell Currey
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index bd074246e34e..e1fc7fba10bf 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -131,6 +131,7 @@ config PPC
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
+	select ARCH_HAS_STRICT_MODULE_RWX	if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_UACCESS_MCSAFE		if PPC64
From patchwork Wed Feb 26 06:24:00 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405349
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Russell Currey, christophe.leroy@c-s.fr, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com, Joel Stanley
Subject: [PATCH v4 5/8] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
Date: Wed, 26 Feb 2020 17:24:00 +1100
Message-Id: <20200226062403.63790-6-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
enabled, and if you want memory protection for kernel text you'd want
it for modules too, so enable STRICT_MODULE_RWX there.

Acked-by: Joel Stanley
Signed-off-by: Russell Currey
---
 arch/powerpc/configs/skiroot_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
index 1b6bdad36b13..66d20dbe67b7 100644
--- a/arch/powerpc/configs/skiroot_defconfig
+++ b/arch/powerpc/configs/skiroot_defconfig
@@ -51,6 +51,7 @@ CONFIG_CMDLINE="console=tty0 console=hvc0 ipr.fast_reboot=1 quiet"
 # CONFIG_PPC_MEM_KEYS is not set
 CONFIG_JUMP_LABEL=y
 CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODULE_SIG_FORCE=y
From patchwork Wed Feb 26 06:24:01 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405351
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Christophe Leroy, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com, kbuild test robot, Russell Currey
Subject: [PATCH v4 6/8] powerpc/mm: implement set_memory_attr()
Date: Wed, 26 Feb 2020 17:24:01 +1100
Message-Id: <20200226062403.63790-7-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

From: Christophe Leroy

In addition to the set_memory_xx() functions, which allow changing the
memory attributes of not (yet) used memory regions, implement a
set_memory_attr() function to:

- set the final memory protection after init on currently used
  kernel regions.
- enable/disable kernel memory regions in the scope of DEBUG_PAGEALLOC.

Unlike the set_memory_xx() functions, which can act in three steps
because the regions are unused, this function must modify the mappings
'on the fly' as the kernel is executing from them.  At the moment only
PPC32 will use it, and changing page attributes on the fly is not an
issue there.

Signed-off-by: Christophe Leroy
Reported-by: kbuild test robot
[ruscur: cast "data" to unsigned long instead of int]
Signed-off-by: Russell Currey
---
v4: Cast "data" to unsigned long instead of int

 arch/powerpc/include/asm/set_memory.h |  2 ++
 arch/powerpc/mm/pageattr.c            | 33 +++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
index 64011ea444b4..b040094f7920 100644
--- a/arch/powerpc/include/asm/set_memory.h
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -29,4 +29,6 @@ static inline int set_memory_x(unsigned long addr, int numpages)
 	return change_memory_attr(addr, numpages, SET_MEMORY_X);
 }
 
+int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot);
+
 #endif
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index 2b573768a7f7..ee6b5e3b7604 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -72,3 +72,36 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
 
 	return apply_to_page_range(&init_mm, start, sz, change_page_attr, (void *)action);
 }
+
+/*
+ * Set the attributes of a page:
+ *
+ * This function is used by PPC32 at the end of init to set final kernel memory
+ * protection. It includes changing the mapping of the page it is executing from
+ * and data pages it is using.
+ */
+static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	pgprot_t prot = __pgprot((unsigned long)data);
+
+	spin_lock(&init_mm.page_table_lock);
+
+	set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long sz = numpages * PAGE_SIZE;
+
+	if (!numpages)
+		return 0;
+
+	return apply_to_page_range(&init_mm, start, sz, set_page_attr,
+				   (void *)pgprot_val(prot));
+}
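
To make the API distinction above concrete, here is a minimal, hypothetical
usage sketch.  The function name and region are invented; only the
set_memory_attr() prototype and PAGE_KERNEL_RO come from the kernel sources.
Unlike set_memory_ro() and friends, which take an RO/RW/NX/X action,
set_memory_attr() takes a full pgprot_t and rewrites the live PTE in place
with pte_modify(), so it can be applied to regions the kernel is currently
executing from (the next patch in the series converts the PPC32 callers to
it).

#include <linux/set_memory.h>
#include <asm/pgtable.h>

static void example_seal_region(unsigned long addr, int numpages)
{
	/* addr must be page aligned; numpages counts PAGE_SIZE units */
	set_memory_attr(addr, numpages, PAGE_KERNEL_RO);
}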
From patchwork Wed Feb 26 06:24:02 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405353
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Christophe Leroy, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com
Subject: [PATCH v4 7/8] powerpc/32: use set_memory_attr()
Date: Wed, 26 Feb 2020 17:24:02 +1100
Message-Id: <20200226062403.63790-8-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

From: Christophe Leroy

Use set_memory_attr() instead of the PPC32 specific change_page_attr().

change_page_attr() was checking that the address was not mapped by
blocks and was handling highmem, but that's unneeded because the
affected pages can't be in highmem and block mapping verification is
already done by the callers.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/mm/pgtable_32.c | 95 ++++--------------------------------
 1 file changed, 10 insertions(+), 85 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 5fb90edd865e..3d92eaf3ee2f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -121,99 +122,20 @@ void __init mapin_ram(void)
 	}
 }
 
-/* Scan the real Linux page tables and return a PTE pointer for
- * a virtual address in a context.
- * Returns true (1) if PTE was found, zero otherwise. The pointer to
- * the PTE pointer is unmodified if PTE is not found.
- */
-static int
-get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
-{
-	pgd_t	*pgd;
-	pud_t	*pud;
-	pmd_t	*pmd;
-	pte_t	*pte;
-	int	retval = 0;
-
-	pgd = pgd_offset(mm, addr & PAGE_MASK);
-	if (pgd) {
-		pud = pud_offset(pgd, addr & PAGE_MASK);
-		if (pud && pud_present(*pud)) {
-			pmd = pmd_offset(pud, addr & PAGE_MASK);
-			if (pmd_present(*pmd)) {
-				pte = pte_offset_map(pmd, addr & PAGE_MASK);
-				if (pte) {
-					retval = 1;
-					*ptep = pte;
-					if (pmdp)
-						*pmdp = pmd;
-					/* XXX caller needs to do pte_unmap, yuck */
-				}
-			}
-		}
-	}
-	return(retval);
-}
-
-static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
-{
-	pte_t *kpte;
-	pmd_t *kpmd;
-	unsigned long address;
-
-	BUG_ON(PageHighMem(page));
-	address = (unsigned long)page_address(page);
-
-	if (v_block_mapped(address))
-		return 0;
-	if (!get_pteptr(&init_mm, address, &kpte, &kpmd))
-		return -EINVAL;
-	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-	pte_unmap(kpte);
-
-	return 0;
-}
-
-/*
- * Change the page attributes of an page in the linear mapping.
- *
- * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
- */
-static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
-{
-	int i, err = 0;
-	unsigned long flags;
-	struct page *start = page;
-
-	local_irq_save(flags);
-	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr_noflush(page, prot);
-		if (err)
-			break;
-	}
-	wmb();
-	local_irq_restore(flags);
-	flush_tlb_kernel_range((unsigned long)page_address(start),
-			       (unsigned long)page_address(page));
-	return err;
-}
-
 void mark_initmem_nx(void)
 {
-	struct page *page = virt_to_page(_sinittext);
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
 	if (v_block_mapped((unsigned long)_stext + 1))
 		mmu_mark_initmem_nx();
 	else
-		change_page_attr(page, numpages, PAGE_KERNEL);
+		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
 void mark_rodata_ro(void)
 {
-	struct page *page;
 	unsigned long numpages;
 
 	if (v_block_mapped((unsigned long)_sinittext)) {
@@ -222,20 +144,18 @@ void mark_rodata_ro(void)
 		return;
 	}
 
-	page = virt_to_page(_stext);
 	numpages = PFN_UP((unsigned long)_etext) -
 		   PFN_DOWN((unsigned long)_stext);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
 
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
-	page = virt_to_page(__start_rodata);
 	numpages = PFN_UP((unsigned long)__init_begin) -
 		   PFN_DOWN((unsigned long)__start_rodata);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
 
 	// mark_initmem_nx() should have already run by now
 	ptdump_check_wx();
@@ -245,9 +165,14 @@ void mark_rodata_ro(void)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	unsigned long addr = (unsigned long)page_address(page);
+
 	if (PageHighMem(page))
 		return;
 
-	change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
+	if (enable)
+		set_memory_attr(addr, numpages, PAGE_KERNEL);
+	else
+		set_memory_attr(addr, numpages, __pgprot(0));
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
From patchwork Wed Feb 26 06:24:03 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11405355
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Russell Currey, christophe.leroy@c-s.fr, joel@jms.id.au, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net, npiggin@gmail.com, kernel-hardening@lists.openwall.com, Jordan Niethe
Subject: [PATCH v4 8/8] powerpc/mm: Disable set_memory() routines when strict RWX isn't enabled
Date: Wed, 26 Feb 2020 17:24:03 +1100
Message-Id: <20200226062403.63790-9-ruscur@russell.cc>
In-Reply-To: <20200226062403.63790-1-ruscur@russell.cc>

There are a couple of reasons that the set_memory() functions are
problematic when STRICT_KERNEL_RWX isn't enabled:

- The linear mapping is a different size and apply_to_page_range()
  may modify a giant section, breaking everything
- patch_instruction() doesn't know to work around a page being marked
  RO, and will subsequently crash

The latter can be replicated by building a kernel with the set_memory()
patches but with STRICT_KERNEL_RWX off and running ftracetest.

Reported-by: Jordan Niethe
Signed-off-by: Russell Currey
---
v4: new

 arch/powerpc/mm/pageattr.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index ee6b5e3b7604..ff111930cf5e 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -96,12 +96,17 @@ static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
 
 int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
 {
-	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
-	unsigned long sz = numpages * PAGE_SIZE;
+	unsigned long start, size;
+
+	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
+		return 0;
 
 	if (!numpages)
 		return 0;
 
-	return apply_to_page_range(&init_mm, start, sz, set_page_attr,
+	start = ALIGN_DOWN(addr, PAGE_SIZE);
+	size = numpages * PAGE_SIZE;
+
+	return apply_to_page_range(&init_mm, start, size, set_page_attr,
 				   (void *)pgprot_val(prot));
 }