From patchwork Thu Feb 21 09:35:55 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH 1/7] powerpc: Add framework for Kernel Userspace Protection
Date: Thu, 21 Feb 2019 20:35:55 +1100
Message-Id: <20190221093601.27920-2-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

From: Christophe Leroy

This patch adds a skeleton for Kernel Userspace Protection
functionalities such as Kernel Userspace Access Protection (KUAP) and
Kernel Userspace Execution Prevention (KUEP).

The subsequent implementation of KUAP for radix makes use of an MMU
feature in order to patch out assembly when KUAP is disabled or
unsupported. This won't work unless there is an entry point for KUP
support before the feature magic happens, so for PPC64 setup_kup() is
called early in setup. On PPC32, feature fixups are done too early to
allow the same.

Suggested-by: Russell Currey
Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/kup.h | 11 +++++++++++
 arch/powerpc/kernel/setup_64.c |  7 +++++++
 arch/powerpc/mm/init-common.c  |  5 +++++
 arch/powerpc/mm/init_32.c      |  3 +++
 4 files changed, 26 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kup.h

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
new file mode 100644
index 000000000000..7a88b8b9b54d
--- /dev/null
+++ b/arch/powerpc/include/asm/kup.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_H_
+#define _ASM_POWERPC_KUP_H_
+
+#ifndef __ASSEMBLY__
+
+void setup_kup(void);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 236c1151a3a7..771f280a6bf6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -68,6 +68,7 @@
 #include
 #include
 #include
+#include

 #include "setup.h"

@@ -331,6 +332,12 @@ void __init early_setup(unsigned long dt_ptr)
	 */
	configure_exceptions();

+	/*
+	 * Configure Kernel Userspace Protection. This needs to happen before
+	 * feature fixups for platforms that implement this using features.
+	 */
+	setup_kup();
+
	/* Apply all the dynamic patching */
	apply_feature_fixups();
	setup_feature_keys();
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 1e6910eb70ed..36d28e872289 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -24,6 +24,11 @@
 #include
 #include
 #include
+#include
+
+void __init setup_kup(void)
+{
+}

 #define CTOR(shift) static void ctor_##shift(void *addr) \
 { \
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 3e59e5d64b01..93cfa8cf015d 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -45,6 +45,7 @@
 #include
 #include
 #include
+#include

 #include "mmu_decl.h"

@@ -182,6 +183,8 @@ void __init MMU_init(void)
	btext_unmap();
 #endif

+	setup_kup();
+
	/* Shortly after that, the entire linear mapping will be available */
	memblock_set_current_limit(lowmem_end_addr);
 }
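
The ordering above is the point of this patch: setup_kup() is called from
early_setup() before apply_feature_fixups(), so the KUP code it reaches can
still register an MMU feature bit that the fixups will then use to patch the
protection assembly in or out (the radix KUAP patch later in this series does
exactly that). A minimal illustrative sketch of such a hook follows; the
feature bit and the availability check are hypothetical placeholders, not
code from this series:

#include <linux/init.h>
#include <asm/cputable.h>

/*
 * Illustrative sketch only: a hypothetical routine, reached via setup_kup(),
 * that advertises KUP support through an MMU feature bit before the feature
 * fixups run. MMU_FTR_EXAMPLE_KUP and example_kup_supported() do not exist;
 * they stand in for whatever a real subarch would use.
 */
static void __init example_kup_init(void)
{
	if (example_kup_supported())
		cur_cpu_spec->mmu_features |= MMU_FTR_EXAMPLE_KUP;
}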
From patchwork Thu Feb 21 09:35:56 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com
Subject: [PATCH 2/7] powerpc: Add skeleton for Kernel Userspace Execution Prevention
Date: Thu, 21 Feb 2019 20:35:56 +1100
Message-Id: <20190221093601.27920-3-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

From: Christophe Leroy

This patch adds a skeleton for Kernel Userspace Execution Prevention.

Subarches implementing it then have to define CONFIG_PPC_HAVE_KUEP and
provide a setup_kuep() function.

Signed-off-by: Christophe Leroy
---
 Documentation/admin-guide/kernel-parameters.txt |  2 +-
 arch/powerpc/include/asm/kup.h                  |  6 ++++++
 arch/powerpc/mm/fault.c                         |  3 ++-
 arch/powerpc/mm/init-common.c                   | 11 +++++++++++
 arch/powerpc/platforms/Kconfig.cputype          | 12 ++++++++++++
 5 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 858b6c0b9a15..8448a56a9eb9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2812,7 +2812,7 @@
			Disable SMAP (Supervisor Mode Access Prevention)
			even if it is supported by processor.

-	nosmep		[X86]
+	nosmep		[X86,PPC]
			Disable SMEP (Supervisor Mode Execution Prevention)
			even if it is supported by processor.

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 7a88b8b9b54d..af4b5f854ca4 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -6,6 +6,12 @@

 void setup_kup(void);

+#ifdef CONFIG_PPC_KUEP
+void setup_kuep(bool disabled);
+#else
+static inline void setup_kuep(bool disabled) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */

 #endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 887f11bcf330..ad65e336721a 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -230,8 +230,9 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
				      DSISR_PROTFAULT))) {
		printk_ratelimited(KERN_CRIT "kernel tried to execute"
-				   " exec-protected page (%lx) -"
+				   " %s page (%lx) -"
				   "exploit attempt? (uid: %d)\n",
+				   address >= TASK_SIZE ?
"exec-protected" : "user", address, from_kuid(&init_user_ns, current_uid())); } diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c index 36d28e872289..83f95a5565d6 100644 --- a/arch/powerpc/mm/init-common.c +++ b/arch/powerpc/mm/init-common.c @@ -26,8 +26,19 @@ #include #include +static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP); + +static int __init parse_nosmep(char *p) +{ + disable_kuep = true; + pr_warn("Disabling Kernel Userspace Execution Prevention\n"); + return 0; +} +early_param("nosmep", parse_nosmep); + void __init setup_kup(void) { + setup_kuep(disable_kuep); } #define CTOR(shift) static void ctor_##shift(void *addr) \ diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype index 8c7464c3f27f..410c3443652c 100644 --- a/arch/powerpc/platforms/Kconfig.cputype +++ b/arch/powerpc/platforms/Kconfig.cputype @@ -339,6 +339,18 @@ config PPC_RADIX_MMU_DEFAULT If you're unsure, say Y. +config PPC_HAVE_KUEP + bool + +config PPC_KUEP + bool "Kernel Userspace Execution Prevention" + depends on PPC_HAVE_KUEP + default y + help + Enable support for Kernel Userspace Execution Prevention (KUEP) + + If you're unsure, say Y. + config ARCH_ENABLE_HUGEPAGE_MIGRATION def_bool y depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION From patchwork Thu Feb 21 09:35:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Russell Currey X-Patchwork-Id: 10823379 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8524D1390 for ; Thu, 21 Feb 2019 09:37:08 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 712D82EDE5 for ; Thu, 21 Feb 2019 09:37:08 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 649B030274; Thu, 21 Feb 2019 09:37:08 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id 8659D2EDE5 for ; Thu, 21 Feb 2019 09:37:03 +0000 (UTC) Received: (qmail 25735 invoked by uid 550); 21 Feb 2019 09:36:45 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 25628 invoked from network); 21 Feb 2019 09:36:45 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=russell.cc; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm1; bh=ZVTHoD/t3tK+K KHNBT3SqX7xLQQ2bgPk2OD041Qv3R8=; b=cCfefc+13NvrCt8Nd2Cavu4cz76+y eVqKW2zruA1QR7ZgD9vx1Yv+rV8HPVe7XQoOommfHo66mWRT8YyFzMi4eKGMV/O6 Xa2gM7pBGsQixCLUjeNjbbpIWu8W9QYx4MUU/nJueyLqHxImYQgu4TE+QnJZXGYz txZtMKRpwbQDMhE/sNrat96dNkslFoWs+Vhg6s9UCwoKlAE9FKE2aK38Z20bGtfS PFoqOtSMwt+OFnuOsXt4Lei+uG/Ww3ZYn4G+BA13ytYEaZKBRNjwswwmgyDP7ntT vNAIr4Ud1LfYssXWG2kW4Ll3KJsAkfKpYEPmGrXkAWMG55zZnsl3DyvbA== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from 
From patchwork Thu Feb 21 09:35:57 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection
Date: Thu, 21 Feb 2019 20:35:57 +1100
Message-Id: <20190221093601.27920-4-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

From: Christophe Leroy

This patch implements a framework for Kernel Userspace Access Protection.

Subarches then have the possibility to provide their own implementation
by providing setup_kuap() and lock/unlock_user_access().

Some platforms will need to know the area accessed and whether it is
accessed for read, write or both. Therefore the source, destination and
size are handed over to the two functions.

Signed-off-by: Christophe Leroy
[ruscur: remove 64-bit exception handling code]
Signed-off-by: Russell Currey
---
 .../admin-guide/kernel-parameters.txt    |  2 +-
 arch/powerpc/include/asm/exception-64e.h |  3 ++
 arch/powerpc/include/asm/exception-64s.h |  3 ++
 arch/powerpc/include/asm/futex.h         |  4 ++
 arch/powerpc/include/asm/kup.h           | 21 ++++++++++
 arch/powerpc/include/asm/paca.h          |  3 ++
 arch/powerpc/include/asm/processor.h     |  3 ++
 arch/powerpc/include/asm/ptrace.h        |  3 ++
 arch/powerpc/include/asm/uaccess.h       | 38 +++++++++++++++----
 arch/powerpc/kernel/asm-offsets.c        |  7 ++++
 arch/powerpc/kernel/entry_32.S           |  8 +++-
 arch/powerpc/kernel/process.c            |  3 ++
 arch/powerpc/lib/checksum_wrappers.c     |  4 ++
 arch/powerpc/mm/fault.c                  | 17 +++++++--
 arch/powerpc/mm/init-common.c            | 10 +++++
 arch/powerpc/platforms/Kconfig.cputype   | 12 ++++++
 16 files changed, 127 insertions(+), 14 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 8448a56a9eb9..0a76dbb39011 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2808,7 +2808,7 @@
			noexec=on: enable non-executable mappings (default)
			noexec=off: disable non-executable mappings

-	nosmap		[X86]
+	nosmap		[X86,PPC]
			Disable SMAP (Supervisor Mode Access Prevention)
			even if it is supported by processor.
diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
index 555e22d5e07f..bf25015834ee 100644
--- a/arch/powerpc/include/asm/exception-64e.h
+++ b/arch/powerpc/include/asm/exception-64e.h
@@ -215,5 +215,8 @@ exc_##label##_book3e:
 #define RFI_TO_USER							\
	rfi

+#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)
+
 #endif /* _ASM_POWERPC_EXCEPTION_64E_H */
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 3b4767ed3ec5..8ae2d7855cfe 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -264,6 +264,9 @@ BEGIN_FTR_SECTION_NESTED(943)					\
	std	ra,offset(r13);						\
 END_FTR_SECTION_NESTED(ftr,ftr,943)

+#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)
+
 #define EXCEPTION_PROLOG_0(area)					\
	GET_PACA(r13);							\
	std	r9,area+EX_R9(r13);	/* save r9 */			\
diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h
index 88b38b37c21b..f85facf3739b 100644
--- a/arch/powerpc/include/asm/futex.h
+++ b/arch/powerpc/include/asm/futex.h
@@ -35,6 +35,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
 {
	int oldval = 0, ret;

+	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
	pagefault_disable();

	switch (op) {
@@ -62,6 +63,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
	if (!ret)
		*oval = oldval;

+	lock_user_access(uaddr, NULL, sizeof(*uaddr));
	return ret;
 }

@@ -75,6 +77,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
	if (!access_ok(uaddr, sizeof(u32)))
		return -EFAULT;

+	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
	__asm__ __volatile__ (
	PPC_ATOMIC_ENTRY_BARRIER
 "1:	lwarx	%1,0,%3		# futex_atomic_cmpxchg_inatomic\n\
@@ -95,6 +98,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
	: "cc", "memory");

	*uval = prev;
+	lock_user_access(uaddr, NULL, sizeof(*uaddr));
	return ret;
 }

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index af4b5f854ca4..2ac540fb488f 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -4,6 +4,8 @@

 #ifndef __ASSEMBLY__

+#include
+
 void setup_kup(void);

 #ifdef CONFIG_PPC_KUEP
@@ -12,6 +14,25 @@ void setup_kuep(bool disabled);
 static inline void setup_kuep(bool disabled) { }
 #endif

+#ifdef CONFIG_PPC_KUAP
+void setup_kuap(bool disabled);
+#else
+static inline void setup_kuap(bool disabled) { }
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				       unsigned long size) { }
+static inline void lock_user_access(void __user *to, const void __user *from,
+				     unsigned long size) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */

+#ifndef CONFIG_PPC_KUAP
+
+#ifdef CONFIG_PPC32
+#define LOCK_USER_ACCESS(val, sp, sr, srmax, curr)
+#define REST_USER_ACCESS(val, sp, sr, srmax, curr)
+#endif
+
+#endif
+
 #endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index e843bc5d1a0f..56236f6d8c89 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -169,6 +169,9 @@ struct paca_struct {
	u64 saved_r1;			/* r1 save for RTAS calls or PM or EE=0 */
	u64 saved_msr;			/* MSR saved here by enter_rtas */
	u16 trap_save;			/* Used when bad stack is encountered */
+#ifdef CONFIG_PPC_KUAP
+	u8 user_access_allowed;		/* can the kernel access user memory? */
+#endif
	u8 irq_soft_mask;		/* mask for irq soft masking */
	u8 irq_happened;		/* irq happened while soft-disabled */
	u8 io_sync;			/* writel() needs spin_unlock sync */
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index ee58526cb6c2..4a9a10e86828 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -250,6 +250,9 @@ struct thread_struct {
 #ifdef CONFIG_PPC32
	void		*pgdir;		/* root of page-table tree */
	unsigned long	ksp_limit;	/* if ksp <= ksp_limit stack overflow */
+#ifdef CONFIG_PPC_KUAP
+	unsigned long	kuap;		/* state of user access protection */
+#endif
 #endif
	/* Debug Registers */
	struct debug_reg debug;
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 0b8a735b6d85..0321ba5b3d12 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -55,6 +55,9 @@ struct pt_regs
 #ifdef CONFIG_PPC64
	unsigned long ppr;
	unsigned long __pad;	/* Maintain 16 byte interrupt stack alignment */
+#elif defined(CONFIG_PPC_KUAP)
+	unsigned long kuap;
+	unsigned long __pad[3];	/* Maintain 16 byte interrupt stack alignment */
 #endif
 };
 #endif
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index e3a731793ea2..9db6a4c7c058 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include

 /*
  * The fs value determines whether argument validity checking should be
@@ -141,6 +142,7 @@ extern long __put_user_bad(void);
 #define __put_user_size(x, ptr, size, retval)			\
 do {								\
	retval = 0;						\
+	unlock_user_access(ptr, NULL, size);			\
	switch (size) {						\
	  case 1: __put_user_asm(x, ptr, retval, "stb"); break;	\
	  case 2: __put_user_asm(x, ptr, retval, "sth"); break;	\
@@ -148,6 +150,7 @@ do {								\
	  case 8: __put_user_asm2(x, ptr, retval); break;	\
	  default: __put_user_bad();				\
	}							\
+	lock_user_access(ptr, NULL, size);			\
 } while (0)

 #define __put_user_nocheck(x, ptr, size)			\
@@ -240,6 +243,7 @@ do {								\
	__chk_user_ptr(ptr);					\
	if (size > sizeof(x))					\
		(x) = __get_user_bad();				\
+	unlock_user_access(NULL, ptr, size);			\
	switch (size) {						\
	case 1: __get_user_asm(x, ptr, retval, "lbz"); break;	\
	case 2: __get_user_asm(x, ptr, retval, "lhz"); break;	\
@@ -247,6 +251,7 @@ do {								\
	case 8: __get_user_asm2(x, ptr, retval);  break;	\
	default: (x) = __get_user_bad();			\
	}							\
+	lock_user_access(NULL, ptr, size);			\
 } while (0)

 /*
@@ -306,15 +311,21 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long raw_copy_in_user(void __user *to,
		const void __user *from, unsigned long n)
 {
-	return __copy_tofrom_user(to, from, n);
+	unsigned long ret;
+
+	unlock_user_access(to, from, n);
+	ret = __copy_tofrom_user(to, from, n);
+	lock_user_access(to, from, n);
+	return ret;
 }
 #endif /* __powerpc64__ */

 static inline unsigned long raw_copy_from_user(void *to,
		const void __user *from, unsigned long n)
 {
+	unsigned long ret;
	if (__builtin_constant_p(n) && (n <= 8)) {
-		unsigned long ret = 1;
+		ret = 1;

		switch (n) {
		case 1:
@@ -339,14 +350,18 @@ static inline unsigned long raw_copy_from_user(void *to,
	}

	barrier_nospec();
-	return __copy_tofrom_user((__force void __user *)to, from, n);
+	unlock_user_access(NULL, from, n);
+	ret = __copy_tofrom_user((__force void __user *)to, from, n);
+	lock_user_access(NULL, from, n);
+	return ret;
 }

 static inline unsigned long raw_copy_to_user(void __user *to,
		const void *from, unsigned long n)
 {
+	unsigned long ret;
	if (__builtin_constant_p(n) &&
	    (n <= 8)) {
-		unsigned long ret = 1;
+		ret = 1;

		switch (n) {
		case 1:
@@ -366,17 +381,24 @@ static inline unsigned long raw_copy_to_user(void __user *to,
		return 0;
	}

-	return __copy_tofrom_user(to, (__force const void __user *)from, n);
+	unlock_user_access(to, NULL, n);
+	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
+	lock_user_access(to, NULL, n);
+	return ret;
 }

 extern unsigned long __clear_user(void __user *addr, unsigned long size);

 static inline unsigned long clear_user(void __user *addr, unsigned long size)
 {
+	unsigned long ret = size;
	might_fault();
-	if (likely(access_ok(addr, size)))
-		return __clear_user(addr, size);
-	return size;
+	if (likely(access_ok(addr, size))) {
+		unlock_user_access(addr, NULL, size);
+		ret = __clear_user(addr, size);
+		lock_user_access(addr, NULL, size);
+	}
+	return ret;
 }

 extern long strncpy_from_user(char *dst, const char __user *src, long count);
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 9ffc72ded73a..98e94299e728 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -93,6 +93,9 @@ int main(void)
	OFFSET(THREAD_INFO, task_struct, stack);
	DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16));
	OFFSET(KSP_LIMIT, thread_struct, ksp_limit);
+#ifdef CONFIG_PPC_KUAP
+	OFFSET(KUAP, thread_struct, kuap);
+#endif
 #endif /* CONFIG_PPC64 */

 #ifdef CONFIG_LIVEPATCH
@@ -260,6 +263,7 @@ int main(void)
	OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user);
	OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime);
	OFFSET(ACCOUNT_SYSTEM_TIME, paca_struct, accounting.stime);
+	OFFSET(PACA_USER_ACCESS_ALLOWED, paca_struct, user_access_allowed);
	OFFSET(PACA_TRAP_SAVE, paca_struct, trap_save);
	OFFSET(PACA_NAPSTATELOST, paca_struct, nap_state_lost);
	OFFSET(PACA_SPRG_VDSO, paca_struct, sprg_vdso);
@@ -320,6 +324,9 @@ int main(void)
	 */
	STACK_PT_REGS_OFFSET(_DEAR, dar);
	STACK_PT_REGS_OFFSET(_ESR, dsisr);
+#ifdef CONFIG_PPC_KUAP
+	STACK_PT_REGS_OFFSET(_KUAP, kuap);
+#endif
 #else /* CONFIG_PPC64 */
	STACK_PT_REGS_OFFSET(SOFTE, softe);
	STACK_PT_REGS_OFFSET(_PPR, ppr);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 0768dfd8a64e..f8f2268f8b12 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include

 /*
  * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -156,6 +157,7 @@ transfer_to_handler:
	stw	r12,_CTR(r11)
	stw	r2,_XER(r11)
	mfspr	r12,SPRN_SPRG_THREAD
+	LOCK_USER_ACCESS(r2, r11, r9, r0, r12)
	addi	r2,r12,-THREAD
	tovirt(r2,r2)			/* set r2 to current */
	beq	2f			/* if from user, fix up THREAD.regs */
@@ -442,6 +444,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
	ACCOUNT_CPU_USER_EXIT(r4, r5, r7)
 3:
 #endif
+	REST_USER_ACCESS(r7, r1, r4, r5, r2)
	lwz	r4,_LINK(r1)
	lwz	r5,_CCR(r1)
	mtlr	r4
@@ -739,7 +742,8 @@ fast_exception_return:
	beq	1f			/* if not, we've got problems */
 #endif

-2:	REST_4GPRS(3, r11)
+2:	REST_USER_ACCESS(r3, r11, r4, r5, r2)
+	REST_4GPRS(3, r11)
	lwz	r10,_CCR(r11)
	REST_GPR(1, r11)
	mtcr	r10
@@ -957,6 +961,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_47x)
 1:
 #endif /* CONFIG_TRACE_IRQFLAGS */

+	REST_USER_ACCESS(r3, r1, r4, r5, r2)
+
	lwz	r0,GPR0(r1)
	lwz	r2,GPR2(r1)
	REST_4GPRS(3, r1)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ce393df243aa..995ef82a583d 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1771,6 +1771,9 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
	regs->mq = 0;
	regs->nip = start;
	regs->msr = MSR_USER;
+#ifdef CONFIG_PPC_KUAP
+	regs->kuap = KUAP_START;
+#endif
 #else
	if (!is_32bit_task()) {
		unsigned long entry;
diff --git a/arch/powerpc/lib/checksum_wrappers.c b/arch/powerpc/lib/checksum_wrappers.c
index 890d4ddd91d6..48e0c91e0083 100644
--- a/arch/powerpc/lib/checksum_wrappers.c
+++ b/arch/powerpc/lib/checksum_wrappers.c
@@ -28,6 +28,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 {
	unsigned int csum;

+	unlock_user_access(NULL, src, len);
	might_sleep();
	*err_ptr = 0;
@@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
	}

 out:
+	lock_user_access(NULL, src, len);
	return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_from_user);
@@ -69,6 +71,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
 {
	unsigned int csum;

+	unlock_user_access(dst, NULL, len);
	might_sleep();
	*err_ptr = 0;
@@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
	}

 out:
+	lock_user_access(dst, NULL, len);
	return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_to_user);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ad65e336721a..5f4a5391f41e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -223,9 +223,11 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
 }

 /* Is this a bad kernel fault ? */
-static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
+static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
			     unsigned long address)
 {
+	int is_exec = TRAP(regs) == 0x400;
+
	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
				      DSISR_PROTFAULT))) {
@@ -236,7 +238,13 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
				   address, from_kuid(&init_user_ns, current_uid()));
	}

-	return is_exec || (address >= TASK_SIZE);
+	if (!is_exec && address < TASK_SIZE && (error_code & DSISR_PROTFAULT) &&
+	    !search_exception_tables(regs->nip))
+		printk_ratelimited(KERN_CRIT "Kernel attempted to access user"
+				   " page (%lx) - exploit attempt?
+				      (uid: %d)\n",
+				   address, from_kuid(&init_user_ns,
+						      current_uid()));
+	return is_exec || (address >= TASK_SIZE) || !search_exception_tables(regs->nip);
 }

 static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
@@ -456,9 +464,10 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,

	/*
	 * The kernel should never take an execute fault nor should it
-	 * take a page fault to a kernel address.
+	 * take a page fault to a kernel address or a page fault to a user
+	 * address outside of dedicated places
	 */
-	if (unlikely(!is_user && bad_kernel_fault(is_exec, error_code, address)))
+	if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address)))
		return SIGSEGV;

	/*
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 83f95a5565d6..ecaedfff9992 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -27,6 +27,7 @@
 #include

 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
+static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);

 static int __init parse_nosmep(char *p)
 {
@@ -36,9 +37,18 @@ static int __init parse_nosmep(char *p)
 }
 early_param("nosmep", parse_nosmep);

+static int __init parse_nosmap(char *p)
+{
+	disable_kuap = true;
+	pr_warn("Disabling Kernel Userspace Access Protection\n");
+	return 0;
+}
+early_param("nosmap", parse_nosmap);
+
 void __init setup_kup(void)
 {
	setup_kuep(disable_kuep);
+	setup_kuap(disable_kuap);
 }

 #define CTOR(shift) static void ctor_##shift(void *addr) \
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 410c3443652c..7fa5ddbdce12 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -351,6 +351,18 @@ config PPC_KUEP

	  If you're unsure, say Y.

+config PPC_HAVE_KUAP
+	bool
+
+config PPC_KUAP
+	bool "Kernel Userspace Access Protection"
+	depends on PPC_HAVE_KUAP
+	default y
+	help
+	  Enable support for Kernel Userspace Access Protection (KUAP)
+
+	  If you're unsure, say Y.
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
	def_bool y
	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
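
The calling convention is visible in the uaccess.h hunks above: accessors
hand the user destination, the user source (either may be NULL) and the size
to the hooks, so an implementation can open only the region and direction it
actually needs. As a compact restatement of the pattern (the same shape as
raw_copy_to_user() in this patch, not a new kernel API):

/*
 * Illustrative restatement of the raw_copy_to_user() pattern above:
 * user access is only unlocked around the low-level copy itself.
 */
static inline unsigned long example_copy_out(void __user *to,
					     const void *from,
					     unsigned long n)
{
	unsigned long ret;

	unlock_user_access(to, NULL, n);	/* open user write access */
	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
	lock_user_access(to, NULL, n);		/* and close it again */

	return ret;
}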
From patchwork Thu Feb 21 09:35:58 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH 4/7] powerpc/64: Setup KUP on secondary CPUs
Date: Thu, 21 Feb 2019 20:35:58 +1100
Message-Id: <20190221093601.27920-5-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

Some platforms (e.g. the Radix MMU) need per-CPU initialisation for KUP.
Any platform that only wants to do KUP initialisation once globally can
just check whether it is running on the boot CPU, or check whether the
setup it needs has already been performed.

Note that this is only for 64-bit.

Signed-off-by: Russell Currey
---
 arch/powerpc/kernel/setup_64.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 771f280a6bf6..17921ee7f3a0 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -390,6 +390,9 @@ void early_setup_secondary(void)
	/* Initialize the hash table or TLB handling */
	early_init_mmu_secondary();

+	/* Perform any KUP setup that is per-cpu */
+	setup_kup();
+
	/*
	 * At this point, we can let interrupts switch to virtual mode
	 * (the MMU has been setup), so adjust the MSR in the PACA to
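
Since setup_kup() now runs on every CPU, an implementation with a mix of
global and per-CPU work can simply split on the boot CPU, which is the
pattern the radix KUAP patch later in this series uses. Sketched out below;
the body paraphrases that later patch rather than adding anything new:

/* Paraphrase of the radix setup_kuap() from patch 7 of this series. */
void __init setup_kuap(bool disabled)
{
	if (disabled || !early_radix_enabled())
		return;

	/* One-time, global work is done from the boot CPU only... */
	if (smp_processor_id() == boot_cpuid)
		pr_info("Activating Kernel Userspace Access Prevention\n");

	/* ...while every CPU, boot and secondary alike, programs its own SPR. */
	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
}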
From patchwork Thu Feb 21 09:35:59 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH 5/7] powerpc/mm/radix: Use KUEP API for Radix MMU
Date: Thu, 21 Feb 2019 20:35:59 +1100
Message-Id: <20190221093601.27920-6-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

Execution protection already exists on radix; this patch just refactors
the radix init to provide the KUEP setup function instead. Thus, the
only functional change is that it can now be disabled.

Signed-off-by: Russell Currey
---
 arch/powerpc/mm/pgtable-radix.c        | 12 +++++++++---
 arch/powerpc/platforms/Kconfig.cputype |  1 +
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 931156069a81..224bcd4be5ae 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -535,8 +535,15 @@ static void radix_init_amor(void)
	mtspr(SPRN_AMOR, (3ul << 62));
 }

-static void radix_init_iamr(void)
+#ifdef CONFIG_PPC_KUEP
+void __init setup_kuep(bool disabled)
 {
+	if (disabled || !early_radix_enabled())
+		return;
+
+	if (smp_processor_id() == boot_cpuid)
+		pr_info("Activating Kernel Userspace Execution Prevention\n");
+
	/*
	 * Radix always uses key0 of the IAMR to determine if an access is
	 * allowed.
	 *   We set bit 0 (IBM bit 1) of key0, to prevent instruction
@@ -544,6 +551,7 @@
	 */
	mtspr(SPRN_IAMR, (1ul << 62));
 }
+#endif

 void __init radix__early_init_mmu(void)
 {
@@ -605,7 +613,6 @@ void __init radix__early_init_mmu(void)

	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);

-	radix_init_iamr();
	radix_init_pgtable();
	/* Switch to the guard PID before turning on MMU */
	radix__switch_mmu_context(NULL, &init_mm);
@@ -627,7 +634,6 @@ void radix__early_init_mmu_secondary(void)
			__pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
		radix_init_amor();
	}
-	radix_init_iamr();
	radix__switch_mmu_context(NULL, &init_mm);

	if (cpu_has_feature(CPU_FTR_HVMODE))
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 7fa5ddbdce12..25cc7d36b27d 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -320,6 +320,7 @@ config PPC_RADIX_MMU
	bool "Radix MMU Support"
	depends on PPC_BOOK3S_64
	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
+	select PPC_HAVE_KUEP
	default y
	help
	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
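
The comment above uses IBM (most-significant-bit-first) numbering: key 0 owns
the top two bits of the 64-bit IAMR, and the bit being set is IBM bit 1,
which is 1ul << 62 in conventional least-significant-bit-first numbering. A
self-contained check of that arithmetic (illustrative only):

#include <stdio.h>

int main(void)
{
	/* IBM bit n of a 64-bit register is bit (63 - n) in LSB-first
	 * numbering, so IBM bit 1 of the IAMR is 1UL << 62. */
	unsigned long long iamr_key0_exec_disable = 1ULL << (63 - 1);

	printf("%#llx\n", iamr_key0_exec_disable);	/* 0x4000000000000000 */
	return 0;
}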
From patchwork Thu Feb 21 09:36:00 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH 6/7] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
Date: Thu, 21 Feb 2019 20:36:00 +1100
Message-Id: <20190221093601.27920-7-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

__patch_instruction() is called in early boot, and uses
__put_user_size(), which includes the locks and unlocks for KUAP, which
could either be called too early, or in the Radix case, forced to use
"early_" versions of functions just to safely handle this one case.

__put_user_asm() does not do this, and thus is safe to use both in early
boot, and later on since in this case it should only ever be touching
kernel memory.

__patch_instruction() was previously refactored to use __put_user_size()
in order to be able to return -EFAULT, which would allow the kernel to
patch instructions in userspace, which should never happen. This has the
functional change of causing faults on userspace addresses if KUAP is
turned on, which should never happen in practice.

A future enhancement could be to double check the patch address is
definitely allowed to be tampered with by the kernel.
Signed-off-by: Russell Currey
---
 arch/powerpc/lib/code-patching.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 506413a2c25e..42fdadac6587 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -26,9 +26,9 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
			       unsigned int *patch_addr)
 {
-	int err;
+	int err = 0;

-	__put_user_size(instr, patch_addr, 4, err);
+	__put_user_asm(instr, patch_addr, err, "stw");
	if (err)
		return err;
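
With the uaccess.h change from patch 3, __put_user_size() brackets the store
with the KUAP lock/unlock hooks, which is exactly what cannot run this early
in boot; __put_user_asm() is just the store plus an exception-table fixup
that sets the error variable on a fault. A simplified view of the difference
(not the literal macro expansions):

/* Simplified sketch of the two code-patching store paths. */
static int example_patch_store(unsigned int instr, unsigned int *patch_addr)
{
	int err = 0;

	/*
	 * Old path, __put_user_size(instr, patch_addr, 4, err), roughly:
	 *   unlock_user_access(patch_addr, NULL, 4);  <- KUAP hook, too early in boot
	 *   __put_user_asm(instr, patch_addr, err, "stw");
	 *   lock_user_access(patch_addr, NULL, 4);    <- KUAP hook
	 *
	 * New path is only the raw store with exception handling:
	 */
	__put_user_asm(instr, patch_addr, err, "stw");

	return err;
}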
From patchwork Thu Feb 21 09:36:01 2019
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au, npiggin@gmail.com, christophe.leroy@c-s.fr,
    kernel-hardening@lists.openwall.com, Russell Currey
Subject: [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU
Date: Thu, 21 Feb 2019 20:36:01 +1100
Message-Id: <20190221093601.27920-8-ruscur@russell.cc>
In-Reply-To: <20190221093601.27920-1-ruscur@russell.cc>

Kernel Userspace Access Prevention utilises a feature of the Radix MMU
which disallows read and write access to userspace addresses. By
utilising this, the kernel is prevented from accessing user data from
outside of trusted paths that perform proper safety checks, such as
copy_{to/from}_user() and friends.

Userspace access is disabled from early boot and is only enabled when
performing an operation like copy_{to/from}_user(). The register that
controls this (AMR) does not prevent userspace from accessing other
userspace, so there is no need to save and restore when entering and
exiting userspace.

This feature has a slight performance impact which I roughly measured to
be 3% slower in the worst case (performing 1GB of 1 byte read()/write()
syscalls), and is gated behind the CONFIG_PPC_KUAP option for
performance-critical builds.

This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y)
and performing the following:

  # (echo ACCESS_USERSPACE) > [debugfs]/provoke-crash/DIRECT

If enabled, this should send SIGSEGV to the thread.

A big limitation of the current implementation is that user access is
left unlocked if an exception is taken while user access is unlocked
(i.e. if an interrupt is taken during copy_to_user()). This should be
resolved in future, and is why the state is tracked in the PACA even
though nothing currently uses it.

Signed-off-by: Russell Currey
---
 .../powerpc/include/asm/book3s/64/kup-radix.h | 36 +++++++++++++++++++
 arch/powerpc/include/asm/kup.h                |  4 +++
 arch/powerpc/include/asm/mmu.h                |  9 ++++-
 arch/powerpc/include/asm/reg.h                |  1 +
 arch/powerpc/mm/pgtable-radix.c               | 16 +++++++++
 arch/powerpc/mm/pkeys.c                       |  7 ++--
 arch/powerpc/platforms/Kconfig.cputype        |  1 +
 7 files changed, 71 insertions(+), 3 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
new file mode 100644
index 000000000000..5cfdea954418
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_RADIX_H
+#define _ASM_POWERPC_KUP_RADIX_H
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_KUAP
+#include
+/*
+ * We do have the ability to individually lock/unlock reads and writes rather
+ * than both at once, however it's a significant performance hit due to needing
+ * to do a read-modify-write, which adds a mfspr, which is slow. As a result,
+ * locking/unlocking both at once is preferred.
+ */
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, 0);
+	isync();
+	get_paca()->user_access_allowed = 1;
+}
+
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
+	get_paca()->user_access_allowed = 0;
+}
+#endif /* CONFIG_PPC_KUAP */
+#endif /* __ASSEMBLY__ */
+#endif
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 2ac540fb488f..af583fd5a027 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -2,6 +2,10 @@
 #ifndef _ASM_POWERPC_KUP_H_
 #define _ASM_POWERPC_KUP_H_

+#ifdef CONFIG_PPC_BOOK3S_64
+#include
+#endif
+
 #ifndef __ASSEMBLY__

 #include
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 25607604a7a5..ea703de9be9b 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -107,6 +107,10 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)

+/* Supports KUAP (key 0 controlling userspace addresses) on radix
+ */
+#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -164,7 +168,10 @@ enum {
 #endif
 #ifdef CONFIG_PPC_RADIX_MMU
		MMU_FTR_TYPE_RADIX |
-#endif
+#ifdef CONFIG_PPC_KUAP
+		MMU_FTR_RADIX_KUAP |
+#endif /* CONFIG_PPC_KUAP */
+#endif /* CONFIG_PPC_RADIX_MMU */
		0,
 };
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 1c98ef1f2d5b..0e789e2c5bc3 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -246,6 +246,7 @@
 #define SPRN_DSCR	0x11
 #define SPRN_CFAR	0x1c	/* Come From Address Register */
 #define SPRN_AMR	0x1d	/* Authority Mask Register */
+#define   RADIX_AMR_LOCKED	0xC000000000000000UL /* Read & Write disabled */
 #define SPRN_UAMOR	0x9d	/* User Authority Mask Override Register */
 #define SPRN_AMOR	0x15d	/* Authority Mask Override Register */
 #define SPRN_ACOP	0x1F	/* Available Coprocessor Register */
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 224bcd4be5ae..b621cef4825e 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -553,6 +554,21 @@ void __init setup_kuep(bool disabled)
 }
 #endif

+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+	if (disabled || !early_radix_enabled())
+		return;
+
+	if (smp_processor_id() == boot_cpuid) {
+		pr_info("Activating Kernel Userspace Access Prevention\n");
+		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+	}
+
+	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
+}
+#endif
+
 void __init radix__early_init_mmu(void)
 {
	unsigned long lpcr;
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 587807763737..2223f4b4b1bf 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -7,6 +7,7 @@

 #include
 #include
+#include
 #include
 #include
 #include
@@ -267,7 +268,8 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,

 void thread_pkey_regs_save(struct thread_struct *thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
		return;

	/*
@@ -281,7 +283,8 @@ void thread_pkey_regs_save(struct thread_struct *thread)
 void thread_pkey_regs_restore(struct thread_struct *new_thread,
			      struct thread_struct *old_thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
		return;

	if (old_thread->amr != new_thread->amr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 25cc7d36b27d..67b2ed9bb9f3 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -321,6 +321,7 @@ config PPC_RADIX_MMU
	depends on PPC_BOOK3S_64
	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP
	default y
	help
	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
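
RADIX_AMR_LOCKED sets both of key 0's AMR bits (read disable and write
disable), which is why locking writes 0xC000000000000000 and unlocking simply
clears the AMR. A self-contained check of that constant (illustrative only;
it merely reproduces the arithmetic behind the #define above):

#include <stdio.h>

int main(void)
{
	/* Key 0 owns the two most significant AMR bits; setting both of
	 * them (IBM bits 0 and 1) disables both read and write access. */
	unsigned long long radix_amr_locked = 3ULL << 62;

	printf("%#llx\n", radix_amr_locked);	/* prints 0xc000000000000000 */
	return 0;
}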