From patchwork Thu Feb 2 05:42:47 2017
X-Patchwork-Submitter: Bhupesh Sharma
X-Patchwork-Id: 9551509
From: Bhupesh Sharma
To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com
Cc: dcashman@google.com, mpe@ellerman.id.au, bhupesh.linux@gmail.com,
    keescook@chromium.org, Bhupesh Sharma, Alexander Graf,
    Benjamin Herrenschmidt, Paul Mackerras, Anatolij Gustschin,
    Alistair Popple, Matt Porter, Vitaly Bordug, Scott Wood,
    Kumar Gala, Daniel Cashman
Date: Thu, 2 Feb 2017 11:12:47 +0530
Message-Id: <1486014168-1279-2-git-send-email-bhsharma@redhat.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1486014168-1279-1-git-send-email-bhsharma@redhat.com>
References: <1486014168-1279-1-git-send-email-bhsharma@redhat.com>
Subject: [kernel-hardening] [PATCH 1/2] powerpc: mm: support ARCH_MMAP_RND_BITS

powerpc: arch_mmap_rnd() uses hard-coded values, (23-PAGE_SHIFT) for
32-bit and (30-PAGE_SHIFT) for 64-bit, to generate the random offset
for the mmap base address. This value represents a compromise between
increased ASLR effectiveness and avoiding address-space fragmentation.
Replace it with a Kconfig option, which is sensibly bounded, so that
platform developers may choose where to place this compromise. Keep
the default values as the new minimums.
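
(Not part of the patch: a quick standalone sketch of the arithmetic
above, for reviewers. It is user-space C, and the helper names are
made up for illustration only.)

/* Compares the old hard-coded entropy with the proposed Kconfig
 * maximum. Compile with: gcc -o rnd_bits rnd_bits.c
 */
#include <stdio.h>

/* Old scheme: 23 - PAGE_SHIFT bits for 32-bit tasks and 30 - PAGE_SHIFT
 * bits for 64-bit tasks, i.e. the randomized span is fixed at 2^23
 * bytes (8MB) and 2^30 bytes (1GB) regardless of page size.
 */
static int old_rnd_bits(int is_32bit, int page_shift)
{
	return (is_32bit ? 23 : 30) - page_shift;
}

/* Proposed maximum, per the Kconfig comment below:
 * VA_BITS - PAGE_SHIFT - 4
 */
static int new_rnd_bits_max(int va_bits, int page_shift)
{
	return va_bits - page_shift - 4;
}

int main(void)
{
	/* 64-bit task, 64K pages (PAGE_SHIFT = 16), 48-bit VA */
	printf("old (= new min): %d bits\n", old_rnd_bits(0, 16));   /* 30 - 16 = 14 */
	printf("new max:         %d bits\n", new_rnd_bits_max(48, 16)); /* 48 - 16 - 4 = 28 */
	return 0;
}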
This makes the powerpc mmap arch_mmap_rnd() approach consistent with
other architectures such as x86, arm64 and arm.

Cc: Alexander Graf
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Anatolij Gustschin
Cc: Alistair Popple
Cc: Matt Porter
Cc: Vitaly Bordug
Cc: Scott Wood
Cc: Kumar Gala
Cc: Daniel Cashman
Cc: Kees Cook
Signed-off-by: Bhupesh Sharma
Reviewed-by: Kees Cook
---
 arch/powerpc/Kconfig   | 34 ++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/mmap.c |  7 ++++---
 2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a8ee573fe610..b4a843f68705 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -22,6 +22,38 @@ config MMU
 	bool
 	default y
 
+config ARCH_MMAP_RND_BITS_MIN
+	default 5 if PPC_256K_PAGES && 32BIT
+	default 12 if PPC_256K_PAGES && 64BIT
+	default 7 if PPC_64K_PAGES && 32BIT
+	default 14 if PPC_64K_PAGES && 64BIT
+	default 9 if PPC_16K_PAGES && 32BIT
+	default 16 if PPC_16K_PAGES && 64BIT
+	default 11 if PPC_4K_PAGES && 32BIT
+	default 18 if PPC_4K_PAGES && 64BIT
+
+# max bits determined by the following formula:
+#	VA_BITS - PAGE_SHIFT - 4
+# e.g. for 64K pages and 64BIT: 48 - 16 - 4 = 28
+config ARCH_MMAP_RND_BITS_MAX
+	default 10 if PPC_256K_PAGES && 32BIT
+	default 26 if PPC_256K_PAGES && 64BIT
+	default 12 if PPC_64K_PAGES && 32BIT
+	default 28 if PPC_64K_PAGES && 64BIT
+	default 14 if PPC_16K_PAGES && 32BIT
+	default 30 if PPC_16K_PAGES && 64BIT
+	default 16 if PPC_4K_PAGES && 32BIT
+	default 32 if PPC_4K_PAGES && 64BIT
+
+config ARCH_MMAP_RND_COMPAT_BITS_MIN
+	default 5 if PPC_256K_PAGES
+	default 7 if PPC_64K_PAGES
+	default 9 if PPC_16K_PAGES
+	default 11
+
+config ARCH_MMAP_RND_COMPAT_BITS_MAX
+	default 16
+
 config HAVE_SETUP_PER_CPU_AREA
 	def_bool PPC64
 
@@ -100,6 +132,8 @@ config PPC
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
 	select HAVE_KPROBES
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_MMAP_RND_BITS
+	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_KRETPROBES
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_MEMBLOCK
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index 2f1e44362198..babf59faab3b 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -60,11 +60,12 @@ unsigned long arch_mmap_rnd(void)
 {
 	unsigned long rnd;
 
-	/* 8MB for 32bit, 1GB for 64bit */
+#ifdef CONFIG_COMPAT
 	if (is_32bit_task())
-		rnd = get_random_long() % (1<<(23-PAGE_SHIFT));
+		rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
 	else
-		rnd = get_random_long() % (1UL<<(30-PAGE_SHIFT));
+#endif
+	rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
 
 	return rnd << PAGE_SHIFT;
 }
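
(Not part of the patch: a minimal way to eyeball the effect. This
assumes the generic vm.mmap_rnd_bits sysctl that selecting
HAVE_ARCH_MMAP_RND_BITS exposes; raising it within the Kconfig bounds
should widen the spread of the addresses printed across runs.)

/* Minimal user-space check: print the address of an anonymous mapping,
 * which reflects the randomized mmap base.
 *
 *   gcc -o mmap_rnd_check mmap_rnd_check.c
 *   for i in 1 2 3 4 5; do ./mmap_rnd_check; done
 */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("anon mapping at %p\n", p);
	munmap(p, 4096);
	return 0;
}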