From patchwork Fri Feb 3 05:11:22 2017
X-Patchwork-Submitter: Bhupesh Sharma
X-Patchwork-Id: 9553467
From: Bhupesh Sharma
To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com
Cc: dcashman@google.com, mpe@ellerman.id.au, bhupesh.linux@gmail.com,
 keescook@chromium.org, Bhupesh Sharma, Alexander Graf,
 Benjamin Herrenschmidt, Paul Mackerras, Anatolij Gustschin,
 Alistair Popple, Matt Porter, Vitaly Bordug, Scott Wood, Kumar Gala,
 Daniel Cashman
Date: Fri, 3 Feb 2017 10:41:22 +0530
Message-Id: <1486098682-30395-1-git-send-email-bhsharma@redhat.com>
X-Mailer: git-send-email 2.7.4
Subject: [kernel-hardening] [PATCH v2 1/1] powerpc: mm: support
 ARCH_MMAP_RND_BITS

powerpc: arch_mmap_rnd() uses hard-coded values, (23-PAGE_SHIFT) for
32-bit and (30-PAGE_SHIFT) for 64-bit, to generate the random offset
for the mmap base address. These values represent a compromise between
increased ASLR effectiveness and avoiding address-space fragmentation.

Replace them with Kconfig options, sensibly bounded, so that platform
developers may choose where to place this compromise. Keep the current
default values as the new minimums.

This brings the powerpc arch_mmap_rnd() approach in line with other
architectures such as x86, arm64 and arm.
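To make the entropy trade-off concrete, here is a minimal user-space
sketch (illustration only, not part of the patch) that reproduces the
arithmetic above: the old hard-coded shifts versus the new Kconfig
maximums, and the byte span each randomizes over. The show() helper and
the chosen configurations are mine, purely for demonstration; the bit
counts come from the commit message and the Kconfig defaults below.

#include <stdio.h>

static void show(const char *label, int page_shift,
		 int old_shift, int new_bits)
{
	/* Old scheme: rnd was taken modulo 1 << (old_shift - PAGE_SHIFT). */
	int old_bits = old_shift - page_shift;

	/* Randomized span = 2^bits pages = 2^(bits + page_shift) bytes. */
	printf("%s (PAGE_SHIFT=%d): old %d bits (%llu MB span), max %d bits (%llu MB span)\n",
	       label, page_shift,
	       old_bits, (1ULL << (old_bits + page_shift)) >> 20,
	       new_bits, (1ULL << (new_bits + page_shift)) >> 20);
}

int main(void)
{
	/* 32-bit tasks: old offset was (23 - PAGE_SHIFT) bits -> 8MB span. */
	show("32-bit, 4K pages ", 12, 23, 16);
	/* 64-bit tasks: old offset was (30 - PAGE_SHIFT) bits -> 1GB span. */
	show("64-bit, 64K pages", 16, 30, 28);
	show("64-bit, 4K pages ", 12, 30, 32);
	return 0;
}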
Cc: Alexander Graf
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Anatolij Gustschin
Cc: Alistair Popple
Cc: Matt Porter
Cc: Vitaly Bordug
Cc: Scott Wood
Cc: Kumar Gala
Cc: Daniel Cashman
Signed-off-by: Bhupesh Sharma
Reviewed-by: Kees Cook
---
Changes since v1 (v1 can be seen here:
https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-February/153594.html):
- No functional change in this patch.
- Added R-B from Kees.
- Dropped PATCH 2/2 from v1, as recommended by Kees Cook.

 arch/powerpc/Kconfig   | 34 ++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/mmap.c |  7 ++++---
 2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a8ee573fe610..b4a843f68705 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -22,6 +22,38 @@ config MMU
 	bool
 	default y
 
+config ARCH_MMAP_RND_BITS_MIN
+	default 5 if PPC_256K_PAGES && 32BIT
+	default 12 if PPC_256K_PAGES && 64BIT
+	default 7 if PPC_64K_PAGES && 32BIT
+	default 14 if PPC_64K_PAGES && 64BIT
+	default 9 if PPC_16K_PAGES && 32BIT
+	default 16 if PPC_16K_PAGES && 64BIT
+	default 11 if PPC_4K_PAGES && 32BIT
+	default 18 if PPC_4K_PAGES && 64BIT
+
+# max bits determined by the following formula:
+#  VA_BITS - PAGE_SHIFT - 4
+# e.g. for 64K pages and 64BIT: 48 - 16 - 4 = 28
+config ARCH_MMAP_RND_BITS_MAX
+	default 10 if PPC_256K_PAGES && 32BIT
+	default 26 if PPC_256K_PAGES && 64BIT
+	default 12 if PPC_64K_PAGES && 32BIT
+	default 28 if PPC_64K_PAGES && 64BIT
+	default 14 if PPC_16K_PAGES && 32BIT
+	default 30 if PPC_16K_PAGES && 64BIT
+	default 16 if PPC_4K_PAGES && 32BIT
+	default 32 if PPC_4K_PAGES && 64BIT
+
+config ARCH_MMAP_RND_COMPAT_BITS_MIN
+	default 5 if PPC_256K_PAGES
+	default 7 if PPC_64K_PAGES
+	default 9 if PPC_16K_PAGES
+	default 11
+
+config ARCH_MMAP_RND_COMPAT_BITS_MAX
+	default 16
+
 config HAVE_SETUP_PER_CPU_AREA
 	def_bool PPC64
 
@@ -100,6 +132,8 @@ config PPC
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
 	select HAVE_KPROBES
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_MMAP_RND_BITS
+	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_KRETPROBES
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_MEMBLOCK
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index 2f1e44362198..babf59faab3b 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -60,11 +60,12 @@ unsigned long arch_mmap_rnd(void)
 {
 	unsigned long rnd;
 
-	/* 8MB for 32bit, 1GB for 64bit */
+#ifdef CONFIG_COMPAT
 	if (is_32bit_task())
-		rnd = get_random_long() % (1<<(23-PAGE_SHIFT));
+		rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
 	else
-		rnd = get_random_long() % (1UL<<(30-PAGE_SHIFT));
+#endif
+	rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
 
 	return rnd << PAGE_SHIFT;
 }
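For completeness, one rough way to observe the effect after applying
the patch (illustrative only; the test program below is mine, not part
of the series): on kernels with HAVE_ARCH_MMAP_RND_BITS the entropy is
also tunable at runtime, within the Kconfig min/max bounds, via the
existing /proc/sys/vm/mmap_rnd_bits and
/proc/sys/vm/mmap_rnd_compat_bits sysctls. Running something like the
following several times and comparing the high bits of the printed
address gives a feel for the configured randomization:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* A fresh anonymous mapping lands near the randomized mmap base. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mmap returned %p\n", p);
	munmap(p, 4096);
	return 0;
}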