From patchwork Tue Jul 26 18:22:26 2016
X-Patchwork-Submitter: "Roberts, William C"
X-Patchwork-Id: 9248669
From: william.c.roberts@intel.com
To: jason@lakedaemon.net, linux-mm@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-hardening@lists.openwall.com, akpm@linux-foundation.org
Cc: keescook@chromium.org, gregkh@linuxfoundation.org, nnk@google.com,
	jeffv@google.com, salyzyn@android.com, dcashman@android.com,
	William Roberts
Date: Tue, 26 Jul 2016 11:22:26 -0700
Message-Id: <1469557346-5534-2-git-send-email-william.c.roberts@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1469557346-5534-1-git-send-email-william.c.roberts@intel.com>
References: <1469557346-5534-1-git-send-email-william.c.roberts@intel.com>
Subject: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization

From: William Roberts

This patch introduces the ability to randomize mmap locations when an
address is not requested, for instance when ld is allocating pages for
shared libraries. Whether to randomize is decided by the current ASLR
personality.

Currently, allocations are done sequentially within unmapped address
space gaps. This may happen top down or bottom up depending on the
scheme. For instance, these mmap calls produce contiguous mappings:

int size = getpagesize();

mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000

Note: no gap between the mappings.

After the patch:

int size = getpagesize();

mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000

Note: a gap between the mappings.

Using the test program mentioned here, which allocates fixed-size blocks
until exhaustion, no difference was noticed in the number of allocations:
https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html
The counts varied from run to run, but patched and un-patched runs were
always within a few allocations of one another.
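For reference, the before/after addresses above can be observed with a
minimal user-space program along the lines of the sketch below. This is
an assumption about how the addresses were captured, not the exact
program used; the PROT_READ|PROT_WRITE protection bits stand in for the
"flags" shown in the traces above.

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int size = getpagesize();

	/* Two single-page anonymous mappings with no address hint. */
	void *a = mmap(NULL, size, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *b = mmap(NULL, size, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (a == MAP_FAILED || b == MAP_FAILED)
		return 1;

	/* Adjacent addresses mean contiguous mappings; otherwise a gap. */
	printf("first:  %p\nsecond: %p\n", a, b);
	return 0;
}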
Performance Measurements:
Using strace with the -T option and filtering for mmap on the program ls
shows a slowdown of approximately 3.7%.

Signed-off-by: William Roberts
---
 mm/mmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index de2c176..7891272 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -43,6 +43,7 @@
 #include 
 #include 
 #include 
+#include <linux/random.h>
 
 #include 
 #include 
@@ -1582,6 +1583,24 @@ unacct_error:
 	return error;
 }
 
+/*
+ * Generate a random address within a range. This differs from randomize_addr() by randomizing
+ * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
+ */
+static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
+{
+	unsigned long slots;
+
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
+		return 0;
+
+	slots = (end - start)/len;
+	if (!slots)
+		return 0;
+
+	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
+}
+
 unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 {
 	/*
@@ -1676,6 +1695,8 @@ found:
 	if (gap_start < info->low_limit)
 		gap_start = info->low_limit;
 
+	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
+
 	/* Adjust gap address to the desired alignment */
 	gap_start += (info->align_offset - gap_start) & info->align_mask;
 
@@ -1775,6 +1796,9 @@ found:
 found_highest:
 	/* Compute highest gap address at the desired alignment */
 	gap_end -= info->length;
+
+	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
+
 	gap_end -= (gap_end - info->align_offset) & info->align_mask;
 
 	VM_BUG_ON(gap_end < info->low_limit);
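As a side note, the slot selection performed by randomize_mmap() above can
be modeled in user space roughly as in the sketch below. This is for
illustration only and is not part of the patch; pick_slot_addr() is a
hypothetical helper, random() stands in for the kernel's
get_random_long(), and the manual rounding mirrors PAGE_ALIGN().

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static unsigned long pick_slot_addr(unsigned long start, unsigned long end,
				    unsigned long len)
{
	unsigned long page = (unsigned long)getpagesize();
	unsigned long slots = (end - start) / len;

	if (!slots)
		return 0;

	/*
	 * Pick one of the len-sized slots in [start, end), then round the
	 * result up to a page boundary as PAGE_ALIGN() does in the kernel.
	 */
	return (start + ((unsigned long)random() % slots) * len + page - 1)
	       & ~(page - 1);
}

int main(void)
{
	srandom(getpid());

	/* Example values only: a 64 MiB gap at 0x40000000, 16 KiB request. */
	printf("0x%lx\n", pick_slot_addr(0x40000000UL, 0x44000000UL, 0x4000UL));
	return 0;
}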