From: Hector Marco-Gisbert
To: Borislav Petkov
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", x86@kernel.org,
	Alexander Viro, Jan-Simon, linux-fsdevel@vger.kernel.org,
	Kees Cook, Hector Marco-Gisbert, Ismael Ripoll
Subject: [PATCH] mm/x86: AMD Bulldozer ASLR fix
Date: Wed, 25 Mar 2015 19:36:17 +0100
Message-Id: <1427308577-2590-1-git-send-email-hecmargi@upv.es>
In-Reply-To: <20150324191556.GA11571@pd.tnic>
References: <20150324191556.GA11571@pd.tnic>

A bug has been found in the Linux ASLR implementation that affects some AMD
processors. The issue affects all Linux processes, even those that do not use
shared libraries (i.e. statically compiled). The problem appears because some
mmapped objects (VDSO, libraries, etc.) are poorly randomized in an attempt to
avoid cache aliasing penalties on AMD Bulldozer (Family 15h) processors. On
affected systems the entropy of mmapped files is reduced by a factor of eight.

The following output was obtained on an AMD Opteron 62xx class CPU running
x86_64 Linux 4.0.0:

for i in `seq 1 10`; do cat /proc/self/maps | grep "r-xp.*libc" ; done
b7588000-b7736000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
b7570000-b771e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
b75d0000-b777e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
b75b0000-b775e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
b7578000-b7726000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6

As shown in the output above, bits 12, 13 and 14 are always 0: the start
address always ends in 0x8000 or 0x0000. The bug is caused by a performance
hack that avoids cache aliasing penalties on Family 15h AMD Bulldozer
processors (commit dfb09f9b). 32-bit systems are especially sensitive to this
issue because the entropy for libraries is reduced from 2^8 to 2^5, which
means that libraries can only be loaded at 32 different places.

This patch randomizes the three affected bits once per boot, rather than
setting them to zero. Since all the shared pages have the same value in bits
[12..14], there are no cache aliasing problems (which are supposed to be the
cause of the performance loss). On the other hand, since the value is not
known to a potential remote attacker, ASLR preserves its effectiveness.
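To make the mechanism concrete, here is a minimal user-space sketch of the
same idea (illustration only, not part of the patch). It mirrors what
align_vdso_addr() does in the diff below; the hard-coded 0x7000 mask and the
example addresses (taken from the maps output above) are assumptions, and
random()/srandom() merely stand in for the kernel's per-boot get_random_int():

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define VA_ALIGN_MASK	0x7000UL	/* bits 12, 13 and 14 (assumed Bulldozer mask) */

static unsigned long va_bits;		/* chosen once, like the per-boot value */

static unsigned long align_addr(unsigned long addr)
{
	/*
	 * Round the (page-aligned) address up to a 32 KB boundary, as the
	 * original Bulldozer hack does...
	 */
	addr = (addr + VA_ALIGN_MASK) & ~VA_ALIGN_MASK;
	/* ...then re-insert the secret per-boot bits instead of zeroes. */
	return addr | va_bits;
}

int main(void)
{
	srandom(time(NULL));		/* stand-in for get_random_int() at boot */
	va_bits = random() & VA_ALIGN_MASK;

	printf("per-boot bits 12..14: 0x%04lx\n", va_bits);
	printf("0xb7588000 -> 0x%08lx\n", align_addr(0xb7588000UL));
	printf("0xb75d0000 -> 0x%08lx\n", align_addr(0xb75d0000UL));
	return 0;
}

Every mapping still shares the same three bits, so the cache aliasing
avoidance is preserved, but an attacker can no longer assume they are zero.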
More details at:
http://hmarco.org/bugs/AMD-Bulldozer-linux-ASLR-weakness-reducing-mmaped-files-by-eight.html

Signed-off-by: Hector Marco-Gisbert
Signed-off-by: Ismael Ripoll
---
 arch/x86/include/asm/elf.h   |  1 +
 arch/x86/kernel/cpu/amd.c    |  3 +++
 arch/x86/kernel/sys_x86_64.c | 20 +++++++++++++++++---
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index ca3347a..bd292ce 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -365,6 +365,7 @@ enum align_flags {
 struct va_alignment {
 	int flags;
 	unsigned long mask;
+	unsigned long bits;
 } ____cacheline_aligned;
 
 extern struct va_alignment va_align;
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 15c5df9..45a41be 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -5,6 +5,7 @@
 
 #include <linux/io.h>
 #include <linux/sched.h>
+#include <linux/random.h>
 #include <asm/processor.h>
 #include <asm/apic.h>
 #include <asm/cpu.h>
@@ -488,6 +489,8 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 
 		va_align.mask	  = (upperbit - 1) & PAGE_MASK;
 		va_align.flags    = ALIGN_VA_32 | ALIGN_VA_64;
+		/* A random value per boot for bits 12, 13 and 14 */
+		va_align.bits = get_random_int() & va_align.mask;
 	}
 }
 
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 30277e2..d38905d 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -34,10 +34,16 @@ static unsigned long get_align_mask(void)
 	return va_align.mask;
 }
 
+static unsigned long get_align_bits(void)
+{
+	return va_align.bits & get_align_mask();
+}
+
 unsigned long align_vdso_addr(unsigned long addr)
 {
 	unsigned long align_mask = get_align_mask();
-	return (addr + align_mask) & ~align_mask;
+	addr = (addr + align_mask) & ~align_mask;
+	return addr | get_align_bits();
 }
 
 static int __init control_va_addr_alignment(char *str)
@@ -135,8 +141,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = begin;
 	info.high_limit = end;
-	info.align_mask = filp ? get_align_mask() : 0;
+	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
+	if (filp) {
+		info.align_mask = get_align_mask();
+		info.align_offset += get_align_bits();
+	}
 	return vm_unmapped_area(&info);
 }
 
@@ -174,8 +184,12 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = mm->mmap_base;
-	info.align_mask = filp ? get_align_mask() : 0;
+	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
+	if (filp) {
+		info.align_mask = get_align_mask();
+		info.align_offset += get_align_bits();
+	}
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		return addr;