From patchwork Tue Aug 9 16:51:23 2016
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 9271839
From: Thomas Garnier
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Borislav Petkov,
 Joerg Roedel, Dave Young,
Wysocki" , Lv Zheng , Thomas Garnier , Baoquan He , Dave Hansen , Mark Salter , Aleksey Makarov , Kees Cook , Andrew Morton , Christian Borntraeger , Fabian Frederick , Toshi Kani , Dan Williams Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com Date: Tue, 9 Aug 2016 09:51:23 -0700 Message-Id: <1470761483-62501-2-git-send-email-thgarnie@google.com> X-Mailer: git-send-email 2.8.0.rc3.226.g39d4020 In-Reply-To: <1470761483-62501-1-git-send-email-thgarnie@google.com> References: <1470761483-62501-1-git-send-email-thgarnie@google.com> Subject: [kernel-hardening] [PATCH v3 2/2] x86/KASLR: Increase BRK pages for KASLR memory randomization X-Virus-Scanned: ClamAV using ClamSMTP Default implementation expects 6 pages maximum are needed for low page allocations. If KASLR memory randomization is enabled, the worse case of e820 layout would require 12 pages (no large pages). It is due to the PUD level randomization and the variable e820 memory layout. This bug was found while doing extensive testing of KASLR memory randomization on different type of hardware. Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions") Signed-off-by: Thomas Garnier --- Based on next-20160805 --- arch/x86/mm/init.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 6209289..796e7af 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -122,8 +122,18 @@ __ref void *alloc_low_pages(unsigned int num) return __va(pfn << PAGE_SHIFT); } -/* need 3 4k for initial PMD_SIZE, 3 4k for 0-ISA_END_ADDRESS */ -#define INIT_PGT_BUF_SIZE (6 * PAGE_SIZE) +/* + * By default need 3 4k for initial PMD_SIZE, 3 4k for 0-ISA_END_ADDRESS. + * With KASLR memory randomization, depending on the machine e860 memory layout + * and the PUD alignement. We may need twice more pages when KASLR memoy + * randomization is enabled. + */ +#ifndef CONFIG_RANDOMIZE_MEMORY +#define INIT_PGD_PAGE_COUNT 6 +#else +#define INIT_PGD_PAGE_COUNT 12 +#endif +#define INIT_PGT_BUF_SIZE (INIT_PGD_PAGE_COUNT * PAGE_SIZE) RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE); void __init early_alloc_pgt_buf(void) {