From patchwork Tue Aug 9 17:11:05 2016
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 9271901
From: Thomas Garnier
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Borislav Petkov,
	Joerg Roedel, Dave Young, "Rafael J. Wysocki", Lv Zheng,
	Thomas Garnier, Baoquan He, Dave Hansen, Mark Salter,
	Aleksey Makarov, Kees Cook, Andrew Morton, Christian Borntraeger,
	Fabian Frederick, Toshi Kani, Dan Williams
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	kernel-hardening@lists.openwall.com
Date: Tue, 9 Aug 2016 10:11:05 -0700
Message-Id: <1470762665-88032-2-git-send-email-thgarnie@google.com>
In-Reply-To: <1470762665-88032-1-git-send-email-thgarnie@google.com>
References: <1470762665-88032-1-git-send-email-thgarnie@google.com>
Subject: [kernel-hardening] [PATCH v4 2/2] x86/KASLR: Increase BRK pages for KASLR memory randomization

The default implementation expects that at most 6 pages are needed for
low page allocations. If KASLR memory randomization is enabled, the
worst-case e820 layout would require 12 pages (with no large pages).
This is due to the PUD-level randomization and the variable e820
memory layout.

This bug was found while doing extensive testing of KASLR memory
randomization on different types of hardware.

Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
 arch/x86/mm/init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 6209289..796e7af 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -122,8 +122,18 @@ __ref void *alloc_low_pages(unsigned int num)
 	return __va(pfn << PAGE_SHIFT);
 }
 
-/* need 3 4k for initial PMD_SIZE, 3 4k for 0-ISA_END_ADDRESS */
-#define INIT_PGT_BUF_SIZE	(6 * PAGE_SIZE)
+/*
+ * By default need 3 4k for initial PMD_SIZE, 3 4k for 0-ISA_END_ADDRESS.
+ * With KASLR memory randomization, depending on the machine e820 memory
+ * and the PUD alignment. We may need twice more pages when KASLR memory
+ * randomization is enabled.
+ */
+#ifndef CONFIG_RANDOMIZE_MEMORY
+#define INIT_PGD_PAGE_COUNT	6
+#else
+#define INIT_PGD_PAGE_COUNT	12
+#endif
+#define INIT_PGT_BUF_SIZE	(INIT_PGD_PAGE_COUNT * PAGE_SIZE)
 RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE);
 void __init early_alloc_pgt_buf(void)
 {
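
For reference, the arithmetic the new INIT_PGD_PAGE_COUNT values encode can
be written out as a standalone userspace sketch (not kernel code): the 3 + 3
page breakdown is taken from the comment in the hunk above, and doubling it
for the randomized worst case is an assumption drawn from this patch rather
than an independent derivation.

#include <stdio.h>

#define PAGE_SIZE	4096UL

int main(void)
{
	/*
	 * Default budget: 3 4k pages for the initial PMD_SIZE mapping and
	 * 3 4k pages for the 0-ISA_END_ADDRESS mapping, per the comment in
	 * the hunk above.
	 */
	unsigned long default_pages = 3 + 3;		/* INIT_PGD_PAGE_COUNT = 6 */

	/*
	 * With CONFIG_RANDOMIZE_MEMORY, the worst-case e820 layout with no
	 * large pages is assumed to need twice as many early page-table
	 * pages because of the PUD-level randomization.
	 */
	unsigned long kaslr_pages = 2 * default_pages;	/* INIT_PGD_PAGE_COUNT = 12 */

	printf("default BRK page-table buffer: %lu bytes\n",
	       default_pages * PAGE_SIZE);
	printf("KASLR   BRK page-table buffer: %lu bytes\n",
	       kaslr_pages * PAGE_SIZE);
	return 0;
}

With 4k pages this grows the BRK buffer reserved by RESERVE_BRK() from 24k to
48k on CONFIG_RANDOMIZE_MEMORY kernels, which is the whole cost of the change.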