From patchwork Wed Jun 22 00:47:00 2016
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9191587
From: Kees Cook
To: Ingo Molnar
Cc: Kees Cook, Thomas Garnier, Andy Lutomirski, x86@kernel.org,
 Borislav Petkov, Baoquan He, Yinghai Lu, Juergen Gross, Matt Fleming,
 Toshi Kani, Andrew Morton, Dan Williams, "Kirill A. Shutemov",
 Dave Hansen, Xiao Guangrong, Martin Schwidefsky, "Aneesh Kumar K.V",
 Alexander Kuleshov, Alexander Popov, Dave Young, Joerg Roedel,
 Lv Zheng, Mark Salter, Dmitry Vyukov, Stephen Smalley,
 Boris Ostrovsky, Christian Borntraeger, Jan Beulich,
 linux-kernel@vger.kernel.org, Jonathan Corbet,
 linux-doc@vger.kernel.org, kernel-hardening@lists.openwall.com
Date: Tue, 21 Jun 2016 17:47:00 -0700
Message-Id: <1466556426-32664-4-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1466556426-32664-1-git-send-email-keescook@chromium.org>
References: <1466556426-32664-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH v7 3/9] x86/mm: PUD VA support for physical mapping (x86_64)

From: Thomas Garnier

Minor change that allows early boot physical mapping of PUD level virtual
addresses. The current implementation expects the virtual address to be
PUD-aligned. For KASLR memory randomization, we need to be able to
randomize the offset used on the PUD table.
It has no impact on current usage.

Signed-off-by: Thomas Garnier
Signed-off-by: Kees Cook
---
 arch/x86/mm/init_64.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 6714712bd5da..7bf1ddb54537 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -465,7 +465,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
- * and physical address have to be aligned at this level.
+ * and physical address do not have to be aligned at this level. KASLR can
+ * randomize virtual addresses up to this level.
  * It returns the last physical address mapped.
  */
 static unsigned long __meminit
@@ -474,14 +475,18 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 {
 	unsigned long pages = 0, paddr_next;
 	unsigned long paddr_last = paddr_end;
-	int i = pud_index(paddr);
+	unsigned long vaddr = (unsigned long)__va(paddr);
+	int i = pud_index(vaddr);
 
 	for (; i < PTRS_PER_PUD; i++, paddr = paddr_next) {
-		pud_t *pud = pud_page + pud_index(paddr);
+		pud_t *pud;
 		pmd_t *pmd;
 		pgprot_t prot = PAGE_KERNEL;
 
+		vaddr = (unsigned long)__va(paddr);
+		pud = pud_page + pud_index(vaddr);
 		paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
+
 		if (paddr >= paddr_end) {
 			if (!after_bootmem &&
 			    !e820_any_mapped(paddr & PUD_MASK, paddr_next,
@@ -551,7 +556,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 
 /*
  * Create page table mapping for the physical memory for specific physical
- * addresses. The virtual and physical addresses have to be aligned on PUD level
+ * addresses. The virtual and physical addresses have to be aligned on PMD level
  * down. It returns the last physical address mapped.
  */
 unsigned long __meminit