From patchwork Tue Aug 2 19:55:44 2016
X-Patchwork-Submitter: Yinghai Lu
X-Patchwork-Id: 9260309
From: Yinghai Lu
Date: Tue, 2 Aug 2016 12:55:44 -0700
To: Thomas Garnier
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Kees Cook, "Rafael J. Wysocki", Pavel Machek, the arch/x86 maintainers, Linux Kernel Mailing List, Linux PM list, kernel-hardening@lists.openwall.com
Subject: [kernel-hardening] Re: [PATCH v1 1/2] x86/power/64: Support unaligned addresses for temporary mapping

On Tue, Aug 2, 2016 at 10:48 AM, Thomas Garnier wrote:
> On Tue, Aug 2, 2016 at 10:36 AM, Yinghai Lu wrote:
>>
>> Looks like we need to change the loop from phys address to virtual
>> address instead, to avoid the overflow. Something like attached.
---
 arch/x86/mm/ident_map.c | 54 ++++++++++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 22 deletions(-)

Index: linux-2.6/arch/x86/mm/ident_map.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/ident_map.c
+++ linux-2.6/arch/x86/mm/ident_map.c
@@ -3,40 +3,47 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
-static void ident_pmd_init(unsigned long pmd_flag, pmd_t *pmd_page,
+static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
 			   unsigned long addr, unsigned long end)
 {
-	addr &= PMD_MASK;
-	for (; addr < end; addr += PMD_SIZE) {
-		pmd_t *pmd = pmd_page + pmd_index(addr);
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+
+	vaddr &= PMD_MASK;
+	for (; vaddr < vend; vaddr += PMD_SIZE) {
+		pmd_t *pmd = pmd_page + pmd_index(vaddr);
 
 		if (!pmd_present(*pmd))
-			set_pmd(pmd, __pmd(addr | pmd_flag));
+			set_pmd(pmd, __pmd(vaddr - off | info->pmd_flag));
 	}
 }
 
 static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 			  unsigned long addr, unsigned long end)
 {
-	unsigned long next;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pud_t *pud = pud_page + pud_index(addr);
+	for (; vaddr < vend; vaddr = vnext) {
+		pud_t *pud = pud_page + pud_index(vaddr);
 		pmd_t *pmd;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PUD_MASK) + PUD_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pud_present(*pud)) {
 			pmd = pmd_offset(pud, 0);
-			ident_pmd_init(info->pmd_flag, pmd, addr, next);
+			ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 			continue;
 		}
 
 		pmd = (pmd_t *)info->alloc_pgt_page(info->context);
 		if (!pmd)
 			return -ENOMEM;
-		ident_pmd_init(info->pmd_flag, pmd, addr, next);
+		ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
 
@@ -46,21 +53,24 @@ static int ident_pud_init(struct x86_map
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 			      unsigned long addr, unsigned long end)
 {
-	unsigned long next;
 	int result;
-	int off = info->kernel_mapping ? pgd_index(__PAGE_OFFSET) : 0;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pgd_t *pgd = pgd_page + pgd_index(addr) + off;
+	for (; vaddr < vend; vaddr = vnext) {
+		pgd_t *pgd = pgd_page + pgd_index(vaddr);
 		pud_t *pud;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PGDIR_MASK) + PGDIR_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pgd_present(*pgd)) {
 			pud = pud_offset(pgd, 0);
-			result = ident_pud_init(info, pud, addr, next);
+			result = ident_pud_init(info, pud, vaddr - off,
+						vnext - off);
 			if (result)
 				return result;
 			continue;
 		}
@@ -69,7 +79,7 @@ int kernel_ident_mapping_init(struct x86
 		pud = (pud_t *)info->alloc_pgt_page(info->context);
 		if (!pud)
 			return -ENOMEM;
-		result = ident_pud_init(info, pud, addr, next);
+		result = ident_pud_init(info, pud, vaddr - off, vnext - off);
 		if (result)
 			return result;
 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));