From patchwork Tue Aug 2 19:55:44 2016
X-Patchwork-Submitter: Yinghai Lu
X-Patchwork-Id: 9260307
Subject: Re: [PATCH v1 1/2] x86/power/64: Support unaligned addresses for temporary mapping
From: Yinghai Lu
Date: Tue, 2 Aug 2016 12:55:44 -0700
References: <1470071280-78706-1-git-send-email-thgarnie@google.com>
 <1470071280-78706-2-git-send-email-thgarnie@google.com>
To: Thomas Garnier
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Kees Cook,
 "Rafael J. Wysocki", Pavel Machek, the arch/x86 maintainers,
 Linux Kernel Mailing List, Linux PM list,
 kernel-hardening@lists.openwall.com
X-Mailing-List: linux-pm@vger.kernel.org

On Tue, Aug 2, 2016 at 10:48 AM, Thomas Garnier wrote:
> On Tue, Aug 2, 2016 at 10:36 AM, Yinghai Lu wrote:
>>
>> Looks like we need to change the loop from phys address to virtual
>> address instead, to avoid the overflow.

Something like attached; a minimal userspace sketch of the same loop shape
follows the patch.

---
 arch/x86/mm/ident_map.c | 54 ++++++++++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 22 deletions(-)

Index: linux-2.6/arch/x86/mm/ident_map.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/ident_map.c
+++ linux-2.6/arch/x86/mm/ident_map.c
@@ -3,40 +3,47 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
-static void ident_pmd_init(unsigned long pmd_flag, pmd_t *pmd_page,
+static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
 			   unsigned long addr, unsigned long end)
 {
-	addr &= PMD_MASK;
-	for (; addr < end; addr += PMD_SIZE) {
-		pmd_t *pmd = pmd_page + pmd_index(addr);
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+
+	vaddr &= PMD_MASK;
+	for (; vaddr < vend; vaddr += PMD_SIZE) {
+		pmd_t *pmd = pmd_page + pmd_index(vaddr);
 
 		if (!pmd_present(*pmd))
-			set_pmd(pmd, __pmd(addr | pmd_flag));
+			set_pmd(pmd, __pmd(vaddr - off | info->pmd_flag));
 	}
 }
 
 static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 			  unsigned long addr, unsigned long end)
 {
-	unsigned long next;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pud_t *pud = pud_page + pud_index(addr);
+	for (; vaddr < vend; vaddr = vnext) {
+		pud_t *pud = pud_page + pud_index(vaddr);
 		pmd_t *pmd;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PUD_MASK) + PUD_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pud_present(*pud)) {
 			pmd = pmd_offset(pud, 0);
-			ident_pmd_init(info->pmd_flag, pmd, addr, next);
+			ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 			continue;
 		}
 
 		pmd = (pmd_t *)info->alloc_pgt_page(info->context);
 		if (!pmd)
 			return -ENOMEM;
-		ident_pmd_init(info->pmd_flag, pmd, addr, next);
+		ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
 
@@ -46,21 +53,24 @@ static int ident_pud_init(struct x86_map
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 			      unsigned long addr, unsigned long end)
 {
-	unsigned long next;
 	int result;
-	int off = info->kernel_mapping ? pgd_index(__PAGE_OFFSET) : 0;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pgd_t *pgd = pgd_page + pgd_index(addr) + off;
+	for (; vaddr < vend; vaddr = vnext) {
+		pgd_t *pgd = pgd_page + pgd_index(vaddr);
 		pud_t *pud;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PGDIR_MASK) + PGDIR_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pgd_present(*pgd)) {
 			pud = pud_offset(pgd, 0);
-			result = ident_pud_init(info, pud, addr, next);
+			result = ident_pud_init(info, pud, vaddr - off,
+						vnext - off);
 			if (result)
 				return result;
 			continue;
@@ -69,7 +79,7 @@ int kernel_ident_mapping_init(struct x86
 		pud = (pud_t *)info->alloc_pgt_page(info->context);
 		if (!pud)
 			return -ENOMEM;
-		result = ident_pud_init(info, pud, addr, next);
+		result = ident_pud_init(info, pud, vaddr - off, vnext - off);
 		if (result)
 			return result;
 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
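
For illustration only, the following is a minimal, self-contained userspace
sketch of the same loop shape. Everything in it (toy_map_range, toy_set_entry,
TOY_LEVEL_SIZE, the sample addresses) is made up for the sketch and is not the
kernel's code or data structures. It walks a physical range in the virtual
address space, clamps each step at a level-sized boundary, and converts back
with "vaddr - off" only when the entry value is recorded, which is the pattern
the patch applies to the pgd/pud/pmd loops.

/*
 * Toy model of the "loop over virtual addresses" pattern used in the
 * patch above.  The level size, the fake entry writer and the sample
 * addresses are illustrative values, not kernel code.
 */
#include <stdio.h>

#define TOY_LEVEL_SHIFT	21			/* pretend PMD: 2 MiB steps */
#define TOY_LEVEL_SIZE	(1UL << TOY_LEVEL_SHIFT)
#define TOY_LEVEL_MASK	(~(TOY_LEVEL_SIZE - 1))

/* Record one "page-table entry": which slot is filled and with what phys addr. */
static void toy_set_entry(unsigned long index, unsigned long phys)
{
	printf("slot %lu -> phys 0x%016lx\n", index, phys);
}

/*
 * Map [addr, end) (physical) at virtual offset 'off'.  The loop variable
 * is the virtual address, so both the step boundaries and the slot index
 * come from the virtual side; the physical address is recovered with
 * "vaddr - off" only when the entry is written.
 */
static void toy_map_range(unsigned long addr, unsigned long end,
			  unsigned long off)
{
	unsigned long vaddr = addr + off;
	unsigned long vend = end + off;
	unsigned long vnext;

	for (; vaddr < vend; vaddr = vnext) {
		unsigned long index = vaddr >> TOY_LEVEL_SHIFT;

		vnext = (vaddr & TOY_LEVEL_MASK) + TOY_LEVEL_SIZE;
		if (vnext > vend)
			vnext = vend;

		toy_set_entry(index, vaddr - off);
	}
}

int main(void)
{
	/* A physical range that is not aligned to the 2 MiB step size ... */
	unsigned long phys_start = 0x180000UL;
	unsigned long phys_end   = 0x580000UL;
	/* ... mapped at an arbitrary, also unaligned, virtual offset. */
	unsigned long off = 0x700000040000UL;

	toy_map_range(phys_start, phys_end, off);
	return 0;
}

Deriving every index from the virtual address keeps all levels consistent even
when the start of the range is not aligned to a step boundary; the subtraction
back to the physical address happens only where the entry value is built, as
the patch does with __pmd(vaddr - off | info->pmd_flag).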