From patchwork Fri Oct 13 17:32:14 2017
X-Patchwork-Submitter: Pavel Tatashin <pasha.tatashin@oracle.com>
X-Patchwork-Id: 10005447
From: Pavel Tatashin <pasha.tatashin@oracle.com>
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
    heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
    mhocko@kernel.org, ard.biesheuvel@linaro.org, mark.rutland@arm.com,
    will.deacon@arm.com, catalin.marinas@arm.com, sam@ravnborg.org,
    mgorman@techsingularity.net, akpm@linux-foundation.org,
    steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    bob.picco@oracle.com
Subject: [PATCH v12 11/11] arm64: kasan: Avoid using vmemmap_populate to initialise shadow
Date: Fri, 13 Oct 2017 13:32:14 -0400
Message-Id: <20171013173214.27300-12-pasha.tatashin@oracle.com>
In-Reply-To: <20171013173214.27300-1-pasha.tatashin@oracle.com>
References: <20171013173214.27300-1-pasha.tatashin@oracle.com>
X-Mailer: git-send-email 2.14.2

From: Will Deacon <will.deacon@arm.com>

The kasan shadow is currently mapped using vmemmap_populate() since that
provides a semi-convenient way to map pages into swapper. However, since
vmemmap_populate() no longer zeroes the mapped pages, it is no longer
suitable for kasan, which requires that the shadow is zeroed in order to
avoid false positives.

This patch removes our reliance on vmemmap_populate() and instead reuses
the existing kasan page table code, which is already required for creating
the early shadow.
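(For illustration: kasan keeps one shadow byte per 8 bytes of memory, and a
shadow value of 0 means "all 8 bytes addressable". The standalone sketch
below is not kernel code -- the toy shadow[] array and access_ok() helper
are invented here -- but it shows how a stale, never-zeroed shadow byte
turns a perfectly valid access into a false positive.)

/* Illustration only, NOT kernel code: the shadow check in miniature. */
#include <stdint.h>
#include <stdio.h>

#define SHADOW_SCALE_SHIFT 3		/* 1 shadow byte per 8 bytes */

static int8_t shadow[512];		/* toy shadow for a 4K region */

/* Simplified form of the check kasan performs on every memory access */
static int access_ok(uintptr_t addr, size_t size)
{
	int8_t s = shadow[addr >> SHADOW_SCALE_SHIFT];

	if (s == 0)			/* whole 8-byte granule addressable */
		return 1;
	/* partially addressable granule: only the first s bytes are valid */
	return (int8_t)((addr & 7) + size) <= s;
}

int main(void)
{
	/* shadow[] is zero-initialised, as a properly zeroed shadow would be */
	printf("zeroed shadow: %s\n", access_ok(16, 8) ? "ok" : "report");

	/* simulate a never-zeroed shadow page full of stale garbage */
	shadow[16 >> SHADOW_SCALE_SHIFT] = (int8_t)0xAA;
	printf("stale shadow:  %s\n", access_ok(16, 8) ? "ok" : "report");
	return 0;
}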
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/arm64/Kconfig         |   2 +-
 arch/arm64/mm/kasan_init.c | 180 +++++++++++++++++++--------------------------
 2 files changed, 76 insertions(+), 106 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0df64a6a56d4..888580b9036e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -68,7 +68,7 @@ config ARM64
 	select HAVE_ARCH_BITREVERSE
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
-	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index cb4af2951c90..acba49fb5aac 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -11,6 +11,7 @@
  */
 
 #define pr_fmt(fmt) "kasan: " fmt
+#include <linux/bootmem.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
 #include <linux/sched/task.h>
@@ -28,66 +29,6 @@
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
 
-/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
-static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
-					int node)
-{
-	unsigned long addr, pfn, next;
-	unsigned long long size;
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-	int ret;
-
-	ret = vmemmap_populate(start, end, node);
-	/*
-	 * We might have partially populated memory, so check for no entries,
-	 * and zero only those that actually exist.
-	 */
-	for (addr = start; addr < end; addr = next) {
-		pgd = pgd_offset_k(addr);
-		if (pgd_none(*pgd)) {
-			next = pgd_addr_end(addr, end);
-			continue;
-		}
-
-		pud = pud_offset(pgd, addr);
-		if (pud_none(*pud)) {
-			next = pud_addr_end(addr, end);
-			continue;
-		}
-		if (pud_sect(*pud)) {
-			/* This is PUD size page */
-			next = pud_addr_end(addr, end);
-			size = PUD_SIZE;
-			pfn = pud_pfn(*pud);
-		} else {
-			pmd = pmd_offset(pud, addr);
-			if (pmd_none(*pmd)) {
-				next = pmd_addr_end(addr, end);
-				continue;
-			}
-			if (pmd_sect(*pmd)) {
-				/* This is PMD size page */
-				next = pmd_addr_end(addr, end);
-				size = PMD_SIZE;
-				pfn = pmd_pfn(*pmd);
-			} else {
-				pte = pte_offset_kernel(pmd, addr);
-				next = addr + PAGE_SIZE;
-				if (pte_none(*pte))
-					continue;
-				/* This is base size page */
-				size = PAGE_SIZE;
-				pfn = pte_pfn(*pte);
-			}
-		}
-		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
-	}
-	return ret;
-}
-
 /*
  * The p*d_populate functions call virt_to_phys implicitly so they can't be used
  * directly on kernel symbols (bm_p*d). All the early functions are called too
@@ -95,77 +36,117 @@ static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
  * early to use lm_alias so __p*d_populate functions must be used to populate
  * with the physical address from __pa_symbol.
  */
 
-static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
-					unsigned long end)
+static phys_addr_t __init kasan_alloc_zeroed_page(int node)
 {
-	pte_t *pte;
-	unsigned long next;
+	void *p = memblock_virt_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
+					      __pa(MAX_DMA_ADDRESS),
+					      MEMBLOCK_ALLOC_ACCESSIBLE, node);
+	return __pa(p);
+}
 
-	if (pmd_none(*pmd))
-		__pmd_populate(pmd, __pa_symbol(kasan_zero_pte), PMD_TYPE_TABLE);
+static pte_t *__init kasan_pte_offset(pmd_t *pmd, unsigned long addr, int node,
+				      bool early)
+{
+	if (pmd_none(*pmd)) {
+		phys_addr_t pte_phys = early ? __pa_symbol(kasan_zero_pte)
+					     : kasan_alloc_zeroed_page(node);
+		__pmd_populate(pmd, pte_phys, PMD_TYPE_TABLE);
+	}
+
+	return early ? pte_offset_kimg(pmd, addr)
+		     : pte_offset_kernel(pmd, addr);
+}
+
+static pmd_t *__init kasan_pmd_offset(pud_t *pud, unsigned long addr, int node,
+				      bool early)
+{
+	if (pud_none(*pud)) {
+		phys_addr_t pmd_phys = early ? __pa_symbol(kasan_zero_pmd)
+					     : kasan_alloc_zeroed_page(node);
+		__pud_populate(pud, pmd_phys, PMD_TYPE_TABLE);
+	}
+
+	return early ? pmd_offset_kimg(pud, addr) : pmd_offset(pud, addr);
+}
+
+static pud_t *__init kasan_pud_offset(pgd_t *pgd, unsigned long addr, int node,
+				      bool early)
+{
+	if (pgd_none(*pgd)) {
+		phys_addr_t pud_phys = early ? __pa_symbol(kasan_zero_pud)
+					     : kasan_alloc_zeroed_page(node);
+		__pgd_populate(pgd, pud_phys, PMD_TYPE_TABLE);
+	}
+
+	return early ? pud_offset_kimg(pgd, addr) : pud_offset(pgd, addr);
+}
+
+static void __init kasan_pte_populate(pmd_t *pmd, unsigned long addr,
+				      unsigned long end, int node, bool early)
+{
+	unsigned long next;
+	pte_t *pte = kasan_pte_offset(pmd, addr, node, early);
 
-	pte = pte_offset_kimg(pmd, addr);
 	do {
+		phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
+					      : kasan_alloc_zeroed_page(node);
 		next = addr + PAGE_SIZE;
-		set_pte(pte, pfn_pte(sym_to_pfn(kasan_zero_page),
-					PAGE_KERNEL));
+		set_pte(pte, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (pte++, addr = next, addr != end && pte_none(*pte));
 }
 
-static void __init kasan_early_pmd_populate(pud_t *pud,
-					unsigned long addr,
-					unsigned long end)
+static void __init kasan_pmd_populate(pud_t *pud, unsigned long addr,
+				      unsigned long end, int node, bool early)
 {
-	pmd_t *pmd;
 	unsigned long next;
+	pmd_t *pmd = kasan_pmd_offset(pud, addr, node, early);
 
-	if (pud_none(*pud))
-		__pud_populate(pud, __pa_symbol(kasan_zero_pmd), PMD_TYPE_TABLE);
-
-	pmd = pmd_offset_kimg(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		kasan_early_pte_populate(pmd, addr, next);
+		kasan_pte_populate(pmd, addr, next, node, early);
 	} while (pmd++, addr = next, addr != end && pmd_none(*pmd));
 }
 
-static void __init kasan_early_pud_populate(pgd_t *pgd,
-					unsigned long addr,
-					unsigned long end)
+static void __init kasan_pud_populate(pgd_t *pgd, unsigned long addr,
+				      unsigned long end, int node, bool early)
 {
-	pud_t *pud;
 	unsigned long next;
+	pud_t *pud = kasan_pud_offset(pgd, addr, node, early);
 
-	if (pgd_none(*pgd))
-		__pgd_populate(pgd, __pa_symbol(kasan_zero_pud), PUD_TYPE_TABLE);
-
-	pud = pud_offset_kimg(pgd, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		kasan_early_pmd_populate(pud, addr, next);
+		kasan_pmd_populate(pud, addr, next, node, early);
 	} while (pud++, addr = next, addr != end && pud_none(*pud));
 }
 
-static void __init kasan_map_early_shadow(void)
+static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
+				      int node, bool early)
 {
-	unsigned long addr = KASAN_SHADOW_START;
-	unsigned long end = KASAN_SHADOW_END;
 	unsigned long next;
 	pgd_t *pgd;
 
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		kasan_early_pud_populate(pgd, addr, next);
+		kasan_pud_populate(pgd, addr, next, node, early);
 	} while (pgd++, addr = next, addr != end);
 }
 
+/* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
 		     KASAN_SHADOW_END - (1UL << 61));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
-	kasan_map_early_shadow();
+	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
+			   true);
+}
+
+/* Set up full kasan mappings, ensuring that the mapped pages are zeroed */
+static void __init kasan_map_populate(unsigned long start, unsigned long end,
+				      int node)
+{
+	kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
 }
 
 /*
@@ -202,8 +183,8 @@ void __init kasan_init(void)
 	struct memblock_region *reg;
 	int i;
 
-	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text);
-	kimg_shadow_end = (u64)kasan_mem_to_shadow(_end);
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
+	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
 
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
@@ -224,17 +205,6 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	/*
-	 * kasan_map_populate() has populated the shadow region that covers the
-	 * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
-	 * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
-	 * kasan_populate_zero_shadow() from replacing the page table entries
-	 * (PMD or PTE) at the edges of the shadow region for the kernel
-	 * image.
-	 */
-	kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
-	kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);
-
 	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
 				   (void *)mod_shadow_start);
 	kasan_populate_zero_shadow((void *)kimg_shadow_end,
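A note on the rounding in the kasan_init() hunk above: the shadow for the
kernel image is now mapped with base pages, so rounding kimg_shadow_start
down with PAGE_MASK and kimg_shadow_end up with PAGE_ALIGN() replaces the
old SWAPPER_BLOCK_SIZE rounding. A small userspace sketch of the same
arithmetic (the shadow addresses are made up, and PAGE_SIZE is assumed to
be 4 KiB, the arm64 default):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	((uint64_t)1 << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	/* hypothetical, unaligned shadow addresses for _text and _end */
	uint64_t kimg_shadow_start = 0xffff20000800f123ULL;
	uint64_t kimg_shadow_end   = 0xffff200008a95abcULL;

	kimg_shadow_start &= PAGE_MASK;			/* round start down */
	kimg_shadow_end = PAGE_ALIGN(kimg_shadow_end);	/* round end up */

	printf("start %#llx\nend   %#llx\n",
	       (unsigned long long)kimg_shadow_start,
	       (unsigned long long)kimg_shadow_end);
	return 0;
}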