From patchwork Wed Apr 3 14:16:27 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steven Price <steven.price@arm.com>
X-Patchwork-Id: 10883951
From: Steven Price <steven.price@arm.com>
To: linux-mm@kvack.org
Cc: Mark Rutland, x86@kernel.org, Arnd Bergmann, Ard Biesheuvel,
 Peter Zijlstra, Catalin Marinas, Dave Hansen, Will Deacon,
 linux-kernel@vger.kernel.org, Steven Price, Jérôme Glisse,
 Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin",
 James Morse, Thomas Gleixner, Andrew Morton,
 linux-arm-kernel@lists.infradead.org, "Liang, Kan"
Subject: [PATCH v8 20/20] x86: mm: Convert dump_pagetables to use walk_page_range
Date: Wed, 3 Apr 2019 15:16:27 +0100
Message-Id: <20190403141627.11664-21-steven.price@arm.com>
In-Reply-To: <20190403141627.11664-1-steven.price@arm.com>
References: <20190403141627.11664-1-steven.price@arm.com>

Make use of the new functionality in walk_page_range to remove the
arch page walking code and use the generic code to walk the page
tables. The effective permissions are passed down the chain using
new fields in struct pg_state.
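The combining rule is worth spelling out: USER and RW are only
effective if granted at every level of the walk, while NX takes effect
if set at any level. As a self-contained illustration (userspace C
with stand-in flag values, not the real _PAGE_* constants; the
kernel's effective_prot() is visible in the diff below):

#include <stdio.h>

typedef unsigned long pgprotval_t;

/* Stand-in flag bits, not the real x86 _PAGE_* values. */
#define F_USER 0x1UL
#define F_RW   0x2UL
#define F_NX   0x4UL

/* Same rule as effective_prot() in dump_pagetables.c: USER and RW must
 * be granted at both levels to remain effective, NX dominates if set
 * at either level. */
static pgprotval_t effective(pgprotval_t prot1, pgprotval_t prot2)
{
	return (prot1 & prot2 & (F_USER | F_RW)) |
	       ((prot1 | prot2) & F_NX);
}

int main(void)
{
	/* PMD grants USER|RW, PTE grants RW|NX: the effective result is
	 * RW|NX, since USER was dropped at the PTE and NX added there. */
	pgprotval_t eff = effective(F_USER | F_RW, F_RW | F_NX);

	printf("user=%d rw=%d nx=%d\n", !!(eff & F_USER),
	       !!(eff & F_RW), !!(eff & F_NX));
	return 0;
}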
The KASAN optimisation is implemented by including test_p?d callbacks
which can decide to skip an entire subtree of entries.

Signed-off-by: Steven Price <steven.price@arm.com>
---
 arch/x86/mm/dump_pagetables.c | 280 ++++++++++++++++++----------------
 1 file changed, 146 insertions(+), 134 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index c0fbb9e5a790..f6b814aaddf7 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -33,6 +33,10 @@ struct pg_state {
 	int level;
 	pgprot_t current_prot;
 	pgprotval_t effective_prot;
+	pgprotval_t effective_prot_pgd;
+	pgprotval_t effective_prot_p4d;
+	pgprotval_t effective_prot_pud;
+	pgprotval_t effective_prot_pmd;
 	unsigned long start_address;
 	unsigned long current_address;
 	const struct addr_marker *marker;
@@ -356,22 +360,21 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
 	       ((prot1 | prot2) & _PAGE_NX);
 }
 
-static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in,
-			   unsigned long P)
+static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
 {
-	int i;
-	pte_t *pte;
-	pgprotval_t prot, eff;
-
-	for (i = 0; i < PTRS_PER_PTE; i++) {
-		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
-		pte = pte_offset_map(&addr, st->current_address);
-		prot = pte_flags(*pte);
-		eff = effective_prot(eff_in, prot);
-		note_page(st, __pgprot(prot), eff, 5);
-		pte_unmap(pte);
-	}
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	st->current_address = normalize_addr(addr);
+
+	prot = pte_flags(*pte);
+	eff = effective_prot(st->effective_prot_pmd, prot);
+	note_page(st, __pgprot(prot), eff, 5);
+
+	return 0;
 }
+
 #ifdef CONFIG_KASAN
 
 /*
@@ -400,131 +403,152 @@ static inline bool kasan_page_table(struct pg_state *st, void *pt)
 }
 #endif
 
-#if PTRS_PER_PMD > 1
-
-static void walk_pmd_level(struct pg_state *st, pud_t addr,
-			   pgprotval_t eff_in, unsigned long P)
+static int ptdump_test_pmd(unsigned long addr, unsigned long next,
+			   pmd_t *pmd, struct mm_walk *walk)
 {
-	int i;
-	pmd_t *start, *pmd_start;
-	pgprotval_t prot, eff;
-
-	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
-	for (i = 0; i < PTRS_PER_PMD; i++) {
-		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
-		if (!pmd_none(*start)) {
-			prot = pmd_flags(*start);
-			eff = effective_prot(eff_in, prot);
-			if (pmd_large(*start) || !pmd_present(*start)) {
-				note_page(st, __pgprot(prot), eff, 4);
-			} else if (!kasan_page_table(st, pmd_start)) {
-				walk_pte_level(st, *start, eff,
-					       P + i * PMD_LEVEL_MULT);
-			}
-		} else
-			note_page(st, __pgprot(0), 0, 4);
-		start++;
-	}
+	struct pg_state *st = walk->private;
+
+	st->current_address = normalize_addr(addr);
+
+	if (kasan_page_table(st, pmd))
+		return 1;
+	return 0;
 }
 
-#else
-#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p)
-#undef pud_large
-#define pud_large(a) pmd_large(__pmd(pud_val(a)))
-#define pud_none(a) pmd_none(__pmd(pud_val(a)))
-#endif
+static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = pmd_flags(*pmd);
+	eff = effective_prot(st->effective_prot_pud, prot);
+
+	st->current_address = normalize_addr(addr);
+
+	if (pmd_large(*pmd))
+		note_page(st, __pgprot(prot), eff, 4);
 
-#if PTRS_PER_PUD > 1
+	st->effective_prot_pmd = eff;
 
-static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in,
-			   unsigned long P)
+	return 0;
+}
+
+static int ptdump_test_pud(unsigned long addr, unsigned long next,
+			   pud_t *pud, struct mm_walk *walk)
 {
-	int i;
-	pud_t *start, *pud_start;
-	pgprotval_t prot, eff;
-
-	pud_start = start = (pud_t *)p4d_page_vaddr(addr);
-
-	for (i = 0; i < PTRS_PER_PUD; i++) {
-		st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT);
-		if (!pud_none(*start)) {
-			prot = pud_flags(*start);
-			eff = effective_prot(eff_in, prot);
-			if (pud_large(*start) || !pud_present(*start)) {
-				note_page(st, __pgprot(prot), eff, 3);
-			} else if (!kasan_page_table(st, pud_start)) {
-				walk_pmd_level(st, *start, eff,
-					       P + i * PUD_LEVEL_MULT);
-			}
-		} else
-			note_page(st, __pgprot(0), 0, 3);
+	struct pg_state *st = walk->private;
 
-		start++;
-	}
+	st->current_address = normalize_addr(addr);
+
+	if (kasan_page_table(st, pud))
+		return 1;
+	return 0;
 }
 
-#else
-#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p)
-#undef p4d_large
-#define p4d_large(a) pud_large(__pud(p4d_val(a)))
-#define p4d_none(a) pud_none(__pud(p4d_val(a)))
-#endif
+static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = pud_flags(*pud);
+	eff = effective_prot(st->effective_prot_p4d, prot);
+
+	st->current_address = normalize_addr(addr);
+
+	if (pud_large(*pud))
+		note_page(st, __pgprot(prot), eff, 3);
+
+	st->effective_prot_pud = eff;
 
-static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in,
-			   unsigned long P)
+	return 0;
+}
+
+static int ptdump_test_p4d(unsigned long addr, unsigned long next,
+			   p4d_t *p4d, struct mm_walk *walk)
 {
-	int i;
-	p4d_t *start, *p4d_start;
-	pgprotval_t prot, eff;
-
-	if (PTRS_PER_P4D == 1)
-		return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P);
-
-	p4d_start = start = (p4d_t *)pgd_page_vaddr(addr);
-
-	for (i = 0; i < PTRS_PER_P4D; i++) {
-		st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT);
-		if (!p4d_none(*start)) {
-			prot = p4d_flags(*start);
-			eff = effective_prot(eff_in, prot);
-			if (p4d_large(*start) || !p4d_present(*start)) {
-				note_page(st, __pgprot(prot), eff, 2);
-			} else if (!kasan_page_table(st, p4d_start)) {
-				walk_pud_level(st, *start, eff,
-					       P + i * P4D_LEVEL_MULT);
-			}
-		} else
-			note_page(st, __pgprot(0), 0, 2);
+	struct pg_state *st = walk->private;
 
-		start++;
-	}
+	st->current_address = normalize_addr(addr);
+
+	if (kasan_page_table(st, p4d))
+		return 1;
+	return 0;
 }
 
-#undef pgd_large
-#define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a))))
-#define pgd_none(a) (pgtable_l5_enabled() ? pgd_none(a) : p4d_none(__p4d(pgd_val(a))))
+static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = p4d_flags(*p4d);
+	eff = effective_prot(st->effective_prot_pgd, prot);
+
+	st->current_address = normalize_addr(addr);
+
+	if (p4d_large(*p4d))
+		note_page(st, __pgprot(prot), eff, 2);
+
+	st->effective_prot_p4d = eff;
+
+	return 0;
+}
 
-static inline bool is_hypervisor_range(int idx)
+static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
 {
-#ifdef CONFIG_X86_64
-	/*
-	 * A hole in the beginning of kernel address space reserved
-	 * for a hypervisor.
-	 */
-	return (idx >= pgd_index(GUARD_HOLE_BASE_ADDR)) &&
-		(idx < pgd_index(GUARD_HOLE_END_ADDR));
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = pgd_flags(*pgd);
+
+#ifdef CONFIG_X86_PAE
+	eff = _PAGE_USER | _PAGE_RW;
 #else
-	return false;
+	eff = prot;
 #endif
+
+	st->current_address = normalize_addr(addr);
+
+	if (pgd_large(*pgd))
+		note_page(st, __pgprot(prot), eff, 1);
+
+	st->effective_prot_pgd = eff;
+
+	return 0;
+}
+
+static int ptdump_hole(unsigned long addr, unsigned long next,
+		       struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+
+	st->current_address = normalize_addr(addr);
+
+	note_page(st, __pgprot(0), 0, -1);
+
+	return 0;
 }
 
 static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
 				       bool checkwx, bool dmesg)
 {
-	pgd_t *start = mm->pgd;
-	pgprotval_t prot, eff;
-	int i;
 	struct pg_state st = {};
+	struct mm_walk walk = {
+		.mm		= mm,
+		.pgd_entry	= ptdump_pgd_entry,
+		.p4d_entry	= ptdump_p4d_entry,
+		.pud_entry	= ptdump_pud_entry,
+		.pmd_entry	= ptdump_pmd_entry,
+		.pte_entry	= ptdump_pte_entry,
+		.test_p4d	= ptdump_test_p4d,
+		.test_pud	= ptdump_test_pud,
+		.test_pmd	= ptdump_test_pmd,
+		.pte_hole	= ptdump_hole,
+		.private	= &st
+	};
 
 	st.to_dmesg = dmesg;
 	st.check_wx = checkwx;
@@ -532,27 +556,15 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
 	if (checkwx)
 		st.wx_pages = 0;
 
-	for (i = 0; i < PTRS_PER_PGD; i++) {
-		st.current_address = normalize_addr(i * PGD_LEVEL_MULT);
-		if (!pgd_none(*start) && !is_hypervisor_range(i)) {
-			prot = pgd_flags(*start);
-#ifdef CONFIG_X86_PAE
-			eff = _PAGE_USER | _PAGE_RW;
+	down_read(&mm->mmap_sem);
+#ifdef CONFIG_X86_64
+	walk_page_range(0, PTRS_PER_PGD*PGD_LEVEL_MULT/2, &walk);
+	walk_page_range(normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT/2), ~0,
+			&walk);
 #else
-			eff = prot;
+	walk_page_range(0, ~0, &walk);
 #endif
-			if (pgd_large(*start) || !pgd_present(*start)) {
-				note_page(&st, __pgprot(prot), eff, 1);
-			} else {
-				walk_p4d_level(&st, *start, eff,
-					       i * PGD_LEVEL_MULT);
-			}
-		} else
-			note_page(&st, __pgprot(0), 0, 1);
-
-		cond_resched();
-		start++;
-	}
+	up_read(&mm->mmap_sem);
 
 	/* Flush out the last page */
 	st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT);
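
A note on the test_p?d contract relied on above: with this series, a
non-zero return from a test callback tells walk_page_range() that the
entry has already been handled, so the walker does not descend below
it (the kasan_page_table() case emits one note_page() for the shared
zero shadow and returns true). A minimal userspace model of that
control flow, with purely illustrative names rather than the kernel
API:

#include <stdio.h>

/* Toy two-level table: each "pmd" covers PMD_SPAN bytes. */
#define PMD_SPAN 0x200000UL	/* 2MB, as on x86-64 */
#define PTE_SPAN 0x1000UL	/* 4KB */

struct toy_walk {
	/* non-zero return => do not descend under this entry */
	int (*test_pmd)(unsigned long addr);
	void (*pte_entry)(unsigned long addr);
};

static void toy_walk_range(unsigned long start, unsigned long end,
			   const struct toy_walk *walk)
{
	unsigned long pmd, pte;

	for (pmd = start; pmd < end; pmd += PMD_SPAN) {
		if (walk->test_pmd && walk->test_pmd(pmd))
			continue;	/* whole subtree skipped */
		for (pte = pmd; pte < pmd + PMD_SPAN; pte += PTE_SPAN)
			walk->pte_entry(pte);
	}
}

/* Pretend everything below 4MB is a uniform (KASAN-style) region:
 * report it once per pmd and skip the 512 pte callbacks underneath. */
static int skip_uniform(unsigned long addr)
{
	if (addr < 0x400000UL) {
		printf("pmd at %#lx: uniform, subtree skipped\n", addr);
		return 1;
	}
	return 0;
}

static void print_pte(unsigned long addr)
{
	/* The real walker would call note_page() here. */
	(void)addr;
}

int main(void)
{
	const struct toy_walk walk = {
		.test_pmd  = skip_uniform,
		.pte_entry = print_pte,
	};

	toy_walk_range(0, 0x800000UL, &walk);
	return 0;
}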