From patchwork Tue Nov 5 14:33:19 2019
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11227965
From: Mike Rapoport
To: Chris Zankel, Max Filippov
Cc: linux-xtensa@linux-xtensa.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Mike Rapoport, Mike Rapoport
Subject: [PATCH 1/2] xtensa: mm: fix PMD folding implementation
Date: Tue, 5 Nov 2019 16:33:19 +0200
Message-Id: <1572964400-16542-2-git-send-email-rppt@kernel.org>
In-Reply-To: <1572964400-16542-1-git-send-email-rppt@kernel.org>
References: <1572964400-16542-1-git-send-email-rppt@kernel.org>

From: Mike Rapoport

There was a definition of pmd_offset() in arch/xtensa/include/asm/pgtable.h
that shadowed the generic implementation defined in
include/asm-generic/pgtable-nopmd.h. As a result, xtensa took shortcuts in
page table traversal in several places instead of doing level unfolding.

Remove the local override for pmd_offset() and add page table unfolding
where necessary.

Signed-off-by: Mike Rapoport
---
 arch/xtensa/include/asm/pgtable.h |  3 ---
 arch/xtensa/mm/fault.c            | 10 ++++++++--
 arch/xtensa/mm/kasan_init.c       |  6 ++++--
 arch/xtensa/mm/mmu.c              |  3 ++-
 arch/xtensa/mm/tlb.c              |  6 +++++-
 5 files changed, 19 insertions(+), 9 deletions(-)
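
For context (not part of the patch): the generic implementation that the
removed macro was shadowing lives in include/asm-generic/pgtable-nopmd.h.
Paraphrased roughly, not verbatim, the folded helpers amount to the
following: pmd_offset() just reinterprets the entry one level up, and the
presence checks for a folded level are compile-time constants.

        /* rough sketch of the pgtable-nopmd.h folding helpers, paraphrased */
        static inline int pud_none(pud_t pud)       { return 0; }
        static inline int pud_bad(pud_t pud)        { return 0; }
        static inline int pud_present(pud_t pud)    { return 1; }

        static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
        {
                return (pmd_t *)pud;    /* the pmd "table" is the pud entry itself */
        }

The xtensa-local pmd_offset(dir, address) macro achieved the same cast but
hid the generic definition, which is why several call sites could skip the
unfolding step entirely.
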
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 3f7fe5a..af72f02 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -371,9 +371,6 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 
 #define pgd_index(address)      ((address) >> PGDIR_SHIFT)
 
-/* Find an entry in the second-level page table.. */
-#define pmd_offset(dir,address) ((pmd_t*)(dir))
-
 /* Find an entry in the third-level page table.. */
 #define pte_index(address)      (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
 #define pte_offset_kernel(dir,addr) \
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index f81b147..68a0414 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -197,6 +197,7 @@ void do_page_fault(struct pt_regs *regs)
                struct mm_struct *act_mm = current->active_mm;
                int index = pgd_index(address);
                pgd_t *pgd, *pgd_k;
+               pud_t *pud, *pud_k;
                pmd_t *pmd, *pmd_k;
                pte_t *pte_k;
 
@@ -211,8 +212,13 @@ void do_page_fault(struct pt_regs *regs)
 
                pgd_val(*pgd) = pgd_val(*pgd_k);
 
-               pmd = pmd_offset(pgd, address);
-               pmd_k = pmd_offset(pgd_k, address);
+               pud = pud_offset(pgd, address);
+               pud_k = pud_offset(pgd_k, address);
+               if (!pud_present(*pud) || !pud_present(*pud_k))
+                       goto bad_page_fault;
+
+               pmd = pmd_offset(pud, address);
+               pmd_k = pmd_offset(pud_k, address);
 
                if (!pmd_present(*pmd) || !pmd_present(*pmd_k))
                        goto bad_page_fault;
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index af71525..ace98bd 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -20,7 +20,8 @@ void __init kasan_early_init(void)
 {
        unsigned long vaddr = KASAN_SHADOW_START;
        pgd_t *pgd = pgd_offset_k(vaddr);
-       pmd_t *pmd = pmd_offset(pgd, vaddr);
+       pud_t *pud = pud_offset(pgd, vaddr);
+       pmd_t *pmd = pmd_offset(pud, vaddr);
        int i;
 
        for (i = 0; i < PTRS_PER_PTE; ++i)
@@ -42,7 +43,8 @@ static void __init populate(void *start, void *end)
        unsigned long i, j;
        unsigned long vaddr = (unsigned long)start;
        pgd_t *pgd = pgd_offset_k(vaddr);
-       pmd_t *pmd = pmd_offset(pgd, vaddr);
+       pud_t *pud = pud_offset(pgd, vaddr);
+       pmd_t *pmd = pmd_offset(pud, vaddr);
        pte_t *pte = memblock_alloc(n_pages * sizeof(pte_t), PAGE_SIZE);
 
        if (!pte)
diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
index 03678c4..018dda2 100644
--- a/arch/xtensa/mm/mmu.c
+++ b/arch/xtensa/mm/mmu.c
@@ -22,7 +22,8 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
 {
        pgd_t *pgd = pgd_offset_k(vaddr);
-       pmd_t *pmd = pmd_offset(pgd, vaddr);
+       pud_t *pud = pud_offset(pgd, vaddr);
+       pmd_t *pmd = pmd_offset(pud, vaddr);
        pte_t *pte;
        unsigned long i;
diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
index 59153d0..164a2ca 100644
--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -169,6 +169,7 @@ static unsigned get_pte_for_vaddr(unsigned vaddr)
        struct task_struct *task = get_current();
        struct mm_struct *mm = task->mm;
        pgd_t *pgd;
+       pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;
 
@@ -177,7 +178,10 @@ static unsigned get_pte_for_vaddr(unsigned vaddr)
        pgd = pgd_offset(mm, vaddr);
        if (pgd_none_or_clear_bad(pgd))
                return 0;
-       pmd = pmd_offset(pgd, vaddr);
+       pud = pud_offset(pgd, vaddr);
+       if (pud_none_or_clear_bad(pud))
+               return 0;
+       pmd = pmd_offset(pud, vaddr);
        if (pmd_none_or_clear_bad(pmd))
                return 0;
        pte = pte_offset_map(pmd, vaddr);
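
A quick way to see that this patch does not change behaviour on two-level
xtensa (illustration only, not part of the patch): while the pud and pmd
levels are folded, the steps added above collapse at compile time, e.g. in
get_pte_for_vaddr():

        pud = (pud_t *)pgd;     /* what pud_offset() does on a folded pud   */
        /* pud_none_or_clear_bad(pud) is always false, so the new
         * "return 0" can never be taken */
        pmd = (pmd_t *)pud;     /* pmd_offset() under pgtable-nopmd folding */

so the resulting pmd still points at the same pgd entry the old shortcut
produced.
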
From patchwork Tue Nov 5 14:33:20 2019
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11227967
From: Mike Rapoport
To: Chris Zankel, Max Filippov
Cc: linux-xtensa@linux-xtensa.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Mike Rapoport, Mike Rapoport
Subject: [PATCH 2/2] xtensa: get rid of __ARCH_USE_5LEVEL_HACK
Date: Tue, 5 Nov 2019 16:33:20 +0200
Message-Id: <1572964400-16542-3-git-send-email-rppt@kernel.org>
In-Reply-To: <1572964400-16542-1-git-send-email-rppt@kernel.org>
References: <1572964400-16542-1-git-send-email-rppt@kernel.org>

From: Mike Rapoport

xtensa has 2-level page tables and already uses pgtable-nopmd for page
table folding. Add walks of the p4d level where appropriate and drop the
usage of __ARCH_USE_5LEVEL_HACK.

Signed-off-by: Mike Rapoport
---
 arch/xtensa/include/asm/pgtable.h |  1 -
 arch/xtensa/mm/fault.c            | 10 ++++++++--
 arch/xtensa/mm/kasan_init.c       |  6 ++++--
 arch/xtensa/mm/mmu.c              |  3 ++-
 arch/xtensa/mm/tlb.c              |  5 ++++-
 5 files changed, 18 insertions(+), 7 deletions(-)
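
For context (not part of the patch): once __ARCH_USE_5LEVEL_HACK is gone,
pud_offset() takes a p4d_t * rather than a pgd_t *, so every walk needs an
explicit p4d step. On a folded configuration that step is again just a
cast; paraphrased roughly, not verbatim, from
include/asm-generic/pgtable-nop4d.h:

        /* rough sketch of the p4d folding helpers, paraphrased */
        static inline int pgd_none(pgd_t pgd)       { return 0; }
        static inline int pgd_present(pgd_t pgd)    { return 1; }

        static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
        {
                return (p4d_t *)pgd;    /* the p4d "table" is the pgd entry itself */
        }

Adding the p4d walks therefore costs nothing on 2-level xtensa while
keeping the arch code in the shape the generic mm expects.
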
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index af72f02..27ac17c 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -8,7 +8,6 @@
 #ifndef _XTENSA_PGTABLE_H
 #define _XTENSA_PGTABLE_H
 
-#define __ARCH_USE_5LEVEL_HACK
 #include
 #include
 #include
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index 68a0414..bee30a7 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -197,6 +197,7 @@ void do_page_fault(struct pt_regs *regs)
                struct mm_struct *act_mm = current->active_mm;
                int index = pgd_index(address);
                pgd_t *pgd, *pgd_k;
+               p4d_t *p4d, *p4d_k;
                pud_t *pud, *pud_k;
                pmd_t *pmd, *pmd_k;
                pte_t *pte_k;
@@ -212,8 +213,13 @@ void do_page_fault(struct pt_regs *regs)
 
                pgd_val(*pgd) = pgd_val(*pgd_k);
 
-               pud = pud_offset(pgd, address);
-               pud_k = pud_offset(pgd_k, address);
+               p4d = p4d_offset(pgd, address);
+               p4d_k = p4d_offset(pgd_k, address);
+               if (!p4d_present(*p4d) || !p4d_present(*p4d_k))
+                       goto bad_page_fault;
+
+               pud = pud_offset(p4d, address);
+               pud_k = pud_offset(p4d_k, address);
 
                if (!pud_present(*pud) || !pud_present(*pud_k))
                        goto bad_page_fault;
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index ace98bd..9c95779 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -20,7 +20,8 @@ void __init kasan_early_init(void)
 {
        unsigned long vaddr = KASAN_SHADOW_START;
        pgd_t *pgd = pgd_offset_k(vaddr);
-       pud_t *pud = pud_offset(pgd, vaddr);
+       p4d_t *p4d = p4d_offset(pgd, vaddr);
+       pud_t *pud = pud_offset(p4d, vaddr);
        pmd_t *pmd = pmd_offset(pud, vaddr);
        int i;
 
@@ -43,7 +44,8 @@ static void __init populate(void *start, void *end)
        unsigned long i, j;
        unsigned long vaddr = (unsigned long)start;
        pgd_t *pgd = pgd_offset_k(vaddr);
-       pud_t *pud = pud_offset(pgd, vaddr);
+       p4d_t *p4d = p4d_offset(pgd, vaddr);
+       pud_t *pud = pud_offset(p4d, vaddr);
        pmd_t *pmd = pmd_offset(pud, vaddr);
        pte_t *pte = memblock_alloc(n_pages * sizeof(pte_t), PAGE_SIZE);
diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
index 018dda2..37e478a 100644
--- a/arch/xtensa/mm/mmu.c
+++ b/arch/xtensa/mm/mmu.c
@@ -22,7 +22,8 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
 {
        pgd_t *pgd = pgd_offset_k(vaddr);
-       pud_t *pud = pud_offset(pgd, vaddr);
+       p4d_t *p4d = p4d_offset(pgd, vaddr);
+       pud_t *pud = pud_offset(p4d, vaddr);
        pmd_t *pmd = pmd_offset(pud, vaddr);
        pte_t *pte;
        unsigned long i;
diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
index 164a2ca..a460474 100644
--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -178,7 +178,10 @@ static unsigned get_pte_for_vaddr(unsigned vaddr)
        pgd = pgd_offset(mm, vaddr);
        if (pgd_none_or_clear_bad(pgd))
                return 0;
-       pud = pud_offset(pgd, vaddr);
+       p4d = p4d_offset(pgd, vaddr);
+       if (p4d_none_or_clear_bad(p4d))
+               return 0;
+       pud = pud_offset(p4d, vaddr);
        if (pud_none_or_clear_bad(pud))
                return 0;
        pmd = pmd_offset(pud, vaddr);
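
For reference, combining both patches, the walk in get_pte_for_vaddr() ends
up traversing every level explicitly (reconstructed from the two diffs;
declarations and surrounding code omitted):

        pgd = pgd_offset(mm, vaddr);
        if (pgd_none_or_clear_bad(pgd))
                return 0;
        p4d = p4d_offset(pgd, vaddr);
        if (p4d_none_or_clear_bad(p4d))
                return 0;
        pud = pud_offset(p4d, vaddr);
        if (pud_none_or_clear_bad(pud))
                return 0;
        pmd = pmd_offset(pud, vaddr);
        if (pmd_none_or_clear_bad(pmd))
                return 0;
        pte = pte_offset_map(pmd, vaddr);

On 2-level xtensa the p4d, pud and pmd steps are folded casts, so the
generated code is effectively unchanged; the gain is that the arch now
follows the generic page table layout without the 5-level hack.
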