From patchwork Tue Jul 5 15:46:54 2022
X-Patchwork-Id: 12906790
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 01/15] csky: drop definition of PTE_ORDER
Date: Tue, 5 Jul 2022 18:46:54 +0300
Message-Id: <20220705154708.181258-2-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PTE.
Since it's always hardwired to 0, simply drop it.
Signed-off-by: Mike Rapoport
Acked-by: Guo Ren
---
 arch/csky/include/asm/pgtable.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index bbe245117777..f8bb1e12334b 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -19,11 +19,10 @@
  * C-SKY is two-level paging structure:
  */
 #define PGD_ORDER	0
-#define PTE_ORDER	0
 
 #define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
 #define PTRS_PER_PMD	1
-#define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
+#define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))
 
 #define pte_ERROR(e) \
 	pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, (e).pte_low)
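Because PTE_ORDER was hardwired to 0, the shift it contributed was a no-op:
(PAGE_SIZE << 0) == PAGE_SIZE, so PTRS_PER_PTE is unchanged by the patch.
A minimal user-space sketch makes the equivalence explicit (illustration only;
the 4 KiB page size and the 4-byte pte_t stand-in below are assumptions, not
the csky definitions):

#include <assert.h>
#include <stdint.h>

#define EX_PAGE_SIZE 4096UL                      /* assumed page size           */
typedef struct { uint32_t pte_low; } ex_pte_t;   /* stand-in for the csky pte_t */

#define EX_OLD_PTRS_PER_PTE ((EX_PAGE_SIZE << 0) / sizeof(ex_pte_t)) /* PTE_ORDER == 0 */
#define EX_NEW_PTRS_PER_PTE (EX_PAGE_SIZE / sizeof(ex_pte_t))

int main(void)
{
	/* Both evaluate to 1024 entries per page table: the two macros are identical. */
	assert(EX_OLD_PTRS_PER_PTE == EX_NEW_PTRS_PER_PTE);
	return 0;
}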
From patchwork Tue Jul 5 15:46:55 2022
X-Patchwork-Id: 12906788
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 02/15] csky: drop definition of PGD_ORDER
Date: Tue, 5 Jul 2022 18:46:55 +0300
Message-Id: <20220705154708.181258-3-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PGD.
Since it's always hardwired to 0, simply drop it.

Signed-off-by: Mike Rapoport
Acked-by: Guo Ren
---
 arch/csky/include/asm/pgalloc.h | 2 +-
 arch/csky/include/asm/pgtable.h | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h
index bbbd0698b397..7d57e5da0914 100644
--- a/arch/csky/include/asm/pgalloc.h
+++ b/arch/csky/include/asm/pgalloc.h
@@ -44,7 +44,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 	pgd_t *ret;
 	pgd_t *init;
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
+	ret = (pgd_t *) __get_free_page(GFP_KERNEL);
 	if (ret) {
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init((unsigned long *)ret);

diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index f8bb1e12334b..0f1e2eda1601 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -18,9 +18,8 @@
 /*
  * C-SKY is two-level paging structure:
  */
-#define PGD_ORDER	0
 
-#define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
+#define PTRS_PER_PGD	(PAGE_SIZE / sizeof(pgd_t))
 #define PTRS_PER_PMD	1
 #define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))
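The pgd_alloc() hunk relies on the fact that, in include/linux/gfp.h,
__get_free_page() is defined as __get_free_pages() with order 0, so once
PGD_ORDER is known to be 0 the two calls are interchangeable. A sketch of
that reasoning (not part of the patch; the helper below is hypothetical):

/*
 * From include/linux/gfp.h:
 *   #define __get_free_page(gfp_mask) __get_free_pages((gfp_mask), 0)
 *
 * so for a one-page PGD the two forms request exactly the same allocation.
 */
static pgd_t *example_pgd_alloc(void)
{
	/* old: (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER), with PGD_ORDER == 0 */
	return (pgd_t *) __get_free_page(GFP_KERNEL);
}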
From patchwork Tue Jul 5 15:46:56 2022
X-Patchwork-Id: 12906789
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 03/15] mips: Rename PMD_ORDER to PMD_TABLE_ORDER
Date: Tue, 5 Jul 2022 18:46:56 +0300
Message-Id: <20220705154708.181258-4-rppt@kernel.org>

From: "Matthew Wilcox (Oracle)"

This is the order of the page table allocation, not the order of a PMD.
While at it, remove the unused definition of _PMD_ORDER in asm-offsets.

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Mike Rapoport
---
 arch/mips/include/asm/pgalloc.h    |  4 ++--
 arch/mips/include/asm/pgtable-32.h |  2 +-
 arch/mips/include/asm/pgtable-64.h | 18 +++++++++---------
 arch/mips/kernel/asm-offsets.c     |  3 ---
 4 files changed, 12 insertions(+), 15 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index 867e9c3db76e..0ef245cfcae9 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -67,12 +67,12 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 	pmd_t *pmd;
 	struct page *pg;
 
-	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER);
+	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
 	if (!pg)
 		return NULL;
 
 	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_pages(pg, PMD_ORDER);
+		__free_pages(pg, PMD_TABLE_ORDER);
 		return NULL;
 	}

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 95df9c293d8d..8d57bd5b0b94 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -82,7 +82,7 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
 #define PGD_ORDER	(__PGD_ORDER >= 0 ? __PGD_ORDER : 0)
 #define PUD_ORDER	aieeee_attempt_to_allocate_pud
-#define PMD_ORDER	aieeee_attempt_to_allocate_pmd
+#define PMD_TABLE_ORDER	aieeee_attempt_to_allocate_pmd
 #define PTE_ORDER	0
 
 #define PTRS_PER_PGD	(USER_PTRS_PER_PGD * 2)

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 41921acdc9d8..ae0d5a09064d 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -51,12 +51,12 @@
 #define PMD_MASK	(~(PMD_SIZE-1))
 
 # ifdef __PAGETABLE_PUD_FOLDED
-# define PGDIR_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_ORDER - 3))
+# define PGDIR_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_TABLE_ORDER - 3))
 # endif
 #endif
 
 #ifndef __PAGETABLE_PUD_FOLDED
-#define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_ORDER - 3))
+#define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_TABLE_ORDER - 3))
 #define PUD_SIZE	(1UL << PUD_SHIFT)
 #define PUD_MASK	(~(PUD_SIZE-1))
 #define PGDIR_SHIFT	(PUD_SHIFT + (PAGE_SHIFT + PUD_ORDER - 3))
@@ -91,13 +91,13 @@
 # define PGD_ORDER		1
 # define PUD_ORDER		aieeee_attempt_to_allocate_pud
 # endif
-#define PMD_ORDER		0
+#define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_8KB
 #define PGD_ORDER		0
 #define PUD_ORDER		aieeee_attempt_to_allocate_pud
-#define PMD_ORDER		0
+#define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_16KB
@@ -107,22 +107,22 @@
 #define PGD_ORDER		0
 #endif
 #define PUD_ORDER		aieeee_attempt_to_allocate_pud
-#define PMD_ORDER		0
+#define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_32KB
 #define PGD_ORDER		0
 #define PUD_ORDER		aieeee_attempt_to_allocate_pud
-#define PMD_ORDER		0
+#define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_64KB
 #define PGD_ORDER		0
 #define PUD_ORDER		aieeee_attempt_to_allocate_pud
 #ifdef CONFIG_MIPS_VA_BITS_48
-#define PMD_ORDER		0
+#define PMD_TABLE_ORDER		0
 #else
-#define PMD_ORDER		aieeee_attempt_to_allocate_pmd
+#define PMD_TABLE_ORDER		aieeee_attempt_to_allocate_pmd
 #endif
 #define PTE_ORDER		0
 #endif
@@ -132,7 +132,7 @@
 #define PTRS_PER_PUD	((PAGE_SIZE << PUD_ORDER) / sizeof(pud_t))
 #endif
 #ifndef __PAGETABLE_PMD_FOLDED
-#define PTRS_PER_PMD	((PAGE_SIZE << PMD_ORDER) / sizeof(pmd_t))
+#define PTRS_PER_PMD	((PAGE_SIZE << PMD_TABLE_ORDER) / sizeof(pmd_t))
 #endif
 #define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))

diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 04ca75278f02..ca7c5af7697d 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -197,9 +197,6 @@ void output_mm_defines(void)
 	DEFINE(_PTE_T_LOG2, PTE_T_LOG2);
 	BLANK();
 	DEFINE(_PGD_ORDER, PGD_ORDER);
-#ifndef __PAGETABLE_PMD_FOLDED
-	DEFINE(_PMD_ORDER, PMD_ORDER);
-#endif
 	DEFINE(_PTE_ORDER, PTE_ORDER);
 	BLANK();
 	DEFINE(_PMD_SHIFT, PMD_SHIFT);
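The rename makes expressions such as PUD_SHIFT and PTRS_PER_PMD easier to read:
the table order says how many pages back a PMD table, not how much memory a PMD
entry maps. A worked example (illustration only, assuming a MIPS64 configuration
with 4 KiB pages, i.e. PAGE_SHIFT == 12 and 8-byte table entries):

/* Assumed values for the example, not taken from any particular defconfig. */
#define EX_PAGE_SHIFT       12
#define EX_PMD_TABLE_ORDER   0   /* a PMD table occupies a single page */

#define EX_PMD_SHIFT    (EX_PAGE_SHIFT + (EX_PAGE_SHIFT - 3))                      /* 21 */
#define EX_PUD_SHIFT    (EX_PMD_SHIFT + (EX_PAGE_SHIFT + EX_PMD_TABLE_ORDER - 3))  /* 30 */
#define EX_PTRS_PER_PMD ((1UL << (EX_PAGE_SHIFT + EX_PMD_TABLE_ORDER)) / 8)        /* 512 */

/*
 * Each PMD entry maps 1 << EX_PMD_SHIFT bytes = 2 MiB, while the PMD table
 * itself is only one 4 KiB page holding 512 entries.
 */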
From patchwork Tue Jul 5 15:46:57 2022
X-Patchwork-Id: 12906785
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 04/15] mips: Rename PUD_ORDER to PUD_TABLE_ORDER
Date: Tue, 5 Jul 2022 18:46:57 +0300
Message-Id: <20220705154708.181258-5-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PUD.

Signed-off-by: Mike Rapoport
---
 arch/mips/include/asm/pgalloc.h    |  2 +-
 arch/mips/include/asm/pgtable-32.h |  2 +-
 arch/mips/include/asm/pgtable-64.h | 16 ++++++++--------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index 0ef245cfcae9..1ef8e86ae565 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -91,7 +91,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
 
-	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_ORDER);
+	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER);
 	if (pud)
 		pud_init((unsigned long)pud, (unsigned long)invalid_pmd_table);
 	return pud;

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 8d57bd5b0b94..d9ae244a4fce 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -81,7 +81,7 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
 #endif
 
 #define PGD_ORDER	(__PGD_ORDER >= 0 ? __PGD_ORDER : 0)
-#define PUD_ORDER	aieeee_attempt_to_allocate_pud
+#define PUD_TABLE_ORDER	aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER	aieeee_attempt_to_allocate_pmd
 #define PTE_ORDER	0

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index ae0d5a09064d..7daf9a6509d8 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -59,7 +59,7 @@
 #define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_TABLE_ORDER - 3))
 #define PUD_SIZE	(1UL << PUD_SHIFT)
 #define PUD_MASK	(~(PUD_SIZE-1))
-#define PGDIR_SHIFT	(PUD_SHIFT + (PAGE_SHIFT + PUD_ORDER - 3))
+#define PGDIR_SHIFT	(PUD_SHIFT + (PAGE_SHIFT + PUD_TABLE_ORDER - 3))
 #endif
 
 #define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
@@ -86,17 +86,17 @@
 #ifdef CONFIG_PAGE_SIZE_4KB
 # ifdef CONFIG_MIPS_VA_BITS_48
 # define PGD_ORDER		0
-# define PUD_ORDER		0
+# define PUD_TABLE_ORDER	0
 # else
 # define PGD_ORDER		1
-# define PUD_ORDER		aieeee_attempt_to_allocate_pud
+# define PUD_TABLE_ORDER	aieeee_attempt_to_allocate_pud
 # endif
 #define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_8KB
 #define PGD_ORDER		0
-#define PUD_ORDER		aieeee_attempt_to_allocate_pud
+#define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
@@ -106,19 +106,19 @@
 #else
 #define PGD_ORDER		0
 #endif
-#define PUD_ORDER		aieeee_attempt_to_allocate_pud
+#define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_32KB
 #define PGD_ORDER		0
-#define PUD_ORDER		aieeee_attempt_to_allocate_pud
+#define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
 #define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_64KB
 #define PGD_ORDER		0
-#define PUD_ORDER		aieeee_attempt_to_allocate_pud
+#define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #ifdef CONFIG_MIPS_VA_BITS_48
 #define PMD_TABLE_ORDER		0
 #else
@@ -129,7 +129,7 @@
 #define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
 #ifndef __PAGETABLE_PUD_FOLDED
-#define PTRS_PER_PUD	((PAGE_SIZE << PUD_ORDER) / sizeof(pud_t))
+#define PTRS_PER_PUD	((PAGE_SIZE << PUD_TABLE_ORDER) / sizeof(pud_t))
 #endif
 #ifndef __PAGETABLE_PMD_FOLDED
 #define PTRS_PER_PMD	((PAGE_SIZE << PMD_TABLE_ORDER) / sizeof(pmd_t))
From patchwork Tue Jul 5 15:46:58 2022
X-Patchwork-Id: 12906786
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 05/15] mips: drop definitions of PTE_ORDER
Date: Tue, 5 Jul 2022 18:46:58 +0300
Message-Id: <20220705154708.181258-6-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PTE.
Since it's always hardwired to 0, simply drop it.

Signed-off-by: Mike Rapoport
---
 arch/mips/include/asm/pgtable-32.h |  9 ++++-----
 arch/mips/include/asm/pgtable-64.h | 15 +++++----------
 arch/mips/kernel/asm-offsets.c     |  1 -
 arch/mips/mm/tlbex.c               |  2 +-
 4 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index d9ae244a4fce..35bd519a1078 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -62,9 +62,9 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
 /* PGDIR_SHIFT determines what a third-level page table entry can map */
 #if defined(CONFIG_MIPS_HUGE_TLB_SUPPORT) && !defined(CONFIG_PHYS_ADDR_T_64BIT)
-# define PGDIR_SHIFT	(2 * PAGE_SHIFT + PTE_ORDER - PTE_T_LOG2 - 1)
+# define PGDIR_SHIFT	(2 * PAGE_SHIFT - PTE_T_LOG2 - 1)
 #else
-# define PGDIR_SHIFT	(2 * PAGE_SHIFT + PTE_ORDER - PTE_T_LOG2)
+# define PGDIR_SHIFT	(2 * PAGE_SHIFT - PTE_T_LOG2)
 #endif
 
 #define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
@@ -83,13 +83,12 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
 #define PGD_ORDER	(__PGD_ORDER >= 0 ? __PGD_ORDER : 0)
 #define PUD_TABLE_ORDER	aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER	aieeee_attempt_to_allocate_pmd
-#define PTE_ORDER	0
 
 #define PTRS_PER_PGD	(USER_PTRS_PER_PGD * 2)
 #if defined(CONFIG_MIPS_HUGE_TLB_SUPPORT) && !defined(CONFIG_PHYS_ADDR_T_64BIT)
-# define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t) / 2)
+# define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t) / 2)
 #else
-# define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
+# define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))
 #endif
 
 #define USER_PTRS_PER_PGD	(0x80000000UL/PGDIR_SIZE)

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 7daf9a6509d8..dbf7e461d360 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -42,11 +42,11 @@
 /* PGDIR_SHIFT determines what a third-level page table entry can map */
 #ifdef __PAGETABLE_PMD_FOLDED
-#define PGDIR_SHIFT	(PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3)
+#define PGDIR_SHIFT	(PAGE_SHIFT + PAGE_SHIFT - 3)
 #else
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
-#define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT + PTE_ORDER - 3))
+#define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
 #define PMD_SIZE	(1UL << PMD_SHIFT)
 #define PMD_MASK	(~(PMD_SIZE-1))
@@ -86,19 +86,17 @@
 #ifdef CONFIG_PAGE_SIZE_4KB
 # ifdef CONFIG_MIPS_VA_BITS_48
 # define PGD_ORDER		0
-# define PUD_TABLE_ORDER	0
+# define PUD_TABLE_ORDER 0
 # else
 # define PGD_ORDER		1
-# define PUD_TABLE_ORDER	aieeee_attempt_to_allocate_pud
+# define PUD_TABLE_ORDER aieeee_attempt_to_allocate_pud
 # endif
 #define PMD_TABLE_ORDER		0
-#define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_8KB
 #define PGD_ORDER		0
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
-#define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_16KB
 #ifdef CONFIG_MIPS_VA_BITS_48
@@ -108,13 +106,11 @@
 #endif
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
-#define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_32KB
 #define PGD_ORDER		0
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
-#define PTE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_64KB
 #define PGD_ORDER		0
@@ -124,7 +120,6 @@
 #else
 #define PMD_TABLE_ORDER		aieeee_attempt_to_allocate_pmd
 #endif
-#define PTE_ORDER		0
 #endif
 
 #define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
@@ -134,7 +129,7 @@
 #ifndef __PAGETABLE_PMD_FOLDED
 #define PTRS_PER_PMD	((PAGE_SIZE << PMD_TABLE_ORDER) / sizeof(pmd_t))
 #endif
-#define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
+#define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))
 
 #define USER_PTRS_PER_PGD	((TASK_SIZE64 / PGDIR_SIZE)?(TASK_SIZE64 / PGDIR_SIZE):1)

diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index ca7c5af7697d..0c97f755e256 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -197,7 +197,6 @@ void output_mm_defines(void)
 	DEFINE(_PTE_T_LOG2, PTE_T_LOG2);
 	BLANK();
 	DEFINE(_PGD_ORDER, PGD_ORDER);
-	DEFINE(_PTE_ORDER, PTE_ORDER);
 	BLANK();
 	DEFINE(_PMD_SHIFT, PMD_SHIFT);
 	DEFINE(_PGDIR_SHIFT, PGDIR_SHIFT);

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 8dbbd99fc7e8..6e8e71f12fab 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -2065,7 +2065,7 @@ build_r4000_tlbchange_handler_head(u32 **p, struct uasm_label **l,
 	UASM_i_MFC0(p, wr.r1, C0_BADVADDR);
 	UASM_i_LW(p, wr.r2, 0, wr.r2);
-	UASM_i_SRL(p, wr.r1, wr.r1, PAGE_SHIFT + PTE_ORDER - PTE_T_LOG2);
+	UASM_i_SRL(p, wr.r1, wr.r1, PAGE_SHIFT - PTE_T_LOG2);
 	uasm_i_andi(p, wr.r1, wr.r1, (PTRS_PER_PTE - 1) << PTE_T_LOG2);
 	UASM_i_ADDU(p, wr.r2, wr.r2, wr.r1);
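With PTE_ORDER gone, the TLB refill path indexes the PTE table with a single
shift by PAGE_SHIFT - PTE_T_LOG2. A rough C rendering of what the uasm-emitted
instruction sequence computes (a sketch only; the function and its name are
illustrative, not kernel code):

static unsigned long example_pte_byte_offset(unsigned long badvaddr)
{
	/* UASM_i_SRL: virtual page number shifted left by log2(sizeof(pte_t)) */
	unsigned long off = badvaddr >> (PAGE_SHIFT - PTE_T_LOG2);

	/* uasm_i_andi: keep only the index within one PTE table, as a byte offset */
	off &= (PTRS_PER_PTE - 1) << PTE_T_LOG2;

	/* UASM_i_ADDU then adds this offset to the PTE table base pointer */
	return off;
}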
From patchwork Tue Jul 5 15:46:59 2022
X-Patchwork-Id: 12906787
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 06/15] mips: Rename PGD_ORDER to PGD_TABLE_ORDER
Date: Tue, 5 Jul 2022 18:46:59 +0300
Message-Id: <20220705154708.181258-7-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PGD.
While at it, remove the unused definition of _PGD_ORDER in asm-offsets.
Signed-off-by: Mike Rapoport
---
 arch/mips/include/asm/pgalloc.h    |  2 +-
 arch/mips/include/asm/pgtable-32.h |  6 +++---
 arch/mips/include/asm/pgtable-64.h | 16 ++++++++--------
 arch/mips/kernel/asm-offsets.c     |  1 -
 arch/mips/kvm/mmu.c                |  2 +-
 arch/mips/mm/pgtable.c             |  2 +-
 arch/mips/mm/tlbex.c               | 12 ++++++------
 7 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index 1ef8e86ae565..796035784c73 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -51,7 +51,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm);
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_pages((unsigned long)pgd, PGD_ORDER);
+	free_pages((unsigned long)pgd, PGD_TABLE_ORDER);
 }
 
 #define __pte_free_tlb(tlb,pte,address)	\

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 35bd519a1078..495c603c1a30 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -75,12 +75,12 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
  * we don't really have any PUD/PMD directory physically.
  */
 #if defined(CONFIG_MIPS_HUGE_TLB_SUPPORT) && !defined(CONFIG_PHYS_ADDR_T_64BIT)
-# define __PGD_ORDER	(32 - 3 * PAGE_SHIFT + PGD_T_LOG2 + PTE_T_LOG2 + 1)
+# define __PGD_TABLE_ORDER	(32 - 3 * PAGE_SHIFT + PGD_T_LOG2 + PTE_T_LOG2 + 1)
 #else
-# define __PGD_ORDER	(32 - 3 * PAGE_SHIFT + PGD_T_LOG2 + PTE_T_LOG2)
+# define __PGD_TABLE_ORDER	(32 - 3 * PAGE_SHIFT + PGD_T_LOG2 + PTE_T_LOG2)
 #endif
 
-#define PGD_ORDER	(__PGD_ORDER >= 0 ? __PGD_ORDER : 0)
+#define PGD_TABLE_ORDER	(__PGD_TABLE_ORDER >= 0 ? __PGD_TABLE_ORDER : 0)
 #define PUD_TABLE_ORDER	aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER	aieeee_attempt_to_allocate_pmd

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index dbf7e461d360..a259ca4d1272 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -85,35 +85,35 @@
  */
 #ifdef CONFIG_PAGE_SIZE_4KB
 # ifdef CONFIG_MIPS_VA_BITS_48
-# define PGD_ORDER		0
+# define PGD_TABLE_ORDER	0
 # define PUD_TABLE_ORDER	0
 # else
-# define PGD_ORDER		1
+# define PGD_TABLE_ORDER	1
 # define PUD_TABLE_ORDER	aieeee_attempt_to_allocate_pud
 # endif
 #define PMD_TABLE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_8KB
-#define PGD_ORDER		0
+#define PGD_TABLE_ORDER		0
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_16KB
 #ifdef CONFIG_MIPS_VA_BITS_48
-#define PGD_ORDER		1
+#define PGD_TABLE_ORDER		1
 #else
-#define PGD_ORDER		0
+#define PGD_TABLE_ORDER		0
 #endif
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_32KB
-#define PGD_ORDER		0
+#define PGD_TABLE_ORDER		0
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #define PMD_TABLE_ORDER		0
 #endif
 #ifdef CONFIG_PAGE_SIZE_64KB
-#define PGD_ORDER		0
+#define PGD_TABLE_ORDER		0
 #define PUD_TABLE_ORDER		aieeee_attempt_to_allocate_pud
 #ifdef CONFIG_MIPS_VA_BITS_48
 #define PMD_TABLE_ORDER		0
@@ -122,7 +122,7 @@
 #endif
 #endif
 
-#define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
+#define PTRS_PER_PGD	((PAGE_SIZE << PGD_TABLE_ORDER) / sizeof(pgd_t))
 #ifndef __PAGETABLE_PUD_FOLDED
 #define PTRS_PER_PUD	((PAGE_SIZE << PUD_TABLE_ORDER) / sizeof(pud_t))
 #endif

diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 0c97f755e256..c4501897b870 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -196,7 +196,6 @@ void output_mm_defines(void)
 #endif
 	DEFINE(_PTE_T_LOG2, PTE_T_LOG2);
 	BLANK();
-	DEFINE(_PGD_ORDER, PGD_ORDER);
 	BLANK();
 	DEFINE(_PMD_SHIFT, PMD_SHIFT);
 	DEFINE(_PGDIR_SHIFT, PGDIR_SHIFT);

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 1bfd1b501d82..db17e870bdff 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -80,7 +80,7 @@ pgd_t *kvm_pgd_alloc(void)
 {
 	pgd_t *ret;
 
-	ret = (pgd_t *)__get_free_pages(GFP_KERNEL, PGD_ORDER);
+	ret = (pgd_t *)__get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
 	if (ret)
 		kvm_pgd_init(ret);

diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c
index 05560b042d82..3b7590660a04 100644
--- a/arch/mips/mm/pgtable.c
+++ b/arch/mips/mm/pgtable.c
@@ -12,7 +12,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *ret, *init;
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
+	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
 	if (ret) {
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init((unsigned long)ret);

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 6e8e71f12fab..a57519ae96b1 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -818,7 +818,7 @@ void build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
 		 * everything but the lower xuseg addresses goes down
 		 * the module_alloc/vmalloc path.
 		 */
-		uasm_i_dsrl_safe(p, ptr, tmp, PGDIR_SHIFT + PGD_ORDER + PAGE_SHIFT - 3);
+		uasm_i_dsrl_safe(p, ptr, tmp, PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3);
 		uasm_il_bnez(p, r, ptr, label_vmalloc);
 	} else {
 		uasm_il_bltz(p, r, tmp, label_vmalloc);
@@ -1127,7 +1127,7 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
 		UASM_i_SW(p, scratch, scratchpad_offset(0), 0);
 
 	uasm_i_dsrl_safe(p, scratch, tmp,
-			 PGDIR_SHIFT + PGD_ORDER + PAGE_SHIFT - 3);
+			 PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3);
 	uasm_il_bnez(p, r, scratch, label_vmalloc);
 
 	if (pgd_reg == -1) {
@@ -1493,12 +1493,12 @@ static void setup_pw(void)
 #endif
 	pgd_i = PGDIR_SHIFT;  /* 1st level PGD */
 #ifndef __PAGETABLE_PMD_FOLDED
-	pgd_w = PGDIR_SHIFT - PMD_SHIFT + PGD_ORDER;
+	pgd_w = PGDIR_SHIFT - PMD_SHIFT + PGD_TABLE_ORDER;
 
 	pmd_i = PMD_SHIFT;    /* 2nd level PMD */
 	pmd_w = PMD_SHIFT - PAGE_SHIFT;
 #else
-	pgd_w = PGDIR_SHIFT - PAGE_SHIFT + PGD_ORDER;
+	pgd_w = PGDIR_SHIFT - PAGE_SHIFT + PGD_TABLE_ORDER;
 #endif
 
 	pt_i  = PAGE_SHIFT;    /* 3rd level PTE */
@@ -1536,7 +1536,7 @@ static void build_loongson3_tlb_refill_handler(void)
 
 	if (check_for_high_segbits) {
 		uasm_i_dmfc0(&p, K0, C0_BADVADDR);
-		uasm_i_dsrl_safe(&p, K1, K0, PGDIR_SHIFT + PGD_ORDER + PAGE_SHIFT - 3);
+		uasm_i_dsrl_safe(&p, K1, K0, PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3);
 		uasm_il_beqz(&p, &r, K1, label_vmalloc);
 		uasm_i_nop(&p);
 
@@ -2611,7 +2611,7 @@ void build_tlb_refill_handler(void)
 	check_pabits();
 
 #ifdef CONFIG_64BIT
-	check_for_high_segbits = current_cpu_data.vmbits > (PGDIR_SHIFT + PGD_ORDER + PAGE_SHIFT - 3);
+	check_for_high_segbits = current_cpu_data.vmbits > (PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3);
 #endif
 
 	if (cpu_has_3kex) {
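PGD_TABLE_ORDER also feeds the check_for_high_segbits test above:
PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3 is the number of virtual-address
bits the page tables can cover, and anything beyond that must take the
vmalloc/high-segbits path. A worked example (illustration only, assuming 4 KiB
pages without CONFIG_MIPS_VA_BITS_48, where PGD_TABLE_ORDER is 1):

/* Assumed example configuration, not a statement about any specific platform. */
enum {
	EX_PAGE_SHIFT      = 12,
	EX_PGD_TABLE_ORDER = 1,                                  /* two-page PGD        */
	EX_PGDIR_SHIFT     = 30,                                 /* 1 GiB per PGD entry */
	EX_PTRS_PER_PGD    = (4096 << EX_PGD_TABLE_ORDER) / 8,   /* 1024 entries        */
	EX_VA_BITS_COVERED = EX_PGDIR_SHIFT + EX_PGD_TABLE_ORDER + EX_PAGE_SHIFT - 3, /* 40 */
};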
From patchwork Tue Jul 5 15:47:00 2022
X-Patchwork-Id: 12906791
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 07/15] nios2: drop definition of PTE_ORDER
Date: Tue, 5 Jul 2022 18:47:00 +0300
Message-Id: <20220705154708.181258-8-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PTE.
Since it's always hardwired to 0, simply drop it.
Signed-off-by: Mike Rapoport
Acked-by: Dinh Nguyen
---
 arch/nios2/include/asm/pgtable.h | 3 +--
 arch/nios2/mm/init.c             | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 262d0609268c..eaf8f28baa8b 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -69,10 +69,9 @@ struct mm_struct;
 #define PAGE_COPY	MKP(0, 0, 1)
 
 #define PGD_ORDER	0
-#define PTE_ORDER	0
 
 #define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
-#define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
+#define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))
 
 #define USER_PTRS_PER_PGD	\
 	(CONFIG_NIOS2_KERNEL_MMU_REGION_BASE / PGDIR_SIZE)

diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 613fcaa5988a..2d6dbf7701f6 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -80,7 +80,7 @@ void __init mmu_init(void)
 #define __page_aligned(order) __aligned(PAGE_SIZE << (order))
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned(PGD_ORDER);
-pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned(PTE_ORDER);
+pte_t invalid_pte_table[PTRS_PER_PTE] __aligned(PAGE_SIZE);
 static struct page *kuser_page[1];
 
 static int alloc_kuser_page(void)
From patchwork Tue Jul 5 15:47:01 2022
X-Patchwork-Id: 12906792
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 08/15] nios2: drop definition of PGD_ORDER
Date: Tue, 5 Jul 2022 18:47:01 +0300
Message-Id: <20220705154708.181258-9-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PGD.
Since it's always hardwired to 0, simply drop it.

Signed-off-by: Mike Rapoport
Acked-by: Dinh Nguyen
---
 arch/nios2/include/asm/pgtable.h | 4 +---
 arch/nios2/mm/init.c             | 3 +--
 arch/nios2/mm/pgtable.c          | 2 +-
 3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index eaf8f28baa8b..74af16dafe86 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -68,9 +68,7 @@ struct mm_struct;
 #define PAGE_COPY	MKP(0, 0, 1)
 
-#define PGD_ORDER	0
-
-#define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
+#define PTRS_PER_PGD	(PAGE_SIZE / sizeof(pgd_t))
 #define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))
 
 #define USER_PTRS_PER_PGD	\

diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 2d6dbf7701f6..eab65e8ea69c 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -78,8 +78,7 @@ void __init mmu_init(void)
 	flush_tlb_all();
 }
 
-#define __page_aligned(order) __aligned(PAGE_SIZE << (order))
-pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned(PGD_ORDER);
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __aligned(PAGE_SIZE);
 pte_t invalid_pte_table[PTRS_PER_PTE] __aligned(PAGE_SIZE);
 static struct page *kuser_page[1];

diff --git a/arch/nios2/mm/pgtable.c b/arch/nios2/mm/pgtable.c
index 9b587fd592dd..7c76e8a7447a 100644
--- a/arch/nios2/mm/pgtable.c
+++ b/arch/nios2/mm/pgtable.c
@@ -54,7 +54,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *ret, *init;
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
+	ret = (pgd_t *) __get_free_page(GFP_KERNEL);
 	if (ret) {
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
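The __aligned(PAGE_SIZE) annotations in the nios2 patches are again a straight
simplification: with the order fixed at 0, the old helper
#define __page_aligned(order) __aligned(PAGE_SIZE << (order)) expanded to
exactly __aligned(PAGE_SIZE). A compile-time sketch (illustration only; the
EX_ macros and arrays are made up for the example, not nios2 code):

#define EX_PAGE_SIZE 4096

#define EX_OLD_ALIGN __attribute__((aligned(EX_PAGE_SIZE << 0)))   /* __page_aligned(0)    */
#define EX_NEW_ALIGN __attribute__((aligned(EX_PAGE_SIZE)))        /* __aligned(PAGE_SIZE) */

/* Both tables land on a page boundary; the generated object is identical. */
static unsigned long ex_table_old[EX_PAGE_SIZE / sizeof(unsigned long)] EX_OLD_ALIGN;
static unsigned long ex_table_new[EX_PAGE_SIZE / sizeof(unsigned long)] EX_NEW_ALIGN;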
From patchwork Tue Jul 5 15:47:02 2022
X-Patchwork-Id: 12906793
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 09/15] loongarch: drop definition of PTE_ORDER
Date: Tue, 5 Jul 2022 18:47:02 +0300
Message-Id: <20220705154708.181258-10-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PTE.
Since it's always hardwired to 0, simply drop it.
Signed-off-by: Mike Rapoport
Acked-by: Huacai Chen
---
 arch/loongarch/include/asm/pgtable.h | 9 ++++-----
 arch/loongarch/kernel/asm-offsets.c  | 1 -
 arch/loongarch/mm/tlbex.S            | 6 +++---
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index d9e86cfa53e2..e0bbfc31fe72 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -24,17 +24,16 @@
 #define PGD_ORDER		0
 #define PUD_ORDER		0
 #define PMD_ORDER		0
-#define PTE_ORDER		0
 
 #if CONFIG_PGTABLE_LEVELS == 2
-#define PGDIR_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT + PTE_ORDER - 3))
+#define PGDIR_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
 #elif CONFIG_PGTABLE_LEVELS == 3
-#define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT + PTE_ORDER - 3))
+#define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
 #define PMD_SIZE	(1UL << PMD_SHIFT)
 #define PMD_MASK	(~(PMD_SIZE-1))
 #define PGDIR_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_ORDER - 3))
 #elif CONFIG_PGTABLE_LEVELS == 4
-#define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT + PTE_ORDER - 3))
+#define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
 #define PMD_SIZE	(1UL << PMD_SHIFT)
 #define PMD_MASK	(~(PMD_SIZE-1))
 #define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_ORDER - 3))
@@ -55,7 +54,7 @@
 #if CONFIG_PGTABLE_LEVELS > 2
 #define PTRS_PER_PMD	((PAGE_SIZE << PMD_ORDER) >> 3)
 #endif
-#define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) >> 3)
+#define PTRS_PER_PTE	(PAGE_SIZE >> 3)
 
 #define USER_PTRS_PER_PGD	((TASK_SIZE64 / PGDIR_SIZE)?(TASK_SIZE64 / PGDIR_SIZE):1)

diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index bfb65eb2844f..1a1166a7e61c 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -194,7 +194,6 @@ void output_mm_defines(void)
 #ifndef __PAGETABLE_PMD_FOLDED
 	DEFINE(_PMD_ORDER, PMD_ORDER);
 #endif
-	DEFINE(_PTE_ORDER, PTE_ORDER);
 	BLANK();
 	DEFINE(_PMD_SHIFT, PMD_SHIFT);
 	DEFINE(_PGDIR_SHIFT, PGDIR_SHIFT);

diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 7eee40271577..e36c2c07dee3 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -83,7 +83,7 @@ vmalloc_done_load:
 	bne	t0, $r0, tlb_huge_update_load
 
 	csrrd	t0, LOONGARCH_CSR_BADV
-	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
+	srli.d	t0, t0, PAGE_SHIFT
 	andi	t0, t0, (PTRS_PER_PTE - 1)
 	slli.d	t0, t0, _PTE_T_LOG2
 	add.d	t1, ra, t0
@@ -247,7 +247,7 @@ vmalloc_done_store:
 	bne	t0, $r0, tlb_huge_update_store
 
 	csrrd	t0, LOONGARCH_CSR_BADV
-	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
+	srli.d	t0, t0, PAGE_SHIFT
 	andi	t0, t0, (PTRS_PER_PTE - 1)
 	slli.d	t0, t0, _PTE_T_LOG2
 	add.d	t1, ra, t0
@@ -414,7 +414,7 @@ vmalloc_done_modify:
 	bne	t0, $r0, tlb_huge_update_modify
 
 	csrrd	t0, LOONGARCH_CSR_BADV
-	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
+	srli.d	t0, t0, PAGE_SHIFT
 	andi	t0, t0, (PTRS_PER_PTE - 1)
 	slli.d	t0, t0, _PTE_T_LOG2
 	add.d	t1, ra, t0
From patchwork Tue Jul 5 15:47:03 2022
X-Patchwork-Id: 12906794
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 10/15] loongarch: drop definition of PMD_ORDER
Date: Tue, 5 Jul 2022 18:47:03 +0300
Message-Id: <20220705154708.181258-11-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PMD.
Since it's always hardwired to 0, simply drop it.
Signed-off-by: Mike Rapoport
Acked-by: Huacai Chen
---
 arch/loongarch/include/asm/pgalloc.h | 4 ++--
 arch/loongarch/include/asm/pgtable.h | 7 +++----
 arch/loongarch/kernel/asm-offsets.c  | 3 ---
 3 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index b0a57b25c131..93e785f46639 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -66,12 +66,12 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 	pmd_t *pmd;
 	struct page *pg;
 
-	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER);
+	pg = alloc_page(GFP_KERNEL_ACCOUNT);
 	if (!pg)
 		return NULL;
 
 	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_pages(pg, PMD_ORDER);
+		__free_page(pg);
 		return NULL;
 	}

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index e0bbfc31fe72..f926537d2233 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -23,7 +23,6 @@
 #define PGD_ORDER		0
 #define PUD_ORDER		0
-#define PMD_ORDER		0
 
 #if CONFIG_PGTABLE_LEVELS == 2
 #define PGDIR_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
@@ -31,12 +30,12 @@
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
 #define PMD_SIZE	(1UL << PMD_SHIFT)
 #define PMD_MASK	(~(PMD_SIZE-1))
-#define PGDIR_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_ORDER - 3))
+#define PGDIR_SHIFT	(PMD_SHIFT + (PAGE_SHIFT - 3))
 #elif CONFIG_PGTABLE_LEVELS == 4
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))
 #define PMD_SIZE	(1UL << PMD_SHIFT)
 #define PMD_MASK	(~(PMD_SIZE-1))
-#define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT + PMD_ORDER - 3))
+#define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT - 3))
 #define PUD_SIZE	(1UL << PUD_SHIFT)
 #define PUD_MASK	(~(PUD_SIZE-1))
 #define PGDIR_SHIFT	(PUD_SHIFT + (PAGE_SHIFT + PUD_ORDER - 3))
@@ -52,7 +51,7 @@
 #define PTRS_PER_PUD	((PAGE_SIZE << PUD_ORDER) >> 3)
 #endif
 #if CONFIG_PGTABLE_LEVELS > 2
-#define PTRS_PER_PMD	((PAGE_SIZE << PMD_ORDER) >> 3)
+#define PTRS_PER_PMD	(PAGE_SIZE >> 3)
 #endif
 #define PTRS_PER_PTE	(PAGE_SIZE >> 3)

diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index 1a1166a7e61c..aa4ef42d759f 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -191,9 +191,6 @@ void output_mm_defines(void)
 	DEFINE(_PTE_T_LOG2, PTE_T_LOG2);
 	BLANK();
 	DEFINE(_PGD_ORDER, PGD_ORDER);
-#ifndef __PAGETABLE_PMD_FOLDED
-	DEFINE(_PMD_ORDER, PMD_ORDER);
-#endif
 	BLANK();
 	DEFINE(_PMD_SHIFT, PMD_SHIFT);
 	DEFINE(_PGDIR_SHIFT, PGDIR_SHIFT);
From patchwork Tue Jul 5 15:47:04 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12906795
From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, Dinh Nguyen, Guo Ren, Helge Deller, Huacai Chen, "James E.J. Bottomley", Matthew Wilcox, Max Filippov, Mike Rapoport, Mike Rapoport, "Russell King (Oracle)", Thomas Bogendoerfer, WANG Xuerui, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org, loongarch@lists.linux.dev
Subject: [PATCH v2 11/15] loongarch: drop definition of PUD_ORDER
Date: Tue, 5 Jul 2022 18:47:04 +0300
Message-Id: <20220705154708.181258-12-rppt@kernel.org>
In-Reply-To: <20220705154708.181258-1-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PUD.
Since it's always hardwired to 0, simply drop it.
Signed-off-by: Mike Rapoport
Acked-by: Huacai Chen
---
 arch/loongarch/include/asm/pgalloc.h | 2 +-
 arch/loongarch/include/asm/pgtable.h | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index 93e785f46639..4bfeb3c9c9ac 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -90,7 +90,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
 
-	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_ORDER);
+	pud = (pud_t *) __get_free_page(GFP_KERNEL);
 	if (pud)
 		pud_init((unsigned long)pud, (unsigned long)invalid_pmd_table);
 	return pud;

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index f926537d2233..a97996fefaed 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -22,7 +22,6 @@
 #endif
 
 #define PGD_ORDER 0
-#define PUD_ORDER 0
 
 #if CONFIG_PGTABLE_LEVELS == 2
 #define PGDIR_SHIFT (PAGE_SHIFT + (PAGE_SHIFT - 3))
@@ -38,7 +37,7 @@
 #define PUD_SHIFT (PMD_SHIFT + (PAGE_SHIFT - 3))
 #define PUD_SIZE (1UL << PUD_SHIFT)
 #define PUD_MASK (~(PUD_SIZE-1))
-#define PGDIR_SHIFT (PUD_SHIFT + (PAGE_SHIFT + PUD_ORDER - 3))
+#define PGDIR_SHIFT (PUD_SHIFT + (PAGE_SHIFT - 3))
 #endif
 
 #define PGDIR_SIZE (1UL << PGDIR_SHIFT)
@@ -48,7 +47,7 @@
 #define PTRS_PER_PGD ((PAGE_SIZE << PGD_ORDER) >> 3)
 #if CONFIG_PGTABLE_LEVELS > 3
-#define PTRS_PER_PUD ((PAGE_SIZE << PUD_ORDER) >> 3)
+#define PTRS_PER_PUD (PAGE_SIZE >> 3)
 #endif
 #if CONFIG_PGTABLE_LEVELS > 2
 #define PTRS_PER_PMD (PAGE_SIZE >> 3)

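A small standalone sketch (not part of the patch; it assumes 16 KiB pages and the 4-level layout purely for the numbers) of the shift arithmetic that remains once the *_ORDER terms are gone:

#include <stdio.h>

#define PAGE_SHIFT 14   /* assumed example: 16 KiB pages */

int main(void)
{
	/* Each level indexes PAGE_SIZE / 8 = 2^(PAGE_SHIFT - 3) entries,
	 * so every level adds (PAGE_SHIFT - 3) bits, with no order term. */
	unsigned int pmd_shift   = PAGE_SHIFT + (PAGE_SHIFT - 3);
	unsigned int pud_shift   = pmd_shift  + (PAGE_SHIFT - 3);
	unsigned int pgdir_shift = pud_shift  + (PAGE_SHIFT - 3);

	printf("PMD_SHIFT=%u PUD_SHIFT=%u PGDIR_SHIFT=%u\n",
	       pmd_shift, pud_shift, pgdir_shift);  /* 25 36 47 */
	return 0;
}
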
From patchwork Tue Jul 5 15:47:05 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12906796
From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, Dinh Nguyen, Guo Ren, Helge Deller, Huacai Chen, "James E.J. Bottomley", Matthew Wilcox, Max Filippov, Mike Rapoport, Mike Rapoport, "Russell King (Oracle)", Thomas Bogendoerfer, WANG Xuerui, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org, loongarch@lists.linux.dev
Subject: [PATCH v2 12/15] loongarch: drop definition of PGD_ORDER
Date: Tue, 5 Jul 2022 18:47:05 +0300
Message-Id: <20220705154708.181258-13-rppt@kernel.org>
In-Reply-To: <20220705154708.181258-1-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PGD.
Since it's always hardwired to 0, simply drop it.

Signed-off-by: Mike Rapoport
Acked-by: Huacai Chen
---
 arch/loongarch/include/asm/pgtable.h | 6 ++----
 arch/loongarch/kernel/asm-offsets.c  | 2 --
 arch/loongarch/mm/pgtable.c          | 2 +-
 3 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index a97996fefaed..e03443abaf7d 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -21,8 +21,6 @@
 #include
 #endif
 
-#define PGD_ORDER 0
-
 #if CONFIG_PGTABLE_LEVELS == 2
 #define PGDIR_SHIFT (PAGE_SHIFT + (PAGE_SHIFT - 3))
 #elif CONFIG_PGTABLE_LEVELS == 3
@@ -43,9 +41,9 @@
 #define PGDIR_SIZE (1UL << PGDIR_SHIFT)
 #define PGDIR_MASK (~(PGDIR_SIZE-1))
 
-#define VA_BITS (PGDIR_SHIFT + (PAGE_SHIFT + PGD_ORDER - 3))
+#define VA_BITS (PGDIR_SHIFT + (PAGE_SHIFT - 3))
 
-#define PTRS_PER_PGD ((PAGE_SIZE << PGD_ORDER) >> 3)
+#define PTRS_PER_PGD (PAGE_SIZE >> 3)
 #if CONFIG_PGTABLE_LEVELS > 3
 #define PTRS_PER_PUD (PAGE_SIZE >> 3)
 #endif

diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index aa4ef42d759f..4a3bb1b9aef3 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -190,8 +190,6 @@ void output_mm_defines(void)
 #endif
 	DEFINE(_PTE_T_LOG2, PTE_T_LOG2);
 	BLANK();
-	DEFINE(_PGD_ORDER, PGD_ORDER);
-	BLANK();
 	DEFINE(_PMD_SHIFT, PMD_SHIFT);
 	DEFINE(_PGDIR_SHIFT, PGDIR_SHIFT);
 	BLANK();

diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 0569647152e9..ee179ccd3e3f 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -13,7 +13,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *ret, *init;
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
+	ret = (pgd_t *) __get_free_page(GFP_KERNEL);
 	if (ret) {
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init((unsigned long)ret);

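A standalone sketch (not part of the patch; the 16 KiB-page, 3-level configuration below is assumed purely for the numbers) of the simplified VA_BITS arithmetic once the PGD_ORDER term is dropped:

#include <stdio.h>

#define PAGE_SHIFT 14   /* assumed example: 16 KiB pages */

int main(void)
{
	/* 3-level layout: PGDIR_SHIFT = PMD_SHIFT + (PAGE_SHIFT - 3) */
	unsigned int pmd_shift   = PAGE_SHIFT + (PAGE_SHIFT - 3);  /* 25 */
	unsigned int pgdir_shift = pmd_shift  + (PAGE_SHIFT - 3);  /* 36 */

	/* VA_BITS = PGDIR_SHIFT + (PAGE_SHIFT - 3), no order term left */
	unsigned int va_bits = pgdir_shift + (PAGE_SHIFT - 3);

	printf("VA_BITS = %u\n", va_bits);  /* 47 */
	return 0;
}
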
From patchwork Tue Jul 5 15:47:06 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12906797
From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, Dinh Nguyen, Guo Ren, Helge Deller, Huacai Chen, "James E.J. Bottomley", Matthew Wilcox, Max Filippov, Mike Rapoport, Mike Rapoport, "Russell King (Oracle)", Thomas Bogendoerfer, WANG Xuerui, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org, loongarch@lists.linux.dev
Subject: [PATCH v2 13/15] parisc: Rename PGD_ORDER to PGD_TABLE_ORDER
Date: Tue, 5 Jul 2022 18:47:06 +0300
Message-Id: <20220705154708.181258-14-rppt@kernel.org>
In-Reply-To: <20220705154708.181258-1-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PGD.
Signed-off-by: Mike Rapoport
Acked-by: Helge Deller
---
 arch/parisc/include/asm/pgalloc.h | 6 +++---
 arch/parisc/include/asm/pgtable.h | 8 ++++----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/parisc/include/asm/pgalloc.h b/arch/parisc/include/asm/pgalloc.h
index 54b63374579b..e3e142b1c5c5 100644
--- a/arch/parisc/include/asm/pgalloc.h
+++ b/arch/parisc/include/asm/pgalloc.h
@@ -20,18 +20,18 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *pgd;
 
-	pgd = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
+	pgd = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
 	if (unlikely(pgd == NULL))
 		return NULL;
 
-	memset(pgd, 0, PAGE_SIZE << PGD_ORDER);
+	memset(pgd, 0, PAGE_SIZE << PGD_TABLE_ORDER);
 
 	return pgd;
 }
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_pages((unsigned long)pgd, PGD_ORDER);
+	free_pages((unsigned long)pgd, PGD_TABLE_ORDER);
 }
 
 #if CONFIG_PGTABLE_LEVELS == 3

diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 69765a6dbe89..6790b554bdfd 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -118,9 +118,9 @@ extern void __update_cache(pte_t pte);
 #if CONFIG_PGTABLE_LEVELS == 3
 #define PMD_TABLE_ORDER 1
-#define PGD_ORDER 0
+#define PGD_TABLE_ORDER 0
 #else
-#define PGD_ORDER 1
+#define PGD_TABLE_ORDER 1
 #endif
 
 /* Definitions for 3rd level (we use PLD here for Page Lower directory
@@ -144,10 +144,10 @@ extern void __update_cache(pte_t pte);
 /* Definitions for 1st level */
 #define PGDIR_SHIFT (PLD_SHIFT + BITS_PER_PTE + BITS_PER_PMD)
-#if (PGDIR_SHIFT + PAGE_SHIFT + PGD_ORDER - BITS_PER_PGD_ENTRY) > BITS_PER_LONG
+#if (PGDIR_SHIFT + PAGE_SHIFT + PGD_TABLE_ORDER - BITS_PER_PGD_ENTRY) > BITS_PER_LONG
 #define BITS_PER_PGD (BITS_PER_LONG - PGDIR_SHIFT)
 #else
-#define BITS_PER_PGD (PAGE_SHIFT + PGD_ORDER - BITS_PER_PGD_ENTRY)
+#define BITS_PER_PGD (PAGE_SHIFT + PGD_TABLE_ORDER - BITS_PER_PGD_ENTRY)
 #endif
 #define PGDIR_SIZE (1UL << PGDIR_SHIFT)
 #define PGDIR_MASK (~(PGDIR_SIZE-1))

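A minimal standalone sketch (not part of the patch) of why the new name reads better on parisc: with PGD_TABLE_ORDER equal to 1, the PGD really is a multi-page table allocation, while a single PGD entry is just one word. The 4 KiB page size below is an assumed example value:

#include <stdio.h>

#define PAGE_SHIFT 12   /* assumed example: 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

int main(void)
{
	unsigned int pgd_table_order = 1;  /* parisc when CONFIG_PGTABLE_LEVELS != 3 */

	/* __get_free_pages(GFP_KERNEL, order) hands back 2^order pages and
	 * memset(pgd, 0, PAGE_SIZE << order) clears exactly that much. */
	printf("PGD table: %lu pages, %lu bytes\n",
	       1UL << pgd_table_order,
	       PAGE_SIZE << pgd_table_order);  /* 2 pages, 8192 bytes */
	return 0;
}
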
From patchwork Tue Jul 5 15:47:07 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12906798
From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, Dinh Nguyen, Guo Ren, Helge Deller, Huacai Chen, "James E.J. Bottomley", Matthew Wilcox, Max Filippov, Mike Rapoport, Mike Rapoport, "Russell King (Oracle)", Thomas Bogendoerfer, WANG Xuerui, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org, loongarch@lists.linux.dev
Subject: [PATCH v2 14/15] xtensa: drop definition of PGD_ORDER
Date: Tue, 5 Jul 2022 18:47:07 +0300
Message-Id: <20220705154708.181258-15-rppt@kernel.org>
In-Reply-To: <20220705154708.181258-1-rppt@kernel.org>

This is the order of the page table allocation, not the order of a PGD.
Since it's always hardwired to 0, simply drop it.

Signed-off-by: Mike Rapoport
Acked-by: Max Filippov
---
 arch/xtensa/include/asm/pgalloc.h | 2 +-
 arch/xtensa/include/asm/pgtable.h | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/xtensa/include/asm/pgalloc.h b/arch/xtensa/include/asm/pgalloc.h
index eeb2de3a89e5..7fc0f9126dd3 100644
--- a/arch/xtensa/include/asm/pgalloc.h
+++ b/arch/xtensa/include/asm/pgalloc.h
@@ -29,7 +29,7 @@ static inline pgd_t*
 pgd_alloc(struct mm_struct *mm)
 {
-	return (pgd_t*) __get_free_pages(GFP_KERNEL | __GFP_ZERO, PGD_ORDER);
+	return (pgd_t*) __get_free_page(GFP_KERNEL | __GFP_ZERO);
 }
 
 static inline void ptes_clear(pte_t *ptep)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 0a91376131c5..4bd77d2b6715 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -57,7 +57,6 @@
 #define PTRS_PER_PTE 1024
 #define PTRS_PER_PTE_SHIFT 10
 #define PTRS_PER_PGD 1024
-#define PGD_ORDER 0
 #define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
 #define FIRST_USER_PGD_NR (FIRST_USER_ADDRESS >> PGDIR_SHIFT)

From patchwork Tue Jul 5 15:47:08 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12906799
From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, Dinh Nguyen, Guo Ren, Helge Deller, Huacai Chen, "James E.J. Bottomley", Matthew Wilcox, Max Filippov, Mike Rapoport, Mike Rapoport, "Russell King (Oracle)", Thomas Bogendoerfer, WANG Xuerui, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org, loongarch@lists.linux.dev, Russell King
Subject: [PATCH v2 15/15] ARM: head.S: rename PMD_ORDER to PMD_ENTRY_ORDER
Date: Tue, 5 Jul 2022 18:47:08 +0300
Message-Id: <20220705154708.181258-16-rppt@kernel.org>
In-Reply-To: <20220705154708.181258-1-rppt@kernel.org>

PMD_ORDER denotes the order of magnitude of a PMD entry, i.e. PMD entry
size is 2 ^ PMD_ORDER. Rename PMD_ORDER to PMD_ENTRY_ORDER to allow a
generic definition of PMD_ORDER as the order of a PMD allocation:
(PMD_SHIFT - PAGE_SHIFT).

Signed-off-by: Mike Rapoport
Acked-by: Russell King (Oracle)
---
 arch/arm/kernel/head.S | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 500612d3da2e..29e2900178a1 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -38,10 +38,10 @@
 #ifdef CONFIG_ARM_LPAE
 	/* LPAE requires an additional page for the PGD */
 #define PG_DIR_SIZE 0x5000
-#define PMD_ORDER 3
+#define PMD_ENTRY_ORDER 3	/* PMD entry size is 2^PMD_ENTRY_ORDER */
 #else
 #define PG_DIR_SIZE 0x4000
-#define PMD_ORDER 2
+#define PMD_ENTRY_ORDER 2
 #endif
 
 	.globl	swapper_pg_dir
@@ -240,7 +240,7 @@ __create_page_tables:
 	mov	r6, r6, lsr #SECTION_SHIFT
 1:	orr	r3, r7, r5, lsl #SECTION_SHIFT	@ flags + kernel base
-	str	r3, [r4, r5, lsl #PMD_ORDER]	@ identity mapping
+	str	r3, [r4, r5, lsl #PMD_ENTRY_ORDER]	@ identity mapping
 	cmp	r5, r6
 	addlo	r5, r5, #1			@ next section
 	blo	1b
@@ -250,7 +250,7 @@ __create_page_tables:
	 * set two variables to indicate the physical start and end of the
	 * kernel.
	 */
-	add	r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ORDER)
+	add	r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
 	ldr	r6, =(_end - 1)
 	adr_l	r5, kernel_sec_start		@ _pa(kernel_sec_start)
 #if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
@@ -259,8 +259,8 @@ __create_page_tables:
 	str	r8, [r5]			@ Save physical start of kernel (LE)
 #endif
 	orr	r3, r8, r7			@ Add the MMU flags
-	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ORDER)
-1:	str	r3, [r0], #1 << PMD_ORDER
+	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ENTRY_ORDER)
+1:	str	r3, [r0], #1 << PMD_ENTRY_ORDER
 	add	r3, r3, #1 << SECTION_SHIFT
 	cmp	r0, r6
 	bls	1b
@@ -280,14 +280,14 @@ __create_page_tables:
 	mov	r3, pc
 	mov	r3, r3, lsr #SECTION_SHIFT
 	orr	r3, r7, r3, lsl #SECTION_SHIFT
-	add	r0, r4, #(XIP_START & 0xff000000) >> (SECTION_SHIFT - PMD_ORDER)
-	str	r3, [r0, #((XIP_START & 0x00f00000) >> SECTION_SHIFT) << PMD_ORDER]!
+	add	r0, r4, #(XIP_START & 0xff000000) >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
+	str	r3, [r0, #((XIP_START & 0x00f00000) >> SECTION_SHIFT) << PMD_ENTRY_ORDER]!
 	ldr	r6, =(_edata_loc - 1)
-	add	r0, r0, #1 << PMD_ORDER
-	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ORDER)
+	add	r0, r0, #1 << PMD_ENTRY_ORDER
+	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ENTRY_ORDER)
 1:	cmp	r0, r6
 	add	r3, r3, #1 << SECTION_SHIFT
-	strls	r3, [r0], #1 << PMD_ORDER
+	strls	r3, [r0], #1 << PMD_ENTRY_ORDER
 	bls	1b
 #endif
@@ -297,10 +297,10 @@ __create_page_tables:
	 */
 	mov	r0, r2, lsr #SECTION_SHIFT
 	cmp	r2, #0
-	ldrne	r3, =FDT_FIXED_BASE >> (SECTION_SHIFT - PMD_ORDER)
+	ldrne	r3, =FDT_FIXED_BASE >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
 	addne	r3, r3, r4
 	orrne	r6, r7, r0, lsl #SECTION_SHIFT
-	strne	r6, [r3], #1 << PMD_ORDER
+	strne	r6, [r3], #1 << PMD_ENTRY_ORDER
 	addne	r6, r6, #1 << SECTION_SHIFT
 	strne	r6, [r3]
@@ -319,7 +319,7 @@ __create_page_tables:
 	addruart r7, r3, r0
 	mov	r3, r3, lsr #SECTION_SHIFT
-	mov	r3, r3, lsl #PMD_ORDER
+	mov	r3, r3, lsl #PMD_ENTRY_ORDER
 	add	r0, r4, r3
 	mov	r3, r7, lsr #SECTION_SHIFT
@@ -349,7 +349,7 @@ __create_page_tables:
	 * If we're using the NetWinder or CATS, we also need to map
	 * in the 16550-type serial port for the debug messages
	 */
-	add	r0, r4, #0xff000000 >> (SECTION_SHIFT - PMD_ORDER)
+	add	r0, r4, #0xff000000 >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
 	orr	r3, r7, #0x7c000000
 	str	r3, [r0]
 #endif
@@ -359,10 +359,10 @@ __create_page_tables:
	 * Similar reasons here - for debug. This is
	 * only for Acorn RiscPC architectures.
	 */
-	add	r0, r4, #0x02000000 >> (SECTION_SHIFT - PMD_ORDER)
+	add	r0, r4, #0x02000000 >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
 	orr	r3, r7, #0x02000000
 	str	r3, [r0]
-	add	r0, r4, #0xd8000000 >> (SECTION_SHIFT - PMD_ORDER)
+	add	r0, r4, #0xd8000000 >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
 	str	r3, [r0]
 #endif
 #endif
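A standalone sketch (not part of the patch) of what the renamed constant encodes: a PMD entry is 2^PMD_ENTRY_ORDER bytes, so shifting an index left by it, as head.S does with "lsl #PMD_ENTRY_ORDER", turns an entry index into a byte offset. The order values mirror the #defines above; the index is an arbitrary example:

#include <stdio.h>

int main(void)
{
	unsigned int lpae_entry_order    = 3;  /* 8-byte PMD entries with CONFIG_ARM_LPAE */
	unsigned int classic_entry_order = 2;  /* 4-byte entries otherwise */
	unsigned int index = 5;                /* arbitrary example entry index */

	printf("entry size (LPAE)    = %u bytes, offset of entry %u = %u\n",
	       1U << lpae_entry_order, index, index << lpae_entry_order);
	printf("entry size (classic) = %u bytes, offset of entry %u = %u\n",
	       1U << classic_entry_order, index, index << classic_entry_order);
	return 0;
}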