
[3/6] arm64/mm: Make pgd_pgtable_alloc() call pte_alloc_one() always

Message ID 1548307220-19756-4-git-send-email-anshuman.khandual@arm.com (mailing list archive)
State New, archived
Series arm64/mm: Enable accounting for page table pages

Commit Message

Anshuman Khandual Jan. 24, 2019, 5:20 a.m. UTC
pgd_pgtable_alloc() provides the page allocation function used while creating
all levels of the page table (e.g. PGD, PUD, CONT PMD) for various kernel
mappings. It calls __get_free_page() and initializes the page with
pagetable_page_ctor(). pte_alloc_one() already provides a standard interface
that allocates a page table page and initializes it with
pagetable_page_ctor(). Remove the redundancy by making pgd_pgtable_alloc()
call pte_alloc_one() directly.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/mm/mmu.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

Comments

Will Deacon Feb. 14, 2019, 5:11 p.m. UTC | #1
On Thu, Jan 24, 2019 at 10:50:17AM +0530, Anshuman Khandual wrote:
> pgd_pgtable_alloc() provides the page allocation function used while creating
> all levels of the page table (e.g. PGD, PUD, CONT PMD) for various kernel
> mappings. It calls __get_free_page() and initializes the page with
> pagetable_page_ctor(). pte_alloc_one() already provides a standard interface
> that allocates a page table page and initializes it with
> pagetable_page_ctor(). Remove the redundancy by making pgd_pgtable_alloc()
> call pte_alloc_one() directly.
> 
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>  arch/arm64/mm/mmu.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b6f5aa5..2dbd723 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -372,13 +372,22 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
>  
>  static phys_addr_t pgd_pgtable_alloc(void)
>  {
> -	void *ptr = (void *)__get_free_page(PGALLOC_GFP);
> -	if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
> -		BUG();
> +
> +	pgtable_t ptr;
> +
> +	/*
> +	 * pgd_pgtable_alloc() is called while creating kernel mappings
> +	 * through __create_pgd_mapping(), which might not install them
> +	 * via swapper_pg_dir (&init_mm). Even so, init_mm is passed
> +	 * here just to indicate that the allocation is kernel specific,
> +	 * not for user space page tables.
> +	 */
> +	ptr = pte_alloc_one(&init_mm);
> +	BUG_ON(!ptr);
>  
>  	/* Ensure the zeroed page is visible to the page table walker */
>  	dsb(ishst);
> -	return __pa(ptr);
> +	return page_to_phys(ptr);

Are you sure this is safe? afaict, this code is used to set up the linear
mapping which happens before the mem_map is initialised. I don't think
page_to_phys() can be used at this point. What am I missing?

Will
Anshuman Khandual Feb. 19, 2019, 4:33 a.m. UTC | #2
On 02/14/2019 10:41 PM, Will Deacon wrote:
> On Thu, Jan 24, 2019 at 10:50:17AM +0530, Anshuman Khandual wrote:
>> pgd_pgtable_alloc() provides the page allocation function used while creating
>> all levels of the page table (e.g. PGD, PUD, CONT PMD) for various kernel
>> mappings. It calls __get_free_page() and initializes the page with
>> pagetable_page_ctor(). pte_alloc_one() already provides a standard interface
>> that allocates a page table page and initializes it with
>> pagetable_page_ctor(). Remove the redundancy by making pgd_pgtable_alloc()
>> call pte_alloc_one() directly.
>>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>>  arch/arm64/mm/mmu.c | 17 +++++++++++++----
>>  1 file changed, 13 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index b6f5aa5..2dbd723 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -372,13 +372,22 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
>>  
>>  static phys_addr_t pgd_pgtable_alloc(void)
>>  {
>> -	void *ptr = (void *)__get_free_page(PGALLOC_GFP);
>> -	if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
>> -		BUG();
>> +
>> +	pgtable_t ptr;
>> +
>> +	/*
>> +	 * pgd_pgtable_alloc() is called while creating kernel mappings
>> +	 * through __create_pgd_mapping(), which might not install them
>> +	 * via swapper_pg_dir (&init_mm). Even so, init_mm is passed
>> +	 * here just to indicate that the allocation is kernel specific,
>> +	 * not for user space page tables.
>> +	 */
>> +	ptr = pte_alloc_one(&init_mm);
>> +	BUG_ON(!ptr);
>>  
>>  	/* Ensure the zeroed page is visible to the page table walker */
>>  	dsb(ishst);
>> -	return __pa(ptr);
>> +	return page_to_phys(ptr);
> 
> Are you sure this is safe? afaict, this code is used to set up the linear
> mapping which happens before the mem_map is initialised. I don't think
> page_to_phys() can be used at this point. What am I missing?

We are calling into the buddy allocator to get a page before coming here,
so mem_map is definitely initialized. There is also early_pgtable_alloc(),
used with __create_pgd_mapping(), which allocates a page from memblock
when the buddy allocator has not yet been initialized.

Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b6f5aa5..2dbd723 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -372,13 +372,22 @@  static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 
 static phys_addr_t pgd_pgtable_alloc(void)
 {
-	void *ptr = (void *)__get_free_page(PGALLOC_GFP);
-	if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
-		BUG();
+
+	pgtable_t ptr;
+
+	/*
+	 * pgd_pgtable_alloc() is called while creating kernel mappings
+	 * through __create_pgd_mapping(), which might not install them
+	 * via swapper_pg_dir (&init_mm). Even so, init_mm is passed
+	 * here just to indicate that the allocation is kernel specific,
+	 * not for user space page tables.
+	 */
+	ptr = pte_alloc_one(&init_mm);
+	BUG_ON(!ptr);
 
 	/* Ensure the zeroed page is visible to the page table walker */
 	dsb(ishst);
-	return __pa(ptr);
+	return page_to_phys(ptr);
 }
 
 /*