
[3/4] mm: hugetlb_vmemmap: move PageVmemmapSelfHosted() check to split_vmemmap_huge_pmd()

Message ID: 20231127084645.27017-4-songmuchun@bytedance.com (mailing list archive)
State: New
Series: Code simplification and clean-up for hugetlb vmemmap

Commit Message

Muchun Song Nov. 27, 2023, 8:46 a.m. UTC
Checking whether a page is self-hosted requires walking the page table (e.g.
via pmd_off_k()); however, the subsequent call to vmemmap_remap_range() already
performs that walk. Moving the PageVmemmapSelfHosted() check into
vmemmap_pmd_entry() simplifies the code a bit.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 70 +++++++++++++++-----------------------------
 1 file changed, 24 insertions(+), 46 deletions(-)
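
For context, pmd_off_k() itself performs a full top-down walk of the kernel
page tables, and vmemmap_remap_range()'s page walker performs the same descent
immediately afterwards. A simplified sketch, modeled on the pmd_off_k()
definition in include/linux/pgtable.h:

	/*
	 * Descend pgd -> p4d -> pud -> pmd for a kernel virtual address.
	 * vmemmap_remap_range() repeats this descent via its mm_walk-based
	 * walker, which is what makes a separate walk in
	 * vmemmap_should_optimize() redundant.
	 */
	static inline pmd_t *pmd_off_k(unsigned long va)
	{
		return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
	}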

Comments

Mike Kravetz Dec. 4, 2023, 11:27 p.m. UTC | #1
On 11/27/23 16:46, Muchun Song wrote:
> Checking whether a page is self-hosted requires walking the page table (e.g.
> via pmd_off_k()); however, the subsequent call to vmemmap_remap_range() already
> performs that walk. Moving the PageVmemmapSelfHosted() check into
> vmemmap_pmd_entry() simplifies the code a bit.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb_vmemmap.c | 70 +++++++++++++++-----------------------------
>  1 file changed, 24 insertions(+), 46 deletions(-)
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index ef14356855d13..ce920ca6c90ee 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -95,6 +95,7 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
>  static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
>  			     unsigned long next, struct mm_walk *walk)
>  {
> +	int ret = 0;
>  	struct page *head;
>  	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
>  
> @@ -104,9 +105,30 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
>  
>  	spin_lock(&init_mm.page_table_lock);
>  	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;

I was going to ask why you dropped the:
	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG))
However, without the extra page table reads it really is not necessary.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Muchun Song Dec. 5, 2023, 2:46 a.m. UTC | #2
> On Dec 5, 2023, at 07:27, Mike Kravetz <mike.kravetz@oracle.com> wrote:
> 
> On 11/27/23 16:46, Muchun Song wrote:
>> Checking whether a page is self-hosted requires walking the page table (e.g.
>> via pmd_off_k()); however, the subsequent call to vmemmap_remap_range() already
>> performs that walk. Moving the PageVmemmapSelfHosted() check into
>> vmemmap_pmd_entry() simplifies the code a bit.
>> 
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> ---
>> mm/hugetlb_vmemmap.c | 70 +++++++++++++++-----------------------------
>> 1 file changed, 24 insertions(+), 46 deletions(-)
>> 
>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>> index ef14356855d13..ce920ca6c90ee 100644
>> --- a/mm/hugetlb_vmemmap.c
>> +++ b/mm/hugetlb_vmemmap.c
>> @@ -95,6 +95,7 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
>>  static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
>>  			     unsigned long next, struct mm_walk *walk)
>>  {
>> +	int ret = 0;
>>  	struct page *head;
>>  	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
>>  
>> @@ -104,9 +105,30 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
>>  
>>  	spin_lock(&init_mm.page_table_lock);
>>  	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
> 
> I was going to ask why you dropped the:
> if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG))
> However, without the extra page table reads it really is not necessary.

I thought the compiler would be smart enough to see that this code
is unnecessary and drop it, but I was wrong. I did a test showing
that the compiler does optimize the code when CONFIG_MEMORY_HOTPLUG
is disabled, but not as thoroughly as an explicit
IS_ENABLED(CONFIG_MEMORY_HOTPLUG) check does. I will add the check
back in a separate patch, something like the sketch below (I suspect
it will be easy for Andrew to merge).
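
A sketch of that follow-up (the actual separate patch may differ; the
point is that IS_ENABLED() lets the compiler drop the whole branch
when CONFIG_MEMORY_HOTPLUG is disabled):

	spin_lock(&init_mm.page_table_lock);
	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
	/*
	 * Only memory_hotplug.memmap_on_memory creates self-hosted
	 * vmemmap pages, so this branch can be compiled out when
	 * CONFIG_MEMORY_HOTPLUG is disabled.
	 */
	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
	    unlikely(!vmemmap_walk->nr_walked)) {
		struct page *page = head ? head + pte_index(addr) :
				    pte_page(ptep_get(pte_offset_kernel(pmd, addr)));

		if (PageVmemmapSelfHosted(page))
			ret = -ENOTSUPP;
	}
	spin_unlock(&init_mm.page_table_lock);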

> 
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Thanks.

> -- 
> Mike Kravetz

Patch

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ef14356855d13..ce920ca6c90ee 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -95,6 +95,7 @@  static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 			     unsigned long next, struct mm_walk *walk)
 {
+	int ret = 0;
 	struct page *head;
 	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
 
@@ -104,9 +105,30 @@  static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 
 	spin_lock(&init_mm.page_table_lock);
 	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
+	/*
+	 * Due to HugeTLB alignment requirements and because the vmemmap
+	 * pages sit at the start of the hotplugged memory region in the
+	 * memory_hotplug.memmap_on_memory case, checking whether the page
+	 * backing the first vmemmap address is self-hosted is sufficient.
+	 *
+	 * [                  hotplugged memory                  ]
+	 * [        section        ][...][        section        ]
+	 * [ vmemmap ][              usable memory               ]
+	 *   ^  | ^                        |
+	 *   +--+ |                        |
+	 *        +------------------------+
+	 */
+	if (unlikely(!vmemmap_walk->nr_walked)) {
+		struct page *page = head ? head + pte_index(addr) :
+				    pte_page(ptep_get(pte_offset_kernel(pmd, addr)));
+
+		if (PageVmemmapSelfHosted(page))
+			ret = -ENOTSUPP;
+	}
 	spin_unlock(&init_mm.page_table_lock);
-	if (!head)
-		return 0;
+	if (!head || ret)
+		return ret;
 
 	return vmemmap_split_pmd(pmd, head, addr & PMD_MASK, vmemmap_walk);
 }
@@ -524,50 +546,6 @@  static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
 	if (!hugetlb_vmemmap_optimizable(h))
 		return false;
 
-	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
-		pmd_t *pmdp, pmd;
-		struct page *vmemmap_page;
-		unsigned long vaddr = (unsigned long)head;
-
-		/*
-		 * Only the vmemmap page's vmemmap page can be self-hosted.
-		 * Walking the page tables to find the backing page of the
-		 * vmemmap page.
-		 */
-		pmdp = pmd_off_k(vaddr);
-		/*
-		 * The READ_ONCE() is used to stabilize *pmdp in a register or
-		 * on the stack so that it will stop changing under the code.
-		 * The only concurrent operation where it can be changed is
-		 * split_vmemmap_huge_pmd() (*pmdp will be stable after this
-		 * operation).
-		 */
-		pmd = READ_ONCE(*pmdp);
-		if (pmd_leaf(pmd))
-			vmemmap_page = pmd_page(pmd) + pte_index(vaddr);
-		else
-			vmemmap_page = pte_page(*pte_offset_kernel(pmdp, vaddr));
-		/*
-		 * Due to HugeTLB alignment requirements and the vmemmap pages
-		 * being at the start of the hotplugged memory region in
-		 * memory_hotplug.memmap_on_memory case. Checking any vmemmap
-		 * page's vmemmap page if it is marked as VmemmapSelfHosted is
-		 * sufficient.
-		 *
-		 * [                  hotplugged memory                  ]
-		 * [        section        ][...][        section        ]
-		 * [ vmemmap ][              usable memory               ]
-		 *   ^   |     |                                        |
-		 *   +---+     |                                        |
-		 *     ^       |                                        |
-		 *     +-------+                                        |
-		 *          ^                                           |
-		 *          +-------------------------------------------+
-		 */
-		if (PageVmemmapSelfHosted(vmemmap_page))
-			return false;
-	}
-
 	return true;
 }
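
For reference, with the hunk above applied, vmemmap_should_optimize()
reduces to the shape below (reconstructed from the context lines shown;
any checks earlier in the function are unchanged):

	static bool vmemmap_should_optimize(const struct hstate *h,
					    const struct page *head)
	{
		/* ... earlier checks unchanged ... */
		if (!hugetlb_vmemmap_optimizable(h))
			return false;

		/* The self-hosted check now lives in vmemmap_pmd_entry(). */
		return true;
	}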