Message ID: 20200506064635.20114-1-gshan@redhat.com (mailing list archive)
State: New, archived
Series: arm64/mm: Remove add_huge_page_size()
On 05/06/2020 12:16 PM, Gavin Shan wrote:
> The function add_huge_page_size(), wrapper of hugetlb_add_hstate(),
> avoids to register duplicated huge page states for same size. However,
> the same logic has been included in hugetlb_add_hstate(). So it seems
> unnecessary to keep add_huge_page_size() and this just removes it.

Makes sense.

>
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/mm/hugetlbpage.c | 18 +++++-------------
>  1 file changed, 5 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index bbeb6a5a6ba6..ed7530413941 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -441,22 +441,14 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
> 	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
>  }
>
> -static void __init add_huge_page_size(unsigned long size)
> -{
> -	if (size_to_hstate(size))
> -		return;
> -
> -	hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
> -}
> -
>  static int __init hugetlbpage_init(void)
>  {
>  #ifdef CONFIG_ARM64_4K_PAGES
> -	add_huge_page_size(PUD_SIZE);
> +	hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
>  #endif
> -	add_huge_page_size(CONT_PMD_SIZE);
> -	add_huge_page_size(PMD_SIZE);
> -	add_huge_page_size(CONT_PTE_SIZE);
> +	hugetlb_add_hstate(CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT);
> +	hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
> +	hugetlb_add_hstate(CONT_PTE_SHIFT);

Should these page order values be converted into macros instead? Also
we should probably keep (CONT_PTE_SHIFT + PAGE_SHIFT - PAGE_SHIFT) as
is to make things more clear.
On Wed, May 06, 2020 at 12:36:43PM +0530, Anshuman Khandual wrote:
>
>
> On 05/06/2020 12:16 PM, Gavin Shan wrote:
> > The function add_huge_page_size(), wrapper of hugetlb_add_hstate(),
> > avoids to register duplicated huge page states for same size. However,
> > the same logic has been included in hugetlb_add_hstate(). So it seems
> > unnecessary to keep add_huge_page_size() and this just removes it.
>
> Makes sense.
>
> >
> > Signed-off-by: Gavin Shan <gshan@redhat.com>
> > ---
> >  arch/arm64/mm/hugetlbpage.c | 18 +++++-------------
> >  1 file changed, 5 insertions(+), 13 deletions(-)
> >
> > diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> > index bbeb6a5a6ba6..ed7530413941 100644
> > --- a/arch/arm64/mm/hugetlbpage.c
> > +++ b/arch/arm64/mm/hugetlbpage.c
> > @@ -441,22 +441,14 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
> > 	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
> >  }
> >
> > -static void __init add_huge_page_size(unsigned long size)
> > -{
> > -	if (size_to_hstate(size))
> > -		return;
> > -
> > -	hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
> > -}
> > -
> >  static int __init hugetlbpage_init(void)
> >  {
> >  #ifdef CONFIG_ARM64_4K_PAGES
> > -	add_huge_page_size(PUD_SIZE);
> > +	hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
> >  #endif
> > -	add_huge_page_size(CONT_PMD_SIZE);
> > -	add_huge_page_size(PMD_SIZE);
> > -	add_huge_page_size(CONT_PTE_SIZE);
> > +	hugetlb_add_hstate(CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT);
> > +	hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
> > +	hugetlb_add_hstate(CONT_PTE_SHIFT);

Something similar has already been done in linux-next.

> Should these page order values be converted into macros instead? Also
> we should probably keep (CONT_PTE_SHIFT + PAGE_SHIFT - PAGE_SHIFT) as
> is to make things more clear.

I think the real confusion stems from us not being consistent with your
*_SHIFT definitions on arm64. It's madness for CONT_PTE_SHIFT to be smaller
than PAGE_SHIFT imo, but it's just cosmetic I guess.

Will
On 5/6/20 5:19 PM, Will Deacon wrote:
> On Wed, May 06, 2020 at 12:36:43PM +0530, Anshuman Khandual wrote:
>>
>>
>> On 05/06/2020 12:16 PM, Gavin Shan wrote:
>>> The function add_huge_page_size(), wrapper of hugetlb_add_hstate(),
>>> avoids to register duplicated huge page states for same size. However,
>>> the same logic has been included in hugetlb_add_hstate(). So it seems
>>> unnecessary to keep add_huge_page_size() and this just removes it.
>>
>> Makes sense.
>>
>>>
>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>> ---
>>>  arch/arm64/mm/hugetlbpage.c | 18 +++++-------------
>>>  1 file changed, 5 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>>> index bbeb6a5a6ba6..ed7530413941 100644
>>> --- a/arch/arm64/mm/hugetlbpage.c
>>> +++ b/arch/arm64/mm/hugetlbpage.c
>>> @@ -441,22 +441,14 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
>>> 	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
>>>  }
>>>
>>> -static void __init add_huge_page_size(unsigned long size)
>>> -{
>>> -	if (size_to_hstate(size))
>>> -		return;
>>> -
>>> -	hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
>>> -}
>>> -
>>>  static int __init hugetlbpage_init(void)
>>>  {
>>>  #ifdef CONFIG_ARM64_4K_PAGES
>>> -	add_huge_page_size(PUD_SIZE);
>>> +	hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
>>>  #endif
>>> -	add_huge_page_size(CONT_PMD_SIZE);
>>> -	add_huge_page_size(PMD_SIZE);
>>> -	add_huge_page_size(CONT_PTE_SIZE);
>>> +	hugetlb_add_hstate(CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT);
>>> +	hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
>>> +	hugetlb_add_hstate(CONT_PTE_SHIFT);
>
> Something similar has already been done in linux-next.
>

Thanks, Will. I didn't check linux-next before posting this patch.
Please ignore it then :)

>> Should these page order values be converted into macros instead? Also
>> we should probably keep (CONT_PTE_SHIFT + PAGE_SHIFT - PAGE_SHIFT) as
>> is to make things more clear.
>
> I think the real confusion stems from us not being consistent with your
> *_SHIFT definitions on arm64. It's madness for CONT_PTE_SHIFT to be smaller
> than PAGE_SHIFT imo, but it's just cosmetic I guess.
>

Yeah, do you want me to post a patch to fix it?

> Will
>

Thanks,
Gavin
On Thu, May 07, 2020 at 10:15:59AM +1000, Gavin Shan wrote:
> On 5/6/20 5:19 PM, Will Deacon wrote:
> > On Wed, May 06, 2020 at 12:36:43PM +0530, Anshuman Khandual wrote:
> > >
> > >
> > > On 05/06/2020 12:16 PM, Gavin Shan wrote:
> > > > The function add_huge_page_size(), wrapper of hugetlb_add_hstate(),
> > > > avoids to register duplicated huge page states for same size. However,
> > > > the same logic has been included in hugetlb_add_hstate(). So it seems
> > > > unnecessary to keep add_huge_page_size() and this just removes it.
> > >
> > > Makes sense.
> > >
> > > >
> > > > Signed-off-by: Gavin Shan <gshan@redhat.com>
> > > > ---
> > > >  arch/arm64/mm/hugetlbpage.c | 18 +++++-------------
> > > >  1 file changed, 5 insertions(+), 13 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> > > > index bbeb6a5a6ba6..ed7530413941 100644
> > > > --- a/arch/arm64/mm/hugetlbpage.c
> > > > +++ b/arch/arm64/mm/hugetlbpage.c
> > > > @@ -441,22 +441,14 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
> > > > 	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
> > > >  }
> > > >
> > > > -static void __init add_huge_page_size(unsigned long size)
> > > > -{
> > > > -	if (size_to_hstate(size))
> > > > -		return;
> > > > -
> > > > -	hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
> > > > -}
> > > > -
> > > >  static int __init hugetlbpage_init(void)
> > > >  {
> > > >  #ifdef CONFIG_ARM64_4K_PAGES
> > > > -	add_huge_page_size(PUD_SIZE);
> > > > +	hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
> > > >  #endif
> > > > -	add_huge_page_size(CONT_PMD_SIZE);
> > > > -	add_huge_page_size(PMD_SIZE);
> > > > -	add_huge_page_size(CONT_PTE_SIZE);
> > > > +	hugetlb_add_hstate(CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT);
> > > > +	hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
> > > > +	hugetlb_add_hstate(CONT_PTE_SHIFT);
> >
> > Something similar has already been done in linux-next.
> >
>
> Thanks, Will. I didn't check linux-next before posting this patch.
> Please ignore it then :)
>
> > > Should these page order values be converted into macros instead? Also
> > > we should probably keep (CONT_PTE_SHIFT + PAGE_SHIFT - PAGE_SHIFT) as
> > > is to make things more clear.
> >
> > I think the real confusion stems from us not being consistent with your
> > *_SHIFT definitions on arm64. It's madness for CONT_PTE_SHIFT to be smaller
> > than PAGE_SHIFT imo, but it's just cosmetic I guess.
> >
>
> Yeah, do you want me to post a patch to fix it?

Let's wait until 5.8 is out the door first, since we've already got a
fair amount of activity in this area and it would be a pity if we broke
something as a result of cleanup!

Cheers,

Will
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index bbeb6a5a6ba6..ed7530413941 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -441,22 +441,14 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
 	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
 }
 
-static void __init add_huge_page_size(unsigned long size)
-{
-	if (size_to_hstate(size))
-		return;
-
-	hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
-}
-
 static int __init hugetlbpage_init(void)
 {
 #ifdef CONFIG_ARM64_4K_PAGES
-	add_huge_page_size(PUD_SIZE);
+	hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
 #endif
-	add_huge_page_size(CONT_PMD_SIZE);
-	add_huge_page_size(PMD_SIZE);
-	add_huge_page_size(CONT_PTE_SIZE);
+	hugetlb_add_hstate(CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT);
+	hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
+	hugetlb_add_hstate(CONT_PTE_SHIFT);
 
 	return 0;
 }
 
@@ -473,7 +465,7 @@ static __init int setup_hugepagesz(char *opt)
 	case CONT_PMD_SIZE:
 	case PMD_SIZE:
 	case CONT_PTE_SIZE:
-		add_huge_page_size(ps);
+		hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
 		return 1;
 	}
The function add_huge_page_size(), a wrapper of hugetlb_add_hstate(),
avoids registering duplicate huge page states for the same size.
However, the same logic is already included in hugetlb_add_hstate(),
so it seems unnecessary to keep add_huge_page_size(); this patch just
removes it.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/mm/hugetlbpage.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)