
[v5,21/21] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two

Message ID 20201120064325.34492-22-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series Free some vmemmap pages of hugetlb page

Commit Message

Muchun Song Nov. 20, 2020, 6:43 a.m. UTC
We can only free the unused vmemmap pages back to the buddy
system when the size of struct page is a power of two.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Michal Hocko Nov. 20, 2020, 8:25 a.m. UTC | #1
On Fri 20-11-20 14:43:25, Muchun Song wrote:
> We only can free the unused vmemmap to the buddy system when the
> size of struct page is a power of two.

Can we actually have !power_of_2 struct pages?

> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb_vmemmap.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index c3b3fc041903..7bb749a3eea2 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int order = huge_page_order(h);
>  	unsigned int vmemmap_pages;
>  
> -	if (hugetlb_free_vmemmap_disabled) {
> +	if (hugetlb_free_vmemmap_disabled ||
> +	    !is_power_of_2(sizeof(struct page))) {
>  		pr_info("disable free vmemmap pages for %s\n", h->name);
>  		return;
>  	}
> -- 
> 2.11.0
>
David Hildenbrand Nov. 20, 2020, 9:15 a.m. UTC | #2
On 20.11.20 09:25, Michal Hocko wrote:
> On Fri 20-11-20 14:43:25, Muchun Song wrote:
>> We only can free the unused vmemmap to the buddy system when the
>> size of struct page is a power of two.
> 
> Can we actually have !power_of_2 struct pages?

AFAIK multiples of 8 bytes (56, 64, 72) are possible.
David Hildenbrand Nov. 20, 2020, 9:16 a.m. UTC | #3
On 20.11.20 07:43, Muchun Song wrote:
> We only can free the unused vmemmap to the buddy system when the
> size of struct page is a power of two.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>   mm/hugetlb_vmemmap.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index c3b3fc041903..7bb749a3eea2 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>   	unsigned int order = huge_page_order(h);
>   	unsigned int vmemmap_pages;
>   
> -	if (hugetlb_free_vmemmap_disabled) {
> +	if (hugetlb_free_vmemmap_disabled ||
> +	    !is_power_of_2(sizeof(struct page))) {
>   		pr_info("disable free vmemmap pages for %s\n", h->name);
>   		return;
>   	}
> 

This patch should be merged into the original patch that introduced 
vmemmap freeing.
Muchun Song Nov. 20, 2020, 10:42 a.m. UTC | #4
On Fri, Nov 20, 2020 at 5:16 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 20.11.20 07:43, Muchun Song wrote:
> > We only can free the unused vmemmap to the buddy system when the
> > size of struct page is a power of two.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >   mm/hugetlb_vmemmap.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index c3b3fc041903..7bb749a3eea2 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >       unsigned int order = huge_page_order(h);
> >       unsigned int vmemmap_pages;
> >
> > -     if (hugetlb_free_vmemmap_disabled) {
> > +     if (hugetlb_free_vmemmap_disabled ||
> > +         !is_power_of_2(sizeof(struct page))) {
> >               pr_info("disable free vmemmap pages for %s\n", h->name);
> >               return;
> >       }
> >
>
> This patch should be merged into the original patch that introduced
> vmemmap freeing.

Oh, yeah. Will do.

Mike Rapoport Nov. 22, 2020, 1:30 p.m. UTC | #5
On Fri, Nov 20, 2020 at 10:15:30AM +0100, David Hildenbrand wrote:
> On 20.11.20 09:25, Michal Hocko wrote:
> > On Fri 20-11-20 14:43:25, Muchun Song wrote:
> > > We only can free the unused vmemmap to the buddy system when the
> > > size of struct page is a power of two.
> > 
> > Can we actually have !power_of_2 struct pages?
> 
> AFAIK multiples of 8 bytes (56, 64, 72) are possible.

Or multiples of 4 for 32-bit (28, 32, 36). 
 
Matthew Wilcox Nov. 22, 2020, 7 p.m. UTC | #6
On Fri, Nov 20, 2020 at 09:25:52AM +0100, Michal Hocko wrote:
> On Fri 20-11-20 14:43:25, Muchun Song wrote:
> > We only can free the unused vmemmap to the buddy system when the
> > size of struct page is a power of two.
> 
> Can we actually have !power_of_2 struct pages?

Yes.  On x86-64, if you don't enable MEMCG, it's 56 bytes.  On SPARC64,
if you do enable MEMCG, it's 72 bytes.  On 32-bit systems, it's
anything from 32-44 bytes, depending on MEMCG, WANT_PAGE_VIRTUAL and
LAST_CPUPID_NOT_IN_PAGE_FLAGS.
Muchun Song Nov. 23, 2020, 3:14 a.m. UTC | #7
On Mon, Nov 23, 2020 at 3:00 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Nov 20, 2020 at 09:25:52AM +0100, Michal Hocko wrote:
> > On Fri 20-11-20 14:43:25, Muchun Song wrote:
> > > We only can free the unused vmemmap to the buddy system when the
> > > size of struct page is a power of two.
> >
> > Can we actually have !power_of_2 struct pages?
>
> Yes.  On x86-64, if you don't enable MEMCG, it's 56 bytes.  On SPARC64,
> if you do enable MEMCG, it's 72 bytes.  On 32-bit systems, it's
> anything from 32-44 bytes, depending on MEMCG, WANT_PAGE_VIRTUAL and
> LAST_CPUPID_NOT_IN_PAGE_FLAGS.
>

On x86-64, even if you do not enable MEMCG, it is still 64 bytes, because
CONFIG_HAVE_ALIGNED_STRUCT_PAGE is defined when we use SLUB.

Patch

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c3b3fc041903..7bb749a3eea2 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -671,7 +671,8 @@  void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
-	if (hugetlb_free_vmemmap_disabled) {
+	if (hugetlb_free_vmemmap_disabled ||
+	    !is_power_of_2(sizeof(struct page))) {
 		pr_info("disable free vmemmap pages for %s\n", h->name);
 		return;
 	}