Message ID | 20201120064325.34492-22-songmuchun@bytedance.com
---|---
State | New, archived
Series | Free some vmemmap pages of hugetlb page
On Fri 20-11-20 14:43:25, Muchun Song wrote:
> We only can free the unused vmemmap to the buddy system when the
> size of struct page is a power of two.

Can we actually have !power_of_2 struct pages?

>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb_vmemmap.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index c3b3fc041903..7bb749a3eea2 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int order = huge_page_order(h);
>  	unsigned int vmemmap_pages;
>
> -	if (hugetlb_free_vmemmap_disabled) {
> +	if (hugetlb_free_vmemmap_disabled ||
> +	    !is_power_of_2(sizeof(struct page))) {
>  		pr_info("disable free vmemmap pages for %s\n", h->name);
>  		return;
>  	}
> --
> 2.11.0
>
On 20.11.20 09:25, Michal Hocko wrote:
> On Fri 20-11-20 14:43:25, Muchun Song wrote:
>> We only can free the unused vmemmap to the buddy system when the
>> size of struct page is a power of two.
>
> Can we actually have !power_of_2 struct pages?

AFAIK multiples of 8 bytes (56, 64, 72) are possible.
On 20.11.20 07:43, Muchun Song wrote:
> We only can free the unused vmemmap to the buddy system when the
> size of struct page is a power of two.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb_vmemmap.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index c3b3fc041903..7bb749a3eea2 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int order = huge_page_order(h);
>  	unsigned int vmemmap_pages;
>
> -	if (hugetlb_free_vmemmap_disabled) {
> +	if (hugetlb_free_vmemmap_disabled ||
> +	    !is_power_of_2(sizeof(struct page))) {
>  		pr_info("disable free vmemmap pages for %s\n", h->name);
>  		return;
>  	}
>

This patch should be merged into the original patch that introduced
vmemmap freeing.
On Fri, Nov 20, 2020 at 5:16 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 20.11.20 07:43, Muchun Song wrote:
> > We only can free the unused vmemmap to the buddy system when the
> > size of struct page is a power of two.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/hugetlb_vmemmap.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index c3b3fc041903..7bb749a3eea2 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >  	unsigned int order = huge_page_order(h);
> >  	unsigned int vmemmap_pages;
> >
> > -	if (hugetlb_free_vmemmap_disabled) {
> > +	if (hugetlb_free_vmemmap_disabled ||
> > +	    !is_power_of_2(sizeof(struct page))) {
> >  		pr_info("disable free vmemmap pages for %s\n", h->name);
> >  		return;
> >  	}
> >
>
> This patch should be merged into the original patch that introduced
> vmemmap freeing.

Oh, yeah. Will do.

>
> --
> Thanks,
>
> David / dhildenb
>
On Fri, Nov 20, 2020 at 10:15:30AM +0100, David Hildenbrand wrote:
> On 20.11.20 09:25, Michal Hocko wrote:
> > On Fri 20-11-20 14:43:25, Muchun Song wrote:
> > > We only can free the unused vmemmap to the buddy system when the
> > > size of struct page is a power of two.
> >
> > Can we actually have !power_of_2 struct pages?
>
> AFAIK multiples of 8 bytes (56, 64, 72) are possible.

Or multiples of 4 for 32-bit (28, 32, 36).

> --
> Thanks,
>
> David / dhildenb
>
On Fri, Nov 20, 2020 at 09:25:52AM +0100, Michal Hocko wrote:
> On Fri 20-11-20 14:43:25, Muchun Song wrote:
> > We only can free the unused vmemmap to the buddy system when the
> > size of struct page is a power of two.
>
> Can we actually have !power_of_2 struct pages?

Yes. On x86-64, if you don't enable MEMCG, it's 56 bytes. On SPARC64,
if you do enable MEMCG, it's 72 bytes. On 32-bit systems, it's
anything from 32-44 bytes, depending on MEMCG, WANT_PAGE_VIRTUAL and
LAST_CPUPID_NOT_IN_PAGE_FLAGS.
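For reference, the check the patch adds is the usual bit trick: the kernel's
is_power_of_2() (include/linux/log2.h) boils down to n != 0 && (n & (n - 1)) == 0.
A standalone userspace sketch of that same test, run over the example sizes
quoted in this thread (illustrative values from the discussion, not
measurements), would look like:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Same bit trick as the kernel's is_power_of_2(): a power of two has
 * exactly one bit set, so n & (n - 1) clears it to zero. */
static bool is_power_of_2(unsigned long n)
{
        return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
        /* Example sizeof(struct page) values mentioned in this thread. */
        unsigned long sizes[] = { 28, 32, 36, 56, 64, 72 };

        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("sizeof(struct page) == %lu -> %s\n", sizes[i],
                       is_power_of_2(sizes[i]) ? "vmemmap freeing stays enabled"
                                               : "vmemmap freeing disabled");
        return 0;
}

Of the sizes discussed, only 32 and 64 pass the check.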
On Mon, Nov 23, 2020 at 3:00 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Nov 20, 2020 at 09:25:52AM +0100, Michal Hocko wrote:
> > On Fri 20-11-20 14:43:25, Muchun Song wrote:
> > > We only can free the unused vmemmap to the buddy system when the
> > > size of struct page is a power of two.
> >
> > Can we actually have !power_of_2 struct pages?
>
> Yes. On x86-64, if you don't enable MEMCG, it's 56 bytes. On SPARC64,
> if you do enable MEMCG, it's 72 bytes. On 32-bit systems, it's
> anything from 32-44 bytes, depending on MEMCG, WANT_PAGE_VIRTUAL and
> LAST_CPUPID_NOT_IN_PAGE_FLAGS.
>

On x86-64 it is still 64 bytes even if you do not enable MEMCG, because
CONFIG_HAVE_ALIGNED_STRUCT_PAGE is defined when we use SLUB.
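The padding Muchun refers to comes from the alignment attribute on struct
page: with CONFIG_HAVE_ALIGNED_STRUCT_PAGE (selected when SLUB is used),
mm_types.h aligns struct page to 2 * sizeof(unsigned long), i.e. 16 bytes on
64-bit, if I remember the definition correctly. A toy stand-in (hypothetical
field layout, not the real struct page) shows how that rounds a 56-byte
struct up to 64:

#include <stdio.h>

/* Hypothetical 56-byte layout: 7 unsigned longs on a 64-bit machine. */
struct toy_page {
        unsigned long words[7];
};

/* Same fields, but with the kind of alignment CONFIG_HAVE_ALIGNED_STRUCT_PAGE
 * adds: 2 * sizeof(unsigned long) = 16 on 64-bit, so sizeof is padded to the
 * next multiple of 16, i.e. 64 bytes. */
struct toy_page_aligned {
        unsigned long words[7];
} __attribute__((aligned(2 * sizeof(unsigned long))));

int main(void)
{
        printf("unaligned: %zu bytes\n", sizeof(struct toy_page));         /* 56 */
        printf("aligned:   %zu bytes\n", sizeof(struct toy_page_aligned)); /* 64 */
        return 0;
}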
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c3b3fc041903..7bb749a3eea2 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
-	if (hugetlb_free_vmemmap_disabled) {
+	if (hugetlb_free_vmemmap_disabled ||
+	    !is_power_of_2(sizeof(struct page))) {
 		pr_info("disable free vmemmap pages for %s\n", h->name);
 		return;
 	}
We can only free the unused vmemmap to the buddy system when the size of
struct page is a power of two.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
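The commit message states the requirement but not the reason; my reading of
the series is that the optimization frees vmemmap pages whose contents are
nothing but (identical) tail struct pages, which only lines up when
sizeof(struct page) divides PAGE_SIZE, and for sizes in the 28-72 byte range
that effectively means a power of two. A back-of-the-envelope check under
assumed parameters (4 KiB base pages, 2 MiB HugeTLB pages; neither is stated
in this patch):

#include <stdio.h>

int main(void)
{
        const unsigned long page_size = 4096;      /* assumed base page size */
        const unsigned long pages_per_huge = 512;  /* assumed 2 MiB / 4 KiB */
        const unsigned long sizes[] = { 56, 64 };  /* example struct page sizes */

        for (int i = 0; i < 2; i++) {
                unsigned long s = sizes[i];
                printf("sizeof(struct page) = %lu: %lu bytes of vmemmap per hugepage, "
                       "PAGE_SIZE %% %lu = %lu -> %s\n",
                       s, s * pages_per_huge, s, page_size % s,
                       page_size % s == 0
                               ? "each vmemmap page holds whole struct pages"
                               : "struct pages straddle vmemmap page boundaries");
        }
        return 0;
}

With 64-byte struct pages the 512 struct pages of a 2 MiB hugepage fill whole
vmemmap pages that can be remapped; with 56 bytes the struct pages do not pack
evenly into 4 KiB pages, so the trick cannot be applied and the feature is
simply disabled.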