| Message ID | 20210923175347.10727-2-mike.kravetz@oracle.com (mailing list archive) |
| --- | --- |
| State | New |
| Series | hugetlb: add demote/split page functionality |
On Thu, 23 Sep 2021 10:53:44 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> Two new sysfs files are added to demote hugetlb pages. These files are
> both per-hugetlb page size and per node. Files are:
>   demote_size - The size in Kb that pages are demoted to. (read-write)
>   demote - The number of huge pages to demote. (write-only)
>
> By default, demote_size is the next smallest huge page size. Valid huge
> page sizes less than huge page size may be written to this file. When
> huge pages are demoted, they are demoted to this size.
>
> Writing a value to demote will result in an attempt to demote that
> number of hugetlb pages to an appropriate number of demote_size pages.
>
> NOTE: Demote interfaces are only provided for huge page sizes if there
> is a smaller target demote huge page size. For example, on x86 1GB huge
> pages will have demote interfaces. 2MB huge pages will not have demote
> interfaces.
>
> This patch does not provide full demote functionality. It only provides
> the sysfs interfaces.
>
> It also provides documentation for the new interfaces.
>
> ...
>
> +static ssize_t demote_store(struct kobject *kobj,
> +		struct kobj_attribute *attr, const char *buf, size_t len)
> +{
> +	unsigned long nr_demote;
> +	unsigned long nr_available;
> +	nodemask_t nodes_allowed, *n_mask;
> +	struct hstate *h;
> +	int err;
> +	int nid;
> +
> +	err = kstrtoul(buf, 10, &nr_demote);
> +	if (err)
> +		return err;
> +	h = kobj_to_hstate(kobj, &nid);
> +
> +	/* Synchronize with other sysfs operations modifying huge pages */
> +	mutex_lock(&h->resize_lock);
> +
> +	spin_lock_irq(&hugetlb_lock);
> +	if (nid != NUMA_NO_NODE) {
> +		nr_available = h->free_huge_pages_node[nid];
> +		init_nodemask_of_node(&nodes_allowed, nid);
> +		n_mask = &nodes_allowed;
> +	} else {
> +		nr_available = h->free_huge_pages;
> +		n_mask = &node_states[N_MEMORY];
> +	}
> +	nr_available -= h->resv_huge_pages;
> +	if (nr_available <= 0)
> +		goto out;
> +	nr_demote = min(nr_available, nr_demote);
> +
> +	while (nr_demote) {
> +		if (!demote_pool_huge_page(h, n_mask))
> +			break;
> +
> +		/*
> +		 * We may have dropped the lock in the routines to
> +		 * demote/free a page. Recompute nr_demote as counts could
> +		 * have changed and we want to make sure we do not demote
> +		 * a reserved huge page.
> +		 */

This comment doesn't become true until patch #4, and is a bit confusing
in patch #1. Also, saying "the lock" is far less helpful than saying
"hugetlb_lock"!

> +		nr_demote--;
> +		if (nid != NUMA_NO_NODE)
> +			nr_available = h->free_huge_pages_node[nid];
> +		else
> +			nr_available = h->free_huge_pages;
> +		nr_available -= h->resv_huge_pages;
> +		if (nr_available <= 0)
> +			nr_demote = 0;
> +		else
> +			nr_demote = min(nr_available, nr_demote);
> +	}
> +
> +out:
> +	spin_unlock_irq(&hugetlb_lock);

How long can we spend with IRQs disabled here (after patch #4!)?

> +	mutex_unlock(&h->resize_lock);
> +
> +	return len;
> +}
> +HSTATE_ATTR_WO(demote);
On 9/23/21 2:24 PM, Andrew Morton wrote:
> On Thu, 23 Sep 2021 10:53:44 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
>> Two new sysfs files are added to demote hugetlb pages. These files are
>> both per-hugetlb page size and per node. Files are:
>>   demote_size - The size in Kb that pages are demoted to. (read-write)
>>   demote - The number of huge pages to demote. (write-only)
>>
>> [...]
>>
>> +	while (nr_demote) {
>> +		if (!demote_pool_huge_page(h, n_mask))
>> +			break;
>> +
>> +		/*
>> +		 * We may have dropped the lock in the routines to
>> +		 * demote/free a page. Recompute nr_demote as counts could
>> +		 * have changed and we want to make sure we do not demote
>> +		 * a reserved huge page.
>> +		 */
>
> This comment doesn't become true until patch #4, and is a bit confusing
> in patch #1. Also, saying "the lock" is far less helpful than saying
> "hugetlb_lock"!

Right. That is the result of slicing and dicing working code to create
individual patches. Sorry. I will correct.

The comment is also not 100% accurate. demote_pool_huge_page will always
drop hugetlb_lock, except in the quick error case, which is not really
interesting. This helps answer your next question.

>> [...]
>>
>> +out:
>> +	spin_unlock_irq(&hugetlb_lock);
>
> How long can we spend with IRQs disabled here (after patch #4!)?

Not very long. We will drop the lock on page demote. This is because we
need to potentially allocate vmemmap pages. We will actually go through
quite a few acquire/drop lock cycles for each demoted page. Something
like:

	dequeue page to be demoted
	drop lock
	potentially allocate vmemmap pages
	for each page of demoted size
		prep page
		acquire lock
		enqueue page to new pool
		drop lock
	reacquire lock

This is 'no worse' than the lock cycling that happens with existing pool
adjustment mechanisms such as "echo > nr_hugepages". The updated comment
will point out that there is little need to worry about lock hold/irq
disable time.
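[Editor's note: the sequence Mike describes corresponds roughly to the C
sketch below. It only illustrates the locking pattern under discussion;
it is not code from patch #4, and every hypothetical_* helper name is
invented here for illustration.]

	/*
	 * Illustration of the lock cycling described above, NOT real
	 * kernel code.  All hypothetical_* helpers are invented.
	 */
	static void demote_one_page_sketch(struct hstate *h, struct hstate *target)
	{
		struct page *page;
		int i, nr_subpages = 1 << (h->order - h->demote_order);

		/* Entered with hugetlb_lock held, as in demote_store(). */
		page = hypothetical_dequeue(h);		/* dequeue page to be demoted */
		spin_unlock_irq(&hugetlb_lock);		/* drop lock */

		hypothetical_alloc_vmemmap(h, page);	/* may sleep, hence the drop */

		for (i = 0; i < nr_subpages; i++) {	/* for each page of demoted size */
			struct page *sub = page + i * pages_per_huge_page(target);

			hypothetical_prep_subpage(target, sub);	/* prep page */
			spin_lock_irq(&hugetlb_lock);		/* acquire lock */
			hypothetical_enqueue(target, sub);	/* enqueue to new pool */
			spin_unlock_irq(&hugetlb_lock);		/* drop lock */
		}

		spin_lock_irq(&hugetlb_lock);		/* reacquire lock for caller */
	}

The point of the pattern is that hugetlb_lock is never held across a
potentially sleeping vmemmap allocation, and is cycled once per demoted
page, which bounds the IRQ-disabled hold time Andrew asked about.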
Mike Kravetz <mike.kravetz@oracle.com> writes:

> Two new sysfs files are added to demote hugetlb pages. These files are
> both per-hugetlb page size and per node. Files are:
>   demote_size - The size in Kb that pages are demoted to. (read-write)
>   demote - The number of huge pages to demote. (write-only)
>
> [...]
>
> NOTE: Demote interfaces are only provided for huge page sizes if there
> is a smaller target demote huge page size. For example, on x86 1GB huge
> pages will have demote interfaces. 2MB huge pages will not have demote
> interfaces.

Should we also check if the platform allows for
gigantic_page_runtime_supported() ?

-aneesh
On 23.09.21 19:53, Mike Kravetz wrote:
> Two new sysfs files are added to demote hugetlb pages. These files are
> both per-hugetlb page size and per node. Files are:
>   demote_size - The size in Kb that pages are demoted to. (read-write)
>   demote - The number of huge pages to demote. (write-only)
>
> [...]
>
> +	spin_lock_irq(&hugetlb_lock);
> +	if (nid != NUMA_NO_NODE) {
> +		nr_available = h->free_huge_pages_node[nid];
> +		init_nodemask_of_node(&nodes_allowed, nid);
> +		n_mask = &nodes_allowed;
> +	} else {
> +		nr_available = h->free_huge_pages;
> +		n_mask = &node_states[N_MEMORY];
> +	}
> +	nr_available -= h->resv_huge_pages;
> +	if (nr_available <= 0)
> +		goto out;
> +	nr_demote = min(nr_available, nr_demote);
> +
> +	while (nr_demote) {
> +		if (!demote_pool_huge_page(h, n_mask))
> +			break;
> +
> +		/*
> +		 * We may have dropped the lock in the routines to
> +		 * demote/free a page. Recompute nr_demote as counts could
> +		 * have changed and we want to make sure we do not demote
> +		 * a reserved huge page.
> +		 */
> +		nr_demote--;
> +		if (nid != NUMA_NO_NODE)
> +			nr_available = h->free_huge_pages_node[nid];
> +		else
> +			nr_available = h->free_huge_pages;
> +		nr_available -= h->resv_huge_pages;
> +		if (nr_available <= 0)
> +			nr_demote = 0;
> +		else
> +			nr_demote = min(nr_available, nr_demote);
> +	}

Wonder if you could compress that quite a bit:

	...
	spin_lock_irq(&hugetlb_lock);

	if (nid != NUMA_NO_NODE) {
		init_nodemask_of_node(&nodes_allowed, nid);
		n_mask = &nodes_allowed;
	} else {
		n_mask = &node_states[N_MEMORY];
	}

	while (nr_demote) {
		/*
		 * Update after each iteration because we might have
		 * temporarily dropped the lock and our counters changed.
		 */
		if (nid != NUMA_NO_NODE)
			nr_available = h->free_huge_pages_node[nid];
		else
			nr_available = h->free_huge_pages;
		nr_available -= h->resv_huge_pages;
		if (nr_available <= 0)
			break;
		if (!demote_pool_huge_page(h, n_mask))
			break;
		nr_demote--;
	}
	spin_unlock_irq(&hugetlb_lock);

Not sure if that "nr_demote = min(nr_available, nr_demote);" logic is
really required. Once nr_available hits <= 0 we'll just stop demoting.
On 9/24/21 12:08 AM, Aneesh Kumar K.V wrote:
> Mike Kravetz <mike.kravetz@oracle.com> writes:
>
>> [...]
>>
>> NOTE: Demote interfaces are only provided for huge page sizes if there
>> is a smaller target demote huge page size. For example, on x86 1GB huge
>> pages will have demote interfaces. 2MB huge pages will not have demote
>> interfaces.
>
> Should we also check if the platform allows for
> gigantic_page_runtime_supported() ?

Yes, thanks!

Looks like this may only be an issue for gigantic pages on Power managed
by firmware. Still, needs to be checked. Will update.

Thanks,
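[Editor's note: gigantic_page_runtime_supported() is the same helper
that __nr_hugepages_store_common() already uses to reject runtime pool
changes for gigantic pages. A minimal sketch of the check Aneesh
suggests follows; its placement and the -EINVAL error code are
assumptions, not taken from a posted version of the series.]

	/*
	 * Hypothetical guard near the top of demote_store(), before any
	 * locking.  Placement and error code are assumptions.
	 */
	h = kobj_to_hstate(kobj, &nid);
	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
		return -EINVAL;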
On 9/24/21 2:28 AM, David Hildenbrand wrote:
> On 23.09.21 19:53, Mike Kravetz wrote:
>
> [...]
>
> Wonder if you could compress that quite a bit:
>
> [...]
>
> Not sure if that "nr_demote = min(nr_available, nr_demote);" logic is
> really required. Once nr_available hits <= 0 we'll just stop demoting.

No, it is not needed. Your suggested code looks much nicer. I will
incorporate it into the next version.

Thanks!
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index 8abaeb144e44..0e123a347e1e 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -234,8 +234,12 @@ will exist, of the form::
 
 	hugepages-${size}kB
 
-Inside each of these directories, the same set of files will exist::
+Inside each of these directories, the set of files contained in ``/proc``
+will exist.  In addition, two additional interfaces for demoting huge
+pages may exist::
 
+	demote
+	demote_size
 	nr_hugepages
 	nr_hugepages_mempolicy
 	nr_overcommit_hugepages
@@ -243,7 +247,29 @@ Inside each of these directories, the same set of files will exist::
 	resv_hugepages
 	surplus_hugepages
 
-which function as described above for the default huge page-sized case.
+The demote interfaces provide the ability to split a huge page into
+smaller huge pages.  For example, the x86 architecture supports both
+1GB and 2MB huge page sizes.  A 1GB huge page can be split into 512
+2MB huge pages.  Demote interfaces are not available for the smallest
+huge page size.  The demote interfaces are:
+
+demote_size
+	is the size of demoted pages.  When a page is demoted, a corresponding
+	number of huge pages of demote_size will be created.  By default,
+	demote_size is set to the next smaller huge page size.  If there are
+	multiple smaller huge page sizes, demote_size can be set to any of
+	these smaller sizes.  Only huge page sizes less than the current huge
+	page size are allowed.
+
+demote
+	is used to demote a number of huge pages.  A user with root privileges
+	can write to this file.  It may not be possible to demote the
+	requested number of huge pages.  To determine how many pages were
+	actually demoted, compare the value of nr_hugepages before and after
+	writing to the demote interface.  demote is a write-only interface.
+
+The interfaces which are the same as in ``/proc`` (all except demote and
+demote_size) function as described above for the default huge page-sized case.
 
 .. _mem_policy_and_hp_alloc:
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 1faebe1cd0ed..f2c3979efd69 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -596,6 +596,7 @@ struct hstate {
 	int next_nid_to_alloc;
 	int next_nid_to_free;
 	unsigned int order;
+	unsigned int demote_order;
 	unsigned long mask;
 	unsigned long max_huge_pages;
 	unsigned long nr_huge_pages;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6378c1066459..c76ee0bd6374 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2986,7 +2986,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 
 static void __init hugetlb_init_hstates(void)
 {
-	struct hstate *h;
+	struct hstate *h, *h2;
 
 	for_each_hstate(h) {
 		if (minimum_order > huge_page_order(h))
@@ -2995,6 +2995,17 @@ static void __init hugetlb_init_hstates(void)
 		/* oversize hugepages were init'ed in early boot */
 		if (!hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
+
+		/*
+		 * Set demote order for each hstate.  Note that
+		 * h->demote_order is initially 0.
+		 */
+		for_each_hstate(h2) {
+			if (h2 == h)
+				continue;
+			if (h2->order < h->order && h2->order > h->demote_order)
+				h->demote_order = h2->order;
+		}
 	}
 	VM_BUG_ON(minimum_order == UINT_MAX);
 }
@@ -3235,9 +3246,29 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	return 0;
 }
 
+static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
+	__must_hold(&hugetlb_lock)
+{
+	int rc = 0;
+
+	lockdep_assert_held(&hugetlb_lock);
+
+	/* We should never get here if no demote order */
+	if (!h->demote_order)
+		return rc;
+
+	/*
+	 * TODO - demote functionality will be added in subsequent patch
+	 */
+	return rc;
+}
+
 #define HSTATE_ATTR_RO(_name) \
 	static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 
+#define HSTATE_ATTR_WO(_name) \
+	static struct kobj_attribute _name##_attr = __ATTR_WO(_name)
+
 #define HSTATE_ATTR(_name) \
 	static struct kobj_attribute _name##_attr = \
 		__ATTR(_name, 0644, _name##_show, _name##_store)
@@ -3433,6 +3464,112 @@ static ssize_t surplus_hugepages_show(struct kobject *kobj,
 }
 HSTATE_ATTR_RO(surplus_hugepages);
 
+static ssize_t demote_store(struct kobject *kobj,
+		struct kobj_attribute *attr, const char *buf, size_t len)
+{
+	unsigned long nr_demote;
+	unsigned long nr_available;
+	nodemask_t nodes_allowed, *n_mask;
+	struct hstate *h;
+	int err;
+	int nid;
+
+	err = kstrtoul(buf, 10, &nr_demote);
+	if (err)
+		return err;
+	h = kobj_to_hstate(kobj, &nid);
+
+	/* Synchronize with other sysfs operations modifying huge pages */
+	mutex_lock(&h->resize_lock);
+
+	spin_lock_irq(&hugetlb_lock);
+	if (nid != NUMA_NO_NODE) {
+		nr_available = h->free_huge_pages_node[nid];
+		init_nodemask_of_node(&nodes_allowed, nid);
+		n_mask = &nodes_allowed;
+	} else {
+		nr_available = h->free_huge_pages;
+		n_mask = &node_states[N_MEMORY];
+	}
+	nr_available -= h->resv_huge_pages;
+	if (nr_available <= 0)
+		goto out;
+	nr_demote = min(nr_available, nr_demote);
+
+	while (nr_demote) {
+		if (!demote_pool_huge_page(h, n_mask))
+			break;
+
+		/*
+		 * We may have dropped the lock in the routines to
+		 * demote/free a page. Recompute nr_demote as counts could
+		 * have changed and we want to make sure we do not demote
+		 * a reserved huge page.
+		 */
+		nr_demote--;
+		if (nid != NUMA_NO_NODE)
+			nr_available = h->free_huge_pages_node[nid];
+		else
+			nr_available = h->free_huge_pages;
+		nr_available -= h->resv_huge_pages;
+		if (nr_available <= 0)
+			nr_demote = 0;
+		else
+			nr_demote = min(nr_available, nr_demote);
+	}
+
+out:
+	spin_unlock_irq(&hugetlb_lock);
+	mutex_unlock(&h->resize_lock);
+
+	return len;
+}
+HSTATE_ATTR_WO(demote);
+
+static ssize_t demote_size_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	struct hstate *h;
+	unsigned long demote_size;
+	int nid;
+
+	h = kobj_to_hstate(kobj, &nid);
+	demote_size = h->demote_order;
+
+	return sysfs_emit(buf, "%lukB\n",
+			(unsigned long)(PAGE_SIZE << h->demote_order) / SZ_1K);
+}
+
+static ssize_t demote_size_store(struct kobject *kobj,
+					struct kobj_attribute *attr,
+					const char *buf, size_t count)
+{
+	struct hstate *h, *t_hstate;
+	unsigned long demote_size;
+	unsigned int demote_order;
+	int nid;
+
+	demote_size = (unsigned long)memparse(buf, NULL);
+
+	t_hstate = size_to_hstate(demote_size);
+	if (!t_hstate)
+		return -EINVAL;
+	demote_order = t_hstate->order;
+
+	/* demote order must be smaller than hstate order */
+	h = kobj_to_hstate(kobj, &nid);
+	if (demote_order >= h->order)
+		return -EINVAL;
+
+	/* resize_lock synchronizes access to demote size and writes */
+	mutex_lock(&h->resize_lock);
+	h->demote_order = demote_order;
+	mutex_unlock(&h->resize_lock);
+
+	return count;
+}
+HSTATE_ATTR(demote_size);
+
 static struct attribute *hstate_attrs[] = {
 	&nr_hugepages_attr.attr,
 	&nr_overcommit_hugepages_attr.attr,
@@ -3449,6 +3586,16 @@ static const struct attribute_group hstate_attr_group = {
 	.attrs = hstate_attrs,
 };
 
+static struct attribute *hstate_demote_attrs[] = {
+	&demote_size_attr.attr,
+	&demote_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group hstate_demote_attr_group = {
+	.attrs = hstate_demote_attrs,
+};
+
 static int hugetlb_sysfs_add_hstate(struct hstate *h, struct kobject *parent,
 				    struct kobject **hstate_kobjs,
 				    const struct attribute_group *hstate_attr_group)
@@ -3466,6 +3613,12 @@ static int hugetlb_sysfs_add_hstate(struct hstate *h, struct kobject *parent,
 		hstate_kobjs[hi] = NULL;
 	}
 
+	if (h->demote_order) {
+		if (sysfs_create_group(hstate_kobjs[hi],
+					&hstate_demote_attr_group))
+			pr_warn("HugeTLB unable to create demote interfaces for %s\n", h->name);
+	}
+
 	return retval;
 }
Two new sysfs files are added to demote hugetlb pages. These files are
both per-hugetlb page size and per node. Files are:
	demote_size - The size in Kb that pages are demoted to. (read-write)
	demote - The number of huge pages to demote. (write-only)

By default, demote_size is the next smallest huge page size. Valid huge
page sizes less than huge page size may be written to this file. When
huge pages are demoted, they are demoted to this size.

Writing a value to demote will result in an attempt to demote that
number of hugetlb pages to an appropriate number of demote_size pages.

NOTE: Demote interfaces are only provided for huge page sizes if there
is a smaller target demote huge page size. For example, on x86 1GB huge
pages will have demote interfaces. 2MB huge pages will not have demote
interfaces.

This patch does not provide full demote functionality. It only provides
the sysfs interfaces.

It also provides documentation for the new interfaces.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 Documentation/admin-guide/mm/hugetlbpage.rst |  30 +++-
 include/linux/hugetlb.h                      |   1 +
 mm/hugetlb.c                                 | 155 ++++++++++++++++++-
 3 files changed, 183 insertions(+), 3 deletions(-)
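[Editor's note: to make the interface concrete, here is a hypothetical
userspace demo in C that demotes one 1GB huge page into 2MB pages via
the per-hstate sysfs files described above. It assumes an x86 machine
with both sizes configured, the standard /sys/kernel/mm/hugepages
layout, and root privileges; it is a sketch, not part of the patch.]

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	/* Write a string to a sysfs file, reporting errors by path. */
	static int write_str(const char *path, const char *val)
	{
		int fd = open(path, O_WRONLY);
		ssize_t ret;

		if (fd < 0) {
			perror(path);
			return -1;
		}
		ret = write(fd, val, strlen(val));
		close(fd);
		if (ret < 0) {
			perror(path);
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		const char *dir = "/sys/kernel/mm/hugepages/hugepages-1048576kB";
		char path[256];

		/* Select the target size; 2048kB is the x86 default anyway. */
		snprintf(path, sizeof(path), "%s/demote_size", dir);
		if (write_str(path, "2048kB"))
			return 1;

		/* Ask for one 1GB page to be demoted into 512 2MB pages. */
		snprintf(path, sizeof(path), "%s/demote", dir);
		return write_str(path, "1") ? 1 : 0;
	}

As the documentation hunk notes, the number of pages actually demoted
should be confirmed by comparing nr_hugepages before and after the
write, since demote is write-only and the request may be partially
satisfied.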