Message ID | 20200407010431.1286488-3-guro@fb.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm: using CMA for 1 GB hugepages allocation |

On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
[...]
My ack still applies but I have only noticed two minor things now.

[...]
> @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
>  	set_page_refcounted(page);
>  	if (hstate_is_gigantic(h)) {
> +		/*
> +		 * Temporarily drop the hugetlb_lock, because
> +		 * we might block in free_gigantic_page().
> +		 */
> +		spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> +		spin_lock(&hugetlb_lock);

This is OK with the current code because existing paths do not have to
revalidate the state AFAICS but it is a bit subtle. I have checked the
cma_free path and it can only sleep on the cma->lock unless I am missing
something. This lock is only used for cma bitmap manipulation and the
mutex sounds like an overkill there and it can be replaced by a
spinlock.

Sounds like a follow up patch material to me.

[...]
> +	for_each_node_state(nid, N_ONLINE) {
> +		int res;
> +
> +		size = min(per_node, hugetlb_cma_size - reserved);
> +		size = round_up(size, PAGE_SIZE << order);
> +
> +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> +						 0, false, "hugetlb",
> +						 &hugetlb_cma[nid], nid);
> +		if (res) {
> +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> +				res, nid);
> +			break;

Do we really have to break out after a single node failure? There might
be other nodes that can satisfy the allocation. You are not cleaning up
previous allocations so there is a partial state and then it would make
more sense to me to simply s@break@continue@ here.

> +		}
> +
> +		reserved += size;
> +		pr_info("hugetlb_cma: reserved %lu MiB on node %d\n",
> +			size / SZ_1M, nid);
> +
> +		if (reserved >= hugetlb_cma_size)
> +			break;
> +	}
> +}

On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> [...]
> My ack still applies but I have only noticed two minor things now.

Hello, Michal!

>
> [...]
> > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> >  	set_page_refcounted(page);
> >  	if (hstate_is_gigantic(h)) {
> > +		/*
> > +		 * Temporarily drop the hugetlb_lock, because
> > +		 * we might block in free_gigantic_page().
> > +		 */
> > +		spin_unlock(&hugetlb_lock);
> >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> >  		free_gigantic_page(page, huge_page_order(h));
> > +		spin_lock(&hugetlb_lock);
>
> This is OK with the current code because existing paths do not have to
> revalidate the state AFAICS but it is a bit subtle. I have checked the
> cma_free path and it can only sleep on the cma->lock unless I am missing
> something. This lock is only used for cma bitmap manipulation and the
> mutex sounds like an overkill there and it can be replaced by a
> spinlock.
>
> Sounds like a follow up patch material to me.

I had the same idea and even posted a patch:
https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0

However, Joonsoo pointed out that in some cases the bitmap operation might
be too long for a spinlock.

Alternatively, we can implement an asynchronous delayed release on the cma side,
I just don't know if it's worth it (I mean adding code/complexity).

>
> [...]
> > +	for_each_node_state(nid, N_ONLINE) {
> > +		int res;
> > +
> > +		size = min(per_node, hugetlb_cma_size - reserved);
> > +		size = round_up(size, PAGE_SIZE << order);
> > +
> > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > +						 0, false, "hugetlb",
> > +						 &hugetlb_cma[nid], nid);
> > +		if (res) {
> > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > +				res, nid);
> > +			break;
>
> Do we really have to break out after a single node failure? There might
> be other nodes that can satisfy the allocation. You are not cleaning up
> previous allocations so there is a partial state and then it would make
> more sense to me to simply s@break@continue@ here.

But then we should iterate over all nodes in alloc_gigantic_page()?
Currently if hugetlb_cma[0] is NULL it will immediately switch back
to the fallback approach.

Actually, Idk how realistic are use cases with complex node configuration,
so that hugetlb_cma areas can be allocated only on some of them.
I'd leave it up to the moment when we'll have a real world example.
Then we probably want something more sophisticated anyway...

I have no strong opinion here, so if you really think we should
s/break/continue, I'm fine with it too.

Thanks!
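
The follow-up Michal suggests above would live in mm/cma.c, where cma->lock
only protects the allocation bitmap. Below is a minimal sketch of the idea,
assuming the mm/cma.c helpers and struct cma fields of that era
(cma_clear_bitmap(), cma_bitmap_pages_to_bits(), cma->base_pfn,
cma->order_per_bit) and with the lock assumed converted from a mutex to a
spinlock; Joonsoo's concern applies to the bitmap_clear() call, which can
cover a large range.

/*
 * Illustrative sketch only, not the patch linked above: with a spinlock
 * instead of a mutex in struct cma, cma_release() -> cma_clear_bitmap()
 * no longer sleeps, so update_and_free_page() would not have to drop
 * hugetlb_lock around free_gigantic_page().
 */
static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
			     unsigned int count)
{
	unsigned long bitmap_no, bitmap_count;

	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
	bitmap_count = cma_bitmap_pages_to_bits(cma, count);

	spin_lock(&cma->lock);		/* was: mutex_lock(&cma->lock) */
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	spin_unlock(&cma->lock);	/* was: mutex_unlock(&cma->lock) */
}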

On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > [...]
> > My ack still applies but I have only noticed two minor things now.
>
> Hello, Michal!
>
> >
> > [...]
> > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > >  	set_page_refcounted(page);
> > >  	if (hstate_is_gigantic(h)) {
> > > +		/*
> > > +		 * Temporarily drop the hugetlb_lock, because
> > > +		 * we might block in free_gigantic_page().
> > > +		 */
> > > +		spin_unlock(&hugetlb_lock);
> > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > >  		free_gigantic_page(page, huge_page_order(h));
> > > +		spin_lock(&hugetlb_lock);
> >
> > This is OK with the current code because existing paths do not have to
> > revalidate the state AFAICS but it is a bit subtle. I have checked the
> > cma_free path and it can only sleep on the cma->lock unless I am missing
> > something. This lock is only used for cma bitmap manipulation and the
> > mutex sounds like an overkill there and it can be replaced by a
> > spinlock.
> >
> > Sounds like a follow up patch material to me.
>
> I had the same idea and even posted a patch:
> https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
>
> However, Joonsoo pointed out that in some cases the bitmap operation might
> be too long for a spinlock.

I was not aware of this email thread. I will have a look. Thanks!

> Alternatively, we can implement an asynchronous delayed release on the cma side,
> I just don't know if it's worth it (I mean adding code/complexity).
>
> >
> > [...]
> > > +	for_each_node_state(nid, N_ONLINE) {
> > > +		int res;
> > > +
> > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > +		size = round_up(size, PAGE_SIZE << order);
> > > +
> > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > +						 0, false, "hugetlb",
> > > +						 &hugetlb_cma[nid], nid);
> > > +		if (res) {
> > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > +				res, nid);
> > > +			break;
> >
> > Do we really have to break out after a single node failure? There might
> > be other nodes that can satisfy the allocation. You are not cleaning up
> > previous allocations so there is a partial state and then it would make
> > more sense to me to simply s@break@continue@ here.
>
> But then we should iterate over all nodes in alloc_gigantic_page()?

OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
there as well. I do not think this makes much sense. Just consider a
setup with one node much smaller than others (not unseen on LPAR
configurations) and then you are potentially using CMA areas on some
nodes without a good reason.

> Currently if hugetlb_cma[0] is NULL it will immediately switch back
> to the fallback approach.
>
> Actually, Idk how realistic are use cases with complex node configuration,
> so that hugetlb_cma areas can be allocated only on some of them.
> I'd leave it up to the moment when we'll have a real world example.
> Then we probably want something more sophisticated anyway...

I do not follow. Isn't the s@break@continue@ in this and
alloc_gigantic_page path enough to make it work?

On Tue, Apr 07, 2020 at 05:40:05PM +0200, Michal Hocko wrote:
> On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> > On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > > [...]
> > > My ack still applies but I have only noticed two minor things now.
> >
> > Hello, Michal!
> >
> > >
> > > [...]
> > > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > > >  	set_page_refcounted(page);
> > > >  	if (hstate_is_gigantic(h)) {
> > > > +		/*
> > > > +		 * Temporarily drop the hugetlb_lock, because
> > > > +		 * we might block in free_gigantic_page().
> > > > +		 */
> > > > +		spin_unlock(&hugetlb_lock);
> > > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > > >  		free_gigantic_page(page, huge_page_order(h));
> > > > +		spin_lock(&hugetlb_lock);
> > >
> > > This is OK with the current code because existing paths do not have to
> > > revalidate the state AFAICS but it is a bit subtle. I have checked the
> > > cma_free path and it can only sleep on the cma->lock unless I am missing
> > > something. This lock is only used for cma bitmap manipulation and the
> > > mutex sounds like an overkill there and it can be replaced by a
> > > spinlock.
> > >
> > > Sounds like a follow up patch material to me.
> >
> > I had the same idea and even posted a patch:
> > https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
> >
> > However, Joonsoo pointed out that in some cases the bitmap operation might
> > be too long for a spinlock.
>
> I was not aware of this email thread. I will have a look. Thanks!
>
> > Alternatively, we can implement an asynchronous delayed release on the cma side,
> > I just don't know if it's worth it (I mean adding code/complexity).
> >
> > >
> > > [...]
> > > > +	for_each_node_state(nid, N_ONLINE) {
> > > > +		int res;
> > > > +
> > > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > > +		size = round_up(size, PAGE_SIZE << order);
> > > > +
> > > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > > +						 0, false, "hugetlb",
> > > > +						 &hugetlb_cma[nid], nid);
> > > > +		if (res) {
> > > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > > +				res, nid);
> > > > +			break;
> > >
> > > Do we really have to break out after a single node failure? There might
> > > be other nodes that can satisfy the allocation. You are not cleaning up
> > > previous allocations so there is a partial state and then it would make
> > > more sense to me to simply s@break@continue@ here.
> >
> > But then we should iterate over all nodes in alloc_gigantic_page()?
>
> OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
> there as well. I do not think this makes much sense. Just consider a
> setup with one node much smaller than others (not unseen on LPAR
> configurations) and then you are potentially using CMA areas on some
> nodes without a good reason.
>
> > Currently if hugetlb_cma[0] is NULL it will immediately switch back
> > to the fallback approach.
> >
> > Actually, Idk how realistic are use cases with complex node configuration,
> > so that hugetlb_cma areas can be allocated only on some of them.
> > I'd leave it up to the moment when we'll have a real world example.
> > Then we probably want something more sophisticated anyway...
>
> I do not follow. Isn't the s@break@continue@ in this and
> alloc_gigantic_page path enough to make it work?

Well, of course it will. But for a highly asymmetrical configuration
there is probably not much sense to try to allocate cma areas of a similar
size on each node and rely on allocation failures on some of them.

But, again, if you strictly prefer s/break/continue, I can send a v5.
Just let me know.

Thanks!

On Tue 07-04-20 09:06:40, Roman Gushchin wrote:
> On Tue, Apr 07, 2020 at 05:40:05PM +0200, Michal Hocko wrote:
> > On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> > > On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > > > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > > > [...]
> > > > My ack still applies but I have only noticed two minor things now.
> > >
> > > Hello, Michal!
> > >
> > > >
> > > > [...]
> > > > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > > > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > > > >  	set_page_refcounted(page);
> > > > >  	if (hstate_is_gigantic(h)) {
> > > > > +		/*
> > > > > +		 * Temporarily drop the hugetlb_lock, because
> > > > > +		 * we might block in free_gigantic_page().
> > > > > +		 */
> > > > > +		spin_unlock(&hugetlb_lock);
> > > > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > > > >  		free_gigantic_page(page, huge_page_order(h));
> > > > > +		spin_lock(&hugetlb_lock);
> > > >
> > > > This is OK with the current code because existing paths do not have to
> > > > revalidate the state AFAICS but it is a bit subtle. I have checked the
> > > > cma_free path and it can only sleep on the cma->lock unless I am missing
> > > > something. This lock is only used for cma bitmap manipulation and the
> > > > mutex sounds like an overkill there and it can be replaced by a
> > > > spinlock.
> > > >
> > > > Sounds like a follow up patch material to me.
> > >
> > > I had the same idea and even posted a patch:
> > > https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
> > >
> > > However, Joonsoo pointed out that in some cases the bitmap operation might
> > > be too long for a spinlock.
> >
> > I was not aware of this email thread. I will have a look. Thanks!
> >
> > > Alternatively, we can implement an asynchronous delayed release on the cma side,
> > > I just don't know if it's worth it (I mean adding code/complexity).
> > >
> > > >
> > > > [...]
> > > > > +	for_each_node_state(nid, N_ONLINE) {
> > > > > +		int res;
> > > > > +
> > > > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > > > +		size = round_up(size, PAGE_SIZE << order);
> > > > > +
> > > > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > > > +						 0, false, "hugetlb",
> > > > > +						 &hugetlb_cma[nid], nid);
> > > > > +		if (res) {
> > > > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > > > +				res, nid);
> > > > > +			break;
> > > >
> > > > Do we really have to break out after a single node failure? There might
> > > > be other nodes that can satisfy the allocation. You are not cleaning up
> > > > previous allocations so there is a partial state and then it would make
> > > > more sense to me to simply s@break@continue@ here.
> > >
> > > But then we should iterate over all nodes in alloc_gigantic_page()?
> >
> > OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
> > there as well. I do not think this makes much sense. Just consider a
> > setup with one node much smaller than others (not unseen on LPAR
> > configurations) and then you are potentially using CMA areas on some
> > nodes without a good reason.
> >
> > > Currently if hugetlb_cma[0] is NULL it will immediately switch back
> > > to the fallback approach.
> > >
> > > Actually, Idk how realistic are use cases with complex node configuration,
> > > so that hugetlb_cma areas can be allocated only on some of them.
> > > I'd leave it up to the moment when we'll have a real world example.
> > > Then we probably want something more sophisticated anyway...
> >
> > I do not follow. Isn't the s@break@continue@ in this and
> > alloc_gigantic_page path enough to make it work?
>
> Well, of course it will. But for a highly asymmetrical configuration
> there is probably not much sense to try to allocate cma areas of a similar
> size on each node and rely on allocation failures on some of them.
>
> But, again, if you strictly prefer s/break/continue, I can send a v5.
> Just let me know.

There is no real reason to have such a restriction. I can follow up
with a separate patch if you want me to but it should be "fixed".

Thanks
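
Concretely, the change under discussion amounts to a two-line substitution
in the patch below. A sketch of such a follow-up (not a posted patch), with
the surrounding code taken from the hunks shown in the patch:

	/* in alloc_gigantic_page(): try every node that has a CMA area */
	for_each_node_mask(node, *nodemask) {
		if (!hugetlb_cma[node])
			continue;	/* was: break */

		page = cma_alloc(hugetlb_cma[node], nr_pages,
				 huge_page_order(h), true);
		if (page)
			return page;
	}

	/* in hugetlb_cma_reserve(): a failing node no longer stops the loop */
	if (res) {
		pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
			res, nid);
		continue;	/* was: break */
	}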

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 4d5a4fe22703..59cca49a5249 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1473,6 +1473,14 @@
 	hpet_mmap=	[X86, HPET_MMAP] Allow userspace to mmap HPET
 			registers.  Default set by CONFIG_HPET_MMAP_DEFAULT.
 
+	hugetlb_cma=	[HW] The size of a cma area used for allocation
+			of gigantic hugepages.
+			Format: nn[KMGTPE]
+
+			Reserve a cma area of given size and allocate gigantic
+			hugepages using the cma allocator. If enabled, the
+			boot-time allocation of gigantic hugepages is skipped.
+
 	hugepages=	[HW,X86-32,IA-64] HugeTLB pages to allocate at boot.
 	hugepagesz=	[HW,IA-64,PPC,X86-64] The size of the HugeTLB pages.
 			On x86-64 and powerpc, this option can be specified
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b65dffdfb201..e42727e3568e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -29,6 +29,7 @@
 #include <linux/mm.h>
 #include <linux/kexec.h>
 #include <linux/crash_dump.h>
+#include <linux/hugetlb.h>
 
 #include <asm/boot.h>
 #include <asm/fixmap.h>
@@ -457,6 +458,11 @@ void __init arm64_memblock_init(void)
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
 	dma_contiguous_reserve(arm64_dma32_phys_limit);
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+#endif
+
 }
 
 void __init bootmem_init(void)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index e6b545047f38..4b3fa6cd3106 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -16,6 +16,7 @@
 #include <linux/pci.h>
 #include <linux/root_dev.h>
 #include <linux/sfi.h>
+#include <linux/hugetlb.h>
 #include <linux/tboot.h>
 #include <linux/usb/xhci-dbgp.h>
 
@@ -1157,6 +1158,9 @@ void __init setup_arch(char **cmdline_p)
 	initmem_init();
 	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
 
+	if (boot_cpu_has(X86_FEATURE_GBPAGES))
+		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+
 	/*
 	 * Reserve memory for crash kernel after SRAT is parsed so that it
 	 * won't consume hotpluggable memory.
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5ea05879a0a9..43a1cef8f0f1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -895,4 +895,16 @@ static inline spinlock_t *huge_pte_lock(struct hstate *h,
 	return ptl;
 }
 
+#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
+extern void __init hugetlb_cma_reserve(int order);
+extern void __init hugetlb_cma_check(void);
+#else
+static inline __init void hugetlb_cma_reserve(int order)
+{
+}
+static inline __init void hugetlb_cma_check(void)
+{
+}
+#endif
+
 #endif /* _LINUX_HUGETLB_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f9ea1e5197b4..b2c56c3c3e67 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -28,6 +28,7 @@
 #include <linux/jhash.h>
 #include <linux/numa.h>
 #include <linux/llist.h>
+#include <linux/cma.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -44,6 +45,9 @@
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
+
+static struct cma *hugetlb_cma[MAX_NUMNODES];
+
 /*
  * Minimum page order among possible hugepage sizes, set to a proper value
  * at boot time.
@@ -1228,6 +1232,14 @@ static void destroy_compound_gigantic_page(struct page *page,
 
 static void free_gigantic_page(struct page *page, unsigned int order)
 {
+	/*
+	 * If the page isn't allocated using the cma allocator,
+	 * cma_release() returns false.
+	 */
+	if (IS_ENABLED(CONFIG_CMA) &&
+	    cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+		return;
+
 	free_contig_range(page_to_pfn(page), 1 << order);
 }
 
@@ -1237,6 +1249,21 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
 
+	if (IS_ENABLED(CONFIG_CMA)) {
+		struct page *page;
+		int node;
+
+		for_each_node_mask(node, *nodemask) {
+			if (!hugetlb_cma[node])
+				break;
+
+			page = cma_alloc(hugetlb_cma[node], nr_pages,
+					 huge_page_order(h), true);
+			if (page)
+				return page;
+		}
+	}
+
 	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
 }
 
@@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
+		/*
+		 * Temporarily drop the hugetlb_lock, because
+		 * we might block in free_gigantic_page().
+		 */
+		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
+		spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
@@ -2538,6 +2571,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 
 	for (i = 0; i < h->max_huge_pages; ++i) {
 		if (hstate_is_gigantic(h)) {
+			if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) {
+				pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
+				break;
+			}
 			if (!alloc_bootmem_huge_page(h))
 				break;
 		} else if (!alloc_pool_huge_page(h,
@@ -3193,6 +3230,7 @@ static int __init hugetlb_init(void)
 		default_hstate.max_huge_pages = default_hstate_max_huge_pages;
 	}
 
+	hugetlb_cma_check();
 	hugetlb_init_hstates();
 	gather_bootmem_prealloc();
 	report_hugepages();
@@ -5505,3 +5543,74 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 		spin_unlock(&hugetlb_lock);
 	}
 }
+
+#ifdef CONFIG_CMA
+static unsigned long hugetlb_cma_size __initdata;
+static bool cma_reserve_called __initdata;
+
+static int __init cmdline_parse_hugetlb_cma(char *p)
+{
+	hugetlb_cma_size = memparse(p, &p);
+	return 0;
+}
+
+early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
+
+void __init hugetlb_cma_reserve(int order)
+{
+	unsigned long size, reserved, per_node;
+	int nid;
+
+	cma_reserve_called = true;
+
+	if (!hugetlb_cma_size)
+		return;
+
+	if (hugetlb_cma_size < (PAGE_SIZE << order)) {
+		pr_warn("hugetlb_cma: cma area should be at least %lu MiB\n",
+			(PAGE_SIZE << order) / SZ_1M);
+		return;
+	}
+
+	/*
+	 * If 3 GB area is requested on a machine with 4 numa nodes,
+	 * let's allocate 1 GB on first three nodes and ignore the last one.
+	 */
+	per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
+	pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
+		hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
+
+	reserved = 0;
+	for_each_node_state(nid, N_ONLINE) {
+		int res;
+
+		size = min(per_node, hugetlb_cma_size - reserved);
+		size = round_up(size, PAGE_SIZE << order);
+
+		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
+						 0, false, "hugetlb",
+						 &hugetlb_cma[nid], nid);
+		if (res) {
+			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
+				res, nid);
+			break;
+		}
+
+		reserved += size;
+		pr_info("hugetlb_cma: reserved %lu MiB on node %d\n",
+			size / SZ_1M, nid);
+
+		if (reserved >= hugetlb_cma_size)
+			break;
+	}
+}
+
+void __init hugetlb_cma_check(void)
+{
+	if (!hugetlb_cma_size || cma_reserve_called)
+		return;
+
+	pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
+}
+
+#endif /* CONFIG_CMA */
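
As a usage sketch (not part of the patch itself): the parameter below comes
from the documentation hunk above, while the reservation size and the 1 GB
sysfs path are example values assuming an x86-64 machine with gigantic page
support.

# Boot with e.g. "hugetlb_cma=4G" on the kernel command line to reserve
# 4 GB of CMA, spread across the online NUMA nodes, for gigantic pages.
#
# At runtime, grow the 1 GB hugepage pool; the pages are then taken from
# the reserved CMA areas by alloc_gigantic_page():
echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages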