From patchwork Mon Jun 29 06:45:07 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anshuman Khandual <anshuman.khandual@arm.com>
X-Patchwork-Id: 11630503
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Cc:
	robin.murphy@arm.com, Anshuman Khandual, Catalin Marinas,
	Will Deacon, Mark Rutland, Mike Kravetz, Barry Song,
	Andrew Morton, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH V2] arm64/hugetlb: Reserve CMA areas for gigantic pages on
	16K and 64K configs
Date: Mon, 29 Jun 2020 12:15:07 +0530
Message-Id: <1593413107-12779-1-git-send-email-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.7.4

Currently, the 'hugetlb_cma=' command line argument does not create a CMA
area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it
just ends up with the following warning message, because
hugetlb_cma_reserve() never gets called for these huge page sizes.

[   64.255669] hugetlb_cma: the option isn't supported by current arch

This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES
configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped
in CONFIG_CMA.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: Mike Kravetz
Cc: Barry Song
Cc: Andrew Morton
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reported-by: kernel test robot
---
Applies on 5.8-rc3.

Changes in V2:

- Moved arm64_hugetlb_cma_reserve() stub and declaration near call site

Changes in V1: (https://patchwork.kernel.org/patch/11619839/)

 arch/arm64/mm/hugetlbpage.c | 38 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/init.c        | 12 +++++++++---
 2 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0a52ce46f020..ea7fb48b8617 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -19,6 +19,44 @@
 #include
 #include
 
+/*
+ * HugeTLB Support Matrix
+ *
+ * ---------------------------------------------------
+ * | Page Size | CONT PTE |  PMD  | CONT PMD |  PUD  |
+ * ---------------------------------------------------
+ * |     4K    |    64K   |   2M  |    32M   |   1G  |
+ * |    16K    |     2M   |  32M  |     1G   |       |
+ * |    64K    |     2M   | 512M  |    16G   |       |
+ * ---------------------------------------------------
+ */
+
+/*
+ * Reserve CMA areas for the largest supported gigantic
+ * huge page when requested. Any other smaller gigantic
+ * huge pages could still be served from those areas.
+ */
+#ifdef CONFIG_CMA
+void __init arm64_hugetlb_cma_reserve(void)
+{
+	int order;
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	order = PUD_SHIFT - PAGE_SHIFT;
+#else
+	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+#endif
+	/*
+	 * HugeTLB CMA reservation is required for gigantic
+	 * huge pages which could not be allocated via the
+	 * page allocator. Just warn if there is any change
+	 * breaking this assumption.
+	 */
+	WARN_ON(order <= MAX_ORDER);
+	hugetlb_cma_reserve(order);
+}
+#endif /* CONFIG_CMA */
+
 #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
 bool arch_hugetlb_migration_supported(struct hstate *h)
 {
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..8a260ef0cb94 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -406,6 +406,14 @@ void __init arm64_memblock_init(void)
 	dma_contiguous_reserve(arm64_dma32_phys_limit);
 }
 
+#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
+void arm64_hugetlb_cma_reserve(void);
+#else
+static inline void arm64_hugetlb_cma_reserve(void)
+{
+}
+#endif
+
 void __init bootmem_init(void)
 {
 	unsigned long min, max;
@@ -425,9 +433,7 @@ void __init bootmem_init(void)
 	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
 	 * while allocating required CMA size across online nodes.
 	 */
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
+	arm64_hugetlb_cma_reserve();
 
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
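
As a quick cross-check of the order arithmetic above, here is a minimal
standalone sketch that recomputes the reserved gigantic page order for each
page size config. The shift values are hardcoded assumptions matching the
5.8-era arm64 defaults rather than values taken from the kernel headers, so
treat it as illustrative only:

#include <stdio.h>

/*
 * Assumed 5.8-era arm64 shift values, hardcoded for illustration:
 *
 *               PAGE_SHIFT  PMD_SHIFT  PUD_SHIFT  CONT_PMD_SHIFT
 *   4K  pages       12         21         30            4
 *   16K pages       14         25         --            5
 *   64K pages       16         29         --            5
 */
struct cfg {
	const char *name;
	int page_shift, pmd_shift, pud_shift, cont_pmd_shift;
};

int main(void)
{
	struct cfg cfgs[] = {
		{ "4K",  12, 21, 30, 4 },
		{ "16K", 14, 25, -1, 5 },	/* no PUD huge page */
		{ "64K", 16, 29, -1, 5 },	/* no PUD huge page */
	};

	for (int i = 0; i < 3; i++) {
		struct cfg *c = &cfgs[i];
		/* 4K uses the PUD size; 16K/64K use the CONT PMD size */
		int order = (c->pud_shift > 0) ?
			c->pud_shift - c->page_shift :
			c->cont_pmd_shift + c->pmd_shift - c->page_shift;
		unsigned long long bytes = (1ULL << order) << c->page_shift;

		printf("%-4s pages: order %2d -> %lluM gigantic page\n",
		       c->name, order, bytes >> 20);
	}
	return 0;
}

Built with any plain C compiler, this should report order 18 (1G) for 4K,
order 16 (1G) for 16K and order 18 (16G) for 64K pages, matching the
largest entries in the support matrix comment added by the patch.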