From patchwork Wed Jun 3 23:00:38 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11586469
Date: Wed, 03 Jun 2020 16:00:38 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, almasrymina@google.com, anders.roxell@linaro.org,
 aneesh.kumar@linux.ibm.com, aou@eecs.berkeley.edu, benh@kernel.crashing.org,
 borntraeger@de.ibm.com, cai@lca.pw, catalin.marinas@arm.com,
 christophe.leroy@c-s.fr, corbet@lwn.net, dave.hansen@linux.intel.com,
 davem@davemloft.net, gerald.schaefer@de.ibm.com, gor@linux.ibm.com,
 heiko.carstens@de.ibm.com, linux-mm@kvack.org, longpeng2@huawei.com,
 mike.kravetz@oracle.com, mingo@redhat.com, mm-commits@vger.kernel.org,
 nitesh@redhat.com, palmer@dabbelt.com, paul.walmsley@sifive.com,
 paulus@samba.org, peterx@redhat.com, rdunlap@infradead.org,
 sandipan@linux.ibm.com, sfr@canb.auug.org.au, tglx@linutronix.de,
 torvalds@linux-foundation.org, will@kernel.org
Subject: [patch 070/131] hugetlbfs: move hugepagesz= parsing to arch independent code
Message-ID: <20200603230038.-AcTE5eTv%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Mike Kravetz
Subject: hugetlbfs: move hugepagesz= parsing to arch independent code

Now that architectures provide arch_hugetlb_valid_size(), parsing of
"hugepagesz=" can be done in architecture independent code.  Create a
single routine to handle hugepagesz= parsing and remove all arch specific
routines.  We can also remove the interface hugetlb_bad_size(), as it is
no longer used outside arch independent code.

This also provides consistent behavior of hugetlbfs command line options.
The hugepagesz= option should only be specified once for a specific size,
but some architectures allow multiple instances.  This appears to be more
of an oversight when code was added by some architectures to set up ALL
huge page sizes.

Link: http://lkml.kernel.org/r/20200417185049.275845-3-mike.kravetz@oracle.com
Link: http://lkml.kernel.org/r/20200428205614.246260-3-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Reviewed-by: Peter Xu
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
Tested-by: Sandipan Das
Cc: Albert Ou
Cc: Benjamin Herrenschmidt
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Christophe Leroy
Cc: Dave Hansen
Cc: David S. Miller
Cc: Heiko Carstens
Cc: Ingo Molnar
Cc: Jonathan Corbet
Cc: Longpeng
Cc: Nitesh Narayan Lal
Cc: Palmer Dabbelt
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Randy Dunlap
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Anders Roxell
Cc: "Aneesh Kumar K.V"
Cc: Qian Cai
Cc: Stephen Rothwell
Signed-off-by: Andrew Morton
---
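(Usage sketch, not part of the change itself: with the consolidated parser, a
boot command line such as

    hugepagesz=2M hugepages=512 hugepagesz=1G hugepages=4

has each hugepagesz= instance validated by the generic hugepagesz_setup()
added below, while the following hugepages= count is still handled by
hugetlb_nrpages_setup().  memparse() accepts the usual K/M/G suffixes, and a
size rejected by arch_hugetlb_valid_size() now fails with the same
"HugeTLB: unsupported hugepagesz" error on every architecture.  The sizes
above are only examples; which sizes are valid depends on the architecture
and kernel configuration.)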
 arch/arm64/mm/hugetlbpage.c   |   15 ---------------
 arch/powerpc/mm/hugetlbpage.c |   15 ---------------
 arch/riscv/mm/hugetlbpage.c   |   16 ----------------
 arch/s390/mm/hugetlbpage.c    |   18 ------------------
 arch/sparc/mm/init_64.c       |   22 ----------------------
 arch/x86/mm/hugetlbpage.c     |   16 ----------------
 include/linux/hugetlb.h       |    1 -
 mm/hugetlb.c                  |   23 +++++++++++++++++------
 8 files changed, 17 insertions(+), 109 deletions(-)

--- a/arch/arm64/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/arch/arm64/mm/hugetlbpage.c
@@ -478,18 +478,3 @@ bool __init arch_hugetlb_valid_size(unsi
 
         return false;
 }
-
-static __init int setup_hugepagesz(char *opt)
-{
-        unsigned long ps = memparse(opt, &opt);
-
-        if (arch_hugetlb_valid_size(ps)) {
-                add_huge_page_size(ps);
-                return 1;
-        }
-
-        hugetlb_bad_size();
-        pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
-        return 0;
-}
-__setup("hugepagesz=", setup_hugepagesz);
--- a/arch/powerpc/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/arch/powerpc/mm/hugetlbpage.c
@@ -589,21 +589,6 @@ static int __init add_huge_page_size(uns
         return 0;
 }
 
-static int __init hugepage_setup_sz(char *str)
-{
-        unsigned long long size;
-
-        size = memparse(str, &str);
-
-        if (add_huge_page_size(size) != 0) {
-                hugetlb_bad_size();
-                pr_err("Invalid huge page size specified(%llu)\n", size);
-        }
-
-        return 1;
-}
-__setup("hugepagesz=", hugepage_setup_sz);
-
 static int __init hugetlbpage_init(void)
 {
         bool configured = false;
--- a/arch/riscv/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/arch/riscv/mm/hugetlbpage.c
@@ -22,22 +22,6 @@ bool __init arch_hugetlb_valid_size(unsi
         return false;
 }
 
-static __init int setup_hugepagesz(char *opt)
-{
-        unsigned long ps = memparse(opt, &opt);
-
-        if (arch_hugetlb_valid_size(ps)) {
-                hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
-                return 1;
-        }
-
-        hugetlb_bad_size();
-        pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20);
-        return 0;
-
-}
-__setup("hugepagesz=", setup_hugepagesz);
-
 #ifdef CONFIG_CONTIG_ALLOC
 static __init int gigantic_pages_init(void)
 {
--- a/arch/s390/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/arch/s390/mm/hugetlbpage.c
@@ -264,24 +264,6 @@ bool __init arch_hugetlb_valid_size(unsi
         return false;
 }
 
-static __init int setup_hugepagesz(char *opt)
-{
-        unsigned long size;
-        char *string = opt;
-
-        size = memparse(opt, &opt);
-        if (arch_hugetlb_valid_size(size)) {
-                hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
-        } else {
-                hugetlb_bad_size();
-                pr_err("hugepagesz= specifies an unsupported page size %s\n",
-                        string);
-                return 0;
-        }
-        return 1;
-}
-__setup("hugepagesz=", setup_hugepagesz);
-
 static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
                 unsigned long addr, unsigned long len,
                 unsigned long pgoff, unsigned long flags)
--- a/arch/sparc/mm/init_64.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/arch/sparc/mm/init_64.c
@@ -397,28 +397,6 @@ bool __init arch_hugetlb_valid_size(unsi
 
         return true;
 }
-
-static int __init setup_hugepagesz(char *string)
-{
-        unsigned long long hugepage_size;
-        int rc = 0;
-
-        hugepage_size = memparse(string, &string);
-
-        if (!arch_hugetlb_valid_size((unsigned long)hugepage_size)) {
-                hugetlb_bad_size();
-                pr_err("hugepagesz=%llu not supported by MMU.\n",
-                        hugepage_size);
-                goto out;
-        }
-
-        add_huge_page_size(hugepage_size);
-        rc = 1;
-
-out:
-        return rc;
-}
-__setup("hugepagesz=", setup_hugepagesz);
 #endif /* CONFIG_HUGETLB_PAGE */
 
 void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
--- a/arch/x86/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/arch/x86/mm/hugetlbpage.c
@@ -191,22 +191,6 @@ bool __init arch_hugetlb_valid_size(unsi
         return false;
 }
 
-static __init int setup_hugepagesz(char *opt)
-{
-        unsigned long ps = memparse(opt, &opt);
-
-        if (arch_hugetlb_valid_size(ps)) {
-                hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
-        } else {
-                hugetlb_bad_size();
-                printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n",
-                        ps >> 20);
-                return 0;
-        }
-        return 1;
-}
-__setup("hugepagesz=", setup_hugepagesz);
-
 #ifdef CONFIG_CONTIG_ALLOC
 static __init int gigantic_pages_init(void)
 {
--- a/include/linux/hugetlb.h~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/include/linux/hugetlb.h
@@ -519,7 +519,6 @@ int huge_add_to_page_cache(struct page *
 int __init __alloc_bootmem_huge_page(struct hstate *h);
 int __init alloc_bootmem_huge_page(struct hstate *h);
 
-void __init hugetlb_bad_size(void);
 void __init hugetlb_add_hstate(unsigned order);
 bool __init arch_hugetlb_valid_size(unsigned long size);
 struct hstate *size_to_hstate(unsigned long size);
--- a/mm/hugetlb.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code
+++ a/mm/hugetlb.c
@@ -3262,12 +3262,6 @@ bool __init __attribute((weak)) arch_hug
         return size == HPAGE_SIZE;
 }
 
-/* Should be called on processing a hugepagesz=... option */
-void __init hugetlb_bad_size(void)
-{
-        parsed_valid_hugepagesz = false;
-}
-
 void __init hugetlb_add_hstate(unsigned int order)
 {
         struct hstate *h;
@@ -3337,6 +3331,23 @@ static int __init hugetlb_nrpages_setup(
 }
 __setup("hugepages=", hugetlb_nrpages_setup);
 
+static int __init hugepagesz_setup(char *s)
+{
+        unsigned long size;
+
+        size = (unsigned long)memparse(s, NULL);
+
+        if (!arch_hugetlb_valid_size(size)) {
+                parsed_valid_hugepagesz = false;
+                pr_err("HugeTLB: unsupported hugepagesz %s\n", s);
+                return 0;
+        }
+
+        hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
+        return 1;
+}
+__setup("hugepagesz=", hugepagesz_setup);
+
 static int __init default_hugepagesz_setup(char *s)
 {
         unsigned long size;