From patchwork Thu Aug 5 19:02:52 2021
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12421883
From: Zi Yan
To: David Hildenbrand, linux-mm@kvack.org
Cc: Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz,
 Michal Hocko, John Hubbard, linux-kernel@vger.kernel.org, Zi Yan,
 Marc Zyngier, Catalin Marinas, Christoph Lameter, Quentin Perret,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCH 14/15] mm: introduce MIN_MAX_ORDER to replace MAX_ORDER as compile time constant.
Date: Thu, 5 Aug 2021 15:02:52 -0400
Message-Id: <20210805190253.2795604-15-zi.yan@sent.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210805190253.2795604-1-zi.yan@sent.com>
References: <20210805190253.2795604-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0

From: Zi Yan

For the remaining MAX_ORDER uses (described below), there is either no
need, or it is too much hassle, to convert the static arrays involved
into dynamic ones. Add MIN_MAX_ORDER to serve as a compile-time
constant in place of MAX_ORDER.

The ARM64 hypervisor maintains its own free page list and does not
import any core kernel symbols, so the soon-to-be runtime variable
MAX_ORDER is not accessible in ARM64 hypervisor code. There is also no
need to allocate very large pages there.

In SLAB/SLOB/SLUB, the 2-D array kmalloc_caches uses MAX_ORDER in its
second dimension. It would be too much hassle to allocate memory for
kmalloc_caches before any proper memory allocator is set up.
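To make the slab constraint concrete, here is a minimal illustration based on
the existing kmalloc_caches declaration in include/linux/slab.h (it is not
code added by this patch): the array is statically dimensioned by
KMALLOC_SHIFT_HIGH, which is derived from the page-allocator order limit, so a
compile-time bound must remain available even once MAX_ORDER becomes a runtime
variable.

	/*
	 * Illustration only: this existing declaration cannot be sized by a
	 * runtime variable, which is why KMALLOC_SHIFT_HIGH is derived from
	 * MIN_MAX_ORDER instead of MAX_ORDER after this patch.
	 */
	extern struct kmem_cache *
	kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];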
Signed-off-by: Zi Yan
Cc: Marc Zyngier
Cc: Catalin Marinas
Cc: Christoph Lameter
Cc: Vlastimil Babka
Cc: Quentin Perret
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h | 2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 3 ++-
 include/linux/mmzone.h                | 3 +++
 include/linux/slab.h                  | 8 ++++----
 mm/slab.c                             | 2 +-
 mm/slub.c                             | 7 ++++---
 6 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index fb0f523d1492..c774b4a98336 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -16,7 +16,7 @@ struct hyp_pool {
	 * API at EL2.
	 */
	hyp_spinlock_t lock;
-	struct list_head free_area[MAX_ORDER];
+	struct list_head free_area[MIN_MAX_ORDER];
	phys_addr_t range_start;
	phys_addr_t range_end;
	unsigned short max_order;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 41fc25bdfb34..a1cc1b648de0 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -226,7 +226,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
	int i;

	hyp_spin_lock_init(&pool->lock);
-	pool->max_order = min(MAX_ORDER, get_order(nr_pages << PAGE_SHIFT));
+
+	pool->max_order = min(MIN_MAX_ORDER, get_order(nr_pages << PAGE_SHIFT));
	for (i = 0; i < pool->max_order; i++)
		INIT_LIST_HEAD(&pool->free_area[i]);
	pool->range_start = phys;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 09aafc05aef4..379dada82d4b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -27,11 +27,14 @@
 #ifndef CONFIG_ARCH_FORCE_MAX_ORDER
 #ifdef CONFIG_SET_MAX_ORDER
 #define MAX_ORDER CONFIG_SET_MAX_ORDER
+#define MIN_MAX_ORDER CONFIG_SET_MAX_ORDER
 #else
 #define MAX_ORDER 11
+#define MIN_MAX_ORDER MAX_ORDER
 #endif /* CONFIG_SET_MAX_ORDER */
 #else
 #define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
+#define MIN_MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
 #endif /* CONFIG_ARCH_FORCE_MAX_ORDER */

 #define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 2c0d80cca6b8..d8747c158db6 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -244,8 +244,8 @@ static inline void __check_heap_object(const void *ptr, unsigned long n,
 * to do various tricks to work around compiler limitations in order to
 * ensure proper constant folding.
 */
-#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
+#define KMALLOC_SHIFT_HIGH	((MIN_MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
+				(MIN_MAX_ORDER + PAGE_SHIFT - 1) : 25)
 #define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
@@ -258,7 +258,7 @@ static inline void __check_heap_object(const void *ptr, unsigned long n,
 * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
 */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
+#define KMALLOC_SHIFT_MAX	(MIN_MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif
@@ -271,7 +271,7 @@ static inline void __check_heap_object(const void *ptr, unsigned long n,
 * be allocated from the same page.
 */
 #define KMALLOC_SHIFT_HIGH	PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
+#define KMALLOC_SHIFT_MAX	(MIN_MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif
diff --git a/mm/slab.c b/mm/slab.c
index d0f725637663..0041de8ec0e9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -466,7 +466,7 @@ static int __init slab_max_order_setup(char *str)
 {
	get_option(&str, &slab_max_order);
	slab_max_order = slab_max_order < 0 ? 0 :
-				min(slab_max_order, MAX_ORDER - 1);
+				min(slab_max_order, MIN_MAX_ORDER - 1);
	slab_max_order_set = true;

	return 1;
diff --git a/mm/slub.c b/mm/slub.c
index b6c5205252eb..228e4a77c678 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3564,8 +3564,9 @@ static inline int calculate_order(unsigned int size)
	/*
	 * Doh this slab cannot be placed using slub_max_order.
	 */
-	order = slab_order(size, 1, MAX_ORDER, 1);
-	if (order < MAX_ORDER)
+
+	order = slab_order(size, 1, MIN_MAX_ORDER, 1);
+	if (order < MIN_MAX_ORDER)
		return order;
	return -ENOSYS;
 }
@@ -4079,7 +4080,7 @@ __setup("slub_min_order=", setup_slub_min_order);
 static int __init setup_slub_max_order(char *str)
 {
	get_option(&str, (int *)&slub_max_order);
-	slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER - 1);
+	slub_max_order = min(slub_max_order, (unsigned int)MIN_MAX_ORDER - 1);

	return 1;
 }
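A hypothetical usage sketch of the intended split between the two names, not
part of this patch (order_stats and count_alloc are invented for illustration,
and it assumes, as in this series, that the runtime MAX_ORDER never drops
below MIN_MAX_ORDER): statically sized, order-indexed tables stay bounded by
the compile-time MIN_MAX_ORDER, while code that can follow the runtime limit
keeps using MAX_ORDER.

	/*
	 * Hypothetical example: a fixed-size table indexed by allocation
	 * order uses the compile-time bound MIN_MAX_ORDER.
	 */
	static unsigned long order_stats[MIN_MAX_ORDER];

	static void count_alloc(unsigned int order)
	{
		/* Orders at or above the compile-time bound are not tracked. */
		if (order < MIN_MAX_ORDER)
			order_stats[order]++;
	}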