From patchwork Tue Apr 7 03:07:06 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11477289
Date: Mon, 06 Apr 2020 20:07:06 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bhe@redhat.com, dan.j.williams@intel.com,
 david@redhat.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, pankaj.gupta.linux@gmail.com,
 richard.weiyang@gmail.com, torvalds@linux-foundation.org
Subject: [patch 060/166] mm/sparse.c: only use subsection map in VMEMMAP case
Message-ID: <20200407030706.W1btM3O1y%akpm@linux-foundation.org>
In-Reply-To: <20200406200254.a69ebd9e08c4074e41ddebaf@linux-foundation.org>

From: Baoquan He
Subject: mm/sparse.c: only use subsection map in VMEMMAP case

Currently, to support subsection-aligned memory region hotplug for pmem, a
subsection map is maintained to track which subsections of a section are
present.  However, CONFIG_ZONE_DEVICE depends on SPARSEMEM_VMEMMAP, so the
subsection map only makes sense when SPARSEMEM_VMEMMAP is enabled.  For
classic sparse it is meaningless and, even worse, may confuse people
reading the classic sparse code.

As for classic sparse not supporting subsection hotplug, Dan said it is
mostly because the effort and maintenance burden would outweigh the
benefit.  Besides, the current 64-bit architectures all enable
SPARSEMEM_VMEMMAP_ENABLE by default.

For these reasons there is no need to provide the subsection map and the
relevant handling for classic sparse, so remove them.

Link: http://lkml.kernel.org/r/20200312124414.439-4-bhe@redhat.com
Signed-off-by: Baoquan He
Reviewed-by: David Hildenbrand
Cc: Dan Williams
Cc: Michal Hocko
Cc: Pankaj Gupta
Cc: Wei Yang
Signed-off-by: Andrew Morton
---

 include/linux/mmzone.h |    2 ++
 mm/sparse.c            |   25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+)
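For readers new to this code, the small userspace sketch below illustrates
the arithmetic the VMEMMAP-only helpers perform.  It is not part of the
patch: the constants are the usual x86_64 values (SECTION_SIZE_BITS 27,
SUBSECTION_SHIFT 21, PAGE_SHIFT 12), and subsection_index()/mask_set() are
simplified stand-ins for the kernel's subsection_map_index() and
subsection_mask_set().  Each 128MB section carries one bit per 2MB
subsection; adding a pfn range sets the bits of the subsections it touches.
Classic sparse only ever adds whole sections, so such a map would carry no
extra information there.

/*
 * Standalone sketch (not kernel code): how a pfn range inside one 128MB
 * section maps onto the per-section subsection bitmap.  Constants below
 * assume the common x86_64 configuration.
 */
#include <stdio.h>

#define PAGE_SHIFT              12      /* 4KB pages */
#define SECTION_SIZE_BITS       27      /* 128MB sections */
#define SUBSECTION_SHIFT        21      /* 2MB subsections */

#define PAGES_PER_SECTION       (1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
#define PAGE_SECTION_MASK       (~(PAGES_PER_SECTION - 1))
#define PAGES_PER_SUBSECTION    (1UL << (SUBSECTION_SHIFT - PAGE_SHIFT))
#define SUBSECTIONS_PER_SECTION (1UL << (SECTION_SIZE_BITS - SUBSECTION_SHIFT))

/* Index of the subsection containing @pfn, relative to its section. */
static unsigned int subsection_index(unsigned long pfn)
{
        return (pfn & ~PAGE_SECTION_MASK) / PAGES_PER_SUBSECTION;
}

/*
 * Set the bits covered by [pfn, pfn + nr_pages); with the values above
 * SUBSECTIONS_PER_SECTION is 64, so one unsigned long is enough here.
 */
static void mask_set(unsigned long *map, unsigned long pfn,
                     unsigned long nr_pages)
{
        unsigned int idx = subsection_index(pfn);
        unsigned int end = subsection_index(pfn + nr_pages - 1);
        unsigned int i;

        for (i = idx; i <= end; i++)
                *map |= 1UL << i;
}

int main(void)
{
        unsigned long subsection_map = 0;

        /* A 4MB range starting 2MB into the section: pfns 512..1535. */
        mask_set(&subsection_map, 512, 1024);

        /* Bits 1 and 2 are set, i.e. subsection_map == 0x6. */
        printf("subsections per section: %lu\n", SUBSECTIONS_PER_SECTION);
        printf("subsection_map: %#lx\n", subsection_map);
        return 0;
}
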
--- a/include/linux/mmzone.h~mm-sparsec-only-use-subsection-map-in-vmemmap-case
+++ a/include/linux/mmzone.h
@@ -1143,7 +1143,9 @@ static inline unsigned long section_nr_t
 #define SUBSECTION_ALIGN_DOWN(pfn)      ((pfn) & PAGE_SUBSECTION_MASK)
 
 struct mem_section_usage {
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
        DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+#endif
        /* See declaration of similar field in struct zone */
        unsigned long pageblock_flags[0];
 };
--- a/mm/sparse.c~mm-sparsec-only-use-subsection-map-in-vmemmap-case
+++ a/mm/sparse.c
@@ -209,6 +209,7 @@ static inline unsigned long first_presen
        return next_present_section_nr(-1);
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static void subsection_mask_set(unsigned long *map, unsigned long pfn,
                unsigned long nr_pages)
 {
@@ -243,6 +244,11 @@ void __init subsection_map_init(unsigned
                nr_pages -= pfns;
        }
 }
+#else
+void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
+{
+}
+#endif
 
 /* Record a memory area against a node. */
 void __init memory_present(int nid, unsigned long start, unsigned long end)
@@ -705,6 +711,7 @@ static void free_map_bootmem(struct page
 }
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
        DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
@@ -731,6 +738,17 @@ static bool is_subsection_map_empty(stru
        return bitmap_empty(&ms->usage->subsection_map[0],
                            SUBSECTIONS_PER_SECTION);
 }
+#else
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+       return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+       return true;
+}
+#endif
 
 static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
                struct vmem_altmap *altmap)
@@ -792,6 +810,7 @@ static void section_deactivate(unsigned
        ms->section_mem_map = (unsigned long)NULL;
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
        struct mem_section *ms = __pfn_to_section(pfn);
@@ -813,6 +832,12 @@ static int fill_subsection_map(unsigned
 
        return rc;
 }
+#else
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+       return 0;
+}
+#endif
 
 static struct page * __meminit section_activate(int nid, unsigned long pfn,
                unsigned long nr_pages, struct vmem_altmap *altmap)