From patchwork Tue Apr 7 03:07:00 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11477285
Date: Mon, 06 Apr 2020 20:07:00 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bhe@redhat.com, dan.j.williams@intel.com,
	david@redhat.com, linux-mm@kvack.org, mhocko@suse.com,
	mm-commits@vger.kernel.org, pankaj.gupta.linux@gmail.com,
	richard.weiyang@gmail.com, torvalds@linux-foundation.org
Subject: [patch 058/166] mm/sparse.c: introduce new function fill_subsection_map()
Message-ID: <20200407030700.MqBZGplM7%akpm@linux-foundation.org>
In-Reply-To: <20200406200254.a69ebd9e08c4074e41ddebaf@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Baoquan He
Subject: mm/sparse.c: introduce new function fill_subsection_map()

Patch series "mm/hotplug: Only use subsection map for VMEMMAP", v4.

Memory sub-section hotplug was added to fix the issue that an nvdimm
could be mapped at a non-section-aligned starting address.  A subsection
map was added to struct mem_section_usage to implement it (a simplified
sketch of that structure follows this description).

However, config ZONE_DEVICE depends on SPARSEMEM_VMEMMAP, which means
the subsection map only makes sense when SPARSEMEM_VMEMMAP is enabled.
For classic sparse, the subsection map is meaningless and confusing.

As for why classic sparse doesn't support subsection hotplug, Dan said
it's mostly because the effort and maintenance burden outweigh the
benefit.  Besides, all current 64-bit arches enable
SPARSEMEM_VMEMMAP_ENABLE by default.

This patch (of 5):

Factor the code that fills the subsection map out of section_activate()
into a new function, fill_subsection_map().  This makes
section_activate() cleaner and easier to follow.
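For context, here is a simplified sketch of struct mem_section_usage as
it stands before this series (field layout as in include/linux/mmzone.h
around v5.6; the comment on subsection_map is added here for
illustration and is not from the kernel source):

struct mem_section_usage {
	/*
	 * One bit per subsection of this memory section; set bits mark
	 * the subsections that have been populated.  The rest of this
	 * series restricts use of this map to SPARSEMEM_VMEMMAP
	 * configurations.
	 */
	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
	/* See declaration of similar field in struct zone */
	unsigned long pageblock_flags[0];
};

fill_subsection_map(), introduced below, operates on subsection_map: it
sets the bits covering the hotplugged pfn range, returning -EINVAL for
an empty range and -EEXIST if any of the bits are already set.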
Link: http://lkml.kernel.org/r/20200312124414.439-2-bhe@redhat.com
Signed-off-by: Baoquan He
Reviewed-by: Wei Yang
Reviewed-by: David Hildenbrand
Acked-by: Pankaj Gupta
Cc: Dan Williams
Cc: Michal Hocko
Signed-off-by: Andrew Morton
---

 mm/sparse.c |   32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

--- a/mm/sparse.c~mm-sparsec-introduce-new-function-fill_subsection_map
+++ a/mm/sparse.c
@@ -777,24 +777,15 @@ static void section_deactivate(unsigned
 		ms->section_mem_map = (unsigned long)NULL;
 }
 
-static struct page * __meminit section_activate(int nid, unsigned long pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap)
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
 	struct mem_section *ms = __pfn_to_section(pfn);
-	struct mem_section_usage *usage = NULL;
+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
 	unsigned long *subsection_map;
-	struct page *memmap;
 	int rc = 0;
 
 	subsection_mask_set(map, pfn, nr_pages);
 
-	if (!ms->usage) {
-		usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
-		if (!usage)
-			return ERR_PTR(-ENOMEM);
-		ms->usage = usage;
-	}
 	subsection_map = &ms->usage->subsection_map[0];
 
 	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
@@ -805,6 +796,25 @@ static struct page * __meminit section_a
 		bitmap_or(subsection_map, map, subsection_map,
 				SUBSECTIONS_PER_SECTION);
 
+	return rc;
+}
+
+static struct page * __meminit section_activate(int nid, unsigned long pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	struct mem_section_usage *usage = NULL;
+	struct page *memmap;
+	int rc = 0;
+
+	if (!ms->usage) {
+		usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
+		if (!usage)
+			return ERR_PTR(-ENOMEM);
+		ms->usage = usage;
+	}
+
+	rc = fill_subsection_map(pfn, nr_pages);
 	if (rc) {
 		if (usage)
 			ms->usage = NULL;