From patchwork Mon Oct 15 20:27:16 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 10642447
Subject: [mm PATCH v3 4/6] mm: Move hot-plug specific memory init into
 separate functions and optimize
From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: pavel.tatashin@microsoft.com, mhocko@suse.com, dave.jiang@intel.com,
 alexander.h.duyck@linux.intel.com, linux-kernel@vger.kernel.org,
 willy@infradead.org, davem@davemloft.net, yi.z.zhang@linux.intel.com,
 khalid.aziz@oracle.com, rppt@linux.vnet.ibm.com, vbabka@suse.cz,
 sparclinux@vger.kernel.org, dan.j.williams@intel.com,
 ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net, mingo@kernel.org,
 kirill.shutemov@linux.intel.com
Date: Mon, 15 Oct 2018 13:27:16 -0700
Message-ID: <20181015202716.2171.7284.stgit@localhost.localdomain>
In-Reply-To: <20181015202456.2171.88406.stgit@localhost.localdomain>
References: <20181015202456.2171.88406.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty
This patch combines the bits in memmap_init_zone and
memmap_init_zone_device that are related to hotplug into a single
function called __memmap_init_hotplug. I also took the opportunity to
integrate __init_single_page's functionality into this function. In
doing so I can get rid of some of the redundancy, such as the handling
of the LRU pointers versus the pgmap.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/page_alloc.c | 232 ++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 159 insertions(+), 73 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 20e9eb35d75d..92375e7867ba 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1192,6 +1192,94 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 #endif
 }
 
+static void __meminit __init_pageblock(unsigned long start_pfn,
+				       unsigned long nr_pages,
+				       unsigned long zone, int nid,
+				       struct dev_pagemap *pgmap,
+				       bool is_reserved)
+{
+	unsigned long nr_pgmask = pageblock_nr_pages - 1;
+	struct page *start_page = pfn_to_page(start_pfn);
+	unsigned long pfn = start_pfn + nr_pages - 1;
+#ifdef WANT_PAGE_VIRTUAL
+	bool is_highmem = is_highmem_idx(zone);
+#endif
+	struct page *page;
+
+	/*
+	 * Enforce the following requirements:
+	 * size > 0
+	 * size < pageblock_nr_pages
+	 * start_pfn -> pfn does not cross pageblock_nr_pages boundary
+	 */
+	VM_BUG_ON(((start_pfn ^ pfn) | (nr_pages - 1)) > nr_pgmask);
+
+	/*
+	 * Work from highest page to lowest, this way we will still be
+	 * warm in the cache when we call set_pageblock_migratetype
+	 * below.
+	 *
+	 * The loop is based around the page pointer as the main index
+	 * instead of the pfn because pfn is not used inside the loop if
+	 * the section number is not in page flags and WANT_PAGE_VIRTUAL
+	 * is not defined.
+	 */
+	for (page = start_page + nr_pages; page-- != start_page; pfn--) {
+		mm_zero_struct_page(page);
+
+		/*
+		 * We use the start_pfn instead of pfn in the set_page_links
+		 * call because of the fact that the pfn number is used to
+		 * get the section_nr and this function should not be
+		 * spanning more than a single section.
+		 */
+		set_page_links(page, zone, nid, start_pfn);
+		init_page_count(page);
+		page_mapcount_reset(page);
+		page_cpupid_reset_last(page);
+
+		/*
+		 * We can use the non-atomic __set_bit operation for setting
+		 * the flag as we are still initializing the pages.
+		 */
+		if (is_reserved)
+			__SetPageReserved(page);
+
+		/*
+		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
+		 * pointer and hmm_data. It is a bug if a ZONE_DEVICE
+		 * page is ever freed or placed on a driver-private list.
+		 */
+		page->pgmap = pgmap;
+		if (!pgmap)
+			INIT_LIST_HEAD(&page->lru);
+
+#ifdef WANT_PAGE_VIRTUAL
+		/* The shift won't overflow because ZONE_NORMAL is below 4G. */
+		if (!is_highmem)
+			set_page_address(page, __va(pfn << PAGE_SHIFT));
+#endif
+	}
+
+	/*
+	 * Mark the block movable so that blocks are reserved for
+	 * movable at startup. This will force kernel allocations
+	 * to reserve their blocks rather than leaking throughout
+	 * the address space during boot when many long-lived
+	 * kernel allocations are made.
+	 *
+	 * bitmap is created for zone's valid pfn range. but memmap
+	 * can be created for invalid pages (for alignment)
+	 * check here not to call set_pageblock_migratetype() against
+	 * pfn out of zone.
+	 *
+	 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+	 * because this is done early in sparse_add_one_section
+	 */
+	if (!(start_pfn & nr_pgmask))
+		set_pageblock_migratetype(start_page, MIGRATE_MOVABLE);
+}
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static void __meminit init_reserved_page(unsigned long pfn)
 {
@@ -5513,6 +5601,36 @@ void __ref build_all_zonelists(pg_data_t *pgdat)
 	return false;
 }
 
+static void __meminit __memmap_init_hotplug(unsigned long size, int nid,
+					    unsigned long zone,
+					    unsigned long start_pfn,
+					    struct dev_pagemap *pgmap)
+{
+	unsigned long pfn = start_pfn + size;
+
+	while (pfn != start_pfn) {
+		unsigned long stride = pfn;
+
+		pfn = max(ALIGN_DOWN(pfn - 1, pageblock_nr_pages), start_pfn);
+		stride -= pfn;
+
+		/*
+		 * The last argument of __init_pageblock is a boolean
+		 * value indicating if the page will be marked as reserved.
+		 *
+		 * Mark page reserved as it will need to wait for onlining
+		 * phase for it to be fully associated with a zone.
+		 *
+		 * Under certain circumstances ZONE_DEVICE pages may not
+		 * need to be marked as reserved, however there is still
+		 * code that is depending on this being set for now.
+		 */
+		__init_pageblock(pfn, stride, zone, nid, pgmap, true);
+
+		cond_resched();
+	}
+}
+
 /*
  * Initially all pages are reserved - free ones are freed
  * up by memblock_free_all() once the early boot process is
@@ -5523,51 +5641,61 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		struct vmem_altmap *altmap)
 {
 	unsigned long pfn, end_pfn = start_pfn + size;
-	struct page *page;
 
 	if (highest_memmap_pfn < end_pfn - 1)
 		highest_memmap_pfn = end_pfn - 1;
 
+	if (context == MEMMAP_HOTPLUG) {
 #ifdef CONFIG_ZONE_DEVICE
-	/*
-	 * Honor reservation requested by the driver for this ZONE_DEVICE
-	 * memory. We limit the total number of pages to initialize to just
-	 * those that might contain the memory mapping. We will defer the
-	 * ZONE_DEVICE page initialization until after we have released
-	 * the hotplug lock.
-	 */
-	if (zone == ZONE_DEVICE) {
-		if (!altmap)
-			return;
+		/*
+		 * Honor reservation requested by the driver for this
+		 * ZONE_DEVICE memory. We limit the total number of pages to
+		 * initialize to just those that might contain the memory
+		 * mapping. We will defer the ZONE_DEVICE page initialization
+		 * until after we have released the hotplug lock.
+		 */
+		if (zone == ZONE_DEVICE) {
+			if (!altmap)
+				return;
+
+			if (start_pfn == altmap->base_pfn)
+				start_pfn += altmap->reserve;
+			end_pfn = altmap->base_pfn +
+				  vmem_altmap_offset(altmap);
+		}
+#endif
+		/*
+		 * For these ZONE_DEVICE pages we don't need to record the
+		 * pgmap as they should represent only those pages used to
+		 * store the memory map. The actual ZONE_DEVICE pages will
+		 * be initialized later.
+		 */
+		__memmap_init_hotplug(end_pfn - start_pfn, nid, zone,
+				      start_pfn, NULL);
 
-		if (start_pfn == altmap->base_pfn)
-			start_pfn += altmap->reserve;
-		end_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+		return;
 	}
-#endif
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+		struct page *page;
+
 		/*
 		 * There can be holes in boot-time mem_map[]s handed to this
 		 * function. They do not exist on hotplugged memory.
 		 */
-		if (context == MEMMAP_EARLY) {
-			if (!early_pfn_valid(pfn)) {
-				pfn = next_valid_pfn(pfn) - 1;
-				continue;
-			}
-			if (!early_pfn_in_nid(pfn, nid))
-				continue;
-			if (overlap_memmap_init(zone, &pfn))
-				continue;
-			if (defer_init(nid, pfn, end_pfn))
-				break;
+		if (!early_pfn_valid(pfn)) {
+			pfn = next_valid_pfn(pfn) - 1;
+			continue;
 		}
+		if (!early_pfn_in_nid(pfn, nid))
+			continue;
+		if (overlap_memmap_init(zone, &pfn))
+			continue;
+		if (defer_init(nid, pfn, end_pfn))
+			break;
 
 		page = pfn_to_page(pfn);
 		__init_single_page(page, pfn, zone, nid);
-		if (context == MEMMAP_HOTPLUG)
-			__SetPageReserved(page);
 
 		/*
 		 * Mark the block movable so that blocks are reserved for
@@ -5594,14 +5722,12 @@ void __ref memmap_init_zone_device(struct zone *zone,
 				   unsigned long size,
 				   struct dev_pagemap *pgmap)
 {
-	unsigned long pfn, end_pfn = start_pfn + size;
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long zone_idx = zone_idx(zone);
 	unsigned long start = jiffies;
 	int nid = pgdat->node_id;
 
-	if (WARN_ON_ONCE(!pgmap || !is_dev_zone(zone)))
-		return;
+	VM_BUG_ON(!is_dev_zone(zone));
 
 	/*
 	 * The call to memmap_init_zone should have already taken care
@@ -5610,53 +5736,13 @@ void __ref memmap_init_zone_device(struct zone *zone,
 	 */
 	if (pgmap->altmap_valid) {
 		struct vmem_altmap *altmap = &pgmap->altmap;
+		unsigned long end_pfn = start_pfn + size;
 
 		start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
 		size = end_pfn - start_pfn;
 	}
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-
-		__init_single_page(page, pfn, zone_idx, nid);
-
-		/*
-		 * Mark page reserved as it will need to wait for onlining
-		 * phase for it to be fully associated with a zone.
-		 *
-		 * We can use the non-atomic __set_bit operation for setting
-		 * the flag as we are still initializing the pages.
-		 */
-		__SetPageReserved(page);
-
-		/*
-		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
-		 * pointer and hmm_data. It is a bug if a ZONE_DEVICE
-		 * page is ever freed or placed on a driver-private list.
-		 */
-		page->pgmap = pgmap;
-		page->hmm_data = 0;
-
-		/*
-		 * Mark the block movable so that blocks are reserved for
-		 * movable at startup. This will force kernel allocations
-		 * to reserve their blocks rather than leaking throughout
-		 * the address space during boot when many long-lived
-		 * kernel allocations are made.
-		 *
-		 * bitmap is created for zone's valid pfn range. but memmap
-		 * can be created for invalid pages (for alignment)
-		 * check here not to call set_pageblock_migratetype() against
-		 * pfn out of zone.
-		 *
-		 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
-		 * because this is done early in sparse_add_one_section
-		 */
-		if (!(pfn & (pageblock_nr_pages - 1))) {
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-			cond_resched();
-		}
-	}
+	__memmap_init_hotplug(size, nid, zone_idx, start_pfn, pgmap);
 
 	pr_info("%s initialised, %lu pages in %ums\n", dev_name(pgmap->dev),
 		size, jiffies_to_msecs(jiffies - start));
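
A note for readers on the VM_BUG_ON() in __init_pageblock: the single
comparison folds all three documented requirements into one test. Below is a
minimal userspace sketch of the same check; PAGEBLOCK_NR_PAGES and
range_is_invalid() are hypothetical stand-ins for the kernel's
pageblock_nr_pages and the in-kernel assertion, with the block size assumed to
be a power of two (512 covers a 2MB pageblock with 4KB pages on x86-64).

/* Userspace model of the bounds check in __init_pageblock(). */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* stand-in for pageblock_nr_pages */

/*
 * Returns true when the range breaks one of the documented requirements:
 * nr_pages == 0, nr_pages > PAGEBLOCK_NR_PAGES, or the range crosses a
 * pageblock boundary.
 */
static bool range_is_invalid(unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long nr_pgmask = PAGEBLOCK_NR_PAGES - 1;
	unsigned long end_pfn = start_pfn + nr_pages - 1;

	/*
	 * start ^ end keeps only the bits where the first and last pfn
	 * differ; a differing bit above the block mask means the range
	 * straddles a boundary. nr_pages - 1 underflows to ULONG_MAX for
	 * nr_pages == 0, so an empty range also trips the check.
	 */
	return (((start_pfn ^ end_pfn) | (nr_pages - 1)) > nr_pgmask);
}

int main(void)
{
	assert(!range_is_invalid(512, 512));	/* exactly one aligned block */
	assert(!range_is_invalid(520, 8));	/* small range inside a block */
	assert(range_is_invalid(520, 512));	/* crosses into the next block */
	assert(range_is_invalid(0, 513));	/* larger than one block */
	assert(range_is_invalid(512, 0));	/* empty range */
	printf("all checks behave as the VM_BUG_ON comment describes\n");
	return 0;
}

Note that the check itself permits size equal to pageblock_nr_pages, slightly
looser than the "size < pageblock_nr_pages" wording in the comment.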
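The top-down walk in __memmap_init_hotplug can be modeled the same way. In
this sketch, ALIGN_DOWN and max mirror the kernel macros of the same names,
and the printf stands in for the call to __init_pageblock; it shows how each
iteration peels off a chunk that ends exactly at a pageblock boundary (or at
start_pfn), so no chunk ever trips the bounds check above.

/* Userspace model of the pageblock walk in __memmap_init_hotplug(). */
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
#define max(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	/* An intentionally unaligned range: starts and ends mid-block. */
	unsigned long start_pfn = 520, size = 1500;
	unsigned long pfn = start_pfn + size;

	while (pfn != start_pfn) {
		unsigned long stride = pfn;

		/*
		 * Step down to the previous block boundary, clamped to
		 * start_pfn so the first (partial) block is not overshot.
		 */
		pfn = max(ALIGN_DOWN(pfn - 1, PAGEBLOCK_NR_PAGES), start_pfn);
		stride -= pfn;

		/* Stand-in for __init_pageblock(pfn, stride, ...). */
		printf("init block: pfn=%lu nr_pages=%lu\n", pfn, stride);
	}
	return 0;
}

Running this prints three chunks of 484, 512, and 504 pages, covering the
1500-page range from the top down. Walking from the top also means the lowest
pages of each block are the last ones touched, keeping them warm in the cache
for set_pageblock_migratetype(), as the comment in __init_pageblock notes.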