From patchwork Mon Oct 15 20:27:23 2018
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 10642449
Subject: [mm PATCH v3 5/6] mm: Use common iterator for deferred_init_pages
 and deferred_free_pages
From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: pavel.tatashin@microsoft.com, mhocko@suse.com, dave.jiang@intel.com,
 alexander.h.duyck@linux.intel.com, linux-kernel@vger.kernel.org,
 willy@infradead.org, davem@davemloft.net, yi.z.zhang@linux.intel.com,
 khalid.aziz@oracle.com, rppt@linux.vnet.ibm.com, vbabka@suse.cz,
 sparclinux@vger.kernel.org, dan.j.williams@intel.com,
 ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net, mingo@kernel.org,
 kirill.shutemov@linux.intel.com
Date: Mon, 15 Oct 2018 13:27:23 -0700
Message-ID: <20181015202723.2171.14482.stgit@localhost.localdomain>
In-Reply-To: <20181015202456.2171.88406.stgit@localhost.localdomain>
References: <20181015202456.2171.88406.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

This patch creates a common iterator to
be used by both deferred_init_pages and deferred_free_pages. By doing
this we can cut down a bit on code overhead, as they will likely both be
inlined into the same function anyway.

This new approach allows deferred_init_pages to make use of
__init_pageblock. By doing this we can cut down on code size by sharing
code between the hotplug and deferred memory init code paths.

An additional benefit to this approach is that we improve the cache
locality of the memory init, as we can focus on the memory areas related
to identifying if a given PFN is valid and keep that warm in the cache
until we transition to a region of a different type. So we will stream
through a chunk of valid blocks before we turn to initializing page
structs.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/page_alloc.c | 134 +++++++++++++++++++++++++++----------------------------
 1 file changed, 65 insertions(+), 69 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 92375e7867ba..f145063615a7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1488,32 +1488,6 @@ void clear_zone_contiguous(struct zone *zone)
 }
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-static void __init deferred_free_range(unsigned long pfn,
-				       unsigned long nr_pages)
-{
-	struct page *page;
-	unsigned long i;
-
-	if (!nr_pages)
-		return;
-
-	page = pfn_to_page(pfn);
-
-	/* Free a large naturally-aligned chunk if possible */
-	if (nr_pages == pageblock_nr_pages &&
-	    (pfn & (pageblock_nr_pages - 1)) == 0) {
-		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, pageblock_order);
-		return;
-	}
-
-	for (i = 0; i < nr_pages; i++, page++, pfn++) {
-		if ((pfn & (pageblock_nr_pages - 1)) == 0)
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
-	}
-}
-
 /* Completion tracking for deferred_init_memmap() threads */
 static atomic_t pgdat_init_n_undone __initdata;
 static __initdata DECLARE_COMPLETION(pgdat_init_all_done_comp);
@@ -1525,48 +1499,77 @@ static inline void __init pgdat_init_report_one_done(void)
 }
 
 /*
- * Returns true if page needs to be initialized or freed to buddy allocator.
+ * Returns count if page range needs to be initialized or freed
  *
- * First we check if pfn is valid on architectures where it is possible to have
- * holes within pageblock_nr_pages. On systems where it is not possible, this
- * function is optimized out.
+ * First, we check if a current large page is valid by only checking the
+ * validity of the head pfn.
  *
- * Then, we check if a current large page is valid by only checking the validity
- * of the head pfn.
+ * Then we check if the contiguous pfns are valid on architectures where it
+ * is possible to have holes within pageblock_nr_pages. On systems where it
+ * is not possible, this function is optimized out.
  */
-static inline bool __init deferred_pfn_valid(unsigned long pfn)
+static unsigned long __next_pfn_valid_range(unsigned long *i,
+					    unsigned long end_pfn)
 {
-	if (!pfn_valid_within(pfn))
-		return false;
-	if (!(pfn & (pageblock_nr_pages - 1)) && !pfn_valid(pfn))
-		return false;
-	return true;
+	unsigned long pfn = *i;
+	unsigned long count;
+
+	while (pfn < end_pfn) {
+		unsigned long t = ALIGN(pfn + 1, pageblock_nr_pages);
+		unsigned long pageblock_pfn = min(t, end_pfn);
+
+#ifndef CONFIG_HOLES_IN_ZONE
+		count = pageblock_pfn - pfn;
+		pfn = pageblock_pfn;
+		if (!pfn_valid(pfn))
+			continue;
+#else
+		for (count = 0; pfn < pageblock_pfn; pfn++) {
+			if (pfn_valid_within(pfn)) {
+				count++;
+				continue;
+			}
+
+			if (count)
+				break;
+		}
+
+		if (!count)
+			continue;
+#endif
+		*i = pfn;
+		return count;
+	}
+
+	return 0;
 }
 
+#define for_each_deferred_pfn_valid_range(i, start_pfn, end_pfn, pfn, count) \
+	for (i = (start_pfn),						      \
+	     count = __next_pfn_valid_range(&i, (end_pfn));		      \
+	     count && ({ pfn = i - count; 1; });			      \
+	     count = __next_pfn_valid_range(&i, (end_pfn)))
+
 /*
  * Free pages to buddy allocator. Try to free aligned pages in
  * pageblock_nr_pages sizes.
  */
-static void __init deferred_free_pages(unsigned long pfn,
+static void __init deferred_free_pages(unsigned long start_pfn,
 				       unsigned long end_pfn)
 {
-	unsigned long nr_pgmask = pageblock_nr_pages - 1;
-	unsigned long nr_free = 0;
-
-	for (; pfn < end_pfn; pfn++) {
-		if (!deferred_pfn_valid(pfn)) {
-			deferred_free_range(pfn - nr_free, nr_free);
-			nr_free = 0;
-		} else if (!(pfn & nr_pgmask)) {
-			deferred_free_range(pfn - nr_free, nr_free);
-			nr_free = 1;
-			touch_nmi_watchdog();
+	unsigned long i, pfn, count;
+
+	for_each_deferred_pfn_valid_range(i, start_pfn, end_pfn, pfn, count) {
+		struct page *page = pfn_to_page(pfn);
+
+		if (count == pageblock_nr_pages) {
+			__free_pages_core(page, pageblock_order);
 		} else {
-			nr_free++;
+			while (count--)
+				__free_pages_core(page++, 0);
 		}
+
+		touch_nmi_watchdog();
 	}
-	/* Free the last block of pages to allocator */
-	deferred_free_range(pfn - nr_free, nr_free);
 }
 
 /*
@@ -1575,29 +1578,22 @@ static void __init deferred_free_pages(unsigned long pfn,
  * Return number of pages initialized.
  */
 static unsigned long __init deferred_init_pages(struct zone *zone,
-						unsigned long pfn,
+						unsigned long start_pfn,
 						unsigned long end_pfn)
 {
-	unsigned long nr_pgmask = pageblock_nr_pages - 1;
+	unsigned long i, pfn, count;
 	int nid = zone_to_nid(zone);
 	unsigned long nr_pages = 0;
 	int zid = zone_idx(zone);
-	struct page *page = NULL;
 
-	for (; pfn < end_pfn; pfn++) {
-		if (!deferred_pfn_valid(pfn)) {
-			page = NULL;
-			continue;
-		} else if (!page || !(pfn & nr_pgmask)) {
-			page = pfn_to_page(pfn);
-			touch_nmi_watchdog();
-		} else {
-			page++;
-		}
-		__init_single_page(page, pfn, zid, nid);
-		nr_pages++;
+	for_each_deferred_pfn_valid_range(i, start_pfn, end_pfn, pfn, count) {
+		nr_pages += count;
+		__init_pageblock(pfn, count, zid, nid, NULL, false);
+
+		touch_nmi_watchdog();
 	}
-	return (nr_pages);
+
+	return nr_pages;
 }
 
 /*
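
As a postscript for anyone who wants to experiment with the iterator
pattern outside the kernel: below is a minimal, self-contained userspace
sketch of the (pfn, count) run iterator the patch introduces. It is
illustrative only, not part of the patch, and makes a few assumptions:
is_valid(), BLOCK, and the demo hole pattern are hypothetical stand-ins
for pfn_valid_within() and pageblock_nr_pages, and it mirrors the
CONFIG_HOLES_IN_ZONE branch of __next_pfn_valid_range(). The loop
condition uses a GCC/Clang statement expression, just like the patch's
macro.

/* Userspace sketch of the run iterator; build with gcc or clang. */
#include <stdio.h>

#define BLOCK 4UL	/* hypothetical stand-in for pageblock_nr_pages */

/* Hypothetical validity test standing in for pfn_valid_within(). */
static int is_valid(unsigned long pfn)
{
	return (pfn & 7) != 5;	/* arbitrary holes at pfn 5, 13, 21, ... */
}

/*
 * Advance *i past the next run of valid pfns (never crossing a BLOCK
 * boundary) and return the run length, or 0 when the range is exhausted.
 */
static unsigned long next_valid_range(unsigned long *i, unsigned long end)
{
	unsigned long pfn = *i;
	unsigned long count;

	while (pfn < end) {
		unsigned long block_pfn = (pfn / BLOCK + 1) * BLOCK;

		if (block_pfn > end)
			block_pfn = end;

		/* Skip leading invalid pfns, then count a valid run. */
		for (count = 0; pfn < block_pfn; pfn++) {
			if (is_valid(pfn)) {
				count++;
				continue;
			}

			if (count)
				break;
		}

		if (!count)
			continue;

		*i = pfn;
		return count;
	}

	return 0;
}

/* pfn is recovered from i and count, exactly as in the patch's macro. */
#define for_each_valid_range(i, start, end, pfn, count)		\
	for (i = (start), count = next_valid_range(&i, (end));	\
	     count && ({ pfn = i - count; 1; });		\
	     count = next_valid_range(&i, (end)))

int main(void)
{
	unsigned long i, pfn, count;

	/* Each iteration yields one contiguous run; batch work goes here. */
	for_each_valid_range(i, 0, 16, pfn, count)
		printf("valid run at pfn %lu, %lu page(s)\n", pfn, count);

	return 0;
}

Note that runs never cross a BLOCK boundary, mirroring the patch: a
fully valid pageblock arrives as a single run of exactly
pageblock_nr_pages pages, which deferred_free_pages() can hand to
__free_pages_core() at pageblock_order in one call instead of freeing
page by page.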