From patchwork Wed Oct 17 23:54:36 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 10646623
Subject: [mm PATCH v4 6/6] mm: Use common iterator for deferred_init_pages
 and deferred_free_pages
From: Alexander Duyck
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: pavel.tatashin@microsoft.com, mhocko@suse.com, dave.jiang@intel.com,
 alexander.h.duyck@linux.intel.com, linux-kernel@vger.kernel.org,
 willy@infradead.org, davem@davemloft.net, yi.z.zhang@linux.intel.com,
 khalid.aziz@oracle.com, rppt@linux.vnet.ibm.com, vbabka@suse.cz,
 sparclinux@vger.kernel.org, dan.j.williams@intel.com,
 ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net, mingo@kernel.org,
 kirill.shutemov@linux.intel.com
Date: Wed, 17 Oct 2018 16:54:36 -0700
Message-ID: <20181017235436.17213.15091.stgit@localhost.localdomain>
In-Reply-To: <20181017235043.17213.92459.stgit@localhost.localdomain>
References: <20181017235043.17213.92459.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

This patch creates a common
iterator to be used by both deferred_init_pages and
deferred_free_pages. By doing this we can cut down a bit on code overhead
as they will likely both be inlined into the same function anyway.

This new approach allows deferred_init_pages to make use of
__init_pageblock. Sharing code between the hotplug and deferred memory
init code paths also cuts down on code size.

An additional benefit to this approach is that we improve the cache
locality of the memory init, as we can focus on the memory areas related
to identifying if a given PFN is valid and keep that data warm in the
cache until we transition to a region of a different type. So we will
stream through a chunk of valid blocks before we turn to initializing
page structs.

On my x86_64 test system with 384GB of memory per node I saw a reduction
in initialization time from 1.38s to 1.06s as a result of this patch.

Signed-off-by: Alexander Duyck
---
(For illustration, a standalone sketch of the iterator pattern follows
the diff.)

 mm/page_alloc.c | 134 +++++++++++++++++++++++++++----------------------------
 1 file changed, 65 insertions(+), 69 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e7fee7a5f8a3..f47d02e42cf7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1484,32 +1484,6 @@ void clear_zone_contiguous(struct zone *zone)
 }
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-static void __init deferred_free_range(unsigned long pfn,
-				       unsigned long nr_pages)
-{
-	struct page *page;
-	unsigned long i;
-
-	if (!nr_pages)
-		return;
-
-	page = pfn_to_page(pfn);
-
-	/* Free a large naturally-aligned chunk if possible */
-	if (nr_pages == pageblock_nr_pages &&
-	    (pfn & (pageblock_nr_pages - 1)) == 0) {
-		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, pageblock_order);
-		return;
-	}
-
-	for (i = 0; i < nr_pages; i++, page++, pfn++) {
-		if ((pfn & (pageblock_nr_pages - 1)) == 0)
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
-	}
-}
-
 /* Completion tracking for deferred_init_memmap() threads */
 static atomic_t pgdat_init_n_undone __initdata;
 static __initdata DECLARE_COMPLETION(pgdat_init_all_done_comp);
@@ -1521,48 +1495,77 @@ static inline void __init pgdat_init_report_one_done(void)
 }
 
 /*
- * Returns true if page needs to be initialized or freed to buddy allocator.
+ * Returns count if page range needs to be initialized or freed
  *
- * First we check if pfn is valid on architectures where it is possible to have
- * holes within pageblock_nr_pages. On systems where it is not possible, this
- * function is optimized out.
+ * First, we check if a current large page is valid by only checking the
+ * validity of the head pfn.
  *
- * Then, we check if a current large page is valid by only checking the validity
- * of the head pfn.
+ * Then we check if the contiguous pfns are valid on architectures where it
+ * is possible to have holes within pageblock_nr_pages. On systems where it
+ * is not possible, this function is optimized out.
  */
-static inline bool __init deferred_pfn_valid(unsigned long pfn)
+static unsigned long __next_pfn_valid_range(unsigned long *i,
+					    unsigned long end_pfn)
 {
-	if (!pfn_valid_within(pfn))
-		return false;
-	if (!(pfn & (pageblock_nr_pages - 1)) && !pfn_valid(pfn))
-		return false;
-	return true;
+	unsigned long pfn = *i;
+	unsigned long count;
+
+	while (pfn < end_pfn) {
+		unsigned long t = ALIGN(pfn + 1, pageblock_nr_pages);
+		unsigned long pageblock_pfn = min(t, end_pfn);
+
+#ifndef CONFIG_HOLES_IN_ZONE
+		count = pageblock_pfn - pfn;
+		pfn = pageblock_pfn;
+		if (!pfn_valid(pfn))
+			continue;
+#else
+		for (count = 0; pfn < pageblock_pfn; pfn++) {
+			if (pfn_valid_within(pfn)) {
+				count++;
+				continue;
+			}
+
+			if (count)
+				break;
+		}
+
+		if (!count)
+			continue;
+#endif
+		*i = pfn;
+		return count;
+	}
+
+	return 0;
 }
 
+#define for_each_deferred_pfn_valid_range(i, start_pfn, end_pfn, pfn, count) \
+	for (i = (start_pfn),						      \
+	     count = __next_pfn_valid_range(&i, (end_pfn));		      \
+	     count && ({ pfn = i - count; 1; });			      \
+	     count = __next_pfn_valid_range(&i, (end_pfn)))
+
 /*
  * Free pages to buddy allocator. Try to free aligned pages in
  * pageblock_nr_pages sizes.
  */
-static void __init deferred_free_pages(unsigned long pfn,
+static void __init deferred_free_pages(unsigned long start_pfn,
 				       unsigned long end_pfn)
 {
-	unsigned long nr_pgmask = pageblock_nr_pages - 1;
-	unsigned long nr_free = 0;
-
-	for (; pfn < end_pfn; pfn++) {
-		if (!deferred_pfn_valid(pfn)) {
-			deferred_free_range(pfn - nr_free, nr_free);
-			nr_free = 0;
-		} else if (!(pfn & nr_pgmask)) {
-			deferred_free_range(pfn - nr_free, nr_free);
-			nr_free = 1;
-			touch_nmi_watchdog();
+	unsigned long i, pfn, count;
+
+	for_each_deferred_pfn_valid_range(i, start_pfn, end_pfn, pfn, count) {
+		struct page *page = pfn_to_page(pfn);
+
+		if (count == pageblock_nr_pages) {
+			__free_pages_core(page, pageblock_order);
 		} else {
-			nr_free++;
+			while (count--)
+				__free_pages_core(page++, 0);
 		}
+
+		touch_nmi_watchdog();
 	}
-	/* Free the last block of pages to allocator */
-	deferred_free_range(pfn - nr_free, nr_free);
 }
 
 /*
@@ -1571,29 +1574,22 @@ static void __init deferred_free_pages(unsigned long pfn,
  * Return number of pages initialized.
  */
 static unsigned long __init deferred_init_pages(struct zone *zone,
-						 unsigned long pfn,
+						 unsigned long start_pfn,
 						 unsigned long end_pfn)
 {
-	unsigned long nr_pgmask = pageblock_nr_pages - 1;
+	unsigned long i, pfn, count;
 	int nid = zone_to_nid(zone);
 	unsigned long nr_pages = 0;
 	int zid = zone_idx(zone);
-	struct page *page = NULL;
 
-	for (; pfn < end_pfn; pfn++) {
-		if (!deferred_pfn_valid(pfn)) {
-			page = NULL;
-			continue;
-		} else if (!page || !(pfn & nr_pgmask)) {
-			page = pfn_to_page(pfn);
-			touch_nmi_watchdog();
-		} else {
-			page++;
-		}
-		__init_single_page(page, pfn, zid, nid);
-		nr_pages++;
+	for_each_deferred_pfn_valid_range(i, start_pfn, end_pfn, pfn, count) {
+		nr_pages += count;
+		__init_pageblock(pfn, count, zid, nid, NULL, false);
+
+		touch_nmi_watchdog();
 	}
-	return (nr_pages);
+
+	return nr_pages;
 }
 
 /*
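
---

For readers who want to experiment with the iteration scheme outside the
kernel, below is a minimal standalone C sketch of the cursor-based range
iterator this patch introduces. It is illustrative only, not the kernel
code: BLOCK_SIZE, toy_pfn_valid() and the hole at pfns 8-11 are invented
stand-ins for pageblock_nr_pages, pfn_valid() and a real memory-map hole,
it models only the head-pfn check of the !CONFIG_HOLES_IN_ZONE path, and
the GNU statement expression in the kernel macro is replaced with a
portable comma expression.

/* iterator_sketch.c: toy model of __next_pfn_valid_range() and
 * for_each_deferred_pfn_valid_range(). Build with: gcc iterator_sketch.c
 */
#include <stdio.h>
#include <stdbool.h>

#define BLOCK_SIZE 4UL	/* toy stand-in for pageblock_nr_pages */

/* Toy validity map: pretend pfns 8..11 are a hole in the memory map. */
static bool toy_pfn_valid(unsigned long pfn)
{
	return pfn < 8 || pfn >= 12;
}

/*
 * Advance the cursor *i past the next run of valid pfns and return the
 * length of that run, or 0 once end_pfn is reached. This mirrors the
 * contract of __next_pfn_valid_range(): the caller recovers the start
 * of the run as (*i - count).
 */
static unsigned long next_valid_range(unsigned long *i, unsigned long end_pfn)
{
	unsigned long pfn = *i;

	while (pfn < end_pfn) {
		/* Step to the next block boundary, clamped to end_pfn. */
		unsigned long block_end = (pfn / BLOCK_SIZE + 1) * BLOCK_SIZE;
		unsigned long count;

		if (block_end > end_pfn)
			block_end = end_pfn;

		count = block_end - pfn;
		pfn = block_end;

		/* Head-pfn check: skip the whole block if it is a hole. */
		if (!toy_pfn_valid(block_end - count))
			continue;

		*i = pfn;
		return count;
	}

	return 0;
}

/* Same shape as for_each_deferred_pfn_valid_range() in the patch. */
#define for_each_valid_range(i, start, end, pfn, count)		\
	for (i = (start), count = next_valid_range(&i, (end));	\
	     count && ((pfn = i - count), 1);			\
	     count = next_valid_range(&i, (end)))

int main(void)
{
	unsigned long i, pfn, count;

	/* Walk pfns 0..15; the iterator skips the hole at 8..11. */
	for_each_valid_range(i, 0UL, 16UL, pfn, count)
		printf("valid run: start pfn %lu, count %lu\n", pfn, count);

	return 0;
}

Run over pfns 0..15 this prints three four-page runs (starting at 0, 4
and 12), which is the same (pfn, count) stream that deferred_free_pages()
and deferred_init_pages() consume in the patch above.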