From patchwork Wed Jun 27 01:31:16 2018
X-Patchwork-Submitter: Baoquan He
X-Patchwork-Id: 10490395
From: Baoquan He <bhe@redhat.com>
To: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, dave.hansen@intel.com, pagupta@redhat.com
Cc: linux-mm@kvack.org, kirill.shutemov@linux.intel.com, Baoquan He <bhe@redhat.com>
Subject: [PATCH v5 4/4] mm/sparse: Optimize memmap allocation during sparse_init()
Date: Wed, 27 Jun 2018 09:31:16 +0800
Message-Id: <20180627013116.12411-5-bhe@redhat.com>
In-Reply-To: <20180627013116.12411-1-bhe@redhat.com>
References: <20180627013116.12411-1-bhe@redhat.com>

In sparse_init(), two temporary pointer arrays, usemap_map and map_map,
are allocated with the size of NR_MEM_SECTIONS. They are used to store
each present memory section's usemap and memmap. With the help of these
two arrays, contiguous memory chunks are allocated for the usemaps and
memmaps of the memory sections on one node.
This avoids excessive memory fragmentation. In the diagram below, '1'
indicates a present memory section and '0' an absent one. The number
'n' can be much smaller than NR_MEM_SECTIONS on most systems.

|1|1|1|1|0|0|0|0|1|1|0|0|...|1|0||1|0|...|1||0|1|...|0|
-------------------------------------------------------
 0 1 2 3 4 5               i i+1                  n-1 n

If we fail to populate the page tables to map one section's memmap, its
->section_mem_map will be cleared at the end to indicate that the
section is not present. After use, these two arrays are released at the
end of sparse_init().

In 4-level paging mode each array costs 4 MB, which is negligible. In
5-level paging mode, however, they cost 256 MB each, 512 MB altogether.
A kdump kernel usually reserves very little memory, e.g. 256 MB, so
even though the arrays are only temporarily allocated, this is still
not acceptable.

In fact, there is no need to allocate them with the size of
NR_MEM_SECTIONS. Since the ->section_mem_map clearing has been deferred
to the end, the number of present memory sections stays the same during
sparse_init() until we finally clear a memory section's
->section_mem_map because its usemap or memmap was not correctly
handled. Thus, whenever a for_each_present_section_nr() loop runs in
the meantime, the i-th present memory section is always the same one.

Therefore, only allocate usemap_map and map_map with the size of
'nr_present_sections'. For the i-th present memory section, install its
usemap and memmap to usemap_map[i] and map_map[i] during allocation.
Then, in the final for_each_present_section_nr() loop which clears a
failed memory section's ->section_mem_map, fetch the usemap and memmap
from the usemap_map[] and map_map[] arrays and set them into
mem_section[] accordingly.

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Pavel Tatashin
---
 mm/sparse-vmemmap.c |  5 +++--
 mm/sparse.c         | 43 ++++++++++++++++++++++++++++++++++---------
 2 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 640e68f8324b..a98ec2fb6915 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -281,6 +281,7 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 	unsigned long pnum;
 	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
 	void *vmemmap_buf_start;
+	int nr_consumed_maps = 0;
 
 	size = ALIGN(size, PMD_SIZE);
 	vmemmap_buf_start = __earlyonly_bootmem_alloc(nodeid, size * map_count,
@@ -297,8 +298,8 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 		if (!present_section_nr(pnum))
 			continue;
 
-		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
-		if (map_map[pnum])
+		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
+		if (map_map[nr_consumed_maps++])
 			continue;
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
diff --git a/mm/sparse.c b/mm/sparse.c
index b2848cc6e32a..2eb8ee72e44d 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -386,6 +386,7 @@ static void __init sparse_early_usemaps_alloc_node(void *data,
 	unsigned long pnum;
 	unsigned long **usemap_map = (unsigned long **)data;
 	int size = usemap_size();
+	int nr_consumed_maps = 0;
 
 	usemap = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nodeid),
 							  size * usemap_count);
@@ -397,9 +398,10 @@ static void __init sparse_early_usemaps_alloc_node(void *data,
 	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 		if (!present_section_nr(pnum))
 			continue;
-		usemap_map[pnum] = usemap;
+		usemap_map[nr_consumed_maps] = usemap;
 		usemap += size;
-		check_usemap_section_nr(nodeid, usemap_map[pnum]);
+		check_usemap_section_nr(nodeid, usemap_map[nr_consumed_maps]);
+		nr_consumed_maps++;
 	}
 }
 
@@ -424,29 +426,33 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 	void *map;
 	unsigned long pnum;
 	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
+	int nr_consumed_maps;
 
 	size = PAGE_ALIGN(size);
 	map = memblock_virt_alloc_try_nid_raw(size * map_count,
 					      PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
 					      BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
 	if (map) {
+		nr_consumed_maps = 0;
 		for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 			if (!present_section_nr(pnum))
 				continue;
-			map_map[pnum] = map;
+			map_map[nr_consumed_maps] = map;
 			map += size;
+			nr_consumed_maps++;
 		}
 		return;
 	}
 
 	/* fallback */
+	nr_consumed_maps = 0;
 	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 		struct mem_section *ms;
 
 		if (!present_section_nr(pnum))
 			continue;
-		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
-		if (map_map[pnum])
+		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
+		if (map_map[nr_consumed_maps++])
 			continue;
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
@@ -526,6 +532,7 @@ static void __init alloc_usemap_and_memmap(void (*alloc_func)
 			/* new start, update count etc*/
 			nodeid_begin = nodeid;
 			pnum_begin = pnum;
+			data += map_count * data_unit_size;
 			map_count = 1;
 		}
 		/* ok, last chunk */
@@ -544,6 +551,7 @@ void __init sparse_init(void)
 	unsigned long *usemap;
 	unsigned long **usemap_map;
 	int size;
+	int nr_consumed_maps = 0;
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
 	int size2;
 	struct page **map_map;
@@ -566,7 +574,7 @@ void __init sparse_init(void)
 	 * powerpc need to call sparse_init_one_section right after each
 	 * sparse_early_mem_map_alloc, so allocate usemap_map at first.
 	 */
-	size = sizeof(unsigned long *) * NR_MEM_SECTIONS;
+	size = sizeof(unsigned long *) * nr_present_sections;
 	usemap_map = memblock_virt_alloc(size, 0);
 	if (!usemap_map)
 		panic("can not allocate usemap_map\n");
@@ -575,7 +583,7 @@ void __init sparse_init(void)
 							sizeof(usemap_map[0]));
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
-	size2 = sizeof(struct page *) * NR_MEM_SECTIONS;
+	size2 = sizeof(struct page *) * nr_present_sections;
 	map_map = memblock_virt_alloc(size2, 0);
 	if (!map_map)
 		panic("can not allocate map_map\n");
@@ -584,27 +592,44 @@ void __init sparse_init(void)
 							sizeof(map_map[0]));
 #endif
 
+	/* The number of present sections stored in nr_present_sections
+	 * is kept the same since mem sections are marked as present in
+	 * memory_present(). In this for loop, we need to check which
+	 * sections failed to allocate memmap or usemap, then clear their
+	 * ->section_mem_map accordingly. During this process, we need to
+	 * increase 'nr_consumed_maps' whether the allocation of memmap
+	 * or usemap failed or not, so that after we handle the i-th
+	 * memory section we can get the memmap and usemap of the
+	 * (i+1)-th section correctly. */
 	for_each_present_section_nr(0, pnum) {
 		struct mem_section *ms;
+
+		if (nr_consumed_maps >= nr_present_sections) {
+			pr_err("nr_consumed_maps goes beyond nr_present_sections\n");
+			break;
+		}
 		ms = __nr_to_section(pnum);
-		usemap = usemap_map[pnum];
+		usemap = usemap_map[nr_consumed_maps];
 		if (!usemap) {
 			ms->section_mem_map = 0;
+			nr_consumed_maps++;
 			continue;
 		}
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
-		map = map_map[pnum];
+		map = map_map[nr_consumed_maps];
 #else
 		map = sparse_early_mem_map_alloc(pnum);
 #endif
 		if (!map) {
 			ms->section_mem_map = 0;
+			nr_consumed_maps++;
 			continue;
 		}
 
 		sparse_init_one_section(__nr_to_section(pnum), pnum, map,
 								usemap);
+		nr_consumed_maps++;
 	}
 
 	vmemmap_populate_print_last();
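
For reference, below is a minimal standalone C sketch (not kernel code;
TOTAL_SECTIONS, the present[] table and the printed mapping are made-up
stand-ins) of the indexing scheme the patch switches to: the per-section
pointer array is sized by the number of present sections and filled via a
running counter (nr_consumed_maps in the patch), rather than being sized by
the total number of sections and indexed by section number. This stays valid
only because the set of present sections does not change until the final
clearing loop.

/* Illustrative sketch only; build with any C99 compiler. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define TOTAL_SECTIONS 16	/* stand-in for NR_MEM_SECTIONS */

/* 1 = present, 0 = absent; stand-in for present_section_nr() */
static const int present[TOTAL_SECTIONS] = {
	1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0
};

int main(void)
{
	int nr_present = 0, nr_consumed = 0;
	void **map;	/* densely packed, like usemap_map/map_map here */

	for (int i = 0; i < TOTAL_SECTIONS; i++)
		nr_present += present[i];

	/* Allocate nr_present slots instead of TOTAL_SECTIONS slots. */
	map = calloc(nr_present, sizeof(*map));
	if (!map)
		return 1;

	/* Walk all sections; only present ones consume a slot. */
	for (int pnum = 0; pnum < TOTAL_SECTIONS; pnum++) {
		if (!present[pnum])
			continue;
		/* In the patch this slot receives the section's memmap/usemap. */
		map[nr_consumed++] = (void *)(intptr_t)pnum;
		printf("section %2d -> slot %d\n", pnum, nr_consumed - 1);
	}

	free(map);
	return 0;
}

With the sparse present[] table above, only 8 of the 16 slots are ever
allocated, which mirrors how nr_present_sections bounds the temporary
arrays in sparse_init().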