From patchwork Mon Jul 9 17:53:12 2018
X-Patchwork-Submitter: Pavel Tatashin
X-Patchwork-Id: 10515467
From: Pavel Tatashin <pasha.tatashin@oracle.com>
To: steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    kirill.shutemov@linux.intel.com, mhocko@suse.com, linux-mm@kvack.org,
    dan.j.williams@intel.com, jack@suse.cz, jglisse@redhat.com,
    jrdr.linux@gmail.com, bhe@redhat.com, gregkh@linuxfoundation.org,
    vbabka@suse.cz, richard.weiyang@gmail.com, dave.hansen@intel.com,
    rientjes@google.com, mingo@kernel.org, osalvador@techadventures.net,
    pasha.tatashin@oracle.com
Subject: [PATCH v4 3/3] mm/sparse: refactor sparse vmemmap buffer allocations
Date: Mon, 9 Jul 2018 13:53:12 -0400
Message-Id: <20180709175312.11155-4-pasha.tatashin@oracle.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180709175312.11155-1-pasha.tatashin@oracle.com>
References: <20180709175312.11155-1-pasha.tatashin@oracle.com>
When struct pages are allocated for the sparse-vmemmap VA layout, we first
try to allocate one large buffer, and then, if that fails, allocate struct
pages for each section as we go.

The code that allocates the buffer uses global variables and is spread
across several call sites.

Clean up the code by introducing three functions to handle the global
buffer:
  vmemmap_buffer_init()  initialize the buffer
  vmemmap_buffer_fini()  free the remaining part of the buffer
  vmemmap_buffer_alloc() allocate from the buffer, and if the buffer is
                         empty return NULL

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 mm/sparse-vmemmap.c | 72 ++++++++++++++++++++++++++-------------------
 1 file changed, 41 insertions(+), 31 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 87ba7cf8c75b..4e7f51aebabf 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -46,8 +46,42 @@ static void * __ref __earlyonly_bootmem_alloc(int node,
 				BOOTMEM_ALLOC_ACCESSIBLE, node);
 }
 
-static void *vmemmap_buf;
-static void *vmemmap_buf_end;
+static void *vmemmap_buf __meminitdata;
+static void *vmemmap_buf_end __meminitdata;
+
+static void __init vmemmap_buffer_init(int nid, unsigned long map_count)
+{
+	unsigned long sec_size = sizeof(struct page) * PAGES_PER_SECTION;
+	unsigned long alloc_size = ALIGN(sec_size, PMD_SIZE) * map_count;
+
+	BUG_ON(vmemmap_buf);
+	vmemmap_buf = __earlyonly_bootmem_alloc(nid, alloc_size, 0,
+						__pa(MAX_DMA_ADDRESS));
+	vmemmap_buf_end = vmemmap_buf + alloc_size;
+}
+
+static void __init vmemmap_buffer_fini(void)
+{
+	unsigned long size = vmemmap_buf_end - vmemmap_buf;
+
+	if (vmemmap_buf && size > 0)
+		memblock_free_early(__pa(vmemmap_buf), size);
+	vmemmap_buf = NULL;
+}
+
+static void * __meminit vmemmap_buffer_alloc(unsigned long size)
+{
+	void *ptr = NULL;
+
+	if (vmemmap_buf) {
+		ptr = (void *)ALIGN((unsigned long)vmemmap_buf, size);
+		if (ptr + size > vmemmap_buf_end)
+			ptr = NULL;
+		else
+			vmemmap_buf = ptr + size;
+	}
+	return ptr;
+}
 
 void * __meminit vmemmap_alloc_block(unsigned long size, int node)
 {
@@ -76,18 +110,10 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
 /* need to make sure size is all the same during early stage */
 void * __meminit vmemmap_alloc_block_buf(unsigned long size, int node)
 {
-	void *ptr;
-
-	if (!vmemmap_buf)
-		return vmemmap_alloc_block(size, node);
-
-	/* take the from buf */
-	ptr = (void *)ALIGN((unsigned long)vmemmap_buf, size);
-	if (ptr + size > vmemmap_buf_end)
-		return vmemmap_alloc_block(size, node);
-
-	vmemmap_buf = ptr + size;
+	void *ptr = vmemmap_buffer_alloc(size);
 
+	if (!ptr)
+		ptr = vmemmap_alloc_block(size, node);
 	return ptr;
 }
 
@@ -282,19 +308,9 @@ struct page * __init sparse_populate_node(unsigned long pnum_begin,
 					  unsigned long map_count, int nid)
 {
-	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
 	unsigned long pnum, map_index = 0;
-	void *vmemmap_buf_start;
-
-	size = ALIGN(size, PMD_SIZE) * map_count;
-	vmemmap_buf_start = __earlyonly_bootmem_alloc(nid, size,
-						      PMD_SIZE,
-						      __pa(MAX_DMA_ADDRESS));
-	if (vmemmap_buf_start) {
-		vmemmap_buf = vmemmap_buf_start;
-		vmemmap_buf_end = vmemmap_buf_start + size;
-	}
 
+	vmemmap_buffer_init(nid, map_count);
 	for (pnum = pnum_begin; map_index < map_count; pnum++) {
 		if (!present_section_nr(pnum))
 			continue;
@@ -303,14 +319,8 @@ struct page * __init sparse_populate_node(unsigned long pnum_begin,
 		map_index++;
 		BUG_ON(pnum >= pnum_end);
 	}
+	vmemmap_buffer_fini();
 
-	if (vmemmap_buf_start) {
-		/* need to free left buf */
-		memblock_free_early(__pa(vmemmap_buf),
-				    vmemmap_buf_end - vmemmap_buf);
-		vmemmap_buf = NULL;
-		vmemmap_buf_end = NULL;
-	}
 	return pfn_to_page(section_nr_to_pfn(pnum_begin));
 }