From patchwork Tue Nov 27 02:36:30 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wei Yang <richard.weiyang@gmail.com>
X-Patchwork-Id: 10699593
From: Wei Yang <richard.weiyang@gmail.com>
To: akpm@linux-foundation.org, mhocko@suse.com
Cc: linux-mm@kvack.org, Wei Yang <richard.weiyang@gmail.com>
Subject: [PATCH] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section()
Date: Tue, 27 Nov 2018 10:36:30 +0800
Message-Id: <20181127023630.9066-1-richard.weiyang@gmail.com>
X-Mailer: git-send-email 2.15.1

pgdat_resize_lock is used to protect pgdat's memory region information,
e.g. node_start_pfn, node_present_pages, etc. In sparse_add_one_section()
and sparse_remove_one_section(), this data is not touched, so it is not
necessary to take pgdat_resize_lock there.

Since the only information sparse_add_one_section() needs from the pgdat
is the node id (used to allocate memory on the proper node),
this patch also changes the prototype of sparse_add_one_section() to take
the node id directly. This avoids the misleading impression that
sparse_add_one_section() would touch the pgdat.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 include/linux/memory_hotplug.h |  2 +-
 mm/memory_hotplug.c            |  2 +-
 mm/sparse.c                    | 17 +++++------------
 3 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 45a5affcab8a..3787d4e913e6 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -333,7 +333,7 @@ extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap);
 extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
 extern bool is_memblock_offlined(struct memory_block *mem);
-extern int sparse_add_one_section(struct pglist_data *pgdat,
+extern int sparse_add_one_section(int nid,
 		unsigned long start_pfn, struct vmem_altmap *altmap);
 extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f626e7e5f57b..5b3a3d7b4466 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -253,7 +253,7 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
 	if (pfn_valid(phys_start_pfn))
 		return -EEXIST;
 
-	ret = sparse_add_one_section(NODE_DATA(nid), phys_start_pfn, altmap);
+	ret = sparse_add_one_section(nid, phys_start_pfn, altmap);
 	if (ret < 0)
 		return ret;
 
diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..a4fdbcb21514 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -662,25 +662,24 @@ static void free_map_bootmem(struct page *memmap)
  * set. If this is <=0, then that means that the passed-in
  * map was not consumed and must be freed.
  */
-int __meminit sparse_add_one_section(struct pglist_data *pgdat,
-		unsigned long start_pfn, struct vmem_altmap *altmap)
+int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
+		struct vmem_altmap *altmap)
 {
 	unsigned long section_nr = pfn_to_section_nr(start_pfn);
 	struct mem_section *ms;
 	struct page *memmap;
 	unsigned long *usemap;
-	unsigned long flags;
 	int ret;
 
 	/*
 	 * no locking for this, because it does its own
 	 * plus, it does a kmalloc
 	 */
-	ret = sparse_index_init(section_nr, pgdat->node_id);
+	ret = sparse_index_init(section_nr, nid);
 	if (ret < 0 && ret != -EEXIST)
 		return ret;
 	ret = 0;
-	memmap = kmalloc_section_memmap(section_nr, pgdat->node_id, altmap);
+	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
 	if (!memmap)
 		return -ENOMEM;
 	usemap = __kmalloc_section_usemap();
@@ -689,8 +688,6 @@ int __meminit sparse_add_one_section(struct pglist_data *pgdat,
 		return -ENOMEM;
 	}
 
-	pgdat_resize_lock(pgdat, &flags);
-
 	ms = __pfn_to_section(start_pfn);
 	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
 		ret = -EEXIST;
@@ -707,7 +704,6 @@ int __meminit sparse_add_one_section(struct pglist_data *pgdat,
 	sparse_init_one_section(ms, section_nr, memmap, usemap);
 
 out:
-	pgdat_resize_unlock(pgdat, &flags);
 	if (ret < 0) {
 		kfree(usemap);
 		__kfree_section_memmap(memmap, altmap);
@@ -769,10 +765,8 @@ void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap)
 {
 	struct page *memmap = NULL;
-	unsigned long *usemap = NULL, flags;
-	struct pglist_data *pgdat = zone->zone_pgdat;
+	unsigned long *usemap = NULL;
 
-	pgdat_resize_lock(pgdat, &flags);
 	if (ms->section_mem_map) {
 		usemap = ms->pageblock_flags;
 		memmap = sparse_decode_mem_map(ms->section_mem_map,
@@ -780,7 +774,6 @@ void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
 		ms->section_mem_map = 0;
 		ms->pageblock_flags = NULL;
 	}
-	pgdat_resize_unlock(pgdat, &flags);
 
 	clear_hwpoisoned_pages(memmap + map_offset,
 			PAGES_PER_SECTION - map_offset);
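
Note: below is a minimal userspace sketch (not part of the patch, and not
kernel code) of the interface change described in the changelog: the caller
already knows the node id, so passing nid directly removes the only thing
sparse_add_one_section() needed the pgdat pointer for. The struct and both
function bodies are stand-ins, and the altmap argument is omitted for
brevity; only the leading parameters mirror the kernel prototypes touched
by this patch.

#include <stdio.h>

/* Stand-in for the kernel's pglist_data; only node_id matters here. */
struct pglist_data { int node_id; };

static struct pglist_data nodes[2] = { { .node_id = 0 }, { .node_id = 1 } };
#define NODE_DATA(nid) (&nodes[nid])

/* Old shape: takes the whole pgdat, but only ever reads pgdat->node_id. */
static int sparse_add_one_section_old(struct pglist_data *pgdat,
				      unsigned long start_pfn)
{
	printf("old: add section for pfn %#lx on node %d\n",
	       start_pfn, pgdat->node_id);
	return 0;
}

/* New shape: takes the node id directly; no pgdat access at all. */
static int sparse_add_one_section_new(int nid, unsigned long start_pfn)
{
	printf("new: add section for pfn %#lx on node %d\n", start_pfn, nid);
	return 0;
}

int main(void)
{
	int nid = 1;
	unsigned long start_pfn = 0x8000;

	sparse_add_one_section_old(NODE_DATA(nid), start_pfn); /* before */
	sparse_add_one_section_new(nid, start_pfn);            /* after  */
	return 0;
}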