From patchwork Tue Sep 28 12:10:40 2021
X-Patchwork-Submitter: Chen Wandun
X-Patchwork-Id: 12522369
From: Chen Wandun <chenwandun@huawei.com>
Subject: [PATCH] mm/vmalloc: fix numa spreading for large hash tables
Date: Tue, 28 Sep 2021 20:10:40 +0800
Message-ID: <20210928121040.2547407-1-chenwandun@huawei.com>

Eric Dumazet reported strange NUMA spreading behaviour in [1], and
found that commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc
mappings") introduced the issue [2].

Digging into the difference before and after that commit shows that
the page allocation path changed:

before:
  alloc_large_system_hash
    __vmalloc
      __vmalloc_node(..., NUMA_NO_NODE, ...)
        __vmalloc_node_range
          __vmalloc_area_node
            alloc_page			/* because nid == NUMA_NO_NODE, the alloc_page branch is chosen */
              alloc_pages_current
                alloc_page_interleave	/* can be verified by printing the policy mode */

after:
  alloc_large_system_hash
    __vmalloc
      __vmalloc_node(..., NUMA_NO_NODE, ...)
        __vmalloc_node_range
          __vmalloc_area_node
            alloc_pages_node	/* nid chosen by numa_mem_id() */
              __alloc_pages_node(nid, ...)

So after commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"),
memory is allocated on the current node instead of being interleaved
across nodes, which defeats the NUMA spreading that large system hash
tables rely on.

[1] https://lore.kernel.org/linux-mm/CANn89iL6AAyWhfxdHO+jaT075iOa3XcYn9k6JJc7JR2XYn6k_Q@mail.gmail.com/
[2] https://lore.kernel.org/linux-mm/CANn89iLofTR=AK-QOZY87RdUZENCZUT4O6a0hvhu3_EwRMerOg@mail.gmail.com/

Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
Reported-by: Eric Dumazet
Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
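For readers skimming the diff below, here is a condensed, illustrative
sketch of the dispatch the patch restores. It is not part of the patch,
and the helper name pick_page() is hypothetical; the mempolicy comments
reflect the call chains traced above. With NUMA_NO_NODE the allocation
goes through the mempolicy-aware alloc_pages()/alloc_page() path, so
e.g. an interleave policy spreads pages across nodes, while an explicit
nid pins every page to that node via alloc_pages_node():

	/* Sketch only (needs <linux/gfp.h>); not the patch itself. */
	static struct page *pick_page(int nid, gfp_t gfp, unsigned int order)
	{
		if (nid == NUMA_NO_NODE)
			/* Policy-aware path: honors the task mempolicy,
			 * e.g. MPOL_INTERLEAVE spreads across nodes. */
			return alloc_pages(gfp, order);

		/* Explicit node: every page comes from nid. */
		return alloc_pages_node(nid, gfp, order);
	}
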
 mm/vmalloc.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f884706c5280..48e717626e94 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2823,6 +2823,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
+	struct page *page;
+	int i;
 
 	/*
 	 * For order-0 pages we make use of bulk allocator, if
@@ -2833,6 +2835,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	if (!order) {
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;
+			page = NULL;
 
 			/*
 			 * A maximum allowed request is hard-coded and is 100
@@ -2842,9 +2845,23 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			 */
 			nr_pages_request = min(100U, nr_pages - nr_allocated);
 
-			nr = alloc_pages_bulk_array_node(gfp, nid,
-				nr_pages_request, pages + nr_allocated);
-
+			if (nid == NUMA_NO_NODE) {
+				for (i = 0; i < nr_pages_request; i++) {
+					page = alloc_page(gfp);
+					if (page)
+						pages[nr_allocated + i] = page;
+					else {
+						nr = i;
+						break;
+					}
+				}
+				if (i >= nr_pages_request)
+					nr = nr_pages_request;
+			} else {
+				nr = alloc_pages_bulk_array_node(gfp, nid,
+						nr_pages_request,
+						pages + nr_allocated);
+			}
 			nr_allocated += nr;
 			cond_resched();
 
@@ -2863,11 +2880,13 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		gfp |= __GFP_COMP;
 
 	/* High-order pages or fallback path if "bulk" fails. */
-	while (nr_allocated < nr_pages) {
-		struct page *page;
-		int i;
 
-		page = alloc_pages_node(nid, gfp, order);
+	page = NULL;
+	while (nr_allocated < nr_pages) {
+		if (nid == NUMA_NO_NODE)
+			page = alloc_pages(gfp, order);
+		else
+			page = alloc_pages_node(nid, gfp, order);
 		if (unlikely(!page))
 			break;
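
On the "can be verified by printing the policy mode" remark in the
changelog: a minimal debug aid, assuming one temporarily adds it to
__vmalloc_area_node() (it is not part of this patch), could look like
the following; it needs <linux/mempolicy.h> and prints the task
mempolicy mode so an interleave policy (MPOL_INTERLEAVE) is visible:

	/* Hypothetical debug printk, for illustration only. */
	pr_info("vmalloc: task mempolicy mode = %d (MPOL_INTERLEAVE = %d)\n",
		current->mempolicy ? current->mempolicy->mode : -1,
		MPOL_INTERLEAVE);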