From patchwork Thu Oct 28 21:36:24 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12591233
Date: Thu, 28 Oct 2021 14:36:24 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, chenwandun@huawei.com, edumazet@google.com,
 guohanjun@huawei.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 npiggin@gmail.com, shakeelb@google.com, torvalds@linux-foundation.org,
 urezki@gmail.com, wangkefeng.wang@huawei.com
Subject: [patch 07/11] mm/vmalloc: fix numa spreading for large hash tables
Message-ID: <20211028213624.ioyXk3qpi%akpm@linux-foundation.org>
In-Reply-To: <20211028143506.5f5d5e2cd1f768a1da864844@linux-foundation.org>

From: Chen Wandun
Subject: mm/vmalloc: fix numa spreading for large hash tables

Eric Dumazet reported unexpected NUMA spreading behaviour in [1], and found
that commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings") introduced
the issue [2].

Digging into the difference before and after that commit, the page allocation
path changes:

before:
  alloc_large_system_hash
    __vmalloc
      __vmalloc_node(..., NUMA_NO_NODE, ...)
        __vmalloc_node_range
          __vmalloc_area_node
            alloc_page                  /* NUMA_NO_NODE, so the alloc_page branch is taken */
              alloc_pages_current
                alloc_page_interleave   /* can be verified by printing the policy mode */

after:
  alloc_large_system_hash
    __vmalloc
      __vmalloc_node(..., NUMA_NO_NODE, ...)
        __vmalloc_node_range
          __vmalloc_area_node
            alloc_pages_node            /* nid chosen by numa_mem_id() */
              __alloc_pages_node(nid, ....)

So after commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"), memory
is allocated on the current node instead of being interleaved across nodes.

[1] https://lore.kernel.org/linux-mm/CANn89iL6AAyWhfxdHO+jaT075iOa3XcYn9k6JJc7JR2XYn6k_Q@mail.gmail.com/
[2] https://lore.kernel.org/linux-mm/CANn89iLofTR=AK-QOZY87RdUZENCZUT4O6a0hvhu3_EwRMerOg@mail.gmail.com/

Link: https://lkml.kernel.org/r/20211021080744.874701-2-chenwandun@huawei.com
Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
Signed-off-by: Chen Wandun
Reported-by: Eric Dumazet
Cc: Shakeel Butt
Cc: Nicholas Piggin
Cc: Kefeng Wang
Cc: Hanjun Guo
Cc: Uladzislau Rezki
Signed-off-by: Andrew Morton
---

 mm/vmalloc.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

--- a/mm/vmalloc.c~mm-vmalloc-fix-numa-spreading-for-large-hash-tables
+++ a/mm/vmalloc.c
@@ -2816,6 +2816,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
+	struct page *page;
+	int i;
 
 	/*
 	 * For order-0 pages we make use of bulk allocator, if
@@ -2823,7 +2825,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * to fails, fallback to a single page allocator that is
 	 * more permissive.
 	 */
-	if (!order) {
+	if (!order && nid != NUMA_NO_NODE) {
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;
 
@@ -2848,7 +2850,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
-	} else
+	} else if (order)
 		/*
 		 * Compound pages required for remap_vmalloc_page if
 		 * high-order pages.
@@ -2856,11 +2858,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		gfp |= __GFP_COMP;
 
 	/* High-order pages or fallback path if "bulk" fails. */
-	while (nr_allocated < nr_pages) {
-		struct page *page;
-		int i;
 
-		page = alloc_pages_node(nid, gfp, order);
+	while (nr_allocated < nr_pages) {
+		if (nid == NUMA_NO_NODE)
+			page = alloc_pages(gfp, order);
+		else
+			page = alloc_pages_node(nid, gfp, order);
 
 		if (unlikely(!page))
 			break;
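
To see the resulting logic without applying the hunks by hand, below is a rough
reassembly of vm_area_alloc_pages() with this patch in place. It is a sketch,
not the verbatim kernel source: the order-0 bulk-allocator body and the
post-allocation bookkeeping (split_page(), filling pages[]) are elided as
comments.

static inline unsigned int
vm_area_alloc_pages(gfp_t gfp, int nid,
		unsigned int order, unsigned int nr_pages, struct page **pages)
{
	unsigned int nr_allocated = 0;
	struct page *page;
	int i;	/* used by the elided split/fill loop */

	if (!order && nid != NUMA_NO_NODE) {
		/*
		 * Order-0 pages with an explicit node: try the bulk
		 * allocator first (loop elided here).
		 */
	} else if (order)
		/* Compound pages required for remap_vmalloc_page. */
		gfp |= __GFP_COMP;

	/* High-order pages, NUMA_NO_NODE, or fallback if "bulk" fails. */
	while (nr_allocated < nr_pages) {
		if (nid == NUMA_NO_NODE)
			/*
			 * No node requested: go through alloc_pages() so the
			 * task's mempolicy (e.g. interleave) is honoured.
			 */
			page = alloc_pages(gfp, order);
		else
			page = alloc_pages_node(nid, gfp, order);

		if (unlikely(!page))
			break;

		/* ... split high-order pages and fill pages[] (elided) ... */
	}

	return nr_allocated;
}

The two NUMA_NO_NODE checks are the heart of the fix: the bulk allocator is
only used when the caller names a node, and the fallback loop routes
NUMA_NO_NODE through alloc_pages(), which consults the current task's
mempolicy and so restores interleaving for alloc_large_system_hash().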