From patchwork Wed Jul 8 11:38:25 2020
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 11651437
From: Jonathan Cameron
Subject: [PATCH] arm64: numa: rightsize the distance array
Date: Wed, 8 Jul 2020 19:38:25 +0800
Message-ID: <20200708113825.1429671-1-Jonathan.Cameron@huawei.com>
X-Mailer: git-send-email 2.19.1
Cc: Barry Song, Lorenzo Pieralisi, Hanjun Guo, linuxarm@huawei.com,
 Jonathan Cameron, Sudeep Holla, Tejun Heo, Dan Williams

Unfortunately we currently call numa_alloc_distance() well before
setup_node_to_cpumask_map(), which means that nr_node_ids is still set to
MAX_NUMNODES when the distance table is sized. This wastes a bit of memory
and is confusing to the reader. Note we could decide to simply hardcode the
size as MAX_NUMNODES, but if so we should do that explicitly.

Looking at what x86 does: it walks nodes_parsed and locally establishes the
maximum node count seen. We can't do that at the point where we previously
called numa_alloc_distance() in numa_init(), because nodes_parsed isn't set
up yet either. So take a leaf out of x86's book and rely on the fact that
nodes_parsed will definitely be populated before we try to put a real value
into this array: allocate the table on demand. To avoid trying and failing
to allocate the array multiple times, do the same thing as x86 and set
numa_distance = 1 on failure. This requires a few small modifications
elsewhere.

Worth noting that, with one exception (which it appears can be removed [1]),
the x86 and arm64 numa distance code is now identical. Worth factoring it
out to some common location?

[1] https://lkml.kernel.org/r/20170406124459.dwn5zhpr2xqg3lqm@node.shutemov.name
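To make the sizing rule concrete, here is a small user-space sketch of the
walk described above (the mask value and the MAX_NUMNODES stand-in are
invented for the example; this is not the kernel code): the table is
dimensioned by the highest parsed node id plus one rather than by
nr_node_ids, so it can still be indexed as table[from * cnt + to] even when
node ids are sparse.

/*
 * Illustrative user-space sketch of the sizing walk (not kernel code):
 * dimension the table by the highest parsed node id + 1 so it can be
 * indexed as table[from * cnt + to] even with sparse node numbering.
 */
#include <stdio.h>

#define FAKE_MAX_NUMNODES 64	/* stand-in for the kernel's MAX_NUMNODES */

int main(void)
{
	unsigned long long parsed = 0;
	int i, cnt = 0;

	/* pretend firmware described nodes 0 and 3 only */
	parsed |= 1ULL << 0;
	parsed |= 1ULL << 3;

	/* equivalent of for_each_node_mask(): remember the highest set bit */
	for (i = 0; i < FAKE_MAX_NUMNODES; i++)
		if (parsed & (1ULL << i))
			cnt = i;
	cnt++;

	printf("distance table: %d x %d entries instead of %d x %d\n",
	       cnt, cnt, FAKE_MAX_NUMNODES, FAKE_MAX_NUMNODES);
	return 0;
}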
Signed-off-by: Jonathan Cameron
Reviewed-by: Barry Song
---
 arch/arm64/mm/numa.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index aafcee3e3f7e..a2f549ef0a36 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -255,13 +255,11 @@ void __init numa_free_distance(void)
 {
 	size_t size;
 
-	if (!numa_distance)
-		return;
-
 	size = numa_distance_cnt * numa_distance_cnt *
 		sizeof(numa_distance[0]);
-
-	memblock_free(__pa(numa_distance), size);
+	/* numa_distance could be 1LU marking allocation failure, test cnt */
+	if (numa_distance_cnt)
+		memblock_free(__pa(numa_distance), size);
 	numa_distance_cnt = 0;
 	numa_distance = NULL;
 }
@@ -271,20 +269,29 @@ void __init numa_free_distance(void)
  */
 static int __init numa_alloc_distance(void)
 {
+	nodemask_t nodes_parsed;
 	size_t size;
+	int i, j, cnt = 0;
 	u64 phys;
-	int i, j;
 
-	size = nr_node_ids * nr_node_ids * sizeof(numa_distance[0]);
+	/* size the new table and allocate it */
+	nodes_parsed = numa_nodes_parsed;
+	for_each_node_mask(i, nodes_parsed)
+		cnt = i;
+	cnt++;
+	size = cnt * cnt * sizeof(numa_distance[0]);
 	phys = memblock_find_in_range(0, PFN_PHYS(max_pfn),
 				      size, PAGE_SIZE);
-	if (WARN_ON(!phys))
+	if (!phys) {
+		pr_warn("Warning: can't allocate distance table!\n");
+		/* don't retry until explicitly reset */
+		numa_distance = (void *)1LU;
 		return -ENOMEM;
-
+	}
 	memblock_reserve(phys, size);
 
 	numa_distance = __va(phys);
-	numa_distance_cnt = nr_node_ids;
+	numa_distance_cnt = cnt;
 
 	/* fill with the default distances */
 	for (i = 0; i < numa_distance_cnt; i++)
@@ -311,10 +318,8 @@ static int __init numa_alloc_distance(void)
  */
 void __init numa_set_distance(int from, int to, int distance)
 {
-	if (!numa_distance) {
-		pr_warn_once("Warning: distance table not allocated yet\n");
+	if (!numa_distance && numa_alloc_distance() < 0)
 		return;
-	}
 
 	if (from >= numa_distance_cnt || to >= numa_distance_cnt ||
 			from < 0 || to < 0) {
@@ -384,10 +389,6 @@ static int __init numa_init(int (*init_func)(void))
 	nodes_clear(node_possible_map);
 	nodes_clear(node_online_map);
 
-	ret = numa_alloc_distance();
-	if (ret < 0)
-		return ret;
-
 	ret = init_func();
 	if (ret < 0)
 		goto out_free_distance;
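For anyone who wants to poke at the failure-handling pattern outside the
kernel, below is a rough user-space model of the 1LU sentinel borrowed from
x86 (every name in it is made up for the illustration): on allocation
failure the pointer is set to a non-NULL junk value so later callers do not
retry, and the free path keys off the element count rather than the pointer.

/* Rough user-space model of the "numa_distance = 1" sentinel (not kernel code). */
#include <stdio.h>
#include <stdlib.h>

static int *distance;		/* NULL: not tried yet; (int *)1: tried and failed */
static int distance_cnt;

static int alloc_distance(int cnt)
{
	distance = calloc((size_t)cnt * cnt, sizeof(*distance));
	if (!distance) {
		/* don't retry until explicitly reset */
		distance = (int *)1UL;
		return -1;
	}
	distance_cnt = cnt;
	return 0;
}

static void set_distance(int from, int to, int val)
{
	/* allocate lazily on first use; give up quietly if it already failed */
	if (!distance && alloc_distance(4) < 0)
		return;
	if (from < 0 || to < 0 || from >= distance_cnt || to >= distance_cnt)
		return;
	distance[from * distance_cnt + to] = val;
}

static void free_distance(void)
{
	/* distance may hold the sentinel, so test the count, not the pointer */
	if (distance_cnt)
		free(distance);
	distance_cnt = 0;
	distance = NULL;
}

int main(void)
{
	set_distance(0, 3, 20);
	if (distance_cnt)
		printf("distance[0][3] = %d\n", distance[0 * distance_cnt + 3]);
	free_distance();
	return 0;
}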