From patchwork Thu Mar 28 21:15:40 2024
X-Patchwork-Id: 13609647
From: Oreoluwa Babatunde
Subject: [PATCH v5 1/4] of: reserved_mem: Restructure how the reserved memory regions are processed
Date: Thu, 28 Mar 2024 14:15:40 -0700
Message-ID: <20240328211543.191876-2-quic_obabatun@quicinc.com>
In-Reply-To: <20240328211543.191876-1-quic_obabatun@quicinc.com>
References: <20240328211543.191876-1-quic_obabatun@quicinc.com>

The current implementation processes the reserved memory regions in two stages, carried out by two separate functions within early_init_fdt_scan_reserved_mem(). Across both stages, the reserved memory regions are split into two groups which are processed differently:

i) Statically-placed reserved memory regions, i.e. regions defined with a fixed start address and size using the "reg" property in the DT.

ii) Dynamically-placed reserved memory regions, i.e. regions defined by specifying a range of addresses where they can be placed in memory, using the "alloc-ranges" and "size" properties in the DT.

Stage 1: fdt_scan_reserved_mem()
This stage scans through the reserved memory nodes defined in the devicetree and does the following for each node:

1) If the node represents a statically-placed reserved memory region, i.e. it is defined using the "reg" property:
   - Call memblock_reserve() or memblock_mark_nomap() as needed.
   - Add the information for the reserved region to the reserved_mem array, e.g.:
     fdt_reserved_mem_save_node(node, name, base, size);

2) If the node represents a dynamically-placed reserved memory region, i.e. it is defined using the "alloc-ranges" and "size" properties:
   - Add the information for the region to the reserved_mem array with the starting address and size set to 0, e.g.:
     fdt_reserved_mem_save_node(node, name, 0, 0);

Stage 2: fdt_init_reserved_mem()
This stage iterates through the reserved_mem array populated in stage 1 and does the following for each entry:

1) If the entry represents a statically-placed reserved memory region:
   - Call the region-specific init function.

2) If the entry represents a dynamically-placed reserved memory region:
   - Call __reserved_mem_alloc_size(), which allocates memory for the region using memblock_phys_alloc_range() and calls memblock_mark_nomap() on the allocated region if it is specified as a no-map region.
   - Call the region-specific init function.

On architectures such as arm64, the dynamic allocation of the reserved_mem array needs to be done after the page tables have been set up because memblock-allocated memory is not writable until then. This means that the reserved_mem array is not available to store any reserved memory information until after the page tables have been set up.

It is possible to call memblock_reserve() and memblock_mark_nomap() on the statically-placed reserved memory regions without saving them to the reserved_mem array until later, because all the information needed for that is already present in the devicetree.
Dynamically-placed reserved memory regions, on the other hand, are only assigned a start address at runtime, and since memblock_reserve() and memblock_mark_nomap() need to be called before the memory mappings are created, the allocation has to happen before the page tables are set up.

To make it easier to handle the dynamically-placed reserved memory regions before the page tables are set up, this patch reworks the two stages above as follows:

Stage 1: fdt_scan_reserved_mem()
This stage scans through the reserved memory nodes defined in the devicetree and does the following for each node:

1) If the node represents a statically-placed reserved memory region, i.e. it is defined using the "reg" property:
   - Call memblock_reserve() or memblock_mark_nomap() as needed.

2) If the node represents a dynamically-placed reserved memory region, i.e. it is defined using the "alloc-ranges" and "size" properties:
   - Call __reserved_mem_alloc_size(), which will:
     i) Allocate memory for the reserved memory region.
     ii) Call memblock_mark_nomap() as needed.
         Note: there is no need to explicitly call memblock_reserve() here because memblock already does so when the memory for the region is allocated.
     iii) Save the information for the region in the reserved_mem array.

Stage 2: fdt_init_reserved_mem()
This stage now:

1) Adds the information for the statically-placed reserved memory regions to the reserved_mem array.

2) Iterates through all the entries in the array and calls the region-specific init function for each of them.

fdt_init_reserved_mem() is also now called from within unflatten_device_tree() so that this stage happens after the page tables have been set up.
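To make the ordering constraint concrete, here is a simplified sketch of the resulting early-boot sequence. The wrapper function is illustrative only (it is not a copy of any architecture's setup_arch(), and paging_init()'s declaration is architecture-specific); the three calls themselves are existing kernel entry points:

#include <linux/init.h>
#include <linux/of_fdt.h>

/* Illustrative only: the ordering this series relies on, not real arch code. */
void __init example_setup_arch(void)
{
	/* Stage 1: reserve static regions and allocate dynamic ones via memblock. */
	early_init_fdt_scan_reserved_mem();

	/* Page tables are created; memblock-allocated memory becomes writable. */
	paging_init();

	/* Stage 2: unflatten_device_tree() now also calls fdt_init_reserved_mem(). */
	unflatten_device_tree();
}

The key point is simply that all memblock reservations and allocations finish before the page tables are created, while the reserved_mem bookkeeping and the per-region init hooks run afterwards.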
Signed-off-by: Oreoluwa Babatunde --- drivers/of/fdt.c | 5 +- drivers/of/of_private.h | 1 + drivers/of/of_reserved_mem.c | 134 +++++++++++++++++++++++++---------- 3 files changed, 100 insertions(+), 40 deletions(-) diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index a8a04f27915b..527e6bc1c096 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -532,8 +532,6 @@ void __init early_init_fdt_scan_reserved_mem(void) break; memblock_reserve(base, size); } - - fdt_init_reserved_mem(); } /** @@ -1259,6 +1257,9 @@ void __init unflatten_device_tree(void) of_alias_scan(early_init_dt_alloc_memory_arch); unittest_unflatten_overlay_base(); + + /* initialize the reserved memory regions */ + fdt_init_reserved_mem(); } /** diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h index 485483524b7f..9ea250b80657 100644 --- a/drivers/of/of_private.h +++ b/drivers/of/of_private.h @@ -9,6 +9,7 @@ */ #define FDT_ALIGN_SIZE 8 +#define MAX_RESERVED_REGIONS 64 /** * struct alias_prop - Alias property in 'aliases' node diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 8236ecae2953..db991de16cc0 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -27,7 +27,6 @@ #include "of_private.h" -#define MAX_RESERVED_REGIONS 64 static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS]; static int reserved_mem_count; @@ -106,7 +105,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node, phys_addr_t base, size; int len; const __be32 *prop; - int first = 1; bool nomap; prop = of_get_flat_dt_prop(node, "reg", &len); @@ -134,10 +132,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node, uname, &base, (unsigned long)(size / SZ_1M)); len -= t_len; - if (first) { - fdt_reserved_mem_save_node(node, uname, base, size); - first = 0; - } } return 0; } @@ -165,12 +159,69 @@ static int __init __reserved_mem_check_root(unsigned long node) return 0; } +/** + * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined + * reserved memory regions. + * + * This function is used to scan through the DT and store the + * information for the reserved memory regions that are defined using + * the "reg" property. The region node number, name, base address, and + * size are all stored in the reserved_mem array by calling the + * fdt_reserved_mem_save_node() function. 
+ */ +static void __init fdt_scan_reserved_mem_reg_nodes(void) +{ + int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32); + const void *fdt = initial_boot_params; + phys_addr_t base, size; + const __be32 *prop; + int node, child; + int len; + + node = fdt_path_offset(fdt, "/reserved-memory"); + if (node < 0) { + pr_info("Reserved memory: No reserved-memory node in the DT\n"); + return; + } + + if (__reserved_mem_check_root(node)) { + pr_err("Reserved memory: unsupported node format, ignoring\n"); + return; + } + + fdt_for_each_subnode(child, fdt, node) { + const char *uname; + + prop = of_get_flat_dt_prop(child, "reg", &len); + if (!prop) + continue; + if (!of_fdt_device_is_available(fdt, child)) + continue; + + uname = fdt_get_name(fdt, child, NULL); + if (len && len % t_len != 0) { + pr_err("Reserved memory: invalid reg property in '%s', skipping node.\n", + uname); + continue; + } + base = dt_mem_next_cell(dt_root_addr_cells, &prop); + size = dt_mem_next_cell(dt_root_size_cells, &prop); + + if (size) + fdt_reserved_mem_save_node(child, uname, base, size); + } +} + +static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname); + /* * fdt_scan_reserved_mem() - scan a single FDT node for reserved memory */ int __init fdt_scan_reserved_mem(void) { int node, child; + int dynamic_nodes_cnt = 0; + int dynamic_nodes[MAX_RESERVED_REGIONS]; const void *fdt = initial_boot_params; node = fdt_path_offset(fdt, "/reserved-memory"); @@ -192,8 +243,24 @@ int __init fdt_scan_reserved_mem(void) uname = fdt_get_name(fdt, child, NULL); err = __reserved_mem_reserve_reg(child, uname); - if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL)) - fdt_reserved_mem_save_node(child, uname, 0, 0); + /* + * Save the nodes for the dynamically-placed regions + * into an array which will be used for allocation right + * after all the statically-placed regions are reserved + * or marked as no-map. This is done to avoid dynamically + * allocating from one of the statically-placed regions. + */ + if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL)) { + dynamic_nodes[dynamic_nodes_cnt] = child; + dynamic_nodes_cnt++; + } + } + for (int i = 0; i < dynamic_nodes_cnt; i++) { + const char *uname; + + child = dynamic_nodes[i]; + uname = fdt_get_name(fdt, child, NULL); + __reserved_mem_alloc_size(child, uname); } return 0; } @@ -253,8 +320,7 @@ static int __init __reserved_mem_alloc_in_range(phys_addr_t size, * __reserved_mem_alloc_size() - allocate reserved memory described by * 'size', 'alignment' and 'alloc-ranges' properties. 
*/ -static int __init __reserved_mem_alloc_size(unsigned long node, - const char *uname, phys_addr_t *res_base, phys_addr_t *res_size) +static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname) { int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32); phys_addr_t start = 0, end = 0; @@ -333,10 +399,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node, uname, (unsigned long)(size / SZ_1M)); return -ENOMEM; } - - *res_base = base; - *res_size = size; - + fdt_reserved_mem_save_node(node, uname, base, size); return 0; } @@ -431,6 +494,8 @@ void __init fdt_init_reserved_mem(void) { int i; + fdt_scan_reserved_mem_reg_nodes(); + /* check for overlapping reserved regions */ __rmem_check_for_overlap(); @@ -449,30 +514,23 @@ void __init fdt_init_reserved_mem(void) if (prop) rmem->phandle = of_read_number(prop, len/4); - if (rmem->size == 0) - err = __reserved_mem_alloc_size(node, rmem->name, - &rmem->base, &rmem->size); - if (err == 0) { - err = __reserved_mem_init_node(rmem); - if (err != 0 && err != -ENOENT) { - pr_info("node %s compatible matching fail\n", - rmem->name); - if (nomap) - memblock_clear_nomap(rmem->base, rmem->size); - else - memblock_phys_free(rmem->base, - rmem->size); - } else { - phys_addr_t end = rmem->base + rmem->size - 1; - bool reusable = - (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL; - - pr_info("%pa..%pa (%lu KiB) %s %s %s\n", - &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K), - nomap ? "nomap" : "map", - reusable ? "reusable" : "non-reusable", - rmem->name ? rmem->name : "unknown"); - } + err = __reserved_mem_init_node(rmem); + if (err != 0 && err != -ENOENT) { + pr_info("node %s compatible matching fail\n", rmem->name); + if (nomap) + memblock_clear_nomap(rmem->base, rmem->size); + else + memblock_phys_free(rmem->base, rmem->size); + } else { + phys_addr_t end = rmem->base + rmem->size - 1; + bool reusable = + (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL; + + pr_info("%pa..%pa (%lu KiB) %s %s %s\n", + &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K), + nomap ? "nomap" : "map", + reusable ? "reusable" : "non-reusable", + rmem->name ? 
rmem->name : "unknown"); } } } From patchwork Thu Mar 28 21:15:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oreoluwa Babatunde X-Patchwork-Id: 13609645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4C865CD11DF for ; Thu, 28 Mar 2024 21:16:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) id 19EF4C433F1; Thu, 28 Mar 2024 21:16:52 +0000 (UTC) Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.kernel.org (Postfix) with ESMTPS id 8934CC433C7; Thu, 28 Mar 2024 21:16:50 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 smtp.kernel.org 8934CC433C7 Authentication-Results: smtp.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.kernel.org; spf=pass smtp.mailfrom=quicinc.com Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 42SKDfJ2016218; Thu, 28 Mar 2024 21:16:36 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= qcppdkim1; bh=+MY78+lW5U3e/L0EKh1VpFdXxuRr29FRbMT8d19Xzfo=; b=Oe MPOJvMA5uIvef9XtZKoKUZBkeDjx8zjlNZRdLblLc7IrarXR3aYl/Dxj51MJvChl QnHYk1qX3PPjAWfSvGdnUTzJjFBrEIO4Pqjgo3LGUPDyUXl0yZpzRJjfbdeUT6wn o07DUiFxcnsjVaUbrazufs12vnd6qU0+xLQ+LTjjLyr5JJZu2TMdsI73ErQbgEwU 7Nd8sEhpBRHcSNpKX5YGKEA74iIx5eyo19QTTo1Wso0VijSyY3EftG7ppjcCVtUQ +2F4FQL/++dg5+cZBQw22NP7uXq3VwD/igjVCBShyerX6UkoLJaB3FPd2YZH1ROq yw6nZmH8JET4jaJEr5MQ== Received: from nalasppmta01.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3x54r6275n-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 28 Mar 2024 21:16:36 +0000 (GMT) Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 42SLGY8p032402 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 28 Mar 2024 21:16:35 GMT Received: from hu-obabatun-lv.qualcomm.com (10.49.16.6) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Thu, 28 Mar 2024 14:16:31 -0700 From: Oreoluwa Babatunde List-Id: To: , , , , , , , , , , , , , , , , , , , , , , , , , , CC: , , , , , Oreoluwa Babatunde Subject: [PATCH v5 2/4] of: reserved_mem: Add code to dynamically allocate reserved_mem array Date: Thu, 28 Mar 2024 14:15:41 -0700 Message-ID: <20240328211543.191876-3-quic_obabatun@quicinc.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240328211543.191876-1-quic_obabatun@quicinc.com> References: <20240328211543.191876-1-quic_obabatun@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01a.na.qualcomm.com (10.47.209.196) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 

The reserved_mem array is statically allocated with a size of MAX_RESERVED_REGIONS (64). Therefore, if the number of reserved memory regions exceeds this size, there is not enough space to store all of them. Hence, supplement the static array with a dynamically allocated array sized according to the number of reserved memory regions specified in the DT.

On architectures such as arm64, memblock-allocated memory is not writable until after the page tables have been set up, so the dynamic allocation of the reserved_mem array can only be done after that point.

As a result, a temporary static array is still needed in the initial stages to store the information for the dynamically-placed reserved memory regions, because their start addresses are selected only at run time and are not stored anywhere else. It is not possible to wait until the dynamically sized array is allocated, because that happens after the page tables are set up, while those regions have to be allocated before then.

After the reserved_mem array is allocated, all entries from the static array are copied over to the new array, and the remaining information for the statically-placed reserved memory regions is read in from the DT and stored in the new array as well.

Once the init process is complete, the temporary static array is released back to the system because it is no longer needed; this is achieved by marking it as __initdata. (A short illustrative sketch of this bootstrap-then-reallocate pattern follows the diff below.)

Signed-off-by: Oreoluwa Babatunde --- drivers/of/of_reserved_mem.c | 58 +++++++++++++++++++++++++++++++++--- 1 file changed, 54 insertions(+), 4 deletions(-) diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index db991de16cc0..0aba366eba59 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -27,7 +27,9 @@ #include "of_private.h" -static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS]; +static struct reserved_mem reserved_mem_array[MAX_RESERVED_REGIONS] __initdata; +static struct reserved_mem *reserved_mem __refdata = reserved_mem_array; +static int total_reserved_mem_cnt = MAX_RESERVED_REGIONS; static int reserved_mem_count; static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size, @@ -55,6 +57,45 @@ static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size, return err; } +/* + * alloc_reserved_mem_array() - allocate memory for the reserved_mem + * array using memblock + * + * This function is used to allocate memory for the reserved_mem array + * according to the total number of reserved memory regions defined in + * the DT. + * After the new array is allocated, the information stored in the + * initial static array is copied over to this new array and the new + * array is used from this point on.
+ */ +static void __init alloc_reserved_mem_array(void) +{ + struct reserved_mem *new_array; + size_t alloc_size, copy_size, memset_size; + + alloc_size = array_size(total_reserved_mem_cnt, sizeof(*new_array)); + if (alloc_size == SIZE_MAX) + pr_err("Failed to allocate memory for reserved_mem array with err: %d", -EOVERFLOW); + + new_array = memblock_alloc(alloc_size, SMP_CACHE_BYTES); + if (!new_array) + pr_err("Failed to allocate memory for reserved_mem array with err: %d", -ENOMEM); + + copy_size = array_size(reserved_mem_count, sizeof(*new_array)); + if (copy_size == SIZE_MAX) { + memblock_free(new_array, alloc_size); + total_reserved_mem_cnt = MAX_RESERVED_REGIONS; + pr_err("Failed to allocate memory for reserved_mem array with err: %d", -EOVERFLOW); + } + + memset_size = alloc_size - copy_size; + + memcpy(new_array, reserved_mem, copy_size); + memset(new_array + reserved_mem_count, 0, memset_size); + + reserved_mem = new_array; +} + /* * fdt_reserved_mem_save_node() - save fdt node for second pass initialization */ @@ -63,7 +104,7 @@ static void __init fdt_reserved_mem_save_node(unsigned long node, const char *un { struct reserved_mem *rmem = &reserved_mem[reserved_mem_count]; - if (reserved_mem_count == ARRAY_SIZE(reserved_mem)) { + if (reserved_mem_count == total_reserved_mem_cnt) { pr_err("not enough space for all defined regions.\n"); return; } @@ -220,7 +261,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam int __init fdt_scan_reserved_mem(void) { int node, child; - int dynamic_nodes_cnt = 0; + int dynamic_nodes_cnt = 0, count = 0; int dynamic_nodes[MAX_RESERVED_REGIONS]; const void *fdt = initial_boot_params; @@ -243,6 +284,8 @@ int __init fdt_scan_reserved_mem(void) uname = fdt_get_name(fdt, child, NULL); err = __reserved_mem_reserve_reg(child, uname); + if (!err) + count++; /* * Save the nodes for the dynamically-placed regions * into an array which will be used for allocation right @@ -257,11 +300,16 @@ int __init fdt_scan_reserved_mem(void) } for (int i = 0; i < dynamic_nodes_cnt; i++) { const char *uname; + int err; child = dynamic_nodes[i]; uname = fdt_get_name(fdt, child, NULL); - __reserved_mem_alloc_size(child, uname); + + err = __reserved_mem_alloc_size(child, uname); + if (!err) + count++; } + total_reserved_mem_cnt = count++; return 0; } @@ -494,6 +542,8 @@ void __init fdt_init_reserved_mem(void) { int i; + alloc_reserved_mem_array(); + fdt_scan_reserved_mem_reg_nodes(); /* check for overlapping reserved regions */
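As a rough, self-contained illustration of the bootstrap-then-reallocate pattern used by patch 2/4 (all type and function names here are hypothetical; the real code operates on struct reserved_mem and also handles the zeroing and error cases shown in the diff above):

#include <linux/cache.h>
#include <linux/init.h>
#include <linux/limits.h>
#include <linux/memblock.h>
#include <linux/overflow.h>
#include <linux/string.h>

/* Hypothetical example type; the real code stores struct reserved_mem entries. */
struct example_region { const char *name; };

static struct example_region bootstrap[64] __initdata;	/* freed with other __init data */
static struct example_region *regions __refdata = bootstrap;
static int region_cnt;

/* Replace the fixed bootstrap array with a right-sized memblock allocation. */
static void __init grow_region_array(int total)
{
	size_t sz = array_size(total, sizeof(*regions));
	struct example_region *arr;

	if (sz == SIZE_MAX)	/* array_size() saturates on multiplication overflow */
		return;

	arr = memblock_alloc(sz, SMP_CACHE_BYTES);
	if (!arr)
		return;		/* keep using the bootstrap array */

	memcpy(arr, regions, array_size(region_cnt, sizeof(*regions)));
	regions = arr;	/* the __initdata bootstrap array is discarded after boot */
}

Because the bootstrap array is __initdata, its memory is returned to the page allocator when the __init sections are freed, so the fixed-size staging buffer costs nothing after boot.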
From patchwork Thu Mar 28 21:15:42 2024
X-Patchwork-Id: 13609649
From: Oreoluwa Babatunde
Subject: [PATCH v5 3/4] of: reserved_mem: Use the unflatten_devicetree APIs to scan reserved mem. nodes
Date: Thu, 28 Mar 2024 14:15:42 -0700
Message-ID: <20240328211543.191876-4-quic_obabatun@quicinc.com>
In-Reply-To: <20240328211543.191876-1-quic_obabatun@quicinc.com>
References: <20240328211543.191876-1-quic_obabatun@quicinc.com>

The unflatten_devicetree APIs have been set up and are available to be used by the time fdt_init_reserved_mem() is called. Since the unflatten_devicetree APIs are a more efficient way of scanning through the DT nodes, switch to using them for the rest of the reserved memory processing.
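For context, a minimal sketch contrasting the two access styles (the helper functions here are hypothetical illustrations, not code from the patch; the actual conversion is in the diff below):

#include <linux/init.h>
#include <linux/libfdt.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/printk.h>

/* Before: walk the flattened blob by node offset (early boot, pre-unflatten). */
static void __init count_nomap_regions_flat(const void *fdt, int parent)
{
	int child, n = 0;

	fdt_for_each_subnode(child, fdt, parent) {
		if (of_get_flat_dt_prop(child, "no-map", NULL))
			n++;
	}
	pr_info("%d no-map regions (flat DT walk)\n", n);
}

/* After: walk the unflattened tree using struct device_node. */
static void __init count_nomap_regions(struct device_node *parent)
{
	struct device_node *child;
	int n = 0;

	for_each_child_of_node(parent, child) {
		if (of_get_property(child, "no-map", NULL))
			n++;
	}
	pr_info("%d no-map regions (unflattened DT walk)\n", n);
}

The unflattened form avoids repeated linear scans of the FDT blob and lets the reserved_mem code keep a struct device_node pointer (dev_node) instead of a raw FDT offset.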
Signed-off-by: Oreoluwa Babatunde --- drivers/of/of_reserved_mem.c | 77 +++++++++++++++++++++------------ include/linux/of_reserved_mem.h | 2 +- kernel/dma/coherent.c | 8 ++-- kernel/dma/contiguous.c | 8 ++-- kernel/dma/swiotlb.c | 10 ++--- 5 files changed, 64 insertions(+), 41 deletions(-) diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 0aba366eba59..68d1f4cca4bb 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -99,7 +99,7 @@ static void __init alloc_reserved_mem_array(void) /* * fdt_reserved_mem_save_node() - save fdt node for second pass initialization */ -static void __init fdt_reserved_mem_save_node(unsigned long node, const char *uname, +static void __init fdt_reserved_mem_save_node(struct device_node *node, const char *uname, phys_addr_t base, phys_addr_t size) { struct reserved_mem *rmem = &reserved_mem[reserved_mem_count]; @@ -109,7 +109,7 @@ static void __init fdt_reserved_mem_save_node(unsigned long node, const char *un return; } - rmem->fdt_node = node; + rmem->dev_node = node; rmem->name = uname; rmem->base = base; rmem->size = size; @@ -178,11 +178,11 @@ static int __init __reserved_mem_reserve_reg(unsigned long node, } /* - * __reserved_mem_check_root() - check if #size-cells, #address-cells provided + * __fdt_reserved_mem_check_root() - check if #size-cells, #address-cells provided * in /reserved-memory matches the values supported by the current implementation, * also check if ranges property has been provided */ -static int __init __reserved_mem_check_root(unsigned long node) +static int __init __fdt_reserved_mem_check_root(unsigned long node) { const __be32 *prop; @@ -200,6 +200,29 @@ static int __init __reserved_mem_check_root(unsigned long node) return 0; } +/* + * __dt_reserved_mem_check_root() - check if #size-cells, #address-cells provided + * in /reserved-memory matches the values supported by the current implementation, + * also check if ranges property has been provided + */ +static int __init __dt_reserved_mem_check_root(struct device_node *node) +{ + const __be32 *prop; + + prop = of_get_property(node, "#size-cells", NULL); + if (!prop || be32_to_cpup(prop) != dt_root_size_cells) + return -EINVAL; + + prop = of_get_property(node, "#address-cells", NULL); + if (!prop || be32_to_cpup(prop) != dt_root_addr_cells) + return -EINVAL; + + prop = of_get_property(node, "ranges", NULL); + if (!prop) + return -EINVAL; + return 0; +} + /** * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined * reserved memory regions. 
@@ -213,33 +236,38 @@ static int __init __reserved_mem_check_root(unsigned long node) static void __init fdt_scan_reserved_mem_reg_nodes(void) { int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32); - const void *fdt = initial_boot_params; + struct device_node *node, *child; phys_addr_t base, size; const __be32 *prop; - int node, child; int len; - node = fdt_path_offset(fdt, "/reserved-memory"); - if (node < 0) { + node = of_find_node_by_path("/reserved-memory"); + if (!node) { pr_info("Reserved memory: No reserved-memory node in the DT\n"); return; } - if (__reserved_mem_check_root(node)) { + if (__dt_reserved_mem_check_root(node)) { pr_err("Reserved memory: unsupported node format, ignoring\n"); return; } - fdt_for_each_subnode(child, fdt, node) { + for_each_child_of_node(node, child) { const char *uname; + struct reserved_mem *rmem; - prop = of_get_flat_dt_prop(child, "reg", &len); - if (!prop) + if (!of_device_is_available(child)) continue; - if (!of_fdt_device_is_available(fdt, child)) + + prop = of_get_property(child, "reg", &len); + if (!prop) { + rmem = of_reserved_mem_lookup(child); + if (rmem) + rmem->dev_node = child; continue; + } - uname = fdt_get_name(fdt, child, NULL); + uname = of_node_full_name(child); if (len && len % t_len != 0) { pr_err("Reserved memory: invalid reg property in '%s', skipping node.\n", uname); @@ -269,7 +297,7 @@ int __init fdt_scan_reserved_mem(void) if (node < 0) return -ENODEV; - if (__reserved_mem_check_root(node) != 0) { + if (__fdt_reserved_mem_check_root(node) != 0) { pr_err("Reserved memory: unsupported node format, ignoring\n"); return -EINVAL; } @@ -447,7 +475,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam uname, (unsigned long)(size / SZ_1M)); return -ENOMEM; } - fdt_reserved_mem_save_node(node, uname, base, size); + fdt_reserved_mem_save_node(NULL, uname, base, size); return 0; } @@ -467,7 +495,7 @@ static int __init __reserved_mem_init_node(struct reserved_mem *rmem) reservedmem_of_init_fn initfn = i->data; const char *compat = i->compatible; - if (!of_flat_dt_is_compatible(rmem->fdt_node, compat)) + if (!of_device_is_compatible(rmem->dev_node, compat)) continue; ret = initfn(rmem); @@ -500,11 +528,6 @@ static int __init __rmem_cmp(const void *a, const void *b) if (ra->size > rb->size) return 1; - if (ra->fdt_node < rb->fdt_node) - return -1; - if (ra->fdt_node > rb->fdt_node) - return 1; - return 0; } @@ -551,16 +574,16 @@ void __init fdt_init_reserved_mem(void) for (i = 0; i < reserved_mem_count; i++) { struct reserved_mem *rmem = &reserved_mem[i]; - unsigned long node = rmem->fdt_node; + struct device_node *node = rmem->dev_node; int len; const __be32 *prop; int err = 0; bool nomap; - nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL; - prop = of_get_flat_dt_prop(node, "phandle", &len); + nomap = of_get_property(node, "no-map", NULL) != NULL; + prop = of_get_property(node, "phandle", &len); if (!prop) - prop = of_get_flat_dt_prop(node, "linux,phandle", &len); + prop = of_get_property(node, "linux,phandle", &len); if (prop) rmem->phandle = of_read_number(prop, len/4); @@ -574,7 +597,7 @@ void __init fdt_init_reserved_mem(void) } else { phys_addr_t end = rmem->base + rmem->size - 1; bool reusable = - (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL; + (of_get_property(node, "reusable", NULL)) != NULL; pr_info("%pa..%pa (%lu KiB) %s %s %s\n", &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K), diff --git a/include/linux/of_reserved_mem.h 
b/include/linux/of_reserved_mem.h index 4de2a24cadc9..b6107a18d170 100644 --- a/include/linux/of_reserved_mem.h +++ b/include/linux/of_reserved_mem.h @@ -10,7 +10,7 @@ struct reserved_mem_ops; struct reserved_mem { const char *name; - unsigned long fdt_node; + struct device_node *dev_node; unsigned long phandle; const struct reserved_mem_ops *ops; phys_addr_t base; diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c index ff5683a57f77..0db0aae83102 100644 --- a/kernel/dma/coherent.c +++ b/kernel/dma/coherent.c @@ -362,20 +362,20 @@ static const struct reserved_mem_ops rmem_dma_ops = { static int __init rmem_dma_setup(struct reserved_mem *rmem) { - unsigned long node = rmem->fdt_node; + struct device_node *node = rmem->dev_node; - if (of_get_flat_dt_prop(node, "reusable", NULL)) + if (of_get_property(node, "reusable", NULL)) return -EINVAL; #ifdef CONFIG_ARM - if (!of_get_flat_dt_prop(node, "no-map", NULL)) { + if (!of_get_property(node, "no-map", NULL)) { pr_err("Reserved memory: regions without no-map are not yet supported\n"); return -EINVAL; } #endif #ifdef CONFIG_DMA_GLOBAL_POOL - if (of_get_flat_dt_prop(node, "linux,dma-default", NULL)) { + if (of_get_property(node, "linux,dma-default", NULL)) { WARN(dma_reserved_default_memory, "Reserved memory: region for default DMA coherent area is redefined\n"); dma_reserved_default_memory = rmem; diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c index 055da410ac71..22507f7d74d9 100644 --- a/kernel/dma/contiguous.c +++ b/kernel/dma/contiguous.c @@ -456,8 +456,8 @@ static const struct reserved_mem_ops rmem_cma_ops = { static int __init rmem_cma_setup(struct reserved_mem *rmem) { - unsigned long node = rmem->fdt_node; - bool default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL); + struct device_node *node = rmem->dev_node; + bool default_cma = of_get_property(node, "linux,cma-default", NULL); struct cma *cma; int err; @@ -467,8 +467,8 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem) return -EBUSY; } - if (!of_get_flat_dt_prop(node, "reusable", NULL) || - of_get_flat_dt_prop(node, "no-map", NULL)) + if (!of_get_property(node, "reusable", NULL) || + of_get_property(node, "no-map", NULL)) return -EINVAL; if (!IS_ALIGNED(rmem->base | rmem->size, CMA_MIN_ALIGNMENT_BYTES)) { diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 86fe172b5958..22cf195f652c 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -1799,12 +1799,12 @@ static const struct reserved_mem_ops rmem_swiotlb_ops = { static int __init rmem_swiotlb_setup(struct reserved_mem *rmem) { - unsigned long node = rmem->fdt_node; + struct device_node *node = rmem->dev_node; - if (of_get_flat_dt_prop(node, "reusable", NULL) || - of_get_flat_dt_prop(node, "linux,cma-default", NULL) || - of_get_flat_dt_prop(node, "linux,dma-default", NULL) || - of_get_flat_dt_prop(node, "no-map", NULL)) + if (of_get_property(node, "reusable", NULL) || + of_get_property(node, "linux,cma-default", NULL) || + of_get_property(node, "linux,dma-default", NULL) || + of_get_property(node, "no-map", NULL)) return -EINVAL; rmem->ops = &rmem_swiotlb_ops;

From patchwork Thu Mar 28 21:15:43 2024
X-Patchwork-Id: 13609648
From: Oreoluwa Babatunde
Subject: [PATCH v5 4/4] of: reserved_mem: Rename fdt_* functions to reflect use of unflatten_devicetree APIs
Date: Thu, 28 Mar 2024 14:15:43 -0700
Message-ID: <20240328211543.191876-5-quic_obabatun@quicinc.com>
In-Reply-To: <20240328211543.191876-1-quic_obabatun@quicinc.com>
References: <20240328211543.191876-1-quic_obabatun@quicinc.com>

Rename the relevant fdt_* functions to a new naming scheme, dt_*, to reflect the use of the unflatten_devicetree APIs to scan through the reserved memory regions defined in the DT.

Signed-off-by: Oreoluwa Babatunde --- drivers/of/fdt.c | 2 +- drivers/of/of_private.h | 2 +- drivers/of/of_reserved_mem.c | 22 +++++++++++----------- 3 files changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index 527e6bc1c096..7e1baf443286 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -1259,7 +1259,7 @@ void __init unflatten_device_tree(void) unittest_unflatten_overlay_base(); /* initialize the reserved memory regions */ - fdt_init_reserved_mem(); + dt_init_reserved_mem(); } /** diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h index 9ea250b80657..75726feac881 100644 --- a/drivers/of/of_private.h +++ b/drivers/of/of_private.h @@ -177,7 +177,7 @@ static inline struct device_node *__of_get_dma_parent(const struct device_node * #endif int fdt_scan_reserved_mem(void); -void fdt_init_reserved_mem(void); +void dt_init_reserved_mem(void); bool of_fdt_device_is_available(const void *blob, unsigned long node); diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 68d1f4cca4bb..3ae5918a0024 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -97,10 +97,10 @@ static void __init alloc_reserved_mem_array(void) } /* - * fdt_reserved_mem_save_node() - save fdt node for second pass initialization + * dt_reserved_mem_save_node() - save the device_node for second pass initialization */ -static void __init fdt_reserved_mem_save_node(struct device_node *node, const char *uname, - phys_addr_t base, phys_addr_t size) +static void __init dt_reserved_mem_save_node(struct device_node *node, const char *uname, + phys_addr_t base, phys_addr_t size) { struct reserved_mem *rmem = &reserved_mem[reserved_mem_count]; @@ -224,16 +224,16 @@ static int __init __dt_reserved_mem_check_root(struct device_node *node) } /** - * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined + * dt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined * reserved memory regions. * * This function is used to scan through the DT and store the * information for the reserved memory regions that are defined using * the "reg" property. The region node number, name, base address, and * size are all stored in the reserved_mem array by calling the - * fdt_reserved_mem_save_node() function. + * dt_reserved_mem_save_node() function.
*/ -static void __init fdt_scan_reserved_mem_reg_nodes(void) +static void __init dt_scan_reserved_mem_reg_nodes(void) { int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32); struct device_node *node, *child; @@ -277,7 +277,7 @@ static void __init fdt_scan_reserved_mem_reg_nodes(void) size = dt_mem_next_cell(dt_root_size_cells, &prop); if (size) - fdt_reserved_mem_save_node(child, uname, base, size); + dt_reserved_mem_save_node(child, uname, base, size); } } @@ -475,7 +475,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam uname, (unsigned long)(size / SZ_1M)); return -ENOMEM; } - fdt_reserved_mem_save_node(NULL, uname, base, size); + dt_reserved_mem_save_node(NULL, uname, base, size); return 0; } @@ -559,15 +559,15 @@ static void __init __rmem_check_for_overlap(void) } /** - * fdt_init_reserved_mem() - allocate and init all saved reserved memory regions + * dt_init_reserved_mem() - allocate and init all saved reserved memory regions */ -void __init fdt_init_reserved_mem(void) +void __init dt_init_reserved_mem(void) { int i; alloc_reserved_mem_array(); - fdt_scan_reserved_mem_reg_nodes(); + dt_scan_reserved_mem_reg_nodes(); /* check for overlapping reserved regions */ __rmem_check_for_overlap();