From patchwork Wed Feb 17 03:44:24 2016
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 8334181
X-Patchwork-Delegate: bhelgaas@google.com
From: Gavin Shan
To: linuxppc-dev@lists.ozlabs.org
Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org, benh@kernel.crashing.org,
    mpe@ellerman.id.au, aik@ozlabs.ru, dja@axtens.net, bhelgaas@google.com,
    robherring2@gmail.com, grant.likely@linaro.org, Gavin Shan
Subject: [PATCH v8 41/45] drivers/of: Avoid recursively calling unflatten_dt_node()
Date: Wed, 17 Feb 2016 14:44:24 +1100
Message-Id: <1455680668-23298-42-git-send-email-gwshan@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1455680668-23298-1-git-send-email-gwshan@linux.vnet.ibm.com>
References: <1455680668-23298-1-git-send-email-gwshan@linux.vnet.ibm.com>
In the current implementation, unflatten_dt_node() is called recursively to
unflatten the device nodes in the FDT blob. This puts pressure on the limited
kernel stack, especially once the function is adapted to unflatten a device
sub-tree that may have multiple root nodes; in that case the stack can be
exhausted and the system fails to boot.

To allow the function to be reused for unflattening a device sub-tree, this
patch removes the recursion so that all device nodes are unflattened in a
single call to unflatten_dt_node(): two arrays track the parent path size and
the device node at the current depth, which are then used when unflattening
the nodes at the next depth. Device nodes deeper than 64 levels are dropped,
and the system can hopefully still boot with the resulting partial device
tree.

The parameters "poffset" and "fpsize" are no longer needed and are dropped,
and "dryrun" is now derived from "mem == NULL". The function's return value
is changed to report the amount of memory consumed by the unflattened device
tree, or an error code.

Signed-off-by: Gavin Shan
Acked-by: Rob Herring
---
 drivers/of/fdt.c | 122 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 74 insertions(+), 48 deletions(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 3c69002..667a5b2 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -356,63 +356,90 @@ static unsigned long populate_node(const void *blob,
 	return fpsize;
 }
 
+static void reverse_nodes(struct device_node *parent)
+{
+	struct device_node *child, *next;
+
+	/* In-depth first */
+	child = parent->child;
+	while (child) {
+		reverse_nodes(child);
+
+		child = child->sibling;
+	}
+
+	/* Reverse the nodes in the child list */
+	child = parent->child;
+	parent->child = NULL;
+	while (child) {
+		next = child->sibling;
+
+		child->sibling = parent->child;
+		parent->child = child;
+		child = next;
+	}
+}
+
 /**
  * unflatten_dt_node - Alloc and populate a device_node from the flat tree
  * @blob: The parent device tree blob
  * @mem: Memory chunk to use for allocating device nodes and properties
- * @poffset: pointer to node in flat tree
  * @dad: Parent struct device_node
  * @nodepp: The device_node tree created by the call
- * @fpsize: Size of the node path up at the current depth.
- * @dryrun: If true, do not allocate device nodes but still calculate needed
- *	memory size
+ *
+ * It returns the size of unflattened device tree or error code
  */
-static void *unflatten_dt_node(const void *blob,
-			       void *mem,
-			       int *poffset,
-			       struct device_node *dad,
-			       struct device_node **nodepp,
-			       unsigned long fpsize,
-			       bool dryrun)
+static int unflatten_dt_node(const void *blob,
+			     void *mem,
+			     struct device_node *dad,
+			     struct device_node **nodepp)
 {
-	struct device_node *np;
-	static int depth;
-	int old_depth;
+	struct device_node *root;
+	int offset = 0, depth = 0;
+#define FDT_MAX_DEPTH	64
+	unsigned long fpsizes[FDT_MAX_DEPTH];
+	struct device_node *nps[FDT_MAX_DEPTH];
+	void *base = mem;
+	bool dryrun = !base;
 
-	fpsize = populate_node(blob, *poffset, &mem, dad, fpsize, &np, dryrun);
-	if (!fpsize)
-		return mem;
+	if (nodepp)
+		*nodepp = NULL;
+
+	root = dad;
+	fpsizes[depth] = dad ? strlen(of_node_full_name(dad)) : 0;
+	nps[depth++] = dad;
+	for (offset = 0;
+	     offset >= 0;
+	     offset = fdt_next_node(blob, offset, &depth)) {
+		if (WARN_ON_ONCE(depth >= FDT_MAX_DEPTH))
+			continue;
 
-	old_depth = depth;
-	*poffset = fdt_next_node(blob, *poffset, &depth);
-	if (depth < 0)
-		depth = 0;
-	while (*poffset > 0 && depth > old_depth)
-		mem = unflatten_dt_node(blob, mem, poffset, np, NULL,
-					fpsize, dryrun);
+		fpsizes[depth] = populate_node(blob, offset, &mem,
+					       nps[depth - 1],
+					       fpsizes[depth - 1],
+					       &nps[depth], dryrun);
+		if (!fpsizes[depth])
+			return mem - base;
+
+		if (!dryrun && nodepp && !*nodepp)
+			*nodepp = nps[depth];
+		if (!dryrun && !root)
+			root = nps[depth];
+	}
 
-	if (*poffset < 0 && *poffset != -FDT_ERR_NOTFOUND)
-		pr_err("unflatten: error %d processing FDT\n", *poffset);
+	if (offset < 0 && offset != -FDT_ERR_NOTFOUND) {
+		pr_err("%s: Error %d processing FDT\n", __func__, offset);
+		return -EINVAL;
+	}
 
 	/*
 	 * Reverse the child list. Some drivers assumes node order matches .dts
 	 * node order
 	 */
-	if (!dryrun && np->child) {
-		struct device_node *child = np->child;
-		np->child = NULL;
-		while (child) {
-			struct device_node *next = child->sibling;
-			child->sibling = np->child;
-			np->child = child;
-			child = next;
-		}
-	}
-
-	if (nodepp)
-		*nodepp = np;
+	if (!dryrun)
+		reverse_nodes(root);
 
-	return mem;
+	return mem - base;
 }
 
 /**
@@ -431,8 +458,7 @@ static void __unflatten_device_tree(const void *blob,
 				    struct device_node **mynodes,
 				    void * (*dt_alloc)(u64 size, u64 align))
 {
-	unsigned long size;
-	int start;
+	int size;
 	void *mem;
 
 	pr_debug(" -> unflatten_device_tree()\n");
@@ -453,11 +479,12 @@ static void __unflatten_device_tree(const void *blob,
 	}
 
 	/* First pass, scan for size */
-	start = 0;
-	size = (unsigned long)unflatten_dt_node(blob, NULL, &start, NULL, NULL, 0, true);
-	size = ALIGN(size, 4);
+	size = unflatten_dt_node(blob, NULL, NULL, NULL);
+	if (size < 0)
+		return;
 
-	pr_debug("  size is %lx, allocating...\n", size);
+	size = ALIGN(size, 4);
+	pr_debug("  size is %d, allocating...\n", size);
 
 	/* Allocate memory for the expanded device tree */
 	mem = dt_alloc(size + 4, __alignof__(struct device_node));
@@ -468,8 +495,7 @@ static void __unflatten_device_tree(const void *blob,
 	pr_debug("  unflattening %p...\n", mem);
 
 	/* Second pass, do actual unflattening */
-	start = 0;
-	unflatten_dt_node(blob, mem, &start, NULL, mynodes, 0, false);
+	unflatten_dt_node(blob, mem, NULL, mynodes);
 	if (be32_to_cpup(mem + size) != 0xdeadbeef)
 		pr_warning("End of tree marker overwritten: %08x\n",
 			   be32_to_cpup(mem + size));
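
For context on the traversal pattern the patch switches to, below is a minimal
userspace sketch (not part of the patch; walk_fdt(), names[] and MAX_DEPTH are
invented for illustration, while fdt_next_node() and fdt_get_name() are the
real libfdt calls). It walks a flattened device tree in a single loop and keeps
per-depth state in a fixed-size array, which is the same idea the reworked
unflatten_dt_node() uses with its fpsizes[] and nps[] arrays instead of
recursion.

/*
 * Illustrative sketch only, not part of the patch: walk an FDT blob
 * iteratively with fdt_next_node(), keeping a small per-depth array of
 * node names. This mirrors how the reworked unflatten_dt_node() indexes
 * fpsizes[] and nps[] by depth instead of recursing.
 */
#include <stdio.h>
#include <libfdt.h>

#define MAX_DEPTH	64	/* hypothetical cap, same value as FDT_MAX_DEPTH in the patch */

static int walk_fdt(const void *blob)
{
	const char *names[MAX_DEPTH];
	int offset, depth = 0;

	/* Same loop shape as the patch: offset 0 is the root node */
	for (offset = 0;
	     offset >= 0;
	     offset = fdt_next_node(blob, offset, &depth)) {
		const char *name = fdt_get_name(blob, offset, NULL);

		if (depth < 0 || depth >= MAX_DEPTH)
			continue;	/* drop overly deep nodes, as the patch does */

		/* Remember this node's name so its children can name their parent */
		names[depth] = (name && name[0]) ? name : "/";
		printf("%*s%s (parent: %s)\n", depth * 2, "", names[depth],
		       depth ? names[depth - 1] : "<none>");
	}

	/* -FDT_ERR_NOTFOUND just means the walk reached the end of the tree */
	return (offset == -FDT_ERR_NOTFOUND) ? 0 : offset;
}

Built against libfdt (link with -lfdt) and called with a pointer to a valid FDT
blob, walk_fdt() visits every node without recursion: the depth reported by
fdt_next_node() indexes the fixed-size array, taking the place of the call
stack that the old recursive unflatten_dt_node() relied on.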