From patchwork Thu May 23 17:07:53 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 2608441
From: Steve Capper
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 06/11] ARM64: mm: Restore memblock limit when map_mem finished.
Date: Thu, 23 May 2013 18:07:53 +0100
Message-Id: <1369328878-11706-7-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
References: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
Cc: Steve Capper, patches@linaro.org, Catalin Marinas, Will Deacon,
 Michal Hocko, Ken Chen, Mel Gorman

In paging_init the memblock limit is set to restrict any addresses
returned by early_alloc to fit within the initial direct kernel mapping
in swapper_pg_dir. This allows map_mem to allocate puds, pmds and ptes
from the initial direct kernel mapping.

The limit stays low after paging_init() though, meaning any bootmem
allocations will be from a restricted subset of memory. Gigabyte huge
pages, for instance, are normally allocated from bootmem as their order
(18) is too large for the default buddy allocator (MAX_ORDER = 11).

This patch restores the memblock limit when map_mem has finished,
allowing gigabyte huge pages (and other objects) to be allocated from
all of bootmem.

Signed-off-by: Steve Capper
Acked-by: Catalin Marinas
---
 arch/arm64/mm/mmu.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index eeecc9c..9fa027b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -297,6 +297,16 @@ static void __init map_mem(void)
 {
 	struct memblock_region *reg;
 
+	/*
+	 * Temporarily limit the memblock range. We need to do this as
+	 * create_mapping requires puds, pmds and ptes to be allocated from
+	 * memory addressable from the initial direct kernel mapping.
+	 *
+	 * The initial direct kernel mapping, located at swapper_pg_dir,
+	 * gives us PGDIR_SIZE memory starting from PHYS_OFFSET (aligned).
+	 */
+	memblock_set_current_limit((PHYS_OFFSET & PGDIR_MASK) + PGDIR_SIZE);
+
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
 		phys_addr_t start = reg->base;
@@ -307,6 +317,9 @@ static void __init map_mem(void)
 
 		create_mapping(start, __phys_to_virt(start), end - start);
 	}
+
+	/* Limit no longer required. */
+	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 }
 
 /*
@@ -317,12 +330,6 @@ void __init paging_init(void)
 {
 	void *zero_page;
 
-	/*
-	 * Maximum PGDIR_SIZE addressable via the initial direct kernel
-	 * mapping in swapper_pg_dir.
-	 */
-	memblock_set_current_limit((PHYS_OFFSET & PGDIR_MASK) + PGDIR_SIZE);
-
 	init_mem_pgprot();
 	map_mem();
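
As background for readers unfamiliar with the mechanism this patch
leans on, below is a minimal, self-contained C sketch of memblock-style
limit clamping. It is an illustration only, not the kernel's
implementation (that lives in mm/memblock.c): all toy_* names are
invented for the example, and the addresses and sizes are arbitrary
stand-ins for PHYS_OFFSET, PGDIR_SIZE and a 1GB gigantic page. A 1GB
page with 4KB base pages spans 2^30 / 2^12 = 2^18 base pages (order 18),
while a buddy allocator with MAX_ORDER = 11 can serve at most 2^10
pages, i.e. 4MB, hence the reliance on boot-time allocation.

/*
 * Illustrative sketch only -- NOT the kernel implementation. It mimics
 * the behaviour the patch depends on: setting a current limit clamps
 * the highest physical address the early allocator may return, and
 * passing a "no limit" sentinel restores access to all memory.
 * All toy_* names are invented for this example.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;

#define TOY_ALLOC_ANYWHERE ((phys_addr_t)~0ULL) /* cf. MEMBLOCK_ALLOC_ANYWHERE */

static struct {
	phys_addr_t base;          /* start of the one region we manage */
	phys_addr_t size;          /* bytes available in that region */
	phys_addr_t watermark;     /* next free address (bump allocator) */
	phys_addr_t current_limit; /* allocations must end at or below this */
} toy_memblock = {
	.base = 0x40000000ULL,     /* stand-in for PHYS_OFFSET */
	.size = 4ULL << 30,        /* 4GB of pretend RAM */
	.watermark = 0x40000000ULL,
	.current_limit = TOY_ALLOC_ANYWHERE,
};

static void toy_set_current_limit(phys_addr_t limit)
{
	toy_memblock.current_limit = limit;
}

/* Return a physical address, or 0 if the request would cross the limit. */
static phys_addr_t toy_alloc(phys_addr_t bytes)
{
	phys_addr_t end = toy_memblock.base + toy_memblock.size;

	if (end > toy_memblock.current_limit)
		end = toy_memblock.current_limit; /* the clamp */
	if (toy_memblock.watermark + bytes > end)
		return 0;                         /* outside the limit */

	phys_addr_t ret = toy_memblock.watermark;
	toy_memblock.watermark += bytes;
	return ret;
}

int main(void)
{
	/* Phase 1: clamp to the initial direct mapping, as map_mem() does
	 * with this patch applied (base + a stand-in PGDIR_SIZE of 1GB). */
	toy_set_current_limit(0x40000000ULL + (1ULL << 30));
	printf("page table alloc: %#llx\n",
	       (unsigned long long)toy_alloc(4096));      /* fits: succeeds */

	/* A 1GB gigantic page would fail while the clamp is in place... */
	printf("1GB under limit:  %#llx\n",
	       (unsigned long long)toy_alloc(1ULL << 30)); /* 0: rejected */

	/* Phase 2: restore the limit, as the patch does after map_mem(). */
	toy_set_current_limit(TOY_ALLOC_ANYWHERE);
	printf("1GB after reset:  %#llx\n",
	       (unsigned long long)toy_alloc(1ULL << 30)); /* now succeeds */
	return 0;
}

In the real kernel, memblock_set_current_limit() stores the limit in
memblock.current_limit, which memblock's allocation paths consult as
their default upper bound; the point of the patch is simply that the
clamp applied for map_mem() must be undone afterwards, exactly as
phase 2 above undoes it.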