From patchwork Thu Jun 13 08:57:05 2013
X-Patchwork-Submitter: Huang Shijie
X-Patchwork-Id: 2715241
From: Huang Shijie
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/2] ARM: mmu: fix the hang when we steal a section-unaligned size of memory
Date: Thu, 13 Jun 2013 16:57:05 +0800
Message-ID: <1371113826-1231-1-git-send-email-b32955@freescale.com>
Cc: Huang Shijie, will.deacon@arm.com, linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org
If we steal 128K of memory in the machine_desc->reserve() hook, the kernel hangs immediately. The reason for the hang is:

[1] Stealing 128K leaves the remaining memory unaligned to SECTION_SIZE.
[2] So when map_lowmem() maps the lowmem memory banks, it calls memblock_alloc() (via early_alloc_aligned()) to allocate a page to store the pte. This pte page lies in the unaligned region, which has not been mapped yet.
[3] The memset() in early_alloc_aligned() then touches that unmapped page, and we hang right there.
[4] The hang occurs only in map_lowmem(). After map_lowmem() the PTE mappings have been set up, so later callers such as dma_contiguous_remap() never hang.

This patch adds a global variable, in_map_lowmem, to track whether we are in map_lowmem(). If we are in map_lowmem() and steal a SECTION_SIZE-unaligned chunk of memory, we use memblock_alloc_base() to allocate the pte page. The @max_addr passed to memblock_alloc_base() is the last mapped address.
Signed-off-by: Huang Shijie
---
 arch/arm/mm/mmu.c |   34 ++++++++++++++++++++++++++++++----
 1 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index faa36d7..56d1a22 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -113,6 +113,8 @@ static struct cachepolicy cache_policies[] __initdata = {
 	}
 };
 
+static bool in_map_lowmem __initdata;
+
 #ifdef CONFIG_CPU_CP15
 /*
  * These are useful for identifying cache coherency
@@ -595,10 +597,32 @@ static void __init *early_alloc(unsigned long sz)
 	return early_alloc_aligned(sz, sz);
 }
 
-static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr, unsigned long prot)
+static void __init *early_alloc_max_addr(unsigned long sz, phys_addr_t maddr)
+{
+	void *ptr;
+
+	if (maddr == MEMBLOCK_ALLOC_ACCESSIBLE)
+		return early_alloc_aligned(sz, sz);
+
+	ptr = __va(memblock_alloc_base(sz, sz, maddr));
+	memset(ptr, 0, sz);
+	return ptr;
+}
+
+static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,
+				unsigned long end, unsigned long prot)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = early_alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
+		pte_t *pte;
+		phys_addr_t maddr = MEMBLOCK_ALLOC_ACCESSIBLE;
+
+		if (in_map_lowmem && (end & SECTION_MASK)) {
+			end &= PGDIR_MASK;
+			BUG_ON(!end);
+			maddr = __virt_to_phys(end);
+		}
+		pte = early_alloc_max_addr(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE,
+					maddr);
 		__pmd_populate(pmd, __pa(pte), prot);
 	}
 	BUG_ON(pmd_bad(*pmd));
@@ -609,7 +633,7 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
 				  unsigned long end, unsigned long pfn,
 				  const struct mem_type *type)
 {
-	pte_t *pte = early_pte_alloc(pmd, addr, type->prot_l1);
+	pte_t *pte = early_pte_alloc(pmd, addr, end, type->prot_l1);
 	do {
 		set_pte_ext(pte, pfn_pte(pfn, __pgprot(type->prot_pte)), 0);
 		pfn++;
@@ -1253,7 +1277,7 @@ static void __init kmap_init(void)
 {
 #ifdef CONFIG_HIGHMEM
 	pkmap_page_table = early_pte_alloc(pmd_off_k(PKMAP_BASE),
-		PKMAP_BASE, _PAGE_KERNEL_TABLE);
+		PKMAP_BASE, 0, _PAGE_KERNEL_TABLE);
 #endif
 }
 
@@ -1261,6 +1285,7 @@ static void __init map_lowmem(void)
 {
 	struct memblock_region *reg;
 
+	in_map_lowmem = 1;
 	/* Map all the lowmem memory banks. */
 	for_each_memblock(memory, reg) {
 		phys_addr_t start = reg->base;
@@ -1279,6 +1304,7 @@ static void __init map_lowmem(void)
 
 		create_mapping(&map);
 	}
+	in_map_lowmem = 0;
 }
 
 /*