From patchwork Tue May 5 16:11:12 2015
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 6338881
Date: Tue, 5 May 2015 17:11:12 +0100
From: Mark Rutland
To: Hans de Goede
Subject: Re: [REGRESSION?]
 ARM: 7677/1: LPAE: Fix mapping in alloc_init_section for unaligned
 addresses (was Re: Memory size unaligned to section boundary)
Message-ID: <20150505161112.GB23758@leverpostej>
References: <553F5B71.8030309@redhat.com>
 <20150505142210.GB20402@leverpostej> <5548D5DB.4020407@redhat.com>
In-Reply-To: <5548D5DB.4020407@redhat.com>
Cc: "steve.capper@linaro.org", Catalin Marinas, Sricharan R,
 linux-arm-kernel, "labbott@redhat.com", "christoffer.dall@linaro.org"

> > I wasn't able to come up with a DTB that would trigger this. Do you
> > have an example set of memory nodes + memreserves? Where are your
> > kernel and DTB loaded in memory?
>
> We have a single memory node/bank from 0x40000000 to the end of memory.
> We carve out a framebuffer at the end and do not report it to Linux, so
> the end becomes 0x40000000 + memory-size - fb-size. We use no
> memreserves, because if we do, the mmap of the fbmem by the simplefb
> driver fails: that maps it non-cacheable, and if it is part of the main
> membank it is already mapped cacheable.

Sure.
The only reason for caring about any memreserves was in case they
inadvertently affected the memblock_limit or anything else generated by
iterating over the memblock array. If you have none to begin with then
they clearly aren't involved.

> We subtract exactly the necessary fb-size. One known fb-size which
> triggers this is 1024x600, which means we end up subtracting
> 1024x600x4 bytes from the end of memory, so effectively we are doing
> the same as passing a mem= argument which is not 2MiB aligned.

Thanks for the info. It turns out my bootloader was silently rewriting
the memory nodes, which was why I couldn't reproduce the issue with a
DTB alone. With the memory node reg munged to <0 0x80000000 0 0x3FDA8000>
without bootloader interference, TC2 dies similarly to what you
described.

As far as I can see the issue is not a regression; it looks like we'd
previously fail to use a (1M) section unless we had precisely 1M or 2M
of the section to map (as those are the only cases when end would be
section aligned).

The hack below prevents the issue by rounding the memblock_limit down to
a full (2M) pmd boundary, so we don't try to allocate from the first
section in a partial pmd. That does mean that if your memory ends on a
1M boundary you lose that last 1M for early memblock allocations.

Balancing the pmd manipulation turned out to be a lot more painful than
I'd anticipated, so I gave up on trying to map the first section in a
partial pmd.

If people are happy with the below diff I can respin as a patch (with
comment updates and so on).

Thanks,
Mark.

---->8----
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e6ef89..2ea13f0 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1125,9 +1125,9 @@ void __init sanity_check_meminfo(void)
 	 * occurs before any free memory is mapped.
 	 */
 	if (!memblock_limit) {
-		if (!IS_ALIGNED(block_start, SECTION_SIZE))
+		if (!IS_ALIGNED(block_start, PMD_SIZE))
 			memblock_limit = block_start;
-		else if (!IS_ALIGNED(block_end, SECTION_SIZE))
+		else if (!IS_ALIGNED(block_end, PMD_SIZE))
 			memblock_limit = arm_lowmem_limit;
 	}
@@ -1142,7 +1142,7 @@ void __init sanity_check_meminfo(void)
 	 * last full section, which should be mapped.
 	 */
 	if (memblock_limit)
-		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
+		memblock_limit = round_down(memblock_limit, PMD_SIZE);

 	if (!memblock_limit)
 		memblock_limit = arm_lowmem_limit;