From patchwork Tue Jan 10 01:18:33 2017
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9506139
From: Laura Abbott <labbott@redhat.com>
To: Russell King, Nicolas Pitre, Grygorii Strashko
Subject: [PATCHv3 2/2] arm: Adjust memory boundaries after reservations
Date: Mon, 9 Jan 2017 17:18:33 -0800
Message-Id: <1484011113-8223-3-git-send-email-labbott@redhat.com>
In-Reply-To: <1484011113-8223-1-git-send-email-labbott@redhat.com>
References: <1484011113-8223-1-git-send-email-labbott@redhat.com>
Cc: lilja.magnus@gmail.com, Laura Abbott <labbott@redhat.com>,
 festevam@gmail.com, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

adjust_lowmem_bounds is responsible for setting up the boundary for
lowmem/highmem. This needs to be set up before memblock reservations
occur. At the point where memblock reservations are made, memory can
also be removed from the system. The lowmem/highmem boundary and the
end of memory may be affected by this, but they are currently not
recalculated. On some systems this may be harmless; on others it may
result in incorrect ranges being passed to the main memory allocator.
Correct this by recalculating the lowmem/highmem boundary after all
reservations have been made.

Tested-by: Magnus Lilja <lilja.magnus@gmail.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: Typos and style fixes
---
 arch/arm/kernel/setup.c | 6 ++++++
 arch/arm/mm/mmu.c       | 9 ++++++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 8a8051c..f4e5450 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -1093,8 +1093,14 @@ void __init setup_arch(char **cmdline_p)
 	setup_dma_zone(mdesc);
 	xen_early_init();
 	efi_init();
+	/*
+	 * Make sure the calculation for lowmem/highmem is set appropriately
+	 * before reserving/allocating any memory
+	 */
 	adjust_lowmem_bounds();
 	arm_memblock_init(mdesc);
+	/* Memory may have been removed so recalculate the bounds. */
+	adjust_lowmem_bounds();
 
 	early_ioremap_reset();

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index b8f70a3..5cbfd9f 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1157,6 +1157,7 @@ void __init adjust_lowmem_bounds(void)
 	phys_addr_t memblock_limit = 0;
 	u64 vmalloc_limit;
 	struct memblock_region *reg;
+	phys_addr_t lowmem_limit = 0;
 
 	/*
 	 * Let's use our own (unoptimized) equivalent of __pa() that is
@@ -1172,14 +1173,14 @@ void __init adjust_lowmem_bounds(void)
 		phys_addr_t block_end = reg->base + reg->size;
 
 		if (reg->base < vmalloc_limit) {
-			if (block_end > arm_lowmem_limit)
+			if (block_end > lowmem_limit)
 				/*
 				 * Compare as u64 to ensure vmalloc_limit does
 				 * not get truncated. block_end should always
 				 * fit in phys_addr_t so there should be no
 				 * issue with assignment.
 				 */
-				arm_lowmem_limit = min_t(u64,
+				lowmem_limit = min_t(u64,
 							 vmalloc_limit,
 							 block_end);
 
@@ -1200,12 +1201,14 @@ void __init adjust_lowmem_bounds(void)
 			if (!IS_ALIGNED(block_start, PMD_SIZE))
 				memblock_limit = block_start;
 			else if (!IS_ALIGNED(block_end, PMD_SIZE))
-				memblock_limit = arm_lowmem_limit;
+				memblock_limit = lowmem_limit;
 			}
 		}
 	}
 
+	arm_lowmem_limit = lowmem_limit;
+
 	high_memory = __va(arm_lowmem_limit - 1) + 1;
 
 	/*
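
The ordering problem the patch fixes can be sketched outside the
kernel. Below is a minimal, self-contained C illustration (not kernel
code: struct region, compute_lowmem_limit and the VMALLOC_LIMIT value
are hypothetical names chosen for this example only). It computes a
lowmem boundary from a memory map, then drops a region, mimicking a
reservation-time removal of memory, and shows that the first result
goes stale unless the boundary is recomputed, which is why setup_arch()
now calls adjust_lowmem_bounds() a second time after
arm_memblock_init():

#include <stdio.h>
#include <stdint.h>

#define VMALLOC_LIMIT	0x30000000ULL	/* made-up lowmem ceiling */

struct region {
	uint64_t base;
	uint64_t size;
	int present;
};

/* Two made-up 512 MiB banks of RAM. */
static struct region mem[] = {
	{ 0x00000000, 0x20000000, 1 },
	{ 0x20000000, 0x20000000, 1 },
};

/*
 * Walk the memory map and clamp the lowmem boundary against the
 * vmalloc limit, loosely modelling what adjust_lowmem_bounds() does.
 */
static uint64_t compute_lowmem_limit(void)
{
	uint64_t limit = 0;
	unsigned int i;

	for (i = 0; i < sizeof(mem) / sizeof(mem[0]); i++) {
		uint64_t end = mem[i].base + mem[i].size;

		if (!mem[i].present || mem[i].base >= VMALLOC_LIMIT)
			continue;
		if (end > limit)
			limit = end < VMALLOC_LIMIT ? end : VMALLOC_LIMIT;
	}
	return limit;
}

int main(void)
{
	/* First pass, before any "reservations" have run. */
	uint64_t limit = compute_lowmem_limit();

	/* A reservation removes the second bank from the system. */
	mem[1].present = 0;

	printf("stale limit:        %#llx\n", (unsigned long long)limit);
	printf("recalculated limit: %#llx\n",
	       (unsigned long long)compute_lowmem_limit());
	return 0;
}

With both banks present the boundary is clamped to VMALLOC_LIMIT
(0x30000000 here); once the second bank disappears, only a second pass
yields the correct 0x20000000. That second pass is exactly the repeated
adjust_lowmem_bounds() call the patch adds.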