From patchwork Tue Jan 3 20:56:50 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9495581
From: Laura Abbott
To: Russell King, Nicolas Pitre, Grygorii Strashko
Cc: lilja.magnus@gmail.com, Laura Abbott, festevam@gmail.com,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/2] arm: Adjust memory boundaries after reservations
Date: Tue, 3 Jan 2017 12:56:50 -0800
Message-Id: <1483477010-18986-3-git-send-email-labbott@redhat.com>
In-Reply-To: <1483477010-18986-1-git-send-email-labbott@redhat.com>
References: <1483477010-18986-1-git-send-email-labbott@redhat.com>

adjust_lowmem_bounds is responsible for setting up the lowmem/highmem
boundary. This needs to be set up before memblock reservations can
occur.
At the time memblock reservations can occur, memory can also be removed
from the system. The lowmem/highmem boundary and end of memory may be
affected by this but it is currently not recalculated. On some systems
this may be harmless, on others this may result in incorrect ranges
being passed to the main memory allocator. Correct this by recalculating
the lowmem/highmem boundary after all reservations have been made.

Signed-off-by: Laura Abbott
---
 arch/arm/kernel/setup.c | 8 ++++++++
 arch/arm/mm/mmu.c       | 9 ++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 8a8051c..4625115 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -1093,8 +1093,16 @@ void __init setup_arch(char **cmdline_p)
 	setup_dma_zone(mdesc);
 	xen_early_init();
 	efi_init();
+	/*
+	 * Make sure the calculation for lowmem/highmem is set appropriately
+	 * before reserving/allocating any memory
+	 */
 	adjust_lowmem_bounds();
 	arm_memblock_init(mdesc);
+	/*
+	 * Memory may have been removed so recalculate the bounds.
+	 */
+	adjust_lowmem_bounds();

 	early_ioremap_reset();

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 32ecdfd..3f12a64 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1157,6 +1157,7 @@ void __init adjust_lowmem_bounds(void)
 	phys_addr_t memblock_limit = 0;
 	u64 vmalloc_limit;
 	struct memblock_region *reg;
+	phys_addr_t lowmem_limit = 0;

 	/*
 	 * Let's use our own (unoptimized) equivalent of __pa() that is
@@ -1173,8 +1174,8 @@ void __init adjust_lowmem_bounds(void)

 		if (reg->base < vmalloc_limit) {
-			if (block_end > arm_lowmem_limit)
-				arm_lowmem_limit = min(
+			if (block_end > lowmem_limit)
+				lowmem_limit = min(
 					(phys_addr_t)vmalloc_limit,
 					block_end);
@@ -1195,12 +1196,14 @@ void __init adjust_lowmem_bounds(void)
 			if (!IS_ALIGNED(block_start, PMD_SIZE))
 				memblock_limit = block_start;
 			else if (!IS_ALIGNED(block_end, PMD_SIZE))
-				memblock_limit = arm_lowmem_limit;
+				memblock_limit = lowmem_limit;
 			}
 		}
 	}

+	arm_lowmem_limit = lowmem_limit;
+
 	high_memory = __va(arm_lowmem_limit - 1) + 1;

 	/*