From patchwork Fri Jan 3 05:10:05 2014
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 3430121
Date: Fri, 3 Jan 2014 13:10:05 +0800
From: Wang Nan
To: Russell King
Cc: hui.geng@huawei.com, linux-arm-kernel@lists.infradead.org
Subject: [RESEND] Question: Whether a bank must be fully contained by a section?
Message-ID: <20140103051005.GB81380@kernel-host>

I sent this question to linux-kernel@vger.kernel.org but received no
answer. I think this is a better place to discuss it. Apologies if you
got multiple copies of this email.

=========

While reading the code of show_mem(), I found that it assumes the
struct pages of each bank are contiguous:

	for_each_bank (i, mi) {
		...
		page = pfn_to_page(pfn1);
		end = pfn_to_page(pfn2 - 1) + 1;
		do {
			...
			page++;		<-- struct pages must be contiguous
			...
		} while (page < end);
		...
	}

With sparse memory, this means a bank must be fully contained in one
section, because the struct page arrays (mem_map) are allocated section
by section in sparse_init().
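For reference, the non-vmemmap SPARSEMEM version of __pfn_to_page() in
include/asm-generic/memory_model.h looks roughly like this; each
section carries its own mem_map, so the struct pages of two different
sections may come from unrelated allocations:

	#define __pfn_to_page(pfn)				\
	({	unsigned long __pfn = (pfn);			\
		struct mem_section *__sec = __pfn_to_section(__pfn);	\
		__section_mem_map_addr(__sec) + __pfn;		\
	})

That is, "page + 1" is only guaranteed to equal pfn_to_page(pfn + 1)
while both pfns stay inside the same section.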
However, I didn't find any other code that enforces this assumption.
On the contrary, arm_memory_present() (arch/arm/mm/init.c) suggests
that a bank may span more than one section:

	arm_memory_present:
		...
		for_each_memblock(memory, reg)
			memory_present(0, memblock_region_memory_base_pfn(reg),
				       memblock_region_memory_end_pfn(reg));
		...

	memory_present:
		...
		for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
			...
		}
		...

Would you please consider the following patch, which removes the
assumption in show_mem() that a bank is fully contained in one section?

===========================

From b2c4bb5807c755d92274e11bb00cc548fea62242 Mon Sep 17 00:00:00 2001
From: Wang Nan
Date: Thu, 2 Jan 2014 13:20:02 +0800
Subject: [PATCH] use pfn_to_page in show_mem

If a bank spans more than one section, its struct pages may not be
contiguous. This patch makes show_mem() recompute the struct page
address from the pfn with pfn_to_page() on every iteration, so it
collects correct information even when a bank spans several sections.

Signed-off-by: Wang Nan
---
 arch/arm/mm/init.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 1f7b19a..3078e5a 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -97,16 +97,14 @@ void show_mem(unsigned int filter)
 
 	for_each_bank (i, mi) {
 		struct membank *bank = &mi->bank[i];
-		unsigned int pfn1, pfn2;
-		struct page *page, *end;
+		unsigned int pfn, pfn_end;
+		struct page *page;
 
-		pfn1 = bank_pfn_start(bank);
-		pfn2 = bank_pfn_end(bank);
-
-		page = pfn_to_page(pfn1);
-		end = pfn_to_page(pfn2 - 1) + 1;
+		pfn = bank_pfn_start(bank);
+		pfn_end = bank_pfn_end(bank);
 
 		do {
+			page = pfn_to_page(pfn);
 			total++;
 			if (PageReserved(page))
 				reserved++;
@@ -118,8 +116,8 @@
 				free++;
 			else
 				shared += page_count(page) - 1;
-			page++;
-		} while (page < end);
+			pfn++;
+		} while (pfn < pfn_end);
 	}
 
 	printk("%d pages of RAM\n", total);
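As a concrete (purely hypothetical) illustration of the case the patch
handles: assume 4KiB pages and SECTION_SIZE_BITS == 28, i.e. 256MiB
sections (on ARM this value is platform specific, so the numbers are
only for illustration). A bank covering physical addresses
0x2e000000 - 0x32000000 then splits across two sections:

	pfn 0x2e000 .. 0x2ffff  ->  section 2 (one mem_map)
	pfn 0x30000 .. 0x31fff  ->  section 3 (a separate mem_map)

With the old code, the "page++" walk runs off the end of section 2's
mem_map halfway through the bank; recomputing pfn_to_page(pfn) on every
iteration always picks the mem_map of the section that pfn belongs to.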