From patchwork Fri Oct 4 01:59:24 2013
X-Patchwork-Submitter: Zhang Yanfei
X-Patchwork-Id: 2986931
Message-ID: <524E20FC.3030607@gmail.com>
Date: Fri, 04 Oct 2013 09:59:24 +0800
From: Zhang Yanfei
To: Andrew Morton, "Rafael J. Wysocki", lenb@kernel.org, Thomas Gleixner,
    mingo@elte.hu,
    "H. Peter Anvin", Tejun Heo, Toshi Kani, Wanpeng Li, Thomas Renninger,
    Yinghai Lu, Jiang Liu, Wen Congyang, Lai Jiangshan,
    isimatu.yasuaki@jp.fujitsu.com, izumi.taku@jp.fujitsu.com, Mel Gorman,
    Minchan Kim, mina86@mina86.com, gong.chen@linux.intel.com,
    vasilis.liaskovitis@profitbricks.com, lwoodman@redhat.com, Rik van Riel,
    jweiner@redhat.com, prarit@redhat.com
Cc: "x86@kernel.org", linux-doc@vger.kernel.org,
    "linux-kernel@vger.kernel.org", Linux MM, linux-acpi@vger.kernel.org,
    imtangchen@gmail.com, Zhang Yanfei, Tang Chen
Subject: [PATCH part1 v6 3/6] x86/mm: Factor out top-down direct mapping setup
References: <524E2032.4020106@gmail.com>
In-Reply-To: <524E2032.4020106@gmail.com>

From: Tang Chen

This patch creates a new function, memory_map_top_down(), to factor
out the top-down direct memory mapping pagetable setup. It also
prepares for the following patch, which will introduce bottom-up
memory mapping. That is, the two ways of pagetable setup are put into
separate functions, and init_mem_mapping() chooses which one to use,
which makes the code clearer.

Acked-by: Tejun Heo
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
 arch/x86/mm/init.c | 60 ++++++++++++++++++++++++++++++++++-----------------
 1 files changed, 40 insertions(+), 20 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 04664cd..ea2be79 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -401,27 +401,28 @@ static unsigned long __init init_range_memory_mapping(
 
 /* (PUD_SHIFT-PMD_SHIFT)/2 */
 #define STEP_SIZE_SHIFT 5
-void __init init_mem_mapping(void)
+
+/**
+ * memory_map_top_down - Map [map_start, map_end) top down
+ * @map_start: start address of the target memory range
+ * @map_end: end address of the target memory range
+ *
+ * This function will setup direct mapping for memory range
+ * [map_start, map_end) in top-down. That said, the page tables
+ * will be allocated at the end of the memory, and we map the
+ * memory in top-down.
+ */
+static void __init memory_map_top_down(unsigned long map_start,
+				       unsigned long map_end)
 {
-	unsigned long end, real_end, start, last_start;
+	unsigned long real_end, start, last_start;
 	unsigned long step_size;
 	unsigned long addr;
 	unsigned long mapped_ram_size = 0;
 	unsigned long new_mapped_ram_size;
 
-	probe_page_size_mask();
-
-#ifdef CONFIG_X86_64
-	end = max_pfn << PAGE_SHIFT;
-#else
-	end = max_low_pfn << PAGE_SHIFT;
-#endif
-
-	/* the ISA range is always mapped regardless of memory holes */
-	init_memory_mapping(0, ISA_END_ADDRESS);
-
 	/* xen has big range in reserved near end of ram, skip it at first.*/
-	addr = memblock_find_in_range(ISA_END_ADDRESS, end, PMD_SIZE, PMD_SIZE);
+	addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE);
 	real_end = addr + PMD_SIZE;
 
 	/* step_size need to be small so pgt_buf from BRK could cover it */
@@ -436,13 +437,13 @@ void __init init_mem_mapping(void)
 	 * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
 	 * for page table.
 	 */
-	while (last_start > ISA_END_ADDRESS) {
+	while (last_start > map_start) {
 		if (last_start > step_size) {
 			start = round_down(last_start - 1, step_size);
-			if (start < ISA_END_ADDRESS)
-				start = ISA_END_ADDRESS;
+			if (start < map_start)
+				start = map_start;
 		} else
-			start = ISA_END_ADDRESS;
+			start = map_start;
 		new_mapped_ram_size = init_range_memory_mapping(start,
 							last_start);
 		last_start = start;
@@ -453,8 +454,27 @@ void __init init_mem_mapping(void)
 		mapped_ram_size += new_mapped_ram_size;
 	}
 
-	if (real_end < end)
-		init_range_memory_mapping(real_end, end);
+	if (real_end < map_end)
+		init_range_memory_mapping(real_end, map_end);
+}
+
+void __init init_mem_mapping(void)
+{
+	unsigned long end;
+
+	probe_page_size_mask();
+
+#ifdef CONFIG_X86_64
+	end = max_pfn << PAGE_SHIFT;
+#else
+	end = max_low_pfn << PAGE_SHIFT;
+#endif
+
+	/* the ISA range is always mapped regardless of memory holes */
+	init_memory_mapping(0, ISA_END_ADDRESS);
+
+	/* setup direct mapping for range [ISA_END_ADDRESS, end) in top-down*/
+	memory_map_top_down(ISA_END_ADDRESS, end);
 
 #ifdef CONFIG_X86_64
 	if (max_pfn > max_low_pfn) {
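
For readers not familiar with the stepping scheme this patch factors out,
the loop can be illustrated outside the kernel. The following is a minimal
userspace sketch, not kernel code: init_range_memory_mapping() is stubbed
to just report each range, the memblock_find_in_range() probe for real_end
and the trailing [real_end, map_end) mapping are omitted, and the
round_down_ul() helper, the constants, and the 4 GiB example range are
illustrative stand-ins.

#include <stdio.h>

#define STEP_SIZE_SHIFT	5		/* (PUD_SHIFT-PMD_SHIFT)/2, as in init.c */
#define PMD_SIZE	(2UL << 20)	/* 2 MiB */
#define ISA_END_ADDRESS	0x100000UL	/* 1 MiB */

/* round_down() for power-of-two alignments (illustrative helper) */
static unsigned long round_down_ul(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);
}

/* Stub: pretend to map [start, end) and return the size mapped. */
static unsigned long init_range_memory_mapping(unsigned long start,
					       unsigned long end)
{
	printf("map [%#lx, %#lx)\n", start, end);
	return end - start;
}

/* Same stepping logic as the factored-out memory_map_top_down(). */
static void memory_map_top_down(unsigned long map_start, unsigned long map_end)
{
	unsigned long last_start = map_end;	/* simplified: no memblock probe */
	unsigned long step_size = PMD_SIZE;
	unsigned long mapped_ram_size = 0;
	unsigned long new_mapped_ram_size, start;

	/* Walk down from map_end toward map_start in growing steps. */
	while (last_start > map_start) {
		if (last_start > step_size) {
			start = round_down_ul(last_start - 1, step_size);
			if (start < map_start)
				start = map_start;
		} else
			start = map_start;
		new_mapped_ram_size = init_range_memory_mapping(start,
								last_start);
		last_start = start;
		/* Enough new RAM is mapped to hold more page tables: grow. */
		if (new_mapped_ram_size > mapped_ram_size)
			step_size <<= STEP_SIZE_SHIFT;
		mapped_ram_size += new_mapped_ram_size;
	}
}

int main(void)
{
	/* Map [1 MiB, 4 GiB) top-down; assumes 64-bit unsigned long. */
	memory_map_top_down(ISA_END_ADDRESS, 1UL << 32);
	return 0;
}

The growth of step_size by 2^5 per round mirrors the comments kept in
init.c: the first chunk must be small enough for the BRK-provided pgt_buf
to map it, and each chunk mapped near the end of RAM then supplies fresh
pages for the page tables of the next, larger chunk below it.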