From patchwork Fri Jan 11 05:12:57 2019
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 10757381
From: Pingfan Liu <kernelfans@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: Pingfan Liu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin", Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	"Rafael J. Wysocki",
Wysocki" , Len Brown , Yinghai Lu , Tejun Heo , Chao Fan , Baoquan He , Juergen Gross , Andrew Morton , Mike Rapoport , Vlastimil Babka , Michal Hocko , x86@kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org Subject: [PATCHv2 7/7] x86/mm: isolate the bottom-up style to init_32.c Date: Fri, 11 Jan 2019 13:12:57 +0800 Message-Id: <1547183577-20309-8-git-send-email-kernelfans@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com> References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP bottom-up style is useless in x86_64 any longer, isolate it. Later, it may be removed completely from x86. Signed-off-by: Pingfan Liu Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: "Rafael J. Wysocki" Cc: Len Brown Cc: Yinghai Lu Cc: Tejun Heo Cc: Chao Fan Cc: Baoquan He Cc: Juergen Gross Cc: Andrew Morton Cc: Mike Rapoport Cc: Vlastimil Babka Cc: Michal Hocko Cc: x86@kernel.org Cc: linux-acpi@vger.kernel.org Cc: linux-mm@kvack.org --- arch/x86/mm/init.c | 153 +--------------------------------------------- arch/x86/mm/init_32.c | 147 ++++++++++++++++++++++++++++++++++++++++++++ arch/x86/mm/mm_internal.h | 8 ++- 3 files changed, 155 insertions(+), 153 deletions(-) diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 003ad77..6a853e4 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -502,7 +502,7 @@ unsigned long __ref init_memory_mapping(unsigned long start, * That range would have hole in the middle or ends, and only ram parts * will be mapped in init_range_memory_mapping(). */ -static unsigned long __init init_range_memory_mapping( +unsigned long __init init_range_memory_mapping( unsigned long r_start, unsigned long r_end) { @@ -530,157 +530,6 @@ static unsigned long __init init_range_memory_mapping( return mapped_ram_size; } -#ifdef CONFIG_X86_32 - -static unsigned long min_pfn_mapped; - -static unsigned long __init get_new_step_size(unsigned long step_size) -{ - /* - * Initial mapped size is PMD_SIZE (2M). - * We can not set step_size to be PUD_SIZE (1G) yet. - * In worse case, when we cross the 1G boundary, and - * PG_LEVEL_2M is not set, we will need 1+1+512 pages (2M + 8k) - * to map 1G range with PTE. Hence we use one less than the - * difference of page table level shifts. - * - * Don't need to worry about overflow in the top-down case, on 32bit, - * when step_size is 0, round_down() returns 0 for start, and that - * turns it into 0x100000000ULL. - * In the bottom-up case, round_up(x, 0) returns 0 though too, which - * needs to be taken into consideration by the code below. - */ - return step_size << (PMD_SHIFT - PAGE_SHIFT - 1); -} - -/** - * memory_map_top_down - Map [map_start, map_end) top down - * @map_start: start address of the target memory range - * @map_end: end address of the target memory range - * - * This function will setup direct mapping for memory range - * [map_start, map_end) in top-down. That said, the page tables - * will be allocated at the end of the memory, and we map the - * memory in top-down. 
- */
-static void __init memory_map_top_down(unsigned long map_start,
-				       unsigned long map_end)
-{
-	unsigned long real_end, start, last_start;
-	unsigned long step_size;
-	unsigned long addr;
-	unsigned long mapped_ram_size = 0;
-
-	/* xen has big range in reserved near end of ram, skip it at first.*/
-	addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE);
-	real_end = addr + PMD_SIZE;
-
-	/* step_size need to be small so pgt_buf from BRK could cover it */
-	step_size = PMD_SIZE;
-	max_pfn_mapped = 0; /* will get exact value next */
-	min_pfn_mapped = real_end >> PAGE_SHIFT;
-	last_start = start = real_end;
-
-	/*
-	 * We start from the top (end of memory) and go to the bottom.
-	 * The memblock_find_in_range() gets us a block of RAM from the
-	 * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
-	 * for page table.
-	 */
-	while (last_start > map_start) {
-		if (last_start > step_size) {
-			start = round_down(last_start - 1, step_size);
-			if (start < map_start)
-				start = map_start;
-		} else
-			start = map_start;
-		mapped_ram_size += init_range_memory_mapping(start,
-							last_start);
-		set_alloc_range(min_pfn_mapped, max_pfn_mapped);
-		last_start = start;
-		min_pfn_mapped = last_start >> PAGE_SHIFT;
-		if (mapped_ram_size >= step_size)
-			step_size = get_new_step_size(step_size);
-	}
-
-	if (real_end < map_end) {
-		init_range_memory_mapping(real_end, map_end);
-		set_alloc_range(min_pfn_mapped, max_pfn_mapped);
-	}
-}
-
-/**
- * memory_map_bottom_up - Map [map_start, map_end) bottom up
- * @map_start: start address of the target memory range
- * @map_end: end address of the target memory range
- *
- * This function will setup direct mapping for memory range
- * [map_start, map_end) in bottom-up. Since we have limited the
- * bottom-up allocation above the kernel, the page tables will
- * be allocated just above the kernel and we map the memory
- * in [map_start, map_end) in bottom-up.
- */
-static void __init memory_map_bottom_up(unsigned long map_start,
-					unsigned long map_end)
-{
-	unsigned long next, start;
-	unsigned long mapped_ram_size = 0;
-	/* step_size need to be small so pgt_buf from BRK could cover it */
-	unsigned long step_size = PMD_SIZE;
-
-	start = map_start;
-	min_pfn_mapped = start >> PAGE_SHIFT;
-
-	/*
-	 * We start from the bottom (@map_start) and go to the top (@map_end).
-	 * The memblock_find_in_range() gets us a block of RAM from the
-	 * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
-	 * for page table.
-	 */
-	while (start < map_end) {
-		if (step_size && map_end - start > step_size) {
-			next = round_up(start + 1, step_size);
-			if (next > map_end)
-				next = map_end;
-		} else {
-			next = map_end;
-		}
-
-		mapped_ram_size += init_range_memory_mapping(start, next);
-		set_alloc_range(min_pfn_mapped, max_pfn_mapped);
-		start = next;
-
-		if (mapped_ram_size >= step_size)
-			step_size = get_new_step_size(step_size);
-	}
-}
-
-static unsigned long __init init_range_memory_mapping32(
-	unsigned long r_start, unsigned long r_end)
-{
-	/*
-	 * If the allocation is in bottom-up direction, we setup direct mapping
-	 * in bottom-up, otherwise we setup direct mapping in top-down.
-	 */
-	if (memblock_bottom_up()) {
-		unsigned long kernel_end = __pa_symbol(_end);
-
-		/*
-		 * we need two separate calls here. This is because we want to
-		 * allocate page tables above the kernel. So we first map
-		 * [kernel_end, end) to make memory above the kernel be mapped
-		 * as soon as possible. And then use page tables allocated above
-		 * the kernel to map [ISA_END_ADDRESS, kernel_end).
-		 */
-		memory_map_bottom_up(kernel_end, r_end);
-		memory_map_bottom_up(r_start, kernel_end);
-	} else {
-		memory_map_top_down(r_start, r_end);
-	}
-}
-
-#endif
-
 void __init init_mem_mapping(void)
 {
 	unsigned long end;
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 49ecf5e..f802678 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -550,6 +550,153 @@ void __init early_ioremap_page_table_range_init(void)
 	early_ioremap_reset();
 }
 
+static unsigned long min_pfn_mapped;
+
+static unsigned long __init get_new_step_size(unsigned long step_size)
+{
+	/*
+	 * Initial mapped size is PMD_SIZE (2M).
+	 * We can not set step_size to be PUD_SIZE (1G) yet.
+	 * In worse case, when we cross the 1G boundary, and
+	 * PG_LEVEL_2M is not set, we will need 1+1+512 pages (2M + 8k)
+	 * to map 1G range with PTE. Hence we use one less than the
+	 * difference of page table level shifts.
+	 *
+	 * Don't need to worry about overflow in the top-down case, on 32bit,
+	 * when step_size is 0, round_down() returns 0 for start, and that
+	 * turns it into 0x100000000ULL.
+	 * In the bottom-up case, round_up(x, 0) returns 0 though too, which
+	 * needs to be taken into consideration by the code below.
+	 */
+	return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
+}
+
+/**
+ * memory_map_top_down - Map [map_start, map_end) top down
+ * @map_start: start address of the target memory range
+ * @map_end: end address of the target memory range
+ *
+ * This function will setup direct mapping for memory range
+ * [map_start, map_end) in top-down. That said, the page tables
+ * will be allocated at the end of the memory, and we map the
+ * memory in top-down.
+ */
+static void __init memory_map_top_down(unsigned long map_start,
+				       unsigned long map_end)
+{
+	unsigned long real_end, start, last_start;
+	unsigned long step_size;
+	unsigned long addr;
+	unsigned long mapped_ram_size = 0;
+
+	/* xen has big range in reserved near end of ram, skip it at first.*/
+	addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE);
+	real_end = addr + PMD_SIZE;
+
+	/* step_size need to be small so pgt_buf from BRK could cover it */
+	step_size = PMD_SIZE;
+	max_pfn_mapped = 0; /* will get exact value next */
+	min_pfn_mapped = real_end >> PAGE_SHIFT;
+	last_start = start = real_end;
+
+	/*
+	 * We start from the top (end of memory) and go to the bottom.
+	 * The memblock_find_in_range() gets us a block of RAM from the
+	 * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
+	 * for page table.
+	 */
+	while (last_start > map_start) {
+		if (last_start > step_size) {
+			start = round_down(last_start - 1, step_size);
+			if (start < map_start)
+				start = map_start;
+		} else
+			start = map_start;
+		mapped_ram_size += init_range_memory_mapping(start,
+							last_start);
+		set_alloc_range(min_pfn_mapped, max_pfn_mapped);
+		last_start = start;
+		min_pfn_mapped = last_start >> PAGE_SHIFT;
+		if (mapped_ram_size >= step_size)
+			step_size = get_new_step_size(step_size);
+	}
+
+	if (real_end < map_end) {
+		init_range_memory_mapping(real_end, map_end);
+		set_alloc_range(min_pfn_mapped, max_pfn_mapped);
+	}
+}
+
+/**
+ * memory_map_bottom_up - Map [map_start, map_end) bottom up
+ * @map_start: start address of the target memory range
+ * @map_end: end address of the target memory range
+ *
+ * This function will setup direct mapping for memory range
+ * [map_start, map_end) in bottom-up. Since we have limited the
+ * bottom-up allocation above the kernel, the page tables will
+ * be allocated just above the kernel and we map the memory
+ * in [map_start, map_end) in bottom-up.
+ */
+static void __init memory_map_bottom_up(unsigned long map_start,
+					unsigned long map_end)
+{
+	unsigned long next, start;
+	unsigned long mapped_ram_size = 0;
+	/* step_size need to be small so pgt_buf from BRK could cover it */
+	unsigned long step_size = PMD_SIZE;
+
+	start = map_start;
+	min_pfn_mapped = start >> PAGE_SHIFT;
+
+	/*
+	 * We start from the bottom (@map_start) and go to the top (@map_end).
+	 * The memblock_find_in_range() gets us a block of RAM from the
+	 * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
+	 * for page table.
+	 */
+	while (start < map_end) {
+		if (step_size && map_end - start > step_size) {
+			next = round_up(start + 1, step_size);
+			if (next > map_end)
+				next = map_end;
+		} else {
+			next = map_end;
+		}
+
+		mapped_ram_size += init_range_memory_mapping(start, next);
+		set_alloc_range(min_pfn_mapped, max_pfn_mapped);
+		start = next;
+
+		if (mapped_ram_size >= step_size)
+			step_size = get_new_step_size(step_size);
+	}
+}
+
+void __init init_range_memory_mapping32(
+	unsigned long r_start, unsigned long r_end)
+{
+	/*
+	 * If the allocation is in bottom-up direction, we setup direct mapping
+	 * in bottom-up, otherwise we setup direct mapping in top-down.
+	 */
+	if (memblock_bottom_up()) {
+		unsigned long kernel_end = __pa_symbol(_end);
+
+		/*
+		 * we need two separate calls here. This is because we want to
+		 * allocate page tables above the kernel. So we first map
+		 * [kernel_end, end) to make memory above the kernel be mapped
+		 * as soon as possible. And then use page tables allocated above
+		 * the kernel to map [ISA_END_ADDRESS, kernel_end).
+		 */
+		memory_map_bottom_up(kernel_end, r_end);
+		memory_map_bottom_up(r_start, kernel_end);
+	} else {
+		memory_map_top_down(r_start, r_end);
+	}
+}
+
 static void __init pagetable_init(void)
 {
 	pgd_t *pgd_base = swapper_pg_dir;
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 4e1f6e1..5ab133c 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -9,7 +9,13 @@ static inline void *alloc_low_page(void)
 }
 
 void early_ioremap_page_table_range_init(void);
-
+void init_range_memory_mapping32(
+	unsigned long r_start,
+	unsigned long r_end);
+void set_alloc_range(unsigned long low, unsigned long high);
+unsigned long __init init_range_memory_mapping(
+	unsigned long r_start,
+	unsigned long r_end);
 unsigned long kernel_physical_mapping_init(unsigned long start,
 					     unsigned long end,
 					     unsigned long page_size_mask);
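
As an aside, here is a standalone user-space sketch (not part of the patch) of
how get_new_step_size() grows the mapping window used by
memory_map_top_down()/memory_map_bottom_up(). It assumes the values the kernel
comment quotes (PAGE_SHIFT = 12, PMD_SHIFT = 21, so PMD_SIZE = 2M) and uses a
32-bit integer to mimic unsigned long on i386, so the last round makes the
overflow-to-zero case discussed in the get_new_step_size() comment concrete:

#include <stdio.h>
#include <inttypes.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21
#define PMD_SIZE	(UINT32_C(1) << PMD_SHIFT)

/* same shift as the kernel helper: grow by 2^8 = 256x per round */
static uint32_t get_new_step_size(uint32_t step_size)
{
	return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
}

int main(void)
{
	uint32_t step = PMD_SIZE;	/* initial mapped chunk: 2 MiB */
	int i;

	for (i = 0; i < 3; i++) {
		printf("round %d: step_size = 0x%08" PRIx32 "\n", i, step);
		step = get_new_step_size(step);
	}
	/*
	 * Prints 0x00200000 (2M), 0x20000000 (512M), then 0x00000000:
	 * 512M << 8 no longer fits in 32 bits, which is the zero
	 * step_size case the comments and the "if (step_size && ...)"
	 * check in memory_map_bottom_up() account for.
	 */
	return 0;
}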